Everything posted by Marshalleq

  1. My googling isn't telling me what the difference is. There doesn't appear to be a GUI option for sync_window that I can see, so I'm not really sure why it would be provided.
  2. Thanks - then I'm not sure which value to put where yet lol. Also, despite what I read online, it seems read-ahead is set to 256 rather than 1024, so setting it to 2048 seems to improve things, e.g. # blockdev --setra 2048 /dev/md*
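For anyone following along, this is roughly what I'm running. The /dev/md1 device name is just an example from my box, and putting it in the go file on the flash drive is my assumption about how you'd persist it rather than something I've confirmed:

     # Check the current read-ahead value (reported in 512-byte sectors)
     blockdev --getra /dev/md1

     # Bump read-ahead to 2048 on every array device
     for dev in /dev/md*; do
         blockdev --setra 2048 "$dev"
     done

     # Verify the change took
     blockdev --getra /dev/md1

     # This doesn't survive a reboot, so I'd probably append the loop to /boot/config/go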
  3. That was the fast test; this was the normal test. Of course I have drives connected to both the Dell H310 and the onboard SATA, so I'm not sure how that affects things. Saving doesn't work, obviously (tried that), but you can enter the values manually. I assume that sync_thresh in Unraid is the same as sync_window in this script.
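In case it helps, my understanding is that these tunables can also be poked by hand with mdcmd. The exact variable names below are my assumption (going by what the tunables script reports), so double-check them before relying on this:

     # Dump the current md driver settings and look for the sync-related values
     mdcmd status | grep -i sync

     # Set the tunables manually (names assumed - adjust to whatever your GUI/script shows)
     mdcmd set md_sync_window 2048
     mdcmd set md_sync_thresh 2000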
  4. Hey, so I was poking around these forums looking for performance optimisations and this script seemed like the bees knees. I got it to work on Unraid 6.7.0 by doing the following: open a root command window (console) in your favourite way, then type the following: root@yourserver:~# ln /usr/local/sbin/mdcmd /root/mdcmd Then run the script as per normal (copy/paste version just below). Notably I'm getting some errors and I haven't bothered to fix them, as it doesn't seem like they matter - I'm still getting the output I need. Hope this helps you out. I'd be interested to hear if it works for you anyway - I got the following report with a Dell H310 card flashed in IT mode:
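As a straight copy/paste, the workaround is just this - the assumption on my part being that the script looks for mdcmd under /root, while on 6.7.0 the binary actually lives in /usr/local/sbin:

     # Confirm where mdcmd actually is on 6.7.0
     ls -l /usr/local/sbin/mdcmd

     # Hard-link it to where the script expects it (a symlink via ln -s would presumably work too)
     ln /usr/local/sbin/mdcmd /root/mdcmd

     # Then run the tunables tester script as normal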
  5. I would think this kind of thing is more elegant than stacking 2 drives in a 5.25" bay? https://www.ebay.com/itm/163675847557?ssPageName=STRK%3AMEBIDX%3AIT&fromMakeTrack=true though perhaps I misread....
  6. I do still tend to go over to the FreeNAS forums and salivate over ZFS from time to time. While poking around I saw this release announcement on Phoronix: ZFS on Linux 0.8 released with native encryption, TRIM and device removal. That would seem to alleviate all of my remaining concerns, depending on how it's implemented. Now that I'm running Threadripper with so much RAM, I expect I'll see only advantages rather than performance issues.
  7. Yes exactly, which is why I'm asking whether anyone knows of any optimisations. I guess it's the downside of Unraid really; others such as Proxmox are quite optimised for VMs running under shared resources, as you'd expect. The great thing about Unraid is it can still be done, with a few optimisations. So, if anyone knows of any for emulated CPU setups, could you post them here or point me at them? (I haven't found them myself yet.) Thanks.
  8. The benefit of so many of us now having so many cores is that we can dedicate some to a gaming VM. For everything else, one option is to share the remaining cores among the other apps. That's exactly what I'm doing, as I don't need 100% performance with super-low latency on my other applications. So, to that end, does anyone know of any optimisation tricks for running a Linux VM under an emulated CPU environment? I'm currently doing an encode this way and I would have expected it to be a bit quicker, comparing it to how it performs natively on another machine. I'm using Q35 3.1, which I'm about to dive into understanding. Yes, I appreciate I haven't given all the information; that's intentional at this stage. Thanks.
  9. OK, I guess I have to live with it. It doesn't seem to impact anything, just looks horrible.
  10. Thanks, that's getting closer to what I was interested in.
  11. Thanks for the link - I'm actually not trying to understand more details of how it works; I understand enough, I think. I was just wondering whether it's proprietary or not, and either way, what is it called? In standard RAID on Linux you have mdadm and probably some others. What is the Unraid binary that does this, and is it open source or closed source? I wouldn't have thought it was FUSE - that's usually something completely different, but perhaps not in this case. So not the GUI. Thanks.
  12. Warning: syntax error, unexpected '!' in Unknown on line 29 in /usr/local/emhttp/plugins/unassigned.devices/include/lib.php on line 1324, as per below. I've been getting this error for a few weeks on the main screen. I think it started when I passed an NVMe drive through to Windows, or it could be since passing a dual-port network card through to VMs. When I start Windows the error (per the screenshot below) actually goes away, then when I stop Windows it comes back. On that basis I assume it's the NVMe drive, as the NIC is not passed through to that VM. I've been achieving the passthrough with xen-pciback.hide (see my append line below), which the release notes say 'appears to no longer work' - though I think it does for me. Perhaps that's why the error, though? The release notes say:
     "New vfio-bind method. Since it appears that the xen-pciback/pciback kernel options no longer work, we introduced an alternate method of binding, by ID, selected PCI devices to the vfio-pci driver. This is accomplished by specifying the PCI ID(s) of devices to bind to vfio-pci in the file 'config/vfio-pci.cfg' on the USB flash boot device. This file should contain a single line that defines the devices: BIND=<device> <device> ... Where <device> is a Domain:Bus:Device.Function string, for example, BIND=02:00.0 Multiple device should be separated with spaces."
     So perhaps I should replace my setup with that (rough guess at what that would look like just below)? Anyone else come across this? My current syslinux entry:
     kernel /bzimage
     append isolcpus=12-15,28-31 xen-pciback.hide=(0b:00.0)(0b:00.1) vfio_iommu_type1.allow_unsafe_interrupts=1 initrd=/bzroot
     obi-wan-diagnostics-20190522-2134.zip
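If I'm reading the release notes right, the switch for my two devices would mean creating config/vfio-pci.cfg on the flash drive (i.e. /boot/config/vfio-pci.cfg) and dropping the xen-pciback.hide part from the append line. The following is my untested guess at it:

     # Contents of /boot/config/vfio-pci.cfg (a single line, devices separated by spaces)
     BIND=0b:00.0 0b:00.1

     # And the trimmed syslinux append line
     append isolcpus=12-15,28-31 vfio_iommu_type1.allow_unsafe_interrupts=1 initrd=/bzroot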
  13. Came here looking for why it takes so long to get to the TianoCore screen. On 6.6.7. So I guess I'm a +1, but I don't have anything educated to base it on yet.
  14. Did these models have USB 3? If so, you could try booting off an external USB controller, I suppose, though a kernel argument would be nicer. Or try booting the UEFI version of the USB stick, which is more relevant when the machine boots via EFI. EFI was Intel's original firmware standard, with UEFI being the later industry standard that grew out of it.
  15. Yes that's another way of describing it.
  16. I was wondering: is the RAID code that Unraid uses proprietary and developed by Lime Tech, or is it something open source that Lime Tech bundles? Thanks.
  17. While this may be a somewhat advanced topic - I have heard from time to time that Linux on Macs can run into an issue or two. I guess what I'm saying is don't assume it will work like a PC does. It's possible it needs some kind of boot wrapper or a kernel boot option that you can enter at the beginning. I'd suggest you first google how to install generic Linux on your Mac - if there's nothing special needed then it's probably OK. Then I'd suggest googling whether there are any kernel issues with your model of Mac. The kernel version can be retrieved with the command uname -r (no root needed). For the latest Unraid it's kernel 4.19.41-Unraid. Unraid probably has some customisations, but generally speaking there may be some info this way. Hope that helps.
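For reference, checking the kernel version looks like this - the output below is just what my current Unraid reports, and yours will differ by release:

     # Print the running kernel release - works as a normal user
     uname -r
     # 4.19.41-Unraid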
  18. They've landed now. In stock (note it's the NZ price) as per the links below. The 8TB might be out of my price range though. https://www.pbtech.co.nz/product/HDDSAM60400/Samsung-860-QVO-MZ-76Q4T0BW-4TB--Samsung-V-NAND-SA https://www.pbtech.co.nz/product/HDDITX45108002/Intel-P4510-Series-8TB-25-PCI-E-NVMe-SSD-3200MBs-r Without reading up on it (so I could be wrong), the statement below from the 6.7.0 release notes almost sounds like it would work for a filesystem across multiple SSDs? "Added the '--allow-discards' option to LUKS open. This should only have any effect when using encrypted Cache device/pool with SSD devices. It allows a file system to notice if underlying device supports TRIM and if so, passes TRIM commands down." ?
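Just to illustrate what I understand that option to do at the command-line level - this is generic cryptsetup usage with made-up device names, not necessarily how Unraid itself invokes it:

     # Open a LUKS container with discard/TRIM passthrough enabled
     cryptsetup luksOpen --allow-discards /dev/nvme0n1p1 cache_crypt

     # The filesystem mounted on top can then pass TRIM down to the SSD, e.g.
     fstrim -v /mnt/cache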
  19. @CHBMB LOL - I knew it would stand out a bit, but didn't think I'd get someone choking - perhaps we could rebrand HDD docks as HDD porkers lol. @jonathanm I didn't do a lot of reading - the one thread I read seemed to say you couldn't re-enable - I guess that's what happens when you're tired. Though I'm guessing you meant to say the whole disk won't be recreated from parity, filesystem and all? Because by deleting the partition I am now getting the whole disk recreated from parity, filesystem and all. I figured doing it that way, rather than finding a way to just re-add it, would be more likely to find any bad sectors / issues.
  20. It's hard to overlook coincidences, but I can't see why it would happen at exactly the same time I stopped the Docker service lol. After the reboot those errors have not come back in the log. I have backups, so I've just deleted the partition from the disk and re-added it to the array. It's running in emulated mode anyway, so I may as well give it a go and meanwhile figure out whether all those disks are on the same Perc cable or what. I did get a new cable about a month ago (I have a Dell Perc H310 running in IT mode). The new Threadripper board took me down to only 6 onboard SATA ports, so I was quite grateful for it! Thanks for the help.
  21. Hi all, just out of interest, what would you say the major issue with this drive is? I got the red X, drive is disabled, contents emulated, etc. Going to try a restart but... thankfully it's under warranty. Screenshot and logs attached (pre-reboot). obi-wan-diagnostics-20190518-0556.zip
  22. Did you ever get this solved? For some time now this has been happening to me - though restarting the docker kicks off the move and it completes properly - so it doesn't seem like a configuration problem, and it did use to work well on its own. Hard to know how to resolve this one.
  23. It seems disabling the Dynamix Cachedirs plugin fixed the VM performance issues - noting it here for anyone else who has the same problem.
  24. So to cover it off in here, for me disabling the Cachedirs plugin has completely removed all the performance issues I had in the VM. I am not sure why that is, but if anyone else comes here with a similar problem, it may be worth a try.
  25. VM Performance Issues with Cachedirs enabled. Hi everyone, I'm just looking for a few hints on where to look within the folder caching options to identify what is causing some fairly serious performance issues. I suspected this was actually a networking issue at first, but it turns out that when I turn off the folder caching plugin my performance issues go away. I have a 32-thread Threadripper, 128GB RAM, and various HDD / SSD / NVMe drives. The Windows gaming VM on the NVMe works flawlessly with this plugin off, but when I turn it on, the screen freezes while playing, sometimes for 3-5 seconds, by which time I've been killed lol. It's taken a long time to figure this out, because it really wasn't obvious. I have a CrashPlan backup system that runs permanently in the background, which is why I need to have almost all folders available to this plugin - otherwise CrashPlan spins up the disks all the time to check for changed files. I could of course change that to daily or something, but really that's not a great solution. See the screenshot below for my current settings. (I'm running Unraid 6.7.0 RC8.) Do I need to increase the shell memory for a large number of files? I have about 48TB of storage. Thanks.