Everything posted by testdasi

  1. In terms of apps, the stuff you want to run all have dockers in the Community Apps "store" (all free stuff, so it's not really a "store"), so you should be able to find something. SMB is natively supported by Unraid.
     In terms of dual parity, only you can judge if it works for you. With file duplication, any failure of more than 2 drives usually means at least some data loss. Dual parity similarly offers protection against up to 2 drive failures. However, (write) performance will be relatively slower due to parity calculation. Reconstruct write (aka turbo write) is a decent mitigation, but it requires all your drives to be spun up. Depending on your HDD speed, your network speed might very well be the bottleneck anyway, rather than the slower write speed.
     More than 2 drives failing with dual parity means SOME data will be lost. In contrast, depending on the duplication algorithm, it is possible that no data is lost with up to 4 drives failing, if the "right" set of drives fails. That is highly improbable based on how I understand Drivepool works, but I thought I'd leave it out there for completeness' sake.
     Personally, for 8 drives I would just go with single parity for more available storage (e.g. with 8x 10TB drives, single parity leaves 70TB usable versus 60TB with dual parity). Based on Backblaze disk failure stats, I calculated that I generally only need dual parity for arrays of more than 8 drives or so - but then that's my level of risk tolerance.
  2. Have you made all the necessary changes in your BIOS? You have ACS Override turned on - do you really need it? That can cause performance issues in some cases. X570 + 3900X seems to be buggy on the old kernel, so perhaps try updating to 6.8.0-rc7 to see if it helps (but before that, back up your USB stick in case you have trouble reverting to 6.7.2 in the future - something like the sketch below will do).
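     A minimal backup sketch, assuming you have a share to copy to (the "backups" share name below is just an example) - the Unraid USB stick is mounted at /boot, so a plain copy is enough to restore from later:
     #!/bin/bash
     # Copy the flash drive contents to a dated folder on the array
     DEST=/mnt/user/backups/flash-$(date +%Y%m%d)   # "backups" is an example share name
     mkdir -p "$DEST"
     cp -a /boot/. "$DEST"/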
  3. You certainly need to up the fans (by a lot). My experience with these pre-built NAS units is that they are overly optimistic with cooling. For tightly-packed HDDs, you want the fans pushing air onto the drives rather than pulling (because pulling fans draw air from the path of least resistance, which is not through the tiny gaps between the HDDs). So with pulling fans, to really cool the HDDs they have to run very hard to create enough of a vacuum to hopefully suck some air through the gaps.
  4. Complicated answer. The NVMe should have better random performance than SATA, but in practice it takes a lot of IO for the difference to be perceptible. I have hosted my appdata on various SATA and M.2 (both AHCI and NVMe) devices and have not noticed any difference. Where I have noticed a difference is temp space: when there is very high IO on the temp space (both sequential and random), SATA tends to make things grind down a lot more than NVMe. That's why I'm using the Intel 750 as my temp space at the moment. In terms of shfs load, if your appdata mappings in dockers point at /mnt/cache (rather than /mnt/user) then they bypass shfs by default and take that load off it, so perhaps start from there (quick example below). Having said all of the above, multiple webservers (and thus a lot of small files) would probably be a good use case for the NVMe, especially if you are being DDoSed.
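     A minimal sketch of what that mapping difference looks like at the docker run level (in the Unraid GUI it's just the host path of the container's /config mapping). The container name and image below are placeholders, not anything from the original question:
     # Direct disk path - bypasses the shfs/user-share layer:
     docker run -d --name=web -v /mnt/cache/appdata/web:/config linuxserver/nginx
     # The same mapping via /mnt/user/appdata/web would route every IO through shfs,
     # which is where the extra overhead under heavy load comes from.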
  5. Tools -> Diagnostics -> attach zip file (preferably after starting the VM and seeing the issues). Also, what was your config prior to the upgrade? What is your current config? What did you do to the VM templates?
  6. You can use it for appdata (and even the docker image) or any other use really. As far as Unraid is concerned, the UD device is just another mount point. It probably won't take much load off your array though - after all, you don't store appdata on the array (hopefully).
  7. Watch SpaceInvaderOne's videos on YouTube.
  8. The fix for AMD was what I referred to in my earlier post. It turned out to be unreliable and was subsequently removed. With regards to error code 43, no, it isn't a lottery. While there's no guarantee that it won't happen to your GPU, it's not as if a random subset of GPUs has it and the rest don't. The pattern I have seen is that passing through the primary (or only) GPU (i.e. the GPU that Unraid displays on at boot) almost always leads to having to work around this issue. You have an iGPU (and presumably the ability to pick it as the boot GPU in the BIOS - check that first please), so if Unraid boots on the iGPU then the chance of error 43 happening to you is very low. A few things you can do to make it even lower:
     - Boot Unraid in legacy mode (i.e. not UEFI)
     - Dump a vbios specific to your actual GPU and use it (a rough sketch of the dumping step is below)
     I personally have not had any run-in with it despite doing the "big no no" of turning on Hyper-V. I chalk that up to the 2 points above.
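     A minimal sketch of dumping the vbios via sysfs, assuming the GPU sits at PCI address 0000:01:00.0 (check yours with lspci) and that nothing is currently using the card - the destination path is just an example:
     #!/bin/bash
     mkdir -p /mnt/user/isos/vbios
     cd /sys/bus/pci/devices/0000:01:00.0
     echo 1 > rom                                   # make the ROM readable
     cat rom > /mnt/user/isos/vbios/mygpu.rom       # copy it somewhere on the array
     echo 0 > rom                                   # lock it again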
  9. If vfio-pci.ids works then why don't you use that?
  10. No particular reason against AMD GPUs. Basically, the 2 most frequently faced issues with passing through GPUs from the 2 teams are:
     - For AMD: the reset issue (the GPU binding can't be released, so restarting a VM causes the GPU to stop working, requiring an Unraid restart for it to work again).
     - For Nvidia: error code 43 (the Nvidia driver detects that it's being run in a virtualised environment and refuses to load - in the hope that you will spend more on a Quadro, for which this issue doesn't exist).
     The reset issue is kernel + model related, so if you bought a card that has it with the current kernel version then there isn't really a fix (unless / until patches are released - and as seen recently, the patches can also be unreliable). Also note that lately I have seen a few posts about reset issues with Nvidia too, so "frequently" doesn't mean exclusivity. There are workarounds to avoid error code 43 that may (or may not) work, but it can happen to any model. So it's a sort of pros and cons with both teams. Regardless of team Red or Green, I would recommend you do a quick search (e.g. let's say you want to buy the AMD RX 580 - search for that on the forum to see if others have had any issues. Hint: don't get the RX 580).
  11. In addition to what the guys above suggested, you can also give this a try:
     - Change the name of your problematic VM to Test_VM (or create a new template called Test_VM).
     - Install the User Scripts plugin from the CA store.
     - Create a new script like the one below.
     - Run it and wait 3 minutes. Hopefully the script works and kills the offending VM.
     - Tools -> Diagnostics -> attach zip file.
     Script:
     #!/bin/bash
     virsh start Test_VM      # boot the problematic VM
     sleep 120s               # give it 2 minutes to reproduce the issue
     virsh destroy Test_VM    # then force-stop it
  12. Start your VM, make sure you have the problem and then Tools -> Diagnostics -> attach zip file.
  13. I hope when you picked i440fx, you followed my advice to create a new template rather than editing the current template. Editing the xml template does not have any impact on your actual VM data, so all you need to do is edit your non-working template in xml mode, copy-paste your old code over it (presumably that's the xml you attached in the first post) and save - then everything should be back to what it was.
  14. Read the Docker FAQ topic. Splitting vCPUs does impact latency; it's a matter of tolerance. If you play the latest games on a 2080 (with graphics appropriately set for a 2080) then sure, it does get very annoying. If you browse the web, edit a few photos and play Rocket League casually at medium-low settings then it's alright. I don't usually recommend specific hardware due to pricing differences across the globe and potential compatibility issues (there's no way to be 100% sure that something works without actually having that exact hardware). In general terms though, the 1050 Ti is a good budget choice, and having an iGPU usually helps reduce the chance of the infamous error code 43 when passing through an Nvidia GPU. And again, if you just need a display, the GT 710 goes for about £20-£30 this side of the pond. Even my super niche GT 710 (single slot, passively cooled, PCIe x1) went for about £60.
  15. I would do something like this:
     - 1x NVMe passed through to the main VM as a PCIe device (for best performance).
     - 1x NVMe passed through to the gaming VM as a PCIe device.
     - 1x 250GB SSD mounted as an Unassigned Device for the vdisks of the other VMs: 64GB should be more than sufficient for OS + streaming software of the streaming VM; 2x 32GB for (4) and (5) (although depending on what you transcode, (5) can be done with the Handbrake docker); the remainder can be another vdisk used by the streaming VM as temp space (a quick vdisk-creation sketch is below).
     - 2x 250GB SSD in the cache pool in btrfs RAID 1 for 250GB of mirror-protected storage for your most important data.
     You will almost certainly need to turn on ACS Override and will need to find single-slot GPUs due to how your mobo PCIe slots are structured. In addition, I don't remember Asus motherboards allowing you to pick any PCIe slot as primary (i.e. which GPU Unraid boots with), so expect a bit of gripes (and potentially a run-in with error code 43) passing through your primary i.e. gaming GPU. You may also need a USB card if you want hot-pluggable USB ports for all 3 main VMs. The X399 chipset only has 2 pass-through-able USB 3.0 controllers, so the 3rd VM can only be cold plug (USB device passed through at VM boot) or warm plug (there's a plugin that allows you to connect a newly-plugged USB device to any booted VM, but you need to control that manually from the GUI).
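     A minimal sketch of carving the unassigned 250GB SSD into those vdisks with qemu-img - the mount point /mnt/disks/vm_ssd and the file names are assumptions, so adjust them to wherever Unassigned Devices actually mounts the drive:
     #!/bin/bash
     mkdir -p /mnt/disks/vm_ssd/{streaming,vm4,vm5}
     qemu-img create -f raw /mnt/disks/vm_ssd/streaming/vdisk1.img 64G    # streaming VM OS + software
     qemu-img create -f raw /mnt/disks/vm_ssd/vm4/vdisk1.img 32G          # VM (4)
     qemu-img create -f raw /mnt/disks/vm_ssd/vm5/vdisk1.img 32G          # VM (5)
     qemu-img create -f raw /mnt/disks/vm_ssd/streaming/temp.img 100G     # temp space out of what's left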
  16. To use the iGPU for your Windows VM, you would need to pass it through, and passing the iGPU through to a VM varies from difficult to impossible (I think the 9-series Intel is in the impossible bucket). Hence, the only way for you to use the iGPU for Windows is to boot it bare metal, which means Unraid is not running, which means no Plex. Yes, you can run Plex in Windows too, but note that your storage is in Unraid using the xfs file system, which Windows doesn't support natively - so still no Plex when booting Windows bare metal.
     Nice diagram, but you might want to number the vCPUs instead of the (physical) cores, since that's how Unraid pins them. So let's say you number the vCPUs the same way (so vCPU 0 + 1 = CORE 0): vCPU 0 should be left unpinned and unused, reserved for Unraid tasks. I think the more recent versions have gone multi-tasking so it's not as critical, but it's always a good idea to leave something reserved for Unraid so everything doesn't grind to a halt. vCPU 1, where possible, should also be left unused, but you can use it for the VM emulator (since the emulator doesn't use much processing power anyway). Plex can share cores with all other dockers. You can set the Plex docker parameters such that Plex has a higher priority and thus gets more CPU power when it needs it (instead of having Plex-only cores).
     A trick that I use to maximise available cores is to only pin half of each physical core to the VM. So in your example, it would be something like this (numbered by vCPU; a rough virsh sketch of the same pinning is below):
     0 - reserved for Unraid
     1 - VM emulator
     2,4,6,8,10 - dockers
     3,5,7,9,11 - VM pin
     Intel / AMD multi-threading algorithms are surprisingly good. By using half a physical core that way: when you are not running the VM, Plex has access to 5 physical cores to transcode; when Plex is not transcoding, your VM has access to 5 physical cores for better performance; and when both run simultaneously, in my experience I can tell there's a bit of latency, but it's definitely not annoying.
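     A minimal sketch of that pinning using virsh at runtime, assuming the VM is called "Windows 10" (placeholder name). On Unraid you would normally set this through the VM template, which writes the equivalent <cputune> block into the xml, but the mapping is the same:
     #!/bin/bash
     virsh vcpupin "Windows 10" 0 3       # VM vCPU 0 -> host vCPU 3
     virsh vcpupin "Windows 10" 1 5
     virsh vcpupin "Windows 10" 2 7
     virsh vcpupin "Windows 10" 3 9
     virsh vcpupin "Windows 10" 4 11
     virsh emulatorpin "Windows 10" 1     # emulator thread on host vCPU 1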
  17. I would pick the ASRock for no reason other than the PCIe slots being open-ended so you have more flexibility.
  18. @limetech: why 6.9.0-rc1 and not 6.8.1-rc1? I think it will create some complications if you roll out 6.8.1 in the future, and confuse everyone who needs the 5.x kernel and is on 6.9.
  19. If they are dockers then you are better off asking in the docker support topics.
  20. If you are only remoting into your VM then a dedicated GPU is not necessary, provided you only play games when booting into Windows bare metal (since the Windows VM won't have a GPU, gaming "performance" in it will be terrible). I still think that for a "complete Media Center Experience" you are better off with a dedicated GPU, so you can watch stuff through the VM (via the dedicated GPU) while Unraid does other things (e.g. downloading, managing your media with Plex, NAS storage etc.) and you don't have to restart your computer. The dedicated GPU does not have to be expensive - it's pretty easy to get a rather affordable GPU that beats the iGPU, and if all you need is a display, £20 can get you one. One thing I don't seem to see anyone mentioning about dual-booting is that you might run into issues with Windows activation, since the OS sees a new "motherboard" every time you switch between booting bare metal and booting as a VM.
  21. Tools -> Diagnostics -> attach zip file. Also, what Unraid version are you running? I suspect a 6.8.0 rc?
     In your xml, there's this line:
     <type arch='x86_64' machine='pc-q35-4.1'>hvm</type>
     Change it to:
     <type arch='x86_64' machine='pc-q35-3.1'>hvm</type>
     That should change your Q35 machine back to v3.1. But (and it's a big but), Q35 below 4.0 makes your PCIe devices run at x1, so you need to add this block manually at the end of your xml, just before </domain>, so they can run at full speed:
     <qemu:commandline>
       <qemu:arg value='-global'/>
       <qemu:arg value='pcie-root-port.speed=8'/>
       <qemu:arg value='-global'/>
       <qemu:arg value='pcie-root-port.width=16'/>
     </qemu:commandline>
     This assumes the root port patch hasn't been removed. If it has been removed then you would also need to downgrade to Unraid v6.7.2 (or create a new template and use the i440fx machine type, again v3.1). The best fix though is to pass through a USB PCIe device to the VM and plug your Yamaha into that - hence I asked for Diagnostics, to check your IOMMU groups + available devices.
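     A quick way to sanity-check the resulting link speed from inside a Linux guest (on a Windows guest, a tool like GPU-Z shows the same information) - the PCI address below is a placeholder for whatever device you passed through:
     lspci -vv -s 01:00.0 | grep LnkSta   # look for something like "Speed 8GT/s, Width x16"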
  22. How are you going to access the VM? Directly on the server (i.e. it's a PC + NAS use case rather than a pure NAS server use case)? If so, you need a new GPU for the main VM (let's say Windows), because trying to pass through the iGPU to a VM is more trouble than it's worth. You can then use the main VM to remote-access the Linux VM (or have the Linux VM boot up with the new GPU only when the Windows VM is off).
     For the temp folder (i.e. heavy write), mount the old 120GB SSD as an Unassigned Device and use it (assuming you are not doing Linux isos larger than 50GB or so). You already have it (and were about to discard it), so you may as well put it to good use. I would be quite interested in how long it takes for your SSD to die, because I have been trying to run various SSDs into the ground to no avail. It is ok to put the Download folder on the cache, but it's good practice, where possible, to separate heavy write activities from the cache, because if your cache fails, it's very annoying to try to recover appdata, the docker image, vdisks etc.
     If your Windows VM is your main VM then it's a good idea to pass the NVMe through as a PCIe device (i.e. "dedicate" it to the VM) for best performance. You can store the Linux vdisk on the cache.
     It is simple to add more disks (e.g. in your case add 2 new disks, 1 as parity and 1 more as data). Just follow the instructions on the Unraid wiki (or watch SpaceInvaderOne's guides on YouTube). A few things to look out for:
     - Preclear your new HDDs before adding them (use the preclear plugin).
     - Don't accidentally add your current data HDD as parity (because the parity build will overwrite whatever is on the parity disk). It's actually quite hard to do if you follow the wiki instructions, but it's worth noting down the current data disk's serial number and double-checking before clicking "Start" on the array.
  23. Just to make your life more complicated: if your router supports it, you can even use it to connect to the building's WiFi and use that as the Internet connection for the rest of your local network.
  24. When was the diagnostics run? It doesn't contain any libvirt / qemu logs. Please get the diagnostics after you try running the VM (and get the blank screens). Also, there are a few lines in your log of USB storage devices disconnecting - you may have a bad port / bad USB stick.
  25. "always" is a rather strong recommendation that those who suffered from the recent Ryzen BIOS problems would not vouch for. I would say the more appropriate recommendation is to read the patch notes. If there are security patches (and that you are concerned about such security holes) and/or bug fixes (and that you are suffering from such bugs) then update. If the patch notes are not cleared as to what bugs were fixed (and that you are suffering from some bugs) then sure update and see how it goes. Otherwise, if it ain't broken, don't fix it.