Everything posted by testdasi

  1. (Assuming both have DRAM and are 3D TLC), an M.2 NVMe SSD is ALWAYS faster than a SATA SSD. It's less about advantage and more about noticeability. You will tend to notice it more often if your workload is one where NVMe matters, e.g.:
     - High IO load (especially simultaneous / parallel IO)
     - Large sequential transfers
     - Random read
     - Random write in rapid succession (infrequent random writes are cached in RAM and thus are always super fast)
     One thing nobody seems to mention when talking about SATA vs NVMe is that NVMe is built around parallelism. The biggest implication is that under heavy IO, an NVMe drive is less likely to freeze your system (due to high IO wait) - see the fio sketch at the end of this post.
     In terms of compatibility, those mobo compatibility lists are never updated and thus will never include any device that comes out afterwards. Theoretically, any normal NVMe M.2 drive will be compatible with any PCIe M.2 slot. What would be an "abnormal" M.2? The Intel H10, for example, requires special bifurcation of an x4 link into x2/x2, which basically nobody supports, not even most of Intel's own chipsets.
     The only compatibility point you really have to pay attention to is when you want to pass the NVMe through to a VM as a PCIe device. You then need to pay attention to the controller, since some just outright won't work (e.g. Intel 660p) and some require special workarounds with limitations (e.g. the SM2263 controller will not work with more than 15 cores). Note that even in those cases, you can still use the NVMe as a storage device (e.g. put a vdisk on it) and it will still perform better than a SATA SSD.
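     If you want to see the parallelism point for yourself, something like fio can show it; a minimal sketch, assuming fio is available on your box and /mnt/cache/fio-test is a folder on the SSD under test (both are examples, adjust to your setup):

        # rough parallel random-read test: 4 workers, queue depth 32
        # (create the target directory first; delete the test files afterwards)
        fio --name=parallel-randread --directory=/mnt/cache/fio-test \
            --ioengine=libaio --rw=randread --bs=4k --iodepth=32 \
            --numjobs=4 --size=1G --runtime=30 --time_based --group_reporting

     Run the same test against the SATA SSD; the gap should widen as iodepth / numjobs go up, which is exactly the parallelism advantage described above.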
  2. No, it doesn't - sort of. You are trying to generalise, which is highly imprecise when it comes to GPU pass-through. Both brands have their problems.
     All Nvidia GTX / RTX graphics cards can throw error code 43 if the Nvidia driver detects that it is being used in a VM (that's how Nvidia tries to force users to fork out more money for a Quadro). Error code 43 is a generic "it's not working" error, so it muddles the situation: you don't know if you configured things incorrectly, or the card has failed, or it's the aforementioned artificial error, etc.
     All AMD graphics cards can have the reset issue, which makes it impossible to pass through and/or requires the whole server to reboot in order to reboot the VM. This is particularly prevalent with the RX 500 series, Vega and Navi (i.e. all the recent AMD GPUs).
     Resolutions and workarounds center around these common fixes:
     - Pass through all the devices of the graphics card together. E.g. an RTX card has FOUR devices (GPU + HDMI audio + 2 USB devices); a typical other graphics card will have TWO devices (GPU + HDMI audio). This is one of the most frequent new-user errors.
     - Having a GPU for Unraid to boot with ALWAYS HELPS - even more so than a vbios! This is why I previously recommended you consider an Intel offering with iGPU, especially for an ITX build, for the reason I already mentioned. The only success stories with the AMD RX 500 series on here involved booting Unraid with another GPU.
     - Using a vbios can help both AMD and Nvidia, but it's easy for new users to download / dump / edit it incorrectly.
     - AMD Vega and Navi are basically impossible without the vega / navi patches. These patches are not included in Unraid by default because they mess up other otherwise-working cards, so the only way to get them is to compile a custom kernel. There's a forum member who compiled 6.8.3 with all the patches if you don't know how to compile them yourself.
     - Miscellaneous fixes, e.g. the Hyper-V and KVM xml tags to deal with error code 43 (see the sketch at the end of this post), and booting Unraid in legacy mode.
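     For reference, the error code 43 xml tags mentioned in the last bullet go in the VM xml's <features> section, alongside whatever is already there; a minimal sketch (the vendor_id value is arbitrary, any string up to 12 characters works):

        <features>
          <hyperv>
            <!-- masks the hypervisor vendor string so the Nvidia driver doesn't bail -->
            <vendor_id state='on' value='0123456789ab'/>
          </hyperv>
          <kvm>
            <!-- hides the KVM signature from the guest -->
            <hidden state='on'/>
          </kvm>
        </features>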
  3. @MMChris: re-read my post on Monday about separating write-heavy and read-heavy data with a single-drive cache + an unassigned SSD. It is particularly useful if you are going to use it for torrents / any other write-heavy activities. Then read up on the below about the still-ongoing performance issues with the btrfs multi-drive cache pool. Then ask yourself how important mirror protection is (as opposed to having a backup instead).
  4. What data are you storing in cache? Given that an SSD is more reliable (than a HDD) and, even when it fails, tends to fail gracefully (i.e. giving you time to respond and replace), having a backup can be more useful than mirror redundancy. In fact, running RAID-1 doesn't mean you don't want to have a backup. And separating write-heavy and read-heavy data onto 2 different SSDs will help improve the lifespan of both by reducing wear leveling and write amplification. Not saying that redundancy isn't useful, but it sounds to me like you might be over-valuing it and dismissing the alternatives.
  5. Yes, AMD reset bug. Ideally, you can try compiling your own kernel with the navi / vega patch to see if it helps. A forum member also compiled one with all the patches, so you may want to try that if you can't compile your own. Also try dumping the vbios from the Unraid command line instead of Windows (see the sketch below). There is also a SpaceInvader One vid with some tips and tweaks for AMD GPUs that may also help. Custom kernel post:
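     For reference, dumping from the command line uses the sysfs rom interface; a minimal sketch, assuming the GPU sits at 0000:01:00.0 (an example address, check yours with lspci):

        cd /sys/bus/pci/devices/0000:01:00.0
        echo 1 > rom               # unlock the rom for reading
        cat rom > /boot/vbios.rom  # copy it onto the flash drive
        echo 0 > rom               # lock it again

     Note the cat step can fail with an IO error if the card is in use (e.g. Unraid booted with it), which is another reason booting Unraid with a different GPU helps.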
  6. I would still recommend Intel with iGPU for a gaming ITX build, with a motherboard that allows you to boot with the iGPU instead of the dedicated GPU. Having a GPU for Unraid to boot with, while not guaranteeing anything, does help a lot (e.g. a user I recently helped had success just by booting Unraid with the iGPU). New users tend to struggle a lot trying to pass through with only 1 GPU, and with an ITX build, you don't have the option to plug in a cheapo GPU as an easy way out.
  7. You probably need to attach diagnostics (Tools -> Diagnostics -> attach zip file). And describe in detail how you installed Windows. Also, next time when you copy-paste text from Unraid (e.g. GUI or xml), please use the forum code function (the </> button next to the smiley button) so the text is formatted correctly.
  8. Your (2) will be fraught with difficulty. Passing an iGPU to a VM is even tougher than a Vega and I have not seen any success story at all for the 9th-gen Intel iGPU. And then you add MacOS.
  9. With xfs, the first and foremost thing to note is that you cannot have a multi-drive cache pool. That means in the future, you can replace the SSD with another, but you can't add more drives to the cache pool (without having to reformat the pool to btrfs). That is not necessarily a bad thing because:
     - There are currently performance issues with the multi-drive (btrfs) cache pool on some SSDs. Strangely, I have never had that issue, but some users certainly have. You can do a quick search on the forum for more details.
     - Unless you NEED mirror protection (i.e. RAID-1), it is generally better to have 1 SSD in the cache pool and 1 unassigned, to separate read-heavy and write-heavy data. That will improve the lifespan of both SSDs. I would also prioritise having a backup over having mirror protection.
     With that out of the way, to move the SSD from disk1 to cache:
     - Back up any data that is critical. Since you were just testing things out, I assume there's nothing too important, but it's worth a moment to check.
     - Go to Tools -> New Config -> tick the box to confirm your understanding and ok -> Main -> all slots should be empty -> MAKE SURE THE NUMBER OF DRIVES IN CACHE IS ONE, i.e. there is only ONE slot for cache -> select the SSD in the cache slot + select the HDDs in the various array slots.
     - Before clicking Start, MAKE SURE THE CACHE SLOT IS NOT BEING FORMATTED. It should be obvious on the GUI which drive is going to be formatted. If you already set it correctly to have a single-drive cache pool (i.e. there is only a single slot possible for cache) then you should be fine here, but it is always a good idea to double-check. If you are not sure, just take a screenshot and post it, and people should be able to help.
     With regards to your HDDs, make sure to run a preclear cycle on each of them (install the preclear plugin from the App store). It will take a while to complete but it's worth it. Only reconfigure your cache and array AFTER preclear is all clear.
  10. Appdata and system should be on cache (SSD) and shouldn't be on a HDD due to the slower speed. Do you just want to move the SSD from the disk1 slot to the cache slot? If so, what's the file system of your SSD, xfs or btrfs? You don't need to move data in and out of it, but I need to know what file system it is using to give you the necessary things to look out for (i.e. the silly new-user mistake).
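     If you're not sure which it is, a quick way to check from the Unraid command line while the array is started; a minimal sketch, assuming the SSD is still assigned as disk1 (so mounted at /mnt/disk1):

        df -T /mnt/disk1   # the Type column shows xfs or btrfs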
  11. So first and foremost, you cannot use the 2nd NVMe "similarly to the Windows 10" AND still "have Docker containers and some other general data" on it. "Similar to the Windows 10" means passing it through as a PCIe device, which means it is exclusively used by the VM, so there's no way for Unraid to use it. You can, however, have the OSX image as a vdisk, and that's the only way to share the NVMe with Unraid.
     To make the NVMe show up in Unassigned Devices, you need to remove vfio-pci.ids=144d:a808 from your syslinux. Notes:
     - This doesn't automatically nullify your VM's ability to use it as a passed-through PCIe device. It just doesn't show up in the Other PCI Devices section of the VM template, but in the xml, it is still being passed through. From my own experience, as long as I don't mount the NVMe with Unraid, I can start the VM fine and it automatically grabs the NVMe used in its config. Of course, once the VM uses it, it will disappear from Unassigned Devices until the server reboots.
     - Be careful interacting with the NVMe, especially the one that you use for the Win10 VM. If you mount it and write stuff to it, there's a chance it would corrupt the data, making your Windows VM not bootable subsequently. Read is usually fine (e.g. dd from /dev).
     - To make a single device appear in Other PCI Devices, install the VFIO-PCI Config plugin from the app store, then go to Settings -> VFIO-PCI.CFG, tick the device you want to appear in Other PCI Devices -> Build VFIO-PCI.CFG, and then reboot Unraid. As mentioned above, any device that appears in Other PCI Devices will NOT appear in Unassigned Devices.
     - If you physically change your PCIe devices in any way (e.g. changing slot, adding, removing, swapping devices), you should disable (untick) all the ticked devices first and rebuild the VFIO-PCI.CFG before making the physical change. This will ensure you don't accidentally stub the wrong device, because changing devices can change the bus number which the VFIO-PCI.CFG uses to stub. If you are familiar enough with your config, you can actually guess accurately what is going to change and thus don't have to disable VFIO-PCI.CFG first, but it's just safer to disable it. "Stub" = making it appear in Other PCI Devices.
     Finally, if you are familiar with the xml, you can actually make a very simple edit in the xml to pass through the other NVMe without the need to use VFIO-PCI.CFG at all (see the sketch below). I have done that many times and it's actually a lot simpler than it seems initially, so I recommend spending some time to understand the xml.
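     For reference, that xml edit is just a <hostdev> block inside <devices>; a minimal sketch (the bus/slot address is an example only, use your NVMe's address from Tools -> System Devices):

        <hostdev mode='subsystem' type='pci' managed='yes'>
          <source>
            <!-- example address: replace with your NVMe's, e.g. 0000:04:00.0 -->
            <address domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
          </source>
        </hostdev>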
  12. The P400 probably can only handle 1x 4k stream or 6x 1080p streams so keep that in mind too with your Quadro selection.
  13. With a single GPU, no, you can't do your 3rd scenario. In fact, with a single GPU, there is no point in using Unraid for your Linux + Windows use case, since you need to shut down 1 VM to run another, which is parallel to rebooting into another OS bare-metal.
     With 2 GPUs, it is possible but no guarantee. Your chance of success will improve tremendously if you can boot Unraid with the 7700K iGPU. It's an option in the BIOS, but I'm not sure whether the Asus BIOS allows that. I know Gigabyte and ASRock do allow booting with the iGPU while ignoring other dedicated GPUs. This is not a requirement (i.e. with the right config, you might still be able to do it with 2 dedicated GPUs, 1 of which Unraid boots with), but it would certainly help a lot (e.g. one user I recently helped had success by simply switching Unraid to boot with the iGPU).
     Keyboard/mouse switching may need additional software / hardware help. I have a USB 3.0 switch, so my input devices switch between 2 machines at the press of a button. Display switching is a monitor functionality.
     In terms of a MacOS VM, there is a Catch-22 situation. Catalina works better with AMD GPUs, but AMD GPUs have reset issues that make them difficult to pass through to a VM. Nvidia GPUs are (relatively!) more cooperative with VM pass-through, but you have to jump through hoops to make them work with Catalina. So you will have a hard time getting a MacOS VM to work properly with a GPU. Possible but tough. Hence, unless you are well-versed in Hackintosh, I would recommend just not wasting your time trying.
  14. We should just agree to disagree then. At least the OP gets to see it from both sides to make a better decision. 😉
  15. Get a Gigabyte motherboard. It would give you flexibility in GPU placement i.e. you don't have to place the P400 on the 1st PCIe slot. Gigabyte BIOS allows you to pick any PCIe x16 slot to boot with (it's called "Initial Display Output").
  16. Multiple things:
     - Please use the forum code functionality (the </> button next to the smiley button) when copy-pasting text from the Unraid GUI / xml. It formats the text correctly, which helps a lot with identifying issues.
     - Your xml only passes through the GPU without the HDMI audio device, and it will 100% not work that way. You need to pass through both together (see the sketch at the end of this post).
     - The RX 570 has the reset issue, so it will be very tough to pass it through to a VM. The only success story that I have seen is when Unraid does NOT boot with it + a vbios was used.
     So as a starting point, are you able to make Unraid boot with the 8400? If so, do that. Then watch (all) the SpaceInvader One tutorials on Youtube about VM setup. There are a lot of details there that will help. If you reply, please attach diagnostics in your next post (Tools -> Diagnostics -> attach the full zip file).
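     For illustration, "both together" means two <hostdev> entries in the xml, one per function of the card; a minimal sketch with example addresses (yours will differ, check Tools -> System Devices):

        <hostdev mode='subsystem' type='pci' managed='yes'>
          <source>
            <!-- the GPU itself, e.g. 0000:01:00.0 -->
            <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
          </source>
        </hostdev>
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <source>
            <!-- the matching HDMI audio device, e.g. 0000:01:00.1 -->
            <address domain='0x0000' bus='0x01' slot='0x00' function='0x1'/>
          </source>
        </hostdev>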
  17. "Worth it" depends not only on price and budget but also "what can you spend with the money you save". As Benson said, you need to take into account the cost diff of the CPU + cost of an additional budget air cooler (unless you plan to use a 3rd party cooler regardless of CPU choice then exclude the cost of the cooler). That would probably work out to about $30 - $60 (or £30 - £60, the sterling is no longer as powerful as it once was). Let's just use $50 / £50 as a nice rounded estimate. What can you buy with $50/£50 that would give you more than the 3.3% improvement (9700 vs 9700K)? Have a think about it that way and it would make the decision much easier.
  18. If you have problems setting things up in the app store, you can msg Squid for help. I think this would be a good addition to the app store.
  19. Start a new topic please.
     - Attach your diagnostics zip (Tools -> Diagnostics -> attach zip file).
     - Also take a screenshot of what you meant by "only the same one is recognized by Unraid as available to VM's" <-- which one?
     - Specify which GPU Unraid booted with.
     - Did you enable IOMMU in the BIOS? <-- Enable means "Enable", i.e. not leave it at Auto.
  20. You stubbed the device by ID:

        Mar 30 19:34:07 BBBServer kernel: Command line: BOOT_IMAGE=/bzimage initrd=/bzroot,/bzroot-gui vfio-pci.ids=144d:a808

     So all devices with the same ID will be unavailable (i.e. invisible) to Unraid, i.e. they will not appear anywhere, including Unassigned Devices. You look to have 2 devices with 144d:a808 (which is shared among multiple Samsung M.2 SSDs, e.g. 970 Evo, PM983 and apparently now 970 Pro too). Are you looking to have 1 used by Unraid and 1 used by the Windows VM, or something?
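     You can confirm from the Unraid command line; a quick sketch:

        lspci -nn | grep 144d   # lists every Samsung device, with its [vendor:device] ID in brackets

     Every line ending in [144d:a808] is caught by that stub, which is why stubbing by ID doesn't work when two drives share a controller model.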
  21. Remote should be the ip of your server. Alternatively, why not just enable mirroring to flash? It's in the same settings page.
  22. I disagree then. A one-way periodic sync is a backup. That's why people use rsync as a backup tool (see the sketch at the end of this post). In fact, a synced copy is an excellent backup because restoring does not require any additional software: drag-and-drop in Windows Explorer would work; cp -ar in a Linux console would work.
     In fact, I would definitely NOT recommend Duplicati because:
     - It doesn't scale well at all. It worked originally when I only had a few GB of documents to back up. It crapped itself once things got into the hundred-GB range.
     - It requires technical skills to restore. My granny certainly can't use Duplicati, but she understands "drag-and-drop".
     - A corrupt incremental volume makes subsequent incrementals untrustworthy to restore. You have to go back to the last full backup, which means you might as well have full backups all the time, which means you might as well have a synced copy.
     Its other bells and whistles are not as important as they seem:
     - Compression is good to have, but I would argue that most things that need to be backed up (e.g. photos) don't really compress well at all.
     - Incremental is good to have, but see my above point about its disadvantage.
     - Encryption is good to have, but only if you back up to the cloud (and only if you don't have any other way to encrypt, e.g. an rclone encrypted volume). For most people, I would argue that an encrypted backup adds an unnecessary layer of complexity.
     - Deduplication is good to have, but how much data is duplicated in a typical user's backup?
     I guess I can TL;DR my response as: when it comes to backup, "fairly basic" is a compliment.
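     As a concrete example, a one-way periodic sync can be a single rsync line in a scheduled script; a minimal sketch with example paths:

        # mirror the source into the backup location; --delete makes it an exact copy
        # (drop --delete if you want deleted files to survive in the backup)
        rsync -a --delete /mnt/user/documents/ /mnt/disks/backup/documents/

     And restoring really is just the reverse copy, no special software needed.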
  23. You can add more parity later when you need it. However, considering your array consists of 3-4TB drives, you should first look to replace them with higher-capacity drives (e.g. 8TB+) instead of adding more low-capacity drives + more parity. HDDs fail in statistical patterns (after the "infant mortality" has been weeded out by stress testing (e.g. run a preclear cycle *hint* *hint*) each drive before adding it to the array), so the more drives you have, the more likely you are to have a failed drive.
     With regards to the cache pool, you can add more drives to the cache pool later as long as it is btrfs format. If you format your cache as xfs, it can only be used in a single-drive cache pool. If you increase the number of drives to >1 (even if the extra slot is not assigned), it will ask you to format the pool as btrfs if it isn't btrfs already.
     My personal recommendation is that you should only use a multi-drive cache pool if you want to run RAID-1 (i.e. mirror protection). Otherwise, you almost always have a better arrangement mounting the extra drives as unassigned devices. For instance, instead of increasing the size of the cache pool by adding more drives, you can mount the additional SSD as unassigned and separate read-heavy and write-heavy data. That will improve the lifespan of both SSDs by reducing wear leveling and write amplification.
     Tip on getting an SSD: avoid QLC and DRAM-less. They are cheap but in some cases can be even worse than a HDD.
  24. Try to see if your VM boots without the USB controller. I think it's the USB controller that prevents it from rebooting, as it can't be reset. 03:00.0 is a USB 3.1 controller, which is typically not pass-through-able; 28:00.3 is USB 3.0, which will probably give you better luck.
     Before trying 28:00.3 though, go to the app store and install the VFIO-PCI Config plugin, then Settings -> VFIO-PCI.CFG and look for the 2 USB controllers. It will tell you which USB device is plugged into which controller. Make sure the USB stick is NOT plugged into the controller to be passed through. If you picked 03:00.0 because your USB stick is plugged into 28:00.3, then you need to use a different port (preferably a USB 2.0 port). To test which port is on which controller, use a mouse or ANOTHER USB stick and just refresh the VFIO-PCI.CFG page for each port. (Hint: usually ports in the same "block" on the rear panel are connected to the same controller.)
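     If you prefer the command line over refreshing the plugin page, a quick sketch:

        lsusb -t   # prints the USB tree: each device shown under the bus it hangs off

     Each bus in that tree belongs to one of the two controllers, so moving the stick from port to port and re-running it gives you the same port-to-controller mapping.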
  25. Yes, 2 things to note:
     - Dual parity would be overkill for a 6-drive array (assuming you stress test (e.g. run a preclear cycle on) each drive before adding it to the array). If you are really risk-averse then fair enough, but you would have to be super unlucky for single parity not to be enough in a 6-drive array.
     - In case there's any misunderstanding, you can't migrate the 3x 3TB FreeNAS RAID-0 from your current rig to Unraid directly. You have to get the data off it first, e.g. onto some other drives. Unraid will have to format the 3x 3TB drives before they can be used in the array.