Everything posted by testdasi

  1. Side note on ECC RAM: it's unlikely that memtest will show errors with ECC RAM (because ECC auto-corrects them) unless the stick is really bad. So in your case, your current approach is probably the best one, i.e. test the sticks in pairs until you find the one that crashes. Also try swapping the CPU around to see if the errors move with the CPU.
  2. Make sure to boot Unraid with the iGPU so you stand a fighting chance of passing through the RX580 to the MacOS VM. Unraid will need to format all the drives before using them, so make sure you have your data backed up before starting. In your case, the "ideal" scenario would probably be to use the 1TB SATA SSD as cache and pass through the NVMe as a PCIe device to your MacOS VM (which presumably is your main VM). The other, less important VMs can have their vdisk files stored on cache, and you will have max performance with the NVMe passed through.
  3. Firstly, the error is with regard to IOMMU groups, which is why I suggested you turn ACS Override to "Both". Based on your IOMMU grouping, ACS Override isn't activated (because 41:00.0 is still in group 17), so I would suggest you turn that on first and foremost (see the sketch at the end of this post).
     Secondly, please dump your own vbios. SIO has a tutorial for that. Downloading from Techpowerup is a last resort if you can't dump your own vbios (which you certainly can, because you have Unraid booted on a different graphics card). It's not at all uncommon for new users to download the wrong vbios - and when it comes to vbios, having no rom is better than having the wrong rom.
     Also, next time you copy-paste text from the Unraid GUI, please use the forum code functionality (the </> button next to the smiley button) so the text is formatted and sectioned correctly. That saves you the effort of manually highlighting the device.
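     For reference, a minimal sketch of what ACS Override = Both effectively adds to the append line in the flash drive's syslinux configuration (illustrative only - your existing append line will carry other entries, and the setting is normally toggled in Settings -> VM Manager rather than edited by hand):

         append pcie_acs_override=downstream,multifunction initrd=/bzroot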
  4. An SSD is not a hard drive. From my experience, BTRFS RAID-0 carries no perceivable real-life benefits, so given the risks of things going wrong with RAID-0, single is better. And that's before we consider the ongoing issue with btrfs multi-drive cache pool performance.
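     If your pool is already RAID-0, a balance can convert the data profile back to single. A minimal sketch, assuming the pool is mounted at /mnt/cache and you keep the usual raid1 metadata - back up first, since a balance rewrites every block:

         btrfs balance start -dconvert=single -mconvert=raid1 /mnt/cache
         btrfs filesystem df /mnt/cache   # verify the Data line now says "single"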
  5. Some pointers:
     Watch the SpaceInvader One tutorial on how to dump your own vbios and see if it helps. If you already boot Unraid with the HD6450 and still get black screens, then a vbios is needed. It is no guarantee, but it should help, unless the card just can't be passed through at all.
     2GB for Unraid is probably too little. I would suggest starting with 8GB for each VM, then watching your memory usage and increasing it slowly. You want to make sure your GPU issue is sorted before moving on to the RAM.
     With your plan, you definitely want ACS Override = Both (i.e. downstream + multifunction).
     It would also be useful to vfio-pci.ids stub your to-be-passed-through graphics cards (there's a SpaceInvader One tutorial on that - it's about stubbing and passing through a USB device, but the same principle applies to GPUs). See below for how to find the IDs.
     Your core assignment is almost certainly not ideal, but let's sort the graphics first, then we can talk tuning.
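     To find the vendor:device IDs to stub, you can run this from the Unraid terminal. The bracketed ID at the end of each matching line is what goes into vfio-pci.ids (the ID in the comment is a made-up example - use the ones your cards actually report):

         lspci -nn | grep -i vga   # look for the trailing [vendor:device] pair, e.g. [10de:1c82]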
  6. Yep, use VNC to install OS + enable whatever remote access software you need. Then change the graphics card to the Nvidia card and access the VM through that remote access software you set up. Note though that some graphics cards won't initialise properly without a monitor plugged in so if that's the case, you will need to buy a dummy HDMI plug.
  7. Personally: for Windows, I use RDP; for Ubuntu, I use the screen sharing functionality (which is in itself just a VNC server); for MacOS, I use NoMachine. All are free and superior to Unraid VNC in functionality.
  8. If someone wants to do a plugin, that's fine but this is not even close to being a core Unraid feature that Limetech dev should spend any effort on. Note the warning on the github page: "Exercise caution when clicking the Detect Devices or Dump Device buttons. There have been reports of Gigabyte motherboards having serious issues (bricking the RGB or bricking the entire board) when dumping certain devices." A purely cosmetic feature with the possibility of bricking the motherboard! No thanks.
  9. If you can boot to Repair mode, from the command line just type the below and it will boot back up again (obviously without Hyper-V):

         bcdedit /set hypervisorlaunchtype off

     I have given up on nested virtualisation for quite a while now because of this catch-22 situation. Unraid has docker and VM support, so it's not like I miss it, to be honest.
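     To confirm the change took, you can list the current boot entry and check the hypervisorlaunchtype value (standard bcdedit usage, shown here purely as a sanity check):

         bcdedit /enum {current}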
  10. Ditto what jonathanm said. You seem to have misread my "can" as "must".
  11. Warning: backup your vdisk before running the below. When I tried Docker Desktop, even on bare metal, it wouldn't work because of that Hyper-V not running error. The command below fixed it (run it as administrator):

         bcdedit /set hypervisorlaunchtype auto

      However, when booting as a VM (NVMe passed through in dual boot), the VM wouldn't boot (stuck on the Tianocore screen). Hence, backup your vdisk first before trying, in case you need to restore.
  12. The detection issue looks to be a hardware / BIOS problem. You have 2 NVMe drives plugged in, presumably M.2, which should disable the PCIE5 slot, so one of your "some" VEGA cards would not be detected because the slot is BIOS-deactivated. I would also suggest updating your BIOS to see if it helps. You also have a Marvell controller, which has been recommended against for quite a while now, so you should look to replace that too.
  13. Attaching diagnostics may provide more clues (Tools -> Diagnostics -> attach zip file). Also, what do you mean by "detect"?
  14. NAS and Enterprise drives typically come with additional features such as:
      (1) Longer warranty
      (2) Better vibration / drop protection
      (3) (Free / discounted) data recovery service
      (4) Better rating for continuous (24/7) operation
      Let's use ZFS as an example. It is RAID, which stripes data across multiple drives. That means:
      - All drives have to be spun up, so a RAID system almost never spins down. That makes (4) rather important.
      - If more drives fail than the number of parity drives, ALL data is lost. That makes (3) critical if any data at all is to be recovered.
      - It is usually deployed in enterprise / corporate environments, in which (a) staff don't care as much about handling things with care and (b) server racks are usually terribly designed with regard to vibration absorption, and vibration is bad for moving parts. That makes (2) important.
      - Companies usually depreciate their assets, and once fully depreciated, they replace them, usually with a small accounting profit if the assets are resold. This depreciation is typically done over 5 years, 10 years, etc. Wonder why enterprise drives tend to have a 5-year warranty? That's why.
      Compare that to Unraid, which is NOT RAID. There's no striping; each disk has its own file system. So:
      - Drives can be spun down when not in use, making (4) not as important. In fact, there's an argument AGAINST NAS drives with Unraid precisely because they are rated for 24/7 operation, not the up-and-down usage pattern of Unraid. Moving parts rated for continuous operation don't necessarily take kindly to being switched on and off regularly.
      - If more drives fail than the number of parity drives, only the failed data drives lose data. That makes (3) less important because usually some data is recoverable. Depending on the data, "some" may be good enough.
      - Most Unraid users deploy their servers at home in consumer cases. That makes (2) less important. Consumer cases usually house fewer drives than, for example, a 4U rack-mount case (or a Storinator!), so there is less overall vibration. A lot of consumer cases also have vibration mitigation built in, because vibration means noise, which is rather not appreciated at home. Also, if you own the stuff, you tend to be more careful than that IT guy who just broke up with his lady.
      - Home users don't replace storage on a fixed schedule but rather only when it stops working, which typically is WAAAAAAAAYYYYYY after the warranty has expired. That makes (1) less important.
      Now all those points could be thrown out of the window if consumer drives were terrible and failed significantly more often than their enterprise brothers (including NAS types). Fortunately, we have Backblaze to the rescue with its annual HDD failure analysis. In short, enterprise drives ARE better, but not by much (like a 0.05% - 0.1% difference). So you would be paying at least 20% more for NAS drives that at best are 0.1% less likely to fail, for features that are not as important to Unraid. If money is not a concern (e.g. Linus Sebastian), then OF COURSE go for the NAS and enterprise drives, because they ARE better. But when value is important (e.g. for the rest of us), I'd say the benefits don't justify the cost.
  15. That Unraid does not need a GPU for itself is irrelevant. What matters is whether the motherboard initialises the card at boot - and if initialised, some cards won't be happy being passed through to a VM (both AMD and Nvidia are affected). And it's not which card is in the top slot but which card is initialised by the motherboard BIOS. Most motherboard brands will pick the 1st slot by default with no option. Gigabyte motherboards usually have a BIOS option called Initial Display Output to pick which slot to boot with. Server motherboards with integrated graphics often also have the option to boot with integrated graphics.
      For Intel, check the motherboard manual to see if you can pick the iGPU at boot, which will mean no dedicated card gets initialised - the best-case scenario. For AMD, you just have to pick your poison and select whichever card doesn't cause issues with passthrough and boot with it (either by putting that card in the top slot on a non-Gigabyte mobo, or by picking its slot in the BIOS on a Gigabyte mobo).
      Passing through the primary graphics card (i.e. what the BIOS makes Unraid boot with) will almost certainly need a vbios (see the sketch below). However, it is impossible to confirm whether a vbios will or will not work without the exact same hardware. I do recommend you get a Gigabyte motherboard for the Initial Display Output functionality (double check the owner's manual on the Gigabyte website). With an ambitious plan such as yours (3 VMs, 1 PC), flexibility is worth a lot more than VRM, RGB, ECC and all those bells and whistles. At the very least, it makes dumping your own vbios for each card easy without having to physically swap cards.
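      For reference, a minimal sketch of how a dumped vbios rom is attached to the passed-through GPU in the VM's xml (the PCI address and rom path are placeholders - use your card's actual address and wherever you saved your rom):

          <hostdev mode='subsystem' type='pci' managed='yes'>
            <source>
              <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
            </source>
            <rom file='/mnt/user/isos/vbios/mycard.rom'/>
          </hostdev>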
  16. Then the crash might have been due to passing through the controller that the USB stick is on.
  17. First and foremost, when copy-pasting text from Unraid, please use the forum code functionality (the </> button next to the smiley button) so the code is sectioned and formatted correctly. It's incredibly hard to read code when it's all formatted exactly the same.
      Yours is a classic case of error code 43, so: did you check that your server actually boots with the onboard graphics? I.e. have you plugged a monitor into the ONBOARD display output (e.g. VGA) and made sure you see the Unraid command prompt in the output? Conversely, if you connect the display to the 1030, do you see nothing at boot? It's not uncommon to unknowingly boot with the wrong card.
      Next, please create a new VM template with OVMF + Q35 machine type + Hyper-V on + everything else the same and see if it works (a sketch of the relevant xml is below). If it doesn't, copy-paste the xml here please (see point above about using the </> button). Also copy-paste the PCI Devices section of Tools -> System Devices (again, </> button), and please attach diagnostics: Tools -> Diagnostics -> attach whole zip file.
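      For reference, a minimal sketch of the <features> block that Hyper-V On typically produces in the VM's xml - the vendor_id and kvm hidden entries are the commonly used workaround for Nvidia's code 43 (illustrative of the usual template, not something to paste blindly):

          <features>
            <acpi/>
            <hyperv>
              <relaxed state='on'/>
              <vapic state='on'/>
              <spinlocks state='on' retries='8191'/>
              <vendor_id state='on' value='none'/>
            </hyperv>
            <kvm>
              <hidden state='on'/>
            </kvm>
          </features>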
  18. You might want to add 1022:1486, 1022:1485, 1022:1487 to vfio-pci.ids (a sketch of the resulting append line is below), then pass through the onboard audio (29:00.4) as well as the USB device. Apparently USB + onboard audio are on the same bus (29:00.3 and 29:00.4), so they should be passed through together to the same VM. Failing that, you might want to pass through the whole chain (29:00.0, 29:00.1, 29:00.3, 29:00.4) to the same VM. Before deciding which USB controller to pass through, always double check which one your Unraid USB stick is plugged into. Casually changing USB controller passthrough without checking will inevitably cause problems.
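      A minimal sketch of the append line in the flash drive's syslinux configuration with those IDs stubbed (assuming an otherwise stock line - keep whatever else your append line already carries):

          append vfio-pci.ids=1022:1486,1022:1485,1022:1487 initrd=/bzroot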
  19. If the stick is corrupted or drops offline while the server is running (or during boot up), it always leads to strange issues so keep that in mind.
  20. You can try installing the Unraid Nvidia plugin, which will allow you to install the community-built Unraid Nvidia build that has the driver compiled in. That hopefully will activate a better fan profile. Alternatively, if you just want to "see what's happening", you can do it over the network - the GUI even has console functionality. Passively-cooled low-end graphics cards are also very cheap, and even cheaper bought used; the GT 710, for example, goes for about $30/£30.
  21. Hmmm, try turning on Intel Speed Step in the BIOS and installing the Tips and Tweaks plugin.
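      To check from the Unraid terminal whether frequency scaling is actually kicking in (standard Linux sysfs/procfs reads, shown as a quick sanity check):

          cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
          grep MHz /proc/cpuinfo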
  22. Read this topic for the best way to set rclone up to work with Unraid.
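      For what it's worth, a minimal sketch of the kind of rclone mount that topic builds on (the remote name gdrive: and the mount point are hypothetical - use your own; the flags are standard rclone options):

          mkdir -p /mnt/disks/gdrive
          rclone mount gdrive: /mnt/disks/gdrive --allow-other --dir-cache-time 72h --vfs-cache-mode writes &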
  23. No you don't. Set up a bridge on your onboard controller (typically named br0, so if you already have br0 then the bridge should already be up) and use that for all VMs. Each VM will have its own virtual network adapter which connects to the router through the host bridge, and each virtual adapter even has its own unique MAC address (see the sketch below).
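      A minimal sketch of what that looks like in each VM's xml (the MAC address here is a made-up example - Unraid generates a unique one per VM):

          <interface type='bridge'>
            <mac address='52:54:00:12:34:56'/>
            <source bridge='br0'/>
            <model type='virtio'/>
          </interface>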