Everything posted by testdasi

  1. https://lmgtfy.com/?q=nvidia+transcode+support
  2. Yes, definitely back up your data before playing around. Unraid must be installed on a USB stick, and only on a USB stick with a unique GUID. Most branded USB sticks have unique GUIDs. Preferably plug your USB stick into a USB 2.0 port (USB 3.0 ports have been known to drop the stick offline, with overheating suspected as the cause).
  3. Maybe try something simple. From the command line:
     mkdir /mnt/disk1/cachecopy
     cp -rav /mnt/cache/* /mnt/disk1/cachecopy/
  4. I'm not talking about the SMART attributes. It's the short / long test that we typically see for SATA devices. I just noticed none of my NVMe SSDs seem to have the option to run short / long SMART tests. Not in Unraid, Linux, Windows etc. Is NVMe SMART just missing this functionality? Or does NVMe work in such a way that would render these tests invalid?
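For what it's worth, a sketch of how to check this from the command line. This assumes smartmontools 7.0+ and nvme-cli are installed and the drive is /dev/nvme0 (both are assumptions; adjust for your system). The NVMe spec only added an optional Device Self-test command in revision 1.3, so many drives genuinely don't implement it:

```shell
# Each command is guarded / allowed to fail, since not every drive supports self-test.
if command -v smartctl >/dev/null 2>&1; then
  smartctl -c /dev/nvme0 || true        # capabilities page: look for a self-test entry
  smartctl -t short /dev/nvme0 || true  # works only if the drive implements Device Self-test
fi
if command -v nvme >/dev/null 2>&1; then
  nvme device-self-test /dev/nvme0 -s 1 || true  # 1 = short, 2 = extended
  nvme self-test-log /dev/nvme0 || true          # progress / result
fi
status=ok
```

If smartctl -c shows no self-test support, the drive simply doesn't implement the optional command.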
  5. Unlikely. Based on what LT has announced, 6.9.0 will be on the 5.x kernel (which is why 6.9.0-rc1 will be on the 5.x kernel). The reason 6.8.x is still on 4.19 is that they discovered some strange bugs with docker networking on the 5.x kernel and need to fix them first.
  6. AMD GPUs have the reset issue. Did you start the VM with the 1st xml, shut it down, then start the 2nd xml and it stopped working? If so, that's the reset issue. You need to dump your vbios to have a fighting chance against the problem. It may or may not work, but without it, it's highly unlikely to work.
  7. Post a new topic - log filling up can be due to many causes. Also, attach the diagnostics AFTER your log has filled up so it's more apparent what caused it.
  8. That is very much misguided. I thought these "use RAM for pagefile" kinda myths would all have been debunked by now. Just assign more RAM to your Windows VM, or use the balloon functionality if you want flexible RAM assignment.
  9. Tools -> Diagnostics -> attach zip file Or you can always view the SMART report on the GUI.
  10. As always, please go to Tools -> Diagnostics -> attach zip file to any post regarding issues. From the look of it, you didn't stub all the devices in group 33. You need to watch the Spaceinvader One guide on YouTube about vfio-pci.ids stubbing (preferably just watch his whole playlist about Unraid VM). Alternatively, install the vfio-pci config plugin (just search for "VFIO-PCI Config" in Community Applications) and follow the plugin instructions to stub the devices in group 33 (it's just ticking some boxes and clicking a button). Note that with this method, BEFORE you install new PCIe devices and/or move devices around, you have to remember to turn the stubbing off first. This method stubs by address, and installing a new device / moving devices around will change the addresses (i.e. you may end up stubbing the wrong device). Also note that the RTX 2080 has 4 devices that need to be passed through together, i.e. b3:00.0 -> b3:00.3.
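For reference, the manual stubbing the guide covers boils down to one kernel parameter on the syslinux append line. The vendor:device IDs below are placeholders, not the actual IDs of the devices in group 33 (read yours from the IOMMU group listing or lspci -nn):

```text
# /boot/syslinux/syslinux.cfg -- kernel append line (example IDs only)
# Stubbing by vendor:device ID survives slot changes, unlike stubbing by address.
append vfio-pci.ids=10de:1e87,10de:10f8,10de:1ad8,10de:1ad9 initrd=/bzroot
```

Four IDs, matching the four functions of the GPU that all need to be bound together.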
  11. With regards to the 1950X (and Threadripper in general), you might want to watch the Spaceinvader One guide on lstopo and NUMA nodes. Once you identify the NUMA nodes, these tweaks will improve gaming performance (and playability):
      • Isolate cores from the NUMA node connected to the GPU.
      • Pin cores from the GPU NUMA node to the appropriate VM (i.e. only the one that uses that GPU).
      • Allocate RAM from the GPU NUMA node to the appropriate VM.
      • Pin the emulator and IOThread (especially if you are using a vdisk).
      • Do not over-assign cores (e.g. if you can limit to 4 cores of the same CCX, that would be best).
      • If multiple CCXs are required, spread the core assignment evenly across them (e.g. 3 + 3 is better than 3 + 4).
      You should dump the vbios for all the GPUs. The procedure (if you follow the Spaceinvader One guide) is very straightforward. Note: you have multiple GPUs so there's no excuse to download a vbios from Techpowerup. One of the most annoying issues on here is downloading the wrong vbios.
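As a sketch of what those pinning tweaks look like in the VM xml — every core, thread, and node number below is a placeholder; take the real ones from your own lstopo output:

```xml
<!-- Example only: 4 cores + their SMT siblings from the GPU's NUMA node (node 1 here) -->
<vcpu placement='static'>8</vcpu>
<iothreads>1</iothreads>
<cputune>
  <vcpupin vcpu='0' cpuset='8'/>
  <vcpupin vcpu='1' cpuset='24'/>
  <vcpupin vcpu='2' cpuset='9'/>
  <vcpupin vcpu='3' cpuset='25'/>
  <vcpupin vcpu='4' cpuset='10'/>
  <vcpupin vcpu='5' cpuset='26'/>
  <vcpupin vcpu='6' cpuset='11'/>
  <vcpupin vcpu='7' cpuset='27'/>
  <!-- Emulator and IOThread pinned to a spare core pair on the same node -->
  <emulatorpin cpuset='12,28'/>
  <iothreadpin iothread='1' cpuset='12,28'/>
</cputune>
<numatune>
  <!-- Force RAM allocation from the GPU's NUMA node -->
  <memory mode='strict' nodeset='1'/>
</numatune>
```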
  12. You mentioned "current Intel CPU/GPU combinations seem to limit PCI lanes to 16.". That statement reminded me of a myth that was floating around that if one uses the iGPU, it reduces the speed of other PCIe peripherals because the total speed is limited to 16 lanes. That is entirely not true. From your subsequent reply though, I think you meant the current generation of Intel CPUs with an iGPU has a maximum of 16 lanes. And yes, that is true. If a high number of PCIe lanes is critical for you, then you have no choice but to use a different platform and either forego hardware transcoding or use Nvidia NVENC.
  13. Dumping the vbios is recommended for all GPUs, not just Nvidia. For Nvidia it's mainly to resolve error code 43; for AMD it's mainly to help with the reset issue. The RX 580, in particular, has the reset issue, so you definitely should dump the vbios for it. The only success story I have seen so far with the RX 580 is when it's NOT being used as the primary GPU (what Unraid boots with). So given your mobo doesn't allow picking any PCIe slot as the initial display output, your only solution is to put it in a different slot instead of slot 1. What's wrong with putting the RX 580 in the 3rd slot, given you do have multiple GPUs? What's preventing you from dumping the vbios? It's a rather simple procedure.
  14. That is a myth, probably born of confusion between the theoretical max performance of the iGPU (within the CPU) and the PCIe lanes (out of the CPU). The iGPU is not connected via the standard PCIe pipes; at least, none of the Intel schematics (or 3rd-party technical analyses, e.g. Anandtech) have ever indicated as such.
  15. To share each drive outside of the array, there's already Unassigned Devices plugin. I am, however, concerned about your statement of "2x hardware arrays". Unraid arrays are software-based. If you are mixing hardware-based stuff into Unraid, it will not end well.
  16. Remember to go into the new motherboard's BIOS and change the boot order to the USB stick. Otherwise it should be swap-and-play.
  17. Great post by ramblinreck47 but I need to make 2 corrections to point number 3. With the Intel CPU and QuickSync, you should be able to easily do 15+ 1080p transcodes with little effort except for a few changes in your BIOS and in your go file. Otherwise, you can run it with any form of UnRAID as long as it has a newer kernel where your iGPU is supported (generally 6.8.0-rc1 to 6.8.0-rc7, or 6.9.0-rc1 whenever it comes out; other versions run on a pre-5.x kernel which may not support newer iGPUs). With Ryzen and the P2000, you'll need to install the Nvidia version of UnRAID which isn't updated as fast as the regular UnRAID version (it's still relatively fast, but if you want to update when a newer version of UnRAID comes out you'll need to wait for the Nvidia LinuxServer.io guys to bake in their drivers. Nvidia doesn't lift a fingernail; Unraid Nvidia is the excellent and laborious work of the LSIO guys.). Granted, with the P2000, you'll be able to do 20+ 1080p transcodes with ease.
  18. Can you include the full syslog instead of quoting a few lines?
  19. When dealing with multi-function devices (e.g. a GPU with GPU + HDMI audio), the Unraid GUI will assign a new bus for each additional device by default. This can cause compatibility / performance issues in some cases, most notably but not exclusively with MacOS VMs. The workaround is adding multifunction='on' and changing the bus + function values in the xml. If any edit is done via the GUI, it will revert the bus + function back to the default method, requiring additional edits. New users are also unlikely to be able to make these manual xml edits. It would be a good idea to enhance the VM GUI to detect and make the appropriate edits in the xml automatically for these devices, e.g. group devices by bus + function and create the bus + function in the xml accordingly (adding multifunction='on' for the first device of a multi-function group). At the least, I would imagine it would not be too complicated to apply this as a priority to GPU and HDMI audio devices, since they have their own dedicated GUI boxes so matching them is rather simple.
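To illustrate, a hand-edited GPU + HDMI audio pair would look roughly like this (the source and guest addresses are placeholders). The point is that both hostdev entries share the same guest bus and slot and differ only in function, with multifunction='on' on the first:

```xml
<!-- GPU (function 0x0) -->
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x0b' slot='0x00' function='0x0'/>
  </source>
  <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0' multifunction='on'/>
</hostdev>
<!-- HDMI audio (function 0x1), same guest bus/slot as the GPU -->
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x0b' slot='0x00' function='0x1'/>
  </source>
  <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x1'/>
</hostdev>
```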
  20. Did you make the edit using the GUI? If so, you need to do additional manual edits of the xml to make it boot, e.g. the Macinabox VM uses a custom BIOS, but your xml has reverted to the default BIOS. Read the Macinabox topic by Spaceinvader One for more details.
  21. Are you able to access the VM using RDP (Remote Desktop Protocol)? It's built into Windows. If not, change the display to VNC (remember to remove the GPU USB and Audio devices too) and install a remote desktop software (e.g. NoMachine) and verify that it works. Also set it to auto login and make sure the remote desktop software starts at startup. Then change the GPU back to the 1660 (+ the other 3 devices) and start the VM, make sure it's marked as started, wait a few minutes and then see if you can access the VM. Then go to Device Manager and check what error you see on the GPU device.
  22. You should publish the raw data table instead of drawing it in a graph. Just using your graph, the estimated write speed is zero (because the Unraid columns are invisible). Let's use 1 MB / s just for sensibility. My write performance is consistently WAY above 1 MB/s. I am fairly certain 1 MB/s performance is a show-stopper for everyone. In fact with regards to access to cache via SMB, I can get 500MB/s write speed using a simple test of copying a 50GB file from a UD share (NVMe) to cache (NVMe) through SMB on my Windows VM. I did (only recently) notice that 6.8.2 SMB performance is not as good as 6.7.2 but it's only perceptible with NVMe drives (and presumably also with RAID 0/5/6/10 cache pool). It is absolutely irrelevant to SATA-based devices. So I'm sure there's a bug to fix somewhere but I don't think it's anywhere near the level you are reporting.
  23. Reading the post instead of the TL;DR, it sounds like the potential mitigations are:
      • Use 2nd-gen TR.
      • Allocate RAM and cores from the NUMA node (and a single NUMA node) that is connected to the GPU.
      • Pass the NVMe drive through directly (i.e. the PCIe method) to avoid the IOThread bottleneck.
      • Otherwise, pin the emulator and IOThread to the same NUMA node as the RAM and GPU.
      So I would suggest starting with the 2nd and 4th bullet points and seeing if it improves things.