DaNoob

  1. Thanks. Will try that at the next shutdown opportunity. The longer I use Unraid, the more it does... I'm at the point where it is the only computer in the house (excluding router, phone & laptop): media server, file server with remote replication, personal cloud, work VM, gaming VM, dev environment... I had so much fun playing with Unraid that I created the most massive SPoF (single point of failure) of my career, in my own house...
  2. Yes, I still have the issue. What did you change? As a last resort, I was starting to look at the new Ryzen 5700G with integrated graphics. It should be compatible with my mobo and would fix the issue, but that is a decent chunk of change for a relatively small upgrade. It would be a different story if they added a 12-core SKU, but it seems unlikely...
  3. Alright, I keep trying to get my 1080 Ti to work in the first slot, to no avail. I've been able to extract a vBIOS with @SpaceInvaderOne's script. I've also removed the header from one I found on TechPowerUp. I've tried both and get the same result when I boot the VM: the screen goes black for a couple of seconds, then displays the kernel output that was on the screen previously, but kinda "zoomed in". The VM log fills with "Device or resource busy" errors, which quickly fill the Unraid server log too. This only happens when the GPU is plugged into the first slot (marked for vfio or not); it works fine in the second one. Any ideas on what else I could try?
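     For reference, the header-removal trick boils down to cutting everything before the 0x55AA PCI option ROM signature. A minimal sketch in Python (file names are hypothetical; always verify the result in a hex editor before feeding it to a VM):

       # strip_vbios_header.py - cut a vendor header off a dumped vBIOS.
       # A valid PCI option ROM starts with the bytes 0x55 0xAA; Nvidia
       # dumps often prepend a vendor header before that signature.
       with open("gtx1080ti_dump.rom", "rb") as f:   # hypothetical name
           data = f.read()

       sig = data.find(b"\x55\xaa")
       if sig < 0:
           raise SystemExit("no 0x55AA ROM signature found")

       with open("gtx1080ti_clean.rom", "wb") as f:  # hypothetical name
           f.write(data[sig:])
       print(f"stripped {sig} header byte(s)")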
  4. You are right. I read the Asus TUF Gaming X570-Plus manual (page 21) and it appears that the top 16x connector is controlled by the CPU, while the bottom 16x and the three 1x slots are all controlled by the chipset. That means, with Ryzen 3rd gen, the top slot has 16 lanes (whatever gen), and the bottom slot has only 4 at best. So there would indeed be a benefit if I could plug my 'main' GPU into the top slot. I'm still going with the vertical mounting of the 1080 Ti, because whichever slot I use, I'm blowing hot air right at one of my M.2s; the bracket and extension should arrive by the end of the week and greatly improve my airflow... and allow for easier swaps/troubleshooting. So we are back to the issue of not being able to select a primary GPU in the X570 BIOS (damn you Asus), and the VM refusing to start with the "Device or resource busy" error message in the logs. Maybe I don't understand how the 'System devices' menu works in 6.9.1? I assumed having the device marked for vfio-pci meant that's all I had to do. Apparently, it is not. Does it work differently in 6.9? Or do I still have to dump/reload the vBIOS like in the old documentation/videos (thanks @SpaceInvaderOne btw, you taught me a lot! But I still have a lot to learn apparently...)? Also, a way to tell the kernel to use the 1650 for its framebuffer/GUI would be a big help, since it would allow me to see the console and manage the server locally while having the Nvidia driver loaded, hence reducing power consumption...
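     For what it's worth, if the vBIOS does still need to be supplied manually, it goes into the VM's XML as a <rom> element on the passed-through hostdev. A sketch, assuming the 0b:00.0 address from the posts below and a hypothetical ROM path:

       <hostdev mode='subsystem' type='pci' managed='yes'>
         <source>
           <address domain='0x0000' bus='0x0b' slot='0x00' function='0x0'/>
         </source>
         <!-- hypothetical path; point it at the headerless dump -->
         <rom file='/mnt/user/isos/vbios/gtx1080ti_clean.rom'/>
       </hostdev>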
  5. I've checked in my old ASRock board's BIOS and it does indeed have an option to pick the 'primary' GPU. This seems to be an X570 issue, as outlined by the Reddit thread I linked to in my previous post. So I have decided to sidestep the issue by ordering a vertical GPU mounting riser from Fractal Design, so at least the GPU won't blow directly on my NVMe drive: it reaches 67°C at full load (thanks Dyson Sphere Program 😉), which is way too hot for comfort (0-70°C operating range according to Samsung)... According to Unraid, that second NVMe is 10 to 15°C hotter than the other one with the GPU under load, and 6-7°C hotter when idling. Yes, the bottom slot is "only" 8x, but it is PCIe gen4 and the GPU is gen3. I'm not sure if a conversion happens, giving me 16x gen3, or not. But I have honestly never noticed a difference in performance. Maybe in synthetic benchmarks or other applications, but for gaming and everyday work, 8x seems to be fine...
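     For the record, PCIe doesn't convert lane width: the link simply trains to the highest generation both ends support, so a gen3 card in a gen4 x8 slot runs at gen3 x8. A quick back-of-the-envelope check (approximate per-direction bandwidth, 128b/130b encoding):

       # Approximate usable PCIe bandwidth per direction, in GB/s.
       def pcie_gbps(gt_per_s, lanes):
           # GT/s per lane * lanes * 128/130 encoding, bits -> bytes
           return gt_per_s * lanes * (128 / 130) / 8

       print(f"gen3 x8 : {pcie_gbps(8, 8):.1f} GB/s")   # ~7.9
       print(f"gen3 x16: {pcie_gbps(8, 16):.1f} GB/s")  # ~15.8
       print(f"gen4 x8 : {pcie_gbps(16, 8):.1f} GB/s")  # ~15.8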
  6. Sadly no, the PCI Express settings only offer the option to choose the PCIe gen (1-4) for each port. They are all on auto by default and seem to handle my graphics cards (gen3) and SATA controller (gen2) fine. I can confirm the trick of enabling CSM does not work on X570; I think it was more of a side effect on X370/B350 that was fixed with the later chipsets... Maybe it can be worked around by passing options to the kernel through the bootloader? Telling the kernel to only load its framebuffer on the cards that are not marked for vfio use?
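     That idea does exist: Unraid boots via syslinux rather than GRUB, and the kernel line lives in /boot/syslinux/syslinux.cfg on the flash drive (also editable from Main -> Flash -> Syslinux configuration). A sketch, with the 1080 Ti's device IDs purely for illustration (check yours with lspci -nn, and keep a backup of the file):

       label Unraid OS
         menu default
         kernel /bzimage
         # video=efifb:off disables the kernel's EFI framebuffer so it
         # cannot claim the passthrough card (note: local console output
         # stops too, until a GPU driver loads). vfio-pci.ids binds the
         # card to vfio-pci at boot. 10de:1b06 / 10de:10ef = 1080 Ti GPU
         # + its audio function (illustrative; substitute your own IDs).
         append video=efifb:off vfio-pci.ids=10de:1b06,10de:10ef initrd=/bzroot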
  7. I was unable to find such an option in my BIOS. I'm running an Asus TUF GAMING X570-PLUS. I found something about enabling CSM that switches the primary GPU (no idea why). I'll give that a try and keep you posted.
  8. I have a similar issue. My setup:
     - GTX 1080 Ti that I want to pass through to a Windows VM, in the top (first) slot, viewed as 0b:00... and marked as "bound to vfio at boot"
     - GTX 1650 that I want to use for the Unraid display and in Docker containers, in the bottom slot, viewed as 05:00
     Both have displays attached. At boot, both cards' primary displays show the startup menu; they are mirrored until "Loading /bzroot ...ok" shows up. At that point, the display connected to the 1650 freezes and the boot sequence continues on the primary display of the 1080 Ti, until the vfio driver is loaded. Then the display on the 1080 Ti also freezes, which kinda makes sense. However, when I start the Windows VM, it starts up and the screen goes black, and the VM's log fills with "2021-04-03T14:48:00.897611Z qemu-system-x86_64: vfio_region_write(0000:0b:00.0:region1+0x13550a, 0x0,1) failed: Device or resource busy". Very soon after, I get a warning about the Unraid logs being full. Note: if I flip the GPUs, it works fine, but the 1080 Ti gets poor airflow and blows directly on my NVMe drive, heating it up to 67°C, which makes me very uncomfortable...
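     One hedged diagnostic for that "Device or resource busy" symptom: it usually means something on the host still has the card's memory region mapped, and the usual suspect is the boot framebuffer that was painted on the 1080 Ti's display during startup. Two quick checks from the Unraid console/SSH:

       # Which driver actually owns the card? (expect: vfio-pci)
       lspci -nnk -s 0b:00.0

       # Is a boot framebuffer still claiming a memory range?
       grep -i -e efifb -e bootfb /proc/iomem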