technicavivunt

Everything posted by technicavivunt

  1. Can confirm @DeadDevil6210's method works for me as well with the ASRock A380 on 6.12-RC2 for Emby. Still struggling to get HandBrake going. For Emby, Arc cards aren't supported just yet except in the :beta branch; once I switched to that, the GPU popped up alongside my NVIDIA GPU. Anyone got it working in HandBrake yet?
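For anyone else hunting for the :beta switch mentioned above: in Unraid this is the Docker container's Repository field. A minimal sketch, assuming the official Emby image name:

```
Repository: emby/embyserver:beta
```

After saving, Unraid pulls the beta-tagged image and recreates the container; the Arc GPU should then be selectable for hardware transcoding.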
  2. The HandBrake container would need to be updated for Arc support, the way tdarr's already has been.
  3. Looks like my VM config is almost identical. I changed my BIOS to match yours and reinstalled the OS, and it still looks like I'm getting a Code 43.
  4. Looks like it had no effect. I tried my other VM just to see if it was an OS thing instead, but it seems something else is up.
  5. Disabling ReBAR right now and trying the 4032 driver to see if that helps.
  6. Same thing here on my Threadripper build: two GPUs, one GTX 1060 and one A380.
  7. Looks like this was a bust; guess it’s back to the drawing board
  8. I just noticed a comment in a Reddit thread regarding passthrough. Apparently it worked if you start the VM with no display connected to the GPU, then connect one once it's booted. I'll give that a try once I'm home to confirm.
  9. I'm in the same boat; I also noticed that the Arc card's audio shows up as an additional device in VFIO on my machine, rather than in the same slot as an additional function. My NVIDIA card likes to drop out too; time to do some investigating.
  10. After some testing, it looks like with a combination of the AMI firmware for the NVMe drives and @JorgeB's solution in Unraid, the drive has been stable for the last few days.
  11. Didn't find much regarding C-states, but in the BIOS there's an option under the NVMe configuration for AMI firmware versus vendor firmware. Selecting AMI firmware seems to have resolved the issue (at least over the past 12+ hours). I'll test it with some less important stuff over the next few days just in case and will post an update.
  12. Looks like I'm still getting this while the SSD is passed through, even after the Syslinux Configuration change. I'm going to poke around in my BIOS and turn off C-states if possible when I get home, to see if that's the root cause. Definitely feels power-management related.

     vfio-pci 0000:02:00.0: VPD access failed. This is likely a firmware bug on this device. Contact the card vendor for a firmware update
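For reference, the Syslinux change referenced here is the usual (pre-6.9) Unraid way of binding a device to vfio-pci at boot: an extra parameter on the `append` line of the boot config. A sketch, where `1234:5678` is a placeholder — substitute the drive's actual vendor:device ID from Tools > System Devices:

```
label Unraid OS
  menu default
  kernel /bzimage
  append vfio-pci.ids=1234:5678 initrd=/bzroot
```

On current Unraid versions the same binding can be done with the checkboxes on the Tools > System Devices page instead of editing Syslinux by hand.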
  13. Looks like my Google-fu has failed me, but maybe someone can shed some insight. My XPG S70 Blade seems to have connection dropouts with no rhyme or reason that I can find. It'll be fine for a few hours, then all of a sudden I get a notification that the device is missing:

     Mar 29 11:00:27 TheRedQueen kernel: nvme nvme2: controller is down; will reset: CSTS=0xffffffff, PCI_STATUS=0xFFFF

     I tried passing it through to a VM to check its firmware for an update, but to no avail. With the passthrough, the error has shifted to:

     TheRedQueen kernel: vfio-pci 0000:02:00.0: VPD access failed. This is likely a firmware bug on this device. Contact the card vendor for a firmware update

     I should mention that inside the VM the drive never actually drops out of the OS, but in Unraid it definitely does, and a reboot is necessary to get it back on track. I've reseated it once just in case, but the issue seems to happen about once every 24 hours. Thoughts?

     Supermicro M12SWA-TF
     AMD Ryzen Threadripper PRO 3955WX
     NVIDIA GTX 1060 6GB (for transcoding purposes)
     2x LSI 9202-16e HBAs
     LSI 9272-8i HBA
     2x T-Force Cardea 1TB (cache) in an ASUS Hyper M.2 expansion (bifurcated x4/x4/x4/x4)
     Seasonic PRIME 1000W Platinum PSU

     theredqueen-diagnostics-20220329-1129.zip
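A note on reading that first log line: CSTS is the NVMe controller status register, and an all-ones value (0xffffffff) means the register read itself failed — the device fell off the PCIe bus entirely rather than reporting a normal error status, which fits a power-management or link-level problem. A quick sketch of pulling that field out of a syslog line (the sample line is copied from the post):

```shell
# Sample syslog line from the post above
log='Mar 29 11:00:27 TheRedQueen kernel: nvme nvme2: controller is down; will reset: CSTS=0xffffffff, PCI_STATUS=0xFFFF'

# Extract the controller status field. CSTS=0xffffffff (and PCI_STATUS=0xFFFF)
# means the MMIO read returned all-ones, i.e. the device dropped off the bus.
echo "$log" | grep -oE 'CSTS=0x[0-9a-fA-F]+'
```

Running the same grep against /var/log/syslog is a cheap way to count how often the dropout actually fires, to confirm the roughly-every-24-hours pattern.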
  14. I love the ability to add one drive at a time in Unraid, and a feature I'd love to see is a native, secure remote-access solution for outside the home.