sjb007

Members
  • Posts: 5
Everything posted by sjb007

  1. Thanks for the update. Can I ask how you determined that from the diagnostic report?
  2. Hi, thank you for the prompt response. The parity drive and drives 1, 2 & 3 are all connected to the backplane; the server is an HP ProLiant ML310e Gen8. Drive 4 and the cache drive may share a power splitter. What did you read in the info I posted?
  3. Hi, I have been using Unraid successfully for a number of years. Last week I noticed my Docker service had stopped working and I was getting warnings about drives with read errors. I restarted the server and everything was OK for about 5 days, until the same warnings came back and the Docker service had stopped again; this time none of my shares were available. I have restarted the server again and pulled a diagnostic report. I have attached two reports, one prior to the shutdown and one after the shutdown. What are people's thoughts on this issue? sjb007ur-diagnostics-20240414-1958.zip sjb007ur-diagnostics-20240414-1945.zip
  4. Was any further work carried out on this issue? I was running the RMRR-patched version, then recently installed the Nvidia plugin, which means I can no longer pass through some devices to my VM. Is there an Nvidia Unraid plugin with the RMRR patch available?
  5. Hi, hopefully one of you can help me. I am running Unraid 6.7.2 on an HP ProLiant ML310e Gen8. I have installed an ASUS ROG Strix 1060 6GB GPU and I am having trouble passing it through to a VM. Various searches of the internet led me here, and I have replaced the bzimage on my flash drive with the current version on GitHub. When I start the VM I no longer get an error on the screen; however, I get a blank screen and nothing else. The system logs show the following:

     Aug 16 21:38:40 sjb007 kernel: vfio-pci 0000:05:00.0: vgaarb: changed VGA decodes: olddecodes=io+mem,decodes=io+mem:owns=io+mem
     Aug 16 21:38:40 sjb007 kernel: br0: port 2(vnet0) entered blocking state
     Aug 16 21:38:40 sjb007 kernel: br0: port 2(vnet0) entered disabled state
     Aug 16 21:38:40 sjb007 kernel: device vnet0 entered promiscuous mode
     Aug 16 21:38:40 sjb007 kernel: br0: port 2(vnet0) entered blocking state
     Aug 16 21:38:40 sjb007 kernel: br0: port 2(vnet0) entered forwarding state
     Aug 16 21:38:42 sjb007 avahi-daemon[2933]: Joining mDNS multicast group on interface vnet0.IPv6 with address fe80::fc54:ff:feaf:4392.
     Aug 16 21:38:42 sjb007 avahi-daemon[2933]: New relevant interface vnet0.IPv6 for mDNS.
     Aug 16 21:38:42 sjb007 avahi-daemon[2933]: Registering new address record for fe80::fc54:ff:feaf:4392 on vnet0.*.
     Aug 16 21:38:42 sjb007 kernel: vfio_ecap_init: 0000:05:00.0 hiding ecap 0x19@0x900
     Aug 16 21:38:42 sjb007 kernel: vfio-pci 0000:05:00.1: Device is ineligible for IOMMU domain attach due to platform RMRR requirement. Contact your platform vendor.
     Aug 16 21:38:42 sjb007 kernel: vfio-pci 0000:05:00.1: Device is ineligible for IOMMU domain attach due to platform RMRR requirement. Contact your platform vendor.
     Aug 16 21:38:44 sjb007 kernel: vfio-pci 0000:05:00.0: No more image in the PCI ROM
     Aug 16 21:38:44 sjb007 kernel: vfio-pci 0000:05:00.0: No more image in the PCI ROM

     I have only copied the part of the log that mentions the GPU. Why am I still getting the "device is ineligible" error? I hope someone can help. Many thanks in advance.
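
Editor's note: the RMRR messages in the log above refer to 0000:05:00.1, the GPU's secondary function (typically its HDMI audio device), which normally sits in the same IOMMU group as 0000:05:00.0. Since every device in the GPU's IOMMU group has to be attached to vfio-pci for passthrough, a first troubleshooting step is to confirm exactly which devices share that group. The following is a minimal Python sketch for doing so, under stated assumptions: it runs on the Unraid host, the IOMMU is enabled so /sys/kernel/iommu_groups is populated, and the GPU address is copied from the poster's log. It only inspects sysfs; it does not work around the RMRR restriction itself.

    #!/usr/bin/env python3
    """Minimal sketch: list the PCI devices sharing an IOMMU group with the GPU.

    Assumptions (not from the original post): this runs on the Unraid host,
    the IOMMU is enabled so /sys/kernel/iommu_groups is populated, and the
    GPU address below is copied from the vfio-pci lines in the log.
    """
    from pathlib import Path

    GPU_ADDRESS = "0000:05:00.0"                    # from the poster's log
    IOMMU_ROOT = Path("/sys/kernel/iommu_groups")   # standard sysfs location

    def devices_by_group():
        """Return a mapping of IOMMU group number -> sorted list of PCI addresses."""
        groups = {}
        for group_dir in sorted(IOMMU_ROOT.iterdir(), key=lambda p: int(p.name)):
            groups[group_dir.name] = sorted(d.name for d in (group_dir / "devices").iterdir())
        return groups

    if __name__ == "__main__":
        if not IOMMU_ROOT.is_dir():
            raise SystemExit("No IOMMU groups found; is intel_iommu=on set?")
        for group, devices in devices_by_group().items():
            if GPU_ADDRESS in devices:
                print("IOMMU group {} contains the GPU:".format(group))
                for dev in devices:
                    print("  " + dev)
                break
        else:
            print("GPU {} not found in any IOMMU group.".format(GPU_ADDRESS))

Saved as, say, iommu_groups.py (a hypothetical filename) and run with python3 iommu_groups.py, it should print both 0000:05:00.0 and 0000:05:00.1 in the same group if the audio function is the device triggering the RMRR message.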