chris_netsmart Posted October 30, 2022

I have just moved my Unraid into Proxmox with little to no problems, but after Unraid finished rebuilding my parity, I discovered that I am not able to spin down my hard drives within Unraid. My current thinking is that Unraid sees them as VM hard disks and not physical hard drives. Everything is on default settings.

As many other people have done, I have passed through my hard drives as SCSI drives within Proxmox. Could my problem be that, because I am passing the hard drives through as SCSI devices and not via a PCI controller, Unraid is not able to see the full hard drive details? I did try to pass through a PCI controller, but this just caused Unraid and Proxmox to crash.

If I am wrong, or you know how to fix this, then please let me know, as I don't like having my disks spinning 24/7 when I don't need them to.
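For context, the setup being described is the usual way of attaching raw disks to a Proxmox VM. A minimal sketch, assuming VM ID 100 and a placeholder disk serial (both hypothetical; substitute your own):

```shell
# Sketch: attach a physical disk to VM 100 as a virtual SCSI drive.
# The VM ID (100) and the /dev/disk/by-id/... name are placeholders.
# Disks attached this way appear to the guest as QEMU virtual disks,
# which is why SMART data and spin-down are not visible inside Unraid.
qm set 100 -scsi1 /dev/disk/by-id/ata-WDC_WD40EFRX-EXAMPLE
```

Using `/dev/disk/by-id/` names rather than `/dev/sdX` is the common practice, since the latter can change between reboots.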
uldise Posted November 5, 2022

On 10/30/2022 at 10:55 AM, chris_netsmart said:
"is that Unraid see them as Vm Hard disks and not a physical hard drive."

That's true. With such a config, forget about SMART monitoring and disk spin-down; you must pass through the whole disk controller to get these features working.

On 10/30/2022 at 10:55 AM, chris_netsmart said:
"I did try and pass through a PCI controller but this just course unraid and proxmox to crash"

Look at the IOMMU group of your controller: if there are many devices in the same group, you are in trouble. As per your pic, I see group 11 more than once. If you pass through that one device, then all the others in its group will become inaccessible to the host. There are some options to split PCIe devices into their own groups, but it depends on the motherboard used.
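The grouping described above can be inspected from the Proxmox host shell. A minimal POSIX-shell sketch (the function name is my own; it just walks sysfs):

```shell
# Sketch: print each IOMMU group and the PCI device addresses inside it.
# Devices that share a group can only be passed through together.
# An optional argument overrides the sysfs root (useful for testing).
list_iommu_groups() {
    root="${1:-/sys/kernel/iommu_groups}"
    for g in "$root"/*/devices; do
        [ -d "$g" ] || continue
        grp="${g%/devices}"
        printf 'IOMMU group %s:\n' "${grp##*/}"
        for d in "$g"/*; do
            [ -e "$d" ] || continue
            printf '  %s\n' "${d##*/}"
        done
    done
}

# On a real Proxmox host, just run it with no arguments:
list_iommu_groups
```

Pipe the PCI addresses through `lspci -nns <address>` to see human-readable device names; if the disk controller shares a group with anything else, passing it through will take those devices with it.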
chris_netsmart Posted November 6, 2022 Author

On 11/5/2022 at 11:38 AM, uldise said:
"that's true - if you have such config, forget about SMART monitoring, disk spindown.. you must pass-through whole disk controller to get these features working."

Thanks for the information. I am looking at something like a "Genuine LSI 6Gbps SAS HBA LSI 9211-8i (= 9201-8i) P20 IT Mode ZFS FreeNAS unRAID" to pass my HDs through, but before I do, I would like to confirm that the following devices are not up to the job.
uldise Posted November 6, 2022

1 hour ago, chris_netsmart said:
"I would like to confirm that the following devices are not up to the job"

I'm not sure about your question. If you add the LSI card, then you will see its group number; if it is in its own group, then you should pass it through to the VM.
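If the HBA does land in its own IOMMU group, passing the whole controller through is a single `qm` call. A sketch, assuming VM ID 100 and PCI address 01:00.0 (both placeholders; use the address `lspci` reports for the LSI card on your host):

```shell
# Sketch: pass the whole HBA at PCI address 01:00.0 (placeholder)
# through to VM 100 (placeholder). The guest then talks to the real
# controller, so Unraid sees the physical disks, including SMART
# data and spin-down control.
qm set 100 -hostpci0 01:00.0
```

The host loses access to every disk on that controller once the VM starts, so make sure the Proxmox boot/root disks hang off a different controller.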
chris_netsmart Posted November 6, 2022 Author

OK, by the look of it I have a lot of devices in IOMMU group 11. I will see if I can split them up into their own groups and then retry.
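On boards whose firmware lumps many devices into one group, the usual (and somewhat risky) workaround is the ACS override carried in the Proxmox kernel. A sketch of the GRUB change, assuming an Intel system booting via GRUB (use `amd_iommu=on` on AMD):

```shell
# /etc/default/grub (fragment, assumption: Intel CPU, GRUB boot).
# pcie_acs_override splits devices into smaller IOMMU groups, but it
# weakens the isolation guarantees between them, so use it knowingly.
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt pcie_acs_override=downstream,multifunction"

# Apply and reboot:
# update-grub && reboot
```

After rebooting, re-check the groups; if the controller now sits alone, passthrough should no longer take the neighbouring devices down with it.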