darrenyorston Posted April 6, 2021
Hello all. I attempted to upgrade my server from 6.8.3 to 6.9.1 and have encountered a problem with my cache disks. I would appreciate some advice as to how to proceed. I have 8 WD Red disks in my array and three WD Black NVMe drives in a cache pool. The NVMe drives are on the motherboard (a Gigabyte X399 Aorus Xtreme board). I had (have) PCIe ACS override in the VM Manager set to "Both", as I was previously passing through one of the motherboard's USB controllers and a GPU to various VMs. For one reason or another I have stopped using passthrough, as I was having some performance issues, so I turned off PCIe ACS override and rebooted the server. When the server restarts, only one of the NVMe drives appears in the cache. The other two drives are missing; they don't show up in Unassigned Devices either. This seems to indicate that some of the NVMe drives need the PCIe ACS override. Is this how it should be configured? I can upgrade to 6.9.1, and all three NVMe drives show up; however, if I try to add them to the cache it wants to format them. I downgraded to 6.8.3, turned PCIe ACS override back on, and my NVMe cache is working fine. I note the NVMe cache is using BTRFS. I don't know how to proceed with the upgrade to 6.9.1.
JorgeB Posted April 6, 2021
Please post the diagnostics: Tools -> Diagnostics
darrenyorston (author) Posted April 6, 2021
52 minutes ago, JorgeB said: "Please post the diagnostics: Tools -> Diagnostics"
This is running 6.9.1 with PCIe ACS override set to "Both". Two of the cache NVMe drives disappear if I disable the PCIe ACS override. According to System Devices, all three NVMe drives have the same device ID ([c0a9:2263]) and sit in the same IOMMU group; nothing else is in that group. Setting PCIe ACS override to "Both" splits them into their own IOMMU groups.
tower-diagnostics-20210406-1746.zip
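For anyone following along, the grouping described above can be inspected from a terminal by walking /sys/kernel/iommu_groups. Below is a self-contained sketch of that walk; the sample tree and device addresses are made up for illustration so the snippet runs anywhere, and on a real box you would glob /sys/kernel/iommu_groups directly.

```shell
# Sketch of the usual sysfs walk for listing IOMMU groups.
# Assumption: the temp tree below stands in for /sys/kernel/iommu_groups.
root=$(mktemp -d)
mkdir -p "$root/iommu_groups/13/devices/0000:41:00.0" \
         "$root/iommu_groups/13/devices/0000:42:00.0" \
         "$root/iommu_groups/14/devices/0000:43:00.0"
listing=$(
  for d in "$root"/iommu_groups/*/devices/*; do
    rel=${d#"$root/iommu_groups/"}        # e.g. 13/devices/0000:41:00.0
    # group number before the first slash, device address after the last
    printf 'IOMMU group %s: %s\n' "${rel%%/*}" "${d##*/}"
  done
)
echo "$listing"
rm -rf "$root"
```

Devices that share a group (like the first two above) cannot be split between the host and a VM, which is exactly what the ACS override works around.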
JorgeB Posted April 6, 2021
Everything looks normal like that; if you want, post new diags with PCIe ACS override disabled.
darrenyorston (author) Posted April 6, 2021
1 hour ago, JorgeB said: "Everything looks normal like that; if you want, post new diags with PCIe ACS override disabled."
New diagnostics with PCIe ACS override disabled. Two of the NVMe drives disappeared.
tower-diagnostics-20210406-2048.zip
JorgeB Posted April 6, 2021 Share Posted April 6, 2021 41:00.0 Non-Volatile memory controller [0108]: Micron/Crucial Technology P1 NVMe PCIe SSD [c0a9:2263] (rev 03) Subsystem: Micron/Crucial Technology P1 NVMe PCIe SSD [c0a9:2263] Kernel driver in use: vfio-pci Kernel modules: nvme 42:00.0 Non-Volatile memory controller [0108]: Micron/Crucial Technology P1 NVMe PCIe SSD [c0a9:2263] (rev 03) Subsystem: Micron/Crucial Technology P1 NVMe PCIe SSD [c0a9:2263] Kernel driver in use: vfio-pci Kernel modules: nvme These two disappeared because they are bound to vfio-pci, unbind them or delete config/vfio-pci.cfg 1 Quote Link to comment
darrenyorston (author) Posted April 6, 2021
I deleted it and now all is working fine. I don't know how it occurred, as I added the two additional NVMe drives months after I had set PCIe ACS override to "Both"; I didn't even have them at the time. Thank you for your help.