Traumfaenger Posted September 26, 2022 (edited September 26, 2022)

Greetings, after updating Unraid OS from 6.9.2 to 6.11.0 my NVMe drives were gone. They are still listed in System Devices, though. When I restore the OS back to 6.9.2 the drives are available again. I don't know what's going on here and hope to get some advice. Thanks in advance!

Traumfaenger

Attachment: tower-diagnostics-20220926-1059.zip
Traumfaenger Posted September 26, 2022 (Author)

It's definitely a configuration issue... I created a new Unraid stick, booted from it, and both of my NVMe drives are available.
JorgeB Posted September 26, 2022

From v6.11.0, please post the output of:

udevadm info -q property -n /dev/nvme0n1
udevadm info -q property -n /dev/nvme1n1
Traumfaenger Posted September 26, 2022 (Author)

root@Tower:~# udevadm info -q property -n /dev/nvme0n1
DEVNAME=/dev/nvme0n1
DEVPATH=/devices/pci0000:00/0000:00:01.1/0000:01:00.0/nvme/nvme0/nvme0n1
DEVTYPE=disk
DISKSEQ=28
ID_PART_TABLE_TYPE=dos
ID_PART_TABLE_UUID=67d59d2b
MAJOR=259
MINOR=0
SUBSYSTEM=block
USEC_INITIALIZED=98717936

root@Tower:~# udevadm info -q property -n /dev/nvme1n1
DEVNAME=/dev/nvme1n1
DEVPATH=/devices/pci0000:00/0000:00:01.2/0000:02:00.0/0000:03:01.0/0000:04:00.0/nvme/nvme1/nvme1n1
DEVTYPE=disk
DISKSEQ=29
ID_PART_TABLE_TYPE=dos
ID_PART_TABLE_UUID=67d59d29
MAJOR=259
MINOR=2
SUBSYSTEM=block
USEC_INITIALIZED=98721161
JorgeB Posted September 26, 2022

That's missing a lot of info, including device brand/model/serial. Typical output looks like this:

DEVLINKS=/dev/disk/by-id/nvme-Samsung_SSD_970_EVO_1TB_S5H9NS0NB15476D /dev/disk/by-id/nvme-eui.0025385b01410415
DEVNAME=/dev/nvme3n1
DEVPATH=/devices/pci0000:00/0000:00:03.3/0000:07:00.0/nvme/nvme3/nvme3n1
DEVTYPE=disk
DISKSEQ=20
ID_MODEL=Samsung SSD 970 EVO 1TB
ID_PART_TABLE_TYPE=dos
ID_SERIAL=Samsung SSD 970 EVO 1TB_S5H9NS0NB15476D
ID_SERIAL_SHORT=S5H9NS0NB15476D
ID_WWN=eui.0025385b01410415
MAJOR=259
MINOR=4
SUBSYSTEM=block
USEC_INITIALIZED=32096967

Which brand/model are the devices?
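The symptom JorgeB describes can be checked mechanically: a healthy property dump contains an ID_SERIAL line, the broken one above does not. A minimal sketch, assuming you feed it a property dump on stdin (on a live system that would be the output of `udevadm info -q property -n /dev/nvmeXn1`); the `check_props` helper name is made up for illustration:

```shell
# check_props: scan a udevadm property dump on stdin and report whether
# the ID_SERIAL field (which the Unraid GUI needs) is present.
check_props() {
    if grep -q '^ID_SERIAL=' ; then
        echo "identity OK"
    else
        echo "MISSING ID_SERIAL"
    fi
}

# Sample input: the truncated v6.11.0 output posted above.
check_props <<'EOF'
DEVNAME=/dev/nvme0n1
DEVTYPE=disk
ID_PART_TABLE_TYPE=dos
MAJOR=259
MINOR=0
SUBSYSTEM=block
EOF
```

This prints `MISSING ID_SERIAL` for the truncated dump, and `identity OK` for a dump like JorgeB's Samsung example.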
Traumfaenger Posted September 26, 2022 (Author)

Yeah, true. Under 6.9.2 it looks more like what you posted. The NVMe drives are from Crucial Technology.
JorgeB Posted September 26, 2022

Looks more like a device problem, but you should create a bug report and post the output from v6.9 and v6.11.
Traumfaenger Posted September 26, 2022 (Author)

27 minutes ago, JorgeB said: "Looks more like a device problem, but you should create a bug report, and post the output from v6.9 and v6.11."

Thanks, I have done that just now. However, the fact that it works perfectly under the other version speaks against a defect.
JorgeB Posted September 26, 2022

I'm not saying defect, I meant device related. It just looks like the devices are not supported by the newer kernel; it might be a device or a kernel issue.
sebz29a Posted October 7, 2022

Hi, I think I have a similar problem. In my Unraid server I have two pairs of Crucial NVMe drives:

- First pool: 2 x Crucial P2 NVMe 250 GB (exactly the same model and firmware version), both connected to M.2 slots on the motherboard
- Second pool: 2 x Crucial P2 NVMe 2 TB (exactly the same model and firmware version), both connected via PCI Express adapters

Motherboard: Supermicro X11SCA-F

When I update to 6.11, one of the two Crucial NVMe drives of the first pool is not recognized and is not present on the System Devices screen (with IOMMU groups). I have to roll back to 6.10.3 for this drive to be recognized (so we can assume the NVMe drive is not defective). In my case it can't be a device unsupported by the newer kernel, because the two drives in the first pool are exactly the same model, size, and firmware version, yet one is recognized and the other is not. Under 6.11 the first is recognized but not the second.

The following commands were executed under Unraid 6.10.3, after the rollback:

root@pyramiden:~# udevadm info -q property -n /dev/nvme2n1
DEVLINKS=/dev/disk/by-id/nvme-CT250P2SSD8_2022E2A7F6CD /dev/disk/by-id/nvme-eui.6479a7fff0000007
DEVNAME=/dev/nvme2n1
DEVPATH=/devices/pci0000:00/0000:00:1b.4/0000:04:00.0/nvme/nvme2/nvme2n1
DEVTYPE=disk
DISKSEQ=18
ID_MODEL=CT250P2SSD8
ID_PART_TABLE_TYPE=dos
ID_SERIAL=CT250P2SSD8_2022E2A7F6CD
ID_SERIAL_SHORT=2022E2A7F6CD
ID_WWN=eui.6479a7fff0000007
MAJOR=259
MINOR=0
SUBSYSTEM=block
USEC_INITIALIZED=24785941

root@pyramiden:~# udevadm info -q property -n /dev/nvme3n1
DEVLINKS=/dev/disk/by-id/nvme-CT250P2SSD8_2022E2A7F6C1 /dev/disk/by-id/nvme-eui.6479a7fff0000007
DEVNAME=/dev/nvme3n1
DEVPATH=/devices/pci0000:00/0000:00:1d.0/0000:0c:00.0/nvme/nvme3/nvme3n1
DEVTYPE=disk
DISKSEQ=19
ID_MODEL=CT250P2SSD8
ID_PART_TABLE_TYPE=dos
ID_SERIAL=CT250P2SSD8_2022E2A7F6C1
ID_SERIAL_SHORT=2022E2A7F6C1
ID_WWN=eui.6479a7fff0000007
MAJOR=259
MINOR=2
SUBSYSTEM=block
USEC_INITIALIZED=24756285

Thank you for your help.
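Worth noting: the two dumps in this post report the same ID_WWN (eui.6479a7fff0000007) for two different drives, which is exactly the kind of globally duplicated identifier a newer kernel can reject. A quick sketch to flag such duplicates; on a live system the name/WWN pairs would come from looping `udevadm` over each namespace, here they are simply the pairs copied from the output above:

```shell
# Feed "device WWN" pairs to awk; print any WWN seen more than once.
printf '%s\n' \
  "nvme2n1 eui.6479a7fff0000007" \
  "nvme3n1 eui.6479a7fff0000007" |
awk 'seen[$2]++ { print "duplicate WWN: " $2 }'
```

For the pairs above this prints `duplicate WWN: eui.6479a7fff0000007`; on a healthy system it prints nothing.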
JorgeB Posted October 7, 2022

41 minutes ago, sebz29a said: "hi I think I have a similar problem"

Please post the diagnostics after booting v6.11.1.
sebz29a Posted October 8, 2022

Hi, with Unraid 6.11.1 it's the same problem: one of the first-pool drives is not recognized. You can find my diagnostics file attached. Thank you for your help.

Attachment: unraidprd-diagnostics-20221008-0756.zip
sebz29a Posted October 8, 2022

In the 6.11.1 syslog file I can find the errors "globally duplicate IDs for nsid 1" and "Ignoring bogus Namespace Identifiers". In 6.10.3 these errors are not present and the device is recognized.
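If anyone else wants to check for the same symptom: on Unraid the live log is /var/log/syslog, so something like `grep -E 'globally duplicate IDs for nsid|Ignoring bogus Namespace Identifiers' /var/log/syslog` should surface it. The exact message wording can vary by kernel version, so treat the sample lines below as illustrative; this sketch just counts matches in a pasted log excerpt:

```shell
# Count lines matching either kernel message in a sample log excerpt.
grep -cE 'globally duplicate IDs for nsid|Ignoring bogus Namespace Identifiers' <<'EOF'
kernel: nvme nvme3: globally duplicate IDs for nsid 1
kernel: nvme nvme3: Ignoring bogus Namespace Identifiers
EOF
```

A non-zero count means the kernel is rejecting a device's identifiers, which matches the missing-drive symptom in this thread.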
JorgeB Posted October 8, 2022

1 hour ago, sebz29a said: "globally duplicate IDs for nsid 1"

Yes, it's the same issue. Some quirks were added for v6.11.1, but LT can only add the ones that are reported. I added yours to the list; it should be included in the next release.
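For anyone reporting a drive for such a quirk: the detail the kernel needs is the device's PCI vendor:device ID pair, which `lspci -nn` shows in brackets. The sample line below is an assumption for illustration (c0a9 is the Micron/Crucial vendor ID; the device ID shown may not match yours, so check your own `lspci -nn` output):

```shell
# On a live system:  lspci -nn | grep -i 'non-volatile'
# Here we extract the [vendor:device] pair from a sample lspci line.
echo "04:00.0 Non-Volatile memory controller [0108]: Micron/Crucial Technology Device [c0a9:540a]" |
grep -oE '\[[0-9a-f]{4}:[0-9a-f]{4}\]'
```

This prints `[c0a9:540a]` (the `[0108]` class code has no colon, so it is not matched); that bracketed pair is what identifies the device for a quirk entry.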
sebz29a Posted October 8, 2022

Thank you very much. I have downgraded to 6.10.3 while waiting for the next stable release with these quirks.
puffdadbod Posted October 18, 2022 (edited October 18, 2022)

Sorry to hijack the thread, but I was having the exact same issue as the OP, only on 6.11.1. Downgrading to 6.10.3 fixed it, but I'm running Alder Lake, so I thought it best to bring this to your attention in hopes of a patch/quick fix. Logs are attached, and the drive is a Patriot P400 1TB Internal SSD - NVMe PCIe M.2 Gen4 x 4 - P400P1TBM28H. Thanks!

Attachment: server-diagnostics-20221017-2045.zip
JorgeB Posted October 18, 2022

6 hours ago, puffdadbod said: "Logs are attached, and the drive is Patriot P400 1TB Internal SSD - NVMe PCIe M.2 Gen4 x 4 - P400P1TBM28H."

Added to the list; it should work in the next release.
JorgeB Posted November 5, 2022

On 10/8/2022 at 11:48 AM, sebz29a said: "Thank U very much. I have downgraded to 6.10.3 waiting for the next stable release with this quirks."

On 10/18/2022 at 2:20 AM, puffdadbod said: "Sorry to hijack the thread, but I was having the exact same issue as the OP,"

All NVMe devices for both of you should now be detected with v6.11.2.
sebz29a Posted November 9, 2022

Hi, thank you very much for the quirk! I just upgraded to 6.11.3 and I can confirm it's now working: all my NVMe drives are recognized 👍😎