NDDan Posted October 22, 2022

I'd been putting off moving above 6.9.2 but got tired of the nagging red triangle. I followed the process, backed up the flash, and upgraded, but the array didn't start because two of the four cache drives weren't found. I reverted to 6.9.2, the drives were found, and the array restarted automatically.

Cache is a single pool of four drives: two SSDs and two NVMe's (on the mobo). I've read this could require a quirk for these two NVMe's. They were hardware pulls from Dell laptops: identical Toshiba 256GB NVMe models. In principle I'm willing to replace them with new Samsung NVMe's, but there's no real need in my configuration to do so other than to overcome this problem. That said, I don't want to hit the same issue with two replacements, and I don't know enough to tell whether this is mobo-related or the NVMe's themselves. For good measure I patched the mobo firmware to current, which only listed 'compatibility' fixes.

Upgraded to 6.11.1 again and no dice. Currently reverted to 6.9.2 until I can find a path forward. Attached diags for the failed 6.11.1 upgrade: hgunraid01-diagnostics-20221022-1249.zip
JorgeB Posted October 23, 2022

14 hours ago, NDDan said:
I've read this could require a quirk for these two NVMe's.

Unfortunately that's not the problem, since that would be an easy fix: a quirk is needed when you only see one of two identical devices, and here you aren't seeing either. This looks like the same issue another user had with a couple of Crucial devices that stopped reporting the brand/model. Please post the output, from both v6.9.2 and v6.11.1, of:

udevadm info -q property -n /dev/nvme0n1

Output from one of the devices is enough, since they will be the same except for the serial.
NDDan Posted October 23, 2022 (Author)

6 hours ago, JorgeB said:
...please post output from both v6.9.2 and v6.11.1 of: udevadm info -q property -n /dev/nvme0n1

Thank you Jorge. I've posted that output below, and it got me thinking about a change I made quite a while ago to 60-persistent-storage.rules. On 6.9.2 I copied /lib/udev/rules.d/60-persistent-storage.rules to /boot/config/rules.d and made some edits (though not to any section that should affect an NVMe; this was for an optical drive that would hang under certain media conditions and force me to hard power down the Unraid server uncleanly). I then modified my go file to copy it to /etc/udev/rules.d, chmod it 644, and call udevadm control --reload-rules followed by udevadm trigger --attr-match=subsystem=block.

I'm wondering now whether this kernel's build of /lib/udev/rules.d/60-persistent-storage.rules added new device handling that this kernel needs, and my old copy of the rules file is prevailing. If I wanted to revert to this build's standard go file, I'm honestly not sure what it looks like, so I'm also copying my current go file below in case you believe that's to blame. My plan would be to copy this build's /lib/udev/rules.d/60-persistent-storage.rules to /boot/config/rules.d and make my edits there, so my copy includes any new device handling; but as a starting point I'm fine to simply remove the copy and reload-rules from the go file if you think that's the cause. Because I (stupidly) didn't comment my go file, I'm not sure what the base Unraid go file should contain.
I've not made any changes so far, but wanted to give you that background to see what your thoughts were.

6.9.2 output from udevadm:

DEVLINKS=/dev/disk/by-id/nvme-KBG40ZNS256G_NVMe_TOSHIBA_256GB_99NPCDN9PQEN
DEVNAME=/dev/nvme0n1
DEVPATH=/devices/pci0000:00/0000:00:06.0/0000:01:00.0/nvme/nvme0/nvme0n1
DEVTYPE=disk
ID_MODEL=KBG40ZNS256G_NVMe_TOSHIBA_256GB
ID_MODEL_ENC=KBG40ZNS256G\x20NVMe\x20TOSHIBA\x20256GB\x20\x20\x20\x20\x20\x20\x20\x20\x20
ID_PART_TABLE_TYPE=dos
ID_REVISION=10410104
ID_SERIAL=KBG40ZNS256G_NVMe_TOSHIBA_256GB_99NPCDN9PQEN
ID_SERIAL_SHORT=99NPCDN9PQEN
ID_TYPE=nvme
MAJOR=259
MINOR=0
SUBSYSTEM=block
USEC_INITIALIZED=14797517

6.11.1 output from udevadm:

DEVNAME=/dev/nvme0n1
DEVPATH=/devices/pci0000:00/0000:00:06.0/0000:01:00.0/nvme/nvme0/nvme0n1
DEVTYPE=disk
DISKSEQ=20
ID_PART_TABLE_TYPE=dos
MAJOR=259
MINOR=0
SUBSYSTEM=block
USEC_INITIALIZED=26250968

Go file (imported unchanged into 6.11.1 during the upgrade):

#!/bin/bash
# Copy and apply udev rules with additional handling for misbehaving optical drive
cp /boot/config/rules.d/60-persistent-storage.rules /etc/udev/rules.d/
chmod 644 /etc/udev/rules.d/60-persistent-storage.rules
udevadm control --reload-rules
udevadm trigger --attr-match=subsystem=block

# Start the Management Utility
/usr/local/sbin/emhttp &
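[Editor's note] Comparing the two dumps, the 6.11.1 output is missing DEVLINKS, ID_MODEL, and ID_SERIAL entirely, which is why the devices can't be matched to the array. A minimal shell sketch of that comparison (`check_ids` is a hypothetical helper name, not an Unraid or udev tool) flags which identity keys a `udevadm info -q property` dump lacks:

```shell
#!/bin/bash
# Minimal sketch: report which identity keys are missing from a
# `udevadm info -q property -n /dev/nvme0n1` dump.
# check_ids is a hypothetical helper, not part of Unraid or udev.
check_ids() {
  local dump="$1" rc=0
  # Without these keys udev never creates the /dev/disk/by-id
  # symlink, so the device cannot be matched to the array config.
  for key in DEVLINKS ID_MODEL ID_SERIAL; do
    if ! grep -q "^${key}=" <<<"$dump"; then
      echo "missing: $key"
      rc=1
    fi
  done
  return $rc
}
```

Fed the 6.9.2 dump above, it reports nothing; fed the 6.11.1 dump, it reports all three keys missing, matching the drives disappearing from the array.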
JorgeB Posted October 23, 2022 (Solution)

15 minutes ago, NDDan said:
cp /boot/config/rules.d/60-persistent-storage.rules /etc/udev/rules.d/
chmod 644 /etc/udev/rules.d/60-persistent-storage.rules
udevadm control --reload-rules
udevadm trigger --attr-match=subsystem=block

Try commenting out these lines: add a # to the beginning of each one.
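[Editor's note] Applied to the go file posted above, that suggestion looks like this (a config sketch; per the follow-up post, the emhttp line is the stock Unraid go file's only notable entry):

```shell
#!/bin/bash
# Custom udev-rules handling disabled per JorgeB's suggestion:
## Copy and apply udev rules with additional handling for misbehaving optical drive
#cp /boot/config/rules.d/60-persistent-storage.rules /etc/udev/rules.d/
#chmod 644 /etc/udev/rules.d/60-persistent-storage.rules
#udevadm control --reload-rules
#udevadm trigger --attr-match=subsystem=block

# Start the Management Utility (the stock go file's only entry)
/usr/local/sbin/emhttp &
```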
NDDan Posted October 23, 2022 (Author)

12 minutes ago, JorgeB said:
Try commenting out these lines: add a # to the beginning of each one.

Bingo! Drives are back, and the array started on 6.11.1. I compared the /lib version of 60-persistent-storage.rules, and the only notable change in the new kernel's version is a substantial rewrite of the NVMe handling section, so this makes sense. I will copy out the new /lib version, make my edits to that, and call my modified version from the go file as before. Thank you very much for getting me on the right track, and for reminding me that the default Unraid go file contains only a single notable entry and that everything else was 'mine'. I appreciate your expert assistance!
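[Editor's note] The rebase step described in this post can be sketched as a small helper (`rebase_rules` is a hypothetical name; on Unraid the stock file is /lib/udev/rules.d/60-persistent-storage.rules and the custom copy lives in /boot/config/rules.d):

```shell
#!/bin/bash
# Sketch of rebasing a customized udev rules file onto the new
# release's stock version: take the stock file as the new base,
# keeping a backup of the old custom copy so local edits (here,
# the optical-drive workaround) can be re-applied by hand.
# rebase_rules is a hypothetical helper, not an Unraid tool.
rebase_rules() {
  local stock="$1" custom="$2"
  # Preserve the old customized copy for reference
  if [ -f "$custom" ]; then
    cp "$custom" "$custom.bak"
  fi
  # Start over from the stock file, which includes the new NVMe handling
  cp "$stock" "$custom"
}

# On Unraid this would be something like:
# rebase_rules /lib/udev/rules.d/60-persistent-storage.rules \
#              /boot/config/rules.d/60-persistent-storage.rules
# ...then re-apply the optical-drive edits to the new copy by hand.
```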
JorgeB Posted October 24, 2022

Great news! Good that you remembered changing the udev rules; I would never have thought of checking the go file for that.