NDDan

  1. Bingo! Drives are back and the array started on 6.11.1. I compared the /lib version of 60-persistent-storage, and the ONLY notable change in the new kernel version is a set of substantial modifications to the NVMe handling section, so this makes sense. I will take the new /lib version, copy it out, make my edits, and call my modified version back in the go file (a sketch of that merge is after this list). Thank you very much for getting me on the right track of thought here, and for reminding me that the Unraid default go file contains only a single notable entry and that everything else was 'mine'! I appreciate your expert assistance!
  2. Thank you Jorge, I've posted that output below, and it got me thinking about a change I made quite a while ago to modify 60-persistent-storage.rules. On 6.9.2 I copied /lib/udev/rules.d/60-persistent-storage.rules to /boot/config/rules.d and made some edits (though not to any section that should affect an NVMe -- this was for an optical drive that would hang under certain media conditions and force me to hard power down the Unraid server uncleanly). I then modified my go file to copy it over to /etc/udev/rules.d, chmod it to 644, and run udevadm control --reload-rules followed by udevadm trigger --attr-match=subsystem=block.

     I'm wondering now whether this kernel's build of /lib/udev/rules.d/60-persistent-storage.rules added new device handling that this kernel needs, and my old copy of the rules file is prevailing. If I wanted to revert to this build's standard go file, I'm honestly not sure what it looks like; I'm also copying my current go file below in case you believe that's to blame here. I'm thinking I need to copy this build's /lib/udev/rules.d/60-persistent-storage.rules to /boot/config/rules.d and make my edits there, so it includes any new device handling, but as a starting point I'm fine to simply change the go file to remove that copy and reload-rules if you think that's to blame. As I said, I'm just not sure (because I was careless and didn't comment my go file) what the 'base' Unraid go file should contain. I've not made any changes so far, but wanted to give you that background to see what your thoughts were.

     6.9.2 output from udevadm (the udevadm query behind these dumps is sketched after this list):

     ```
     DEVLINKS=/dev/disk/by-id/nvme-KBG40ZNS256G_NVMe_TOSHIBA_256GB_99NPCDN9PQEN
     DEVNAME=/dev/nvme0n1
     DEVPATH=/devices/pci0000:00/0000:00:06.0/0000:01:00.0/nvme/nvme0/nvme0n1
     DEVTYPE=disk
     ID_MODEL=KBG40ZNS256G_NVMe_TOSHIBA_256GB
     ID_MODEL_ENC=KBG40ZNS256G\x20NVMe\x20TOSHIBA\x20256GB\x20\x20\x20\x20\x20\x20\x20\x20\x20
     ID_PART_TABLE_TYPE=dos
     ID_REVISION=10410104
     ID_SERIAL=KBG40ZNS256G_NVMe_TOSHIBA_256GB_99NPCDN9PQEN
     ID_SERIAL_SHORT=99NPCDN9PQEN
     ID_TYPE=nvme
     MAJOR=259
     MINOR=0
     SUBSYSTEM=block
     USEC_INITIALIZED=14797517
     ```

     6.11.1 output from udevadm:

     ```
     DEVNAME=/dev/nvme0n1
     DEVPATH=/devices/pci0000:00/0000:00:06.0/0000:01:00.0/nvme/nvme0/nvme0n1
     DEVTYPE=disk
     DISKSEQ=20
     ID_PART_TABLE_TYPE=dos
     MAJOR=259
     MINOR=0
     SUBSYSTEM=block
     USEC_INITIALIZED=26250968
     ```

     Go file (imported unchanged into 6.11.1 during the upgrade):

     ```
     #!/bin/bash
     # Copy and apply udev rules with additional handling for misbehaving optical drive
     cp /boot/config/rules.d/60-persistent-storage.rules /etc/udev/rules.d/
     chmod 644 /etc/udev/rules.d/60-persistent-storage.rules
     udevadm control --reload-rules
     udevadm trigger --attr-match=subsystem=block

     # Start the Management Utility
     /usr/local/sbin/emhttp &
     ```
  3. Been putting off moving above 6.9.2 but got tired of the nagging red triangle. Followed the process, backed up the flash, upgraded, and the array didn't start because two of the four cache drives weren't found. Reverted back to 6.9.2 and the drives were found and the array automatically restarted. Cache is a single pool with two SSDs and two NVMe's (on the mobo), four drives total. I've read this could require a quirk for these two NVMe's (a sketch for pulling their PCI IDs is after this list). These were two hardware pulls from Dell laptops, and they're Toshiba 256GB NVMe's (identical models). In principle I'm willing to replace them with new Samsung NVMe's, but there isn't a real 'need' in my configuration to do so other than to overcome this problem. That said, I don't want to have the same issue with two replacements, and I don't know enough to tell whether this is mobo related or the NVMe's themselves. For good measure I did patch the mobo FW to current, which just listed 'compatibility' fixes. Upgraded to 6.11.1 again and no dice. Currently reverted back to 6.9.2 until I can get a path forward. Attached diags for the failed 6.11.1 upgrade. hgunraid01-diagnostics-20221022-1249.zip
  4. Bump...anything? Bottom line, it seems like USB-attached optical drives might be able to make the system unstable...the whole point of using dockers for me was isolation and underlying stability. So far this is less stable than my Win10 and Win7 systems...
  5. Thank you for the suggestion. It does sound exceedingly similar. I'm not savvy enough with unRAID to translate this into a potential unRAID solution, however. I've posted it on the unRAID Docker Engine forum, but have not gotten any traction yet. If someone picks it up, I'll reference any solution here for others.
  6. I'm new to unRAID and am having an issue with a Buffalo BU40N USB drive passed through to a MakeMKV docker, which in some circumstances appears to make the entire system unstable. (I've tested the hardware on another platform; it's fine.) Most recently, when loading a DVD, the drive seeks indefinitely without mounting the disk (within MakeMKV at least). I've also noted this behavior with a BluRay it had difficulty reading. Under these read-error conditions the docker becomes unstable, and then in turn the unRAID system becomes unstable. The result is that I have to bring the server down hard. The involved docker is Djoss' MakeMKV.

     FWIW, the last item in the docker's logfile is "[s6-finish] syncing disks"...it hangs there when it becomes unresponsive. I can't kill the docker through the GUI, and bringing it down via CLI with docker stop/kill doesn't work; the container is finally stopped via SIGKILL. That said, the array won't shut down via the poweroff script at that point despite no locks on mounted volumes. Notably, when this condition occurs at least one CPU core is pegged with nothing else running (a quick check for stuck processes is sketched after this list). It seems that this USB device creates an unstable system state.

     DJoss graciously pointed me over to a support thread on the MakeMKV forum that references a very similar situation with Ubuntu: https://forum.makemkv.com/forum/viewtopic.php?f=3&t=25357&p=109149&hilit=hang#p109149 Unfortunately I'm not savvy enough to translate this to unRAID, or indeed to even know whether there's reason to believe there's an overlap. Any insights?
  7. Starting off: I'm new to unRAID and to dockers in general. I'm not having an issue with any other docker -- this is a strange situation with the MakeMKV one.

     Firstly, I have a Buffalo USB drive and I've exposed both of its Linux devices to the container (see the sketch of the extra parameters after this list). It can read *some* disks, but when it has a problem and just reads forever -- either while loading the disk or during ripping -- the container hangs and can't be shut down. It's worse than that: even killing the associated processes still doesn't allow the array to be shut down successfully, and I have to do an unclean hard shutdown. I've pulled the drive over to the same release of MakeMKV on Windows 10, and it does not become unresponsive under those conditions; in other words, I was trying to eliminate a specific disk or drive issue.

     In the container log it always hangs on 'syncing disks' -- even if it hasn't gotten past actually identifying the disk, when I try to stop the container it hangs there. I'm not running the container in privileged mode. I am exposing both drive devices through extra parameters. This is happening with the latest container release and with the one before it.

     Bottom line: everything's golden if the disk reads perfectly, but when it doesn't, I have to hard down the server because I can't get the container to shut down. My instinct tells me there is something here with permissions and possibly the fact this is USB hardware. What's not clear is why the container is stuck on syncing disks and won't allow the array to shut down or the container to stop gracefully. Thoughts?
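
For the rules-file merge described in post 1 above, a minimal sketch of the approach, assuming stock 6.11.1 paths; the `.new` filename is only illustrative:

```
# Pull the 6.11.1 stock rules file next to the customized copy on the flash drive
cp /lib/udev/rules.d/60-persistent-storage.rules /boot/config/rules.d/60-persistent-storage.rules.new

# See exactly what changed versus the customized 6.9.2-era copy
diff -u /boot/config/rules.d/60-persistent-storage.rules /boot/config/rules.d/60-persistent-storage.rules.new

# Re-apply the optical-drive edits to the .new file, then promote it so the go file
# copies a rules file that also contains the newer NVMe handling
mv /boot/config/rules.d/60-persistent-storage.rules.new /boot/config/rules.d/60-persistent-storage.rules
```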
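
The KEY=VALUE dumps in post 2 look like udevadm's property query output; presumably something along these lines was run on each release against the NVMe namespace shown:

```
# Dump udev properties for the first NVMe namespace
udevadm info --query=property --name=/dev/nvme0n1
```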
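
If the missing cache drives in post 3 really do come down to an NVMe quirk, the kernel generally matches quirks by the controller's PCI vendor:device ID (and sometimes the model string), so a first step would be recording those IDs for the two Toshiba controllers. A rough sketch (the nvme-cli step assumes the tool is present on the system):

```
# List NVMe (Non-Volatile memory) controllers with their [vendor:device] IDs
lspci -nn | grep -i 'non-volatile'

# Optionally confirm model and firmware per namespace (requires nvme-cli)
nvme list
```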
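
For the unkillable container in post 6: a process stuck in uninterruptible sleep (state 'D', typically blocked in the USB/SCSI layer) ignores SIGKILL and will block unmounts, which would match the symptoms. A general diagnostic sketch from the Unraid shell, not specific to this container (`<container-name-or-id>` is a placeholder):

```
# Show any processes in uninterruptible sleep and what they are waiting on
ps -eo pid,stat,wchan:32,cmd | awk 'NR==1 || $2 ~ /D/'

# Cross-check which of those processes belong to the MakeMKV container
docker top <container-name-or-id>
```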
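
For post 7, "exposing both drive devices through extra parameters" presumably means passing the optical drive's block and generic SCSI nodes into the container. A sketch of what those extra parameters typically look like in the Unraid docker template; the exact /dev/sr* and /dev/sg* numbers vary per system:

```
--device /dev/sr0 --device /dev/sg1
```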