Drives on ARECA raid not spinning down after upgrade from 6.8.3 -> 6.9.2



I have an ARECA RAID. I did a few things (mostly from these forums) to make it play nicer with Unraid, so the drives are addressed and appear normally. Now I'm worried my drives aren't spinning down, which generates heat and wastes power. Beyond the missing spin-down and temperature readings, I don't see any crashes or errors.

 

Prior to the upgrade the drives spun down.

 

In my /boot/config/go I have an Areca-specific tweak:

# Areca RAID config
if [[ ! -z `lspci | grep -i areca` ]]; then
  cp /boot/custom/lib/udev/rules.d/60-persistent-storage.rules /lib/udev/rules.d
  udevadm control --reload-rules
  udevadm trigger
  sleep 5
fi
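
For anyone copying this: a quick way I check after boot that the rule actually took effect (a rough sketch; /dev/sdb is just an example, substitute one of the Areca-exposed devices):

# confirm the custom rules file was copied into place
ls -l /lib/udev/rules.d/60-persistent-storage.rules

# confirm udev populated serial/bus properties for an Areca-exposed disk
udevadm info --query=property --name=/dev/sdb | grep -E 'ID_SERIAL|ID_BUS'

# the persistent by-id links should now point at the Areca devices
ls -l /dev/disk/by-id/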

 

 

The 60-persistent-storage.rules file:

# do not edit this file, it will be overwritten on update
# See
# https://answers.launchpad.net/ubuntu/+source/udev/+question/203863

# persistent storage links: /dev/disk/{by-id,by-uuid,by-label,by-path}
# scheme based on "Linux persistent device names", 2004, Hannes Reinecke <[email protected]>

# forward scsi device event to corresponding block device
ACTION=="change", SUBSYSTEM=="scsi", ENV{DEVTYPE}=="scsi_device", TEST=="block", ATTR{block/*/uevent}="change"

ACTION=="remove", GOTO="persistent_storage_end"
SUBSYSTEM!="block", GOTO="persistent_storage_end"

# skip rules for inappropriate block devices
KERNEL=="fd*|mtd*|nbd*|gnbd*|btibm*|dm-*|md*", GOTO="persistent_storage_end"

# ignore partitions that span the entire disk
TEST=="whole_disk", GOTO="persistent_storage_end"

# for partitions import parent information
ENV{DEVTYPE}=="partition", IMPORT{parent}="ID_*"

# virtio-blk
KERNEL=="vd*[!0-9]", ATTRS{serial}=="?*", ENV{ID_SERIAL}="$attr{serial}", SYMLINK+="disk/by-id/virtio-$env{ID_SERIAL}"
KERNEL=="vd*[0-9]", ATTRS{serial}=="?*", ENV{ID_SERIAL}="$attr{serial}", SYMLINK+="disk/by-id/virtio-$env{ID_SERIAL}-part%n"

# USB devices use their own serial number
KERNEL=="sd*[!0-9]|sr*", ENV{ID_SERIAL}!="?*", SUBSYSTEMS=="usb", IMPORT{program}="usb_id --export %p"
# ATA devices with their own "ata" kernel subsystem
KERNEL=="sd*[!0-9]|sr*", ENV{ID_SERIAL}!="?*", SUBSYSTEMS=="ata", IMPORT{program}="ata_id --export $tempnode"
# ATA devices using the "scsi" subsystem
KERNEL=="sd*[!0-9]|sr*", ENV{ID_SERIAL}!="?*", SUBSYSTEMS=="scsi", ATTRS{vendor}=="ATA", IMPORT{program}="ata_id --export $tempnode"
# scsi devices
KERNEL=="sd*[!0-9]|sr*", ENV{ID_SERIAL}!="?*", IMPORT{program}="scsi_id --export --whitelisted -d $tempnode", ENV{ID_BUS}="scsi"
KERNEL=="cciss*", ENV{DEVTYPE}=="disk", ENV{ID_SERIAL}!="?*", IMPORT{program}="scsi_id --export --whitelisted -d $tempnode", ENV{ID_BUS}="cciss"
KERNEL=="sd*|sr*|cciss*", ENV{DEVTYPE}=="disk", ENV{ID_SCSI_SERIAL}=="", ENV{ID_SERIAL}=="?*", SYMLINK+="disk/by-id/$env{ID_BUS}-$env{ID_SERIAL}"
KERNEL=="sd*|sr*|cciss*", ENV{DEVTYPE}=="disk", ENV{ID_SCSI_SERIAL}=="?*", SYMLINK+="disk/by-id/$env{ID_BUS}-$env{ID_MODEL}_$env{ID_SCSI_SERIAL}"
KERNEL=="sd*|cciss*", ENV{DEVTYPE}=="partition", ENV{ID_SCSI_SERIAL}=="", ENV{ID_SERIAL}=="?*", SYMLINK+="disk/by-id/$env{ID_BUS}-$env{ID_SERIAL}-part%n"
KERNEL=="sd*|cciss*", ENV{DEVTYPE}=="partition", ENV{ID_SCSI_SERIAL}=="?*", SYMLINK+="disk/by-id/$env{ID_BUS}-$env{ID_MODEL}_$env{ID_SCSI_SERIAL}-part%n"

# firewire
KERNEL=="sd*[!0-9]|sr*", ATTRS{ieee1394_id}=="?*", SYMLINK+="disk/by-id/ieee1394-$attr{ieee1394_id}"
KERNEL=="sd*[0-9]", ATTRS{ieee1394_id}=="?*", SYMLINK+="disk/by-id/ieee1394-$attr{ieee1394_id}-part%n"

# scsi compat links for ATA devices
KERNEL=="sd*[!0-9]", ENV{ID_BUS}=="ata", PROGRAM="scsi_id --whitelisted --replace-whitespace -p0x80 -d$tempnode", RESULT=="?*", ENV{ID_SCSI_COMPAT}="$result", SYMLINK+="disk/by-id/scsi-$env{ID_SCSI_COMPAT}"
KERNEL=="sd*[0-9]", ENV{ID_SCSI_COMPAT}=="?*", SYMLINK+="disk/by-id/scsi-$env{ID_SCSI_COMPAT}-part%n"

KERNEL=="mmcblk[0-9]", SUBSYSTEMS=="mmc", ATTRS{name}=="?*", ATTRS{serial}=="?*", ENV{ID_NAME}="$attr{name}", ENV{ID_SERIAL}="$attr{serial}", SYMLINK+="disk/by-id/mmc-$env{ID_NAME}_$env{ID_SERIAL}"
KERNEL=="mmcblk[0-9]p[0-9]", ENV{ID_NAME}=="?*", ENV{ID_SERIAL}=="?*", SYMLINK+="disk/by-id/mmc-$env{ID_NAME}_$env{ID_SERIAL}-part%n"
KERNEL=="mspblk[0-9]", SUBSYSTEMS=="memstick", ATTRS{name}=="?*", ATTRS{serial}=="?*", ENV{ID_NAME}="$attr{name}", ENV{ID_SERIAL}="$attr{serial}", SYMLINK+="disk/by-id/memstick-$env{ID_NAME}_$env{ID_SERIAL}"
KERNEL=="mspblk[0-9]p[0-9]", ENV{ID_NAME}=="?*", ENV{ID_SERIAL}=="?*", SYMLINK+="disk/by-id/memstick-$env{ID_NAME}_$env{ID_SERIAL}-part%n"

# by-path (parent device path)
ENV{DEVTYPE}=="disk", ENV{ID_PATH}=="", DEVPATH!="*/virtual/*", IMPORT{program}="path_id %p"
ENV{DEVTYPE}=="disk", ENV{ID_PATH}=="?*", SYMLINK+="disk/by-path/$env{ID_PATH}"
ENV{DEVTYPE}=="partition", ENV{ID_PATH}=="?*", SYMLINK+="disk/by-path/$env{ID_PATH}-part%n"

# skip unpartitioned removable media devices from drivers which do not send "change" events
ENV{DEVTYPE}=="disk", KERNEL!="sd*|sr*", ATTR{removable}=="1", GOTO="persistent_storage_end"

# probe filesystem metadata of optical drives which have a media inserted
KERNEL=="sr*", ENV{ID_CDROM_MEDIA_TRACK_COUNT_DATA}=="?*", ENV{ID_CDROM_MEDIA_SESSION_LAST_OFFSET}=="?*", IMPORT{program}="/sbin/blkid -o udev -p -u noraid -O $env{ID_CDROM_MEDIA_SESSION_LAST_OFFSET} $tempnode"
# single-session CDs do not have ID_CDROM_MEDIA_SESSION_LAST_OFFSET
KERNEL=="sr*", ENV{ID_CDROM_MEDIA_TRACK_COUNT_DATA}=="?*", ENV{ID_CDROM_MEDIA_SESSION_LAST_OFFSET}=="", IMPORT{program}="/sbin/blkid -o udev -p -u noraid $tempnode"

# probe filesystem metadata of disks
KERNEL!="sr*", IMPORT{program}="/sbin/blkid -o udev -p $tempnode"

# watch for future changes
KERNEL!="sr*", OPTIONS+="watch"

# by-label/by-uuid links (filesystem metadata)
ENV{ID_FS_USAGE}=="filesystem|other|crypto", ENV{ID_FS_UUID_ENC}=="?*", SYMLINK+="disk/by-uuid/$env{ID_FS_UUID_ENC}"
ENV{ID_FS_USAGE}=="filesystem|other", ENV{ID_FS_LABEL_ENC}=="?*", SYMLINK+="disk/by-label/$env{ID_FS_LABEL_ENC}"

# by-id (World Wide Name)
ENV{DEVTYPE}=="disk", ENV{ID_WWN_WITH_EXTENSION}=="?*", SYMLINK+="disk/by-id/wwn-$env{ID_WWN_WITH_EXTENSION}"
ENV{DEVTYPE}=="partition", ENV{ID_WWN_WITH_EXTENSION}=="?*", SYMLINK+="disk/by-id/wwn-$env{ID_WWN_WITH_EXTENSION}-part%n"

LABEL="persistent_storage_end"

 

Attaching a screenshot and the diagnostics.

prob1.PNG

beyonder-nas-diagnostics-20210515-1319.zip

56 minutes ago, spamalam said:

Should I submit this as a bug for 6.9.2 or is there another avenue of investigation?  Spin-down worked in 6.8.3.

Have you tried 6.9.1?

 

There are two bug reports already which I think cover your issue.

 


 

@SimonF For what it's worth, my bug report is about SMART settings in smart-one.cfg being erased by a badly written config handler. The potential relation here is that my RAID controllers (and most RAID controllers) don't pass most SCSI Generic commands through to the drives (that's where the "invalid opcode" noise comes from), so they block the spin-down requests.
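
For what it's worth, a rough way to see this from the console (the device nodes and the Areca slot number below are examples, adjust to your setup; smartctl talks to drives behind an Areca controller via -d areca,N):

# SMART query routed through the Areca pass-through (drive in slot 1, first SCSI generic node)
smartctl -a -d areca,1 /dev/sg2

# a plain ATA power-state query; behind many RAID controllers this gets rejected
# (the "invalid opcode" messages) instead of reaching the disk
hdparm -C /dev/sdb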

 

 

Looking at your logs, it took me about two seconds to notice that your diagnostics are absolutely massive. The syslog is bloated with Docker messages about renaming interfaces, which it does on container start/stop, and your docker.txt log shows the same thing: you have Docker containers starting and stopping constantly.

time="2021-05-14T15:01:00.939164349+02:00" level=info msg="starting signal loop" namespace=moby path=/var/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/9daba11afa5b1539423fbfcaa502e85ca37f44511d3757ceb9fe8056091dfd6e pid=9625
time="2021-05-14T15:01:02.395197081+02:00" level=info msg="starting signal loop" namespace=moby path=/var/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/8815f62b37eff81428302a45ff320d6c76751e8ca3308c2263daf9115b40f74b pid=10397
time="2021-05-14T15:01:03.293534909+02:00" level=info msg="starting signal loop" namespace=moby path=/var/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/7e3fb4527bdc2a445aef61e6b3f9af809ab9cb36d611eefda72fdde9ef079bda pid=11011
time="2021-05-14T15:01:04.999049158+02:00" level=info msg="starting signal loop" namespace=moby path=/var/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/ff736b719aff2ebb8ce67c46d004d045a18c587fbc87d633be5a5d2c8996b119 pid=12017
time="2021-05-14T15:01:07.716413644+02:00" level=info msg="starting signal loop" namespace=moby path=/var/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/64c3621669d986744d9125dc9a5b55726019b51292b61e48df796ce3d68be50d pid=13557
time="2021-05-14T15:01:08.773384420+02:00" level=info msg="starting signal loop" namespace=moby path=/var/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/efa3d7d9abf05c6ecbbf0aed96416403c1379fd4f36591b98b0682ef774c8f7d pid=14225
time="2021-05-14T15:01:12.045508457+02:00" level=info msg="starting signal loop" namespace=moby path=/var/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/4c813ad16fe529af568973fdc650a9bd3de57bd6e55d6367a93ddd4084665c4f pid=15538
time="2021-05-14T15:01:15.746417852+02:00" level=info msg="starting signal loop" namespace=moby path=/var/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/1e52f4e57041c2f7893d8a6e1abf7e37e27fcf818fab52effef960f5db12aa34 pid=17061
time="2021-05-14T15:01:16.448800458+02:00" level=info msg="starting signal loop" namespace=moby path=/var/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/839c48777be381e1f8f1b714c9d14abf5f40311930fb276cecda12a2cdb4fea8 pid=18076
time="2021-05-14T15:01:17.235544165+02:00" level=info msg="starting signal loop" namespace=moby path=/var/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/fa6e0d3a14af579e27803b51d02dabddca368af83821a8d6c945e8787b781b33 pid=18530
time="2021-05-14T15:01:18.245830784+02:00" level=info msg="starting signal loop" namespace=moby path=/var/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/7dac976c24b8e028fe6dd0b01eddc81d0f0cd81e9f6cb8dab8d5ecec2f8300f8 pid=19635
time="2021-05-14T15:01:44.028428540+02:00" level=info msg="starting signal loop" namespace=moby path=/var/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/55a2bd752bd9b5168e218a67fc6b6e2235701169c4a90731bd35bc5e1cf42005 pid=25056
time="2021-05-14T15:01:44.949010842+02:00" level=info msg="starting signal loop" namespace=moby path=/var/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/fe519f83d40335b9cef8c78ff77d6280695288cf150e408c7c1ddac80672c77c pid=26477
time="2021-05-14T15:01:48.531324200+02:00" level=info msg="starting signal loop" namespace=moby path=/var/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/ef6a7ac83c489c6571bfc5840666293ee6e989f0908abfece313ac818d78b6b9 pid=28876

 

That's one minute and 14 container restarts. The array could be failing to spin down because a container is hammering it with constant IO while trying to start up. Look at your Docker containers page, under the Uptime column, and see which one never grows old. Check its logs, fix the cause, reboot, and see if the array still fails to spin down; if it does, please post a fresh diagnostic afterwards.
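
If it helps, a couple of quick checks from the command line (standard Docker CLI, nothing Unraid-specific; a rough sketch):

# list containers with their uptime; the one that never grows old is your culprit
docker ps --format 'table {{.Names}}\t{{.Status}}'

# or watch start/stop events live for a minute
docker events --filter event=start --filter event=die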


So I checked the Areca web interface and it does show temperatures and SMART data, and the drives do spin down, but Unraid just shows them as green with no temperature. It feels like a regression: behind the scenes everything works fine, SMART and temperature alerts are there and the RAID controller handles the spin-down, but Unraid doesn't know how to read any of it.

 

I used to have the green ball synchronised with the actual drive state, plus a temperature reading, but both have disappeared.

 

On the Docker side, I can't find anything matching the description of a crash loop:

(Screenshot 2021-05-21 at 12:30: Docker containers page)

 

root@beyonder-nas:/var/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby# tail -f /var/log/docker.log
time="2021-05-19T12:56:55.911757867+02:00" level=info msg="starting signal loop" namespace=moby path=/var/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/f9d8e70b84929ad9d599abfba015c94c3b181d4d15de5efb3c2d77af6afd73e7 pid=6602
time="2021-05-19T12:57:56.797954264+02:00" level=info msg="starting signal loop" namespace=moby path=/var/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/f9d8e70b84929ad9d599abfba015c94c3b181d4d15de5efb3c2d77af6afd73e7 pid=12169
time="2021-05-19T12:58:17.184904439+02:00" level=info msg="starting signal loop" namespace=moby path=/var/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/f9d8e70b84929ad9d599abfba015c94c3b181d4d15de5efb3c2d77af6afd73e7 pid=14262
time="2021-05-19T12:59:15.477801768+02:00" level=info msg="starting signal loop" namespace=moby path=/var/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/f9d8e70b84929ad9d599abfba015c94c3b181d4d15de5efb3c2d77af6afd73e7 pid=19670
time="2021-05-19T12:59:40.947945726+02:00" level=info msg="starting signal loop" namespace=moby path=/var/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/f9d8e70b84929ad9d599abfba015c94c3b181d4d15de5efb3c2d77af6afd73e7 pid=22221
time="2021-05-19T14:45:54.868357844+02:00" level=info msg="starting signal loop" namespace=moby path=/var/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/36befcd1e3e31a69336faa85eefc2c73e6d02d1efa96c13d4dde81d0db6a5616 pid=8474
time="2021-05-19T14:46:25.781235398+02:00" level=info msg="starting signal loop" namespace=moby path=/var/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/6df3dc5d93d01d17cfd2f359a4e318f32892aee860150429aab93fc6e7a6607f pid=11545
time="2021-05-19T14:46:41.688252358+02:00" level=info msg="starting signal loop" namespace=moby path=/var/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/0df12644a729e47cdb2fe912a0bf3869a8a52a3b8dc37920dea4e0aac1b7537f pid=13862
time="2021-05-19T22:57:12.753844515+02:00" level=info msg="starting signal loop" namespace=moby path=/var/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/94cad75e2187c12d85ba7fad360c8ff2cfa6eddfd462a17ed9a6ba8ca14560cf pid=10903
time="2021-05-20T00:18:43.113226885+02:00" level=info msg="starting signal loop" namespace=moby path=/var/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/683554712dd48026bc97978b002b98aaa9508803ed6464a417a1117b7ba04a4c pid=23602
^C
root@beyonder-nas:/var/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby# date
Fri May 21 13:34:24 CEST 2021

 

It looks more like the Plex scanner starting and stopping within the Plex container than the entire container restarting. Plex doesn't write to the array; it writes to a dedicated SSD outside of it, so it shouldn't cause any array activity. Very odd. I don't have a container whose uptime matches, and those containerd paths no longer exist, so unfortunately I can't find out which container this was :(
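
One thought, in case it happens again: the long hex string in the moby/... path is the container's full ID, so while the container still exists it can be mapped back to a name. A rough sketch using the ID prefix from the log above:

# match the ID from the log against all containers, running or stopped
docker ps -a --no-trunc --format '{{.ID}}  {{.Names}}  {{.Status}}' | grep f9d8e70b8492

# once you have it, the restart count confirms whether it was bouncing
docker inspect --format '{{.Name}} restarts={{.RestartCount}}' f9d8e70b8492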

 

