CS01-HS

Everything posted by CS01-HS

  1. I have a USB drive (sdb) mounted with UD: v6.9.1 spins it down correctly, but the (substandard) interface returns bad/missing sense data:

     root@NAS:~# hdparm -y /dev/sdb

     /dev/sdb:
      issuing standby command
     SG_IO: bad/missing sense data, sb[]:  f0 00 01 00 50 40 00 0a 80 00 00 00 00 1d 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00

     ...which this line in /usr/local/sbin/sdspin catches and records as an error:

     [[ $RET == 0 && ${OUTPUT,,} =~ "bad/missing sense" ]] && RET=1

     So although the drive's spun down, unRAID thinks it isn't, causing (a) the activity LED to stay green and (b) unRAID to try to spin it down every spindown interval.

     I've worked around it by modifying the code (below) to exclude USB drives from the test. Is that risky? Should I not have? I'm not sure what cases this test is meant to handle. I've also updated the status check for USB drives to use smartctl, since (at least with mine) hdparm can't detect standby.

     #!/bin/bash
     # spin device up or down or get spinning status
     # $1 device name
     # $2 up or down or status
     # ATA only
     # hattip to Community Dev @doron

     RDEVNAME=/dev/${1#'/dev/'}  # So that we can be called with either "sdX" or "/dev/sdX"

     get_device_id () {
       LABEL="${RDEVNAME:5}"
       DEVICE_ID=$(ls -l /dev/disk/by-id/ | grep -v " wwn-" | grep "${LABEL}$" | rev | cut -d ' ' -f3 | rev)
       echo "$DEVICE_ID"
     }

     smartctl_status () {
       OUTPUT=$(/usr/sbin/smartctl --nocheck standby -i $RDEVNAME 2>&1)
       RET=$?
       # Ignore Bit 1 error (Device open failed) which usually indicates standby;
       # smartctl's exit status is a bitmask, so test the bit rather than the whole value
       [[ $((RET & 2)) == 2 ]] && RET=0
     }

     hdparm () {
       OUTPUT=$(/usr/sbin/hdparm $1 $RDEVNAME 2>&1)
       RET=$?
       # ignore missing sense warning which might be caused by a substandard USB interface
       if [[ ! "$(get_device_id)" =~ ^usb-.* ]]; then
         [[ $RET == 0 && ${OUTPUT,,} =~ "bad/missing sense" ]] && RET=1
       fi
     }

     if [[ "$2" == "up" ]]; then
       hdparm "-S0"
     elif [[ "$2" == "down" ]]; then
       hdparm "-y"
     else
       # use smartctl (instead of hdparm) for USB drives
       if [[ "$(get_device_id)" =~ ^usb-.* ]]; then
         smartctl_status
       else
         hdparm "-C"
       fi
       [[ $RET == 0 && ${OUTPUT,,} =~ "standby" ]] && RET=2
     fi

     exit $RET
  2. Okay, mystery solved. Compare the return code from your version to unRAID's:

     root@NAS:~# sdspin /dev/sdb down; echo $?
     SG_IO: bad/missing sense data, sb[]:  f0 00 01 00 50 40 00 0a 80 00 00 00 00 1d 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
     0
     root@NAS:~# cp /usr/local/sbin/sdspin.unraid /usr/local/sbin/sdspin
     root@NAS:~# sdspin /dev/sdb down; echo $?
     1
     root@NAS:~#

     Now the two scripts. unRAID's returns 1 (RET=1):

     hdparm () {
       OUTPUT=$(/usr/sbin/hdparm $1 $RDEVNAME 2>&1)
       RET=$?
       [[ $RET == 0 && ${OUTPUT,,} =~ "bad/missing sense" ]] && RET=1
     }
     ...
     exit $RET

     Yours executes this block:

     else
       # Not SAS
       $DEBUG && echo $HDPARM -y $RDEVNAME
       $HDPARM -y $RDEVNAME > /dev/null
     fi

     without catching a return code, then hits the last line, returning 0:

     exit 0 # Just in case :-)
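     The mechanics in one self-contained sketch (the helper is hypothetical, not from either script): bash discards a function's nonzero status unless the script explicitly propagates it, so an unconditional trailing exit 0 masks the failure.

```shell
#!/bin/bash
# A helper that fails, standing in for hdparm returning nonzero.
fails () { return 1; }

# Propagating, as unRAID's sdspin does: exit $RET carries the status out.
( fails; RET=$?; exit $RET )
echo "propagated: $?"    # propagated: 1

# Masking: an unconditional trailing "exit 0" discards the status.
( fails; exit 0 )
echo "masked: $?"        # masked: 0
```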
  3. sdspin! That's what I was looking for, thank you. I mistakenly assumed that script was specific to your plugin. And I see the plugin helpfully saves the default version so I'll dig in and figure out the difference.
  4. Good eye re: hdparm printout, I missed that. Okay but I'm pretty sure unRAID doesn't call hdparm directly - it issues some other command (smartctl?) that then calls hdparm. That call (and its parameters for UDs) is what I'm after - if you happen to know. Re: USB spindown, I can bore you with details if you want?
  5. Plugin was installed but debug was disabled. I re-enabled it (with touch /tmp/spindownsas-debug) and tested again. Same results, no additional printouts:

     root@NAS:~# touch /tmp/spindownsas-debug
     root@NAS:~# sdspin /dev/sdb down
     /usr/sbin/hdparm -y /dev/sdb
     SG_IO: bad/missing sense data, sb[]:  f0 00 01 00 50 40 00 0a 80 00 00 00 00 1d 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
     root@NAS:~# echo $?
     0
     root@NAS:~# sdspin /dev/sdb
     root@NAS:~# echo $?
     0
     root@NAS:~#

     It's a USB drive so I can tell by feel when it's spun down, and it is. Spin down worked even without this plugin; the problem was that unRAID wouldn't detect it. My guess is this wrapper suppresses the error code from "bad/missing sense data", which is fortunate for me, but it probably shouldn't. I still don't know the exact command unRAID calls to spin down Unassigned Devices. Do you? I assume it's smartctl, but with what parameters exactly? (This is getting outside the scope of your plugin.)
  6. Done. Same results whether the drive is active or already spun down.

     root@NAS:~# sdspin /dev/sdb down
     SG_IO: bad/missing sense data, sb[]:  f0 00 01 00 50 40 00 0a 80 00 00 00 00 1d 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
     root@NAS:~# echo $?
     0
     root@NAS:~# sdspin /dev/sdb
     root@NAS:~# echo $?
     0
  7. v6.9.1. I changed DEBUG to true in /usr/local/emhttp/plugins/sas-spindown/functions and ran:

     touch /tmp/spindownsas-debug
  8. Interesting: I don't have SAS drives, but I do have a USB drive (Unassigned Device) with a faulty interface that prevents unRAID from properly detecting idle status, so (a) the webGUI shows it spun up regardless of status and (b) unRAID issues it a spindown command every spindown interval. Faulty interface (sdb is the USB):

     root@NAS:~# hdparm -C /dev/sdb

     /dev/sdb:
     SG_IO: bad/missing sense data, sb[]:  f0 00 01 00 50 40 00 0a 80 00 00 00 00 1d 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
      drive state is:  unknown

     I installed this plugin thinking I'd use the code to write my own wrapper, but mysteriously it's solved my problem: the drive's status in the webGUI is accurate and there are no unnecessary spindown commands. Thanks? Debug log in case you're curious:
  9. Nothing obvious, but maybe that's because I applied the fix. I'll wait until it happens again and compare. Thanks.
  10. Attached. Thanks. It's really minor so no need to fix my case, I only posted in case it indicated a general problem that affected more users. (If it's relevant, I think it popped up some time in the last couple/few months.) nas-diagnostics-20210314-1537.zip
  11. Strange problem where after every reboot (or maybe array stop/start) my attached USB's partition is visible as an SMB share despite having sharing disabled at the disk and partition level. Enabling then disabling sharing in the partition's settings fixes it. Anyone else seeing this?
  12. Because the lowest allowed value there is 2! Disabling restrictive validation solved it though, thank you.
  13. Thanks for the work on this. No rush at all, but I wanted to suggest a (simple?) fix before I forget. My use case is weekly backups to the array which are then backed up to a versioned system, so I only need one backup on the array at a time. The problem is that the lowest number allowed for "Number of days to keep backups" is 7 (inclusive), which ends up keeping two. To allow the value I want (1) I run it with this patch, with no apparent problems for several months:

     root@NAS:~# diff /usr/local/emhttp/plugins/vmbackup/include/javascript/vmbackup.js.orig /usr/local/emhttp/plugins/vmbackup/include/javascript/vmbackup.js
     1195c1195
     <   change_attr("#number_of_days_to_keep_backups", "pattern", "^(0|([7-9]|[1-8][0-9]|9[0-9]|1[0-7][0-9]|180))$");
     ---
     >   change_attr("#number_of_days_to_keep_backups", "pattern", "^(0|([1-9]|[1-8][0-9]|9[0-9]|1[0-7][0-9]|180))$");
     root@NAS:~#
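     The patched pattern can be sanity-checked outside the browser with grep -E, since the anchors and alternatives carry over unchanged (a quick sketch, not part of the plugin):

```shell
#!/bin/bash
old='^(0|([7-9]|[1-8][0-9]|9[0-9]|1[0-7][0-9]|180))$'
new='^(0|([1-9]|[1-8][0-9]|9[0-9]|1[0-7][0-9]|180))$'

check () { echo "$1" | grep -Eq "$2" && echo accept || echo reject; }

check 1   "$old"   # reject: stock pattern starts at 7
check 1   "$new"   # accept: the patch widens the first alternative to [1-9]
check 180 "$new"   # accept: upper bound unchanged
check 181 "$new"   # reject
```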
  14. My HDDs also spin down properly (with autofan and telegraf running). Not only that, but my non-cache pool drives, which never spun down on their own, do now. Perfect.
  15. With 6.9 on my Asrock j5005 (UHD 605) I'd get occasional lockups not with Emby, as far as I could tell, but with Handbrake GPU encoding and the intel-gpu-telegraf docker. Installing an HDMI dummy plug seems to have fixed it - it hasn't happened since. I don't know if that's specific to my board or more general. NOTE: I'm using the new modprobe method to load the driver.
  16. I might have a simple fix, unless I'm missing something. sdg is Disk2 of my array. Here are diskstats for sdg and its only partition, sdg1:

     root@NAS:~# grep sdg /proc/diskstats
        8      96 sdg 980673 22075722 182230404 6487392 284407 19425162 157662544 2604401 0 6664111 9193913 0 0 0 0 1749 102119
        8      97 sdg1 681558 22075722 182067420 1286206 282656 19425162 157662544 2491636 0 1400537 3777843 0 0 0 0 0 0

     And here are diskstats after a SMART call:

     root@NAS:~# grep sdg /proc/diskstats
        8      96 sdg 980680 22075722 182230408 6487516 284407 19425162 157662544 2604401 0 6664239 9194037 0 0 0 0 1749 102119
        8      97 sdg1 681558 22075722 182067420 1286206 282656 19425162 157662544 2491636 0 1400537 3777843 0 0 0 0 0 0

     You can see several fields have increased on sdg but none on sdg1. Now I open a file on Disk2:

     root@NAS:~# grep sdg /proc/diskstats
        8      96 sdg 980734 22077348 182243848 6487581 284407 19425162 157662544 2604401 0 6664306 9194102 0 0 0 0 1749 102119
        8      97 sdg1 681612 22077348 182080860 1286271 282656 19425162 157662544 2491636 0 1400604 3777908 0 0 0 0 0 0

     ...which shows up as reads on the partition sdg1. Could the fix be as simple as monitoring partitions for activity rather than devices? I've been using this method to monitor and spin down an attached USB drive and my 2nd pool (3 spinners in BTRFS RAID5) for a few months now with no apparent problems.
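     The idea above (watch the partition, not the device) can be sketched with awk over /proc/diskstats: after the major, minor, and name columns, field 4 is reads completed and field 8 is writes completed, so activity shows up as a change in those counters between samples. Using the sdg1 lines from the first and third samples in the post:

```shell
#!/bin/bash
# Sum reads-completed (field 4) and writes-completed (field 8) for a partition
# from a /proc/diskstats-style line; on a live system you would feed it
#   grep ' sdg1 ' /proc/diskstats
reads_writes () {
  echo "$1" | awk '{print $4 + $8}'
}

before='8 97 sdg1 681558 22075722 182067420 1286206 282656 19425162 157662544 2491636 0 1400537 3777843 0 0 0 0 0 0'
after='8 97 sdg1 681612 22077348 182080860 1286271 282656 19425162 157662544 2491636 0 1400604 3777908 0 0 0 0 0 0'

if [ "$(reads_writes "$before")" != "$(reads_writes "$after")" ]; then
  echo "partition activity detected"
else
  echo "partition idle"
fi
```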
  17. I don't know enough about the plugin to know if that's the best solution, but you can overwrite the rules file on boot by saving your custom version on the flash drive (I created the directory /boot/extras/ for custom scripts) then adding something like the following to your /boot/config/go file:

     # Custom autofan
     cp /usr/local/emhttp/plugins/dynamix.system.autofan/scripts/autofan /usr/local/emhttp/plugins/dynamix.system.autofan/scripts/autofan.orig
     cp /boot/extras/autofan /usr/local/emhttp/plugins/dynamix.system.autofan/scripts/autofan
  18. Do not use because (a) it won't be reflected in the GUI or (b) it may cause more serious problems like data corruption? If (b) does it apply to only array disks or pool disks as well?
  19. Spindown works only if I disable autofan and every other addon that queries SMART data (telegraf, etc.) Once spun down, these queries don't seem to wake the drives, but it's too soon to say definitively.
  20. @BurntOC I've only used that procedure once and with a different plugin. I hope I didn't lead you astray.
  21. If, like me, you upgraded to 6.9 before installing the latest version, I think you'd be fine - v0.2.1 works for me. I'm pretty sure there's a way to install specific versions manually with the webGUI but I can't remember how. Edit: I remembered - go to Plugins -> Install Plugin, then paste the following in the URL field:

     https://raw.githubusercontent.com/jtok/unraid.vmbackup/v0.2.1/vmbackup.plg
  22. For what it's worth I've been running 6.9-RC2 since it was released and although the latest version of the plugin prevents me from installing it, the version prior has worked consistently in my weekly runs.
  23. Thanks, thought maybe I missed something. I should also mention I set an autostart delay (to give it time to establish the connection.)
  24. Using this container (which I've renamed vpn) I route all related containers through its network stack by setting their network to container:vpn (in docker run terms, --network container:vpn). It's straightforward and (from what I can tell) works reliably, no issues with the latest update. Many seem to use proxies instead - are there advantages to that method?
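     For anyone replicating this, the key detail is that containers sharing another container's network stack cannot publish ports themselves: any ports the dependent containers need must be published on the vpn container. A hedged sketch (image names and the port are placeholders, not from my setup):

```shell
# The VPN container owns the network stack, so ports the dependent
# containers need are published here.
docker run -d --name vpn -p 8080:8080 --cap-add NET_ADMIN myvpnimage

# Dependent containers join that stack: no -p flags of their own,
# and all outbound traffic goes through the VPN.
docker run -d --name downloader --network container:vpn mydownloaderimage
```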
  25. For anyone following my earlier instructions: I've switched to using /dev/shm/ as the transcode directory. I think there may be a timing issue where the container tries to access the tmpfs before it's created, which obviously caused problems. Referencing a permanent directory (/dev/shm/) avoids that. So far so good.
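     A sketch of the mapping (the image name and container path are placeholders; the point is simply binding the host's always-present tmpfs instead of a tmpfs created at array start):

```shell
# /dev/shm exists from boot, so the bind mount never races the container.
# Point the app's transcode setting at /transcode inside the container.
docker run -d --name mediaserver -v /dev/shm:/transcode mymediaserver-image
```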