CS01-HS

Everything posted by CS01-HS

  1. No activity on array disks between spindown and unraid spinning them up. Alternate pool support was just added to the plugin but it introduced a bug that prevents starting the service. Limetech said there'd be spindown fixes in RC3 so maybe waiting's the best strategy.
  2. Anyone else seeing this every 15 minutes in their log? I'm not sure if it's a health check or a warning. If it's bound though it shouldn't need to re-bind...
  3. The only incompatibility I can see is that it can't track activity on alternate pools. I thought enabling cache might pick it up, but no. Otherwise it works just as it did in 6.8.3.
  4. Ignore my earlier advice, which applied to a specific telegraf container. I didn't know "GUS" existed, but I've just switched over and it's a lot neater. Smartctl seems to be working fine with it on RC2; I get drive temps.
  5. To solve the missing smartctl problem: Go to advanced view in the telegraf docker and add the following to Post Arguments:

     /bin/sh -c 'apk update && apk upgrade && apk add smartmontools && telegraf'

     NOTE: smartctl calls in RC1/RC2 prevent drive spindown but that's a separate problem.
  6. Here for reference is my full setup within Debian:

     [debian@debian]:~ $ sudo crontab -l
     # Edit this file to introduce tasks to be run by cron.
     #
     # Each task to run has to be defined through a single line
     # indicating with different fields when the task will be run
     # and what command to run for the task
     #
     # To define the time you can provide concrete values for
     # minute (m), hour (h), day of month (dom), month (mon),
     # and day of week (dow) or use '*' in these fields (for 'any').
     #
     # Notice that tasks will be started based on the cron's system
     # daemon's notion of time and timezones.
     #
     # Output of the crontab jobs (including errors) is sent through
     # email to the user the crontab file belongs to (unless redirected).
     #
     # For example, you can run a backup of all your user accounts
     # at 5 a.m every week with:
     # 0 5 * * 1 tar -zcf /var/backups/home.tgz /home/
     #
     # For more information see the manual pages of crontab(5) and cron(8)
     #
     # m h  dom mon dow   command
     # update youtube-dl once a week
     05 00 * * 0 /usr/bin/wget https://yt-dl.org/downloads/latest/youtube-dl -O /usr/local/bin/youtube-dl

     [debian@debian]:~ $ crontab -l
     # (same default crontab header as above)
     #
     # m h  dom mon dow   command
     # download nightly
     00 01 * * * /home/debian/scripts/youtube-playlist-download.sh > /var/log/youtube-playlist-download.log 2>&1

     [debian@debian]:~ $ more /home/debian/scripts/youtube-playlist-download.sh
     #!/bin/bash
     # Download my youtube playlists

     # OPTIONS
     #
     YOUTUBE_CMD="/usr/local/bin/youtube-dl"
     SAVE_DIRECTORY="/mnt/unraid/Private/Youtube/"
     ARCHIVE_FILE="/home/debian/scripts/youtube-playlist-download.downloaded"
     YOUTUBE_USERNAME="PUT_YOUR_YOUTUBE_USERNAME_HERE"
     CMD_OPTIONS=(-i --yes-playlist --embed-thumbnail --add-metadata --embed-subs --all-subs -f bestvideo[ext=mp4]+bestaudio[ext=m4a]/mp4 --download-archive ${ARCHIVE_FILE})

     ${YOUTUBE_CMD} ${CMD_OPTIONS[@]} -o "${SAVE_DIRECTORY}%(playlist)s/%(playlist_index)03d - %(title)s.%(ext)s" https://www.youtube.com/user/${YOUTUBE_USERNAME}/playlists

     exit 0
     [debian@debian]:~ $
  7. I may be going about this the wrong way, but I have an always-running Debian VM in unraid that even on my underpowered board uses barely any CPU (<1% at idle). With /mnt/user mapped into the VM it has full access to the unraid shares, which makes tasks like installing youtube-dl and other miscellaneous CLI apps much easier.
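     A minimal sketch of how that mapping can look inside the VM, assuming the shares are exposed over virtio-9p with a mount tag named "unraid" (the transport and the tag name are assumptions for illustration, not taken from this setup):

     # /etc/fstab inside the Debian VM (hypothetical example)
     # "unraid" is the 9p mount tag configured in the VM's share settings
     unraid  /mnt/unraid  9p  trans=virtio,version=9p2000.L,_netdev,rw  0  0

     After mount -a, /mnt/unraid then exposes the same tree the host sees at /mnt/user.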
  8. I keep a single, weekly appdata backup on the array (in a non-cached share). That shouldn't consume much space. It protects against cache drive failure but not complete system failure. For that (and versioning) I use Duplicacy to back up the backup share to an external drive.
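     For reference, a minimal sketch of what that Duplicacy job can look like from the CLI; the paths and snapshot ID below are placeholders, not the actual layout:

     # one-time: initialize the backup share as a Duplicacy repository,
     # using a folder on the external drive as the storage destination
     cd /mnt/user/backup
     duplicacy init appdata-backup /mnt/disks/external/duplicacy-storage

     # recurring (e.g. from a weekly user script): create a new versioned snapshot
     cd /mnt/user/backup && duplicacy backup -stats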
  9. Given the coincident timestamps I assume they were calls from the Web GUI. There are no calls to sdspin in the autofan script. I'll test that, but I suspect not. I wrote a script that uses /proc/diskstats to track disk activity and spin the disks down after X minutes; it works for the pool drives, which don't spin up again until they're active. One thing I noticed in my investigation: calling smartctl -A without -n standby on these WD Blacks doesn't wake them from standby. When I spin them down with hdparm -y (bypassing unRAID's management), the Web GUI continues to report their declining temperatures. Thanks, I appreciate the help.
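     A minimal sketch of that /proc/diskstats approach (the device list, check interval, and idle timeout are placeholder values, not the exact script):

     #!/bin/bash
     # Sketch: watch /proc/diskstats and spin a drive down after N idle minutes.
     # DISKS, IDLE_MINUTES and INTERVAL are placeholders.
     DISKS=(sdf sdi sdj)
     IDLE_MINUTES=15
     INTERVAL=60

     declare -A last stamp
     while true; do
         for d in "${DISKS[@]}"; do
             # reads + writes completed (fields 4 and 8 of /proc/diskstats)
             io=$(awk -v d="$d" '$3 == d {print $4 + $8}' /proc/diskstats)
             if [[ "$io" != "${last[$d]}" ]]; then
                 last[$d]=$io
                 stamp[$d]=$SECONDS
             elif (( SECONDS - ${stamp[$d]:-0} >= IDLE_MINUTES * 60 )); then
                 # only issue standby if the drive isn't already spun down
                 hdparm -C /dev/$d | grep -q standby || hdparm -y /dev/$d
                 stamp[$d]=$SECONDS
             fi
         done
         sleep $INTERVAL
     done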
  10. Right, I missed the hdparm check right before it. Full context:

      Dec 20 17:53:31 NAS hdparm wrapper[26550]: caller is sdspin, grandpa is sdspin, device /dev/sdj, args "-C /dev/sdj"
      Dec 20 17:53:31 NAS hdparm wrapper[26560]: caller is sdspin, grandpa is sdspin, device /dev/sdh, args "-C /dev/sdh"
      Dec 20 17:53:31 NAS hdparm wrapper[26570]: caller is sdspin, grandpa is sdspin, device /dev/sdg, args "-C /dev/sdg"
      Dec 20 17:53:31 NAS hdparm wrapper[26580]: caller is sdspin, grandpa is sdspin, device /dev/sde, args "-C /dev/sde"
      Dec 20 17:53:31 NAS hdparm wrapper[26590]: caller is sdspin, grandpa is sdspin, device /dev/sdf, args "-C /dev/sdf"
      Dec 20 17:53:31 NAS hdparm wrapper[26601]: caller is sdspin, grandpa is sdspin, device /dev/sdi, args "-C /dev/sdi"
      Dec 20 17:53:31 NAS smartctl wrapper[26619]: caller is smartctl_type, grandpa is emhttpd, device /dev/sdg, args "-A /dev/sdg"
      Dec 20 17:53:31 NAS smartctl wrapper[26642]: caller is smartctl_type, grandpa is emhttpd, device /dev/sde, args "-A /dev/sde"
      Dec 20 17:53:31 NAS smartctl wrapper[26649]: caller is smartctl_type, grandpa is emhttpd, device /dev/sdh, args "-A /dev/sdh"
      Dec 20 17:53:31 NAS smartctl wrapper[26645]: caller is smartctl_type, grandpa is emhttpd, device /dev/sdj, args "-A /dev/sdj"
      Dec 20 17:53:31 NAS smartctl wrapper[26646]: caller is smartctl_type, grandpa is emhttpd, device /dev/sdf, args "-A /dev/sdf"
      Dec 20 17:53:31 NAS smartctl wrapper[26655]: caller is smartctl_type, grandpa is emhttpd, device /dev/sdi, args "-A /dev/sdi"

      EDIT: And now that I'm looking at it I see the smartctl calls in the original autofan are wrapped in hdparm -C calls, so my -n standby isn't necessary.
  11. Huh it looks like the Web GUI is calling smartctl without -n standby. Oversight in RC2 or am I misreading?

      Dec 20 17:53:31 NAS smartctl wrapper[26619]: caller is smartctl_type, grandpa is emhttpd, device /dev/sdg, args "-A /dev/sdg"
      Dec 20 17:53:31 NAS smartctl wrapper[26642]: caller is smartctl_type, grandpa is emhttpd, device /dev/sde, args "-A /dev/sde"
      Dec 20 17:53:31 NAS smartctl wrapper[26649]: caller is smartctl_type, grandpa is emhttpd, device /dev/sdh, args "-A /dev/sdh"
      Dec 20 17:53:31 NAS smartctl wrapper[26645]: caller is smartctl_type, grandpa is emhttpd, device /dev/sdj, args "-A /dev/sdj"
      Dec 20 17:53:31 NAS smartctl wrapper[26646]: caller is smartctl_type, grandpa is emhttpd, device /dev/sdf, args "-A /dev/sdf"
      Dec 20 17:53:31 NAS smartctl wrapper[26655]: caller is smartctl_type, grandpa is emhttpd, device /dev/sdi, args "-A /dev/sdi"
  12. A few tweaks to scripts and commands but I got it working, thanks.
  13. Huh, interesting. Yes it does, and it seems to work when I run it manually (although the state change is not reflected in the Web GUI):

      root@NAS:~# /usr/local/sbin/sdspin sdf up
      root@NAS:~# hdparm -C /dev/sdf
      /dev/sdf:
       drive state is:  active/idle
      root@NAS:~# /usr/local/sbin/sdspin sdf down
      root@NAS:~# hdparm -C /dev/sdf
      /dev/sdf:
       drive state is:  standby
      root@NAS:~#
  14. Yes, and good intuition. The standard version uses smartctl -A but I customized mine to include --nocheck standby. That may still cause problems, but it queries the array disks too, which until recently spun down fine.
  15. Yes, I followed your advice. With the default timer set to 15:
      I changed the Fast Pool disks from default to 15 and waited ~1 hr for spindown: no spindown.
      I then changed them back to default and waited ~1 hr for spindown: no spindown.
  16. No they all work as expected:

      root@NAS:~# hdparm -C /dev/sdf
      /dev/sdf:
       drive state is:  active/idle
      root@NAS:~# hdparm -C /dev/sdi
      /dev/sdi:
       drive state is:  active/idle
      root@NAS:~# hdparm -C /dev/sdj
      /dev/sdj:
       drive state is:  active/idle
      root@NAS:~# hdparm -y /dev/sdf
      /dev/sdf:
       issuing standby command
      root@NAS:~# hdparm -y /dev/sdi
      /dev/sdi:
       issuing standby command
      root@NAS:~# hdparm -y /dev/sdj
      /dev/sdj:
       issuing standby command
      root@NAS:~# hdparm -C /dev/sdf
      /dev/sdf:
       drive state is:  standby
      root@NAS:~# hdparm -C /dev/sdi
      /dev/sdi:
       drive state is:  standby
      root@NAS:~# hdparm -C /dev/sdj
      /dev/sdj:
       drive state is:  standby
      root@NAS:~#
  17. hdparm -C doesn't seem to work with it but smartctl and hdparm -y do.

      root@NAS:~# hdparm -C /dev/sdb
      /dev/sdb:
      SG_IO: bad/missing sense data, sb[]:  f0 00 01 00 50 40 ff 0a 80 00 b4 00 00 1d 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
       drive state is:  unknown
      root@NAS:~# smartctl --nocheck standby -i /dev/sdb
      smartctl 7.1 2019-12-30 r5022 [x86_64-linux-5.10.1-Unraid] (local build)
      Copyright (C) 2002-19, Bruce Allen, Christian Franke, www.smartmontools.org

      === START OF INFORMATION SECTION ===
      Model Family:     Western Digital Elements / My Passport (USB, AF)
      Device Model:     WDC WD40NMZW-11GX6S1
      Serial Number:    WD-WX11D9660PZ7
      LU WWN Device Id: 5 0014ee 65c75afb2
      Firmware Version: 01.01A01
      User Capacity:    4,000,753,472,000 bytes [4.00 TB]
      Sector Sizes:     512 bytes logical, 4096 bytes physical
      Rotation Rate:    5400 rpm
      Form Factor:      2.5 inches
      Device is:        In smartctl database [for details use: -P show]
      ATA Version is:   ACS-3 (minor revision not indicated)
      SATA Version is:  SATA 3.1, 6.0 Gb/s (current: 6.0 Gb/s)
      Local Time is:    Sun Dec 20 12:41:35 2020 EST
      SMART support is: Available - device has SMART capability.
      SMART support is: Enabled
      Power mode is:    ACTIVE or IDLE

      root@NAS:~# hdparm -y /dev/sdb
      /dev/sdb:
       issuing standby command
      SG_IO: bad/missing sense data, sb[]:  f0 00 01 00 50 40 00 0a 80 00 00 00 00 1d 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
      root@NAS:~# hdparm -C /dev/sdb
      /dev/sdb:
      SG_IO: bad/missing sense data, sb[]:  f0 00 01 00 50 40 00 0a 80 00 b4 00 00 1d 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
       drive state is:  unknown
      root@NAS:~# smartctl --nocheck standby -i /dev/sdb
      smartctl 7.1 2019-12-30 r5022 [x86_64-linux-5.10.1-Unraid] (local build)
      Copyright (C) 2002-19, Bruce Allen, Christian Franke, www.smartmontools.org

      Device is in STANDBY mode, exit(2)
      root@NAS:~#

      Thanks. Like I said it's not a result of the recent changes - it's been a problem since I added the pool in beta25.
  18. Ha! Okay then. I'm still curious about the pool problem (unless the whole setup's changing) because that's been a problem for me since beta-25 (the first one I installed).
  19. I don't know if this is a bug but I wasn't sure where else to post it. I have two spindown problems in RC2. I've set default spindown to 15 minutes to help with testing.

      (1) The USB disk in Unassigned Devices
      Unraid tries to spin it down (good!) but doesn't actually do it. That may be a problem with the drive's USB interface, so I've cron'd a user script to spin it down with hdparm -y /dev/sdb (which works). It looks like unRAID never detects that it's spun down (or a successful return from its own spindown call), because absent drive activity I see this every 15 minutes in the log regardless of the drive's status:

      Dec 20 06:45:05 NAS emhttpd: spinning down /dev/sdb
      Dec 20 07:00:06 NAS emhttpd: spinning down /dev/sdb

      (2) The Fast pool
      These are 2.5" 7200rpm WD Blacks, which also don't spin down. I see nothing in the log indicating unRAID attempts to spin them down, except when I trigger it manually from the Web GUI:

      Dec 19 21:09:57 NAS emhttpd: spinning down /dev/sdj
      Dec 19 21:09:57 NAS emhttpd: spinning down /dev/sdf
      Dec 19 21:09:57 NAS emhttpd: spinning down /dev/sdi

      I've attached diagnostics, but this is more a request for information: should I see spindown calls for pool drives in the syslog? (I should mention I haven't seen the array disks spin down either, but they're set to 2 hours and haven't really had a chance. I'll lower it and keep an eye out.)

      nas-diagnostics-20201220-0734.zip
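      For completeness, a sketch of the workaround mentioned under (1); the schedule and script path are arbitrary choices for illustration, and /dev/sdb is assumed to stay stable:

      The cron entry (root's crontab, every 15 minutes to match the spindown timer):
      */15 * * * * /boot/config/scripts/spindown-usb.sh

      And the script it calls:
      #!/bin/bash
      # Force the UD USB disk into standby unless it's already there.
      # smartctl is used for the check because hdparm -C reports "unknown"
      # through this drive's USB bridge.
      DEV=/dev/sdb
      smartctl --nocheck standby -i "$DEV" | grep -q STANDBY && exit 0
      hdparm -y "$DEV"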
  20. Sorry about that. I meant it sincerely, though: an unclean shutdown (and subsequent parity check) should be a big deal. Auto-pause mitigates one small consequence of that; the larger consequence is potential data loss.
  21. That's extreme. I think my initial backup over wireless averaged 7 MB/s, or at worst 5 MB/s. 5 MB/s (if my math's right) puts 850 GB at around 2 days of run time.

      Cache will function differently depending on the share's setting:
      If it's cache-yes, share data is written to the cache and will remain there until mover is run, at which point it's transferred to the (parity-protected) array. If new files are then written, they'll be written to the cache and go through the same process.
      If it's cache-only, all share data, assuming it can fit, will remain permanently on the cache. It shouldn't touch the array.
      If it's cache-prefer, all share data will remain on the cache, and if any is on the array (maybe due to an earlier, different cache setting) it will be moved from array to cache when mover runs, assuming it can fit.
      If you edit a share and click Help on "Use cache pool" you'll see a more thorough explanation.

      NOTE: Data on a single-drive cache is not parity-protected, which is why unRAID offers the option to run two cache drives in RAID1 (I use this).

      The way I've set it up, Time Machine never touches the array. That seemed like a bad idea performance-wise and in terms of wear on a critical disk. If my cache pool were large enough I'd have set the share to cache-prefer, run mover, and that'd be that. Because it's not, and I'm running RC2 (which supports multiple cache pools), I set up a new cache pool just for Time Machine (with cache-prefer). I can confirm I saw little if any difference in Time Machine speed between the SSD and the time-machine pool (7200rpm drives in RAID5), but I don't know how much of that was due to RAID5 and/or 7200rpm.
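      (For reference, the arithmetic behind that estimate: 850 GB is roughly 850,000 MB, and 850,000 MB ÷ 5 MB/s ≈ 170,000 s, which is about 47 hours, i.e. roughly 2 days.)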
  22. You can also pause it manually and parity-tuning will handle the resume. If unclean shutdowns are so frequent that the extra click is burdensome, you've got bigger problems.
  23. I'm running the original (more basic) version of this plugin with RC1 and two cache pools. So far so good.
  24. I think so but I'm running RC1 and telegraf and that combination prevents spindown. hdparm -y works for a time.