CS01-HS

Everything posted by CS01-HS

  1. Ignore my earlier advice, which applied to a specific telegraf container. I didn't know "GUS" existed, but I've just switched over; it's a lot neater. smartctl seems to be working fine with it on RC2: I get drive temps.
  2. To solve the missing smartctl problem: go to Advanced View in the telegraf Docker container and add the following to Post Arguments:

     /bin/sh -c 'apk update && apk upgrade && apk add smartmontools && telegraf'

     NOTE: smartctl calls in RC1/RC2 prevent drive spindown, but that's a separate problem.
  3. Here for reference is my full setup within Debian:

     [debian@debian]:~ $ sudo crontab -l
     # Edit this file to introduce tasks to be run by cron.
     #
     # Each task to run has to be defined through a single line
     # indicating with different fields when the task will be run
     # and what command to run for the task
     #
     # To define the time you can provide concrete values for
     # minute (m), hour (h), day of month (dom), month (mon),
     # and day of week (dow) or use '*' in these fields (for 'any').
     #
     # Notice that tasks will be started based on the cron's system
     # daemon's notion of time and timezones.
     #
     # Out
  4. I may be going about this the wrong way, but I have a Debian VM in Unraid that's always running and that, even on my underpowered board, uses barely any CPU (<1% at idle). With /mnt/user mapped into the VM it gives full access to the Unraid shares and makes tasks like installing youtube-dl and other miscellaneous CLI apps much easier.
  5. I keep a single, weekly appdata backup on the array (in a non-cached share). That shouldn't consume much space. It protects against cache drive failure but not complete system failure; for that (and versioning) I use Duplicacy to back up the backup share to an external drive.
  6. Given the coincident timestamps I assume they were calls from the Web GUI. There are no calls to sdspin in the autofan script. I'll test that, but I suspect not. I wrote a script that uses /proc/diskstats to track disk activity and spin drives down after X minutes. It works for the pool drives, which don't spin up again until they're active. One thing I noticed in my investigation: calling smartctl -A without -n standby on these WD Blacks doesn't wake them from standby. When I spin them down with hdparm -y (bypassing unRAID's management) the Web GUI continues to report their decl
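A minimal sketch of what such a /proc/diskstats-based spindown loop could look like (the device list, threshold, and structure are illustrative assumptions, not the poster's actual script):

```shell
#!/bin/bash
# Sketch: spin down disks after IDLE_MIN minutes with no I/O.
# DISKS and IDLE_MIN are assumptions for illustration.
DISKS="sdf sdi sdj"
IDLE_MIN=15

# Sum of sectors read (field 6) and sectors written (field 10) for one
# device, from /proc/diskstats-formatted input on stdin.
counters() {
  awk -v d="$1" '$3 == d { print $6 + $10 }'
}

declare -A last idle

check_disks() {
  for d in $DISKS; do
    cur=$(counters "$d" < /proc/diskstats)
    if [ "$cur" = "${last[$d]:-}" ]; then
      # no I/O since last poll: one more idle minute
      idle[$d]=$(( ${idle[$d]:-0} + 1 ))
    else
      idle[$d]=0
    fi
    last[$d]=$cur
    if [ "${idle[$d]}" -ge "$IDLE_MIN" ]; then
      hdparm -y "/dev/$d" > /dev/null   # issue standby directly
      idle[$d]=0
    fi
  done
}

# Poll once a minute only when invoked with "run", so the definitions
# can be sourced or tested without starting the loop.
if [ "${1:-}" = "run" ]; then
  while true; do
    check_disks
    sleep 60
  done
fi
```

Because this calls hdparm directly it bypasses Unraid's own spindown accounting, which would match the symptom described above: the GUI keeps reporting the old state.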
  7. Right, I missed the hdparm check right before it. Full context:

     Dec 20 17:53:31 NAS hdparm wrapper[26550]: caller is sdspin, grandpa is sdspin, device /dev/sdj, args "-C /dev/sdj"
     Dec 20 17:53:31 NAS hdparm wrapper[26560]: caller is sdspin, grandpa is sdspin, device /dev/sdh, args "-C /dev/sdh"
     Dec 20 17:53:31 NAS hdparm wrapper[26570]: caller is sdspin, grandpa is sdspin, device /dev/sdg, args "-C /dev/sdg"
     Dec 20 17:53:31 NAS hdparm wrapper[26580]: caller is sdspin, grandpa is sdspin, device /dev/sde, args "-C /dev/sde"
     Dec 20 17:53:31 NAS hdparm wrapper[26590]: caller is sdspin, grandpa
  8. Huh, it looks like the Web GUI is calling smartctl without -n standby. Oversight in RC2, or am I misreading?

     Dec 20 17:53:31 NAS smartctl wrapper[26619]: caller is smartctl_type, grandpa is emhttpd, device /dev/sdg, args "-A /dev/sdg"
     Dec 20 17:53:31 NAS smartctl wrapper[26642]: caller is smartctl_type, grandpa is emhttpd, device /dev/sde, args "-A /dev/sde"
     Dec 20 17:53:31 NAS smartctl wrapper[26649]: caller is smartctl_type, grandpa is emhttpd, device /dev/sdh, args "-A /dev/sdh"
     Dec 20 17:53:31 NAS smartctl wrapper[26645]: caller is smartctl_type, grandpa is emhttpd, device /dev/sdj,
  9. A few tweaks to scripts and commands but I got it working, thanks.
  10. Huh, interesting. Yes it does, and it seems to work when I run it manually (although the state change is not reflected in the Web GUI):

      root@NAS:~# /usr/local/sbin/sdspin sdf up
      root@NAS:~# hdparm -C /dev/sdf
      /dev/sdf:
       drive state is:  active/idle
      root@NAS:~# /usr/local/sbin/sdspin sdf down
      root@NAS:~# hdparm -C /dev/sdf
      /dev/sdf:
       drive state is:  standby
      root@NAS:~#
  11. Yes, and good intuition. The standard version uses smartctl -A, but I customized mine to include --nocheck standby. That may still cause problems, but it queries the array disks too, which until recently spun down fine.
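For context, telegraf's smart input plugin exposes this as a config option rather than a raw smartctl flag; a sketch of the relevant fragment (check your telegraf version's sample config, as option names may differ):

```toml
[[inputs.smart]]
  ## Skip gathering from disks in this power mode; "standby" is what
  ## keeps the plugin from waking spun-down drives.
  nocheck = "standby"
  ## Also gather individual S.M.A.R.T. attribute metrics.
  attributes = true
```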
  12. Yes, I followed your advice. With the default timer set to 15: I changed the Fast pool disks from default to 15 and waited ~1 hr for spindown: no spindown. I then changed them back to default and waited ~1 hr: no spindown.
  13. No, they all work as expected:

      root@NAS:~# hdparm -C /dev/sdf
      /dev/sdf:
       drive state is:  active/idle
      root@NAS:~# hdparm -C /dev/sdi
      /dev/sdi:
       drive state is:  active/idle
      root@NAS:~# hdparm -C /dev/sdj
      /dev/sdj:
       drive state is:  active/idle
      root@NAS:~# hdparm -y /dev/sdf
      /dev/sdf:
       issuing standby command
      root@NAS:~# hdparm -y /dev/sdi
      /dev/sdi:
       issuing standby command
      root@NAS:~# hdparm -y /dev/sdj
      /dev/sdj:
       issuing standby command
      root@NAS:~# hdparm -C /dev/sdf
      /dev/sdf:
       drive state is:  standby
      root@NAS:~# hdparm -C /dev/sdi
      /dev/sdi:
       drive state is:  standby
      root@NAS:~# hdparm
  14. hdparm -C doesn't seem to work with it, but smartctl and hdparm -y do.

      root@NAS:~# hdparm -C /dev/sdb
      /dev/sdb:
      SG_IO: bad/missing sense data, sb[]:  f0 00 01 00 50 40 ff 0a 80 00 b4 00 00 1d 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
       drive state is:  unknown
      root@NAS:~# smartctl --nocheck standby -i /dev/sdb
      smartctl 7.1 2019-12-30 r5022 [x86_64-linux-5.10.1-Unraid] (local build)
      Copyright (C) 2002-19, Bruce Allen, Christian Franke, www.smartmontools.org

      === START OF INFORMATION SECTION ===
      Model Family:     Western Digital Elements / My Passport (USB, AF)
      Device Model:     WDC
  15. Ha! Okay then. I'm still curious about the pool problem (unless the whole setup's changing), because it's been a problem for me since beta 25 (the first one I installed).
  16. I don't know if this is a bug, but I wasn't sure where else to post it. I have two spindown problems in RC2. (I've set the default spindown to 15 minutes to help with testing.) (1) The USB disk in Unassigned Devices: Unraid tries to spin it down (good!) but doesn't actually do it. That may be a problem with the drive's USB interface, so I've cron'd a user script to spin it down with hdparm -y /dev/sdb (which works). It looks like unRAID never detects that it's spun down (or a successful return from its own spindown call), because absent drive activity I see
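The cron'd workaround described above might look like this as a crontab entry (the 15-minute interval is an assumption; the post doesn't give one):

```shell
# Hypothetical crontab entry: force the USB disk into standby every
# 15 minutes, matching the configured default spindown delay.
*/15 * * * * /usr/sbin/hdparm -y /dev/sdb > /dev/null 2>&1
```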
  17. Sorry about that. I meant it sincerely, though: an unclean shutdown (and the subsequent parity check) should be a big deal. Auto-pause mitigates one small consequence of that; the larger consequence is potential data loss.
  18. That's extreme. I think my initial backup over wireless averaged 7 MB/s, or at worst 5 MB/s. 5 MB/s (if my math's right) puts 850 GB at around 2 days of run time. Cache functions differently depending on the share's setting. If it's cache-yes, share data is written to the cache and remains there until the mover runs, at which point it's transferred to the (parity-protected) array; if new files are then written, they go to the cache and through the same process. If it's cache-only, all share data, assuming it can fit, will remain permanently on the c
  19. You can also pause it manually and parity-tuning will handle the resume. If unclean shutdowns are so frequent that the extra click is burdensome, you've got bigger problems.
  20. I'm running the original (more basic) version of this plugin with RC1 and two cache pools. So far so good.
  21. I think so, but I'm running RC1 with telegraf, and that combination prevents spindown. hdparm -y works for a time.
  22. I don't want to derail the OP's thread, but do you mean this generally or specifically for BTRFS/SSDs? Like the OP, I have a BTRFS RAID1 cache pool, so if the risk is low and recovery unlikely in any case, it seems like a real cost (half the space) for no real benefit.
  23. Instructions for an email-to-Growl relay, to get Growl notifications from any app that supports email alerts, like unRAID. I have this set up on a Pi, but any Debian-based (and maybe other) distro should work. This is a dirty first draft, so if you're one of the two people who still use Growl and run into difficulty, let me know and I'll clean it up. NOTE: This assumes you have Growl set up and working on all client machines. I'm not sure where to download 2.x from anymore; if I find the installer I'll make it available. CAUTION: This assumes the target machine DO
  24. Great, added to config/go. An easy fix, thanks. The LED difference is still curious. I always assumed the "access" signal was triggered when the motherboard (or card) detected HD access, but this suggests it relies on the drive to say "I have been accessed."