CS01-HS

Everything posted by CS01-HS

  1. Was this ever resolved? I'm seeing it now.
  2. Couldn't you run a user script every X minutes to chown the files to root and remove write permission from everyone else? I haven't tested this (or the command below) but something like:

```
find '/mnt/RecycleBin/User Shares' -type f -follow -exec chown root:root {} \; -exec chmod a-w {} \;
```

     Or use inotify if you want to get fancy.
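Following up on the inotify idea, here's a rough, untested sketch of what that could look like. It assumes the inotify-tools package (for inotifywait) is installed, and the function names are mine, purely illustrative:

```shell
#!/bin/bash
# Untested sketch: lock files down so only root can modify them.
# lockdown() does the work; watch_share() applies it to files as they appear.

lockdown() {
    # chown needs root; ignore the failure when running as a regular user
    chown root:root "$1" 2>/dev/null || true
    chmod a-w "$1"    # strip write permission from everyone
}

watch_share() {
    # -m: keep running, -r: recurse; fire on files created or moved in
    inotifywait -m -r -e create -e moved_to --format '%w%f' \
        '/mnt/RecycleBin/User Shares' |
    while IFS= read -r file; do
        lockdown "$file"
    done
}
```

This reacts immediately instead of on a timer, at the cost of keeping an inotifywait process running.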
  3. Limetech's aware and on it, from what they posted. Fixes in RC3.
  4. Right. Specifically it's the smartctl calls in telegraf (part of GUS). Alternatively you can comment out the [[inputs.smart]] block in Grafana-Unraid-Stack/telegraf/telegraf.conf to disable them. It seems that for whatever reason (new kernel, new smartctl) these calls are recorded as disk activity, which causes the disks to fail unraid's "if inactive for X minutes, spin down" test.
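For reference, disabling it just means commenting out the whole block in telegraf.conf. A sketch only; the exact options inside your [[inputs.smart]] block may differ:

```toml
# Grafana-Unraid-Stack/telegraf/telegraf.conf
# [[inputs.smart]]
#   path = "/usr/sbin/smartctl"
#   attributes = true
```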
  5. I think you're seeing smartctl/hdparm polling. The 85B/s is when the drive's in standby and the 341B/s when it's active.
  6. No activity on array disks between spindown and unraid spinning them up. Alternate pool support was just added to the plugin but it introduced a bug that prevents starting the service. Limetech said there'd be spindown fixes in RC3 so maybe waiting's the best strategy.
  7. Anyone else seeing this every 15 minutes in their log? I'm not sure if it's a health check or a warning. If it's bound though it shouldn't need to re-bind...
  8. The only incompatibility I can see is that it can't track activity on alternate pools. I thought enabling cache might pick it up, but no. Otherwise, it works just as in 6.8.3.
  9. Ignore my earlier advice, which applied to a specific telegraf container. I didn't know "GUS" existed, but I've just switched over and it's a lot neater. Smartctl seems to be working fine with it on RC2; I get drive temps.
  10. To solve the missing smartctl problem: go to advanced view in the telegraf docker and add the following to Post Arguments:

```
/bin/sh -c 'apk update && apk upgrade && apk add smartmontools && telegraf'
```

      NOTE: smartctl calls in RC1/RC2 prevent drive spindown, but that's a separate problem.
  11. Here for reference is my full setup within Debian:

```
[debian@debian]:~ $ sudo crontab -l
# Edit this file to introduce tasks to be run by cron.
#
# Each task to run has to be defined through a single line
# indicating with different fields when the task will be run
# and what command to run for the task
#
# To define the time you can provide concrete values for
# minute (m), hour (h), day of month (dom), month (mon),
# and day of week (dow) or use '*' in these fields (for 'any').
#
# Notice that tasks will be started based on the cron's system
# daemon's notion of time and timezones.
#
# Out
```
  12. I may be going about this the wrong way but I have a Debian VM in unraid always running that even on my underpowered board uses barely any CPU (< 1% at idle.) With /mnt/user mapped in the VM it gives full access to the unraid shares and makes tasks like installing youtube-dl and other miscellaneous CLI apps much easier.
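In case it helps anyone: the guest-side mount for a mapped Unraid share is a 9p virtio mount keyed on the share's mount tag. A sketch of the /etc/fstab line in the Debian guest, assuming the mount tag is "unraid" (check your VM's XML for the actual target tag; yours may differ):

```
# /etc/fstab in the Debian guest -- the mount tag "unraid" is an assumption
unraid  /mnt/user  9p  trans=virtio,version=9p2000.L,_netdev,rw  0  0
```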
  13. I keep a single, weekly appdata backup on the array (in a non-cached share). That shouldn't consume much space. It protects against cache drive failure but not complete system failure. For that (and versioning) I use Duplicacy to back up the backup share to an external drive.
  14. Given the coincident timestamps I assume they were calls from the Web GUI. There are no calls to sdspin in the autofan script. I'll test that, but I suspect not. I wrote a script that uses /proc/diskstats to track disk activity and spin the disks down after X minutes of inactivity. It works for the pool drives, which don't spin up again until they're actually active. One thing I noticed in my investigation: calling smartctl -A without -n standby on these WD Blacks doesn't wake them from standby. When I spin them down with hdparm -y (bypassing unRAID's management) the Web GUI continues to report their decl
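The idea behind the script is simple: /proc/diskstats exposes cumulative per-device I/O counters, so if reads-completed plus writes-completed hasn't changed over an interval, the disk was idle. A stripped-down sketch of the approach (function names and details here are illustrative, not the actual script):

```shell
#!/bin/bash
# Sum reads-completed (field 4) and writes-completed (field 8) for one
# device; reads a diskstats-format file, /proc/diskstats by default.
io_count() {
    awk -v dev="$1" '$3 == dev { print $4 + $8 }' "${2:-/proc/diskstats}"
}

# Poll every $interval seconds; after $limit consecutive idle checks,
# issue a standby command directly (bypassing unRAID's management).
monitor() {
    local dev="$1" limit="$2" interval="${3:-60}"
    local last now idle=0
    last=$(io_count "$dev")
    while sleep "$interval"; do
        now=$(io_count "$dev")
        if [ "$now" = "$last" ]; then
            idle=$((idle + 1))
            if [ "$idle" -ge "$limit" ]; then
                hdparm -y "/dev/$dev"   # spin down
                idle=0
            fi
        else
            idle=0
        fi
        last="$now"
    done
}
```

Run as, e.g., `monitor sdf 15 60` to spin /dev/sdf down after ~15 idle minutes.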
  15. Right, I missed the hdparm check right before it. Full context:

```
Dec 20 17:53:31 NAS hdparm wrapper[26550]: caller is sdspin, grandpa is sdspin, device /dev/sdj, args "-C /dev/sdj"
Dec 20 17:53:31 NAS hdparm wrapper[26560]: caller is sdspin, grandpa is sdspin, device /dev/sdh, args "-C /dev/sdh"
Dec 20 17:53:31 NAS hdparm wrapper[26570]: caller is sdspin, grandpa is sdspin, device /dev/sdg, args "-C /dev/sdg"
Dec 20 17:53:31 NAS hdparm wrapper[26580]: caller is sdspin, grandpa is sdspin, device /dev/sde, args "-C /dev/sde"
Dec 20 17:53:31 NAS hdparm wrapper[26590]: caller is sdspin, grandpa
```
  16. Huh, it looks like the Web GUI is calling smartctl without -n standby. Oversight in RC2, or am I misreading?

```
Dec 20 17:53:31 NAS smartctl wrapper[26619]: caller is smartctl_type, grandpa is emhttpd, device /dev/sdg, args "-A /dev/sdg"
Dec 20 17:53:31 NAS smartctl wrapper[26642]: caller is smartctl_type, grandpa is emhttpd, device /dev/sde, args "-A /dev/sde"
Dec 20 17:53:31 NAS smartctl wrapper[26649]: caller is smartctl_type, grandpa is emhttpd, device /dev/sdh, args "-A /dev/sdh"
Dec 20 17:53:31 NAS smartctl wrapper[26645]: caller is smartctl_type, grandpa is emhttpd, device /dev/sdj,
```
  17. A few tweaks to scripts and commands but I got it working, thanks.
  18. Huh, interesting. Yes it does, and it seems to work when I run it manually (although the state change is not reflected in the Web GUI):

```
root@NAS:~# /usr/local/sbin/sdspin sdf up
root@NAS:~# hdparm -C /dev/sdf

/dev/sdf:
 drive state is:  active/idle
root@NAS:~# /usr/local/sbin/sdspin sdf down
root@NAS:~# hdparm -C /dev/sdf

/dev/sdf:
 drive state is:  standby
root@NAS:~#
```
  19. Yes, and good intuition. The standard version uses smartctl -A, but I customized mine to include --nocheck standby. That may still cause problems, but it also queries the array disks, which until recently spun down fine.
  20. Yes, I followed your advice. With the default timer set to 15: I changed the Fast pool disks from default to 15 and waited ~1 hr for spindown: no spindown. I then changed them back to default and waited ~1 hr for spindown: no spindown.
  21. No, they all work as expected:

```
root@NAS:~# hdparm -C /dev/sdf

/dev/sdf:
 drive state is:  active/idle
root@NAS:~# hdparm -C /dev/sdi

/dev/sdi:
 drive state is:  active/idle
root@NAS:~# hdparm -C /dev/sdj

/dev/sdj:
 drive state is:  active/idle
root@NAS:~# hdparm -y /dev/sdf

/dev/sdf:
 issuing standby command
root@NAS:~# hdparm -y /dev/sdi

/dev/sdi:
 issuing standby command
root@NAS:~# hdparm -y /dev/sdj

/dev/sdj:
 issuing standby command
root@NAS:~# hdparm -C /dev/sdf

/dev/sdf:
 drive state is:  standby
root@NAS:~# hdparm -C /dev/sdi

/dev/sdi:
 drive state is:  standby
root@NAS:~# hdparm
```
  22. hdparm -C doesn't seem to work with it, but smartctl and hdparm -y do.

```
root@NAS:~# hdparm -C /dev/sdb

/dev/sdb:
SG_IO: bad/missing sense data, sb[]:  f0 00 01 00 50 40 ff 0a 80 00 b4 00 00 1d 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
 drive state is:  unknown
root@NAS:~# smartctl --nocheck standby -i /dev/sdb
smartctl 7.1 2019-12-30 r5022 [x86_64-linux-5.10.1-Unraid] (local build)
Copyright (C) 2002-19, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Family:     Western Digital Elements / My Passport (USB, AF)
Device Model:     WDC
```
  23. Ha! Okay then. I'm still curious about the pool problem (unless the whole setup's changing), because that's been an issue for me since beta-25 (the first version I installed).