Report Comments posted by CS01-HS
-
My HDDs also spin down properly (with autofan and telegraf running).
Not only that, but my non-cache pool drives, which never spun down on their own, do now. Perfect.
-
14 hours ago, limetech said:
Executing any kind of SMART operation increments both number I/O's to the device (in this case Reads) and number sectors transferred (in this case sectors read). This is true whether device is in standby or not. HOWEVER, I have a fix for this coming in 6.9.1
I might have a simple fix, unless I'm missing something.
sdg is Disk2 of my array.
Here are diskstats for sdg and its only partition sdg1:
root@NAS:~# grep sdg /proc/diskstats
   8      96 sdg 980673 22075722 182230404 6487392 284407 19425162 157662544 2604401 0 6664111 9193913 0 0 0 0 1749 102119
   8      97 sdg1 681558 22075722 182067420 1286206 282656 19425162 157662544 2491636 0 1400537 3777843 0 0 0 0 0 0
And here are diskstats after a smart call:
root@NAS:~# grep sdg /proc/diskstats
   8      96 sdg 980680 22075722 182230408 6487516 284407 19425162 157662544 2604401 0 6664239 9194037 0 0 0 0 1749 102119
   8      97 sdg1 681558 22075722 182067420 1286206 282656 19425162 157662544 2491636 0 1400537 3777843 0 0 0 0 0 0
You can see several fields have increased on sdg but none on sdg1.
Now I open a file on Disk2:
root@NAS:~# grep sdg /proc/diskstats
   8      96 sdg 980734 22077348 182243848 6487581 284407 19425162 157662544 2604401 0 6664306 9194102 0 0 0 0 1749 102119
   8      97 sdg1 681612 22077348 182080860 1286271 282656 19425162 157662544 2491636 0 1400604 3777908 0 0 0 0 0 0
Which shows up as reads on the partition sdg1.
Could the fix be as simple as monitoring partitions for activity rather than devices?
I've been using this method to monitor and spin down an attached USB drive and my 2nd pool (3 spinners in BTRFS RAID5) for a few months now with no apparent problems.
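For illustration, the partition-level counters can be read straight from /proc/diskstats; a minimal sketch (the helper name is mine, not anything in unraid):

```shell
# part_io_count: sum the reads-completed (field 4) and writes-completed
# (field 8) counters for a named partition in /proc/diskstats.
# SMART queries bump the device line (sdg) but not the partition line
# (sdg1), so sampling the partition avoids the false "activity".
part_io_count() {
  # $1 = partition name (e.g. sdg1); $2 = stats file, defaults to
  # /proc/diskstats (overridable so the function can be tested)
  awk -v part="$1" '$3 == part { print $4 + $8 }' "${2:-/proc/diskstats}"
}
```

Two samples taken X minutes apart that return the same number mean the partition, and hence the filesystem on it, was idle.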
-
On 3/2/2021 at 11:58 AM, limetech said:
If you remove Dynamix System Autofan plugin, does issue persist?
Spindown works only if I disable autofan and every other addon that queries smart data (telegraf, etc.). Once spun down, these queries don't seem to wake the disks, but it's too soon to say definitively.
-
2 hours ago, John_M said:
This is a place for reporting bugs in the current pre-release version Unraid, not other operating systems.
I assumed Apple was phasing out old model types, in which case it'd make sense for unraid to use a more recent one. But right, maybe they just forgot and plan to fix it.
-
I set it up as a user script to run hourly but honestly can't remember why.
It would make sense to put those lines in config/go so it's run on startup, but maybe that doesn't work because smb.service is overwritten on array start?
The fix for that would be to set the user script to run on array start but I didn't do that either, strange.
-
37 minutes ago, mgutt said:
Which CPU has your server?
-
I assumed this slowness was specific to my Mac environment - painfully slow Time Machine backups. Looks like it's unRAID, huh.
-
I've done some digging on this - here's what I found.
Mount the unraid share "system" on my Mac and check its spotlight status:
[macbook-pro]:~ $ mdutil -s /Volumes/system
/System/Volumes/Data/Volumes/system:
        Server search enabled.
[macbook-pro]:~ $
But as best I can tell "server search" is not in fact enabled.
Turns out samba 4.12.0 changed this search-related default:
Note that when upgrading existing installations that are using the previous default Spotlight backend Gnome Tracker must explicitly set "spotlight backend = tracker" as the new default is "noindex".
If I add the following to SMB extras:
[global]
spotlight backend = tracker
in addition to this share-specific code:
[system]
path = /mnt/user/system
spotlight = yes
Search works again!
When I check spotlight status I get the following:
[macbook-pro]:~ $ mdutil -s /Volumes/system
/System/Volumes/Data/Volumes/system:
        Indexing disabled.
[macbook-pro]:~ $
Hopefully this is useful toward a general fix.
I'd rather avoid a custom entry for each share.
-
Changed Status to Closed
Closing this because I don't think it's unRAID-specific. Maybe it's the new kernel or new driver but I had a bunch of video-related problems (including system freezes) until I added intel_iommu=on,igfx_off to syslinux config and installed a dummy HDMI plug. It's been stable since.
-
Limetech's aware and on it, from what they posted. Fixes are coming in RC3.
-
12 hours ago, TRusselo said:
Found my issue. RESOLVED for me.
Grafana Unraid Stack (GUS) Docker. ill go there for support.
Right.
Specifically it's the smartctl calls in telegraf (part of GUS). Alternatively, you can comment out the [[inputs.smart]] block in Grafana-Unraid-Stack/telegraf/telegraf.conf to disable them.
It seems that for whatever reason (new kernel, new smartctl) these calls are recorded as disk activity, which causes the disks to fail unraid's "If inactive for X minutes, spindown" test.
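Commenting the block out might look something like this (a sketch; `nocheck` is a documented option of telegraf's smart input plugin, but the exact contents of the generated config may differ):

```toml
# Grafana-Unraid-Stack/telegraf/telegraf.conf
#
# Comment out the whole SMART input block to stop the polling:
#
# [[inputs.smart]]
#   ## if you keep it enabled instead, this makes smartctl exit
#   ## rather than wake a spun-down drive:
#   # nocheck = "standby"
```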
-
I think you're seeing smartctl/hdparm polling. The 85B/s is when the drive's in standby and the 341B/s when it's active.
-
No activity on array disks between spindown and unraid spinning them up. Alternate pool support was just added to the plugin but it introduced a bug that prevents starting the service. Limetech said there'd be spindown fixes in RC3 so maybe waiting's the best strategy.
-
35 minutes ago, SimonF said:
Not sure if autofan would be using sdspin
Given the coincident timestamps I assume they were calls from the Web GUI. No calls to sdspin in the autofan script.
38 minutes ago, SimonF said:
Do you have the file activity plugin to see if any reads/writes on files which may stop spin down from happening.
I'll test that, but I suspect not. I wrote a script that uses /proc/diskstats to track disk activity and spin the disks down after X minutes of inactivity. It works for the pool drives, which don't spin up again until there's real activity.
One thing I noticed in my investigation - calling smartctl -A without -n standby on these WD Blacks doesn't wake them from standby. When I spin them down with hdparm -y (bypassing unRAID's management) the Web GUI continues to report their declining temperatures.
46 minutes ago, SimonF said:
To check if it was an issue with multiple drives in a pool, I added a second disk into my testpool and on my system spins them down fine.
Thanks I appreciate the help.
-
Right, I missed the hdparm check right before it. Full context:
Dec 20 17:53:31 NAS hdparm wrapper[26550]: caller is sdspin, grandpa is sdspin, device /dev/sdj, args "-C /dev/sdj"
Dec 20 17:53:31 NAS hdparm wrapper[26560]: caller is sdspin, grandpa is sdspin, device /dev/sdh, args "-C /dev/sdh"
Dec 20 17:53:31 NAS hdparm wrapper[26570]: caller is sdspin, grandpa is sdspin, device /dev/sdg, args "-C /dev/sdg"
Dec 20 17:53:31 NAS hdparm wrapper[26580]: caller is sdspin, grandpa is sdspin, device /dev/sde, args "-C /dev/sde"
Dec 20 17:53:31 NAS hdparm wrapper[26590]: caller is sdspin, grandpa is sdspin, device /dev/sdf, args "-C /dev/sdf"
Dec 20 17:53:31 NAS hdparm wrapper[26601]: caller is sdspin, grandpa is sdspin, device /dev/sdi, args "-C /dev/sdi"
Dec 20 17:53:31 NAS smartctl wrapper[26619]: caller is smartctl_type, grandpa is emhttpd, device /dev/sdg, args "-A /dev/sdg"
Dec 20 17:53:31 NAS smartctl wrapper[26642]: caller is smartctl_type, grandpa is emhttpd, device /dev/sde, args "-A /dev/sde"
Dec 20 17:53:31 NAS smartctl wrapper[26649]: caller is smartctl_type, grandpa is emhttpd, device /dev/sdh, args "-A /dev/sdh"
Dec 20 17:53:31 NAS smartctl wrapper[26645]: caller is smartctl_type, grandpa is emhttpd, device /dev/sdj, args "-A /dev/sdj"
Dec 20 17:53:31 NAS smartctl wrapper[26646]: caller is smartctl_type, grandpa is emhttpd, device /dev/sdf, args "-A /dev/sdf"
Dec 20 17:53:31 NAS smartctl wrapper[26655]: caller is smartctl_type, grandpa is emhttpd, device /dev/sdi, args "-A /dev/sdi"
EDIT: And now that I'm looking at it I see the smartctl calls in the original autofan are wrapped in hdparm -C calls, so my -n standby isn't necessary.
-
Huh it looks like the Web GUI is calling smartctl without -n standby.
Oversight in RC2 or am I misreading?
Dec 20 17:53:31 NAS smartctl wrapper[26619]: caller is smartctl_type, grandpa is emhttpd, device /dev/sdg, args "-A /dev/sdg"
Dec 20 17:53:31 NAS smartctl wrapper[26642]: caller is smartctl_type, grandpa is emhttpd, device /dev/sde, args "-A /dev/sde"
Dec 20 17:53:31 NAS smartctl wrapper[26649]: caller is smartctl_type, grandpa is emhttpd, device /dev/sdh, args "-A /dev/sdh"
Dec 20 17:53:31 NAS smartctl wrapper[26645]: caller is smartctl_type, grandpa is emhttpd, device /dev/sdj, args "-A /dev/sdj"
Dec 20 17:53:31 NAS smartctl wrapper[26646]: caller is smartctl_type, grandpa is emhttpd, device /dev/sdf, args "-A /dev/sdf"
Dec 20 17:53:31 NAS smartctl wrapper[26655]: caller is smartctl_type, grandpa is emhttpd, device /dev/sdi, args "-A /dev/sdi"
-
A few tweaks to scripts and commands but I got it working, thanks.
-
3 minutes ago, SimonF said:
You can do sdspin sdf up and sdspin sdf down
Huh, interesting. Yes it does, and it seems to work when I run it manually (although the state change is not reflected in the Web GUI).
root@NAS:~# /usr/local/sbin/sdspin sdf up
root@NAS:~# hdparm -C /dev/sdf
/dev/sdf:
 drive state is:  active/idle
root@NAS:~# /usr/local/sbin/sdspin sdf down
root@NAS:~# hdparm -C /dev/sdf
/dev/sdf:
 drive state is:  standby
root@NAS:~#
-
13 minutes ago, SimonF said:
I dont use autofan, does that query the disks for temps?
Yes and good intuition.
The standard version uses smartctl -A, but I customized mine to include --nocheck standby.
That may still cause problems, but it queries the array disks too, which until recently spun down fine.
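For illustration, the customized temperature query might look like this (the parsing helper is mine, not autofan's; RAW_VALUE is the 10th column of the smartctl -A attribute table):

```shell
# Extract the drive temperature from `smartctl -A` output.
# RAW_VALUE is the 10th column of the attribute table.
smart_temp() {
  awk '/Temperature_Celsius/ { print $10 }'
}

# Usage (device name illustrative). --nocheck standby tells smartctl
# to exit without touching the drive when it is spun down, so polling
# doesn't wake it:
#   smartctl -A --nocheck standby /dev/sdg | smart_temp
```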
-
35 minutes ago, SimonF said:
Did you try changing the spin down timer option for the fast pool
Yes, I followed your advice.
With default timer set to 15:
I changed the Fast Pool disks from default to 15 and waited ~1hr for spindown: no spindown
I then changed them back to default and waited ~1hr for spindown: no spindown
-
28 minutes ago, SimonF said:
Do you get the same results with sdj/f/i
No, they all work as expected:
root@NAS:~# hdparm -C /dev/sdf
/dev/sdf:
 drive state is:  active/idle
root@NAS:~# hdparm -C /dev/sdi
/dev/sdi:
 drive state is:  active/idle
root@NAS:~# hdparm -C /dev/sdj
/dev/sdj:
 drive state is:  active/idle
root@NAS:~# hdparm -y /dev/sdf
/dev/sdf:
 issuing standby command
root@NAS:~# hdparm -y /dev/sdi
/dev/sdi:
 issuing standby command
root@NAS:~# hdparm -y /dev/sdj
/dev/sdj:
 issuing standby command
root@NAS:~# hdparm -C /dev/sdf
/dev/sdf:
 drive state is:  standby
root@NAS:~# hdparm -C /dev/sdi
/dev/sdi:
 drive state is:  standby
root@NAS:~# hdparm -C /dev/sdj
/dev/sdj:
 drive state is:  standby
root@NAS:~#
-
5 hours ago, limetech said:
With this device spun up, take a look at drive state reported by 'hdparm -C'
It should show as active/idle.
Next spin it down with 'hdparm -y' and then again take a look at drive state reported by 'hdparm -C'
Does it say "standby"?
hdparm -C doesn't seem to work with it, but smartctl and hdparm -y do.
root@NAS:~# hdparm -C /dev/sdb
/dev/sdb:
SG_IO: bad/missing sense data, sb[]:  f0 00 01 00 50 40 ff 0a 80 00 b4 00 00 1d 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
 drive state is:  unknown
root@NAS:~# smartctl --nocheck standby -i /dev/sdb
smartctl 7.1 2019-12-30 r5022 [x86_64-linux-5.10.1-Unraid] (local build)
Copyright (C) 2002-19, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Family:     Western Digital Elements / My Passport (USB, AF)
Device Model:     WDC WD40NMZW-11GX6S1
Serial Number:    WD-WX11D9660PZ7
LU WWN Device Id: 5 0014ee 65c75afb2
Firmware Version: 01.01A01
User Capacity:    4,000,753,472,000 bytes [4.00 TB]
Sector Sizes:     512 bytes logical, 4096 bytes physical
Rotation Rate:    5400 rpm
Form Factor:      2.5 inches
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   ACS-3 (minor revision not indicated)
SATA Version is:  SATA 3.1, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is:    Sun Dec 20 12:41:35 2020 EST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
Power mode is:    ACTIVE or IDLE

root@NAS:~# hdparm -y /dev/sdb
/dev/sdb:
 issuing standby command
SG_IO: bad/missing sense data, sb[]:  f0 00 01 00 50 40 00 0a 80 00 00 00 00 1d 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
root@NAS:~# hdparm -C /dev/sdb
/dev/sdb:
SG_IO: bad/missing sense data, sb[]:  f0 00 01 00 50 40 00 0a 80 00 b4 00 00 1d 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
 drive state is:  unknown
root@NAS:~# smartctl --nocheck standby -i /dev/sdb
smartctl 7.1 2019-12-30 r5022 [x86_64-linux-5.10.1-Unraid] (local build)
Copyright (C) 2002-19, Bruce Allen, Christian Franke, www.smartmontools.org

Device is in STANDBY mode, exit(2)
root@NAS:~#
5 hours ago, limetech said:
This not spinning down is puzzling... investigating.
Thanks. Like I said, it's not a result of the recent changes - it's been a problem since I added the pool in beta25.
-
30 minutes ago, JorgeB said:
Spindown fixes are already coming with rc3, probably best to wait for that and retest.
Ha! Okay then. I'm still curious about the pool problem (unless the whole setup's changing), because that's been an issue for me since beta-25 (the first one I installed).
-
6 hours ago, dlandon said:
You shouldn't spin down the disks this way. Unraid needs to manage the spin up/down. It keeps track of the disk spindown status so it doesn't have to query the disks. Use the UI to spin up/down disks.
I have a USB drive mounted with Unassigned Devices and 3 SATA disks in a 2nd pool that don't spin down on their own, so I wrote a script that runs periodically to spin them down.
Is there a risk? I've been running it for months, but not on the array disks, which (prior to rc1) spun down on their own.
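A minimal sketch of such a periodic spin-down script (the helper names, state directory, and device names are illustrative, not the actual script):

```shell
#!/bin/bash
# Spin down unmanaged drives after a period of no partition activity.
# Run periodically (e.g. hourly via the User Scripts plugin).

STATE_DIR=${STATE_DIR:-/tmp/spindown}

# Sum the reads-completed (field 4) and writes-completed (field 8)
# counters for a named partition in /proc/diskstats.
part_io_count() {
  awk -v part="$1" '$3 == part { print $4 + $8 }' "${2:-/proc/diskstats}"
}

# True only when a previous sample exists and the counter hasn't moved.
unchanged() {
  [ -n "$1" ] && [ "$1" = "$2" ]
}

# Compare this run's counter against the previous run's; if nothing
# changed, issue a standby command with hdparm -y.
check_and_spindown() {
  local dev=$1 now prev
  mkdir -p "$STATE_DIR"
  now=$(part_io_count "${dev}1")
  prev=$(cat "$STATE_DIR/$dev" 2>/dev/null)
  echo "$now" > "$STATE_DIR/$dev"
  if unchanged "$prev" "$now"; then
    hdparm -y "/dev/$dev" >/dev/null 2>&1
  fi
}

# Example: check_and_spindown sdb
```

On the first run there is no previous sample, so nothing is spun down; standby commands only start on the second and later runs.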
[6.9.1] R/W counters don't increment despite disk activity
in Stable Releases
Posted
Is there a risk of data corruption if unraid spins down the disk while another partition is being written to? That'd be my only concern.