Dulanic Posted January 28, 2017

Hello, OK, I am going crazy over this. None of my disks will stay spun down. Even my empty drives will spin up immediately after I spin one down. The longest I can get ANY drive to stay spun down is maybe 3-4 seconds. Here is what I have checked so far:
1. All appdata is on a cache drive.
2. All actively used data is on the cache drive. The shares that are not on cache are ones I might access a few times a day.
3. Checked the Open Files plugin; all open processes/files are on the cache drive only.
4. Spin-up groups are disabled.
Any other ways to find out what the cause is?
Squid Posted January 28, 2017

Could be basically anything: an app scanning your files, Windows scanning the directories, limited RAM and cache_dirs causing the disks to spin back up, or appdata for your Docker apps not being set as a cache-only share (or cache: prefer), causing unRAID to actively think about where to put the data. Diagnostics is the first tool to help.
Dulanic (Author) Posted January 28, 2017

Memory usage is under 40%. Appdata is cache-only. Not sure... I just don't see anything that would actively be seeking. I guess it could be a renamer from CP or something... but if no files are on the drive(s) that keep spinning right back up, that wouldn't cause it, right? I posted my diagnostics also. mediaserver-diagnostics-20170128-1359.zip
Squid Posted January 28, 2017

Disk 4 looks like it's disabled, so any read from the array will cause all drives to spin up.
Dulanic (Author) Posted January 28, 2017

I don't understand? It had an XFS file system issue (unknown cause) and the XFS repair tool wouldn't fix it, because it didn't see any issue, so I formatted it and re-added it to the array. Then it reinitialized it. I don't see it as disabled? It shows green? https://snag.gy/JM0EuW.jpg
Dulanic (Author) Posted January 28, 2017

So I think I completely misunderstood the cache system and had cache off on most of my shares. Hopefully that will fix my problem.
Squid Posted January 28, 2017

Sorry, I missed that it rebuilt the drive.
garycase Posted January 29, 2017

Dulanic said: So I think I completely misunderstood the cache system and had cache off on most of my shares. Hopefully that will fix my problem.

I suspect that was indeed the issue -- i.e. any reference to any of your non-cached shares would cause at least one drive to spin up. I suspect with all of your shares cached you'll see far fewer drives spin up from writes. [They will still, of course, spin up if you're reading data that's on one of the data drives.]
Dulanic (Author) Posted January 30, 2017

Yeah, so even with cache on, I am seeing spin-up within seconds of spinning down. It seems something is writing. Is there any possible way to track what is writing? I see small parity write increases, but I have no way to know what is doing the writes so far. The longest I have ever seen a full spin-down last is 30 seconds. From my knowledge, what I have running shouldn't be actively writing... but something is, at least in small amounts.
garycase Posted January 30, 2017

Are you running some Dockers or VMs that might be writing to the array? If you're not sure which one(s), just disable all of them, and then enable them one at a time to see when your drives start being accessed. Clearly, if there are writes to the parity drive, there are also writes to at least one of your data drives -- so these are what are causing the drives to spin up. The trick now is simply to isolate WHAT is doing the writing.
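[Editor's note] One low-level way to see which array disk the writes are landing on is to diff the kernel's per-device counters in /proc/diskstats over a short window. This is a hedged sketch, not an unRAID feature: it just samples field 10 (sectors written) for each sdX device and shows which counters moved.

```shell
#!/bin/bash
# Sample the sectors-written counter for each sdX device, wait a bit,
# then show only the devices whose counter changed during the window.
# Field 3 of /proc/diskstats is the device name; field 10 is sectors written.
snap() { awk '$3 ~ /^sd[a-z]+$/ { print $3, $10 }' /proc/diskstats; }

before=$(snap)
sleep 5
after=$(snap)

# diff lines prefixed with '>' are devices that received writes
diff <(echo "$before") <(echo "$after") | grep '^>' || echo "no sdX writes in window"
```

Cross-reference the sdX names it prints against the drive identification shown on the Main tab to see which array slot is being written.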
Dulanic (Author) Posted January 31, 2017

No VMs; some Dockers, but all mappings are to cached drives. Just tried turning off ALL Dockers and the issue continues.
Fiservedpi Posted April 7, 2020

Any resolution to this? It seems like since 6.8.3 my spin-down groups aren't working.
MothyTim Posted April 9, 2020

On 4/7/2020 at 6:48 AM, Fiservedpi said: Any resolution to this? Seems like since 6.8.3 my spin down groups aren't working

I have the same issue since 6.8.3; drives don't spin down at all!
Ong Hui Hoong Posted May 3, 2020

On 4/10/2020 at 2:32 AM, MothyTim said: I have the same issue since 6.8.3 drives don't spin down at all!

I'm having the same issue as well.
Traygar Posted May 5, 2020

Same here, disks won't spin down.
Kaldek Posted June 15, 2020

Hi everyone, I have recently had the same issue of disks not spinning down. I seem to have managed to solve it (I believe), but I wanted to document what I believe the causes were.

Whenever disks were not spinning down, I could see a consistent ~3.4 KB/s read from the disks in question. I could not see what was reading these disks, as it was not listed in the "Open Files" or "File Activity" tools/plugins that I have installed.

So in my case, it looks like the cause was that I had shares which I had reconfigured to exist only on one disk, or to live purely on the cache drive (cache "only"), but the file allocation system of unRAID appeared to not be fully respecting this. It looks like even if there is only a blank directory on a disk that previously contained components of a share, any disk writes to that share still keep probing that disk and keeping it spun up.

The solution in my case was to do the following:
1. Shut down all VMs and Docker containers.
2. Edit all shares that need to be only on the cache to "Prefer" for the cache setting.
3. Run the Mover.
4. Edit the same shares again to use "Only" for the cache setting.
5. Review all shares that should only be on some (or one) of the disks, and make a list of these.
6. Download, install, and run unBALANCE.
7. Use unBALANCE to "Gather" any of those shares that should only be on specific disks, and place them only on the disk targets I wanted them on.
8. Stop unBALANCE.
9. Spin down all the disks I know should be spun down during idle times.
10. Start all the VMs and Docker containers I need running.

After that, I confirmed that my disks stayed spun down. For the past 30 minutes, I've not seen any of these disks spin up, so it looks stable (so far). Note that all my VMs and containers run off the SSD cache, which is a 1TB mirror. You might ask, "why care about this?"
Well, since my server is running 24/7, the difference in power consumption between the disks being spun up and not is about 35 watts. This seems like nothing, but power is not cheap here in Australia. During peak times we pay 38 cents per kWh. Every bit of my base load goes toward a yearly power bill of $1,500, and that's after all the rebates from my 5 kW solar array (which pays me 12c per kWh for exported power). My entire server rack - at idle now - pulls 151 watts. Two weeks ago that idle was 240 watts, which is a nearly 90-watt saving. I've gained about 40-45 watts of this saving from unRAID tuning. The rest came from replacing outdated Unifi UAP-AC access points with newer UAP-AC-LR access points that pull much less power.
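[Editor's note] After a Mover/unBALANCE pass like the one Kaldek describes, it is worth verifying that the consolidated shares really have nothing left on the other array disks. This sketch is a generic helper, not an unRAID command; the share name "media" in the usage example is hypothetical, and the mount-root parameter exists only so it can be tested outside /mnt.

```shell
#!/bin/bash
# Report which per-disk mounts still contain files for a given share,
# so a "cache only" or single-disk share can be confirmed clean.
# Usage: share_locations <share-name> [mount-root, defaults to /mnt]
share_locations() {
    local share="$1" root="${2:-/mnt}"
    local d n
    for d in "$root"/disk*; do
        [ -d "$d/$share" ] || continue
        n=$(find "$d/$share" -type f | wc -l)
        if (( n > 0 )); then
            echo "${d##*/}: $n file(s)"
        fi
    done
}
```

Example: `share_locations media` lists every array disk still holding files for a share named "media". Per Kaldek's observation that even an empty leftover directory may wake a disk, a follow-up pass with `-type d` instead of `-type f` can catch stray directories too.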
JonathanM Posted June 15, 2020

7 hours ago, Kaldek said: Every little bit of my base load ends up going towards a yearly power bill of $1,500 and that's after all the rebates from my 5kw/h solar array (which pays me 12c kw/h for exported power).

Wow, that's low. I know it's a matter of perspective, but my annual power spend is typically around $4,800 USD, and I'm in the southeastern US.
mika91 Posted September 6, 2020

Hi everyone, I have trouble with disks spinning down too, but only with one of them. The others spin down and stay spun down correctly. In the dashboard tab, the disk status is 'standby', but in reality it is spinning. If I run an hdparm -C command, its state is active. So what command does unRAID's dashboard use to say the disk is active/standby?
chris_netsmart Posted September 6, 2020

2 minutes ago, mika91 said: Hi everyone, I have trouble with disk spinning down too, but only with one of them. The others go and stay spin down correctly. In the dashboard tab, the disk status is 'standby', but it spins in reality. If I run a hdparm -C command, its state is active. So what command uses unraid/dashboard to say the disk is active/standby ?

Which disk is it? Have you checked the properties for that disk, to see if the spin-down function is set?
mika91 Posted September 6, 2020

It's an empty data disk. If I manually press the spin-down/up buttons in the GUI, it responds accordingly (and the hdparm -C command reflects it). But after a certain time, the disk spins up (hdparm confirms it) but stays 'standby' in the dashboard. No file activity (except on /mnt/cache).
mika91 Posted September 8, 2020

Did some more tests tonight. All Docker containers stopped. Manually spun down the 'faulty' disk -> hdparm status is 'standby'. After exactly 15 minutes, the disk spins up (but unRAID continues to mark it as spun down). I checked 'File Activity'; it's clear: nothing on any disk. Could it be a firmware issue with the disk (Seagate Barracuda 7200 ST3000DM001), or is a tweak needed? However, it sounds like unRAID relies on the commands it sends to the disks to determine their status (no 'real-time' status).
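[Editor's note] The manual checks mika91 describes (dashboard state vs. `hdparm -C`) can be automated with a small polling helper. This is a sketch under assumptions: /dev/sdb is a placeholder device name, and only `hdparm -C`'s documented "drive state is:" output line is parsed.

```shell
#!/bin/bash
# Poll a drive's self-reported power state so it can be compared against
# what the dashboard shows. DEV is a placeholder; substitute your disk.
DEV=${DEV:-/dev/sdb}

drive_state() {
    # Extract "active/idle" or "standby" from the "drive state is:" line
    hdparm -C "$1" 2>/dev/null | awk -F':[[:space:]]*' '/drive state/ { print $2 }'
}

log_state_once() {
    local state
    state=$(drive_state "$DEV")
    echo "$(date '+%F %T') ${DEV##*/} ${state:-unknown}"
}

# Example: append a timestamped state line every minute, e.g.
#   while sleep 60; do log_state_once; done >> /var/log/diskstate.log
```

A log like this would show exactly when the drive left standby, which can then be matched against syslog or File Activity timestamps.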
JonathanM Posted September 8, 2020

2 hours ago, mika91 said: seagate barracuda 7200 ST3000DM001

That drive is so famous for its high failure rate that it has its own wiki page: https://en.wikipedia.org/wiki/ST3000DM001
slikone27 Posted March 10, 2021

I think I read somewhere that the SMART test was keeping the drives awake by resetting the spin-down timer. If you use a 1-hour spin-down delay, try changing the Tunable (poll_attributes) setting to something above the hour (or above whatever spin-down delay you have set). I have mine set to a 1-hour spin-down and 7200 for the tunable setting.
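[Editor's note] The relationship slikone27 describes reduces to simple arithmetic: if SMART attributes are polled more often than the spin-down delay, each poll can reset the idle timer before the threshold is ever reached. A sketch using the values from the post (1-hour spin-down, 7200-second poll); these are illustrative, not read from any config:

```shell
#!/bin/bash
# Sanity-check that SMART polling is rarer than the spin-down delay.
# Values mirror slikone27's post; substitute your own settings.
spin_down_delay_min=60     # "1 hour" spin-down delay from Disk Settings
poll_attributes_sec=7200   # Tunable (poll_attributes), in seconds

if (( poll_attributes_sec > spin_down_delay_min * 60 )); then
    echo "OK: SMART poll interval exceeds spin-down delay"
else
    echo "WARNING: SMART polling may keep resetting the spin-down timer"
fi
```

With a 1-hour (3600-second) delay, 7200 satisfies the check; a poll interval of 1800 would not.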