Disk Spin Down Delay... what's a good starting point?



Hey guys, I can't seem to find any help in this area on the forums, so here's my situation...

 

My file server will be running 24/7. It's really small compared to others on this forum, but it will grow as my income grows. :)

 

Quick specs:

 

3 x 500GB HDD for Data

1 x 500GB HDD for Parity

 

3-in-3 drive cage with cooling. My drives run no hotter than 34°C when spinning at full speed constantly, and only the parity drive gets that hot; the others run cooler.

 

Anyway, for the most part my file server doesn't really do much during the day (it mostly serves movies, and always in the evening). I do, however, have one drive that I call my important data drive, which holds a lot of files that I'd be pulling back and forth over the network constantly.

 

My question is:

 

1) What do most people have their disk spin down delay set to?

 

2) How long does it generally take for a disk to spin up from sleep?

 

3) Is there any way to set certain disks to never spin down, or to spin down less often than others?

 

thanks

Link to comment

Once you have upgraded to the latest versions of unRAID, you can set individual spin down controls on each drive. The system default begins at 1 hour, a decent figure for most usage. You can change that on the Settings tab, then override it on an individual drive basis by clicking the drive link next to the green ball. Everyone has their own idea about what is best for their system, so it's hard to make a recommendation. Disks generally take 5 to 8 seconds to spin up, which is usually not long enough to cause most applications to give up waiting. Your drive cooling sounds very good.

Link to comment

Thanks - I've set mine individually and all seems to be fine so far. :)

 

Now to my next question... is spinning drives up and down OK for the HDDs? I mean, doing it too often won't cause them to start deteriorating, will it?

 

You ask a LOT of questions. It would be really helpful, as a new and inquisitive user, to document your Q&As into a short wiki article so that future inquisitive new users can benefit from your experiences.

 

Back to the question. The short answer is we don't know. The number of people who have a means of spinning down their drives is small compared to the huge population that just keeps them running all the time (at least whenever their computer is on). This means that most people spin their drives up and down once a day. Setting up unRAID to do it every 15 minutes will result in starting and stopping them dozens of times each day. I'm not sure the disks are designed for this.

 

Repeated heating and cooling of a drive can't be good for longevity. The hotter the disks run compared to ambient, IMO, the worse it is to spin them up and down frequently.

 

I have my disk spindown set to 5 hours, but I do manually spin the disks down individually using myMain when I know I am done with a disk or set of disks.

 

When the disks are already spun up for a parity check or some other reason, that's when I tend to go to the SMART view. That way, checking SMART doesn't cause extra spin-ups.

 

I am working on some changes to the myMain smart view to only process drives that are already spun up.  A user can spin them up if desired, but this way people can check the status of the ones that are spinning (and any WD drives which report SMART stats without spinning up the drive) whenever they want.

Link to comment

I use 45 minutes with the ls -R trick and 4GB of RAM.

 

This results in the drives being spun down almost all the time, except for the one I am pulling data from, and at the same time I can browse shares without causing spin-ups.

 

However, I have a heat problem and wouldn't use this low a setting if I didn't.

 

In a perfect world I would set spin down based on temperature and the number of drives spun up (e.g. the amount of heat the system produces when all drives are spun up is way more than when only 2 are).

Link to comment

I use 45 minutes with the ls -R trick and 4GB of RAM.

 

This results in the drives being spun down almost all the time, except for the one I am pulling data from, and at the same time I can browse shares without causing spin-ups.

 

I've tried to read up on that "ls -R trick", but there seem to be different scripts out there and I wouldn't know what I'm doing... Most of my disks spin up when browsing through user shares and it's really starting to annoy me. Today I installed a cache drive hoping that I could reduce the number of spin-ups that way, but it only works when I copy the files directly to the cache drive.

 

I only have 1GB of RAM installed, but I wouldn't mind adding more if it actually helped with the spin-ups. It would really help if one of the experts here could provide a simple step-by-step tutorial for n00bs like me.  :-[

 

 

Thanks in advance!

Link to comment

I have four HDDs total, which includes one parity drive. I have one data drive that is solely used for filing and data work, which I'd be using constantly during the day, whilst the others not so much, as they are media disks and only get used when watching films or shows (I may watch 1-3 titles daily).

 

How about the parity drive? Should it be fine spinning down whilst the others are spun up? Is there no problem when writing to a disk while your parity drive takes time to spin up?

 

I'm with CHRIZ, I would like to try the ls -R trick (I also checked the forums and found variations of the script). Any help would be much appreciated. :) I have 2GB of DDR667; I don't know if it would suffice, but I can easily upgrade to 4GB for FREE. :)

 

Thanks!

Link to comment

Hey NAS...

 

I set up this line in my custom script in /custom/etc/rc.d:

 

echo "*/1 * * * * /boot/bin/ls-r.sh >/dev/null 2>&1" >> /var/spool/cron/crontab.5000

 

my ls-r.sh contains:

 

ls -R /mnt/user >/dev/null 2>&1

sleep 30

ls -R /mnt/user >/dev/null 2>&1

 

Did I do it right? How do I tell if cron is really executing this? I have a feeling it isn't; my drives are still spinning up.

 

  ???
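
One simple way to check (an illustrative tweak on my part, not something suggested in this thread) is to make the script leave a timestamp every time cron fires, e.g.:

#!/bin/sh
# same ls-r.sh as above, plus a timestamp so you can tell when cron runs it
date >> /var/log/ls-r.log    # hypothetical log file; any writable path will do

ls -R /mnt/user >/dev/null 2>&1
sleep 30
ls -R /mnt/user >/dev/null 2>&1

If /var/log/ls-r.log doesn't gain a line every minute, cron never picked the entry up (check that the crontab line really got appended and that the script is executable).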

 

EDIT:  ooopppss... I'm not currently using user shares  :-[

What would I use for just 10 disk shares?

 

Link to comment

You would need to add an ls for each disk.
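
For example (just a sketch, assuming the disk shares are mounted at /mnt/disk1 through /mnt/disk10 - substitute your actual mount points), ls-r.sh could become:

#!/bin/sh
# walk every disk share so all of their directory entries end up in the cache
for d in /mnt/disk1 /mnt/disk2 /mnt/disk3 /mnt/disk4 /mnt/disk5 \
         /mnt/disk6 /mnt/disk7 /mnt/disk8 /mnt/disk9 /mnt/disk10
do
  ls -R "$d" >/dev/null 2>&1
done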

 

Also, I nohup the script rather than using cron, as I was unsure what would happen when two instances were called, i.e. it takes 10 minutes to ls mine the first time, so does that mean you end up with 10 scripts running?

 

Apart from that, it looks good to me.

 

Link to comment

 

I'm with CHRIZ, I would like to try the ls -R trick (I also checked the forums and found variations of the script). Any help would be much appreciated. :) I have 2GB of DDR667; I don't know if it would suffice, but I can easily upgrade to 4GB for FREE. :)

 

Thanks!

 

Anyone?  :( If someone took the time to write a short guide, we could add it to the wiki as well. This is the #1 issue that's bothering me and I'm sure it will only get worse once I add more drives.

Link to comment

Just going by what JimWhite said....does this look about right?

 

OK, so obviously I must have to create some sort of custom script to let unRAID know to run my ls-r.sh script? Is this correct? How do I create it? And from what he's said, I guess I should add this line to that custom script:

 

echo "*/1 * * * * /boot/bin/ls-r.sh >/dev/null 2>&1" >> /var/spool/cron/crontab.5000

 

.....

 

I then need to create an ls-r.sh script with the following contents:

 

ls -R /mnt/user >/dev/null 2>&1

sleep 30

ls -R /mnt/user >/dev/null 2>&1

 

Does this mean it will not spin up the drives so often when browsing the contents of my user shares? I have one drive I don't use as a user share and browse directly off the disk... would that mean I'd also use this command:

 

ls -R /mnt/hda >/dev/null 2>&1

sleep 30

ls -R /mnt/hda >/dev/null 2>&1

 

"hda" is the disk im referring to, which is Disk3 in my unraid system....

 

Help much appreciated..

 

 

 

Link to comment

Don't just copy this blindly, as it's set up for me and has a bit of debugging in it:

 

#!/bin/sh

i=1
while [ 1 ]
do
  ls -R /mnt/user >/dev/null 2>&1
  # Modify sleep time (in seconds) as needed below
  sleep 10
  #let i=i+1
  # echo $i;
done

 

Then call it with something like:

 

nohup nice /boot/scripts/cachedaemon.sh &

 

I am not a fan of the cron way, because if one run takes a while it will be called again blindly on the next cron interval.
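
If you do want to stick with cron, one common way around that worry (my own suggestion, not something from this thread, and it assumes the flock utility is present on your build) is to wrap the job in a lock file so a second invocation exits immediately while the first is still walking the disks:

# crontab entry: skip this run if the previous one is still going
*/1 * * * * flock -n /var/lock/ls-r.lock /boot/bin/ls-r.sh >/dev/null 2>&1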

Link to comment

Sorry, I'm really new to Linux... but I hope I got this right...

 

Do I copy the contents below into a text file and save it as cachedaemon.sh in /boot/scripts/?

 

Then call the script with:

 

nohup nice /boot/scripts/cachedaemon.sh &

 

......

 

I noticed yours is for user shares... if I wanted it for a specific disk, would I use:

 

#!/bin/sh

i=1
while [ 1 ]
do
  ls -R /mnt/hda >/dev/null 2>&1
  # Modify sleep time (in seconds) as needed below
  sleep 10
  #let i=i+1
  # echo $i;
done

 

OR

 

#!/bin/sh

i=1
while [ 1 ]
do
  ls -R /mnt/Disk3 >/dev/null 2>&1
  # Modify sleep time (in seconds) as needed below
  sleep 10
  #let i=i+1
  # echo $i;
done

 

Regards,

 

Link to comment

Feel free to remove anything with just a # at the beginning. Also you don't need... actually, hold on, here:

 

#!/bin/sh

while [ 1 ]
do
  ls -R /mnt/user  >/dev/null 2>&1
  sleep 10
done

 

 

If you install the lsof package you can do this to see it's still running (if you called it cachedaemon):

 

lsof | grep cachedaemon

cachedaem 29108  root  255r      REG      8,97      157        40 /boot/scripts/cachedaemon.sh

 

If you are running this command on several drives, only have one "sleep" in the loop, not one per drive.
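
In other words, something along these lines (a sketch only, assuming three data disks mounted at /mnt/disk1 through /mnt/disk3 - use your own paths):

#!/bin/sh
while [ 1 ]
do
  ls -R /mnt/disk1 >/dev/null 2>&1
  ls -R /mnt/disk2 >/dev/null 2>&1
  ls -R /mnt/disk3 >/dev/null 2>&1
  sleep 10    # one sleep per pass over all the drives, not one per drive
done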

 

Link to comment

Yes, but you don't want to mess with /proc.

 

The best way to test is in the real world. Wait for your drives to spin down then try to browse your shares.

 

You should find them fast, and no drives should spin up.

 

It is NOT, however, perfect. Moving files around, playing videos and other things can cause it to lose its cache. But it IS very, very good.

 

That nohup command of jimwhite's is better than mine, although using nice is always a good idea and so are full paths.

 

Untested, but it would be something like:

 

nohup nice /boot/scripts/cached.sh > /dev/null 2>&1 < /dev/null &

 

Also, this script could be improved considerably by parsing /proc buffer counts, i.e. modifying the sleep timer based on deltas in the /proc counts. Might be worth doing.
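
A rough sketch of that idea (my own illustration, untested, and it assumes the Cached figure in /proc/meminfo is a reasonable proxy for how much of the cache has been evicted): re-walk sooner when the cached amount drops, back off when it holds steady.

#!/bin/sh
# adaptive version: shorten the interval when the page/dentry cache shrinks
last=0
while [ 1 ]
do
  ls -R /mnt/user >/dev/null 2>&1
  now=$(awk '/^Cached:/ {print $2}' /proc/meminfo)   # cached memory in kB
  if [ "$now" -lt "$last" ]
  then sleep 10      # cache shrank since the last pass - check again soon
  else sleep 60      # cache looks stable - wait longer
  fi
  last=$now
done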

Link to comment

Building on the script below.

 

I call it cache_user.sh

 

This has the benefit of logging to syslog and creating a pid file as /var/run/cache_user.pid (or whatever you name it).

 

To stop the daemon, just remove the pid file (or kill the shell, it will eventually remove the pid file and end the loop).

 

One benefit here is you just run it.

It should automatically nohup/disown itself into the background.

 

I may just create an /etc/rc.d/rc.cache_user script and create an installable package to simplify it.

 

One thing has me curious: this does an ls -R, but I'm not sure it fully caches all the "stat" blocks.

The directory entries are read fully and cached, but the information about each file (type, time, size) may not be.

 

 

It may be more effective to use

 

find /mnt/user -ls >/dev/null 2>&1

 

or

find /mnt/user -type d >/dev/null 2>&1

This has the effect of doing a stat on each name to see if it is a directory and only printing directories (which go to null anyway)

 

or do an

 

ls -lR >/dev/null 2>&1

 

correct me if I'm wrong.

 

 

Example of times on my system with various commands.

 

root@unraid /mnt/user #time ls -R /mnt/user >/dev/null 2>&1
real    0m55.372s
user    0m8.390s
sys     0m11.540s

root@unraid /mnt/user #time ls -lR /mnt/user >/dev/null 2>&1
real    1m4.935s
user    0m13.310s
sys     0m15.350s

root@unraid /mnt/user #time find /mnt/user >/dev/null 2>&1
real    0m17.524s
user    0m1.390s
sys     0m3.480s

root@unraid /mnt/user #time find /mnt/user -type f >/dev/null 2>&1
real    0m17.341s
user    0m1.610s
sys     0m3.020s

root@unraid /mnt/user #time find /mnt/user -type d >/dev/null 2>&1
real    0m17.060s
user    0m1.320s
sys     0m2.980s

 

 

 

#!/bin/bash

if [ ${DEBUG:=0} -gt 0 ]
   then set -x -v
fi

P=${0##*/}              # basename of program
R=${0%%$P}              # dirname of program
P=${P%.*}               # strip off after last . character


# Controls the tendency of the kernel to reclaim the memory which is used for
# caching of directory and inode objects.
# At the default value of vfs_cache_pressure=100 the kernel will attempt to
# reclaim dentries and inodes at a "fair" rate with respect to pagecache and
# swapcache reclaim.  Decreasing vfs_cache_pressure causes the kernel to prefer
# to retain dentry and inode caches.  Increasing vfs_cache_pressure beyond 100
# causes the kernel to prefer to reclaim dentries and inodes.
# This was 100, I changed to 0.

sysctl vm.vfs_cache_pressure=0


cache_loop()
{

    echo "$$" > /var/run/${P}.pid
    trap "rm -f /var/run/${P}.pid" EXIT HUP INT QUIT TERM

    logger -is -t${P} "Starting"

    while [ -f /var/run/${P}.pid ]
    do ls -R /mnt/user  >/dev/null 2>&1
       sleep 10
    done

    logger -is -t${P} "Terminating"

}

if [ -f /var/run/${P}.pid ]
   then echo "$0: already running? pidfile: /var/run/${P}.pid"
        ps -fp $(</var/run/${P}.pid)
        exit
fi

cache_loop > /var/log/${P}.log 2>&1 &
JPID=$!

logger -is -t${P} "Spawned (Pid=$JPID)"
# ps -fp "$JPID"
disown "$JPID"

Link to comment

Is there a way to flush the directory cache so I can test this and see if/how well it is working?

 

http://linux-mm.org/Drop_Caches

 

 

 

To use /proc/sys/vm/drop_caches, just echo a number to it.

 

To free pagecache:

 

# echo 1 > /proc/sys/vm/drop_caches

 

To free dentries and inodes:

 

# echo 2 > /proc/sys/vm/drop_caches

 

To free pagecache, dentries and inodes:

 

# echo 3 > /proc/sys/vm/drop_caches

 

As this is a non-destructive operation and dirty objects are not freeable, the user should run "sync" first!
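
Putting that together with the timing examples earlier in the thread, a quick way to see how much the cache is actually buying you (a sketch - run it as root, and expect the first find to spin drives up):

sync                                  # flush dirty data first
echo 3 > /proc/sys/vm/drop_caches     # drop pagecache, dentries and inodes
time find /mnt/user >/dev/null 2>&1   # cold cache: slow, spins up drives
time find /mnt/user >/dev/null 2>&1   # warm cache: should return almost instantly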

Link to comment
