Suse User
Posted February 15, 2013

I've just installed a cache disk for the first time and I have a few questions...

Where is my mover script file?

Reading the Wiki:

    The mover is just a script called '/usr/share/sbin/mover' which invokes 'find' to traverse the cache disk and move files to the array using the 'mv' command. Advanced users may edit this script to fine-tune the mover. For example, it's possible to set conditions such as 'move only files older than N days', or 'only move files greater than N bytes in size', etc. Refer to the script itself and the 'man' page of the 'find' command.

I don't seem to have a folder called 'sbin' in '/usr/share', so consequently it doesn't contain a file called 'mover'.

Directory listing of /usr/share:

    drwxr-xr-x  2 root root 0 2012-06-06 18:14 aclocal/
    drwxr-xr-x  2 root root 0 2012-02-19 14:55 applications/
    drwxr-xr-x  3 root root 0 2012-02-19 14:55 avahi/
    drwxr-xr-x  2 root root 0 2010-05-10 04:10 awk/
    drwxr-xr-x  3 root root 0 2010-05-05 07:42 common-lisp/
    drwxr-xr-x  5 root root 0 2012-02-19 14:55 dbus-1/
    drwxr-xr-x  2 root root 0 1993-11-26 03:40 dict/
    drwxr-xr-x  2 root root 0 2010-02-13 00:53 empty/
    drwxr-xr-x  2 root root 0 2010-04-30 08:18 et/
    drwxr-xr-x  2 root root 0 2010-04-30 08:17 getopt/
    drwxr-xr-x  3 root root 0 2009-12-06 02:47 gtk-doc/
    drwxr-xr-x  3 root root 0 2010-05-15 14:13 icu/
    drwxr-xr-x  5 root root 0 2010-05-10 04:09 mc/
    drwxr-xr-x  2 root root 0 2011-01-27 23:19 smartmontools/
    drwxr-xr-x  2 root root 0 2010-04-30 08:18 ss/
    drwxr-xr-x  6 root root 0 2013-01-29 01:25 terminfo/
    drwxr-xr-x 11 root root 0 2013-01-29 01:25 zoneinfo/

Has its location changed in v5.x, or is mover something I need to install manually?

Where is the schedule string changed?

I'm not entirely comfortable with waiting until the early hours of each morning for items to move to the protected array, so I would be interested in changing mover's schedule.
The wiki says:

    By default the mover is scheduled to run every day at 3:40AM. This may be changed by defining your own Mover schedule string in crontab format.

Where would I find this schedule string? Is it a line in the mover script itself?

Is frequent scheduling of mover safe?

    ... file(s) will only be moved if they are not open for reading/writing – they will move the next night, when they are no longer open.

Would I then be safe to schedule mover to run every 4 hours, for example? If I was in the process of copying files to the cache drive when the mover script is invoked, will everything be OK?

How do I manually run mover?

Is there a simple way to manually invoke mover when I know my unRAID is not in use? Hopefully via the web interface. What would be the command line to invoke the mover script via telnet?

Thanks for any help on these points,
Mark.
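For reference, a "schedule string in crontab format" is five space-separated fields (minute, hour, day-of-month, month, day-of-week) followed by the command. A sketch of what the default and an "every 4 hours" schedule would look like (the mover path here assumes the /usr/local/sbin location discussed later in the thread):

```shell
# minute  hour  day-of-month  month  day-of-week  command

# the default: every day at 3:40 AM
40 3 * * * /usr/local/sbin/mover

# hypothetical alternative: every 4 hours, at 40 minutes past the hour
40 */4 * * * /usr/local/sbin/mover
```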
itimpi
Posted February 15, 2013

If you have enabled the cache disk under the Shares option on the Settings tab, then you are given the option to schedule mover. There is also a 'Run Mover Now' button available there.
Suse User (topic author)
Posted February 15, 2013

    If you have enabled the cache disk under the Shares option on the Settings tab, then you are given the option to schedule mover. There is also a 'Run Mover Now' button available there.

Fantastic, thanks.
PeterB
Posted February 15, 2013

    Would I then be safe to schedule mover to run every 4 hours for example?

I have it scheduled to run every 2 hours with no obvious ill effects.
Joe L.
Posted February 16, 2013

    Would I then be safe to schedule mover to run every 4 hours for example?

    I have it scheduled to run every 2 hours with no obvious ill effects.

The wiki is incorrect about the location (it might have been correct in some older version of unRAID). The correct location is /usr/local/sbin/mover.

On my 4.7 server I had the schedule set to run every 5 minutes; however, it invoked a script that ran first and then, only if that succeeded, ran the actual mover script. The idea was to not invoke the mover script unless all disks are idle, so that it would not impact anything using the disks in any way. The resulting cron line was:

    */5 * * * * /boot/are_disks_idle.sh && /usr/local/sbin/mover 2>&1 | logger

The /boot/are_disks_idle.sh script is:

    #!/bin/bash
    ###############################################
    # are_disks_idle.sh
    # exit status = 0 if all disks idle
    # exit status = 1 if any disk spinning
    # exit status = 2 if array is not started
    #
    # April 2010 Joe L.

    # Check if the array is started
    if [ -d /mnt/disk1 ]
    then
        # rdevLastIO will be non-zero if a disk is spinning
        last=`/root/mdcmd status | grep -a rdevLastIO | grep -v '=0'`
        if [ "${last}" = "" ]
        then
            # all disks are idle
            exit 0
        else
            # at least one disk is spinning
            exit 1
        fi
    else
        # array is not started
        exit 2
    fi
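Note that the gating is done by the shell's `&&` operator in the cron line, not by the script itself: the command after `&&` runs only if the command before it exits with status 0. A standalone sketch of this behaviour, with stand-in functions instead of the real unRAID commands:

```shell
#!/bin/bash
# Stand-ins for the real commands, for illustration only.
idle() { return 0; }   # pretend are_disks_idle.sh found all disks idle
busy() { return 1; }   # pretend are_disks_idle.sh found a disk spinning

idle && echo "disks idle: mover would run"   # this line prints the message
busy && echo "disks idle: mover would run"   # prints nothing: exit status 1 short-circuits the chain
true   # keep the demo's own exit status clean
```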
Suse User (topic author)
Posted February 18, 2013

    The wiki is incorrect about the location (it might have been correct in some older version of unRAID). The correct location is /usr/local/sbin/mover. ...

Thanks Joe, that sounds like a really useful script, and from your description perfect for my needs.

Looking through your script, the only thing I see happening if all disks are idle is the script exiting with a value of 0. I'm confused how this will invoke the mover script; which part of your script starts mover?

Also, and now I sound like the Linux lightweight I am, I don't know how to add the 'are_disks_idle.sh' script to the cron tasks to be run. I assume there's a line to add to my go script?

By the way, adding a cache drive has been a fantastic performance boost when uploading to the server; I'd recommend it to everyone. I don't know why I didn't do it earlier, as I always had smaller HDDs lying around.

Mark.
PeterB
Posted February 19, 2013

    Looking through your script, the only thing I see happening if all disks are idle is the script exiting with a value of 0.

That's correct: a return value of zero indicates success, and non-zero indicates a failure.

    I'm confused how this will invoke the mover script; which part of your script starts mover?

The crontab entry invokes two tasks: the 'are_disks_idle.sh' script, followed by mover. If the first task fails (i.e. returns non-zero), then the second task will not be started.

    I assume there's a line to add to my go script?

Yes, either you need to add the crontab entry to the default crontab when the system starts, or you need to 'restore' a copy of the crontab which already includes the extra entry.
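One common pattern for the first option (a sketch only; the paths and the idea of appending this to /boot/config/go are assumptions, adapt to your own setup) is a boot-time fragment that rebuilds the root crontab with the extra line. The fragment below only builds the file and prints it; on a live server you would follow it with `crontab "$CRONFILE"` to actually install it:

```shell
#!/bin/bash
# Sketch: build a crontab file that adds the mover line to any existing entries.
CRONFILE=/tmp/crontab.root
MOVER_LINE='*/5 * * * * /boot/are_disks_idle.sh && /usr/local/sbin/mover 2>&1 | logger'

# Start from the current crontab if one exists, otherwise from an empty file.
crontab -l > "$CRONFILE" 2>/dev/null || : > "$CRONFILE"

# Append the mover line only if it is not already present (safe to re-run).
grep -qF "$MOVER_LINE" "$CRONFILE" || echo "$MOVER_LINE" >> "$CRONFILE"

cat "$CRONFILE"
# then, on the server:  crontab "$CRONFILE"
```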