WeeboTech

Everything posted by WeeboTech

  1. Are these machines co-located? If so, rsync as a server would work well and be fast. You can also keep dated backups and only rsync new data from the main tower, thus saving lots of space. There is an option, --link-dest=DIR, which hardlinks to files in DIR when they are unchanged. If you keep dated directories and pass the most recent directory to this parameter, a newly dated directory hardlinks all files from the prior directory. This equates to a ghost of the prior backup; only modified files are copied across the network, replacing the prior files. The result is 1x the FULL space of the first copy plus an incremental update per date. Each date can pretty much stand on its own, so it's more of a differential backup, i.e. if you were to remove the last 6 months of backup dates, the most current directory would still contain 1 full backup. Here is what it looks like from this month's backups on one of my hosts:

    root@rgclws:/storage/backups/npgvm7 # du -hs 20151201
    1.6G    20151201
    root@rgclws:/storage/backups/npgvm7 # du -hs 20151205
    1.6G    20151205
    root@rgclws:/storage/backups/npgvm7 # du -hs 201512*
    1.6G    20151201
    17M     20151202
    17M     20151203
    17M     20151204
    17M     20151205
    17M     20151206
    18M     20151207
    17M     20151208
    17M     20151209
    25M     20151210
    17M     20151211
    17M     20151212
    17M     20151213
    18M     20151214
    24M     20151215
    17M     20151216
    17M     20151217
    17M     20151218
    17M     20151219
    17M     20151220
    18M     20151221
    17M     20151222
    17M     20151223
    root@rgclws:/storage/backups/npgvm7 # find 20151201 -type f | wc -l
    31260
    root@rgclws:/storage/backups/npgvm7 # find 20151223 -type f | wc -l
    31318
    root@rgclws:/storage/backups/npgvm7 # du -hs 20151223
    1.6G    20151223
    root@rgclws:/storage/backups/npgvm7 # ls -l 20151223/home/rcotrone/.bash_profile
    -rw-r--r-- 44 10350 20506 546 Jan 30  2009 20151223/home/rcotrone/.bash_profile

    So there are 44 links to the same file here. Granted, if the source file or one of the links is modified directly on the backup, they all change, so you may want to limit visibility and/or write access to the backup tree. Without the link option it would be 1.6GB per day for this backup. What I personally do to age out backups is keep the first of the month for 6-9 months, keep the Sunday backups for 6-8 weeks, and age off the other directories with a remove. FWIW, this can also be done on some other type of rotation; that's where I got the idea from, only I changed it to be date-specific instead of count-specific. In my use, if I needed hourly backups, I would change the date so it was YYYYMMDD-HH. On some backups I use the week, so it's YYYY-WWW. This works when the backup server is pulling the data via rsync over ssh or an rsync server.
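    For illustration, here is a minimal sketch of that --link-dest rotation. The paths and the source host name "tower" are placeholders, not from the setup above; adjust everything for your own layout:

    #!/bin/bash
    # Sketch: dated rsync backup where unchanged files hardlink to the previous date.
    SRC="root@tower:/mnt/user/data/"        # placeholder source
    BASE="/storage/backups/tower"           # placeholder destination tree
    TODAY=$(date +%Y%m%d)                   # use %Y%m%d-%H for hourly, %Y-%W for weekly

    mkdir -p "$BASE"
    # Most recent previous dated directory, if any.
    PREV=$(ls -1d "$BASE"/2* 2>/dev/null | tail -1)

    if [ -n "$PREV" ] && [ "$PREV" != "$BASE/$TODAY" ]; then
        # Unchanged files become hardlinks into the prior date; only changes are copied.
        rsync -a --delete --link-dest="$PREV" "$SRC" "$BASE/$TODAY/"
    else
        rsync -a --delete "$SRC" "$BASE/$TODAY/"
    fi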
  2. true... I failed the trurl test lol, he'll be very disappointed in you.... As another comment on a previous post in this thread, I don't believe the mods are paid, other than jonp, Eric and Big Tom, who are all employed by or own LT. I may be wrong and happy to be corrected, but I rather think we should be praising them for the work they do rather than getting them to do more.... Many of us are not paid, but motivated in other ways by limetech's grace. It's not totally about workload. It's about doing things wisely in the same amount of limited time each of us has to contribute. If someone has to go around making child boards a lot, that could get tedious. On the other hand, splitting topics and consolidating is also quite tedious. That's been part of the reason the announcement threads get overrun with tangents. Once it gets away from you, it's time consuming to split, re-merge and contain. I'm all for better ways of organizing. Keep in mind that mods have no ability to create new sub boards. A few suggested ones can be brought up with limetech. Keep the ideas flowing, I'm sure some good ones will crop up!
  3. It's possible. The mover can be modified and installed from the config/go file, or you can set a cron entry to issue the commands at the time the mover runs (see the sketch below). I believe limetech was going to make turbo-write automatic at some point in the future, i.e. if all drives are spinning, use turbo write automatically. However, that still means something needs to spin up all the drives for the mover. This particular topic of enabling turbo-write for the mover should probably be a feature request where ideas on how to implement it can be discussed.
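    As a rough illustration only (the times are assumptions; bracket your own mover schedule, here assumed to run at 03:40), a cron.d drop-in could toggle turbo write around the mover run using the same /proc/mdcmd mechanism shown in the next post below:

    # /etc/cron.d/mover_turbo_write (hypothetical example)
    # Enable turbo write just before the assumed 03:40 mover run, disable it afterwards.
    35 03 * * * [ -e /proc/mdcmd ] && echo 'set md_write_method 1' >> /proc/mdcmd
    50 03 * * * [ -e /proc/mdcmd ] && echo 'set md_write_method 0' >> /proc/mdcmd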
  4. FWIW, when doing a high volume load onto a single disk in a single threaded manner, like a backup or restore, you can turn on turbo write. This provides a significant burst for single threaded high volume writes. The side effect is that all disks will be spinning for the duration of active writes until turbo write is turned off. Also, parallel reads/writes to other drives can affect the speed of both activities. After turbo write is disabled, spin down timers can take effect on idle drives. It can be enabled/disabled manually in a script or via cron. I do it via cron during my normal waking/working hours with this file in /etc/cron.d:

    root@unRAID:/boot/local/bin# cat /etc/cron.d/md_write_method
    30 08 * * * [ -e /proc/mdcmd ] && echo 'set md_write_method 1' >> /proc/mdcmd
    30 23 * * * [ -e /proc/mdcmd ] && echo 'set md_write_method 0' >> /proc/mdcmd
    #
    # * * * * * <command to be executed>
    # | | | | |
    # | | | | |
    # | | | | +---- Day of the Week   (range: 1-7, 1 standing for Monday)
    # | | | +------ Month of the Year (range: 1-12)
    # | | +-------- Day of the Month  (range: 1-31)
    # | +---------- Hour              (range: 0-23)
    # +------------ Minute            (range: 0-59)
  5. These are settings that I have used in my go file. YMMV.

    sysctl vm.vfs_cache_pressure=10
    sysctl vm.swappiness=100
    sysctl vm.dirty_ratio=20   # (you can set it higher as an experiment)
    # sysctl vm.min_free_kbytes=8192
    # sysctl vm.min_free_kbytes=65535
    sysctl vm.min_free_kbytes=131072
    sysctl vm.highmem_is_dirtyable=1

    In the past with unRAID 5, vm.highmem_is_dirtyable made a big difference in caching data before it was written. I'm not so sure it matters with the 64-bit kernel anymore, as the key does not exist. Adjusting vm.dirty_ratio and vm.dirty_background_ratio may provide the improvement you are looking for. Keep in mind that this uses the buffer cache more to temporarily store data. If you adjust to high caching values, make sure the machine is on a UPS. See also https://lonesysadmin.net/2013/12/22/better-linux-disk-caching-performance-vm-dirty_ratio/ and let us know how you make out.
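    For example (the values here are only illustrative, not a recommendation), the current and adjusted dirty-page settings can be checked and applied from the console, then added to the go file to persist across reboots:

    # Show the current values.
    sysctl vm.dirty_ratio vm.dirty_background_ratio

    # Illustrative adjustment: allow more dirty data to accumulate before writeback.
    sysctl vm.dirty_ratio=40
    sysctl vm.dirty_background_ratio=10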
  6. Ok, found the reason ... when searching for a time stamp the History function looks for a date with double digits, e.g. "Dec 01", but it needs to be a single digit only, thus "Dec 1" and no leading zeroes. Will make a correction for that. Since working on this feature was during dates with double digits, I never encountered the issue before.

    The date format is platform or syslog daemon dependent. Keep in mind it's a leading space with rsyslogd, e.g.: 'Dec  2 08:30:01'
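    As a sketch of the matching problem only (the pattern below is illustrative, not the plugin's actual code), a search has to accept a zero-padded day, a space-padded day, or a bare single digit:

    # Match "Dec 02", "Dec  2" or "Dec 2" at the start of a syslog line.
    grep -E '^Dec +0?2 ' /var/log/syslog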
  7. Appending entries via >> /var/spool/cron/crontabs/root is not the normal way it is done. (Details deleted since there is a better dynamix way...)
  8. WeeboTech

    Turbo Write

    @75MB/s that's still pretty fast. Just for some local comparative results, I use the following script as a local host benchmark, which provides these results with turbo-write enabled and disabled. My array is only 4 drives wide on an HP MicroServer with a 2GHz Xeon. Drives are 3 & 4TB. Parity is a 4TB HGST 7200 RPM. RAM is 4GB, and unRAID is running under VMware ESXi as a guest using RDM'ed drives. Not too shabby for a lil machine that can.

    #!/bin/bash
    if [ -z "$1" ]
    then echo "Usage: $0 outputfilename"
         exit
    fi
    if [ -f "$1" ]
    then rm -vf $1
         sync
    fi
    # To free pagecache, dentries and inodes:
    # echo 3 > /proc/sys/vm/drop_caches
    trap "rm -vf '$1' " HUP INT QUIT TERM EXIT
    bs=1024
    count=4000000
    count=10000000
    total=$(( $bs * $count ))
    echo "writing $total bytes to: $1"
    touch $1;rm -f $1
    dd if=/dev/zero bs=$bs count=$count of=$1 &
    BGPID=$!
    trap "kill $BGPID 2>/dev/null; rm -vf '$1'; exit" INT HUP QUIT TERM EXIT
    sleep 5
    while kill -USR1 $BGPID 2>/dev/null
    do sleep 5
    done
    trap "rm -vf '$1'; exit" INT HUP QUIT TERM EXIT
    echo "write complete, syncing"
    sync
    # echo "reading from: $1"
    # dd if=$1 bs=$bs count=$count of=/dev/null
    rm -vf $1

    root@unRAID:/boot# [ -e /proc/mdcmd ] && echo 'set md_write_method 1' >> /proc/mdcmd
    root@unRAID:/boot# /boot/local/bin/writeread10gb /mnt/disk1/test.dat
    writing 10240000000 bytes to: /mnt/disk1/test.dat
    1013515264 bytes (1.0 GB) copied, 5.0011 s, 203 MB/s
    1477878784 bytes (1.5 GB) copied, 10.0027 s, 148 MB/s
    2025407488 bytes (2.0 GB) copied, 15.0111 s, 135 MB/s
    2450924544 bytes (2.5 GB) copied, 20.0111 s, 122 MB/s
    3001046016 bytes (3.0 GB) copied, 25.0111 s, 120 MB/s
    3626887168 bytes (3.6 GB) copied, 30.0128 s, 121 MB/s
    4356416512 bytes (4.4 GB) copied, 35.017 s, 124 MB/s
    5033522176 bytes (5.0 GB) copied, 40.0211 s, 126 MB/s
    5779306496 bytes (5.8 GB) copied, 45.0226 s, 128 MB/s
    6277882880 bytes (6.3 GB) copied, 50.0252 s, 125 MB/s
    6787147776 bytes (6.8 GB) copied, 55.2231 s, 123 MB/s
    7436379136 bytes (7.4 GB) copied, 60.0311 s, 124 MB/s
    8056297472 bytes (8.1 GB) copied, 65.0329 s, 124 MB/s
    8679016448 bytes (8.7 GB) copied, 70.0352 s, 124 MB/s
    9768040448 bytes (9.8 GB) copied, 75.1561 s, 130 MB/s
    10149733376 bytes (10 GB) copied, 80.0611 s, 127 MB/s
    10240000000 bytes (10 GB) copied, 81.2262 s, 126 MB/s
    write complete, syncing
    removed `/mnt/disk1/test.dat'

    root@unRAID:/boot# [ -e /proc/mdcmd ] && echo 'set md_write_method 0' >> /proc/mdcmd
    root@unRAID:/boot# sync
    root@unRAID:/boot# /boot/local/bin/writeread10gb /mnt/disk1/test.dat
    writing 10240000000 bytes to: /mnt/disk1/test.dat
    792613888 bytes (793 MB) copied, 5.0048 s, 158 MB/s
    978371584 bytes (978 MB) copied, 10.0548 s, 97.3 MB/s
    1070175232 bytes (1.1 GB) copied, 15.0448 s, 71.1 MB/s
    1180541952 bytes (1.2 GB) copied, 20.0148 s, 59.0 MB/s
    1425363968 bytes (1.4 GB) copied, 25.0148 s, 57.0 MB/s
    1645171712 bytes (1.6 GB) copied, 30.0147 s, 54.8 MB/s
    1876879360 bytes (1.9 GB) copied, 35.0155 s, 53.6 MB/s
    2160645120 bytes (2.2 GB) copied, 40.0248 s, 54.0 MB/s
    2457687040 bytes (2.5 GB) copied, 45.0248 s, 54.6 MB/s
    3203949568 bytes (3.2 GB) copied, 50.0248 s, 64.0 MB/s
    3545966592 bytes (3.5 GB) copied, 55.0253 s, 64.4 MB/s
    3744470016 bytes (3.7 GB) copied, 60.0348 s, 62.4 MB/s
    3837621248 bytes (3.8 GB) copied, 65.0448 s, 59.0 MB/s
    4083180544 bytes (4.1 GB) copied, 70.0348 s, 58.3 MB/s
    4203059200 bytes (4.2 GB) copied, 75.2201 s, 55.9 MB/s
    4483929088 bytes (4.5 GB) copied, 80.0548 s, 56.0 MB/s
    4664751104 bytes (4.7 GB) copied, 85.0448 s, 54.9 MB/s
    4919473152 bytes (4.9 GB) copied, 90.0448 s, 54.6 MB/s
    5240157184 bytes (5.2 GB) copied, 95.0448 s, 55.1 MB/s
    5955642368 bytes (6.0 GB) copied, 100.055 s, 59.5 MB/s
    6132057088 bytes (6.1 GB) copied, 105.055 s, 58.4 MB/s
    6214882304 bytes (6.2 GB) copied, 110.055 s, 56.5 MB/s
    6461868032 bytes (6.5 GB) copied, 115.055 s, 56.2 MB/s
    6594380800 bytes (6.6 GB) copied, 120.065 s, 54.9 MB/s
    6846481408 bytes (6.8 GB) copied, 125.065 s, 54.7 MB/s
    7101137920 bytes (7.1 GB) copied, 130.065 s, 54.6 MB/s
    7863050240 bytes (7.9 GB) copied, 135.065 s, 58.2 MB/s
    8085197824 bytes (8.1 GB) copied, 140.075 s, 57.7 MB/s
    8201982976 bytes (8.2 GB) copied, 145.085 s, 56.5 MB/s
    8436970496 bytes (8.4 GB) copied, 150.075 s, 56.2 MB/s
    8577868800 bytes (8.6 GB) copied, 155.075 s, 55.3 MB/s
    8834466816 bytes (8.8 GB) copied, 160.085 s, 55.2 MB/s
    9051507712 bytes (9.1 GB) copied, 165.08 s, 54.8 MB/s
    9292321792 bytes (9.3 GB) copied, 170.085 s, 54.6 MB/s
    9607111680 bytes (9.6 GB) copied, 175.085 s, 54.9 MB/s
    10240000000 bytes (10 GB) copied, 178.476 s, 57.4 MB/s
    write complete, syncing
    removed `/mnt/disk1/test.dat'
  9. No, that wasn't what I meant. Before you do anything to the drive, you should back up your data. Do a forum search and read all the posts about resolving a pending sector, including the wiki: http://lime-technology.com/wiki/index.php/Troubleshooting#Resolving_a_Pending_Sector
  10. First turn off the spindown timer on the respective drive. Then submit a SMART long test. This will take many hours; you can get an idea from the Recommended Polling Time for the extended test. This will scan the whole surface and look for defects. Then post the results. I believe the recommended approach would be to back up your data and possibly rebuild the drive. I believe there are recommended procedures in the wiki for that.

    Just completed both drives for smart long test, both say last smart test result is Completed without error. So I guess that's good? However I read that the Current Pending Sector count 1 might be an issue?

    I'm confused, which drives are you referring to? The drive with the current pending sector 'may' be an issue in the future if you try to rebuild a drive. You can back up the data on that drive, then attempt to rebuild it, causing the pending sector to be re-written. Or you can back up the data, run through a few preclear cycles, check the pending sector, and either restore or use the drive as a future replacement.
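    For reference, a minimal sketch of running and checking the extended test from the console (the device name /dev/sdX is a placeholder, and the spindown timer should be disabled first as noted above):

    # Kick off the SMART extended (long) self-test.
    smartctl -t long /dev/sdX

    # Check progress, then the self-test log and pending-sector count when it finishes.
    smartctl -c /dev/sdX | grep -i 'self-test'
    smartctl -l selftest /dev/sdX
    smartctl -A /dev/sdX | grep -i 'Current_Pending_Sector'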
  11. First turn off the spindown timer on the respective drive. Then submit a SMART long test. This will take many hours; you can get an idea from the Recommended Polling Time for the extended test. This will scan the whole surface and look for defects. Then post the results. I believe the recommended approach would be to back up your data and possibly rebuild the drive. I believe there are recommended procedures in the wiki for that.
  12. This says it in a simple statement. However, when using turbo write, all drives come into play and for some operations the write penalty is much smaller, i.e. when writing to a single drive. See the turbo-write discussion here: http://lime-technology.com/forum/index.php?topic=34521.msg320890#msg320890
  13. A number of people have reported similar situations.
  14. From my tests years back with an Areca controller, you can have multiple RAID sets with multiple drives or with a pair of drives. In my initial configuration I had a SAFE RAID setup: RAID 0 on the outer tracks with two drives (parity) and RAID 1 on the inner tracks with the same two drives (cache). unRAID saw both RAID sets as individual drives. In addition, when testing some of the Silicon Image SteelVine chipsets that did this in hardware, unRAID saw the RAID0/RAID1 pair as individual drives. When people mention mounting high speed protected RAID devices outside of the array, it's for performance reasons, i.e. so the high speed RAID set does not feel the performance penalty of the parity device. If that's not an issue, a RAID0/RAID10 set can be mounted as one of the unRAID data drives.
  15. WeeboTech

    Turbo Write

    This is really great news. It was my theory that with larger arrays, there was a diminishing return. I think for the backup application you've proven that turbo write is an effective feature. I know in 'some' of my usage cases, while simultaneously writing and accessing other drives in read mode there was a negative effect. However in the backup scenario where there are single massive writes, turbo write shines! That's awesome!
  16. I'll be completely satisfied if you fix the "Trust Parity" option and add dual parity. Trust is a Must!!!
  17. WeeboTech

    Turbo Write

    You need to be sure the array is up, so yes, a delay is appropriate. I use the following snippet in my go script. I included my readahead snippet as well; feel free to use or delete it.

    declare -a CHAR=('+' 'x');
    let i=0 notices=60
    DEV=/dev/md1
    while [[ ${notices} -gt 0 && ! -b ${DEV} ]]
    do printf "Waiting $notices seconds for ${DEV}. Press ANY key to continue: [${CHAR[${i}]}]: "
       read -n1 -t1 && break
       echo -e "\r\c"
       (( notices-=1 ))
       [[ $(( i+=1 )) -ge ${#CHAR[@]} ]] && let i=0;
    done
    [ ${notices} -ne 60 ] && echo

    let i=0 notices=60
    DIR=/mnt/disk1
    while [[ ${notices} -gt 0 && ! -d "${DIR}" ]]
    do printf "Waiting $notices seconds for ${DIR}. Press ANY key to continue: [${CHAR[${i}]}]: "
       read -n1 -t1 && break
       echo -e "\r\c"
       (( notices-=1 ))
       [[ $(( i+=1 )) -ge ${#CHAR[@]} ]] && let i=0;
    done
    [ ${notices} -ne 60 ] && echo

    # Set readahead on the md array devices and the underlying whole-disk sd devices.
    shopt -s extglob
    READAHEAD=1024
    for disk in /dev/md+([[:digit:]]) ; do blockdev --setra ${READAHEAD} ${disk}; done
    for disk in /dev/sd+([[:alpha:]]) ; do blockdev --setra ${READAHEAD} ${disk}; done

    FWIW, I do not enable turbo write all the time. I do it on a schedule from the /etc/cron.d directory. In my go script I rsync a file from /boot/local/etc/cron.d/md_write_method to /etc/cron.d/md_write_method (actually I rsync a whole tree of things, but this is what you need for this application).

    root@unRAID:/boot/bin# cat /etc/cron.d/md_write_method
    30 08 * * * [ -e /proc/mdcmd ] && echo 'set md_write_method 1' >> /proc/mdcmd
    30 23 * * * [ -e /proc/mdcmd ] && echo 'set md_write_method 0' >> /proc/mdcmd
    #
    # * * * * * <command to be executed>
    # | | | | |
    # | | | | |
    # | | | | +---- Day of the Week   (range: 1-7, 1 standing for Monday)
    # | | | +------ Month of the Year (range: 1-12)
    # | | +-------- Day of the Month  (range: 1-31)
    # | +---------- Hour              (range: 0-23)
    # +------------ Minute            (range: 0-59)
  18. I'd rather see the next release come out with the automatic parity check NOT enabled with the "Trust Parity" option. Tom clearly KNOWS that shouldn't happen -- I've noticed since posting my feature request that this same issue was brought up way back in v5 days, and Tom indicated that was NOT his intent and that it would be fixed. Not sure if it ever was ... but clearly it's back now and needs to be fixed again. Seeking guidance from the source may be prudent. Tom may know of a way to trust the parity without the parity check starting.
  19. This might be one of the cases where you hire professional services. http://lime-technology.com/services/ especially if the trust my parity option is going to start correcting things that it should not.
  20. SMART Self-test log structure revision number 1
    No self-tests have been logged.  [To run self-tests, use: smartctl -t]

    Turn off the spindown timer on this particular drive, then issue a SMART long test; the short test is insufficient. Review the log after the completion of the test. According to this value, it will take almost 3 hours:

    Extended self-test routine recommended polling time: ( 154) minutes.

    Pending sectors sometimes do not show up until you scan the surface.
  21. HGST Deskstar NAS H3IKNAS600012872SN (0S03839) 6TB 7200 RPM 128MB Cache SATA 6.0Gb/s 3.5" High-Performance Hard Drive Retail Kit
    http://www.newegg.com/Product/Product.aspx?Item=22-145-973

    Same drive as a 2-unit combo deal:
    http://www.newegg.com/Product/ComboDealDetails.aspx?ItemList=Combo.2545034

    Fast and reliable NAS drives.
  22. Hi, can I use this same method for unRAID 6?

    You will lose the ability to update the firmware from the webGui. It will update the flash, but then you will need to manually mount the .vmdk within unRAID and copy over the respective files.
  23. This looks like it will work! You're a star! Thanks!
  24. Please consider adding some form of spin down reschedule or delay if an active SMART test is in progress. A method to test for this would be to check the SMART status right before triggering the hdparm -y:

    root@rgclws:/home/rcotrone $ smartctl -c -lselftest /dev/sdb
    smartctl 5.43 2012-06-30 r3573 [x86_64-linux-2.6.32-573.3.1.el6.x86_64] (local build)
    Copyright (C) 2002-12 by Bruce Allen, http://smartmontools.sourceforge.net

    === START OF READ SMART DATA SECTION ===
    General SMART Values:
    Offline data collection status:  (0x00) Offline data collection activity was never started.
                                            Auto Offline Data Collection: Disabled.
    Self-test execution status:      ( 249) Self-test routine in progress...
                                            90% of test remaining.
    Total time to complete Offline data collection: ( 617) seconds.
    Offline data collection capabilities: (0x73) SMART execute Offline immediate.
                                            Auto Offline data collection on/off support.
                                            Suspend Offline collection upon new command.
                                            No Offline surface scan supported.
                                            Self-test supported.
                                            Conveyance Self-test supported.
                                            Selective Self-test supported.
    SMART capabilities: (0x0003) Saves SMART data before entering power-saving mode.
                                 Supports SMART auto save timer.
    Error logging capability: (0x01) Error logging supported.
                                     General Purpose Logging supported.
    Short self-test routine recommended polling time:      (   1) minutes.
    Extended self-test routine recommended polling time:   ( 143) minutes.
    Conveyance self-test routine recommended polling time: (   2) minutes.
    SCT capabilities: (0x1081) SCT Status supported.

    SMART Self-test log structure revision number 1
    Num  Test_Description    Status                         Remaining  LifeTime(hours)  LBA_of_first_error
    # 1  Extended offline    Self-test routine in progress  90%        1403             -
    # 2  Extended offline    Completed without error        00%        54               -
    # 3  Short offline       Completed without error        00%        48               -
    # 4  Conveyance offline  Completed without error        00%        48               -

    Using these parameters you can tell if a test is in progress and possibly reschedule the spindown, or poll for it at some interval based on, e.g.:

    Extended self-test routine recommended polling time:   ( 143) minutes.

    $ smartctl -c -lselftest /dev/sdb | egrep -i 'in progress'
    Self-test execution status:      ( 249) Self-test routine in progress...
    # 1  Extended offline    Self-test routine in progress  90%        1403             -

    Currently the only way to safely do a long surface test is to disable the spindown timer completely, trigger the test, and re-enable it later. This prevents a user from scheduling a test automatically with smartd or via cron jobs. If a user forgets to disable the spindown timer, the test gets interrupted. From what I've seen, SMART access does not update /proc/diskstats:

    $ cat /proc/diskstats | grep sdb
    8      16 sdb 255 52 2456 38 0 0 0 0 0 38 38
    8      17 sdb1 36 0 288 3 0 0 0 0 0 3 3
    $ smartctl -c -lselftest /dev/sdb | egrep -i 'in progress'
    Self-test execution status:      ( 249) Self-test routine in progress...
    # 1  Extended offline    Self-test routine in progress  90%        1403             -
    $ cat /proc/diskstats | grep sdb
    8      16 sdb 255 52 2456 38 0 0 0 0 0 38 38
    8      17 sdb1 36 0 288 3 0 0 0 0 0 3 3

    Therefore the only other way would be to do periodic reads or writes to the device, which seems counterproductive. While this might work for a data drive (i.e. touching /mnt/disk#/. periodically), it would not work for the parity drive. In addition, that would force 2 drives to stay spinning for the duration of the test. A potential test might be to do an fdisk -l on the device, but from what I remember, sometimes this data is cached and doesn't update /proc/diskstats either:

    unraid 5
    root@unRAID ~ $ cat /proc/diskstats | grep sde
    8      64 sde 44403954 1801819464 1885201847 811529720 1725092 19086787 166616088 56446660 0 110102400 867974200
    8      65 sde1 44403934 1801819434 1885201447 811528780 1725092 19086787 166616088 56446660 0 110101440 867973250
    root@unRAID ~ $ fdisk -l /dev/sde >/dev/null 2>&1
    root@unRAID ~ $ cat /proc/diskstats | grep sde
    8      64 sde 44403954 1801819464 1885201847 811529720 1725092 19086787 166616088 56446660 0 110102400 867974200
    8      65 sde1 44403934 1801819434 1885201447 811528780 1725092 19086787 166616088 56446660 0 110101440 867973250

    unraid 6
    root@unRAIDm:~# cat /proc/diskstats | grep sdj
    8     144 sdj 87628334 1377506127 11721076108 30880980 208 2958 25344 133 0 13203439 30867744
    8     145 sdj1 87628277 1377506127 11721075316 30880818 208 2958 25344 133 0 13203260 30867494
    root@unRAIDm:~# sfdisk -l /dev/sdj >/dev/null 2>&1
    root@unRAIDm:~# cat /proc/diskstats | grep sdj
    8     144 sdj 87628334 1377506127 11721076108 30880980 208 2958 25344 133 0 13203439 30867744
    8     145 sdj1 87628277 1377506127 11721075316 30880818 208 2958 25344 133 0 13203260 30867494

    If this doesn't seem feasible, then at least let us configure an alternate program to trigger the spindown so an agent can be dropped in to do the test logic, or an emhttp API that adds a configurable number of minutes of delay, or an external method to turn the specific drive's spindown timer off/on. I.e. if we know the recommended polling time, we can delay the spin down via an emhttp HTTP API call:

    Short self-test routine recommended polling time:      (   1) minutes.
    Extended self-test routine recommended polling time:   ( 143) minutes.
    Conveyance self-test routine recommended polling time: (   2) minutes.

    Ideally I want to schedule these tests automatically on some interval without having to alter the timer manually via the webGui and without having the test interrupted.
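    As a rough sketch of the check being requested (this is not emhttp's actual code, and the device name is a placeholder), the spin-down path could test for a running self-test first and defer if one is found:

    #!/bin/bash
    # Sketch: defer spindown while a SMART self-test is running on the drive.
    DEV=/dev/sdb   # placeholder device

    if smartctl -c "$DEV" | grep -qi 'self-test routine in progress'; then
        # A self-test is active; skip spindown now and let the next timer cycle retry.
        echo "SMART self-test in progress on $DEV, deferring spindown"
    else
        hdparm -y "$DEV"
    fi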