
Leaderboard


Popular Content

Showing content with the highest reputation since 02/26/19 in all areas

  1. 7 points
    If someone does want to help, then in essence the problem is that the runc version included in Unraid v6.6.7 and v6.7.0rc5 doesn't currently have an Nvidia patch available for it. One approach we've thought of is replacing the runc shipped with Unraid entirely with a separate version that does have the patch available, although we're not sure whether this will cause problems. Bassrock and I have been working on it, but he's very busy at the moment and I'm on a family holiday with my wife and daughter, so I've been limited to working on this after everyone else has gone to bed. I'm not willing to spend any time during the day working on it, as I see little of my daughter/wife when we're working and at home, so I'm cherishing every minute of the time I'm spending with them on holiday, and for me that is way more important than anything Unraid or Nvidia related. Sent from my Mi A1 using Tapatalk
  2. 5 points
  3. 4 points
    It's not an issue with stock Unraid; the issue is that there isn't a patch available for this runc version. Because of the recent Docker update for security reasons, Nvidia haven't caught up yet. Sent from my Mi A1 using Tapatalk
  4. 4 points
    I'm on holiday with my family. I have tried to compile it several times, but there are some issues that need working on. It will be ready when it's ready; a week for something that is free is no time at all. We're not releasing the source scripts for reasons I outlined in the original script, but if someone isn't happy with the timescales we work on, then they are more than welcome to compile and create this solution themselves and debug any issues. The source code is all out there. I've made my feelings about this sort of thing well known before, but I will outline it again: we're volunteers with families, jobs, wives and lives to lead. Until the day arrives when working on this stuff pays our mortgages, feeds our kids and allows us to resign our full-time jobs, things happen at our pace only. We have a Discord channel that people can join, and if they want to get involved then just ask; but strangely, whenever I offer, the standard reply is that people don't have enough free time. If that is the case, fine, but don't assume any of us have any more free time than you. We don't; we just choose to dedicate what little free time we have to this project.
  5. 4 points
    I'm seeing the same thing as well. Found this when searching around: https://forums.sabnzbd.org/viewtopic.php?t=23364
It started up normally after doing the following:
1. Connected to the docker (my container name was "sabnzbd"). To connect to a running Docker container, use docker ps to get the name of the existing container, then use docker exec -it <container name> /bin/bash to get a bash shell in the container.
2. cd /config/admin
3. mv server.cert server.cert.old (or delete it, but I was trying to play it safe)
4. mv server.key server.key.old (or delete it, but again playing it safe)
I then did an ls -al and saw that server.cert was immediately recreated, but not server.key. I checked SAB and it was then running normally.
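For anyone who wants the whole fix in one go, here is a minimal sketch of the steps above as a single script (it assumes your container is also named "sabnzbd"; adjust it to whatever docker ps shows for you):

#!/bin/bash
# Rename the stale self-signed cert/key inside the SABnzbd container so they get regenerated.
con="sabnzbd"    # change to your container name from `docker ps`
docker exec "$con" /bin/bash -c 'cd /config/admin && mv server.cert server.cert.old && mv server.key server.key.old'
# Restarting the container afterwards is my own addition; in the steps above SAB recreated
# the cert on its own, so this is just belt and braces.
docker restart "$con"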
  6. 3 points
    I'd like to see rsync as GUI-based instead of trying to figure out the command lines. It should also show you the completion status of the rsync, as I don't see the overall progress of the sync in the command line. It would be nice to have it GUI-based like FreeNAS: check a box for this, etc. It's just a thought.
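For what it's worth, newer rsync builds can at least show an overall progress figure on the command line in the meantime; a minimal example (the source/destination paths are just placeholders):

# rsync 3.1+ : --info=progress2 prints a single overall progress line
# (bytes copied, percentage, throughput, ETA) instead of per-file output.
rsync -a --info=progress2 /mnt/user/source/ /mnt/backup/destination/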
  7. 3 points
    Any updates on this thread? I am also experiencing this issue. I request that this thread be changed from Minor to Urgent: due to not being able to access the NAS via SMB, it is a showstopper, in my humble opinion.
  8. 3 points
    @ezhik found that downgrading LSI 2008/2308 firmware to p16 restores the trim function with the current driver, so the trim issue is caused by the combination of the driver, the firmware and the filesystem. While I personally wouldn't like to be running an older firmware, it might be worth considering for users without a better option.
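For anyone wanting to verify whether TRIM is actually getting through on their controller before and after a firmware change, a quick sanity check from the console might look like this (the mount point is just an example):

# Non-zero DISC-GRAN / DISC-MAX values mean the kernel exposes discard (TRIM) for that device.
lsblk --discard
# Try a manual trim on a mounted SSD filesystem; it reports how many bytes were trimmed,
# and errors out if discard isn't supported along the path.
fstrim -v /mnt/cache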
  9. 3 points
    #!/bin/bash
#This should always return the name of the docker container running plex - assuming a single plex docker on the system.
con="`docker ps --format "{{.Names}}" | grep -i plex`"

echo ""
echo "<b>Applying hardware decode patch...</b>"
echo "<hr>"

#Check to see if Plex Transcoder2 exists first.
exists=`docker exec -i $con stat "/usr/lib/plexmediaserver/Plex Transcoder2" >/dev/null 2>&1; echo $?`
if [ $exists -eq 1 ]; then
    # If it doesn't, we run the clause below
    docker exec -i $con mv "/usr/lib/plexmediaserver/Plex Transcoder" "/usr/lib/plexmediaserver/Plex Transcoder2"
    docker exec -i $con /bin/sh -c 'printf "#!/bin/sh\nexec /usr/lib/plexmediaserver/Plex\ Transcoder2 -hwaccel nvdec "\""\$@"\""" > "/usr/lib/plexmediaserver/Plex Transcoder";'
    docker exec -i $con chmod +x "/usr/lib/plexmediaserver/Plex Transcoder"
    docker exec -i $con chmod +x "/usr/lib/plexmediaserver/Plex Transcoder2"
    docker restart $con
    echo ""
    echo '<font color="green"><b>Done!</b></font>' #Green means go!
else
    echo ""
    echo '<font color="red"><b>Patch already applied or invalid container!</b></font>' #Red means stop!
fi

EDIT: Just corrected some flawed assumptions in the logic above. Using grep -i to grab the container name so that it matches without case sensitivity. Using a variable to capture the return value of the stat, since docker exec -it can't be used and docker exec -i always returns 0. Flipped -eq 0 to -eq 1 since that was the inverse of the intended behavior. Only weird thing is that something prints "plex" in lowercase and I don't know where.
EDIT2: Figured that out: docker restart $con prints the name of the container once it's restarted. Could redirect the output to /dev/null, though.
  10. 2 points
    Parity Check Tuning plugin

The Parity Check Tuning plugin is designed to allow you to split a parity check into increments and then specify when those increments should be run. It will be of particular use to those who have large parity drives, so that the parity check takes a long time, and who leave their Unraid servers powered on 24x7. The idea is that you can specify time slots when the increments should be run, chosen to be at times when the Unraid server is likely to be idle. As an example, on my system I have a 10TB parity disk and an uninterrupted parity check takes about 30 hours to complete. I have my normal scheduled parity checks set to run monthly. By using this plugin to run 3-hour increments, the elapsed time extends to 10 days (10 x 3 = 30), but I do not notice any impact on my normal use, as the increments are run when I am not using the system. Once enough increments have run to complete the scheduled parity check, no further increments will be run until the time for the next scheduled check comes around.

The settings page is added as an extra section to the Settings->Scheduler page (see the screenshot below) in the Unraid GUI, as this seemed the most logical place for it to appear. The initial release of the plugin allows you to specify a single daily time slot for running increments. This seems to satisfy the basic use case, but I am amenable to others making the case for something more sophisticated.

Debug feature
If you enable the option for debug logging then you will see reasonably verbose entries appearing in the syslog about how this plugin is functioning internally. All these entries include the word DEBUG, so it is clear that they have been activated by turning on the debug logging. Although this feature is primarily aimed at tracking down any issues that might be reported and at developing new features, the entries will be meaningful to any users interested in such matters. When this option is set to Yes, you are offered an additional option of Hourly for the frequency at which this plugin should pause/resume parity check increments. This was added primarily to help with testing and to help track down any issues that users might experience in using the plugin. Early feedback has suggested that users new to this plugin can use this feature as a way of getting a feel for how the plugin operates.

Built-in Help
The settings page for this plugin has built-in help to describe the meaning of the various settings. You can click on the description text for any particular setting to toggle it on/off for that setting, or you can turn it on/off at the page level by using the standard Help toggle in the Unraid GUI. Suggestions for improving the wording or expanding on the provided text are welcomed, as it is not intended to produce any separate documentation.

Planned Enhancements
There are a few enhancements that are already planned (and on which work has started):
- The settings screen currently has an entry for whether parity checks started outside a normal scheduled one (e.g. manually started, or started by the system after an unclean shutdown) should also be run in increments. It is likely that in such a scenario the user is interested in getting their array back to health as soon as possible and would like the check to run without interruption. At the moment you can only specify Yes, as the code to support the No option is not yet complete.
- Improve the history kept about parity checks that are paused and resumed using this plugin, so that the actual running time and the total elapsed time of the parity check are both tracked.
- Pause parity checks if disks overheat and resume them when they cool down. Ideally an Unraid server has sufficient cooling that such a feature should not be required, but anecdotal evidence suggests that a significant number of people have problems with systems overheating under load.
Suggestions for other possibilities are always welcomed.

Wish List
This is a holder for "blue sky" ideas that have been expressed, for which there is no idea whether they are even technically possible. They are kept here as a reminder, and for others to perhaps expand on and even come up with ideas for implementation.
- Auto-detect idle periods: instead of the user having to specify start/stop times for running parity check increments, the plugin would automatically detect periods when the system is idle and resume a parity check. This would need the complementary option of automatically detecting that the system is no longer idle so that the check can be paused.
- Avoid running a parity check if the mover is running. The mover and parity checking severely degrade each other's performance, so some way of removing (or at least minimising) this conflict would be desirable. There are a lot of permutations that need to be thought through to come up with a sensible strategy.
- Stop docker containers during a parity check: the ability to stop specified docker containers before the check runs and restart them after the check is paused or completed. A workaround for this would be to use the User Scripts plugin, although an integrated capability would be easier to use.
- Resume parity checks on array start. The current Limetech implementation of pause/resume does not allow a parity check to be started from any point except the beginning. If the ability to start at a defined offset is ever provided then this could be implemented.
- Partial parity checks: a different facet of resuming parity checks on array start, where you deliberately set the system up to perform part of a parity check with reboots in between the parts.

Feedback
Feedback from users on the facilities offered by this plugin is welcomed, and is likely to be used to guide the direction of any future enhancements. It will be interesting to hear how useful users find this plugin in the normal running of their systems. Please feel free to suggest any changes that you think would enhance the experience, even if it is only a rewording of text.

Requirements
- Unraid 6.7 rc3 or later
- Community Applications (CA) plugin. It is expected that this plugin will be installed via the Apps tab (i.e. the Community Applications plugin) and the appropriate template has been prepared to allow CA to handle it tidily.

Installation
The Parity Check Tuning plugin is available for installation via the Community Applications plugin. If you navigate to the Apps tab and search for 'Parity Tuning', this plugin will show up and can be installed from there. Once the plugin is installed, if you go to Settings->Scheduler in the Unraid GUI you will see an extra section has appeared that allows you to specify the settings you want to be used for this plugin.

Restrictions/Limitations
- This plugin does not initiate a parity check; it only provides facilities for pausing/resuming one according to the specified criteria.
- If there is no parity check running during the timeslot specified for an increment then this plugin takes no action.
- If the array is stopped for any reason then the current progress of a running parity check is lost. This means that the next time the array is started the parity check will need to be restarted from the beginning. This is a restriction imposed by the current Limetech implementation. The plugin is designed so that this restriction can easily be removed if Limetech provide a way of starting parity checks at a specified offset rather than starting all parity checks from the beginning.
  11. 2 points
    They just need to push Samba 4.9.5 before 6.7 final. It fixes this bug. https://www.samba.org/samba/history/samba-4.9.5.html @limetech
  12. 2 points
    Ok, this may be dumb, but I have a use case that this would be really effective for. Currently I pass through two unassigned 10k drives to a VM as scratch disks for audio/video editing. In the VM they are then set up as RAID 0. Super fast. The problem is that the drives are then bound to that VM. I can't use the disks for any other VM, nor span a disk image (work areas) for separate VMs on that pool. I think it would be better to have the host (Unraid) manage the RAID, and then mount the "network" disks and use them that way. Since the VM uses paravirtualized 10GbE adapters, performance should be no issue, and multiple VMs could access them as well. Why don't I just add more drives to my current cache pool? Separation. I don't want the dockers that are running, or the mover, or anything else to interfere with performance. Plus, I'm not sure how mixing SSDs and spinners would work out. Maybe ok? I'm sure someone has done that. TL;DR: Essentially I'm suggesting that we be able to have more than one pool of drives in a specifiable RAID setup (0 and 10, please!)
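Until something like that exists in the GUI, a rough sketch of what "let the host manage the raid" could look like with btrfs from the command line (device names and mount point are placeholders, and this sits outside what Unraid manages, so treat it as an illustration only):

# Create a two-device btrfs pool with data and metadata striped (RAID0) across both drives.
mkfs.btrfs -f -d raid0 -m raid0 /dev/sdX /dev/sdY
mkdir -p /mnt/scratch
mount /dev/sdX /mnt/scratch   # mounting either member device brings up the whole pool
# The pool could then be shared to the VMs over the paravirtualized 10GbE adapters
# (SMB/NFS) instead of binding the raw disks to a single VM.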
  13. 2 points
    I think leaving it out is the best bet, thanks for being open to including it though
  14. 2 points
    Thanks for digging into that. So we should forget about the patch for the 4.19 kernel, then?
  15. 2 points
    After doing some more digging, from what I can tell libva (ffmpeg, vainfo, intel_gpu_top) all fail to connect to the device. I've tried updating libva (and friends), but it seems something else is missing. Unfortunately it doesn't seem to be as simple as adding the missing chipset id. ☹️
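In case it helps anyone hitting the same wall, the sort of checks I mean are along these lines (the render node path is the usual default, but it may differ on your system):

# Confirm the render node is actually exposed in the environment you're testing from.
ls -l /dev/dri
# Ask libva which driver and profiles it can load; an error here usually means the driver
# doesn't recognise the chipset id or can't open the device node.
vainfo
# Live per-engine GPU utilisation while a transcode is running.
intel_gpu_top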
  16. 2 points
    This was the main reason I was originally asking as well. Currently my i9-9900k is working a lot harder than it should, with Plex being unable to leverage hardware transcoding.
  17. 2 points
    @1812 Weird, I did some switch research as I'm re-doing my home network and have a few of these on the way, even before I saw your post. We'll see if I run into the same issues you did, getting the switches a few months later 🤔😬
  18. 2 points
    Other people with Plex Pass issues - comforting for me that it's not something related to the docker image I've built, not so comforting for those who have Plex natively installed and can't roll back easily: https://www.reddit.com/r/PleX/comments/auo6jd/new_beta_pms_version_available_1151710ece95b3a1/
  19. 2 points
    I've been running my two Unraid servers as VMs on top of ESXi for years. I'm using PlopKExec without problems: https://download.plop.at/plopkexec/plopkexec.iso If you don't run any VMs, 4GB would be reasonable, but you can add more later if you want. As for transcoding, you should just run some tests to see how much it eats your CPUs; start with 4 vCPUs. The free version of ESXi has an 8 vCPU limitation. I try to avoid any transcoding in my Plex server, but you can go the new Unraid Nvidia route and transcode with your GPU - check the plugin forum page. If you pass your IBM 1015 through to the Unraid VM, then ESXi has nothing to do with it or the drives connected to it - Unraid manages the spindown of those drives. And as for SSDs attached to the host - I have two in my server and have never noticed any problems with them.
  20. 2 points
    V6.6.7 and V6.7.0rc5 uploaded. Sent from my Mi A1 using Tapatalk
  21. 2 points
    Enable Hardware Decoding in Plex

#!/bin/bash
con="plex"

echo ""
echo "<font color='red'><b>Applying hardware decode patch...</b></font>"
echo "<hr>"

docker exec -i $con mv "/usr/lib/plexmediaserver/Plex Transcoder" "/usr/lib/plexmediaserver/Plex Transcoder2"
docker exec -i $con /bin/sh -c 'printf "#!/bin/sh\nexec /usr/lib/plexmediaserver/Plex\ Transcoder2 -hwaccel nvdec "\""\$@"\""" > "/usr/lib/plexmediaserver/Plex Transcoder";'
docker exec -i $con chmod +x "/usr/lib/plexmediaserver/Plex Transcoder"
docker exec -i $con chmod +x "/usr/lib/plexmediaserver/Plex Transcoder2"
docker restart $con

echo ""
echo "<font color='red'><b>Done!</b></font>"

Description: Translation of the manual steps required to patch the Plex docker to enable hardware decoding if you're running an Nvidia version of Unraid.

Quick Start: Set up and run as a script every time Plex updates. If your container is not called "plex", change the "con" variable (see notes).

Disclaimer: If it can be improved (or if it's dangerously wrong), please let me know.

Notes:
- Should be run when Plex is installed/updated.
- From the command line, run "docker ps" to see what your Plex container is called. Set that as the "con" variable in your script (mine is "plex").
- This script is only required until Plex officially supports hardware decoding.
- It performs the same steps as recommended in the Nvidia plugin support thread here (where it was originally published), namely:
  - Renames the file "Plex Transcoder" to "Plex Transcoder2"
  - Creates a new "Plex Transcoder" file with the suggested contents
  - Changes permissions on both "Plex Transcoder" and "Plex Transcoder2" files (not sure it's required on Transcoder2 - seemed to work for me without)
  - Restarts the Plex container (not sure if required, but doing it anyhow)
- Probably best nothing is playing whilst the script is run.
- You'll need to have Plex running for the script to work. It would require different code if stopped (it would probably be safer to stop the container first, make the changes and then start again, but here we are).
- Run "nvidia-smi dmon -s u" from the terminal (not within the Plex container) to check whether the decoding is working. Set a video to play in a transcoded state, and the 3rd and 4th columns from the end should be non-zero.
- This includes the "exec" addition to the Plex Transcoder file contents.

Good luck!
  22. 2 points
    Unionfs works 'ok', but it's a bit clunky, as per the scripts above. Rclone are working on their own union remote, which would hopefully include hardlink support, unlike unionfs. It might also remove the need for a separate rclone move script by automating transfers from the local drive to the cloud: https://forum.rclone.org/t/advantage-of-new-union-remote/7049/1
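For context, the "clunky" arrangement the scripts use is roughly this shape: a writable local branch merged over the read-only rclone mount, with a separate rclone move script emptying the local branch later (paths are examples, and the binary may be called unionfs-fuse depending on the build):

# Merge a local (RW) branch with the cloud (RO) rclone mount so new files land locally
# and reads fall through to the remote.
unionfs -o cow,allow_other \
    /mnt/user/rclone_upload/google_vfs=RW:/mnt/user/mount_rclone/google_vfs=RO \
    /mnt/user/mount_unionfs/google_vfs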
  23. 2 points
    Key elements of my rclone mount script:

rclone mount --rc --rc-addr=172.30.12.2:5572 --allow-other --buffer-size 1G --dir-cache-time 72h --drive-chunk-size 32M --fast-list --log-level INFO --vfs-read-chunk-size 128M --vfs-read-chunk-size-limit off gdrive_media_vfs: /mnt/user/mount_rclone/google_vfs

--buffer-size: determines the amount of memory that will be used to buffer data in advance. I think this is per stream.
--dir-cache-time: sets how long a directory should be considered up to date and not refreshed from the backend. Changes made locally in the mount may appear immediately or invalidate the cache, so if you upload via rclone you can set this to a very high number. If you make changes directly on the remote they won't be picked up until the cache expires.
--drive-chunk-size: for files uploaded via the mount. I rarely do this, but I think I should set this higher for my 300/300 connection.
--fast-list: improves speed, but only in tandem with rclone rc --timeout=1h vfs/refresh recursive=true.
--vfs-read-chunk-size: this is the key variable. It controls how much data is requested in the first chunk of playback - too big and your start times will be too slow, too small and you might get stuttering at the start of playback. 128M seems to work for most, but try 64M and 32M.
--vfs-read-chunk-size-limit: each successive vfs-read-chunk-size doubles in size until this limit is hit, e.g. for me 128M, 256M, 512M, 1G etc. I've set the limit to off so as not to cap how much is requested.

Read more on vfs-read-chunk-size: https://forum.rclone.org/t/new-feature-vfs-read-chunk-size/5683
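For completeness, the dir-cache priming call that --fast-list works in tandem with looks like this once the mount is up (point --url at whatever you set --rc-addr to; the address here is just my example above):

# Recursively walk the remote and warm the VFS directory cache so browsing the mount is instant.
rclone rc --url http://172.30.12.2:5572 --timeout=1h vfs/refresh recursive=true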
  24. 2 points
    How do I replace/upgrade my single cache device? (unRAID v6.2 and above only)

This procedure assumes that there are at least some docker- and/or VM-related files on the cache disk; some of these steps are unnecessary if there aren't.

1. Stop all running dockers/VMs.
2. Settings -> VM Manager: disable VMs and click Apply.
3. Settings -> Docker: disable Docker and click Apply.
4. Click on Shares and change to "Yes" all cache shares with "Use cache disk:" set to "Only" or "Prefer".
5. Check that there's enough free space on the array and invoke the mover by clicking "Move Now" on the Main page.
6. When the mover finishes, check that your cache is empty (any files on the cache root will not be moved, as they are not part of any share).
7. Stop the array, replace the cache device, assign it, start the array and format the new cache device (if needed); check that it's using the filesystem you want.
8. Click on Shares and change to "Prefer" all shares that you want moved back to cache.
9. On the Main page click "Move Now".
10. When the mover finishes, re-enable Docker and VMs.
  25. 1 point
    Clear an unRAID array data drive (for the Shrink array wiki page)

This script is for use in clearing a drive that you want to remove from the array, while maintaining parity protection. I've added a set of instructions within the Shrink array wiki page for it. It is designed to be as safe as possible, and will not run unless specific conditions are met:
- The drive must be a data drive that is a part of an unRAID array
- It must be a good drive, mounted in the array, capable of every sector being zeroed (no bad sectors)
- The drive must be completely empty, no data at all left on it. This is tested for!
- The drive should have a single root folder named clear-me - exactly 8 characters, 7 lowercase and 1 hyphen. This is tested for!

Because the User.Scripts plugin does not allow interactivity (yet!), some kludges had to be used, one being the clear-me folder, and the other being a 60 second wait before execution to allow the user to abort. I actually like the clear-me kludge, because it means the user cannot possibly make a mistake and lose data. The user *has* to empty the drive first, then add this odd folder.

#!/bin/bash
# A script to clear an unRAID array drive.  It first checks the drive is completely empty,
# except for a marker indicating that the user desires to clear the drive.  The marker is
# that the drive is completely empty except for a single folder named 'clear-me'.
#
# Array must be started, and drive mounted.  There's no other way to verify it's empty.
# Without knowing which file system it's formatted with, I can't mount it.
#
# Quick way to prep drive: format with ReiserFS, then add 'clear-me' folder.
#
# 1.0  first draft
# 1.1  add logging, improve comments
# 1.2  adapt for User.Scripts, extend wait to 60 seconds
# 1.3  add progress display; confirm by key (no wait) if standalone; fix logger
# 1.4  only add progress display if unRAID version >= 6.2

version="1.4"
marker="clear-me"
found=0
wait=60
p=${0%%$P}   # dirname of program
p=${p:0:18}
q="/tmp/user.scripts/"

echo -e "*** Clear an unRAID array data drive ***  v$version\n"

# Check if array is started
ls /mnt/disk[1-9]* 1>/dev/null 2>/dev/null
if [ $? -ne 0 ]
then
   echo "ERROR:  Array must be started before using this script"
   exit
fi

# Look for array drive to clear
n=0
echo -n "Checking all array data drives (may need to spin them up) ... "
if [ "$p" == "$q" ] # running in User.Scripts
then
   echo -e "\n"
   c="<font color=blue>"
   c0="</font>"
else #set color teal
   c="\x1b[36;01m"
   c0="\x1b[39;49;00m"
fi

for d in /mnt/disk[1-9]*
do
   x=`ls -A $d`
   z=`du -s $d`
   y=${z:0:1}
#   echo -e "d:"$d "x:"${x:0:20} "y:"$y "z:"$z

   # the test for marker and emptiness
   if [ "$x" == "$marker" -a "$y" == "0" ]
   then
      found=1
      break
   fi
   let n=n+1
done

#echo -e "found:"$found "d:"$d "marker:"$marker "z:"$z "n:"$n

# No drives found to clear
if [ $found == "0" ]
then
   echo -e "\rChecked $n drives, did not find an empty drive ready and marked for clearing!\n"
   echo "To use this script, the drive must be completely empty first, no files"
   echo "or folders left on it.  Then a single folder should be created on it"
   echo "with the name 'clear-me', exactly 8 characters, 7 lowercase and 1 hyphen."
   echo "This script is only for clearing unRAID data drives, in preparation for"
   echo "removing them from the array.  It does not add a Preclear signature."
   exit
fi

# check unRAID version
v1=`cat /etc/unraid-version`
# v1 is 'version="6.2.0-rc5"' (fixme if 6.10.* happens)
v2="${v1:9:1}${v1:11:1}"
if [[ $v2 -ge 62 ]]
then
   v=" status=progress"
else
   v=""
fi
#echo -e "v1=$v1  v2=$v2  v=$v\n"

# First, warn about the clearing, and give them a chance to abort
echo -e "\rFound a marked and empty drive to clear: $c Disk ${d:9} $c0 ( $d ) "
echo -e "* Disk ${d:9} will be unmounted first."
echo "* Then zeroes will be written to the entire drive."
echo "* Parity will be preserved throughout."
echo "* Clearing while updating Parity takes a VERY long time!"
echo "* The progress of the clearing will not be visible until it's done!"
echo "* When complete, Disk ${d:9} will be ready for removal from array."
echo -e "* Commands to be executed:\n***** $c umount $d $c0\n***** $c dd bs=1M if=/dev/zero of=/dev/md${d:9} $v $c0\n"

if [ "$p" == "$q" ] # running in User.Scripts
then
   echo -e "You have $wait seconds to cancel this script (click the red X, top right)\n"
   sleep $wait
else
   echo -n "Press ! to proceed. Any other key aborts, with no changes made. "
   ch=""
   read -n 1 ch
   echo -e -n "\r \r"
   if [ "$ch" != "!" ]
   then
      exit
   fi
fi

# Perform the clearing
logger -tclear_array_drive "Clear an unRAID array data drive  v$version"
echo -e "\rUnmounting Disk ${d:9} ..."
logger -tclear_array_drive "Unmounting Disk ${d:9}  (command: umount $d ) ..."
umount $d
echo -e "Clearing Disk ${d:9} ..."
logger -tclear_array_drive "Clearing Disk ${d:9}  (command: dd bs=1M if=/dev/zero of=/dev/md${d:9} $v ) ..."
dd bs=1M if=/dev/zero of=/dev/md${d:9} $v
#logger -tclear_array_drive "Clearing Disk ${d:9}  (command: dd bs=1M if=/dev/zero of=/dev/md${d:9} status=progress count=1000 seek=1000 ) ..."
#dd bs=1M if=/dev/zero of=/dev/md${d:9} status=progress count=1000 seek=1000

# Done
logger -tclear_array_drive "Clearing Disk ${d:9} is complete"
echo -e "\nA message saying \"error writing ... no space left\" is expected, NOT an error.\n"
echo -e "Unless errors appeared, the drive is now cleared!"
echo -e "Because the drive is now unmountable, the array should be stopped,"
echo -e "and the drive removed (or reformatted)."
exit

The attached zip is 'clear an array drive.zip', containing both the User.Scripts folder and files, and also the script named clear_array_drive (same script) for standalone use. Either extract the files for User.Scripts, or extract clear_array_drive into the root of the flash and run it from there.

Also attached is 'clear an array drive (test only).zip', for playing with this and testing it. It contains exactly the same scripts, but writing is turned off, so no changes at all will happen. It is designed for those afraid of clearing the wrong thing, or not trusting these scripts yet. You can try it in various conditions and see what happens; it will pretend to do the work, but no changes at all will be made.

I do welcome examination by bash shell script experts, to ensure I made no mistakes. It's passed my own testing, but I'm not an expert - rather, a very frustrated bash user, who lost many hours with the picky syntax! I really don't understand why people like type-less languages! It only *looks* easier.

After a while, you'll be frustrated with the 60 second wait (when run in User Scripts). I did have it at 30 seconds, but decided 60 was better for new users, for now. I'll add interactivity later, for standalone command line use. It also really needs a way to provide progress info while it's clearing. I have ideas for that.

The included 'clear_array_drive' script can now be run at the command line within any unRAID v6, and possibly unRAID v5, but is not tested there. (Procedures for removing a drive are different in v5.) Progress display is only available in 6.2 or later. In 6.1 or earlier, it's done when it's done.

Update 1.3 - add display of progress; confirm by key '!' (no wait) if standalone; fix logger; add a bit of color. Really appreciate the tip on 'status=progress', looks pretty good. Lots of numbers presented; the ones of interest are the second and the last.

Update 1.4 - make progress display conditional for 6.2 or later; hopefully now the script can be run in any v6, possibly v5.

clear_an_array_drive.zip
clear_an_array_drive_test_only.zip