Posts posted by flyize
-
This seems to indicate that v4 is ready! They've already updated the mobile app, which doesn't seem to work with v3.
-
2 minutes ago, jonathanm said:
No. Adding parity2 only writes the generated data to the parity2 drive, it doesn't touch or verify the information on parity1.
I know we're getting way off topic here, but doesn't it seem like it should? If parity1 is wrong, then won't parity2 be calculated incorrectly as well?
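For what it's worth, the reason the two can disagree is that both parities are computed from the data disks only: parity1 is a byte-wise XOR across the data drives, while parity2 is a separate Reed-Solomon style code over the same data, so building parity2 never reads parity1. A toy sketch (simplified single-byte arithmetic, not Unraid's actual implementation):

```shell
# One byte from each of three hypothetical data disks.
d1=5; d2=9; d3=12
# Parity1 (P) is the XOR of the data bytes; parity2 (Q) is a different function
# of the same data bytes, which is why writing Q neither reads nor fixes P.
p=$(( d1 ^ d2 ^ d3 ))
echo "P=$p"   # P=0 here, since 5^9=12 and 12^12=0
```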
-
2 minutes ago, jonathanm said:
Be sure to do a parity check immediately after completing the removal process to be sure everything worked properly and is in sync.
I'll be adding a second parity drive after I remove this data drive. Is it safe to assume that parity is checked when parity2 is added?
-
LOL I think I'm safe. Thanks!
-
1 hour ago, jonathanm said:
In my experience, that script takes multiple days to run, even on a small drive. You are probably much better off following the normal method and recalculating parity.
I can't just restart dd?
edit: That's what I did yesterday anyway. It took about 24 hours to zero out an 8TB drive.
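For the record, dd can in principle pick up where it left off using seek= rather than restarting from byte zero, if you know how many blocks were already written. A sketch on a scratch file (on the array the target would be /dev/mdX, and the already-written count is an assumption you would have to supply):

```shell
# Hypothetical resume: seek= skips past output blocks already written.
out=$(mktemp)
dd if=/dev/zero of="$out" bs=1M count=4 2>/dev/null          # first pass wrote 4 MiB
dd if=/dev/zero of="$out" bs=1M seek=4 count=4 2>/dev/null   # resume after those 4 MiB
size=$(stat -c %s "$out")
echo "$size"   # 8388608 bytes, i.e. 8 MiB total
rm "$out"
```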
-
On 9/3/2016 at 8:06 PM, RobJ said:
Clear an unRAID array data drive (for the Shrink array wiki page)
This script is for use in clearing a drive that you want to remove from the array, while maintaining parity protection. I've added a set of instructions within the Shrink array wiki page for it. It is designed to be as safe as possible, and will not run unless specific conditions are met -
- The drive must be a data drive that is a part of an unRAID array
- It must be a good drive, mounted in the array, capable of every sector being zeroed (no bad sectors)
- The drive must be completely empty, no data at all left on it. This is tested for!
- The drive should have a single root folder named clear-me - exactly 8 characters, 7 lowercase and 1 hyphen. This is tested for!
Because the User.Scripts plugin does not allow interactivity (yet!), some kludges had to be used, one being the clear-me folder, and the other being a 60 second wait before execution to allow the user to abort. I actually like the clear-me kludge, because it means the user cannot possibly make a mistake and lose data. The user *has* to empty the drive first, then add this odd folder.
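The precondition the script tests can be sketched like this, with a scratch directory standing in for /mnt/diskN (touching a real array drive in an example would be unwise):

```shell
# The script's safety check: drive empty except for a single 'clear-me' folder.
d=$(mktemp -d)        # stand-in for /mnt/diskN
mkdir "$d/clear-me"
contents=$(ls -A "$d")
[ "$contents" = "clear-me" ] && echo "ready to clear"
rm -r "$d"
```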
#!/bin/bash
# A script to clear an unRAID array drive.  It first checks the drive is completely empty,
# except for a marker indicating that the user desires to clear the drive.  The marker is
# that the drive is completely empty except for a single folder named 'clear-me'.
#
# Array must be started, and drive mounted.  There's no other way to verify it's empty.
# Without knowing which file system it's formatted with, I can't mount it.
#
# Quick way to prep drive: format with ReiserFS, then add 'clear-me' folder.
#
# 1.0  first draft
# 1.1  add logging, improve comments
# 1.2  adapt for User.Scripts, extend wait to 60 seconds
# 1.3  add progress display; confirm by key (no wait) if standalone; fix logger
# 1.4  only add progress display if unRAID version >= 6.2

version="1.4"
marker="clear-me"
found=0
wait=60
p=${0%%$P}   # dirname of program
p=${p:0:18}
q="/tmp/user.scripts/"

echo -e "*** Clear an unRAID array data drive ***  v$version\n"

# Check if array is started
ls /mnt/disk[1-9]* 1>/dev/null 2>/dev/null
if [ $? -ne 0 ]
then
   echo "ERROR: Array must be started before using this script"
   exit
fi

# Look for array drive to clear
n=0
echo -n "Checking all array data drives (may need to spin them up) ... "
if [ "$p" == "$q" ]   # running in User.Scripts
then
   echo -e "\n"
   c="<font color=blue>"
   c0="</font>"
else   # set color teal
   c="\x1b[36;01m"
   c0="\x1b[39;49;00m"
fi

for d in /mnt/disk[1-9]*
do
   x=`ls -A $d`
   z=`du -s $d`
   y=${z:0:1}
#   echo -e "d:"$d "x:"${x:0:20} "y:"$y "z:"$z

   # the test for marker and emptiness
   if [ "$x" == "$marker" -a "$y" == "0" ]
   then
      found=1
      break
   fi
   let n=n+1
done

#echo -e "found:"$found "d:"$d "marker:"$marker "z:"$z "n:"$n

# No drives found to clear
if [ $found == "0" ]
then
   echo -e "\rChecked $n drives, did not find an empty drive ready and marked for clearing!\n"
   echo "To use this script, the drive must be completely empty first, no files"
   echo "or folders left on it.  Then a single folder should be created on it"
   echo "with the name 'clear-me', exactly 8 characters, 7 lowercase and 1 hyphen."
   echo "This script is only for clearing unRAID data drives, in preparation for"
   echo "removing them from the array.  It does not add a Preclear signature."
   exit
fi

# check unRAID version
v1=`cat /etc/unraid-version`
# v1 is 'version="6.2.0-rc5"' (fixme if 6.10.* happens)
v2="${v1:9:1}${v1:11:1}"
if [[ $v2 -ge 62 ]]
then
   v=" status=progress"
else
   v=""
fi
#echo -e "v1=$v1  v2=$v2  v=$v\n"

# First, warn about the clearing, and give them a chance to abort
echo -e "\rFound a marked and empty drive to clear: $c Disk ${d:9} $c0 ( $d ) "
echo -e "* Disk ${d:9} will be unmounted first."
echo "* Then zeroes will be written to the entire drive."
echo "* Parity will be preserved throughout."
echo "* Clearing while updating Parity takes a VERY long time!"
echo "* The progress of the clearing will not be visible until it's done!"
echo "* When complete, Disk ${d:9} will be ready for removal from array."
echo -e "* Commands to be executed:\n***** $c umount $d $c0\n***** $c dd bs=1M if=/dev/zero of=/dev/md${d:9} $v $c0\n"

if [ "$p" == "$q" ]   # running in User.Scripts
then
   echo -e "You have $wait seconds to cancel this script (click the red X, top right)\n"
   sleep $wait
else
   echo -n "Press ! to proceed. Any other key aborts, with no changes made. "
   ch=""
   read -n 1 ch
   echo -e -n "\r \r"
   if [ "$ch" != "!" ]
   then
      exit
   fi
fi

# Perform the clearing
logger -tclear_array_drive "Clear an unRAID array data drive  v$version"
echo -e "\rUnmounting Disk ${d:9} ..."
logger -tclear_array_drive "Unmounting Disk ${d:9}  (command: umount $d ) ..."
umount $d
echo -e "Clearing Disk ${d:9} ..."
logger -tclear_array_drive "Clearing Disk ${d:9}  (command: dd bs=1M if=/dev/zero of=/dev/md${d:9} $v ) ..."
dd bs=1M if=/dev/zero of=/dev/md${d:9} $v
#logger -tclear_array_drive "Clearing Disk ${d:9}  (command: dd bs=1M if=/dev/zero of=/dev/md${d:9} status=progress count=1000 seek=1000 ) ..."
#dd bs=1M if=/dev/zero of=/dev/md${d:9} status=progress count=1000 seek=1000

# Done
logger -tclear_array_drive "Clearing Disk ${d:9} is complete"
echo -e "\nA message saying \"error writing ... no space left\" is expected, NOT an error.\n"
echo -e "Unless errors appeared, the drive is now cleared!"
echo -e "Because the drive is now unmountable, the array should be stopped,"
echo -e "and the drive removed (or reformatted)."
exit
The attached zip is 'clear an array drive.zip', containing both the User.Scripts folder and files, but also the script named clear_array_drive (same script) for standalone use. Either extract the files for User.Scripts, or extract clear_array_drive into the root of the flash, and run it from there.
Also attached is 'clear an array drive (test only).zip', for playing with this, testing it. It contains exactly the same scripts, but writing is turned off, so no changes at all will happen. It is designed for those afraid of clearing the wrong thing, or not trusting these scripts yet. You can try it in various conditions, and see what happens, and it will pretend to do the work, but no changes at all will be made.
I do welcome examination by bash shell script experts, to ensure I made no mistakes. It's passed my own testing, but I'm not an expert. Rather, a very frustrated bash user, who lost many hours with the picky syntax! I really don't understand why people like type-less languages! It only *looks* easier.
After a while, you'll be frustrated with the 60 second wait (when run in User Scripts). I did have it at 30 seconds, but decided 60 was better for new users, for now. I'll add interactivity later, for standalone command line use. It also really needs a way to provide progress info while it's clearing. I have ideas for that.
The included 'clear_array_drive' script can now be run at the command line within any unRAID v6, and possibly unRAID v5, but is not tested there. (Procedures for removing a drive are different in v5.) Progress display is only available in 6.2 or later. In 6.1 or earlier, it's done when it's done.
Update 1.3 - add display of progress; confirm by key '!' (no wait) if standalone; fix logger; add a bit of color
Really appreciate the tip on 'status=progress', looks pretty good. Lots of numbers presented, the ones of interest are the second and the last.
Update 1.4 - make progress display conditional for 6.2 or later; hopefully now, the script can be run in any v6, possibly v5
Is there a log kept of this somewhere? I let it run overnight, but it seems my Unraid browser window disconnected, and dd no longer appears to be running. Obviously I need to know whether it completed successfully before pulling the drive.
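Since the script tags its syslog entries via 'logger -tclear_array_drive', the completion message should be searchable after the fact; on the server you would grep /var/log/syslog for that tag. A sketch, simulated here with a sample line so it is self-contained:

```shell
# Simulated syslog line matching what 'logger -tclear_array_drive' would emit.
log='Apr  6 03:12:01 Tower clear_array_drive: Clearing Disk 9 is complete'
# On the server: grep clear_array_drive /var/log/syslog
msg=$(echo "$log" | grep -o 'clear_array_drive.*')
echo "$msg"   # clear_array_drive: Clearing Disk 9 is complete
```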
-
I just ordered some. Will report back!
-
It's so weird. This slow performance, mover hanging, and reboot issues just came out of nowhere. I can't explain it.
So far I have:
* Removed the SMR parity drive and added it as a data drive
* Completely replaced that drive
Is there anything obvious that I might be missing? Just for fun, I've attached new diagnostics.
-
Yes, they do.
The parity rebuild has now sped up to about 40MB/sec. I think that's because it's gotten to the empty portion of the drive. If the times are correct, it will be about 2.5 days for parity to rebuild the drive.
I'd love to try another cable for the drive, but obviously that would start everything back over.
-
So I put a new drive in there and it's still running very slowly. The new drive is SMR as well, but the rebuild is running at about 250 KB/sec. Since it's using the same SAS cable as the old drive, could that channel on the cable be bad? I'm not seeing any CRC errors...
-
So I tried to write a 1GB file via dd to disk9. It was still writing about 10 minutes later. I ran an xfs_repair to see if it would find anything and got nothing. SMART all checks out okay. Obviously, I've pulled that drive. Is there anything I can attempt to do to the drive, other than try to get a warranty repair on it?
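A write test like the one described can be done with dd; a sketch using a temp file so the example is self-contained (on the server the path would be under /mnt/disk9, and conv=fsync forces the data to disk so the reported rate is honest rather than measuring RAM):

```shell
# Sequential write test; dd's final status line reports bytes, time, and rate.
f=$(mktemp)
line=$(dd if=/dev/zero of="$f" bs=1M count=64 conv=fsync 2>&1 | tail -1)
echo "$line"
rm "$f"
```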
-
Okay, I moved the shingled drive out of parity, but I'm still having issues with the Mover running *very* slowly, or maybe even hung. Any ideas here?
-
34 minutes ago, trurl said:
yes
If you aren't planning to replace parity2 then New Config without it and check the box saying parity is valid.
My plan was going to be to move parity2 into the array, unbalance data off a non-SMR drive, then add that drive as parity2. Will that work?
-
As for just ripping parity2 out, I get this message
[quote]Start will disable the missing disk and then bring the array on-line. Install a replacement disk as soon as possible.[/quote]
Can someone confirm that parity will be maintained by the other parity drive?
-
It *was* moving files, just very slowly. With Docker/VMs disabled, it's moving them much faster.
-
Can I just remove the 2nd parity drive without having to rebuild parity?
edit: Also, I just disabled Docker and VMs, and now the Mover is flying again. Any chance that would offer clues as to what is going on?
-
30 minutes ago, John_M said:
The Mover makes sequential writes to the array, not random ones, so the persistent cache should see little use. If the drive can write directly to a shingled band it will do so. SMR disks work rather better in Unraid than in typical RAID applications, which is fortunate because you don't have just one, you have several. Parity 2 is one of them so maybe that's the problem. I don't mind using them as data disks (where there's an advantage to doing so) but I'm less enthusiastic about using them for parity.
Well that's interesting. Is it worth moving disks around so that I don't have an SMR drive in parity?
edit: Mover is still running from this morning. It looks like it's taking about 1.5 hours to move a ~2GB file.
-
Attached. Although after some thought, I have an idea what the issue might be. I added a new drive. Unfortunately, it's a shingled drive. I think when Mover runs, it very quickly overwhelms the cache on the SMR drive (cache drive is 1TB). Does that seem logical? If so, would that somehow prevent the machine from properly rebooting?
-
It appears that it took the Mover about 3 hours to move a 2GB file. What could be going on here?
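For scale, 2 GB in roughly 3 hours works out to under 200 KB/s, which is in the ballpark of an SMR drive grinding through persistent-cache rewrites. Back-of-envelope only:

```shell
# 2 GB expressed in KB, divided by 3 hours in seconds.
rate=$(awk 'BEGIN { printf "%d", 2 * 1024 * 1024 / (3 * 3600) }')
echo "${rate} KB/s"   # 194 KB/s
```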
-
Looks like my Sonarr updated last night and now can't connect to any indexers, but Radarr seems fine. Anyone seen this?
21-4-6 08:27:54.6|Error|X509CertificateValidationService|Certificate validation for https://*REMOVED* failed. RemoteCertificateChainErrors
-
Okay, I'm still seeing this issue. I was unable to reboot last night and had to power the server off and back on. Now I turned on the Mover, and it's hung. Can anyone help?
-
Apparently you see what I'm getting at here.
Yeah, a min/max with FIFO would be AWESOME. I wrote a PowerShell script for DrivePool that did that back when I was still on Windows. Maybe I'll have to dust off my scripting hat.
-
Possible to request that as a feature then? 😁
-
If I set 'Move files off cache based on age?' to yes, and set days to 0, will it simply move whatever the oldest file is, FIFO style?
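A strict FIFO pass like the one being asked about can be approximated in shell: sort the cache files by modification time and move the oldest first. A sketch with scratch directories standing in for /mnt/cache and a data disk (this is not the plugin's actual logic):

```shell
cache=$(mktemp -d); array=$(mktemp -d)   # stand-ins for /mnt/cache and /mnt/disk1
touch -d '2 days ago' "$cache/old.mkv"   # older file
touch "$cache/new.mkv"                   # newer file
# %T@ prints mtime as an epoch timestamp, so a numeric sort puts oldest first.
oldest=$(find "$cache" -type f -printf '%T@ %p\n' | sort -n | head -1 | cut -d' ' -f2-)
mv "$oldest" "$array/"
moved=$(ls "$array")
echo "$moved"   # old.mkv
rm -r "$cache" "$array"
```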
[Plugin] Mover Tuning
in Plugin Support
Posted · Edited by flyize
Would it be possible to allow 'Move files that are greater than this many days old' to be 1-7, and then by 5s? I'd really like to be able to keep things on there for a week.
edit: Or maybe even make it an input box?