Unassigned Devices Preclear - a utility to preclear disks before adding them to the array


dlandon


13 minutes ago, pras1011 said:

I have installed an 8TB HDD that I want to preclear. On the Main page you can see the HDD as unassigned with the word MOUNT next to it, but it does not appear on the Tools > Preclear page. What do I need to do?

 

The 8TB appears as part1 and part2.

UD Preclear won't let you preclear a disk with partitions. This is done for safety.

 

Install UD+, enable UD destructive mode, then click the red 'X' next to the disk ID and clear off the disk.  It will then show up in UD Preclear.
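For anyone who'd rather do the same thing from the command line, a rough equivalent of UD's destructive clear is wiping the partition signatures directly. This is only a hedged sketch (/dev/sdX is a placeholder, and this is not what the plugin itself runs), so triple-check the device first:

# Confirm you have the right disk before doing anything destructive.
lsblk -o NAME,SIZE,MODEL,SERIAL /dev/sdX

# Remove all partition-table and filesystem signatures so the disk shows
# up as a blank device that UD Preclear will accept.
wipefs --all /dev/sdX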


@pras1011 part1 and part2 show you probably still have Windows partitions on the drive.

Click the red X's to remove the partitions, or the single red X that says "clear disk" when you hover over it. Once you do that, you should get the preclear plugin icon beside the hard drive; click it and the preclear should set up. I'm currently doing two 16TBs, which takes about a week when I choose 3 cycles, but that should get you to the preclearing.

 

23 hours ago, heffe2001 said:

 

I'm seeing a very similar problem using the plugin version of the preclear script on two 8TB SAS drives I got last week. Both get to near the end of the zero phase, hang, and keep restarting until failure (I got the same restart at 0% on, I think, the 3rd retry attempt). I installed the preclear docker image to test the original preclear script on my server (running 6.12.1 at the moment), and the first drive I tried has so far completed the entire zeroing process with the preclear script that's included with the docker version and is in the final read phase. This was with two different manufacturers (one HGST, one Seagate Exos, both 8TB). The Seagate ran through the zero process 3 total attempts, with 3 retries per attempt, and failed every time; the HGST failed a 3-retry pass before I started it using the docker.

I've currently got the Exos drive running a preclear on a virgin 6.11.5 install with just the preclear & UD plugins installed, as a VM on my Proxmox server with the drive passed through to that VM. It's SO FAR at 83% and still going (VERY slowly, it's at I think 38 hrs on JUST the zero phase, lol, getting about 54 MB/s zero rate). I'll let it go until it either completes or fails on the zero, move it back into my primary rig if it passes, and try it under the docker image too.

I DO notice that the Exos drive was shown as having a reported size of 8001563222016 total on the preclear plugin version on 6.12.1, whereas under 6.11.5 it's showing 7865536647168, so I'm not sure where, exactly, it was getting the larger size from. Same controller in both machines; the only difference is it's being passed through to the VM directly rather than running directly on bare metal.

 

As far as the HGST drive being in the final step (final read), I changed NOTHING on the server or plugins, just installed the plugin docker image (reports Docker 1.22) and started it using that instead of the canned preclear script with its defaults.

 

I've now replaced my LSI card with a similar one from Art of Server and went through another preclear cycle... 3 drives completed fine, while a different one than before failed again with the same issue/error in the preclear log.

 

I've replaced both the card and the cable at this point, and since all of the drives HAVE precleared fine on one port or another, I'm loath to think it's the drives themselves. I've upgraded to 6.12.2 and made sure the plugins were all updated. No errors in dmesg, and nothing at all suspicious between the boot-up sequence and the assigning of partitions to the passed drives:

[Mon Jul  3 15:58:07 2023] mdcmd (30): import 29
[Mon Jul  3 15:58:07 2023] md: import_slot: 29 empty
[Tue Jul  4 08:03:00 2023] nvidia_uvm: module uses symbols nvUvmInterfaceDisableAccessCntr from proprietary module nvidia, inheriting taint.
[Tue Jul  4 08:03:00 2023] nvidia-uvm: Loaded the UVM driver, major device number 239.
[Tue Jul  4 19:38:21 2023]  sdk: sdk1
[Tue Jul  4 20:01:23 2023]  sdj: sdj1
[Tue Jul  4 20:27:52 2023]  sdl: sdl1

 

The first drive to pass (the drive I had troubles with on the most recent attempt) cleared everything successfully:

Jul 03 16:00:23 preclear_disk_<REDACTED1>_11281: Preclear Disk Version: 1.0.27
Jul 03 16:00:24 preclear_disk_<REDACTED1>_11281: Disk size: 22000969973760
Jul 03 16:00:24 preclear_disk_<REDACTED1>_11281: Disk blocks: 5371330560
Jul 03 16:00:24 preclear_disk_<REDACTED1>_11281: Blocks (512 bytes): 42970644480
Jul 03 16:00:24 preclear_disk_<REDACTED1>_11281: Block size: 4096
Jul 03 16:00:24 preclear_disk_<REDACTED1>_11281: Start sector: 0
Jul 03 16:00:25 preclear_disk_<REDACTED1>_11281: Zeroing: zeroing the disk started 1 of 5 retries...
Jul 03 16:00:25 preclear_disk_<REDACTED1>_11281: Zeroing: emptying the MBR.
Jul 03 21:29:12 preclear_disk_<REDACTED1>_11281: Zeroing: progress - 25% zeroed @ 268 MB/s
Jul 04 03:27:43 preclear_disk_<REDACTED1>_11281: Zeroing: progress - 50% zeroed @ 240 MB/s
Jul 04 10:20:38 preclear_disk_<REDACTED1>_11281: Zeroing: progress - 75% zeroed @ 200 MB/s
Jul 04 19:38:19 preclear_disk_<REDACTED1>_11281: Zeroing: progress - 100% zeroed @ 8 MB/s
Jul 04 19:38:21 preclear_disk_<REDACTED1>_11281: Zeroing: zeroing the disk completed!
Jul 04 19:38:21 preclear_disk_<REDACTED1>_11281: Signature: writing signature...
Jul 04 19:38:22 preclear_disk_<REDACTED1>_11281: Signature: verifying Unraid's signature on the MBR ...
Jul 04 19:38:23 preclear_disk_<REDACTED1>_11281: Signature: Unraid preclear signature is valid!
Jul 04 19:38:23 preclear_disk_<REDACTED1>_11281: Post-Read: post-read verification started 1 of 5 retries...
Jul 04 19:38:23 preclear_disk_<REDACTED1>_11281: Post-Read: verifying the beginning of the disk.
Jul 04 19:38:24 preclear_disk_<REDACTED1>_11281: Post-Read: verifying the rest of the disk.
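As a side note, the size fields in that header are mutually consistent, which is an easy sanity check if a drive ever reports odd numbers (simple shell arithmetic, not plugin code):

# disk blocks * block size should equal the reported disk size,
# and disk size / 512 should equal the 512-byte block count.
echo $(( 5371330560 * 4096 ))     # 22000969973760
echo $(( 22000969973760 / 512 ))  # 42970644480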

 

Drives 2 and 3 had an error but were able to recover (their logs were almost identical, with a slight difference near the end):

Jul 04 20:00:43 preclear_disk_<REDACTED2>_17399: Zeroing: dd output: 21990459047936 bytes (22 TB, 20 TiB) copied, 98068.3 s, 224 MB/s
Jul 04 20:00:43 preclear_disk_<REDACTED2>_17399: Zeroing: dd output: 10486584+0 records in
Jul 04 20:00:43 preclear_disk_<REDACTED2>_17399: Zeroing: dd output: 10486584+0 records out
Jul 04 20:00:43 preclear_disk_<REDACTED2>_17399: Zeroing: dd output: 21991960608768 bytes (22 TB, 20 TiB) copied, 98079.8 s, 224 MB/s
Jul 04 20:00:43 preclear_disk_<REDACTED2>_17399: Zeroing: dd output: 10487364+0 records in
Jul 04 20:00:43 preclear_disk_<REDACTED2>_17399: Zeroing: dd output: 10487364+0 records out
Jul 04 20:00:43 preclear_disk_<REDACTED2>_17399: Zeroing: dd output: 21993596387328 bytes (22 TB, 20 TiB) copied, 98092.3 s, 224 MB/s
Jul 04 20:00:43 preclear_disk_<REDACTED2>_17399: Zeroing: dd output: 10488085+0 records in
Jul 04 20:00:43 preclear_disk_<REDACTED2>_17399: Zeroing: dd output: 10488085+0 records out
Jul 04 20:00:43 preclear_disk_<REDACTED2>_17399: Zeroing: dd output: 21995108433920 bytes (22 TB, 20 TiB) copied, 98103.8 s, 224 MB/s
Jul 04 20:00:43 preclear_disk_<REDACTED2>_17399: Zeroing: dd output: 10488862+0 records in
Jul 04 20:00:43 preclear_disk_<REDACTED2>_17399: Zeroing: dd output: 10488862+0 records out
Jul 04 20:00:43 preclear_disk_<REDACTED2>_17399: Zeroing: dd output: 21996737921024 bytes (22 TB, 20 TiB) copied, 98116.3 s, 224 MB/s
Jul 04 20:00:43 preclear_disk_<REDACTED2>_17399: Zeroing: dd output: 10489577+0 records in
Jul 04 20:00:43 preclear_disk_<REDACTED2>_17399: Zeroing: dd output: 10489577+0 records out
Jul 04 20:00:43 preclear_disk_<REDACTED2>_17399: Zeroing: dd output: 21998237384704 bytes (22 TB, 20 TiB) copied, 98127.8 s, 224 MB/s
Jul 04 20:00:43 preclear_disk_<REDACTED2>_17399: Zeroing: dd output: 10490352+0 records in
Jul 04 20:00:43 preclear_disk_<REDACTED2>_17399: Zeroing: dd output: 10490352+0 records out
Jul 04 20:00:43 preclear_disk_<REDACTED2>_17399: Zeroing: dd output: 21999862677504 bytes (22 TB, 20 TiB) copied, 98140.3 s, 224 MB/s
Jul 04 20:00:43 preclear_disk_<REDACTED2>_17399: dd process hung at 21999864774656, killing ...
Jul 04 20:00:43 preclear_disk_<REDACTED2>_17399: Zeroing: zeroing the disk started 2 of 5 retries...
Jul 04 20:00:43 preclear_disk_<REDACTED2>_17399: Continuing disk write on byte 21999862677504
Jul 04 20:01:14 preclear_disk_<REDACTED2>_17399: Zeroing: progress - 99% zeroed @ 0 MB/s
Jul 04 20:01:18 preclear_disk_<REDACTED2>_17399: Zeroing: zeroing the disk completed!
Jul 04 20:01:18 preclear_disk_<REDACTED2>_17399: Signature: writing signature...
Jul 04 20:01:24 preclear_disk_<REDACTED2>_17399: Signature: verifying Unraid's signature on the MBR ...
Jul 04 20:01:25 preclear_disk_<REDACTED2>_17399: Signature: Unraid preclear signature is valid!
Jul 04 20:01:25 preclear_disk_<REDACTED2>_17399: Post-Read: post-read verification started 1 of 5 retries...
Jul 04 20:01:25 preclear_disk_<REDACTED2>_17399: Post-Read: verifying the beginning of the disk.
Jul 04 20:01:26 preclear_disk_<REDACTED2>_17399: Post-Read: verifying the rest of the disk.
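The "Continuing disk write on byte ..." lines show the retry picking up the zeroing pass at a byte offset rather than starting over. Below is a hedged sketch of how that kind of resume can be done with plain dd; /dev/sdX and the offset are placeholders taken from the log, and this is not the plugin's actual code:

# Resume zeroing at the byte offset reported before the previous dd hung.
# The offset should be sector-aligned when using direct I/O.
DISK=/dev/sdX
RESUME_BYTE=21999862677504

dd if=/dev/zero of="$DISK" bs=2M \
   seek="$RESUME_BYTE" oflag=seek_bytes,direct \
   status=progress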

 

Drive 3 was similar but had this near the end, missing the "progress - 99% zeroed @ 0 MB/s" line:

Jul 04 20:25:07 preclear_disk_<REDACTED3>_10636: Zeroing: dd output: 22000252747776 bytes (22 TB, 20 TiB) copied, 98699.8 s, 223 MB/s
Jul 04 20:25:07 preclear_disk_<REDACTED3>_10636: dd process hung at 22000254844928, killing ...
Jul 04 20:25:07 preclear_disk_<REDACTED3>_10636: Zeroing: zeroing the disk started 2 of 5 retries...
Jul 04 20:25:07 preclear_disk_<REDACTED3>_10636: Continuing disk write on byte 22000252747776
Jul 04 20:27:49 preclear_disk_<REDACTED3>_10636: Zeroing: zeroing the disk completed!
Jul 04 20:27:49 preclear_disk_<REDACTED3>_10636: Signature: writing signature...
Jul 04 20:27:53 preclear_disk_<REDACTED3>_10636: Signature: verifying Unraid's signature on the MBR ...
Jul 04 20:27:53 preclear_disk_<REDACTED3>_10636: Signature: Unraid preclear signature is valid!
Jul 04 20:27:53 preclear_disk_<REDACTED3>_10636: Post-Read: post-read verification started 1 of 5 retries...
Jul 04 20:27:53 preclear_disk_<REDACTED3>_10636: Post-Read: verifying the beginning of the disk.
Jul 04 20:27:54 preclear_disk_<REDACTED3>_10636: Post-Read: verifying the rest of the disk.

 

Finally, the 4th drive got to 99% zeroed, and then failed to resume:

Jul 04 20:45:38 preclear_disk_<REDACTED4>_7253: Zeroing: dd output: 10485099+0 records out
Jul 04 20:45:38 preclear_disk_<REDACTED4>_7253: Zeroing: dd output: 21988846338048 bytes (22 TB, 20 TiB) copied, 98008.1 s, 224 MB/s
Jul 04 20:45:38 preclear_disk_<REDACTED4>_7253: Zeroing: dd output: 10485824+0 records in
Jul 04 20:45:38 preclear_disk_<REDACTED4>_7253: Zeroing: dd output: 10485824+0 records out
Jul 04 20:45:38 preclear_disk_<REDACTED4>_7253: Zeroing: dd output: 21990366773248 bytes (22 TB, 20 TiB) copied, 98019.5 s, 224 MB/s
Jul 04 20:45:38 preclear_disk_<REDACTED4>_7253: Zeroing: dd output: 10486618+0 records in
Jul 04 20:45:38 preclear_disk_<REDACTED4>_7253: Zeroing: dd output: 10486618+0 records out
Jul 04 20:45:38 preclear_disk_<REDACTED4>_7253: Zeroing: dd output: 21992031911936 bytes (22 TB, 20 TiB) copied, 98032.1 s, 224 MB/s
Jul 04 20:45:38 preclear_disk_<REDACTED4>_7253: Zeroing: dd output: 10487344+0 records in
Jul 04 20:45:38 preclear_disk_<REDACTED4>_7253: Zeroing: dd output: 10487344+0 records out
Jul 04 20:45:38 preclear_disk_<REDACTED4>_7253: Zeroing: dd output: 21993554444288 bytes (22 TB, 20 TiB) copied, 98043.6 s, 224 MB/s
Jul 04 20:45:38 preclear_disk_<REDACTED4>_7253: Zeroing: dd output: 10488134+0 records in
Jul 04 20:45:38 preclear_disk_<REDACTED4>_7253: Zeroing: dd output: 10488134+0 records out
Jul 04 20:45:38 preclear_disk_<REDACTED4>_7253: Zeroing: dd output: 21995211194368 bytes (22 TB, 20 TiB) copied, 98056.1 s, 224 MB/s
Jul 04 20:45:38 preclear_disk_<REDACTED4>_7253: Zeroing: dd output: 10488927+0 records in
Jul 04 20:45:38 preclear_disk_<REDACTED4>_7253: Zeroing: dd output: 10488927+0 records out
Jul 04 20:45:38 preclear_disk_<REDACTED4>_7253: Zeroing: dd output: 21996874235904 bytes (22 TB, 20 TiB) copied, 98068.6 s, 224 MB/s
Jul 04 20:45:38 preclear_disk_<REDACTED4>_7253: Zeroing: dd output: 10489715+0 records in
Jul 04 20:45:38 preclear_disk_<REDACTED4>_7253: Zeroing: dd output: 10489715+0 records out
Jul 04 20:45:38 preclear_disk_<REDACTED4>_7253: Zeroing: dd output: 21998526791680 bytes (22 TB, 20 TiB) copied, 98081.1 s, 224 MB/s
Jul 04 20:45:38 preclear_disk_<REDACTED4>_7253: Zeroing: dd output: 10490507+0 records in
Jul 04 20:45:38 preclear_disk_<REDACTED4>_7253: Zeroing: dd output: 10490507+0 records out
Jul 04 20:45:38 preclear_disk_<REDACTED4>_7253: Zeroing: dd output: 22000187736064 bytes (22 TB, 20 TiB) copied, 98093.6 s, 224 MB/s
Jul 04 20:45:38 preclear_disk_<REDACTED4>_7253: dd process hung at 22000189833216, killing ...
Jul 04 20:45:38 preclear_disk_<REDACTED4>_7253: Zeroing: zeroing the disk started 2 of 5 retries...
Jul 04 20:45:38 preclear_disk_<REDACTED4>_7253: Continuing disk write on byte 22000187736064
Jul 04 20:49:54 preclear_disk_<REDACTED4>_7253: Zeroing: dd output: 
Jul 04 20:49:54 preclear_disk_<REDACTED4>_7253: dd process hung at 0, killing ...
Jul 04 20:49:54 preclear_disk_<REDACTED4>_7253: Zeroing: zeroing the disk started 3 of 5 retries...
Jul 04 20:49:54 preclear_disk_<REDACTED4>_7253: Zeroing: emptying the MBR.

 

So it's almost like it got progressively worse with each preclear at writing and verifying the signature at the end, and on the last disk it couldn't recover somehow and started over from 0. The problem is, last time I let it go through to completion, it failed again, hit its 5-retry limit, and terminated with a drive preclear failure.

 

I haven't tried downgrading to 6.11.5 as this box was new and is still in trial. It's definitely interesting that you're seeing a larger size reported in the previous version of UnRAID - is there some sort of weird variable length or storage issue between the two versions of UnRAID and the plugin?

 

Happy to try any additional things or suggestions. It's going through its zeroing yet again now, while the other drives are doing the post-read.

 

There's literally nothing else running on this machine except for some fairly standard plugins (CA, UD, Nvidia driver), and I have never started the array or the pools.


 

29 minutes ago, Alyred said:

So it's almost like it got progressively worse with each preclear at writing and verifying the signature at the end...

The larger drive size was reported on 6.12.1 and reported correctly on the older 6.11.5 version. Both drives passed preclear this last attempt (the one running the preclear docker went through all the steps, except that on the post-read it got to 63% before the docker process hung, but it still passed the drive anyway). The one difference on the 6.11.5 VM preclear: the output during the operations looked very different from the normal preclear on the self-updating status screen. I'm wondering if that has something to do with how Proxmox passed that drive through to the VM. I took that drive, put it in my main Unraid box, and it verified the preclear signature, so it appears it did at least complete correctly (I've verified both drives at this point; both got valid signatures). I had also already done a media validation on both drives before this last preclear operation, doing a format and dd passes by hand, so I'm pretty confident the drives are fully functional.
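If it helps narrow down where the two versions disagree, here is a hedged way to cross-check what the kernel itself reports for the drive, independent of any Unraid version or plugin (/dev/sdX is a placeholder; SAS drives may need an extra -d option for smartctl):

blockdev --getsize64 /dev/sdX           # total size in bytes
blockdev --getsz /dev/sdX               # size in 512-byte sectors
blockdev --getpbsz /dev/sdX             # physical sector size
smartctl -i /dev/sdX | grep -i capacity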

 

I'm currently rebuilding my array with one of them replacing a very old 4TB drive, and as soon as that's done, I plan on replacing another with the 2nd drive. I NEED to replace a pre-fail ST8000AS0002 (slowly starting to get correctable media errors), but apparently the HGST 8TB I got, plus the Exos drive, format out to just a touch smaller than the AS0002, so they won't work for that. I'm debating just dropping parity, moving the data from that drive by hand to the new drives, and having the system completely rebuild parity again (I'd split the data from that failing 8TB over to the excess new free space on those replaced 4TBs if I do that). It's either that, or start looking into larger SAS drives (my parity is currently only a 10TB drive, so I might just grab a couple of 10TB SAS models and throw them in the mix, lol).

 

Not sure if you've noticed or not, but several posts above yours dlandon says he's having an issue with larger drives and read failures, and that the preclear docker works on those issue-causing drives (it's using the older preclear script, I believe). So if you're just wanting to get them all cleared and set up, that would probably be a good way to get around the issues with the current preclear plugin script.

58 minutes ago, heffe2001 said:

The larger drive size was reported on 6.12.1 and reported correctly on the older 6.11.5 version...
 

Hm, interesting... I'd have to figure out a way to either run docker without an array, or temporarily get an array going just long enough to get a docker config. Didn't particularly want to go halfway on that, but it might be my only option in that regard.

 

Of course, I guess I could just downgrade that server to 6.11.5 and see how that goes... or just trust that the errors are particular to the UD Preclear plugin and not an actual indication of issues with other pieces of my hardware, which is what I originally wanted to exercise with these runs.


Jul 05 09:52:29 preclear_disk_VJHBUZ1X_29626: Zeroing: progress - 75% zeroed @ 144 MB/s
Jul 05 14:11:59 preclear_disk_VJHBUZ1X_29626: Pause (smartctl run time: 16s)
Jul 05 14:11:59 preclear_disk_VJHBUZ1X_29626: Pause (hdparm run time: 25s)
Jul 05 14:12:00 preclear_disk_VJHBUZ1X_29626: Paused
Jul 05 14:12:20 preclear_disk_VJHBUZ1X_29626: Resumed
Jul 05 14:12:22 preclear_disk_VJHBUZ1X_29626: Zeroing: zeroing the disk completed!
Jul 05 14:12:22 preclear_disk_VJHBUZ1X_29626: Signature: writing signature...
Jul 05 14:12:24 preclear_disk_VJHBUZ1X_29626: Signature: verifying Unraid's signature on the MBR ...
Jul 05 14:12:24 preclear_disk_VJHBUZ1X_29626: Signature: Unraid preclear signature is valid!
Jul 05 14:12:24 preclear_disk_VJHBUZ1X_29626: Post-Read: post-read verification started 1 of 5 retries...
Jul 05 14:12:24 preclear_disk_VJHBUZ1X_29626: Post-Read: verifying the beginning of the disk.
Jul 05 14:12:25 preclear_disk_VJHBUZ1X_29626: Post-Read: verifying the rest of the disk.
Jul 05 16:55:55 preclear_disk_VJHBUZ1X_29626: Post-Read: progress - 25% verified @ 199 MB/s
Jul 05 19:53:21 preclear_disk_VJHBUZ1X_29626: Post-Read: progress - 50% verified @ 182 MB/s
Jul 05 23:15:48 preclear_disk_VJHBUZ1X_29626: Post-Read: progress - 75% verified @ 147 MB/s
Jul 06 03:34:51 preclear_disk_VJHBUZ1X_29626: Post-Read: elapsed time - 13:22:24
Jul 06 03:34:51 preclear_disk_VJHBUZ1X_29626: Post-Read: post-read verification completed!
Jul 06 03:34:52 preclear_disk_VJHBUZ1X_29626: S.M.A.R.T.: Cycle 1
Jul 06 03:34:52 preclear_disk_VJHBUZ1X_29626: S.M.A.R.T.:
Jul 06 03:34:52 preclear_disk_VJHBUZ1X_29626: S.M.A.R.T.: ATTRIBUTE INITIAL NOW STATUS
Jul 06 03:34:52 preclear_disk_VJHBUZ1X_29626: S.M.A.R.T.: Reallocated_Sector_Ct 0 0 -
Jul 06 03:34:52 preclear_disk_VJHBUZ1X_29626: S.M.A.R.T.: Power_On_Hours 419 459 Up 40
Jul 06 03:34:52 preclear_disk_VJHBUZ1X_29626: S.M.A.R.T.: Temperature_Celsius 36 36 -
Jul 06 03:34:52 preclear_disk_VJHBUZ1X_29626: S.M.A.R.T.: Reallocated_Event_Count 0 0 -
Jul 06 03:34:52 preclear_disk_VJHBUZ1X_29626: S.M.A.R.T.: Current_Pending_Sector 0 0 -
Jul 06 03:34:52 preclear_disk_VJHBUZ1X_29626: S.M.A.R.T.: Offline_Uncorrectable 0 0 -
Jul 06 03:34:52 preclear_disk_VJHBUZ1X_29626: S.M.A.R.T.: UDMA_CRC_Error_Count 0 0 -
Jul 06 03:34:52 preclear_disk_VJHBUZ1X_29626: S.M.A.R.T.:
Jul 06 03:34:52 preclear_disk_VJHBUZ1X_29626: Cycle: elapsed time: 40:07:52
Jul 06 03:34:52 preclear_disk_VJHBUZ1X_29626: Preclear: total elapsed time: 40:07:53



So I'm thinking there's some sort of error in how the Preclear plugin handles multiple drives at once. After my 4th drive "failed" preclear with the same errors I was having before, I restarted the Unraid server with no changes to the system and ran preclear again on the drive. This time it succeeded, running only the one drive by itself, though it did the same "hiccup" at the end where the dd process hung and had to be restarted once... but it was successful and completed the preclear.

This was on the same port as before, same exact hardware.

Jul 05 15:46:24 preclear_disk_<REDACTED4>_27243: Preclear Disk Version: 1.0.27
Jul 05 15:46:24 preclear_disk_<REDACTED4>_27243: Restoring previous instance of preclear
Jul 05 15:52:05 preclear_disk_<REDACTED4>_27243: Disk size: 22000969973760
Jul 05 15:52:05 preclear_disk_<REDACTED4>_27243: Disk blocks: 5371330560
Jul 05 15:52:05 preclear_disk_<REDACTED4>_27243: Blocks (512 bytes): 42970644480
Jul 05 15:52:05 preclear_disk_<REDACTED4>_27243: Block size: 4096
Jul 05 15:52:05 preclear_disk_<REDACTED4>_27243: Start sector: 0
Jul 05 15:52:07 preclear_disk_<REDACTED4>_27243: Zeroing: zeroing the disk started 1 of 5 retries...
Jul 05 15:52:07 preclear_disk_<REDACTED4>_27243: Continuing disk write on byte 12524462276608
Jul 05 20:53:58 preclear_disk_<REDACTED4>_27243: Zeroing: progress - 75% zeroed @ 200 MB/s
Jul 06 06:02:39 preclear_disk_<REDACTED4>_27243: Zeroing: dd output: 4511214+0 records in
Jul 06 06:02:39 preclear_disk_<REDACTED4>_27243: Zeroing: dd output: 4511214+0 records out
Jul 06 06:02:39 preclear_disk_<REDACTED4>_27243: Zeroing: dd output: 9460701462528 bytes (9.5 TB, 8.6 TiB) copied, 50706.2 s, 187 MB/s
Jul 06 06:02:39 preclear_disk_<REDACTED4>_27243: Zeroing: dd output: 4511944+0 records in
Jul 06 06:02:39 preclear_disk_<REDACTED4>_27243: Zeroing: dd output: 4511944+0 records out
Jul 06 06:02:39 preclear_disk_<REDACTED4>_27243: Zeroing: dd output: 9462232383488 bytes (9.5 TB, 8.6 TiB) copied, 50717.8 s, 187 MB/s
Jul 06 06:02:39 preclear_disk_<REDACTED4>_27243: Zeroing: dd output: 4512735+0 records in
Jul 06 06:02:39 preclear_disk_<REDACTED4>_27243: Zeroing: dd output: 4512735+0 records out
Jul 06 06:02:39 preclear_disk_<REDACTED4>_27243: Zeroing: dd output: 9463891230720 bytes (9.5 TB, 8.6 TiB) copied, 50730.3 s, 187 MB/s
Jul 06 06:02:39 preclear_disk_<REDACTED4>_27243: Zeroing: dd output: 4513463+0 records in
Jul 06 06:02:39 preclear_disk_<REDACTED4>_27243: Zeroing: dd output: 4513463+0 records out
Jul 06 06:02:39 preclear_disk_<REDACTED4>_27243: Zeroing: dd output: 9465417957376 bytes (9.5 TB, 8.6 TiB) copied, 50741.8 s, 187 MB/s
Jul 06 06:02:39 preclear_disk_<REDACTED4>_27243: Zeroing: dd output: 4514256+0 records in
Jul 06 06:02:39 preclear_disk_<REDACTED4>_27243: Zeroing: dd output: 4514256+0 records out
Jul 06 06:02:39 preclear_disk_<REDACTED4>_27243: Zeroing: dd output: 9467080998912 bytes (9.5 TB, 8.6 TiB) copied, 50754.3 s, 187 MB/s
Jul 06 06:02:39 preclear_disk_<REDACTED4>_27243: Zeroing: dd output: 4514985+0 records in
Jul 06 06:02:39 preclear_disk_<REDACTED4>_27243: Zeroing: dd output: 4514985+0 records out
Jul 06 06:02:39 preclear_disk_<REDACTED4>_27243: Zeroing: dd output: 9468609822720 bytes (9.5 TB, 8.6 TiB) copied, 50765.9 s, 187 MB/s
Jul 06 06:02:39 preclear_disk_<REDACTED4>_27243: Zeroing: dd output: 4515777+0 records in
Jul 06 06:02:39 preclear_disk_<REDACTED4>_27243: Zeroing: dd output: 4515777+0 records out
Jul 06 06:02:39 preclear_disk_<REDACTED4>_27243: Zeroing: dd output: 9470270767104 bytes (9.5 TB, 8.6 TiB) copied, 50778.4 s, 187 MB/s
Jul 06 06:02:39 preclear_disk_<REDACTED4>_27243: Zeroing: dd output: 4516509+0 records in
Jul 06 06:02:39 preclear_disk_<REDACTED4>_27243: Zeroing: dd output: 4516509+0 records out
Jul 06 06:02:39 preclear_disk_<REDACTED4>_27243: Zeroing: dd output: 9471805882368 bytes (9.5 TB, 8.6 TiB) copied, 50789.9 s, 186 MB/s
Jul 06 06:02:39 preclear_disk_<REDACTED4>_27243: Zeroing: dd output: 4517298+0 records in
Jul 06 06:02:39 preclear_disk_<REDACTED4>_27243: Zeroing: dd output: 4517298+0 records out
Jul 06 06:02:39 preclear_disk_<REDACTED4>_27243: Zeroing: dd output: 9473460535296 bytes (9.5 TB, 8.6 TiB) copied, 50802.4 s, 186 MB/s
Jul 06 06:02:39 preclear_disk_<REDACTED4>_27243: Zeroing: dd output: 4518026+0 records in
Jul 06 06:02:39 preclear_disk_<REDACTED4>_27243: Zeroing: dd output: 4518026+0 records out
Jul 06 06:02:39 preclear_disk_<REDACTED4>_27243: Zeroing: dd output: 9474987261952 bytes (9.5 TB, 8.6 TiB) copied, 50813.9 s, 186 MB/s
Jul 06 06:02:39 preclear_disk_<REDACTED4>_27243: dd process hung at 21999449538560, killing ...
Jul 06 06:02:39 preclear_disk_<REDACTED4>_27243: Zeroing: zeroing the disk started 2 of 5 retries...
Jul 06 06:02:39 preclear_disk_<REDACTED4>_27243: Continuing disk write on byte 21999447441408
Jul 06 06:05:11 preclear_disk_<REDACTED4>_27243: Zeroing: progress - 99% zeroed @ 0 MB/s
Jul 06 06:05:27 preclear_disk_<REDACTED4>_27243: Zeroing: progress - 100% zeroed @ 223 MB/s
Jul 06 06:05:29 preclear_disk_<REDACTED4>_27243: Zeroing: zeroing the disk completed!
Jul 06 06:05:29 preclear_disk_<REDACTED4>_27243: Signature: writing signature...
Jul 06 06:05:30 preclear_disk_<REDACTED4>_27243: Signature: verifying Unraid's signature on the MBR ...
Jul 06 06:05:30 preclear_disk_<REDACTED4>_27243: Signature: Unraid preclear signature is valid!
Jul 06 06:05:32 preclear_disk_<REDACTED4>_27243: S.M.A.R.T.: Cycle 1
Jul 06 06:05:32 preclear_disk_<REDACTED4>_27243: S.M.A.R.T.:
Jul 06 06:05:32 preclear_disk_<REDACTED4>_27243: S.M.A.R.T.: ATTRIBUTE              	INITIAL	NOW	STATUS
Jul 06 06:05:32 preclear_disk_<REDACTED4>_27243: S.M.A.R.T.: Reallocated_Sector_Ct  	0      	0  	-
Jul 06 06:05:32 preclear_disk_<REDACTED4>_27243: S.M.A.R.T.: Power_On_Hours         	393    	420	Up 27
Jul 06 06:05:32 preclear_disk_<REDACTED4>_27243: S.M.A.R.T.: Temperature_Celsius    	32     	41 	Up 9
Jul 06 06:05:32 preclear_disk_<REDACTED4>_27243: S.M.A.R.T.: Reallocated_Event_Count	0      	0  	-
Jul 06 06:05:32 preclear_disk_<REDACTED4>_27243: S.M.A.R.T.: Current_Pending_Sector 	0      	0  	-
Jul 06 06:05:32 preclear_disk_<REDACTED4>_27243: S.M.A.R.T.: Offline_Uncorrectable  	0      	0  	-
Jul 06 06:05:32 preclear_disk_<REDACTED4>_27243: S.M.A.R.T.: UDMA_CRC_Error_Count   	0      	0  	-
Jul 06 06:05:32 preclear_disk_<REDACTED4>_27243: S.M.A.R.T.: 
Jul 06 06:05:32 preclear_disk_<REDACTED4>_27243: Cycle: elapsed time: 27:19:38
Jul 06 06:05:32 preclear_disk_<REDACTED4>_27243: Preclear: total elapsed time: 27:19:41

No errors in dmesg, final preclear stats all showed success.

####################################################################################################
#                              Unraid Server Preclear of disk <REDACTED4>                          #
#                            Cycle 1 of 1, partition start on sector 64.                           #
#                                                                                                  #
#   Step 1 of 3 - Zeroing the disk:                       [27:19:12 @ 223 MB/s] SUCCESS            #
#   Step 2 of 3 - Writing Unraid's Preclear signature:                          SUCCESS            #
#   Step 3 of 3 - Verifying Unraid's Preclear signature:                        SUCCESS            #
#                                                                                                  #
#                                                                                                  #
#                                                                                                  #
#                                                                                                  #
#                                                                                                  #
####################################################################################################
#       Cycle elapsed time: 27:19:38 | Total elapsed time: 27:19:39                                #
####################################################################################################

####################################################################################################
#   S.M.A.R.T. Status (device type: default)                                                       #
#                                                                                                  #
#   ATTRIBUTE                   INITIAL CYCLE 1 STATUS                                             #
#   Reallocated_Sector_Ct       0       0       -                                                  #
#   Power_On_Hours              393     420     Up 27                                              #
#   Temperature_Celsius         32      41      Up 9                                               #
#   Reallocated_Event_Count     0       0       -                                                  #
#   Current_Pending_Sector      0       0       -                                                  #
#   Offline_Uncorrectable       0       0       -                                                  #
#   UDMA_CRC_Error_Count        0       0       -                                                  #
#                                                                                                  #
#                                                                                                  #
####################################################################################################
#                                                                                                  #
####################################################################################################
--> ATTENTION: Please take a look into the SMART report above for drive health issues.

--> RESULT: Preclear Finished Successfully!.
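For anyone wanting to do the same before/after comparison by hand outside the plugin, here is a hedged sketch using smartctl (the device name is a placeholder; SAS drives report attributes differently, so the grep may need adjusting):

ATTRS='Reallocated_Sector_Ct|Reallocated_Event_Count|Current_Pending_Sector|Offline_Uncorrectable|UDMA_CRC_Error_Count'
smartctl -A /dev/sdX | grep -E "$ATTRS" > smart_before.txt
# ... run the clear ...
smartctl -A /dev/sdX | grep -E "$ATTRS" > smart_after.txt
diff smart_before.txt smart_after.txt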

If anyone has these issues with Unraid 6.12, try rebooting your server and running the preclear again, one drive at a time per reboot. That seems to clear out whatever is getting confused in there.

It would be interesting to see if more folks have gotten the "dd hang" that was still able to recover, and so didn't notice it... 3 of the 4 preclears I did had it.

32 minutes ago, pras1011 said:

The clear finished and now the disk is unmountable.

 

Nothing in the syslog. What is going on?

I don't think preclear makes a disk mountable; it writes a special partition to the drive as a signature so that Unraid recognizes it as ready for the array without another zeroing operation.
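If you want to see that signature for yourself, here is a hedged sketch for inspecting the first sector of the drive (/dev/sdX is a placeholder; the report above shows the preclear partition starting on sector 64):

# Dump the MBR and list the partition table; a precleared disk should show
# a single empty partition rather than a mountable filesystem.
dd if=/dev/sdX bs=512 count=1 2>/dev/null | hexdump -C | tail -n 8
fdisk -l /dev/sdX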


I am currently on "# Step 1 of 5 - Pre-read in progress: (93% Done) #", but I've checked the logs throughout the night and the drives keep spinning down. Is this normal? Shall I turn off the spin down for the whole array?
 

My Unraid version is 6.11.5, and the Preclear and UD plugins are up to date.

 

Jul  6 18:48:56 Tower  emhttpd: spinning down /dev/sdg
Jul  6 18:49:01 Tower  emhttpd: read SMART /dev/sdg
Jul  6 19:03:58 Tower  emhttpd: spinning down /dev/sdg
Jul  6 19:04:03 Tower  emhttpd: read SMART /dev/sdg
Jul  6 19:19:01 Tower  emhttpd: spinning down /dev/sdg
Jul  6 19:19:06 Tower  emhttpd: read SMART /dev/sdg
Jul  6 19:34:03 Tower  emhttpd: spinning down /dev/sdg
Jul  6 19:34:07 Tower  emhttpd: read SMART /dev/sdg
Jul  6 19:49:06 Tower  emhttpd: spinning down /dev/sdg
Jul  6 19:49:10 Tower  emhttpd: read SMART /dev/sdg
Jul  6 20:04:08 Tower  emhttpd: spinning down /dev/sdg
Jul  6 20:04:12 Tower  emhttpd: read SMART /dev/sdg
Jul  6 20:19:10 Tower  emhttpd: spinning down /dev/sdg
Jul  6 20:19:14 Tower  emhttpd: read SMART /dev/sdg
Jul  6 20:34:12 Tower  emhttpd: spinning down /dev/sdg
Jul  6 20:34:17 Tower  emhttpd: read SMART /dev/sdg
Jul  6 20:49:14 Tower  emhttpd: spinning down /dev/sdg
Jul  6 20:49:18 Tower  emhttpd: read SMART /dev/sdg
Jul  6 21:04:16 Tower  emhttpd: spinning down /dev/sdg
Jul  6 21:04:21 Tower  emhttpd: read SMART /dev/sdg
Jul  6 21:19:18 Tower  emhttpd: spinning down /dev/sdg
Jul  6 21:19:22 Tower  emhttpd: read SMART /dev/sdg
Jul  6 21:34:20 Tower  emhttpd: spinning down /dev/sdg
Jul  6 21:34:25 Tower  emhttpd: read SMART /dev/sdg
Jul  6 21:49:23 Tower  emhttpd: spinning down /dev/sdg
Jul  6 21:49:27 Tower  emhttpd: read SMART /dev/sdg
Jul  6 22:04:25 Tower  emhttpd: spinning down /dev/sdg
Jul  6 22:04:29 Tower  emhttpd: read SMART /dev/sdg
Jul  6 22:19:27 Tower  emhttpd: spinning down /dev/sdg
Jul  6 22:19:32 Tower  emhttpd: read SMART /dev/sdg
Jul  6 22:34:30 Tower  emhttpd: spinning down /dev/sdg
Jul  6 22:34:35 Tower  emhttpd: read SMART /dev/sdg
Jul  6 22:49:32 Tower  emhttpd: spinning down /dev/sdg
Jul  6 22:49:37 Tower  emhttpd: read SMART /dev/sdg
Jul  6 23:04:35 Tower  emhttpd: spinning down /dev/sdg
Jul  6 23:04:39 Tower  emhttpd: read SMART /dev/sdg
Jul  6 23:19:37 Tower  emhttpd: spinning down /dev/sdg
Jul  6 23:19:42 Tower  emhttpd: read SMART /dev/sdg
Jul  6 23:34:40 Tower  emhttpd: spinning down /dev/sdg
Jul  6 23:34:44 Tower  emhttpd: read SMART /dev/sdg
Jul  6 23:49:42 Tower  emhttpd: spinning down /dev/sdg
Jul  6 23:49:46 Tower  emhttpd: read SMART /dev/sdg
Jul  7 00:04:44 Tower  emhttpd: spinning down /dev/sdg
Jul  7 00:04:49 Tower  emhttpd: read SMART /dev/sdg
Jul  7 00:19:47 Tower  emhttpd: spinning down /dev/sdg
Jul  7 00:19:52 Tower  emhttpd: read SMART /dev/sdg
Jul  7 00:34:50 Tower  emhttpd: spinning down /dev/sdg
Jul  7 00:34:54 Tower  emhttpd: read SMART /dev/sdg
Jul  7 00:49:52 Tower  emhttpd: spinning down /dev/sdg
Jul  7 00:49:57 Tower  emhttpd: read SMART /dev/sdg
Jul  7 01:04:54 Tower  emhttpd: spinning down /dev/sdg
Jul  7 01:04:58 Tower  emhttpd: read SMART /dev/sdg
Jul  7 01:19:56 Tower  emhttpd: spinning down /dev/sdg
Jul  7 01:20:01 Tower  emhttpd: read SMART /dev/sdg
Jul  7 01:34:59 Tower  emhttpd: spinning down /dev/sdg
Jul  7 01:35:04 Tower  emhttpd: read SMART /dev/sdg
Jul  7 01:50:02 Tower  emhttpd: spinning down /dev/sdg
Jul  7 01:50:07 Tower  emhttpd: read SMART /dev/sdg
Jul  7 02:05:05 Tower  emhttpd: spinning down /dev/sdg
Jul  7 02:05:10 Tower  emhttpd: read SMART /dev/sdg
Jul  7 02:20:08 Tower  emhttpd: spinning down /dev/sdg
Jul  7 02:20:13 Tower  emhttpd: read SMART /dev/sdg
Jul  7 02:35:11 Tower  emhttpd: spinning down /dev/sdg
Jul  7 02:35:15 Tower  emhttpd: read SMART /dev/sdg
Jul  7 02:50:13 Tower  emhttpd: spinning down /dev/sdg
Jul  7 02:50:18 Tower  emhttpd: read SMART /dev/sdg
Jul  7 03:05:16 Tower  emhttpd: spinning down /dev/sdg
Jul  7 03:05:20 Tower  emhttpd: read SMART /dev/sdg
Jul  7 03:20:18 Tower  emhttpd: spinning down /dev/sdg
Jul  7 03:20:23 Tower  emhttpd: read SMART /dev/sdg
Jul  7 03:35:22 Tower  emhttpd: spinning down /dev/sdg
Jul  7 03:35:26 Tower  emhttpd: read SMART /dev/sdg
Jul  7 03:50:24 Tower  emhttpd: spinning down /dev/sdg
Jul  7 03:50:29 Tower  emhttpd: read SMART /dev/sdg
Jul  7 04:05:27 Tower  emhttpd: spinning down /dev/sdg
Jul  7 04:05:31 Tower  emhttpd: read SMART /dev/sdg
Jul  7 04:20:30 Tower  emhttpd: spinning down /dev/sdg
Jul  7 04:20:35 Tower  emhttpd: read SMART /dev/sdg
Jul  7 04:35:32 Tower  emhttpd: spinning down /dev/sdg
Jul  7 04:35:36 Tower  emhttpd: read SMART /dev/sdg
Jul  7 04:50:34 Tower  emhttpd: spinning down /dev/sdg
Jul  7 04:50:39 Tower  emhttpd: read SMART /dev/sdg
Jul  7 05:05:36 Tower  emhttpd: spinning down /dev/sdg
Jul  7 05:05:40 Tower  emhttpd: read SMART /dev/sdg
Jul  7 05:20:38 Tower  emhttpd: spinning down /dev/sdg
Jul  7 05:20:43 Tower  emhttpd: read SMART /dev/sdg
Jul  7 05:35:41 Tower  emhttpd: spinning down /dev/sdg
Jul  7 05:35:45 Tower  emhttpd: read SMART /dev/sdg
Jul  7 05:50:43 Tower  emhttpd: spinning down /dev/sdg
Jul  7 05:50:48 Tower  emhttpd: read SMART /dev/sdg
Jul  7 06:05:46 Tower  emhttpd: spinning down /dev/sdg
Jul  7 06:05:50 Tower  emhttpd: read SMART /dev/sdg
Jul  7 06:20:48 Tower  emhttpd: spinning down /dev/sdg
Jul  7 06:20:53 Tower  emhttpd: read SMART /dev/sdg
Jul  7 06:35:50 Tower  emhttpd: spinning down /dev/sdg
Jul  7 06:35:55 Tower  emhttpd: read SMART /dev/sdg
Jul  7 06:50:53 Tower  emhttpd: spinning down /dev/sdg
Jul  7 06:50:58 Tower  emhttpd: read SMART /dev/sdg
Jul  7 07:05:56 Tower  emhttpd: spinning down /dev/sdg
Jul  7 07:06:01 Tower  emhttpd: read SMART /dev/sdg
Jul  7 07:20:59 Tower  emhttpd: spinning down /dev/sdg
Jul  7 07:21:03 Tower  emhttpd: read SMART /dev/sdg
Jul  7 07:36:01 Tower  emhttpd: spinning down /dev/sdg
Jul  7 07:36:06 Tower  emhttpd: read SMART /dev/sdg
Jul  7 07:51:03 Tower  emhttpd: spinning down /dev/sdg
Jul  7 07:51:07 Tower  emhttpd: read SMART /dev/sdg
Jul  7 08:06:06 Tower  emhttpd: spinning down /dev/sdg
Jul  7 08:06:11 Tower  emhttpd: read SMART /dev/sdg
Jul  7 08:21:09 Tower  emhttpd: spinning down /dev/sdg
Jul  7 08:21:14 Tower  emhttpd: read SMART /dev/sdg
Jul  7 08:36:12 Tower  emhttpd: spinning down /dev/sdg
Jul  7 08:36:18 Tower  emhttpd: read SMART /dev/sdg
Jul  7 08:51:15 Tower  emhttpd: spinning down /dev/sdg
Jul  7 08:51:19 Tower  emhttpd: read SMART /dev/sdg
Jul  7 09:06:17 Tower  emhttpd: spinning down /dev/sdg
Jul  7 09:06:22 Tower  emhttpd: read SMART /dev/sdg
Jul  7 09:21:20 Tower  emhttpd: spinning down /dev/sdg
Jul  7 09:21:24 Tower  emhttpd: read SMART /dev/sdg
Jul  7 09:36:22 Tower  emhttpd: spinning down /dev/sdg
Jul  7 09:36:27 Tower  emhttpd: read SMART /dev/sdg
Jul  7 09:51:24 Tower  emhttpd: spinning down /dev/sdg
Jul  7 09:51:28 Tower  emhttpd: read SMART /dev/sdg
Jul  7 10:06:26 Tower  emhttpd: spinning down /dev/sdg
Jul  7 10:06:31 Tower  emhttpd: read SMART /dev/sdg
Jul  7 10:21:28 Tower  emhttpd: spinning down /dev/sdg
Jul  7 10:21:32 Tower  emhttpd: read SMART /dev/sdg

 

3 hours ago, thebigjb said:

Is this normal? Shall I turn off the spin down for the whole array?

No, there is a bug in the spin-down code. Disks without partitions should not be spun down. While preclearing, it would be best to turn off spin down for the array.

 

A fix for this will come in a later release of Unraid.
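In the meantime, if you want to keep an eye on whether the preclear drive is being spun down while it runs, here is a hedged monitoring sketch (uses /dev/sdg from the log above; it only observes the power state and does not stop emhttpd from spinning the disk down):

# Print the drive's power state every 5 minutes during the preclear.
while true; do
    date
    hdparm -C /dev/sdg    # reports "active/idle" or "standby"
    sleep 300
done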

