Preclear plugin



Personally I view it like test driving a car. Normally it won't change your mind about the confidence level you have in it, but in the rare event it does, it pays for itself. The time and energy/stress related to having your parity drive or something out of commission while you try to get another drive to replace it is just not worth it (which is why you should always keep a pre-cleared drive as a spare to reduce that scenario).


I'm not disagreeing with the value of knowing if your drive is healthy, but I think even if your drive is cleared, Unraid will clear a parity drive.
4 hours ago, rxnelson said:

 


I'm not disagreeing with the value of knowing if your drive is healthy, but I think even if your drive is cleared, Unraid will clear a parity drive.

Unraid never clears parity. Only data drives. And only if you are adding a drive to a slot that doesn't have an existing drive.

 

When replacing a data drive, or adding or replacing a parity drive, Unraid directly writes the data that belongs there; it doesn't bother clearing the drive beforehand.

 

The only reason a cleared drive is even needed is to add a totally new data drive slot to an array with existing valid parity.


Thanks everyone for clearing this up.

 

If I understand correctly, when adding a drive to the array unRAID will:

- not clear the drive if it is a parity drive or replacing an existing drive

- clear the drive if it is a new (additional) data drive

 

14 hours ago, jonathanm said:

The only reason a cleared drive is even needed is to add a totally new data drive slot to an array with existing valid parity.

 

You mean that it is only necessary in this case because unRAID would clear the drive anyway, making it possible to skip that step, or rather to do it beforehand?

 

Since I am replacing a (faulty) drive, unRAID would not clear the drive when I add a new one. The only upside a pre-clear would bring would be stress-testing the new drive to avoid early failure?

 

 

3 minutes ago, taalas said:

Since I am replacing a (faulty) drive, unRAID would not clear the drive when I add a new one. The only upside a pre-clear would bring would be stress-testing the new drive to avoid early failure?

If you put fifty to one hundred hours on the drive, that time period would catch most of the infant-mortality failures.  If you actually had a drive that would fail in this life phase, the failure would probably occur while you were rebuilding the drive.  Are you prepared to deal (mentally more than anything else) with this situation?  You have now had two physical drives fail in the same logical drive location!  (I would guess that today, the odds of an infant-mortality failure are probably less than 1 in 1,000.)

 

The second use for preclear is for those of us who keep a cold spare drive on the shelf, ready for installation when an array drive goes offline.  It is far better to have precleared it (or run some other long-term testing procedure) to make sure that the drive is not defective out-of-the-box.  It is usually much easier to get a vendor to replace a DOA drive than the manufacturer after the vendor's return window has passed!

12 minutes ago, taalas said:

You mean that it is only necessary in this case because unRAID would clear the drive anyway, making it possible to skip that step, or rather to do it beforehand?

 

Since I am replacing a (faulty) drive, unRAID would not clear the drive when I add a new one. The only upside a pre-clear would bring would be stress-testing the new drive to avoid early failure?

If I'm correctly parsing what you said, you are 100% correct. Pre-clear is done outside of the array; clear is done when adding a slot. If the drive is cleared outside the array with the preclear plugin, then as long as NO writes are done in the meantime, Unraid will detect that it is already cleared and skip the clearing step.
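
("Cleared" here just means the data area of the drive reads back as all zero bytes, plus a preclear signature in the MBR, so adding the drive can't disturb parity.) As a rough illustration only, assuming the disk sits at /dev/sdX, a few regions can be spot-checked from the console like so; note this is not how Unraid itself validates a precleared drive:

# read 16 MiB at a few offsets and count any non-zero bytes (0 = fully zeroed there)
for offset_gib in 1 100 1000; do
    nonzero=$(dd if=/dev/sdX bs=1M count=16 skip=$((offset_gib * 1024)) 2>/dev/null | tr -d '\0' | wc -c)
    echo "region at ${offset_gib} GiB: ${nonzero} non-zero bytes"
done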

 

The array is now fully functional while the clearing is done, so there is no reason to pre-clear a drive any more if the drive is in perfect operating condition.

 

The function of pre-clear is now mainly testing, as you state. Thorough drive testing is needed with Unraid, as the integrity of your data relies on the whole capacity of the drive being error free instead of just the portion actually holding the data.

 

The drive manufacturers generally provide testing tools as well, so pre-clear is just another tool in the box to test drives.
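
For reference, and only as a sketch (the device name /dev/sdX is an example, and badblocks -w destroys any data on the drive), the generic command-line equivalents of those vendor tools look like this:

smartctl -t long /dev/sdX      # start the drive's built-in extended self-test
smartctl -l selftest /dev/sdX  # check the self-test result once it finishes
badblocks -wsv /dev/sdX        # destructive write/verify pass over the whole surface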

Unraid never clears parity. Only data drives. And only if you are adding a drive to a slot that doesn't have an existing drive.
 
When replacing a data drive, or adding or replacing a parity drive, Unraid directly writes the data that belongs there; it doesn't bother clearing the drive beforehand.
 
The only reason a cleared drive is even needed is to add a totally new data drive slot to an array with existing valid parity.


Thanks for clearing this up. Sorry for the bad information. Is there something different with parity, though? Maybe in the old days, since parity writes to the entire drive?

I'm having the same issue where drives keep "erasing" over and over again or getting stuck at 99%. I had two drives that were "erasing" for 200+ hours before I killed the process. I started them over and they both hung at 99%. I started one of the drives by itself and it keeps getting to the end of the erase and restarting. I'll post the logs here and a few notes:

 

These are drives connected with a USB dock, and I was able to clear 8-10 other drives before this. All of the ones failing are Seagate or Hitachi 3-4TB drives. I have the Unassigned Devices plugin, but they aren't mounted. I'm running them as "erase and clear" with the gfjardim script, 1 cycle, no pre-read, and this is a sample of what I'm seeing:

 

Feb 03 08:48:17 preclear_disk_ZFN1PJC3_650: Command: /usr/local/emhttp/plugins/preclear.disk/script/preclear_disk.sh --erase-clear --notify 3 --frequency 1 --cycles 1 --skip-preread --skip-postread --no-prompt /dev/sdk
Feb 03 08:48:17 preclear_disk_ZFN1PJC3_650: Preclear Disk Version: 1.0.9
Feb 03 08:48:17 preclear_disk_ZFN1PJC3_650: S.M.A.R.T. info type: default
Feb 03 08:48:17 preclear_disk_ZFN1PJC3_650: S.M.A.R.T. attrs type: default
Feb 03 08:48:17 preclear_disk_ZFN1PJC3_650: Disk size: 4000787030016
Feb 03 08:48:17 preclear_disk_ZFN1PJC3_650: Disk blocks: 976754646
Feb 03 08:48:17 preclear_disk_ZFN1PJC3_650: Blocks (512 byte): 7814037168
Feb 03 08:48:17 preclear_disk_ZFN1PJC3_650: Block size: 4096
Feb 03 08:48:17 preclear_disk_ZFN1PJC3_650: Start sector: 0
Feb 03 08:48:20 preclear_disk_ZFN1PJC3_650: Erasing: openssl enc -aes-256-ctr -pass pass:'******' -nosalt < /dev/zero > /tmp/.preclear/sdk/fifo
Feb 03 08:48:20 preclear_disk_ZFN1PJC3_650: Erasing: emptying the MBR.
Feb 03 08:48:20 preclear_disk_ZFN1PJC3_650: Erasing: dd if=/tmp/.preclear/sdk/fifo of=/dev/sdk bs=2097152 seek=2097152 count=4000784932864 conv=notrunc iflag=count_bytes,nocache,fullblock oflag=seek_bytes iflag=fullblock
Feb 03 08:48:20 preclear_disk_ZFN1PJC3_650: Erasing: dd pid [3330]
Feb 03 11:32:44 preclear_disk_ZFN1PJC3_650: Erasing: progress - 10% erased
Feb 03 14:20:23 preclear_disk_ZFN1PJC3_650: Erasing: progress - 20% erased
Feb 03 17:09:09 preclear_disk_ZFN1PJC3_650: Erasing: progress - 30% erased
Feb 03 19:58:05 preclear_disk_ZFN1PJC3_650: Erasing: progress - 40% erased
Feb 03 22:47:18 preclear_disk_ZFN1PJC3_650: Erasing: progress - 50% erased
Feb 04 01:36:10 preclear_disk_ZFN1PJC3_650: Erasing: progress - 60% erased
Feb 04 04:25:08 preclear_disk_ZFN1PJC3_650: Erasing: progress - 70% erased
Feb 04 07:14:12 preclear_disk_ZFN1PJC3_650: Erasing: progress - 80% erased
Feb 04 10:03:23 preclear_disk_ZFN1PJC3_650: Erasing: progress - 90% erased
Feb 04 12:52:43 preclear_disk_ZFN1PJC3_650: Erasing: dd output: 1906354+0 records out
Feb 04 12:52:43 preclear_disk_ZFN1PJC3_650: Erasing: dd output: 3997914103808 bytes (4.0 TB, 3.6 TiB) copied, 100927 s, 39.6 MB/s
Feb 04 12:52:43 preclear_disk_ZFN1PJC3_650: Erasing: dd output: 1906593+0 records in
Feb 04 12:52:43 preclear_disk_ZFN1PJC3_650: Erasing: dd output: 1906593+0 records out
Feb 04 12:52:43 preclear_disk_ZFN1PJC3_650: Erasing: dd output: 3998415323136 bytes (4.0 TB, 3.6 TiB) copied, 100939 s, 39.6 MB/s
Feb 04 12:52:43 preclear_disk_ZFN1PJC3_650: Erasing: dd output: 1906807+0 records in
Feb 04 12:52:43 preclear_disk_ZFN1PJC3_650: Erasing: dd output: 1906807+0 records out
Feb 04 12:52:43 preclear_disk_ZFN1PJC3_650: Erasing: dd output: 3998864113664 bytes (4.0 TB, 3.6 TiB) copied, 100951 s, 39.6 MB/s
Feb 04 12:52:43 preclear_disk_ZFN1PJC3_650: Erasing: dd output: 1907051+0 records in
Feb 04 12:52:43 preclear_disk_ZFN1PJC3_650: Erasing: dd output: 1907051+0 records out
Feb 04 12:52:43 preclear_disk_ZFN1PJC3_650: Erasing: dd output: 3999375818752 bytes (4.0 TB, 3.6 TiB) copied, 100963 s, 39.6 MB/s
Feb 04 12:52:43 preclear_disk_ZFN1PJC3_650: Erasing: dd output: 1907265+0 records in
Feb 04 12:52:43 preclear_disk_ZFN1PJC3_650: Erasing: dd output: 1907265+0 records out
Feb 04 12:52:43 preclear_disk_ZFN1PJC3_650: Erasing: dd output: 3999824609280 bytes (4.0 TB, 3.6 TiB) copied, 100975 s, 39.6 MB/s
Feb 04 12:52:43 preclear_disk_ZFN1PJC3_650: Erasing: dd output: 1907508+0 records in
Feb 04 12:52:43 preclear_disk_ZFN1PJC3_650: Erasing: dd output: 1907508+0 records out
Feb 04 12:52:43 preclear_disk_ZFN1PJC3_650: Erasing: dd output: 4000334217216 bytes (4.0 TB, 3.6 TiB) copied, 100987 s, 39.6 MB/s
Feb 04 12:52:43 preclear_disk_ZFN1PJC3_650: Erasing: dd output: 1907715+0 records in
Feb 04 12:52:43 preclear_disk_ZFN1PJC3_650: Erasing: dd output: 1907715+0 records out
Feb 04 12:52:43 preclear_disk_ZFN1PJC3_650: Erasing: dd output: 4000768327680 bytes (4.0 TB, 3.6 TiB) copied, 100999 s, 39.6 MB/s
Feb 04 12:52:44 preclear_disk_ZFN1PJC3_650: dd process hung at 4000770424832, killing....
Feb 04 12:52:44 preclear_disk_ZFN1PJC3_650: Continuing disk write on byte 4000768327680
Feb 04 12:52:44 preclear_disk_ZFN1PJC3_650: Erasing: openssl enc -aes-256-ctr -pass pass:'******' -nosalt < /dev/zero > /tmp/.preclear/sdk/fifo
Feb 04 12:52:44 preclear_disk_ZFN1PJC3_650: Erasing: dd if=/tmp/.preclear/sdk/fifo of=/dev/sdk bs=2097152 seek=4000768327680 count=18702336 conv=notrunc iflag=count_bytes,nocache,fullblock oflag=seek_bytes iflag=fullblock
Feb 04 12:52:44 preclear_disk_ZFN1PJC3_650: Erasing: dd pid [8748]
Feb 04 12:53:48 preclear_disk_ZFN1PJC3_650: dd process hung at 0, killing....
Feb 04 12:53:48 preclear_disk_ZFN1PJC3_650: Erasing: openssl enc -aes-256-ctr -pass pass:'******' -nosalt < /dev/zero > /tmp/.preclear/sdk/fifo
Feb 04 12:53:48 preclear_disk_ZFN1PJC3_650: Erasing: emptying the MBR.
Feb 04 12:57:00 preclear_disk_ZFN1PJC3_650: Erasing: dd if=/tmp/.preclear/sdk/fifo of=/dev/sdk bs=2097152 seek=2097152 count=4000784932864 conv=notrunc iflag=count_bytes,nocache,fullblock oflag=seek_bytes iflag=fullblock
Feb 04 12:57:00 preclear_disk_ZFN1PJC3_650: Erasing: dd pid [31679]
Feb 04 12:58:05 preclear_disk_ZFN1PJC3_650: Erasing: dd output: 0+0 records in
Feb 04 12:58:05 preclear_disk_ZFN1PJC3_650: Erasing: dd output: 0+0 records out
Feb 04 12:58:05 preclear_disk_ZFN1PJC3_650: Erasing: dd output: 0 bytes copied, 535414 s, 0.0 kB/s
Feb 04 12:58:05 preclear_disk_ZFN1PJC3_650: Erasing: dd output: 0+0 records in
Feb 04 12:58:05 preclear_disk_ZFN1PJC3_650: Erasing: dd output: 0+0 records out
Feb 04 12:58:05 preclear_disk_ZFN1PJC3_650: Erasing: dd output: 0 bytes copied, 535427 s, 0.0 kB/s
Feb 04 12:58:05 preclear_disk_ZFN1PJC3_650: Erasing: dd output: 0+0 records in
Feb 04 12:58:05 preclear_disk_ZFN1PJC3_650: Erasing: dd output: 0+0 records out
Feb 04 12:58:05 preclear_disk_ZFN1PJC3_650: Erasing: dd output: 0 bytes copied, 535439 s, 0.0 kB/s
Feb 04 12:58:05 preclear_disk_ZFN1PJC3_650: Erasing: dd output: 0+0 records in
Feb 04 12:58:05 preclear_disk_ZFN1PJC3_650: Erasing: dd output: 0+0 records out
Feb 04 12:58:05 preclear_disk_ZFN1PJC3_650: Erasing: dd output: 0 bytes copied, 535451 s, 0.0 kB/s
Feb 04 12:58:05 preclear_disk_ZFN1PJC3_650: Erasing: dd output: 0+0 records in
Feb 04 12:58:05 preclear_disk_ZFN1PJC3_650: Erasing: dd output: 0+0 records out
Feb 04 12:58:05 preclear_disk_ZFN1PJC3_650: Erasing: dd output: 0 bytes copied, 535464 s, 0.0 kB/s
Feb 04 12:58:05 preclear_disk_ZFN1PJC3_650: Erasing: dd output: 0+0 records in
Feb 04 12:58:05 preclear_disk_ZFN1PJC3_650: Erasing: dd output: 0+0 records out
Feb 04 12:58:05 preclear_disk_ZFN1PJC3_650: Erasing: dd output: 0 bytes copied, 535476 s, 0.0 kB/s
Feb 04 12:58:05 preclear_disk_ZFN1PJC3_650: /usr/local/emhttp/plugins/preclear.disk/script/preclear_disk.sh: line 48: 31679 Killed $dd_cmd 2> $dd_output
Feb 04 12:58:05 preclear_disk_ZFN1PJC3_650: dd process hung at 2097152, killing....
Feb 04 12:58:05 preclear_disk_ZFN1PJC3_650: Erasing: openssl enc -aes-256-ctr -pass pass:'******' -nosalt < /dev/zero > /tmp/.preclear/sdk/fifo
Feb 04 12:58:05 preclear_disk_ZFN1PJC3_650: Erasing: emptying the MBR.
Feb 04 12:58:05 preclear_disk_ZFN1PJC3_650: Erasing: dd if=/tmp/.preclear/sdk/fifo of=/dev/sdk bs=2097152 seek=2097152 count=4000784932864 conv=notrunc iflag=count_bytes,nocache,fullblock oflag=seek_bytes iflag=fullblock
Feb 04 12:58:05 preclear_disk_ZFN1PJC3_650: Erasing: dd pid [10243]
Feb 04 15:45:32 preclear_disk_ZFN1PJC3_650: Erasing: progress - 10% erased
Feb 04 18:37:31 preclear_disk_ZFN1PJC3_650: Erasing: progress - 20% erased
Feb 04 21:29:19 preclear_disk_ZFN1PJC3_650: Erasing: progress - 30% erased
Feb 05 00:21:28 preclear_disk_ZFN1PJC3_650: Erasing: progress - 40% erased
Feb 05 03:13:48 preclear_disk_ZFN1PJC3_650: Erasing: progress - 50% erased

Any ideas what is going on? 

 

5 minutes ago, cisellis said:

I'm having the same issue where drives keep "erasing" over and over again or getting stuck at 99%. I had two drives that were "erasing" for 200+ hours before I killed the process. I started them over and they both hung at 99%. I started one of the drives by itself and it keeps getting to the end of the erase and restarting. I'll post the logs here and a few notes:

 


Damn, I thought my drives were dead. Yeah, same issue here. SSD, HDD, doesn't matter what port either. 

2 minutes ago, trurl said:

You shouldn't preclear SSDs

Why not? I understand it technically reduces their life, but I'm using mostly Samsung drives with 40-400 TBW warranties. 

If I write 500 GB to it once, am I really reducing the life? 

 

My 850 Pro, for example, I've had since 2011, and I have only written 68 TB of the 400 TB warranty. 
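
For what it's worth, that figure can be read straight off the drive. A minimal sketch, assuming a Samsung SATA SSD at /dev/sdX that reports the Total_LBAs_Written attribute (the attribute name and the 512-byte sector size vary by model, so treat the result as an estimate):

lbas=$(smartctl -A /dev/sdX | awk '/Total_LBAs_Written/ {print $10}')  # raw value = LBAs written
echo "approx TB written: $(echo "$lbas * 512 / 10^12" | bc -l)"        # convert 512-byte LBAs to TB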

4 minutes ago, Hikakiller said:

Why not? I understand it technically reduces their life, but I'm using mostly Samsung drives with 40-400 TBW warranties. 

If I write 500 GB to it once, am I really reducing the life? 

It's still unnecessary; you can use blkdiscard to completely wipe an SSD in a couple of seconds, tops.

 

blkdiscard /dev/sdX
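
A cautious usage sketch (device names below are examples; blkdiscard wipes the entire device in one go and only works where the drive and controller support TRIM/discard):

lsblk -o NAME,MODEL,SERIAL,SIZE   # confirm which /dev/sdX really is the SSD you mean to wipe
blkdiscard /dev/sdX               # discard every block on that SSD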

 

1 minute ago, johnnie.black said:

It's still unnecessary; you can use blkdiscard to completely wipe an SSD in a couple of seconds, tops.

 


blkdiscard /dev/sdX

 

Well, I also use it as a stress test of sorts. I've never detected a bad drive that way, but I imagine if, say, the wear leveling went down 1% from a single drive pass, I'd return it. 

 

I buy a lot of used SSDs.


Cisellis,

I was having the same issue where it hung at 99% in the zeroing phase and the second drive completed. I ended up stopping the process, and when I went to start it back up I clicked on the resume feature, and after that it completed the zeroing and the post-read! So just be patient with it, I guess, and try doing one drive at a time, in my opinion.



I think that is where I'm at. I'm only running one drive this time and I'm getting the same behavior (from the log above). It's at about 70 hours and has restarted at least 2-3 times from what I can tell. I haven't tried stopping and restarting it yet. I will probably wait and see if I can find another solution first. I saw something about a docker container for preclear and I might try that instead if I can't figure out what is going on.

40 minutes ago, Hikakiller said:

Any other way to make sure used SSDs are good? 

Look at the SMART data; if all is normal, do a read test, using for example the DiskSpeed docker, to confirm speeds are normal. That should be enough. SSDs don't usually fail like disks do, i.e., by developing bad sectors, though there are exceptions; they usually fail completely after a reboot/power cycle, and you can't test for that.
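
A minimal sketch of that kind of check from the console (the device name is an example; the dd read is non-destructive but will keep the drive busy for a bit):

smartctl -a /dev/sdX                                           # review SMART health and attributes
dd if=/dev/sdX of=/dev/null bs=1M count=4096 status=progress   # ~4 GiB sequential read to sanity-check speed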


 

Hi! Sorry for posting in the general support section.

Just got an error while preclearing a new 8TB drive. Can someone suggest what to do?

I don't see any explicit error. It just could not finish reading it and exited.

EDIT: I ran a SMART test and it shows the disk is OK. Since I want to replace the parity disk with this new disk, can I do that or should I run any further tests to check the health of the disk?

Rgds

preclear round 1

ST8000DM004-2CX188_WCT2AX56-20200208-1107.txt

I also attached a SMART report.
1 hour ago, Gragorg said:

Can anyone recommend a good external USB enclosure that I can use to preclear drives? Then I could remove them from the enclosure and install them in the server.  I am getting close to capacity in my case and would like to start replacing old small drives with 10TB drives.

I am not a great lover of USB enclosures but I would be looking for something like this:

 

   https://www.amazon.com/Sabrent-External-Duplicator-Function-EC-HD2B/dp/B0759567JT/ref=sr_1_19?keywords=usb%2Bdrive%2Benclosure&qid=1581192710&sr=8-19&th=1

or this one:

https://www.amazon.com/dp/B07PX2HHD6/ref=psdc_160354011_t5_B0759567JT

 

Taking a housing apart is not always the easiest thing.  Plus, this slide-in type allows the effective use of an external cooling fan if you find that necessary. Make sure the supplied USB cable is one that you can use, or else buy the USB cable you will need.

 

Be sure to double check as I seem to recall that getting SMART reports via USB is sometimes difficult...
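
If the bridge does pass SMART through, smartctl usually needs to be told the bridge type explicitly; a sketch, with the device name and bridge support being assumptions:

smartctl -d sat -a /dev/sdX    # SAT pass-through works for many USB-SATA bridges
smartctl -d scsi -a /dev/sdX   # fallback that sometimes works when SAT pass-through isn't supported

Some enclosures simply don't pass SMART at all, which is the thing to verify before buying.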

 

3 hours ago, Frank1940 said:

I am not a great lover of USB enclosures but I would be looking for something like this:

 

   https://www.amazon.com/Sabrent-External-Duplicator-Function-EC-HD2B/dp/B0759567JT/ref=sr_1_19?keywords=usb%2Bdrive%2Benclosure&qid=1581192710&sr=8-19&th=1

or this one:

https://www.amazon.com/dp/B07PX2HHD6/ref=psdc_160354011_t5_B0759567JT

 


Have you used these units without issue before?  Just trying to avoid headaches.  The plan is to preclear 10TB WD Red drives in the enclosure and then remove them to be installed on SATA in the server once precleared.

6 hours ago, Gragorg said:

Can anyone recommend a good external USB enclosure that I can use to preclear drives? Then I could remove them from the enclosure and install them in the server.  I am getting close to capacity in my case and would like to start replacing old small drives with 10TB drives.

As mentioned above there are USB enclosures that do pass the SMART and other info from the drive(s). However, if you haven't already bought drives then why not just use the WD EasyStore/Elements or Seagate offerings? Both the stock WD and Seagate enclosures for their 10TB+ models have worked for me to pass SMART and other info via USB.

 

Unless you're concerned about getting a longer warranty (3 - 5 yrs for retail bare drives, 2 years for drives in USB enclosures), just buy the less expensive USB drives and shuck the bare drive from the enclosure after you've let them preclear and/or stress test. The WD enclosures are almost always 'white label' REDs (the WD NAS series). Every 10TB+ Seagate that I've shucked has been a Barracuda Pro.

 

And yes, the bare drives are still warrantied after being shucked from the USB enclosure, but in most cases only for 2 years. I've returned a bare drive to Seagate that came from an enclosure using the serial number on the bare drive itself. Not one question about why it wasn't in the USB enclosure - they just sent me a replacement sealed retail Barracuda Pro. Others have had similar experiences with recent WD drives.

 

You save considerable money purchasing the USB drives over the bare drive. My queries to both Seagate and WD on why they do this have never been answered. They sell the same bare drive with it installed in their USB enclosure cheaper than the bare retail drive. But as mentioned above, most of the USB drives from WD/Seagate have only 2 years of warranty, even though the bare drives inside are the same model as their bare drives.

 

Hope that helps.

 

12 hours ago, Gragorg said:

Have you used these units without issue before?

I have not, but I have friends who have purchased similar units in the past.  They were using them for much the same thing as you are planning on: to work on drives with various utilities.  It is a whole lot easier to just slide a drive into one of these than it is to disassemble many of the USB housings.  (Most of them are intended to have the drive installed in them for the life of the drive.  Convenience of removal is not a major design consideration!)

On 2/6/2020 at 1:09 PM, cisellis said:

I'm having the same issue where drives keep "erasing" over and over again or getting stuck at 99%. I had two drives that were "erasing" for 200+ hours before I killed the process. I started them over and they both hung at 99%. I started one of the drives by itself and it keeps getting to the end of the erase and restarting. I'll post the logs here and a few notes:

 


I can't replicate this behavior, but will increase some time lapses to try to avoid this.


Huh... I thought it was just me being weird for having 5 disks running preclear at once.

I had the same situation with a disk or 2 (I don't remember, there were 18 of them total) that got to the end of phase 1, but never did phases 2 and 3, and at least one of them started over with the preclear.

It was 9 hours to process these 4TB disks.  Again, it wasn't all of them, just a couple.  I didn't take very good notes, I'm sorry.
