Preclear plugin



Hey guys,

 

currently running preclear on 2 disks. The log is getting flooded with messages like these:

 

Feb 15 21:53:13 Tower preclear_disk_ZDH1NDE7[5373]: tput: unknown terminal "screen"

 

I started the preclear via the Unassigned Devices plugin. In the GUI everything appears fine, and the unassigned devices section shows the preclear progress correctly, but the GUI can't show me the syslog anymore due to "out of RAM".

 

Not a big deal, but thought I'd report it.
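For reference, that tput noise only means $TERM names a terminal type with no terminfo entry on the box — preclear runs inside a screen session, and a slim rootfs may not ship the "screen" entry. It's cosmetic; the preclear itself is unaffected. A sketch of a fallback, assuming the "linux" terminfo entry is present:

```shell
#!/bin/bash
# "tput: unknown terminal" means $TERM names a type with no terminfo entry;
# preclear runs inside screen, and the "screen" entry may be missing.
# Fall back to a terminal type that does exist (assumption: "linux" is present):
term_ok() {
    tput -T "$1" cols >/dev/null 2>&1
}
term_ok "${TERM:-}" || export TERM=linux
echo "TERM is now: $TERM"
```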

Link to comment
  • 3 weeks later...

I bought yet another drive and tried it again.  Failed again at post read :(

Here is the log:

 

Mar  7 17:08:17 UnRAID preclear_disk_9LG7SS1A[11901]: Zeroing: progress - 90% zeroed @ 115 MB/s
Mar  7 20:40:42 UnRAID preclear_disk_9LG7SS1A[11901]: Pause (smartctl run time: 25s)
Mar  7 20:40:42 UnRAID preclear_disk_9LG7SS1A[11901]: Paused
Mar  7 20:40:54 UnRAID preclear_disk_9LG7SS1A[11901]: killing smartctl with pid 24018 - probably stalled...
Mar  7 20:41:03 UnRAID rc.diskinfo[9412]: SIGHUP received, forcing refresh of disks info.
Mar  7 20:41:06 UnRAID preclear_disk_9LG7SS1A[11901]: Resumed
Mar  7 20:41:08 UnRAID preclear_disk_9LG7SS1A[11901]: Zeroing: dd - wrote 14000519643136 of 14000519643136 (0).
Mar  7 20:41:08 UnRAID preclear_disk_9LG7SS1A[11901]: Zeroing: elapsed time - 47:13:31
Mar  7 20:41:08 UnRAID preclear_disk_9LG7SS1A[11901]: Zeroing: dd exit code - 0
Mar  7 20:41:08 UnRAID preclear_disk_9LG7SS1A[11901]: Zeroing: zeroing the disk completed!
Mar  7 20:41:09 UnRAID preclear_disk_9LG7SS1A[11901]: Signature: writing signature:    0   0   2   0   0 255 255 255   1   0   0   0 255 255 255 255
Mar  7 20:41:09 UnRAID rc.diskinfo[9412]: SIGHUP received, forcing refresh of disks info.
Mar  7 20:41:09 UnRAID preclear_disk_9LG7SS1A[11901]: Signature: verifying unRAID's signature on the MBR ...
Mar  7 20:41:09 UnRAID preclear_disk_9LG7SS1A[11901]: Signature: Unraid preclear signature is valid!
Mar  7 20:41:09 UnRAID preclear_disk_9LG7SS1A[11901]: Post-Read: post-read verification started (1/5)....
Mar  7 20:41:10 UnRAID preclear_disk_9LG7SS1A[11901]: Post-Read: verifying the beginning of the disk.
Mar  7 20:41:10 UnRAID preclear_disk_9LG7SS1A[11901]: Post-Read: cmp /tmp/.preclear/sdd/fifo /dev/zero
Mar  7 20:41:10 UnRAID preclear_disk_9LG7SS1A[11901]: Post-Read: dd if=/dev/sdd of=/tmp/.preclear/sdd/fifo count=2096640 skip=512 iflag=nocache,count_bytes,skip_bytes
Mar  7 20:41:11 UnRAID preclear_disk_9LG7SS1A[11901]: Post-Read: verifying the rest of the disk.
Mar  7 20:41:11 UnRAID preclear_disk_9LG7SS1A[11901]: Post-Read: cmp /tmp/.preclear/sdd/fifo /dev/zero
Mar  7 20:41:11 UnRAID preclear_disk_9LG7SS1A[11901]: Post-Read: dd if=/dev/sdd of=/tmp/.preclear/sdd/fifo bs=2097152 skip=2097152 count=14000517545984 iflag=nocache,count_bytes,skip_bytes
Mar  7 20:42:53 UnRAID preclear_disk_9LG7SS1A[11901]: size:  32824832, available: 5199112, free: 15%
Mar  7 20:42:53 UnRAID preclear_disk_9LG7SS1A[11901]: Filesystem             1K-blocks        Used Available Use% Mounted on
Mar  7 20:42:53 UnRAID preclear_disk_9LG7SS1A[11901]: rootfs                  32824832    27627768   5197064  85% /
Mar  7 20:42:53 UnRAID preclear_disk_9LG7SS1A[11901]: devtmpfs                32824840           0  32824840   0% /dev
Mar  7 20:42:53 UnRAID preclear_disk_9LG7SS1A[11901]: tmpfs                   32931576           0  32931576   0% /dev/shm
Mar  7 20:42:53 UnRAID preclear_disk_9LG7SS1A[11901]: cgroup_root                 8192           0      8192   0% /sys/fs/cgroup
Mar  7 20:42:53 UnRAID preclear_disk_9LG7SS1A[11901]: tmpfs                     131072        2164    128908   2% /var/log
Mar  7 20:42:53 UnRAID preclear_disk_9LG7SS1A[11901]: /dev/sda1               30202944      464704  29738240   2% /boot
Mar  7 20:42:53 UnRAID preclear_disk_9LG7SS1A[11901]: overlay                 32824832    27627768   5197064  85% /lib/modules
Mar  7 20:42:53 UnRAID preclear_disk_9LG7SS1A[11901]: overlay                 32824832    27627768   5197064  85% /lib/firmware
Mar  7 20:42:53 UnRAID preclear_disk_9LG7SS1A[11901]: tmpfs                       1024           0      1024   0% /mnt/disks
Mar  7 20:42:53 UnRAID preclear_disk_9LG7SS1A[11901]: tmpfs                       1024           0      1024   0% /mnt/remotes
Mar  7 20:42:53 UnRAID preclear_disk_9LG7SS1A[11901]: /dev/nvme0n1p1         999715228   877905540 121809688  88% /mnt/disks/NVME
Mar  7 20:42:53 UnRAID preclear_disk_9LG7SS1A[11901]: /dev/md1              4881683620  4836461832  45221788 100% /mnt/disk1
Mar  7 20:42:53 UnRAID preclear_disk_9LG7SS1A[11901]: /dev/md2              4881683620  4837057604  44626016 100% /mnt/disk2
Mar  7 20:42:53 UnRAID preclear_disk_9LG7SS1A[11901]: /dev/md3              3905110812  3862115484  42995328  99% /mnt/disk3
Mar  7 20:42:53 UnRAID preclear_disk_9LG7SS1A[11901]: /dev/md4             11716798412 11584165276 132633136  99% /mnt/disk4
Mar  7 20:42:53 UnRAID preclear_disk_9LG7SS1A[11901]: /dev/sde1              976284628   839029464 137255164  86% /mnt/cache
Mar  7 20:42:53 UnRAID preclear_disk_9LG7SS1A[11901]: shfs                 25385276464 25119800196 265476268  99% /mnt/user0
Mar  7 20:42:53 UnRAID preclear_disk_9LG7SS1A[11901]: shfs                 25385276464 25119800196 265476268  99% /mnt/user
Mar  7 20:42:53 UnRAID preclear_disk_9LG7SS1A[11901]: Google:Unraid_Backup    15728640     6037744   9690896  39% /mnt/disks/Unraid_Backup
Mar  7 20:42:53 UnRAID preclear_disk_9LG7SS1A[11901]: /dev/loop2              20961280    11925896   9035384  57% /var/lib/docker
Mar  7 20:42:53 UnRAID preclear_disk_9LG7SS1A[11901]: overlay                 20961280    11925896   9035384  57% /var/lib/docker/overlay2/1508e11bff76a50aa6db7c550a6fc3b76c7023f60e3b52fb90b75b9945815581/merged
Mar  7 20:42:53 UnRAID preclear_disk_9LG7SS1A[11901]: /dev/loop3               1048576        4524    925524   1% /etc/libvirt
Mar  7 20:42:53 UnRAID preclear_disk_9LG7SS1A[11901]: overlay                 20961280    11925896   9035384  57% /var/lib/docker/overlay2/45f100b9e24e461f896332ed14bbe093116abec7f2c3d82524e9e811e56795b6/merged
Mar  7 20:42:53 UnRAID preclear_disk_9LG7SS1A[11901]: overlay                 20961280    11925896   9035384  57% /var/lib/docker/overlay2/de352440f6c47a3536b0e3d901c0435d7944ca85430df7e307ad7c046447b747/merged
Mar  7 20:42:53 UnRAID preclear_disk_9LG7SS1A[11901]: overlay                 20961280    11925896   9035384  57% /var/lib/docker/overlay2/8aad0448a3057d43ad9ef5085d76dd0a6c812164e5e225f1f6e720a3d377c3ca/merged
Mar  7 20:42:53 UnRAID preclear_disk_9LG7SS1A[11901]: overlay                 20961280    11925896   9035384  57% /var/lib/docker/overlay2/222f57029490f6cd06de8c95e0ca0ffc80a48fd6c6f64dbb359528a3a26ddfb3/merged
Mar  7 20:42:53 UnRAID preclear_disk_9LG7SS1A[11901]: overlay                 20961280    11925896   9035384  57% /var/lib/docker/overlay2/9ba6cc3f545ce1faae3556bd86033193a41a5a85ffd7b1aae03c71f939e5c123/merged
Mar  7 20:42:53 UnRAID preclear_disk_9LG7SS1A[11901]: overlay                 20961280    11925896   9035384  57% /var/lib/docker/overlay2/bd38ba8c809b49eb7b02c9057019d847efa911342e3b983dc3febab8f1c540d4/merged
Mar  7 20:42:53 UnRAID preclear_disk_9LG7SS1A[11901]: overlay                 20961280    11925896   9035384  57% /var/lib/docker/overlay2/8c3b5c2401f6c02a0a342bf55af4de8da14d27e7a805c778ecf13c9857e915f7/merged
Mar  7 20:42:53 UnRAID preclear_disk_9LG7SS1A[11901]: overlay                 20961280    11925896   9035384  57% /var/lib/docker/overlay2/0b81ddc17859b5fab267404f03f939e1b1f33fd1ca315679216eb0e7e4972e04/merged
Mar  7 20:42:53 UnRAID preclear_disk_9LG7SS1A[11901]: overlay                 20961280    11925896   9035384  57% /var/lib/docker/overlay2/b21a366d4c9b6e8e549e44e97ca70918de9d848964ea0f9e0c762eaed48d02fe/merged
Mar  7 20:42:53 UnRAID preclear_disk_9LG7SS1A[11901]: overlay                 20961280    11925896   9035384  57% /var/lib/docker/overlay2/7fe642a1bf32a5ae8a0b7f781fd60a75c0ee771c33bf01af73b735a0088b769d/merged
Mar  7 20:42:53 UnRAID preclear_disk_9LG7SS1A[11901]: overlay                 20961280    11925896   9035384  57% /var/lib/docker/overlay2/80fe5f6a60a61d0f8ea5cb66617cce38c366002bde1a05832212b8ec9f3b6d68/merged
Mar  7 20:42:53 UnRAID preclear_disk_9LG7SS1A[11901]: overlay                 20961280    11925896   9035384  57% /var/lib/docker/overlay2/423b8544d034b72de42757e45870f808e21f82984a24962a4ce9aa85f426c3ae/merged
Mar  7 20:42:53 UnRAID preclear_disk_9LG7SS1A[11901]: Low memory detected, aborting...
Mar  7 20:42:53 UnRAID preclear_disk_9LG7SS1A[11901]: Post-Read: post-read verification failed!
Mar  7 20:42:53 UnRAID preclear_disk_9LG7SS1A[11901]: cat: /tmp/.preclear/sdd/smart_cycle_initial_start: No such file or directory
Mar  7 20:42:53 UnRAID preclear_disk_9LG7SS1A[11901]: cat: /tmp/.preclear/sdd/smart_cycle_initial_start: No such file or directory
Mar  7 20:42:53 UnRAID preclear_disk_9LG7SS1A[11901]: cat: /tmp/.preclear/sdd/smart_cycle_initial_start: No such file or directory
Mar  7 20:42:53 UnRAID preclear_disk_9LG7SS1A[11901]: cat: /tmp/.preclear/sdd/smart_cycle_initial_start: No such file or directory
Mar  7 20:42:53 UnRAID preclear_disk_9LG7SS1A[11901]: cat: /tmp/.preclear/sdd/smart_cycle_initial_start: No such file or directory
Mar  7 20:42:53 UnRAID preclear_disk_9LG7SS1A[11901]: cat: /tmp/.preclear/sdd/smart_cycle_initial_start: No such file or directory
Mar  7 20:42:53 UnRAID preclear_disk_9LG7SS1A[11901]: cat: /tmp/.preclear/sdd/smart_cycle_initial_start: No such file or directory
Mar  7 20:42:53 UnRAID preclear_disk_9LG7SS1A[11901]: cat: /tmp/.preclear/sdd/smart_cycle_initial_start: No such file or directory
Mar  7 20:42:54 UnRAID preclear_disk_9LG7SS1A[11901]: /usr/local/emhttp/plugins/preclear.disk/script/preclear_disk.sh: line 1702: let: diff=( - ): syntax error: operand expected (error token is ")")
Mar  7 20:42:54 UnRAID preclear_disk_9LG7SS1A[11901]: /usr/local/emhttp/plugins/preclear.disk/script/preclear_disk.sh: line 1702: let: diff=( - ): syntax error: operand expected (error token is ")")
Mar  7 20:42:54 UnRAID preclear_disk_9LG7SS1A[11901]: /usr/local/emhttp/plugins/preclear.disk/script/preclear_disk.sh: line 1702: let: diff=( - ): syntax error: operand expected (error token is ")")
Mar  7 20:42:54 UnRAID preclear_disk_9LG7SS1A[11901]: /usr/local/emhttp/plugins/preclear.disk/script/preclear_disk.sh: line 1702: let: diff=( - ): syntax error: operand expected (error token is ")")
Mar  7 20:42:54 UnRAID preclear_disk_9LG7SS1A[11901]: /usr/local/emhttp/plugins/preclear.disk/script/preclear_disk.sh: line 1702: let: diff=( - ): syntax error: operand expected (error token is ")")
Mar  7 20:42:54 UnRAID preclear_disk_9LG7SS1A[11901]: /usr/local/emhttp/plugins/preclear.disk/script/preclear_disk.sh: line 1702: let: diff=( - ): syntax error: operand expected (error token is ")")
Mar  7 20:42:54 UnRAID preclear_disk_9LG7SS1A[11901]: /usr/local/emhttp/plugins/preclear.disk/script/preclear_disk.sh: line 1702: let: diff=( - ): syntax error: operand expected (error token is ")")
Mar  7 20:42:54 UnRAID preclear_disk_9LG7SS1A[11901]: S.M.A.R.T.: Error:
Mar  7 20:42:54 UnRAID preclear_disk_9LG7SS1A[11901]: S.M.A.R.T.:
Mar  7 20:42:54 UnRAID preclear_disk_9LG7SS1A[11901]: S.M.A.R.T.: ATTRIBUTE                INITIAL  NOW  STATUS
Mar  7 20:42:54 UnRAID preclear_disk_9LG7SS1A[11901]: S.M.A.R.T.: Reallocated_Sector_Ct             -
Mar  7 20:42:54 UnRAID preclear_disk_9LG7SS1A[11901]: S.M.A.R.T.: Power_On_Hours                    -
Mar  7 20:42:54 UnRAID preclear_disk_9LG7SS1A[11901]: S.M.A.R.T.: Temperature_Celsius               -
Mar  7 20:42:54 UnRAID preclear_disk_9LG7SS1A[11901]: S.M.A.R.T.: Reallocated_Event_Count           -
Mar  7 20:42:54 UnRAID preclear_disk_9LG7SS1A[11901]: S.M.A.R.T.: Current_Pending_Sector            -
Mar  7 20:42:54 UnRAID preclear_disk_9LG7SS1A[11901]: S.M.A.R.T.: Offline_Uncorrectable             -
Mar  7 20:42:54 UnRAID preclear_disk_9LG7SS1A[11901]: S.M.A.R.T.: UDMA_CRC_Error_Count              -
Mar  7 20:42:54 UnRAID preclear_disk_9LG7SS1A[11901]: S.M.A.R.T.: SMART overall-health self-assessment test result: PASSED
Mar  7 20:42:54 UnRAID preclear_disk_9LG7SS1A[11901]: error encountered, exiting...
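For reference, the post-read that failed here is just dd streaming the disk through a FIFO into cmp against /dev/zero (the cmp/dd pair visible in the log above), and the abort was triggered by the low-memory check, not by a data mismatch. The same verification can be reproduced by hand; a minimal sketch (file-based for illustration — on a real disk substitute the device path and get the size from blockdev --getsize64):

```shell
#!/bin/bash
# Minimal re-creation of the post-read idea: everything past the first
# 512 bytes (where the preclear signature lives) must read back as zeroes.
# "$1" is the disk image to check - a sketch, not the plugin's script.
check_zeroed() {
    local dev=$1
    local size
    size=$(stat -c%s "$dev")    # for a real block device: blockdev --getsize64
    if cmp -s <(dd if="$dev" bs=2M skip=512 iflag=skip_bytes 2>/dev/null) \
              <(head -c $((size - 512)) /dev/zero); then
        echo "zeroed"
    else
        echo "not zeroed"
    fi
}
```

Against a small image file this shows the idea; against a 14TB disk it would take the same many hours the plugin does, minus the pause/resume and progress bookkeeping.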

 

Link to comment

I just added a new 8TB drive to my server and am attempting to preclear it. I see the drive in unassigned devices and in the preclear plugin. I clicked the start preclear link and it shows as "starting", but the actual preclear process never starts. I tried this twice.

 

Bad drive? Do I need to format it?

 

Here is what is showing in the log:

 

Mar 8 13:02:23 AUBURN kernel: sd 7:0:12:0: [sdo] 15628053168 512-byte logical blocks: (8.00 TB/7.28 TiB)
Mar 8 13:02:23 AUBURN kernel: sd 7:0:12:0: [sdo] 4096-byte physical blocks
Mar 8 13:02:23 AUBURN kernel: sd 7:0:12:0: [sdo] Write Protect is off
Mar 8 13:02:23 AUBURN kernel: sd 7:0:12:0: [sdo] Mode Sense: 7f 00 00 08
Mar 8 13:02:23 AUBURN kernel: sd 7:0:12:0: [sdo] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Mar 8 13:02:23 AUBURN kernel: sd 7:0:12:0: [sdo] Attached SCSI disk
Mar 8 13:12:24 AUBURN kernel: sd 7:0:12:0: [sdo] Synchronizing SCSI cache
Mar 8 13:12:24 AUBURN kernel: sd 7:0:12:0: [sdo] Synchronize Cache(10) failed: Result: hostbyte=0x01 driverbyte=0x00
Mar 8 13:12:49 AUBURN kernel: sd 7:0:13:0: [sdo] 15628053168 512-byte logical blocks: (8.00 TB/7.28 TiB)
Mar 8 13:12:49 AUBURN kernel: sd 7:0:13:0: [sdo] 4096-byte physical blocks
Mar 8 13:12:49 AUBURN kernel: sd 7:0:13:0: [sdo] Write Protect is off
Mar 8 13:12:49 AUBURN kernel: sd 7:0:13:0: [sdo] Mode Sense: 7f 00 00 08
Mar 8 13:12:49 AUBURN kernel: sd 7:0:13:0: [sdo] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Mar 8 13:12:49 AUBURN kernel: sd 7:0:13:0: [sdo] Attached SCSI disk

Link to comment

A cable on my drive was causing read errors, so I swapped out the cable and connected it to another port on the motherboard. This was Disk 1. When I started the array, Disk 2 was now Disk 1 and was automatically formatted. The original Disk 1, which still had its data since I had only moved a cable, was then automatically cleared, losing the data on that drive too. Unraid saw this as a two-drive failure, so the data on Disk 1 is lost.

 

Did preclear automatically start on the original Disk 1 because of a new port number on the motherboard?

 

This is with Unraid OS 6.9.0 and the latest Preclear app.

Link to comment
On 3/8/2021 at 1:20 PM, stepmback said:

I just added a new 8TB drive to my server and am attempting to preclear it. I see the drive in unassigned devices and in the preclear plugin. I clicked the start preclear link and it shows as "starting", but the actual preclear process never starts. I tried this twice.

 

Bad drive? Do I need to format it?

 

Here is what is showing in the log:

 

Mar 8 13:02:23 AUBURN kernel: sd 7:0:12:0: [sdo] 15628053168 512-byte logical blocks: (8.00 TB/7.28 TiB)
Mar 8 13:02:23 AUBURN kernel: sd 7:0:12:0: [sdo] 4096-byte physical blocks
Mar 8 13:02:23 AUBURN kernel: sd 7:0:12:0: [sdo] Write Protect is off
Mar 8 13:02:23 AUBURN kernel: sd 7:0:12:0: [sdo] Mode Sense: 7f 00 00 08
Mar 8 13:02:23 AUBURN kernel: sd 7:0:12:0: [sdo] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Mar 8 13:02:23 AUBURN kernel: sd 7:0:12:0: [sdo] Attached SCSI disk
Mar 8 13:12:24 AUBURN kernel: sd 7:0:12:0: [sdo] Synchronizing SCSI cache
Mar 8 13:12:24 AUBURN kernel: sd 7:0:12:0: [sdo] Synchronize Cache(10) failed: Result: hostbyte=0x01 driverbyte=0x00
Mar 8 13:12:49 AUBURN kernel: sd 7:0:13:0: [sdo] 15628053168 512-byte logical blocks: (8.00 TB/7.28 TiB)
Mar 8 13:12:49 AUBURN kernel: sd 7:0:13:0: [sdo] 4096-byte physical blocks
Mar 8 13:12:49 AUBURN kernel: sd 7:0:13:0: [sdo] Write Protect is off
Mar 8 13:12:49 AUBURN kernel: sd 7:0:13:0: [sdo] Mode Sense: 7f 00 00 08
Mar 8 13:12:49 AUBURN kernel: sd 7:0:13:0: [sdo] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Mar 8 13:12:49 AUBURN kernel: sd 7:0:13:0: [sdo] Attached SCSI disk

I got another drive, installed it, and am having the same issue. I noticed on the last drive I was getting Seek error rate numbers, and on this NEW drive I am getting them again, this time 483. Is this another bad drive, or am I just doing something wrong? Help!!

 

#   ATTRIBUTE NAME            FLAG    VALUE  WORST  THRESHOLD  TYPE      UPDATED  FAILED  RAW VALUE
1   Raw read error rate       0x000f  100    100    044        Pre-fail  Always   Never   2128
3   Spin up time              0x0003  098    098    000        Pre-fail  Always   Never   0
4   Start stop count          0x0032  100    100    020        Old age   Always   Never   1
5   Reallocated sector count  0x0033  100    100    010        Pre-fail  Always   Never   0
7   Seek error rate           0x000f  100    253    045        Pre-fail  Always   Never   483
9   Power on hours            0x0032  100    100    000        Old age   Always   Never   0 (0h)
10  Spin retry count          0x0013  100    100    097        Pre-fail  Always   Never   0
12  Power cycle count         0x0032  100    100    020        Old age   Always   Never   1
18  Unknown attribute         0x000b  100    100    050        Pre-fail  Always   Never   0

Link to comment
22 minutes ago, stepmback said:

I got another drive, installed it, and am having the same issue. I noticed on the last drive I was getting Seek error rate numbers, and on this NEW drive I am getting them again, this time 483. Is this another bad drive, or am I just doing something wrong? Help!!

 

#   ATTRIBUTE NAME            FLAG    VALUE  WORST  THRESHOLD  TYPE      UPDATED  FAILED  RAW VALUE
1   Raw read error rate       0x000f  100    100    044        Pre-fail  Always   Never   2128
3   Spin up time              0x0003  098    098    000        Pre-fail  Always   Never   0
4   Start stop count          0x0032  100    100    020        Old age   Always   Never   1
5   Reallocated sector count  0x0033  100    100    010        Pre-fail  Always   Never   0
7   Seek error rate           0x000f  100    253    045        Pre-fail  Always   Never   483
9   Power on hours            0x0032  100    100    000        Old age   Always   Never   0 (0h)
10  Spin retry count          0x0013  100    100    097        Pre-fail  Always   Never   0
12  Power cycle count         0x0032  100    100    020        Old age   Always   Never   1
18  Unknown attribute         0x000b  100    100    050        Pre-fail  Always   Never   0

That was annoying.

 

I had not rebooted my server in about 160 days. Everything was fine, so why bother? I just rebooted the server and now I am able to pre-clear the drive. Bug?

Link to comment

Preclear passed, but the drive then failed while rebuilding parity onto it. It's a used drive. Looks like it's flaky, though:

First preclear log: 

############################################################
# unRAID Server Preclear of disk PCJUZH9B
# Cycle 1 of 1, partition start on sector 64.
#
# Step 1 of 5 - Pre-read verification:                  [9:05:50 @ 122 MB/s] SUCCESS
# Step 2 of 5 - Zeroing the disk:                       [8:22:53 @ 132 MB/s] SUCCESS
# Step 3 of 5 - Writing unRAID's Preclear signature:                         SUCCESS
# Step 4 of 5 - Verifying unRAID's Preclear signature:                       SUCCESS
# Step 5 of 5 - Post-Read verification:                 [9:05:15 @ 122 MB/s] SUCCESS
############################################################
# Cycle elapsed time: 26:34:05 | Total elapsed time: 26:34:07
############################################################
# S.M.A.R.T. Status (device type: default)
#
# ATTRIBUTE                 INITIAL  CYCLE 1  STATUS
# Reallocated_Sector_Ct     0        0        -
# Power_On_Hours            48982    49008    Up 26
# Temperature_Celsius       39       44       Up 5
# Reallocated_Event_Count   0        0        -
# Current_Pending_Sector    0        0        -
# Offline_Uncorrectable     0        0        -
# UDMA_CRC_Error_Count      0        0        -
############################################################
# SMART overall-health self-assessment test result: PASSED
############################################################

--> ATTENTION: Please take a look into the SMART report above for drive health issues.
--> RESULT: Preclear Finished Successfully!

 

----------------

SMART Identity returns some garbage:

Vendor:
Product: H�m
Revision: ��m
Compliance: SPC-3
User capacity: 10,474,428,802,542,830,592 bytes [10474 PB]
Logical block size: 11502592 bytes
scsiModePageOffset: raw_curr too small, offset=89 resp_len=85 bd_len=85
>> Terminate command early due to bad response to IEC mode page.
A mandatory SMART command failed: exiting. To continue, add one or more '-T permissive' options.
'Can not read capabilities'

----

Second Preclear log:

Mar 11 15:57:41 preclear_disk_PCJUZH9B_24885: /usr/local/emhttp/plugins/preclear.disk/script/preclear_disk.sh: line 480: /tmp/.preclear/sdg/dd_output: No such file or directory
Mar 11 15:57:41 preclear_disk_PCJUZH9B_24885: /usr/local/emhttp/plugins/preclear.disk/script/preclear_disk.sh: line 475: /tmp/.preclear/sdg/dd_output_complete: No such file or directory
Mar 11 15:57:41 preclear_disk_PCJUZH9B_24885: /usr/local/emhttp/plugins/preclear.disk/script/preclear_disk.sh: line 475: [: -gt: unary operator expected
Mar 11 15:57:41 preclear_disk_PCJUZH9B_24885: /usr/local/emhttp/plugins/preclear.disk/script/preclear_disk.sh: line 478: /tmp/.preclear/sdg/dd_output_complete: No such file or directory
Mar 11 15:57:41 preclear_disk_PCJUZH9B_24885: /usr/local/emhttp/plugins/preclear.disk/script/preclear_disk.sh: line 475: /tmp/.preclear/sdg/dd_output_complete: No such file or directory
Mar 11 15:57:41 preclear_disk_PCJUZH9B_24885: /usr/local/emhttp/plugins/preclear.disk/script/preclear_disk.sh: line 475: [: -gt: unary operator expected
Mar 11 15:57:41 preclear_disk_PCJUZH9B_24885: /usr/local/emhttp/plugins/preclear.disk/script/preclear_disk.sh: line 478: /tmp/.preclear/sdg/dd_output_complete: No such file or directory
Mar 11 15:57:42 preclear_disk_PCJUZH9B_24885: /usr/local/emhttp/plugins/preclear.disk/script/preclear_disk.sh: line 475: /tmp/.preclear/sdg/dd_output_complete: No such file or directory
Mar 11 15:57:42 preclear_disk_PCJUZH9B_24885: /usr/local/emhttp/plugins/preclear.disk/script/preclear_disk.sh: line 475: [: -gt: unary operator expected
Mar 11 15:57:42 preclear_disk_PCJUZH9B_24885: /usr/local/emhttp/plugins/preclear.disk/script/preclear_disk.sh: line 478: /tmp/.preclear/sdg/dd_output_complete: No such file or directory
Mar 11 15:57:42 preclear_disk_PCJUZH9B_24885: /usr/local/emhttp/plugins/preclear.disk/script/preclear_disk.sh: line 475: /tmp/.preclear/sdg/dd_output_complete: No such file or directory
Mar 11 15:57:42 preclear_disk_PCJUZH9B_24885: /usr/local/emhttp/plugins/preclear.disk/script/preclear_disk.sh: line 475: [: -gt: unary operator expected
Mar 11 15:57:42 preclear_disk_PCJUZH9B_24885: /usr/local/emhttp/plugins/preclear.disk/script/preclear_disk.sh: line 478: /tmp/.preclear/sdg/dd_output_complete: No such file or directory
Mar 11 15:57:42 preclear_disk_PCJUZH9B_24885: /usr/local/emhttp/plugins/preclear.disk/script/preclear_disk.sh: line 480: /tmp/.preclear/sdg/dd_output: No such file or directory
Mar 11 15:57:42 preclear_disk_PCJUZH9B_24885: /usr/local/emhttp/plugins/preclear.disk/script/preclear_disk.sh: line 475: /tmp/.preclear/sdg/dd_output_complete: No such file or directory
Mar 11 15:57:42 preclear_disk_PCJUZH9B_24885: /usr/local/emhttp/plugins/preclear.disk/script/preclear_disk.sh: line 475: [: -gt: unary operator expected
Mar 11 15:57:42 preclear_disk_PCJUZH9B_24885: /usr/local/emhttp/plugins/preclear.disk/script/preclear_disk.sh: line 478: /tmp/.preclear/sdg/dd_output_complete: No such file or directory
Mar 11 15:57:42 preclear_disk_PCJUZH9B_24885: /usr/local/emhttp/plugins/preclear.disk/script/preclear_disk.sh: line 475: /tmp/.preclear/sdg/dd_output_complete: No such file or directory
Mar 11 15:57:42 preclear_disk_PCJUZH9B_24885: /usr/local/emhttp/plugins/preclear.disk/script/preclear_disk.sh: line 475: [: -gt: unary operator expected
Mar 11 15:57:42 preclear_disk_PCJUZH9B_24885: /usr/local/emhttp/plugins/preclear.disk/script/preclear_disk.sh: line 478: /tmp/.preclear/sdg/dd_output_complete: No such file or directory
Mar 11 15:57:42 preclear_disk_PCJUZH9B_24885: /usr/local/emhttp/plugins/preclear.disk/script/preclear_disk.sh: line 475: /tmp/.preclear/sdg/dd_output_complete: No such file or directory
Mar 11 15:57:42 preclear_disk_PCJUZH9B_24885: /usr/local/emhttp/plugins/preclear.disk/script/preclear_disk.sh: line 475: [: -gt: unary operator expected
Mar 11 15:57:42 preclear_disk_PCJUZH9B_24885: /usr/local/emhttp/plugins/preclear.disk/script/preclear_disk.sh: line 478: /tmp/.preclear/sdg/dd_output_complete: No such file or directory
Mar 11 15:57:42 preclear_disk_PCJUZH9B_24885: /usr/local/emhttp/plugins/preclear.disk/script/preclear_disk.sh: line 475: /tmp/.preclear/sdg/dd_output_complete: No such file or directory
Mar 11 15:57:42 preclear_disk_PCJUZH9B_24885: /usr/local/emhttp/plugins/preclear.disk/script/preclear_disk.sh: line 475: [: -gt: unary operator expected
Mar 11 15:57:42 preclear_disk_PCJUZH9B_24885: /usr/local/emhttp/plugins/preclear.disk/script/preclear_disk.sh: line 478: /tmp/.preclear/sdg/dd_output_complete: No such file or directory
Mar 11 15:57:42 preclear_disk_PCJUZH9B_24885: /usr/local/emhttp/plugins/preclear.disk/script/preclear_disk.sh: line 480: /tmp/.preclear/sdg/dd_output: No such file or directory
Mar 11 15:57:42 preclear_disk_PCJUZH9B_24885: /usr/local/emhttp/plugins/preclear.disk/script/preclear_disk.sh: line 475: /tmp/.preclear/sdg/dd_output_complete: No such file or directory
Mar 11 15:57:42 preclear_disk_PCJUZH9B_24885: /usr/local/emhttp/plugins/preclear.disk/script/preclear_disk.sh: line 475: [: -gt: unary operator expected
Mar 11 15:57:42 preclear_disk_PCJUZH9B_24885: /usr/local/emhttp/plugins/preclear.disk/script/preclear_disk.sh: line 478: /tmp/.preclear/sdg/dd_output_complete: No such file or directory
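As an aside, the "[: -gt: unary operator expected" spam means the script compared a value read from /tmp/.preclear/sdg/dd_output_complete after that file had already vanished, leaving the variable empty. A guarded comparison avoids the noise; a sketch (the path and variable name are only illustrative):

```shell
#!/bin/bash
# When the status file has vanished, $bytes is empty and an unquoted
# [ $bytes -gt 0 ] collapses to [ -gt 0 ] - the "unary operator expected"
# error. Quoting and defaulting the value keeps the test well-formed:
bytes=$(cat /tmp/.preclear/sdg/dd_output_complete 2>/dev/null)
if [ "${bytes:-0}" -gt 0 ]; then
    echo "dd reported progress: $bytes bytes"
else
    echo "no progress recorded"
fi
```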

---------------

 

Hope it isn't the controller or the DS4246 (both new to me), but other disks seem to be working as expected.

 

Thinking it's just bad luck timing.

 

Thoughts?

 

Edited by RealActorRob
Link to comment

UNRAID Ver: 6.9.1

PreClear ver: 2021.01.03

Server type: Supermicro CS836, LSI SAS2308 controller

 

Noticed in the fix notes for 2020.12.22:

 

Fix: script failing during drive zeroing

 

I've purchased several brand new external WD 8TB drives. Each of the drives runs through pre-reads just fine, then toward the very end of zeroing it reports a failure. I've tried different slots and the behavior is the same. I've pulled the drive out of the server and put it back into the original enclosure to run pre-clear on another system. Should I generate a full diag next time or attach the pre-clear log?

 

I am wondering if I'm hitting the bug behavior, or if a hardware issue is triggering the pre-clear bug.

 

Skip

 

edit: attached diag, but syslog looks like it is missing data

tower-diagnostics-20210316-1218.zip

Edited by Skipdog
Link to comment
On 2/5/2021 at 12:41 PM, Skxnk said:

I'm getting this warning when trying to preclear an external drive (screenshot attached)

I got this exact same sequence of error messages when attempting to preclear a drive. I have a slot in my server rack that I use specifically for pre-clearing drives. I replaced the SATA cable and put the drive into a new slot in my server rack, and the messages disappeared. The drive did finally fail during the zeroing process at around the 75% complete point. I pulled the drive and ran SeaTools on it; it failed the short drive self-test, so it turned out the drive was bad. I think this is actually unrelated to the error log you posted and feel it was more of a cable or backplane issue in my server. I would suggest replacing the cable and trying to pre-clear it again.

 

I just realized you have an external drive, but it could still be a cable issue. The vast majority of drives that I use in my unRAID server are drives shucked from external enclosures.

Edited by captain_video
Link to comment

Been running preclear on one of my 8TB disks and it appears it has run the zeroing function twice along with the pre-read function. It is currently performing the post-read segment, so I assume it should be completed by tomorrow morning.

 

Does the latest preclear plugin automatically zero the disk twice?  I don't recall changing the option when I initiated preclear.  I generally only run preclear once on any new disk since most of them are used to rebuild data on either a failed disk or one that is being replaced due to the age of the disk.

Link to comment

The preclear completed with no issues so it would appear that whatever bug was causing my problem has been corrected.  The pre- and post-read functions took about 15 hours each, but the zeroing phase took about 49 hours for an 8 TB drive.  This phase was performed twice for whatever reason even though I had selected only one cycle for the pre-clear process.

Link to comment
I'm running pre-clear on a 2nd 8TB drive and it is also performing the zeroing process twice.  Is this really necessary?  Can we get the option to just do this once?
Please send me your diagnostics file; it can be by PM.

Sent from my SM-G985F using Tapatalk

Link to comment

Any update on the preclear issue I was having?  I've run it on two 8TB drives and it ran the zeroing function twice on both drives.  Total time to run preclear on a single drive was close to 80 hours.  That includes the pre-read, zeroing x 2, and post-read sequences.  Reducing the zeroing sequence to just a single pass would reduce that time by about 24-25 hours. 

 

I just use these drives to replace failed drives, ones that are nearing end of life, or when upgrading from a smaller drive to increase storage. I don't really even need to run preclear on these drives, but I do it to ensure they're in good working order before installing them in the array. I pick them up when I see them on sale to have on hand in case of a drive failure. They're never on sale when you need one, so I just have them as insurance.

 

Link to comment

Tried to preclear a new 12TB Easystore drive via USB (like I have done with all my shuckable drives before shucking) twice now, and it fails the post-read both times about halfway through the process. I'm running badblocks on the drive now, but it appears to be the same issue previously reported. I am running the latest version of the plugin. Will post back with badblocks results in a couple of days...

Log:

Mar 30 19:09:23 Titan preclear_disk_5147485459353754[29121]: Post-Read: progress - 50% verified @ 170 MB/s
Mar 30 20:09:59 Titan preclear_disk_5147485459353754[29121]: Post-Read: cmp command failed - disk not zeroed
Mar 30 20:09:59 Titan preclear_disk_5147485459353754[29121]: Post-Read: dd - read 6607001878528 of 12000138625024 (5393136746496).
Mar 30 20:09:59 Titan preclear_disk_5147485459353754[29121]: Post-Read: elapsed time - 9:50:16
Mar 30 20:09:59 Titan preclear_disk_5147485459353754[29121]: Post-Read: dd command failed, exit code [141].
Mar 30 20:09:59 Titan preclear_disk_5147485459353754[29121]: Post-Read: dd output: 6585550110720 bytes (6.6 TB, 6.0 TiB) copied, 35273.1 s, 187 MB/s
Mar 30 20:09:59 Titan preclear_disk_5147485459353754[29121]: Post-Read: dd output: 3141291+0 records in
Mar 30 20:09:59 Titan preclear_disk_5147485459353754[29121]: Post-Read: dd output: 3141290+0 records out
Mar 30 20:09:59 Titan preclear_disk_5147485459353754[29121]: Post-Read: dd output: 6587762606080 bytes (6.6 TB, 6.0 TiB) copied, 35286.2 s, 187 MB/s
Mar 30 20:09:59 Titan preclear_disk_5147485459353754[29121]: Post-Read: dd output: 3142296+0 records in
Mar 30 20:09:59 Titan preclear_disk_5147485459353754[29121]: Post-Read: dd output: 3142295+0 records out
Mar 30 20:09:59 Titan preclear_disk_5147485459353754[29121]: Post-Read: dd output: 6589870243840 bytes (6.6 TB, 6.0 TiB) copied, 35299.3 s, 187 MB/s
Mar 30 20:09:59 Titan preclear_disk_5147485459353754[29121]: Post-Read: dd output: 3143298+0 records in
Mar 30 20:09:59 Titan preclear_disk_5147485459353754[29121]: Post-Read: dd output: 3143297+0 records out
Mar 30 20:10:00 Titan preclear_disk_5147485459353754[29121]: Post-Read: dd output: 6591971590144 bytes (6.6 TB, 6.0 TiB) copied, 35312.3 s, 187 MB/s
Mar 30 20:10:00 Titan preclear_disk_5147485459353754[29121]: Post-Read: dd output: 3144311+0 records in
Mar 30 20:10:00 Titan preclear_disk_5147485459353754[29121]: Post-Read: dd output: 3144310+0 records out
Mar 30 20:10:00 Titan preclear_disk_5147485459353754[29121]: Post-Read: dd output: 6594096005120 bytes (6.6 TB, 6.0 TiB) copied, 35325.3 s, 187 MB/s
Mar 30 20:10:00 Titan preclear_disk_5147485459353754[29121]: Post-Read: dd output: 3145369+0 records in
Mar 30 20:10:00 Titan preclear_disk_5147485459353754[29121]: Post-Read: dd output: 3145368+0 records out
Mar 30 20:10:00 Titan preclear_disk_5147485459353754[29121]: Post-Read: dd output: 6596314791936 bytes (6.6 TB, 6.0 TiB) copied, 35338.3 s, 187 MB/s
Mar 30 20:10:00 Titan preclear_disk_5147485459353754[29121]: Post-Read: dd output: 3146396+0 records in
Mar 30 20:10:00 Titan preclear_disk_5147485459353754[29121]: Post-Read: dd output: 3146395+0 records out
Mar 30 20:10:00 Titan preclear_disk_5147485459353754[29121]: Post-Read: dd output: 6598468567040 bytes (6.6 TB, 6.0 TiB) copied, 35351.3 s, 187 MB/s
Mar 30 20:10:00 Titan preclear_disk_5147485459353754[29121]: Post-Read: dd output: 3147382+0 records in
Mar 30 20:10:00 Titan preclear_disk_5147485459353754[29121]: Post-Read: dd output: 3147381+0 records out
Mar 30 20:10:00 Titan preclear_disk_5147485459353754[29121]: Post-Read: dd output: 6600536358912 bytes (6.6 TB, 6.0 TiB) copied, 35364.2 s, 187 MB/s
Mar 30 20:10:00 Titan preclear_disk_5147485459353754[29121]: Post-Read: dd output: 3148391+0 records in
Mar 30 20:10:00 Titan preclear_disk_5147485459353754[29121]: Post-Read: dd output: 3148390+0 records out
Mar 30 20:10:00 Titan preclear_disk_5147485459353754[29121]: Post-Read: dd output: 6602652385280 bytes (6.6 TB, 6.0 TiB) copied, 35377.2 s, 187 MB/s
Mar 30 20:10:00 Titan preclear_disk_5147485459353754[29121]: Post-Read: dd output: 3149405+0 records in
Mar 30 20:10:00 Titan preclear_disk_5147485459353754[29121]: Post-Read: dd output: 3149404+0 records out
Mar 30 20:10:00 Titan preclear_disk_5147485459353754[29121]: Post-Read: dd output: 6604778897408 bytes (6.6 TB, 6.0 TiB) copied, 35390.2 s, 187 MB/s
Mar 30 20:10:00 Titan preclear_disk_5147485459353754[29121]: Post-Read: dd output: 3150464+0 records in
Mar 30 20:10:00 Titan preclear_disk_5147485459353754[29121]: Post-Read: dd output: 3150463+0 records out
Mar 30 20:10:00 Titan preclear_disk_5147485459353754[29121]: Post-Read: dd output: 6606999781376 bytes (6.6 TB, 6.0 TiB) copied, 35403.3 s, 187 MB/s
Mar 30 20:10:00 Titan preclear_disk_5147485459353754[29121]: Post-Read: post-read verification failed!
Mar 30 20:10:04 Titan preclear_disk_5147485459353754[29121]: S.M.A.R.T.: Error:
Mar 30 20:10:04 Titan preclear_disk_5147485459353754[29121]: S.M.A.R.T.:
Mar 30 20:10:04 Titan preclear_disk_5147485459353754[29121]: S.M.A.R.T.: ATTRIBUTE                INITIAL  NOW  STATUS
Mar 30 20:10:04 Titan preclear_disk_5147485459353754[29121]: S.M.A.R.T.: Reallocated_Sector_Ct    0        0    -
Mar 30 20:10:04 Titan preclear_disk_5147485459353754[29121]: S.M.A.R.T.: Power_On_Hours           76       128  Up 52
Mar 30 20:10:04 Titan preclear_disk_5147485459353754[29121]: S.M.A.R.T.: Temperature_Celsius      33       36   Up 3
Mar 30 20:10:04 Titan preclear_disk_5147485459353754[29121]: S.M.A.R.T.: Reallocated_Event_Count  0        0    -
Mar 30 20:10:04 Titan preclear_disk_5147485459353754[29121]: S.M.A.R.T.: Current_Pending_Sector   0        0    -
Mar 30 20:10:04 Titan preclear_disk_5147485459353754[29121]: S.M.A.R.T.: Offline_Uncorrectable    0        0    -
Mar 30 20:10:04 Titan preclear_disk_5147485459353754[29121]: S.M.A.R.T.: UDMA_CRC_Error_Count     0        0    -
Mar 30 20:10:04 Titan preclear_disk_5147485459353754[29121]: S.M.A.R.T.: SMART overall-health self-assessment test result: PASSED
Mar 30 20:10:04 Titan preclear_disk_5147485459353754[29121]: error encountered, exiting...

 

Edit: I'm guessing the issue is with my unRAID server.  I moved this drive to my other server and it is now more than 70% complete with the post-read process.  Need to track down my issues with my new build... Grrr


 Hey all,

 

In an almost identical case to TexasDaddy's above, I too had a seemingly unexplained post-read error around 30 GB in, when trying to preclear a Seagate Expansion 5TB (2.5", ST5000LM000) drive over USB.

 

Unraid Version 6.9.1

Preclear 2021.03.18

 

Log:

Apr 05 12:21:54 preclear_disk_WCJ2NAHF_15509: Command: /usr/local/emhttp/plugins/preclear.disk/script/preclear_disk.sh --notify 1 --frequency 2 --cycles 1 --no-prompt /dev/sdd
Apr 05 12:21:54 preclear_disk_WCJ2NAHF_15509: Preclear Disk Version: 1.0.21
Apr 05 12:21:55 preclear_disk_WCJ2NAHF_15509: S.M.A.R.T. info type: default
Apr 05 12:21:55 preclear_disk_WCJ2NAHF_15509: S.M.A.R.T. attrs type: default
Apr 05 12:21:55 preclear_disk_WCJ2NAHF_15509: Disk size: 5000981077504
Apr 05 12:21:55 preclear_disk_WCJ2NAHF_15509: Disk blocks: 1220942645
Apr 05 12:21:55 preclear_disk_WCJ2NAHF_15509: Blocks (512 bytes): 9767541167
Apr 05 12:21:55 preclear_disk_WCJ2NAHF_15509: Block size: 4096
Apr 05 12:21:55 preclear_disk_WCJ2NAHF_15509: Start sector: 309
Apr 05 12:21:58 preclear_disk_WCJ2NAHF_15509: Pre-read: pre-read verification started (1/5)....
Apr 05 12:21:58 preclear_disk_WCJ2NAHF_15509: Pre-Read: dd if=/dev/sdd of=/dev/null bs=2097152 skip=0 count=5000981077504 conv=noerror iflag=nocache,count_bytes,skip_bytes
Apr 05 13:26:27 preclear_disk_WCJ2NAHF_15509: Pre-Read: progress - 10% read @ 135 MB/s
Apr 05 14:32:23 preclear_disk_WCJ2NAHF_15509: Pre-Read: progress - 20% read @ 128 MB/s
Apr 05 15:40:58 preclear_disk_WCJ2NAHF_15509: Pre-Read: progress - 30% read @ 111 MB/s
Apr 05 16:52:53 preclear_disk_WCJ2NAHF_15509: Pre-Read: progress - 40% read @ 113 MB/s
Apr 05 18:08:48 preclear_disk_WCJ2NAHF_15509: Pre-Read: progress - 50% read @ 104 MB/s
Apr 05 19:30:03 preclear_disk_WCJ2NAHF_15509: Pre-Read: progress - 60% read @ 90 MB/s
Apr 05 20:58:06 preclear_disk_WCJ2NAHF_15509: Pre-Read: progress - 70% read @ 94 MB/s
Apr 05 22:35:02 preclear_disk_WCJ2NAHF_15509: Pre-Read: progress - 80% read @ 80 MB/s
Apr 06 00:24:46 preclear_disk_WCJ2NAHF_15509: Pre-Read: progress - 90% read @ 67 MB/s
Apr 06 02:31:39 preclear_disk_WCJ2NAHF_15509: Pre-Read: dd - read 5000981077504 of 5000981077504 (0).
Apr 06 02:31:39 preclear_disk_WCJ2NAHF_15509: Pre-Read: elapsed time - 14:09:37
Apr 06 02:31:39 preclear_disk_WCJ2NAHF_15509: Pre-Read: dd exit code - 0
Apr 06 02:31:39 preclear_disk_WCJ2NAHF_15509: Pre-read: pre-read verification completed!
Apr 06 02:31:39 preclear_disk_WCJ2NAHF_15509: Zeroing: zeroing the disk started (1/5)....
Apr 06 02:31:39 preclear_disk_WCJ2NAHF_15509: Zeroing: emptying the MBR.
Apr 06 02:31:39 preclear_disk_WCJ2NAHF_15509: Zeroing: dd if=/dev/zero of=/dev/sdd bs=2097152 seek=2097152 count=5000978980352 conv=notrunc iflag=count_bytes,nocache,fullblock oflag=seek_bytes
Apr 06 02:31:39 preclear_disk_WCJ2NAHF_15509: Zeroing: dd pid [20883]
Apr 06 03:32:52 preclear_disk_WCJ2NAHF_15509: Zeroing: progress - 10% zeroed @ 138 MB/s
Apr 06 04:35:49 preclear_disk_WCJ2NAHF_15509: Zeroing: progress - 20% zeroed @ 136 MB/s
Apr 06 05:40:27 preclear_disk_WCJ2NAHF_15509: Zeroing: progress - 30% zeroed @ 131 MB/s
Apr 06 06:47:22 preclear_disk_WCJ2NAHF_15509: Zeroing: progress - 40% zeroed @ 113 MB/s
Apr 06 07:57:50 preclear_disk_WCJ2NAHF_15509: Zeroing: progress - 50% zeroed @ 113 MB/s
Apr 06 09:12:35 preclear_disk_WCJ2NAHF_15509: Zeroing: progress - 60% zeroed @ 114 MB/s
Apr 06 10:32:40 preclear_disk_WCJ2NAHF_15509: Zeroing: progress - 70% zeroed @ 101 MB/s
Apr 06 11:59:51 preclear_disk_WCJ2NAHF_15509: Zeroing: progress - 80% zeroed @ 93 MB/s
Apr 06 13:38:36 preclear_disk_WCJ2NAHF_15509: Zeroing: progress - 90% zeroed @ 77 MB/s
Apr 06 15:35:10 preclear_disk_WCJ2NAHF_15509: Zeroing: dd - wrote 5000981077504 of 5000981077504 (0).
Apr 06 15:35:10 preclear_disk_WCJ2NAHF_15509: Zeroing: elapsed time - 13:03:28
Apr 06 15:35:10 preclear_disk_WCJ2NAHF_15509: Zeroing: dd exit code - 0
Apr 06 15:35:10 preclear_disk_WCJ2NAHF_15509: Zeroing: zeroing the disk completed!
Apr 06 15:35:11 preclear_disk_WCJ2NAHF_15509: Signature: writing signature:    0   0   2   0   0 255 255 255   1   0   0   0 255 255 255 255
Apr 06 15:35:11 preclear_disk_WCJ2NAHF_15509: Signature: verifying unRAID's signature on the MBR ...
Apr 06 15:35:11 preclear_disk_WCJ2NAHF_15509: Signature: Unraid preclear signature is valid!
Apr 06 15:35:12 preclear_disk_WCJ2NAHF_15509: Post-Read: post-read verification started (1/5)....
Apr 06 15:35:12 preclear_disk_WCJ2NAHF_15509: Post-Read: verifying the beginning of the disk.
Apr 06 15:35:12 preclear_disk_WCJ2NAHF_15509: Post-Read: cmp /tmp/.preclear/sdd/fifo /dev/zero
Apr 06 15:35:12 preclear_disk_WCJ2NAHF_15509: Post-Read: dd if=/dev/sdd of=/tmp/.preclear/sdd/fifo count=2096640 skip=512 iflag=nocache,count_bytes,skip_bytes
Apr 06 15:35:13 preclear_disk_WCJ2NAHF_15509: Post-Read: verifying the rest of the disk.
Apr 06 15:35:13 preclear_disk_WCJ2NAHF_15509: Post-Read: cmp /tmp/.preclear/sdd/fifo /dev/zero
Apr 06 15:35:13 preclear_disk_WCJ2NAHF_15509: Post-Read: dd if=/dev/sdd of=/tmp/.preclear/sdd/fifo bs=2097152 skip=2097152 count=5000978980352 iflag=nocache,count_bytes,skip_bytes
Apr 06 15:38:42 preclear_disk_WCJ2NAHF_15509: Post-Read: cmp command failed - disk not zeroed
Apr 06 15:38:42 preclear_disk_WCJ2NAHF_15509: Post-Read: dd - read 26589790208 of 5000981077504 (4974391287296).
Apr 06 15:38:42 preclear_disk_WCJ2NAHF_15509: Post-Read: elapsed time - 0:03:27
Apr 06 15:38:42 preclear_disk_WCJ2NAHF_15509: Post-Read: dd command failed, exit code [141].
Apr 06 15:38:42 preclear_disk_WCJ2NAHF_15509: Post-Read: dd output: 5770+0 records in
Apr 06 15:38:42 preclear_disk_WCJ2NAHF_15509: Post-Read: dd output: 5769+0 records out
Apr 06 15:38:42 preclear_disk_WCJ2NAHF_15509: Post-Read: dd output: 12098469888 bytes (12 GB, 11 GiB) copied, 91.5091 s, 132 MB/s
Apr 06 15:38:42 preclear_disk_WCJ2NAHF_15509: Post-Read: dd output: 6515+0 records in
Apr 06 15:38:42 preclear_disk_WCJ2NAHF_15509: Post-Read: dd output: 6514+0 records out
Apr 06 15:38:42 preclear_disk_WCJ2NAHF_15509: Post-Read: dd output: 13660848128 bytes (14 GB, 13 GiB) copied, 104.411 s, 131 MB/s
Apr 06 15:38:42 preclear_disk_WCJ2NAHF_15509: Post-Read: dd output: 7253+0 records in
Apr 06 15:38:42 preclear_disk_WCJ2NAHF_15509: Post-Read: dd output: 7252+0 records out
Apr 06 15:38:42 preclear_disk_WCJ2NAHF_15509: Post-Read: dd output: 15208546304 bytes (15 GB, 14 GiB) copied, 117.321 s, 130 MB/s
Apr 06 15:38:42 preclear_disk_WCJ2NAHF_15509: Post-Read: dd output: 7923+0 records in
Apr 06 15:38:42 preclear_disk_WCJ2NAHF_15509: Post-Read: dd output: 7922+0 records out
Apr 06 15:38:42 preclear_disk_WCJ2NAHF_15509: Post-Read: dd output: 16613638144 bytes (17 GB, 15 GiB) copied, 129.003 s, 129 MB/s
Apr 06 15:38:42 preclear_disk_WCJ2NAHF_15509: Post-Read: dd output: 8620+0 records in
Apr 06 15:38:42 preclear_disk_WCJ2NAHF_15509: Post-Read: dd output: 8619+0 records out
Apr 06 15:38:42 preclear_disk_WCJ2NAHF_15509: Post-Read: dd output: 18075353088 bytes (18 GB, 17 GiB) copied, 140.677 s, 128 MB/s
Apr 06 15:38:42 preclear_disk_WCJ2NAHF_15509: Post-Read: dd output: 9468+0 records in
Apr 06 15:38:42 preclear_disk_WCJ2NAHF_15509: Post-Read: dd output: 9467+0 records out
Apr 06 15:38:42 preclear_disk_WCJ2NAHF_15509: Post-Read: dd output: 19853737984 bytes (20 GB, 18 GiB) copied, 153.488 s, 129 MB/s
Apr 06 15:38:42 preclear_disk_WCJ2NAHF_15509: Post-Read: dd output: 10312+0 records in
Apr 06 15:38:42 preclear_disk_WCJ2NAHF_15509: Post-Read: dd output: 10311+0 records out
Apr 06 15:38:42 preclear_disk_WCJ2NAHF_15509: Post-Read: dd output: 21623734272 bytes (22 GB, 20 GiB) copied, 166.229 s, 130 MB/s
Apr 06 15:38:42 preclear_disk_WCJ2NAHF_15509: Post-Read: dd output: 11085+0 records in
Apr 06 15:38:42 preclear_disk_WCJ2NAHF_15509: Post-Read: dd output: 11084+0 records out
Apr 06 15:38:42 preclear_disk_WCJ2NAHF_15509: Post-Read: dd output: 23244832768 bytes (23 GB, 22 GiB) copied, 177.966 s, 131 MB/s
Apr 06 15:38:42 preclear_disk_WCJ2NAHF_15509: Post-Read: dd output: 11852+0 records in
Apr 06 15:38:42 preclear_disk_WCJ2NAHF_15509: Post-Read: dd output: 11851+0 records out
Apr 06 15:38:42 preclear_disk_WCJ2NAHF_15509: Post-Read: dd output: 24853348352 bytes (25 GB, 23 GiB) copied, 189.788 s, 131 MB/s
Apr 06 15:38:42 preclear_disk_WCJ2NAHF_15509: Post-Read: dd output: 12679+0 records in
Apr 06 15:38:42 preclear_disk_WCJ2NAHF_15509: Post-Read: dd output: 12678+0 records out
Apr 06 15:38:42 preclear_disk_WCJ2NAHF_15509: Post-Read: dd output: 26587693056 bytes (27 GB, 25 GiB) copied, 202.644 s, 131 MB/s
Apr 06 15:38:42 preclear_disk_WCJ2NAHF_15509: Post-Read: post-read verification failed!
Apr 06 15:38:44 preclear_disk_WCJ2NAHF_15509: S.M.A.R.T.: Error:
Apr 06 15:38:44 preclear_disk_WCJ2NAHF_15509: S.M.A.R.T.:
Apr 06 15:38:44 preclear_disk_WCJ2NAHF_15509: S.M.A.R.T.: ATTRIBUTE                INITIAL  NOW  STATUS
Apr 06 15:38:44 preclear_disk_WCJ2NAHF_15509: S.M.A.R.T.: Reallocated_Sector_Ct    0        0    -
Apr 06 15:38:44 preclear_disk_WCJ2NAHF_15509: S.M.A.R.T.: Power_On_Hours           0        27   Up 27
Apr 06 15:38:44 preclear_disk_WCJ2NAHF_15509: S.M.A.R.T.: SATA_Downshift_Count     0        0    -
Apr 06 15:38:44 preclear_disk_WCJ2NAHF_15509: S.M.A.R.T.: End-to-End_Error         0        0    -
Apr 06 15:38:44 preclear_disk_WCJ2NAHF_15509: S.M.A.R.T.: Reported_Uncorrect       0        0    -
Apr 06 15:38:44 preclear_disk_WCJ2NAHF_15509: S.M.A.R.T.: Airflow_Temperature_Cel  29       51   Up 22
Apr 06 15:38:44 preclear_disk_WCJ2NAHF_15509: S.M.A.R.T.: Current_Pending_Sector   0        0    -
Apr 06 15:38:44 preclear_disk_WCJ2NAHF_15509: S.M.A.R.T.: Offline_Uncorrectable    0        0    -
Apr 06 15:38:44 preclear_disk_WCJ2NAHF_15509: S.M.A.R.T.: UDMA_CRC_Error_Count     0        0    -
Apr 06 15:38:44 preclear_disk_WCJ2NAHF_15509: S.M.A.R.T.: SMART overall-health self-assessment test result: PASSED
Apr 06 15:38:44 preclear_disk_WCJ2NAHF_15509: error encountered, exiting...
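
For anyone puzzling over the dd exit code [141] in these logs: the post-read pipes the disk through a FIFO into `cmp ... /dev/zero` (visible a few lines up in the log), and 141 is 128 + 13, i.e. dd was killed by SIGPIPE — cmp exits as soon as it sees a non-zero byte, and dd's next write into the now-closed pipe kills it. A quick sketch of the mechanism (nothing here touches a real disk):

```shell
#!/bin/bash
# 141 = 128 + SIGPIPE(13): a writer killed because its reader exited early.

# A writer that never stops, piped into a reader that quits after one line:
yes | head -n 1 > /dev/null
echo "writer exit code: ${PIPESTATUS[0]}"   # 141 on Linux

# The same early-exit behaviour from cmp: it stops at the first mismatch,
# so a single non-zero byte is enough to make it bail out with status 1.
printf '\0\0\0X\0\0' | cmp - /dev/zero > /dev/null 2>&1
echo "cmp exit code: $?"                    # 1 = data is not all zeros
```

So the 141 itself is a consequence of the "cmp command failed - disk not zeroed" mismatch, not a separate disk error.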

 

There's a *small* chance that plugging in another external SSD to format last night caused some sort of weird bug (the preclear was in progress at the time). As an alternative test I've restarted Unraid, moved the USB to a rear I/O shield port (rather than a front connector), and restarted the test.

 

One interesting thing I've noticed is that the drive is showing ZERO writes on the "Main" tab:

[Screenshot: Main tab showing zero writes for the drive]

Is this normal? I would have thought writing all the zeros would constitute writes, unless it's something to do with how Unassigned Devices operates (i.e. technically I haven't written to the array).
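
As a sanity check, and independently of whatever counters the GUI reads (I'm assuming, not asserting, that it derives them from the kernel's per-device stats): the kernel tracks per-device I/O in `/proc/diskstats`, so you can confirm the drive really is receiving writes regardless of what the Main tab shows. `sdd` below is a placeholder device name:

```shell
# In /proc/diskstats, the 3rd field is the device name and the 10th
# is the cumulative count of sectors written (512-byte units).
awk '$3 == "sdd" { printf "%.1f GiB written\n", $10 * 512 / 2^30 }' /proc/diskstats
```

If that number is climbing during the zeroing phase, the writes are happening and the GUI display is a cosmetic quirk.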

 

 

preclear_disk_WCJ2NAHF_15509.txt

6 minutes ago, trurl said:

 

Have you done memtest? 

 

Hey trurl - not yet, but I can't say I've had this issue before (or stability issues)... The RAM is Corsair LPX 3000 running its specified XMP profile (i.e. 3000 MHz @ 15-17-17-35). It's also supported by the motherboard I'm running (ASRock Fatal1ty AB350 Gaming-ITX/ac with an R5 1600X @ stock settings).

 

If it fails this time, I'll try dialling back the timings/clocks to 2400 MHz...

 

The CPU is running cool for an ITX system (55-60°C / 130-140°F), and the drive in the USB enclosure is sitting around 40-45°C (104-113°F)... so nothing seems too abnormal in that regard.

 

I wonder if it's something to do with virtualisation? I don't have any VMs currently running, but I wonder if it's related to the VM configuration and the USB controller (IOMMU groups & the PCIe ACS override...). If it fails this round, I'll try setting those back to stock...

7 hours ago, trurl said:

 

Have you done memtest? 

 

I've had system instability recently, and someone else mentioned they thought it looked like a memory issue from my diags. I pulled 2 sticks from my system, and so far everything has been rock solid.

 

I've got the 2 sticks I pulled being tested individually on my buddy's bench rig; he recently completed a full burn-in to validate all the hardware but has not found any issues with my memory so far. He is going to run more extensive testing and will get back to me. If it all tests out OK, I'm going to reach out to ASRock Rack support to see if there is anything they would like to investigate. The only thing that comes to mind is that the memory is 3200 MHz, and the board slows it down to 2666 MHz when 4 dual-rank DIMMs are populated. Perhaps there is a timing issue with the board.

 

I finished my preclears without issue on my backup server, so I will need to get a new external drive to test whether pulling the memory lets my primary server complete the post-read process.

53 minutes ago, TexasDaddy said:

I've got the 2 sticks I pulled being tested individually on my buddy's bench rig

Testing RAM in another machine won't diagnose problems with your RAM slots or other things that may be related to how that RAM actually works on your machine.

