Preclear plugin



Thanks again, JorgeB. That's really helpful.

 

OK, I've now set the Maxtor drive's file system to btrfs and successfully added it to the restarted array.

 

More confusion here, though: under Main, the drive's file system is reported as btrfs, but the drive is also flagged as "unmountable, no file system".

 

Apologies for the repeated requests for handholding. The UnRAID WebGUI is magnificently detailed and the clarity of the physical layout is exemplary (QNAP and Synology please note). But the logic is occasionally defeating me.

 

-- 

Chris

[Attached image: btrfs but unformatted.png]

On 11/16/2020 at 10:03 AM, gfjardim said:

Have you updated the plugin recently?

 

 

If you can, please send me a PM with your Diagnostics file.

 

 

I will reboot the system on Friday to add an HBA card. Will collect and send you the Diagnostics if it disappears again after the reboot.

 

Thank you!

4 hours ago, bidmead said:

If I'm going to add the drive to the array (which is my intention) I don't need to format it at this point because the act of adding it will present the formatting opportunity.

Just thought I would comment on this to add something for anyone that might read it.

 

If you format a clear disk before ADDING it to a NEW data slot, Unraid will have to clear it again so parity is maintained, since a formatted disk isn't clear.


Got it. Excellent point, trurl. Thanks.

 

Formatting a drive entails laying data down on it and if UnRAID sees data on a new drive (as I understand it) it will decide it needs clearing when that drive asks to join the array.

 

But if the formatting of a precleared drive is left to UnRAID, only those bits on the parity drive corresponding to the formatting it lays down will need to be checked and/or flipped. And UnRAID will be taking care of the parity in parallel with the formatting operation. Is that about right?

 

If so, only the parity drive and the newly added drive need to be spinning, as UnRAID already knows the parity status of all the other drives in the array.
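The reasoning above can be sketched with a toy model of single parity, which in Unraid is a bytewise XOR across the data drives. (The three-byte "drives" and their values here are made up purely for illustration.)

```python
from functools import reduce

def parity(drives):
    """XOR the corresponding bytes of every data drive."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*drives))

d1 = bytes([0x0F, 0xA0, 0x33])
d2 = bytes([0xF0, 0x0A, 0x11])
p = parity([d1, d2])

# Adding a precleared (all-zero) drive leaves parity untouched: x ^ 0 == x,
# so no other data drive needs to be read or spun up.
cleared = bytes(3)
assert parity([d1, d2, cleared]) == p

# Formatting then writes real data. Only the new drive and parity change:
# new_parity = old_parity ^ old_byte ^ new_byte, and old_byte is 0.
fmt = bytes([0xEB, 0x3C, 0x90])  # made-up filesystem-header bytes
new_p = bytes(pb ^ 0 ^ nb for pb, nb in zip(p, fmt))
assert new_p == parity([d1, d2, fmt])
```

The second assertion is the key point: updating parity for the format needs only the old parity byte and the new data byte, which is why just those two drives have to spin.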

 

Bottom line: PreClear, add the drive to the array, then format the drive. This is important, as:

 

1. When UnRAID is clearing a drive the array has to be down and can do no work.

2. Today's multi-terabyte drives can take 24 hours or longer to be cleared.

 

(Sanity check invited. I'm very new to all this.)

 

-- 

Chris

17 minutes ago, bidmead said:

1. When UnRAID is clearing a drive the array has to be down and can do no work.

No, Unraid will clear a drive with the array running (and has done for some time).

 

Certainly with the last bunch of drives I've added, I've installed them straight away and let Unraid clear them. Others prefer to soak-test them with a couple of preclear cycles first, which is understandable.


Just had preclear fail during the post-read. Looking at the logs, though, it appears the drive might have disconnected momentarily and that caused the error. It's in a USB dock, so a disconnection is possible over a 72-hour period.

Nov 23 08:22:30 NAS kernel: blk_update_request: I/O error, dev sdu, sector 4073318255 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 3
Nov 23 08:22:30 NAS kernel: blk_update_request: I/O error, dev sdu, sector 27344764927 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 3
Nov 23 08:22:30 NAS kernel: blk_update_request: I/O error, dev sdu, sector 26952798208 op 0x0:(READ) flags 0x80700 phys_seg 256 prio class 0
Nov 23 08:22:30 NAS kernel: blk_update_request: I/O error, dev sdu, sector 26952796168 op 0x0:(READ) flags 0x80700 phys_seg 255 prio class 0
Nov 23 08:22:30 NAS kernel: blk_update_request: I/O error, dev sdu, sector 141407706 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 3
Nov 23 08:22:30 NAS kernel: blk_update_request: I/O error, dev sdu, sector 26952796168 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
Nov 23 08:22:30 NAS kernel: Buffer I/O error on dev sdu, logical block 3369099521, async page read
Nov 23 08:22:30 NAS kernel: blk_update_request: I/O error, dev sdu, sector 26952796168 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 0
Nov 23 08:22:30 NAS kernel: Buffer I/O error on dev sdu, logical block 3369099521, async page read
Nov 23 08:22:30 NAS kernel: sd 9:0:0:0: [sdu] Synchronizing SCSI cache
Nov 23 08:22:30 NAS kernel: sd 9:0:0:0: [sdu] Synchronize Cache(10) failed: Result: hostbyte=0x01 driverbyte=0x00
Nov 23 08:22:30 NAS rc.diskinfo[13766]: SIGHUP received, forcing refresh of disks info.
Nov 23 08:22:30 NAS kernel: usb 4-1: new SuperSpeed Gen 1 USB device number 4 using xhci_hcd
Nov 23 08:22:31 NAS kernel: usb-storage 4-1:1.0: USB Mass Storage device detected
Nov 23 08:22:31 NAS kernel: usb-storage 4-1:1.0: Quirks match for vid 174c pid 55aa: 400000
Nov 23 08:22:31 NAS kernel: scsi host9: usb-storage 4-1:1.0
Nov 23 08:22:32 NAS kernel: scsi 9:0:0:0: Direct-Access     WDC WD14 0EDFZ-11A0VA0    0    PQ: 0 ANSI: 6
Nov 23 08:22:32 NAS kernel: sd 9:0:0:0: Attached scsi generic sg20 type 0
Nov 23 08:22:32 NAS kernel: sd 9:0:0:0: [sdu] 27344764928 512-byte logical blocks: (14.0 TB/12.7 TiB)
Nov 23 08:22:32 NAS kernel: sd 9:0:0:0: [sdu] 4096-byte physical blocks
Nov 23 08:22:32 NAS kernel: sd 9:0:0:0: [sdu] Write Protect is off
Nov 23 08:22:32 NAS kernel: sd 9:0:0:0: [sdu] Mode Sense: 43 00 00 00
Nov 23 08:22:32 NAS kernel: sd 9:0:0:0: [sdu] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Nov 23 08:22:32 NAS kernel: sdu: sdu1
Nov 23 08:22:32 NAS kernel: sd 9:0:0:0: [sdu] Attached SCSI disk
Nov 23 08:22:33 NAS preclear_disk_9LG7XPWA[32066]: Post-Read: dd - read 13799831638016 of 14000519643136.
Nov 23 08:22:33 NAS unassigned.devices: Disk with serial 'WDC_WD140EDFZ-11A0VA0_9LG7XPWA', mountpoint 'WDC_WD140EDFZ-11A0VA0_9LG7XPWA' is not set to auto mount and will not be mounted.
Nov 23 08:22:33 NAS unassigned.devices: Issue spin down timer for device '/dev/sdu'.
Nov 23 08:22:33 NAS preclear_disk_9LG7XPWA[32066]: Post-Read: elapsed time - 23:58:53
Nov 23 08:22:33 NAS preclear_disk_9LG7XPWA[32066]: Post-Read: dd command failed, exit code [1].
Nov 23 08:22:33 NAS preclear_disk_9LG7XPWA[32066]: Post-Read: dd output: 13792360529920 bytes (14 TB, 13 TiB) copied, 86255.3 s, 160 MB/s
Nov 23 08:22:34 NAS preclear_disk_9LG7XPWA[32066]: Post-Read: dd output: 6577324+0 records in
Nov 23 08:22:34 NAS preclear_disk_9LG7XPWA[32066]: Post-Read: dd output: 6577323+0 records out
Nov 23 08:22:34 NAS preclear_disk_9LG7XPWA[32066]: Post-Read: dd output: 13793646084096 bytes (14 TB, 13 TiB) copied, 86268.2 s, 160 MB/s
Nov 23 08:22:34 NAS preclear_disk_9LG7XPWA[32066]: Post-Read: dd output: 6578006+0 records in
Nov 23 08:22:34 NAS preclear_disk_9LG7XPWA[32066]: Post-Read: dd output: 6578005+0 records out
Nov 23 08:22:34 NAS preclear_disk_9LG7XPWA[32066]: Post-Read: dd output: 13795076341760 bytes (14 TB, 13 TiB) copied, 86282.6 s, 160 MB/s
Nov 23 08:22:34 NAS preclear_disk_9LG7XPWA[32066]: Post-Read: dd output: 6578622+0 records in
Nov 23 08:22:34 NAS preclear_disk_9LG7XPWA[32066]: Post-Read: dd output: 6578621+0 records out
Nov 23 08:22:34 NAS preclear_disk_9LG7XPWA[32066]: Post-Read: dd output: 13796368187392 bytes (14 TB, 13 TiB) copied, 86295.3 s, 160 MB/s
Nov 23 08:22:34 NAS preclear_disk_9LG7XPWA[32066]: Post-Read: dd output: 6579313+0 records in
Nov 23 08:22:34 NAS preclear_disk_9LG7XPWA[32066]: Post-Read: dd output: 6579312+0 records out
Nov 23 08:22:34 NAS preclear_disk_9LG7XPWA[32066]: Post-Read: dd output: 13797817319424 bytes (14 TB, 13 TiB) copied, 86309.4 s, 160 MB/s
Nov 23 08:22:34 NAS preclear_disk_9LG7XPWA[32066]: Post-Read: dd output: 6579966+0 records in
Nov 23 08:22:34 NAS preclear_disk_9LG7XPWA[32066]: Post-Read: dd output: 6579965+0 records out
Nov 23 08:22:34 NAS preclear_disk_9LG7XPWA[32066]: Post-Read: dd output: 13799186759680 bytes (14 TB, 13 TiB) copied, 86323.5 s, 160 MB/s
Nov 23 08:22:34 NAS preclear_disk_9LG7XPWA[32066]: Post-Read: dd output: dd: error reading '/dev/sdu': Input/output error
Nov 23 08:22:34 NAS preclear_disk_9LG7XPWA[32066]: Post-Read: dd output: 6580271+1 records in
Nov 23 08:22:34 NAS preclear_disk_9LG7XPWA[32066]: Post-Read: dd output: 6580271+1 records out
Nov 23 08:22:34 NAS preclear_disk_9LG7XPWA[32066]: Post-Read: dd output: 13799829540864 bytes (14 TB, 13 TiB) copied, 86330.4 s, 160 MB/s
Nov 23 08:22:41 NAS preclear_disk_9LG7XPWA[32066]: S.M.A.R.T.: 5    Reallocated_Sector_Ct    0
Nov 23 08:22:41 NAS preclear_disk_9LG7XPWA[32066]: S.M.A.R.T.: 9    Power_On_Hours           70
Nov 23 08:22:41 NAS preclear_disk_9LG7XPWA[32066]: S.M.A.R.T.: 194  Temperature_Celsius      33
Nov 23 08:22:41 NAS preclear_disk_9LG7XPWA[32066]: S.M.A.R.T.: 196  Reallocated_Event_Count  0
Nov 23 08:22:41 NAS preclear_disk_9LG7XPWA[32066]: S.M.A.R.T.: 197  Current_Pending_Sector   0
Nov 23 08:22:42 NAS preclear_disk_9LG7XPWA[32066]: S.M.A.R.T.: 198  Offline_Uncorrectable    0
Nov 23 08:22:42 NAS preclear_disk_9LG7XPWA[32066]: S.M.A.R.T.: 199  UDMA_CRC_Error_Count     0
Nov 23 08:22:42 NAS preclear_disk_9LG7XPWA[32066]: error encountered, exiting...

 

SMART is still fine with no signs of issues, except throughput performance is a bit lower than my other drives (114-116 mostly, but they are 12TB vs 14TB).

ID	Attribute	Flag	Value	Worst	Thresh	Type	Updated	When failed	Raw value
1	Raw read error rate	0x000b	100	100	001	Pre-fail	Always	Never	0
2	Throughput performance	0x0004	135	135	054	Old age	Offline	Never	108
3	Spin up time	0x0007	093	093	001	Pre-fail	Always	Never	315
4	Start stop count	0x0012	100	100	000	Old age	Always	Never	5
5	Reallocated sector count	0x0033	100	100	001	Pre-fail	Always	Never	0
7	Seek error rate	0x000a	100	100	001	Old age	Always	Never	0
8	Seek time performance	0x0004	133	133	020	Old age	Offline	Never	18
9	Power on hours	0x0012	100	100	000	Old age	Always	Never	70 (2d, 22h)
10	Spin retry count	0x0012	100	100	001	Old age	Always	Never	0
12	Power cycle count	0x0032	100	100	000	Old age	Always	Never	5
22	Unknown attribute	0x0023	100	100	025	Pre-fail	Always	Never	100
192	Power-off retract count	0x0032	100	100	000	Old age	Always	Never	6
193	Load cycle count	0x0012	100	100	000	Old age	Always	Never	6
194	Temperature celsius	0x0002	051	051	000	Old age	Always	Never	32 (min/max 22/37)
196	Reallocated event count	0x0032	100	100	000	Old age	Always	Never	0
197	Current pending sector	0x0022	100	100	000	Old age	Always	Never	0
198	Offline uncorrectable	0x0008	100	100	000	Old age	Offline	Never	0
199	UDMA CRC error count	0x000a	100	100	000	Old age	Always	Never	0

Would it detect the drive dropping out as a non-zero event? The message said the drive is not zeroed.

 

Trying to decide if I should return it or give it another post-read.


Small bug report: I noticed that the preclear button is no longer available next to the drive in UD after the failed post-read.

 

Went to the plugin screen and decided to run a test: I started another verification run, then manually turned the USB dock off for a second and back on. It failed in the same way, saying the drive is not zeroed, with the same basic errors, just fewer of them.

 

So I think I'm going to put it down to a USB dock error and assume the drive and system are fine, unless someone can point out a strong reason to think otherwise?


In 6.9, I did a preclear on a disk using the Joe L. script, skipping pre-read and selecting fast post-read. It finished and I got the notification email, so I added it to the array (new slot), and Unraid is clearing the disk again ("Clearing in progress"). Is that normal? Or did it happen because I didn't click the red "X" button after the preclear? I thought preclearing's purpose was to avoid having to clear the disk when adding it to the array.

6 hours ago, Tomr said:

In 6.9, I did a preclear on a disk using the Joe L. script, skipping pre-read and selecting fast post-read. It finished and I got the notification email, so I added it to the array (new slot), and Unraid is clearing the disk again ("Clearing in progress"). Is that normal? Or did it happen because I didn't click the red "X" button after the preclear? I thought preclearing's purpose was to avoid having to clear the disk when adding it to the array.

Sounds like something went wrong with the preclear. Clicking the X should have no impact. Once the preclear is done, even before the post-read, there would be a signature written to the drive that Unraid picks up on when you later add it to the array so that it doesn't have to clear it again.

I've never heard of this happening.

On 11/17/2020 at 8:58 AM, trurl said:

Just thought I would comment on this to add something for anyone that might read it.

 

If you format a clear disk before ADDING it to a NEW data slot, Unraid will have to clear it again so parity is maintained, since a formatted disk isn't clear.

I wish I had noticed this post earlier, lol. Thank you for pointing this out. I had the same problem as the previous poster and that would have saved me some time on a 12TB drive.

3 hours ago, DougCube said:

Sounds like something went wrong with the preclear. Clicking the X should have no impact. Once the preclear is done, even before the post-read, there would be a signature written to the drive that Unraid picks up on when you later add it to the array so that it doesn't have to clear it again.

I've never heard of this happening.

Everything went fine; I received an email saying so, and I also checked the logs.

 

Quote

== invoked as: /usr/local/sbin/preclear_disk_ori.sh -M 1 -o 2 -c 1 -W -f -J /dev/sdf

== WDCWD120EDAZ-11F3RA0 XXXXXXXX

== Disk /dev/sdf has been successfully precleared

== with a starting sector of 64

== Ran 1 cycle

==

== Last Cycle`s Zeroing time : 20:27:01 (162 MB/s)

== Last Cycle`s Total Time : 43:37:02

==

== Total Elapsed Time 43:37:02

==

== Disk Start Temperature: 30C

==

== Current Disk Temperature: 30C,

 

2 hours ago, Zorlofe said:

I wish I had noticed this post earlier, lol. Thank you for pointing this out. I had the same problem as the previous poster and that would have saved me some time on a 12TB drive.

That means preclear doesn't do anything when adding a new drive? That's strange. I thought Unraid would pick up that the disk is precleared and go straight to syncing parity. I didn't format the disk (as trurl warned about), just precleared it.

2 hours ago, Tomr said:

That means preclear doesn't do anything when adding a new drive? That's strange. I thought Unraid would pick up that the disk is precleared and go straight to syncing parity. I didn't format the disk (as trurl warned about), just precleared it.

Unraid does not need to do anything to parity when you add a 'clear' disk, since it is all zeroes and does not affect parity. It is at this stage that you tell Unraid to format the disk (which only takes a few minutes), and since formatting is a form of write operation, Unraid will update parity to reflect it.

6 hours ago, itimpi said:

Unraid does not need to do anything to parity when you add a 'clear' disk, since it is all zeroes and does not affect parity. It is at this stage that you tell Unraid to format the disk (which only takes a few minutes), and since formatting is a form of write operation, Unraid will update parity to reflect it.

The thing is, I didn't tell it to format anything. I just added the new slot, started the array, and it went straight to clearing the disk. Now that I think about it, it's probably because my array is encrypted.

18 minutes ago, Tomr said:

The thing is, I didn't tell it to format anything. I just added the new slot, started the array, and it went straight to clearing the disk. Now that I think about it, it's probably because my array is encrypted.

Encryption is irrelevant to this as parity runs at the physical sector level.   Unraid will go straight into clearing a disk when you add it any time it does not recognise a ‘clear’ signature as being present on the drive.

1 hour ago, itimpi said:

Encryption is irrelevant to this as parity runs at the physical sector level.   Unraid will go straight into clearing a disk when you add it any time it does not recognise a ‘clear’ signature as being present on the drive.

I wonder why it didn't recognize the correct signature then.
