Unassigned Devices Preclear - a utility to preclear disks before adding them to the array


dlandon


The preclear page was not accessible for whatever reason.  My point is that the error is not directly from a preclear operation.  As you say, you were not running a preclear.

 

I don't understand this:

3 minutes ago, comet424 said:

host: "192.168.0.3", referrer: "http://192.168.0.3/Docker"

Docker?

Link to comment

Ah, well, that's why I was asking: I was getting an error with "preclear" in it and wanted to know what caused it.

 

No idea. That's my Unraid server IP, and that's the path to the Docker page, as far as I know.

I'm still dealing with my parity; it's going to take 2 days. Just when I thought I'd solved all the server issues by upgrading, I find another one, lol.

Link to comment

I think I goofed, and I'm not sure if this is the best place to write about it?

I have 2 new 8TB drives and no free internal slots, so I connected a multi-bay USB JBOD enclosure. Unraid Unassigned Devices saw both disks, I used the plugin to start a preclear, it said both drives would be done, I hit start, and 98 hours later it said complete.

But I only ever saw the one drive's light flashing, and only 1 drive shows 'precleared'. When I click "+" in the tools menu, both drives have a preclear file. I think the system got confused because both drives report as the same device (both show DEV1), though one is listed as sdj and one as sdk.

It didn't seem to be going any further, and it never said it was moving to the next drive, but when I clicked the top square icon, the precleared status was gone for the one drive, though the logs remain.

So, should I remove one and just do 1 drive at a time? Is it because they are the same model that things got confused, or am I way off?

(My plan was to preclear both, then shut down, replace the smaller drive, let it rebuild, then shut down and replace the next smallest drive.)

 

any tips appreciated!

Screenshot 2023-06-08 at 10.19.40 pm.png

Screenshot 2023-06-08 at 10.19.52 pm.png

Link to comment
5 minutes ago, itimpi said:

If the USB enclosure does not present the drives with different Id’s then it will not work properly with Unraid.

Ahh, I thought that since it showed the two different device names it would be OK. I will remove 1 drive and do them individually, cheers!
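One way to confirm whether an enclosure exposes each drive with a unique ID is to compare the serials it reports. A minimal sketch of the duplicate check, run here against sample data (the serial value is hypothetical; on a live system the input would come from `lsblk -dno NAME,SERIAL`):

```shell
# Duplicate-serial check: if two disks share a serial, the enclosure is masking
# the real drive identities (like the two DEV1 entries described above).
# Sample data stands in for the output of: lsblk -dno NAME,SERIAL
printf 'sdj SERIAL123\nsdk SERIAL123\n' > /tmp/disk_ids

# Print any serial that appears more than once
awk '{print $2}' /tmp/disk_ids | sort | uniq -d
```

If this prints anything, the drives are not individually addressable through that enclosure, and preclearing them one at a time (or on direct SATA ports) is the safer route.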

Link to comment
  • 3 weeks later...
3 hours ago, adammerkley said:

Apologies if this has been asked previously, I searched the thread and couldn't find it.

 

I started a 3 cycle pre clear on a 16TB drive a few days ago.  It's almost finished with the post-read on cycle 1.  Is it possible to change the settings to just 1 cycle at this point?

No.  You can pause and restart where it left off, but you can't change the test parameters.

Link to comment
On 5/21/2023 at 2:37 PM, dlandon said:

Several issues with what you are doing:

  • A mount point has been created for these kinds of mounts.  Use /mnt/addons/.  It used to be recommended to use /mnt/disks/ so FCP wouldn't flag a problem with your mounts.  That's why UD creates /mnt/addons/: so users have a place for special mounts that don't interfere with UD.  UD adds some protection, and FCP will not flag anything there as a problem.
  • Your ZFS mount needs to happen after the array is running and UD has been installed on a reboot.  Updates of UD won't flag the issue because the protection mount has already been applied at boot.

dlandon

 

I changed the mount point for the ZFS pool to /mnt/addons/

 

Once that was done I ran the latest UD update and rebooted the server. The banner continues to show "reboot required to apply Unassigned Devices update"

 

Two questions:

1) Is there a way to force update UD?
2) Is it safe to update to Unraid 6.12 with this issue still present?

 

Thank you!

 

*edit* It looks like a few more restarts solved the problem. In case it helps anyone, this is what I did for the mounting:

 

Quote
  1. Run the command sudo zfs get all. This will list all the properties of your current ZFS pools and file systems. One of those properties, if correctly set, should be mountpoint=.
  2. If the mountpoint property is not set, or if you want to change it, you can do so with the following command: sudo zfs set mountpoint=/mnt/addons dumpster.
  3. After the mount point has been set, you may want to change its owner. If root owns the mount point, you can change the owner with the following command: sudo chown -R user:user /mnt/addons. Replace user:user with the actual username and group that should own the mount point. This command will make the specified user and group own the mount point and everything inside it.

 

I only did this so I can update UD properly before upgrading to 6.12.1

 

Once updated to 6.12.1, I created a new pool with all 12 ZFS drives, left the filesystem set to "Auto", and started the array. It automatically imported the ZFS pool and mounted it at "/mnt/<zfs pool name>".

 

Initially I set "Enable user share assignment" to "No" until I had done a few more restarts and changed all the Docker container paths to reflect the new mount point. Only then did I stop the array, update the share function, and turn the SMB service back on.

 

Thank you @dlandon for the support!

 

Edited by v3life
Updates/Resolution
Link to comment

I've been trying to preclear a set of new disks on a brand-new Unraid 6.12.1 server for a week now.  I can always get 3 disks to finish preclear fine, but one of the four keeps restarting.  It gets to 99% on zeroing and then freezes, tries to restart at the sector it froze on, freezes again, and then restarts at 0%.

At first I thought it was the disk itself, but I swapped the disk onto other cables, and then that disk precleared fine while the one plugged into the old port failed.  I figured I had a bad cable, so I replaced that (it's a forward breakout 8087->SATA cable) with a newly ordered one, and the same SATA plug failed, with the same drive as last time. This drive precleared previously on a different SATA port.

I'll post the error below, but to be clear:
Drive1 SATA1->Cleared

Drive2 SATA2->Cleared

Drive3 SATA3->Keeps restarting zeroing phase

Drive4 SATA4->Cleared

 

New Configuration/Swapped cables, and

Drive1 SATA4->Cleared

Drive2 SATA3->Keeps restarting zeroing phase

Drive3 SATA2->Cleared

Drive4 SATA1->Cleared

 

My HBA is an LSI 9217-4i4e purchased from the Art of Server's eBay store, and it didn't present problems when used in another machine, but I wasn't attaching such large disks there. It could have developed a problem between then and now, I suppose, as it was pulled from the old machine and sat in a static bag for a month before going into the new build.

 

The dd process hangs at a sector near the end and can't recover. It looks like the other disks do hit the "dd process hung at" issue too, but they are able to recover where they left off.

 

Does it sound like I have a bad HBA? These are large disks (22TB), so I wasn't sure whether previous disks simply weren't large enough for the problem to show... has anyone else had trouble clearing a 22TB disk with this plugin? I'm not sure how to interpret the dd process hanging on all of the disks but being able to recover.
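For context, the zeroing phase is essentially a large sequential dd write of zeros that is then verified with a read pass. A minimal sketch of the same write-then-verify idea against a small scratch file (the path and size are illustrative, not the plugin's actual invocation, and this assumes bash for the process substitution):

```shell
# Stand-in "disk": an 8 MiB scratch file instead of a real block device
IMG=/tmp/fake_disk.img
SIZE_MB=8

# Zeroing pass (the plugin does the equivalent against /dev/sdX in large blocks)
dd if=/dev/zero of="$IMG" bs=1M count="$SIZE_MB" status=none

# Verify pass: the file should compare equal to a zero stream of the same length
if cmp -s "$IMG" <(head -c "$((SIZE_MB * 1024 * 1024))" /dev/zero); then
    echo "verify OK"
else
    echo "verify FAILED"
fi
```

A hang at 99% on a real device, as described above, points at the transport (port, cable, or HBA channel) rather than this logic, since the same write succeeds for the same drive on other ports.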

 

Here's one of the error logs (disk SN changed), but they have all looked like this when the device restarts the clear. The zeroing takes ~27 hours, so it's a long wait to see if a change has made any difference.
 

preclear_disk_SN_20300.txt

Edited by Alyred
Link to comment
On 6/28/2023 at 3:14 PM, Alyred said:

I've been trying to preclear a set of new disks on a brand-new UnRAID 6.12.1 server for a week now.  I can always get 3 disks to finish preclear fine, but one of the four keeps restarting.  It gets to 99% on zeroing and then freezes, tries to restart at the sector it froze on, freezes again, and then restarts at 0%.

[…]

Yeah, after the 5th attempt on that port, it reported "errors encountered, please check the log", and when I check the log, it says it encountered SMART failures, but then shows no SMART failures:
 

This is after the 5th "attempt", started in the previously pasted log:
Jun 29 18:02:13 preclear_disk_<REDACTED>_20300: Zeroing: dd output: 10486357+0 records out
Jun 29 18:02:13 preclear_disk_<REDACTED>_20300: Zeroing: dd output: 21991484555264 bytes (22 TB, 20 TiB) copied, 99134.1 s, 222 MB/s
Jun 29 18:02:13 preclear_disk_<REDACTED>_20300: Zeroing: dd output: 10487140+0 records in
Jun 29 18:02:13 preclear_disk_<REDACTED>_20300: Zeroing: dd output: 10487140+0 records out
Jun 29 18:02:13 preclear_disk_<REDACTED>_20300: Zeroing: dd output: 21993126625280 bytes (22 TB, 20 TiB) copied, 99146.8 s, 222 MB/s
Jun 29 18:02:13 preclear_disk_<REDACTED>_20300: Zeroing: dd output: 10487927+0 records in
Jun 29 18:02:13 preclear_disk_<REDACTED>_20300: Zeroing: dd output: 10487927+0 records out
Jun 29 18:02:13 preclear_disk_<REDACTED>_20300: Zeroing: dd output: 21994777083904 bytes (22 TB, 20 TiB) copied, 99159.5 s, 222 MB/s
Jun 29 18:02:13 preclear_disk_<REDACTED>_20300: Zeroing: dd output: 10488650+0 records in
Jun 29 18:02:13 preclear_disk_<REDACTED>_20300: Zeroing: dd output: 10488650+0 records out
Jun 29 18:02:13 preclear_disk_<REDACTED>_20300: Zeroing: dd output: 21996293324800 bytes (22 TB, 20 TiB) copied, 99171.2 s, 222 MB/s
Jun 29 18:02:13 preclear_disk_<REDACTED>_20300: Zeroing: dd output: 10489434+0 records in
Jun 29 18:02:13 preclear_disk_<REDACTED>_20300: Zeroing: dd output: 10489434+0 records out
Jun 29 18:02:13 preclear_disk_<REDACTED>_20300: Zeroing: dd output: 21997937491968 bytes (22 TB, 20 TiB) copied, 99183.9 s, 222 MB/s
Jun 29 18:02:13 preclear_disk_<REDACTED>_20300: Zeroing: dd output: 10490224+0 records in
Jun 29 18:02:13 preclear_disk_<REDACTED>_20300: Zeroing: dd output: 10490224+0 records out
Jun 29 18:02:13 preclear_disk_<REDACTED>_20300: Zeroing: dd output: 21999594242048 bytes (22 TB, 20 TiB) copied, 99196.6 s, 222 MB/s
Jun 29 18:02:13 preclear_disk_<REDACTED>_20300: Zeroing: zeroing the disk failed!
Jun 29 18:11:56 preclear_disk_<REDACTED>_20300: S.M.A.R.T.: Error:
Jun 29 18:11:56 preclear_disk_<REDACTED>_20300: S.M.A.R.T.:
Jun 29 18:11:56 preclear_disk_<REDACTED>_20300: S.M.A.R.T.: ATTRIBUTE               INITIAL NOW STATUS
Jun 29 18:11:56 preclear_disk_<REDACTED>_20300: S.M.A.R.T.: Reallocated_Sector_Ct   0       -
Jun 29 18:11:56 preclear_disk_<REDACTED>_20300: S.M.A.R.T.: Power_On_Hours          159     -
Jun 29 18:11:56 preclear_disk_<REDACTED>_20300: S.M.A.R.T.: Temperature_Celsius     41      -
Jun 29 18:11:56 preclear_disk_<REDACTED>_20300: S.M.A.R.T.: Reallocated_Event_Count 0       -
Jun 29 18:11:56 preclear_disk_<REDACTED>_20300: S.M.A.R.T.: Current_Pending_Sector  0       -
Jun 29 18:11:56 preclear_disk_<REDACTED>_20300: S.M.A.R.T.: Offline_Uncorrectable   0       -
Jun 29 18:11:56 preclear_disk_<REDACTED>_20300: S.M.A.R.T.: UDMA_CRC_Error_Count    0       -
Jun 29 18:11:56 preclear_disk_<REDACTED>_20300: S.M.A.R.T.: 
Jun 29 18:11:56 preclear_disk_<REDACTED>_20300: error encountered, exiting ...

 

Anyone have any ideas why it would fail the preclear on this port, but seem to be OK on every other port? Again, I've replaced the cable, and the drive has cleared on other ports. The drive gets to 99% on the zero, then starts over, taking several days to completely fail. I do see the other SATA ports pull an error at the end of zeroing, but they are able to recover.

I'm running 4x 22TB WD Red Pros from an LSI 9207-4i4e SAS HBA with the drives connected to the internal 8087 port with a SATA forward breakout cable.

My /var/log/messages file is empty, but it is a brand-new server where I've only been clearing and testing disks so far. Is there another or more detailed log I can pull?
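On most Linux systems (Unraid included) transport errors tend to land in the kernel log rather than messages, so filtering the syslog or `dmesg` for disk-related lines is usually the quickest check. A sketch of that filter, run here against an inline sample modeled on the entries quoted above rather than a live /var/log/syslog:

```shell
# Sample log lines modeled on this thread's output (real input would be
# /var/log/syslog, or the output of `dmesg`)
cat > /tmp/sample_syslog <<'EOF'
Jun 29 18:02:13 preclear_disk_X_20300: Zeroing: zeroing the disk failed!
Jun 29 18:02:14 kernel: sd 1:0:2:0: [sdc] tag#101 abort
Jun 29 18:05:00 server avahi-daemon[999]: Interface veth0 no longer relevant
EOF

# Keep only disk/preclear-related lines, dropping unrelated noise
grep -iE 'sd[a-z]|preclear|ata[0-9]' /tmp/sample_syslog
```

On a live box, `dmesg | grep -iE 'sd[a-z]|ata[0-9]'` run right after a hang is often more informative than the archived logs, since resets and link errors show up there immediately.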

Link to comment

I don't know if it's a bug or because I upgraded, but the preclear didn't resume after a reboot.

 

I was running 6.12.1, doing a preclear on 2 drives. I did an upgrade to 6.12.2 and then rebooted, thinking I could resume the preclear, which I normally can, but I couldn't. So I just restarted the preclear; I was only about 5-6 hours into it, so not bad. But I'm not sure if resume will work again if I do a reboot.

Link to comment
26 minutes ago, comet424 said:

So I did a test and let the preclear run for 15 min on 6.12.2, and you can't resume the preclear if you do a reboot. So I've got to make sure I don't need to reboot the server in the meantime during a week of preclearing.

 

I just tried a reboot to see if there was an issue with pausing and resuming a preclear, and it worked for me.  In order to catch this so I can troubleshoot it, do this:

  • Set up the syslog server to write the log to disk.
  • Reboot while running a preclear.
  • Post the syslog.
Link to comment
23 hours ago, comet424 said:

@dlandon if there are any other tests you need me to do, I'll wait till the first round of preclear is done. I'm 35 hours in and step 2 is at 66%, so I don't want to take the chance that any tests would restart it back to 0.

And so far, no log file from the syslog server.

I need the log when Unraid shuts down and reboots.  Unfortunately, it is lost when rebooting.

 

You have the Tips and Tweaks plugin installed.  There is a log archive feature that saves a copy of the syslog to the flash when Unraid is shutting down.  Set "Enable syslog Archiving?" to 'Yes', then start a preclear and shut down Unraid.  When you reboot, the syslog will be archived to '/flash/logs/'.

Link to comment
5 minutes ago, DiscoverIt said:

It looks like UD doesn't accurately check whether a remote NFS mount is available. In this case I haven't reconfigured the NFS export, so it's just creating folders in the tmpfs. Nothing in syslog shows a failure to mount, but cross-referencing `df` confirms no NFS mounts are active.

 

 

Screenshot 2023-07-03 093555.png

Your issue is not a UD Preclear issue.  Please post your issue with diagnostics here: 

 

Link to comment

@dlandon so I think it kind of fixed itself, but I think I've got a SATA cable error, I dunno what's going on.

But here is a diagnostic from before I rebooted, as I couldn't stop the array.

Then I rebooted, started syslog, and started preclear. One hard drive let me resume, the other didn't.

Then I rebooted.

 

Now I can resume both drives, but my one array disk isn't working. Here is the 2nd diagnostic.

 

Can you tell what's wrong? It doesn't solve the issue: before, when I gave diagnostics and they didn't show errors, the resumes didn't work, so I dunno if my logs help you or not.

 

I've got to swap in new SATA cables now to see if it fixes my array. I don't get it...

backupserver-diagnostics-20230703-1809.zip backupserver-diagnostics-20230703-1826.zip backupserver-diagnostics-20230703-1817.zip syslog.zip syslog-20230703-181744.txt

Link to comment

Looks like it's the hard drive in my array that's failing, not the cables. Ugh. At least it's still under warranty.

 

Hopefully the syslog will show why at least the one drive couldn't resume preclear. They can both resume preclear now, so I'm not sure if it will show why it wasn't resuming before, unless I reset it and start fresh again and test that way.

 

Link to comment
On 6/28/2023 at 6:14 PM, Alyred said:

I've been trying to preclear a set of new disks on a brand-new UnRAID 6.12.1 server for a week now.  I can always get 3 disks to finish preclear fine, but one of the four keeps restarting.  It gets to 99% on zeroing and then freezes, tries to restart at the sector it froze on, freezes again, and then restarts at 0%.

At first I thought it was the disk itself, but I swapped the disk with other array cables, and then that disk pre-cleared fine but the one plugged into the old port failed.  I figured I had a bad cable, so replaced that (It's a forward breakout 8087->SATA cable) ordered new and the same SATA plug failed, same drive as last time. This drive precleared previously on a different SATA port.

 

 

I'm seeing a very similar problem using the plugin version of the preclear script on 2 8TB SAS drives I got last week.  Both get to near the end of the zero phase, hang, and keep restarting until failure (I got the same restart at 0% on, I think, the 3rd retry attempt).  I installed the preclear Docker image to test the original preclear script on my server (running 6.12.1 at the moment), and the first drive I tried has so far completed the whole zeroing process with the script included in the Docker version, and is now in the final read phase.

This was with drives from 2 different manufacturers (one HGST, one Seagate Exos, both 8TB).  The Seagate ran through the zero process 3 total attempts, with 3 retries per attempt, and failed every time; the HGST failed a 3-retry pass before I started it using the Docker.  I've currently got the Exos drive running a preclear on a virgin 6.11.5 install with just the preclear & UD plugins, installed as a VM on my Proxmox server with the drive passed through to that VM.  So far it's at 83% and still going (VERY slowly, about 54 MB/s on the zero, at around 38 hrs on JUST the zero phase, lol).  I'll let it go until it either completes or fails on the zero, move it back into my primary rig if it passes, and try it under the Docker image too.

I DO notice that the Exos drive was shown with a reported size of 8001563222016 bytes total by the preclear plugin on 6.12.1, where under 6.11.5 it's showing 7865536647168, so I'm not sure where, exactly, it was getting the larger size from.  Same controller in both machines; the only difference is the drive is being passed through to the VM directly rather than running on bare metal.
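For what it's worth, the two reported sizes above differ by roughly 136 GB; quick shell arithmetic on the values from the post:

```shell
# Sizes as reported by the two setups, in bytes (values taken from the post)
A=8001563222016   # 6.12.1, bare metal plugin
B=7865536647168   # 6.11.5, VM with passthrough
echo "difference: $((A - B)) bytes"
echo "difference: $(( (A - B) / 1000000000 )) GB (decimal)"
```

A gap that size suggests the VM may be seeing a truncated or remapped device rather than the raw disk, though that's speculation; comparing `blockdev --getsize64 /dev/sdX` on both hosts would show which size is real.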

 

As far as the HGST drive being in the final step (final read): I changed NOTHING on the server or plugins, just installed the plugin's Docker image (reports Docker 1.22) and started it using that, with its defaults, instead of the canned preclear script.

 

Link to comment

I have installed an 8TB HDD that I want to preclear. On the Main page you can see the HDD as unassigned, with the word MOUNT next to it. But it does not appear in the Tools > Preclear page. What do I need to do?

 

The 8TB appears as part1 and part2.

Edited by pras1011
Link to comment
