Preclear plugin



33 minutes ago, trurl said:

Did you look at the preclear log?

 

Aug 31 14:11:57 preclear_disk_ZA12PD7W_13451: Command: /usr/local/emhttp/plugins/preclear.disk/script/preclear_disk.sh --cycles 1 --no-prompt /dev/sdw

Aug 31 14:11:57 preclear_disk_ZA12PD7W_13451: Preclear Disk Version: 1.0.22

Aug 31 14:11:57 preclear_disk_ZA12PD7W_13451: S.M.A.R.T. info type: default

Aug 31 14:11:57 preclear_disk_ZA12PD7W_13451: S.M.A.R.T. attrs type: default

Aug 31 14:11:57 preclear_disk_ZA12PD7W_13451: Disk size: 8001563222016

Aug 31 14:11:57 preclear_disk_ZA12PD7W_13451: Disk blocks: 1953506646

Aug 31 14:11:57 preclear_disk_ZA12PD7W_13451: Blocks (512 bytes): 15628053168

Aug 31 14:11:57 preclear_disk_ZA12PD7W_13451: Block size: 4096

Aug 31 14:11:57 preclear_disk_ZA12PD7W_13451: Start sector: 0

Aug 31 14:11:59 preclear_disk_ZA12PD7W_13451: Pre-read: pre-read verification started (1/5)....

Aug 31 14:11:59 preclear_disk_ZA12PD7W_13451: Pre-Read: dd if=/dev/sdw of=/dev/null bs=2097152 skip=0 count=8001563222016 conv=noerror iflag=nocache,count_bytes,skip_bytes

 

This is the log from the restart, but I guess I didn't look at the original one. The report showed SMART passed and just said zeroing failed. I guess I'll have to see what the second try does. Sorry for not providing the correct information.

  • 2 weeks later...

Hi there...Anyone else having this banner pop up? - "Preclear Plugin (2021.04.11): unsupported Unraid version (6.9.2). Please upgrade your OS/plugin or request proper support"


 

I have checked the plugin and it is up to date:


 

@gfjardim just wondering if this is a known issue?

Regards


I ran three rounds of preclear with a brand-new WD Red and got the following error on the third run (full log attached). It reports a SMART error, but I don't see an error below.

 

Log shows earlier events:

 

Quote

Sep 22 19:47:16 preclear_disk_WD-WX52D31RYVY4_3942: /usr/local/emhttp/plugins/preclear.disk/script/preclear_disk.sh: line 1372: wait: pid 28787 is not a child of this shell
Sep 22 19:47:18 preclear_disk_WD-WX52D31RYVY4_3942: Pre-Read: dd - read 4000787030016 of 4000787030016 (0).
Sep 22 19:47:18 preclear_disk_WD-WX52D31RYVY4_3942: Pre-Read: elapsed time - 7:24:38
Sep 22 19:47:18 preclear_disk_WD-WX52D31RYVY4_3942: Pre-Read: dd command failed, exit code [127].

 

Not exactly descriptive, but looks more like a pre-clear script error than a drive error? Or is the drive DOA?
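
For reference, exit code 127 is the shell's conventional "command not found" status, and codes above 128 conventionally mean the process was killed by a signal (code minus 128). A small decoder sketch (my own helper, not part of the preclear script):

```shell
# Hypothetical helper (not from preclear_disk.sh) to decode an exit status.
decode_status() {
  # Shell convention: 126 = found but not executable, 127 = not found,
  # >128 = process terminated by signal number (status - 128).
  if [ "$1" -eq 127 ]; then
    echo "exit 127: command not found"
  elif [ "$1" -eq 126 ]; then
    echo "exit 126: command found but not executable"
  elif [ "$1" -gt 128 ]; then
    echo "exit $1: killed by signal $(( $1 - 128 ))"
  else
    echo "exit $1"
  fi
}

decode_status 127   # the Pre-Read failure above
decode_status 141   # 141 - 128 = 13 (SIGPIPE)
```

On that reading, the 127 points at a script/environment problem (something dd's pipeline tried to run wasn't found) rather than a drive fault.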

 

(Forum doesn't work properly with Ghostery, and code snippet puts it all in one line wtf)

 

Quote

Sep 22 19:47:19 preclear_disk_WD-WX52D31RYVY4_3942: Pre-read: pre-read verification failed!
Sep 22 19:47:20 preclear_disk_WD-WX52D31RYVY4_3942: S.M.A.R.T.: Error:
Sep 22 19:47:20 preclear_disk_WD-WX52D31RYVY4_3942: S.M.A.R.T.:
Sep 22 19:47:20 preclear_disk_WD-WX52D31RYVY4_3942: S.M.A.R.T.: ATTRIBUTE                INITIAL  NOW  STATUS
Sep 22 19:47:21 preclear_disk_WD-WX52D31RYVY4_3942: S.M.A.R.T.: Reallocated_Sector_Ct    0        0    -
Sep 22 19:47:21 preclear_disk_WD-WX52D31RYVY4_3942: S.M.A.R.T.: Power_On_Hours           1        54   Up 53
Sep 22 19:47:21 preclear_disk_WD-WX52D31RYVY4_3942: S.M.A.R.T.: Temperature_Celsius      37       39   Up 2
Sep 22 19:47:21 preclear_disk_WD-WX52D31RYVY4_3942: S.M.A.R.T.: Reallocated_Event_Count  0        0    -
Sep 22 19:47:21 preclear_disk_WD-WX52D31RYVY4_3942: S.M.A.R.T.: Current_Pending_Sector   0        0    -
Sep 22 19:47:21 preclear_disk_WD-WX52D31RYVY4_3942: S.M.A.R.T.: Offline_Uncorrectable    0        0    -
Sep 22 19:47:21 preclear_disk_WD-WX52D31RYVY4_3942: S.M.A.R.T.: UDMA_CRC_Error_Count     0        0    -
Sep 22 19:47:21 preclear_disk_WD-WX52D31RYVY4_3942: S.M.A.R.T.: SMART overall-health self-assessment test result: PASSED
Sep 22 19:47:21 preclear_disk_WD-WX52D31RYVY4_3942: error encountered, exiting...

preclear_disk_WD-WX52D31RYVY4_3942.txt

2 hours ago, Paul Keating said:

Hi there...Anyone else having this banner pop up? - "Preclear Plugin (2021.04.11): unsupported Unraid version (6.9.2). Please upgrade your OS/plugin or request proper support"


I'm on 6.9.2 also with the same preclear version, and I'm not seeing that warning.

On 9/23/2021 at 11:58 AM, tjb_altf4 said:

I'm on 6.9.2 also with the same preclear version, and I'm not seeing that warning.

Thanks for that....certainly odd behavior and a first for me. I did have a loss of network access and had to reboot after 3 months of happy running. I will monitor and see if it happens again.

On 9/24/2021 at 1:46 PM, yoban said:

Thanks for that....certainly odd behavior and a first for me. I did have a loss of network access and had to reboot after 3 months of happy running. I will monitor and see if it happens again.

Well... first reboot in 2 months, and I now see the incompatibility warning banner!

 

Funny thing is, I've used 6.9.2 and this version of preclear a few times in the last few months and it's worked just fine.

I think this might be something driven from CA that @gfjardim needs to make a (hopefully) small update for.


I am getting an error during post-read verification, for the second time now. Does anyone know what could cause that? Do you think the disk is good to use anyway?

 

############################################################################################################################
#                                                                                                                          #
#                                        unRAID Server Preclear of disk ZCT0DTK5                                           #
#                                       Cycle 1 of 1, partition start on sector 64.                                        #
#                                                                                                                          #
#                                                                                                                          #
#   Step 1 of 5 - Pre-read verification:                                                  [16:18:25 @ 136 MB/s] SUCCESS    #
#   Step 2 of 5 - Zeroing the disk:                                                       [16:59:16 @ 130 MB/s] SUCCESS    #
#   Step 3 of 5 - Writing unRAID's Preclear signature:                                                          SUCCESS    #
#   Step 4 of 5 - Verifying unRAID's Preclear signature:                                                        SUCCESS    #
#   Step 5 of 5 - Post-Read verification:                                                                          FAIL    #
#                                                                                                                          #
#                                                                                                                          #
#                                                                                                                          #
#                                                                                                                          #
#                                                                                                                          #
#                                                                                                                          #
#                                                                                                                          #
############################################################################################################################
#                              Cycle elapsed time: 34:02:11 | Total elapsed time: 34:02:11                                 #
############################################################################################################################


############################################################################################################################
#                                                                                                                          #
#                                        S.M.A.R.T. Status (device type: default)                                          #
#                                                                                                                          #
#                                                                                                                          #
#   ATTRIBUTE                INITIAL  STATUS                                                                               #
#   Reallocated_Sector_Ct    0        -                                                                                    #
#   Power_On_Hours           207      -                                                                                    #
#   Runtime_Bad_Block        0        -                                                                                    #
#   End-to-End_Error         0        -                                                                                    #
#   Reported_Uncorrect       0        -                                                                                    #
#   Airflow_Temperature_Cel  28       -                                                                                    #
#   Current_Pending_Sector   0        -                                                                                    #
#   Offline_Uncorrectable    0        -                                                                                    #
#   UDMA_CRC_Error_Count     0        -                                                                                    #
#                                                                                                                          #
#                                                                                                                          #
#                                                                                                                          #
############################################################################################################################
#   SMART overall-health self-assessment test result: PASSED                                                               #
############################################################################################################################

 

 

 

 

 

Oct  3 18:36:46 TheArk preclear_disk_ZCT0DTK5[26100]: Zeroing: progress - 90% zeroed @ 106 MB/s
Oct  3 21:11:11 TheArk rc.diskinfo[8155]: SIGHUP received, forcing refresh of disks info.
Oct  3 21:11:13 TheArk preclear_disk_ZCT0DTK5[26100]: Zeroing: dd - wrote 8001563222016 of 8001563222016 (0).
Oct  3 21:11:13 TheArk preclear_disk_ZCT0DTK5[26100]: Zeroing: elapsed time - 16:59:13
Oct  3 21:11:13 TheArk preclear_disk_ZCT0DTK5[26100]: Zeroing: dd exit code - 0
Oct  3 21:11:13 TheArk preclear_disk_ZCT0DTK5[26100]: Zeroing: zeroing the disk completed!
Oct  3 21:11:14 TheArk preclear_disk_ZCT0DTK5[26100]: Signature: writing signature:    0   0   2   0   0 255 255 255   1   0   0   0 255 255 255 255
Oct  3 21:11:14 TheArk preclear_disk_ZCT0DTK5[26100]: Signature: verifying unRAID's signature on the MBR ...
Oct  3 21:11:15 TheArk preclear_disk_ZCT0DTK5[26100]: Signature: Unraid preclear signature is valid!
Oct  3 21:11:15 TheArk preclear_disk_ZCT0DTK5[26100]: Post-Read: post-read verification started (1/5)....
Oct  3 21:11:15 TheArk preclear_disk_ZCT0DTK5[26100]: Post-Read: verifying the beginning of the disk.
Oct  3 21:11:15 TheArk preclear_disk_ZCT0DTK5[26100]: Post-Read: cmp /tmp/.preclear/sdg/fifo /dev/zero
Oct  3 21:11:15 TheArk preclear_disk_ZCT0DTK5[26100]: Post-Read: dd if=/dev/sdg of=/tmp/.preclear/sdg/fifo count=2096640 skip=512 iflag=nocache,count_bytes,skip_bytes
Oct  3 21:11:16 TheArk preclear_disk_ZCT0DTK5[26100]: Post-Read: verifying the rest of the disk.
Oct  3 21:11:16 TheArk preclear_disk_ZCT0DTK5[26100]: Post-Read: cmp /tmp/.preclear/sdg/fifo /dev/zero
Oct  3 21:11:16 TheArk preclear_disk_ZCT0DTK5[26100]: Post-Read: dd if=/dev/sdg of=/tmp/.preclear/sdg/fifo bs=2097152 skip=2097152 count=8001561124864 iflag=nocache,count_bytes,skip_bytes
Oct  3 21:55:52 TheArk preclear_disk_ZCT0DTK5[26100]: Post-Read: cmp command failed - disk not zeroed
Oct  3 21:55:52 TheArk preclear_disk_ZCT0DTK5[26100]: Post-Read: dd - read 480839204864 of 8001563222016 (7520724017152).
Oct  3 21:55:52 TheArk preclear_disk_ZCT0DTK5[26100]: Post-Read: elapsed time - 0:44:24
Oct  3 21:55:52 TheArk preclear_disk_ZCT0DTK5[26100]: Post-Read: dd command failed, exit code [141].
Oct  3 21:55:52 TheArk preclear_disk_ZCT0DTK5[26100]: Post-Read: dd output: 459993513984 bytes (460 GB, 428 GiB) copied, 2525.4 s, 182 MB/s
Oct  3 21:55:52 TheArk preclear_disk_ZCT0DTK5[26100]: Post-Read: dd output: 220437+0 records in
Oct  3 21:55:52 TheArk preclear_disk_ZCT0DTK5[26100]: Post-Read: dd output: 220436+0 records out
Oct  3 21:55:52 TheArk preclear_disk_ZCT0DTK5[26100]: Post-Read: dd output: 462287798272 bytes (462 GB, 431 GiB) copied, 2537.64 s, 182 MB/s
Oct  3 21:55:52 TheArk preclear_disk_ZCT0DTK5[26100]: Post-Read: dd output: 221426+0 records in
Oct  3 21:55:52 TheArk preclear_disk_ZCT0DTK5[26100]: Post-Read: dd output: 221425+0 records out
Oct  3 21:55:52 TheArk preclear_disk_ZCT0DTK5[26100]: Post-Read: dd output: 464361881600 bytes (464 GB, 432 GiB) copied, 2549.79 s, 182 MB/s
Oct  3 21:55:52 TheArk preclear_disk_ZCT0DTK5[26100]: Post-Read: dd output: 222396+0 records in
Oct  3 21:55:52 TheArk preclear_disk_ZCT0DTK5[26100]: Post-Read: dd output: 222395+0 records out
Oct  3 21:55:52 TheArk preclear_disk_ZCT0DTK5[26100]: Post-Read: dd output: 466396119040 bytes (466 GB, 434 GiB) copied, 2561.99 s, 182 MB/s
Oct  3 21:55:52 TheArk preclear_disk_ZCT0DTK5[26100]: Post-Read: dd output: 223367+0 records in
Oct  3 21:55:52 TheArk preclear_disk_ZCT0DTK5[26100]: Post-Read: dd output: 223366+0 records out
Oct  3 21:55:52 TheArk preclear_disk_ZCT0DTK5[26100]: Post-Read: dd output: 468432453632 bytes (468 GB, 436 GiB) copied, 2574.1 s, 182 MB/s
Oct  3 21:55:52 TheArk preclear_disk_ZCT0DTK5[26100]: Post-Read: dd output: 224428+0 records in
Oct  3 21:55:52 TheArk preclear_disk_ZCT0DTK5[26100]: Post-Read: dd output: 224427+0 records out
Oct  3 21:55:52 TheArk preclear_disk_ZCT0DTK5[26100]: Post-Read: dd output: 470657531904 bytes (471 GB, 438 GiB) copied, 2587.39 s, 182 MB/s
Oct  3 21:55:52 TheArk preclear_disk_ZCT0DTK5[26100]: Post-Read: dd output: 225417+0 records in
Oct  3 21:55:52 TheArk preclear_disk_ZCT0DTK5[26100]: Post-Read: dd output: 225416+0 records out
Oct  3 21:55:52 TheArk preclear_disk_ZCT0DTK5[26100]: Post-Read: dd output: 472731615232 bytes (473 GB, 440 GiB) copied, 2599.69 s, 182 MB/s
Oct  3 21:55:52 TheArk preclear_disk_ZCT0DTK5[26100]: Post-Read: dd output: 226373+0 records in
Oct  3 21:55:52 TheArk preclear_disk_ZCT0DTK5[26100]: Post-Read: dd output: 226372+0 records out
Oct  3 21:55:52 TheArk preclear_disk_ZCT0DTK5[26100]: Post-Read: dd output: 474736492544 bytes (475 GB, 442 GiB) copied, 2611.91 s, 182 MB/s
Oct  3 21:55:52 TheArk preclear_disk_ZCT0DTK5[26100]: Post-Read: dd output: 227350+0 records in
Oct  3 21:55:52 TheArk preclear_disk_ZCT0DTK5[26100]: Post-Read: dd output: 227349+0 records out
Oct  3 21:55:52 TheArk preclear_disk_ZCT0DTK5[26100]: Post-Read: dd output: 476785410048 bytes (477 GB, 444 GiB) copied, 2624.21 s, 182 MB/s
Oct  3 21:55:52 TheArk preclear_disk_ZCT0DTK5[26100]: Post-Read: dd output: 228315+0 records in
Oct  3 21:55:52 TheArk preclear_disk_ZCT0DTK5[26100]: Post-Read: dd output: 228314+0 records out
Oct  3 21:55:52 TheArk preclear_disk_ZCT0DTK5[26100]: Post-Read: dd output: 478809161728 bytes (479 GB, 446 GiB) copied, 2636.37 s, 182 MB/s
Oct  3 21:55:52 TheArk preclear_disk_ZCT0DTK5[26100]: Post-Read: dd output: 229282+0 records in
Oct  3 21:55:52 TheArk preclear_disk_ZCT0DTK5[26100]: Post-Read: dd output: 229281+0 records out
Oct  3 21:55:52 TheArk preclear_disk_ZCT0DTK5[26100]: Post-Read: dd output: 480837107712 bytes (481 GB, 448 GiB) copied, 2648.52 s, 182 MB/s
Oct  3 21:55:52 TheArk preclear_disk_ZCT0DTK5[26100]: Post-Read: post-read verification failed!
Oct  3 21:55:55 TheArk preclear_disk_ZCT0DTK5[26100]: S.M.A.R.T.: Error:
Oct  3 21:55:55 TheArk preclear_disk_ZCT0DTK5[26100]: S.M.A.R.T.:
Oct  3 21:55:55 TheArk preclear_disk_ZCT0DTK5[26100]: S.M.A.R.T.: ATTRIBUTE                INITIAL  NOW  STATUS
Oct  3 21:55:55 TheArk preclear_disk_ZCT0DTK5[26100]: S.M.A.R.T.: Reallocated_Sector_Ct    0        0    -
Oct  3 21:55:55 TheArk preclear_disk_ZCT0DTK5[26100]: S.M.A.R.T.: Power_On_Hours           207      241  Up 34
Oct  3 21:55:55 TheArk preclear_disk_ZCT0DTK5[26100]: S.M.A.R.T.: Runtime_Bad_Block        0        0    -
Oct  3 21:55:55 TheArk preclear_disk_ZCT0DTK5[26100]: S.M.A.R.T.: End-to-End_Error         0        0    -
Oct  3 21:55:55 TheArk preclear_disk_ZCT0DTK5[26100]: S.M.A.R.T.: Reported_Uncorrect       0        0    -
Oct  3 21:55:55 TheArk preclear_disk_ZCT0DTK5[26100]: S.M.A.R.T.: Airflow_Temperature_Cel  28       32   Up 4
Oct  3 21:55:55 TheArk preclear_disk_ZCT0DTK5[26100]: S.M.A.R.T.: Current_Pending_Sector   0        0    -
Oct  3 21:55:55 TheArk preclear_disk_ZCT0DTK5[26100]: S.M.A.R.T.: Offline_Uncorrectable    0        0    -
Oct  3 21:55:55 TheArk preclear_disk_ZCT0DTK5[26100]: S.M.A.R.T.: UDMA_CRC_Error_Count     0        0    -
Oct  3 21:55:55 TheArk preclear_disk_ZCT0DTK5[26100]: S.M.A.R.T.: SMART overall-health self-assessment test result: PASSED
Oct  3 21:55:56 TheArk preclear_disk_ZCT0DTK5[26100]: error encountered, exiting...
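
For anyone reading along: per the log above, the post-read step streams the disk through a FIFO into `cmp` against /dev/zero. When cmp hits a non-zero byte it exits ("cmp command failed - disk not zeroed"), and dd then dies of SIGPIPE, which is exactly exit code 141 (128 + 13). A minimal sketch of the same zero-check using an ordinary file instead of a real disk (paths here are illustrative, not from the script):

```shell
# Illustrative zero-check in the spirit of the post-read step: compare the
# first N bytes of a file/device against /dev/zero.
is_zeroed() {
  # cmp -n limits the comparison to N bytes; exit 0 means every byte was zero.
  cmp -n "$2" "$1" /dev/zero >/dev/null 2>&1
}

# Demo with a 64 KiB file of zeros standing in for a disk device.
dd if=/dev/zero of=/tmp/demo.img bs=1024 count=64 2>/dev/null
if is_zeroed /tmp/demo.img 65536; then
  echo "range zeroed"
else
  echo "disk not zeroed"
fi
```

So a post-read failure like this means cmp genuinely saw non-zero data coming back; whether the cause is the drive, the controller, or the script feeding the FIFO is what the SMART table (clean here) helps narrow down.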

 

  • 3 weeks later...
On 7/21/2019 at 5:05 PM, Forusim said:

Hello @gfjardim

 

When a drive is mounted as unassigned (not even shared), your plugin issues "lsof -- /mnt/disks/tempdrive" command every few seconds.

This causes remarkable CPU spikes (30% out of 400%) via the "php" process and doesn't let that drive ever spin down.

 

Would it be possible not to issue this command when no preclear activity takes place?

As a workaround, I have to uninstall this plugin when it's not in use.

 

This enhancement is much appreciated.

 

@gfjardim

Hey this is happening to me now as well [2 years later]. Unraid 6.9.3, plugin version 2021.04.11 [up-to-date]

 

I noticed 'lsof' was pegging an entire CPU at 100%; I investigated and it's coming from /etc/rc.d/rc.diskinfo

 

From /var/log/diskinfo.log, it looks like this benchmark is being run continuously in a loop, with no delay/sleep between iterations:

Mon Oct 18 22:17:49 EDT 2021: benchmark: shell_exec(lsof -- '/mnt/disks/SanDiskSSD' 2>/dev/null | tail -n +2 | wc -l) took 14.509439s.
Mon Oct 18 22:18:03 EDT 2021: benchmark: shell_exec(lsof -- '/mnt/disks/VolatileSSD' 2>/dev/null | tail -n +2 | wc -l) took 13.332904s.
Mon Oct 18 22:18:47 EDT 2021: benchmark: shell_exec(lsof -- '/mnt/disks/SanDiskSSD' 2>/dev/null | tail -n +2 | wc -l) took 13.167924s.
Mon Oct 18 22:19:00 EDT 2021: benchmark: shell_exec(lsof -- '/mnt/disks/VolatileSSD' 2>/dev/null | tail -n +2 | wc -l) took 12.680031s.
Mon Oct 18 22:19:44 EDT 2021: benchmark: shell_exec(lsof -- '/mnt/disks/SanDiskSSD' 2>/dev/null | tail -n +2 | wc -l) took 12.882862s.
Mon Oct 18 22:19:58 EDT 2021: benchmark: shell_exec(lsof -- '/mnt/disks/VolatileSSD' 2>/dev/null | tail -n +2 | wc -l) took 14.628200s.
Mon Oct 18 22:20:44 EDT 2021: benchmark: shell_exec(lsof -- '/mnt/disks/SanDiskSSD' 2>/dev/null | tail -n +2 | wc -l) took 14.887803s.
Mon Oct 18 22:20:57 EDT 2021: benchmark: shell_exec(lsof -- '/mnt/disks/VolatileSSD' 2>/dev/null | tail -n +2 | wc -l) took 13.041714s.

 

 

Those drives are mounted by Unassigned Devices. I'm not sure what this benchmark was trying to accomplish, as I don't have any preclears running, and haven't since a reboot.
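
To make the request concrete, here's a sketch of the behavior being asked for (hypothetical helper names; the real change would live in /etc/rc.d/rc.diskinfo): only run the expensive lsof count while a preclear is actually active, and back off between polls instead of looping hot.

```shell
# Hypothetical mitigation sketch, not the plugin's actual code.
check_mount_busy() {
  # tail -n +2 drops lsof's header line; what remains is the open-file count.
  lsof -- "$1" 2>/dev/null | tail -n +2 | wc -l
}

if pgrep -f preclear_disk.sh >/dev/null 2>&1; then
  check_mount_busy /mnt/disks/example   # placeholder mount point
  sleep 30                              # back off between polls
else
  echo "no preclear running; skipping poll"
fi
```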

  • 2 weeks later...
On 3/10/2021 at 12:03 PM, stepmback said:

That was annoying.

 

I had not rebooted my server in about 160 days. Everything was fine, so why bother? I just rebooted the server and now I am able to preclear the drive. Bug?

Just wanted to say thanks; I don't know how long I would have futzed around before trying a reboot. lol.

  • 2 weeks later...

Built a new server for myself and am having an odd issue with the preclear plugin, specifically when using the gfjardim script. When I kick off a preclear on this new machine, I get the message below in the preview window.

 

/usr/local/emhttp/plugins/preclear.disk/script/preclear_disk.sh: line 514: 0 * 100 /         0 : division by 0 (error token is "0 ")

 

The main Unraid GUI shows that the preclear is finished, but the machine is definitely still doing something, as there is a dd process using CPU.
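
That "division by 0" message is the shell's arithmetic-expansion error: it fires when the divisor in `$(( ... ))` expands to zero (or to an empty/unset variable), which is presumably what happens on line 514 when a size hasn't been read yet. A guarded sketch of that kind of percentage computation (helper name is mine, not from the script):

```shell
# Hypothetical guarded percentage helper: sh arithmetic aborts with
# "division by 0" when the divisor expands to zero or empty, so check
# the total before dividing.
pct_done() {
  total=$1
  done_bytes=$2
  if [ "${total:-0}" -gt 0 ]; then
    echo $(( done_bytes * 100 / total ))
  else
    echo 0
  fi
}

pct_done 0 12345    # would have crashed unguarded; prints 0
pct_done 200 50     # prints 25
```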

 

If I start the process using the Joe L. script, it starts as expected and works.

 

I am running 6.9.2, the newest version of the plugin, and all that jazz.

I do not particularly care which script I need to use when running preclear, but figured I would mention it here in this thread to see if anyone has seen anything similar.

 

The new build is a Ryzen 7 5700G, 16GB of RAM on an MSI B550i Gaming Edge Max motherboard.

I have a much older build as my test system, running something like an Intel Q6600 with 12GB of RAM on a Supermicro motherboard, and the gfjardim script seems to work perfectly fine on that one.


I'm seeing 429 MB/s on the pre-read stage of preclear on my WD Blue 4TB hard drive (info is in the image below). This drive is a shingled (SMR) variety. This isn't read cache; it has pretty much stayed at this speed the whole time. Is that why the read speeds are so high? I'll post a follow-up on my write speeds. Is this normal? I've only seen SSDs with read speeds like this!

[screenshot: preclear status page showing the pre-read speed]


I'm trying to preclear a disk, but it encountered an error claiming the disk is part of the array. I am fairly certain the disk is not part of the array and that the error is wrong. Perhaps it has to do with the disk being assigned the name sda? I have 1 parity disk (sdae), 5 array disks (sdab, sdac, sdai, sdah, sdag), and 34 unassigned devices (sda through sdan, excluding the previously listed names). I have the Unassigned Devices and Unassigned Devices Plus plugins installed, if that makes a difference.

 

Quote

Dec 01 14:15:31 preclear_disk_ZL2MDQA5_12537: Disk /dev/sda is part of unRAID array. Aborted.

Dec 01 14:15:31 preclear_disk_ZL2MDQA5_12537: error encountered, exiting...

 

In the "preview" window, it does not list any potential candidates for preclearing, which seems odd to me.

Quote

The disk '/dev/sda' is part of unRAID's array, or is assigned as a cache device.

 

Please choose another one from below:

 

====================================

 Disks not assigned to the unRAID array

  (potential candidates for clearing)

========================================

 

Is there some kind of workaround I can employ?

tower-diagnostics-20211201-1416.zip


I just got a new 14TB external that I would like to preclear while in the enclosure.

 

I plugged it in and did not mount the drive.  I deleted the NTFS partition to start fresh. 

 

When I click on start to preclear it just gets stuck on "Starting...".  No read or write activity and I don't see anything in the log.

 

I have precleared many drives over the years and never had a problem. 

 

How should I troubleshoot?

 

UPDATE: A reboot of the server fixed the issue.

 


Unraid version: 6.9.2

 

 

Hi all 

 

I have an unraid server that consists of the following:

Motherboard: Asus P5Q Premium

CPU : Intel Core 2 Duo E7400

Memory 2GB

Parity Disk: Seagate Ironwolf Pro 4TB

Disk 1: Seagate Ironwolf Pro 4TB

Cache Drive: Crucial MX500 1TB

 

I just purchased an additional WD Red Pro 4TB drive. I installed it into a spare slot in my rack, started the preclear plugin, and selected the gfjardim 1.0.22 script. I went with all the other default options and started the script. I get the following error: 'error encountered, please verify log'. In the preclear log (see below), it says something about low memory. I know I only have 2GB, but is this really a problem for preclear to work? If it is, is there any way to reduce the memory load?

 

Here is the preclear log, any guidance would be appreciated!

 

 

Dec 06 13:23:38 preclear_disk_VBH0L3ZF_10889: Command: /usr/local/emhttp/plugins/preclear.disk/script/preclear_disk.sh --cycles 1 --no-prompt /dev/sdf
Dec 06 13:23:38 preclear_disk_VBH0L3ZF_10889: Preclear Disk Version: 1.0.22
Dec 06 13:23:38 preclear_disk_VBH0L3ZF_10889: S.M.A.R.T. info type: default
Dec 06 13:23:38 preclear_disk_VBH0L3ZF_10889: S.M.A.R.T. attrs type: default
Dec 06 13:23:38 preclear_disk_VBH0L3ZF_10889: Disk size: 4000787030016
Dec 06 13:23:38 preclear_disk_VBH0L3ZF_10889: Disk blocks: 976754646
Dec 06 13:23:38 preclear_disk_VBH0L3ZF_10889: Blocks (512 bytes): 7814037168
Dec 06 13:23:38 preclear_disk_VBH0L3ZF_10889: Block size: 4096
Dec 06 13:23:38 preclear_disk_VBH0L3ZF_10889: Start sector: 0
Dec 06 13:23:41 preclear_disk_VBH0L3ZF_10889: Pre-read: pre-read verification started (1/5)....
Dec 06 13:23:41 preclear_disk_VBH0L3ZF_10889: Pre-Read: dd if=/dev/sdf of=/dev/null bs=2097152 skip=0 count=4000787030016 conv=noerror iflag=nocache,count_bytes,skip_bytes
Dec 06 13:23:44 preclear_disk_VBH0L3ZF_10889: size:    946212, available: 93896, free: 9%
Dec 06 13:23:44 preclear_disk_VBH0L3ZF_10889: Filesystem     1K-blocks   Used Available Use% Mounted on
Dec 06 13:23:44 preclear_disk_VBH0L3ZF_10889: rootfs            946212 852316     93896  91% /
Dec 06 13:23:44 preclear_disk_VBH0L3ZF_10889: devtmpfs          946220      0    946220   0% /dev
Dec 06 13:23:44 preclear_disk_VBH0L3ZF_10889: tmpfs            1020916      0   1020916   0% /dev/shm
Dec 06 13:23:44 preclear_disk_VBH0L3ZF_10889: cgroup_root         8192      0      8192   0% /sys/fs/cgroup
Dec 06 13:23:44 preclear_disk_VBH0L3ZF_10889: tmpfs             131072    304    130768   1% /var/log
Dec 06 13:23:44 preclear_disk_VBH0L3ZF_10889: /dev/sda1       15000232 294664  14705568   2% /boot
Dec 06 13:23:44 preclear_disk_VBH0L3ZF_10889: overlay           946212 852316     93896  91% /lib/modules
Dec 06 13:23:44 preclear_disk_VBH0L3ZF_10889: overlay           946212 852316     93896  91% /lib/firmware
Dec 06 13:23:44 preclear_disk_VBH0L3ZF_10889: tmpfs               1024      0      1024   0% /mnt/disks
Dec 06 13:23:44 preclear_disk_VBH0L3ZF_10889: tmpfs               1024      0      1024   0% /mnt/remotes
Dec 06 13:23:44 preclear_disk_VBH0L3ZF_10889: Low memory detected, aborting...
Dec 06 13:23:44 preclear_disk_VBH0L3ZF_10889: Pre-read: pre-read verification failed!
Dec 06 13:23:46 preclear_disk_VBH0L3ZF_10889: S.M.A.R.T.: Error:
Dec 06 13:23:46 preclear_disk_VBH0L3ZF_10889: S.M.A.R.T.:
Dec 06 13:23:46 preclear_disk_VBH0L3ZF_10889: S.M.A.R.T.: ATTRIBUTE                INITIAL  NOW  STATUS
Dec 06 13:23:46 preclear_disk_VBH0L3ZF_10889: S.M.A.R.T.: Reallocated_Sector_Ct    0        0    -
Dec 06 13:23:46 preclear_disk_VBH0L3ZF_10889: S.M.A.R.T.: Power_On_Hours           0        0    -
Dec 06 13:23:46 preclear_disk_VBH0L3ZF_10889: S.M.A.R.T.: Temperature_Celsius      33       33   -
Dec 06 13:23:46 preclear_disk_VBH0L3ZF_10889: S.M.A.R.T.: Reallocated_Event_Count  0        0    -
Dec 06 13:23:46 preclear_disk_VBH0L3ZF_10889: S.M.A.R.T.: Current_Pending_Sector   0        0    -
Dec 06 13:23:46 preclear_disk_VBH0L3ZF_10889: S.M.A.R.T.: Offline_Uncorrectable    0        0    -
Dec 06 13:23:46 preclear_disk_VBH0L3ZF_10889: S.M.A.R.T.: UDMA_CRC_Error_Count     0        0    -
Dec 06 13:23:46 preclear_disk_VBH0L3ZF_10889: S.M.A.R.T.: SMART overall-health self-assessment test result: PASSED
Dec 06 13:23:46 preclear_disk_VBH0L3ZF_10889: error encountered, exiting...
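
A note on that "Low memory" line: on Unraid, / (rootfs) lives in RAM, so the df dump above (93896 of 946212 1K-blocks available, roughly 9% free) is what triggers the abort. A sketch of that kind of free-space guard, with an illustrative threshold (not taken from the script):

```shell
# Hypothetical rootfs free-space guard in the spirit of the log above.
rootfs_free_pct() {
  # df -P keeps one line per filesystem; on line 2, column 2 is total
  # 1K-blocks and column 4 is available 1K-blocks.
  df -Pk "$1" | awk 'NR==2 { print int($4 * 100 / $2) }'
}

pct=$(rootfs_free_pct /)
echo "rootfs ${pct}% free"
if [ "$pct" -lt 5 ]; then
  echo "Low memory detected, aborting..."
fi
```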

54 minutes ago, rajk said:

Dec 06 13:23:44 preclear_disk_VBH0L3ZF_10889: Low memory detected, aborting...

 

54 minutes ago, rajk said:

i only have 2GB but is this really a problem

Appears to be

 

You can try the binhex_preclear app instead (it does the identical thing but is command-line only; he's got a good FAQ on how to do it).

