rc16c: "pclear_new_disks: write: No space left on device"


Recommended Posts

I've been migrating to a Norco RPC-4224, adding 7 formerly external USB drives filled with data to the array one by one, and I've come across this error while clearing the sixth drive:

 

...
Sep 16 17:19:15 UnRAID emhttp: clear: 98% complete
Sep 16 17:25:07 UnRAID emhttp: clear: 99% complete
Sep 16 17:31:05 UnRAID emhttp: clear: 100% complete
Sep 16 17:37:10 UnRAID emhttp: pclear_new_disks: write: No space left on device

 

After that, unRAID presents only the option to Clear the disk (again) instead of the usual Format disk option that follows a clearing operation.

 

I pressed Clear one more time, but after the hours-long operation I got the same error at the end, and again only the Clear option was presented.

 

I ran a short SMART test on both the suspect drive (sdu) and the parity drive and they both passed.
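
(For anyone following along, something like this runs the short test and pulls the report from the console, assuming smartmontools is on the box; /dev/sdu is just my drive's device name, so substitute your own:)

smartctl -t short /dev/sdu       # kick off a short self-test
smartctl -l selftest /dev/sdu    # check the self-test log a few minutes later
smartctl -a /dev/sdu             # full SMART report for the attachment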

 

The syslog does not show any significant drive errors; just the "pclear_new_disks: write: No space left on device".

 

Prior to the clearing operations, I had successfully copied all 2TB of files from this drive to the array with no errors (all within the reporting period of the syslog attached).

 

So what should my next step be?  Replace the drive, even though no drive-related errors are indicated?  If this is a "bug" (even though I had successfully cleared and formatted 5 drives previously), is there a way to manually mark the drive as "cleared" so I can get to the Format option?

syslog-2013-09-17.txt.zip

SMART_Reports.zip

Link to comment

The preclear_disk script is available in the User Customisations part of the forum.    It can be run on any disk that is not part of the array, and you can keep using the array while it is running.    When the script completes, it writes a signature to the disk that tells unRAID the disk has been pre-cleared, so your array is only offline for the few seconds/minutes it takes to add the disk.
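
(For reference, the basic invocation looks something like the following; double-check the device name with the -l listing before writing zeros to anything:)

preclear_disk.sh -l              # list disks eligible for preclearing
preclear_disk.sh /dev/sdX        # run a full preclear cycle on the chosen disk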

 

The script also provides extensive options for stress-testing a disk, which is always a good idea before adding it to unRAID, as you do not want to add any disks that look like they are soon going to fail.

Link to comment

Yea.  I still had a 2009 version of the pre-clear script on my flash drive, so I moseyed on over to the pre-clear script thread and noticed a newer version.

 

When I ran the -l option (the eligible pre-clear drives list), it reported no drives.  Hrmph.  Knowing that the target drive ID was "sdu," I went ahead with "preclear_disk.sh /dev/sdu", but the script came back with:

 

Sorry, but /dev/sdu is currently mounted and in use:

/dev/sdu1 on /mnt/disk/sdu1 type ntfs (ro,umask=111,dmask=000)

Clearing will NOT be performed

 

I am assuming this is a result of the disk initially being added to the array, going through the unsuccessful built-in clearing operation, and not being automatically unmounted when it was removed from the array.  unMenu doesn't offer an unmount option; it only offers to "create ReiserFS" on the "sdu1" partition of the target drive, so I will restart the computer to see if it comes up unmounted.  I'm in the middle of a lengthy file transfer to the array, so I'll do the restart after the transfer completes and report back.
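
(In hindsight, it probably could have been unmounted from the console instead of rebooting; something along these lines, assuming nothing else is holding the mount open:)

umount /mnt/disk/sdu1        # release the NTFS mount left over from the clearing attempt
mount | grep sdu             # confirm it's gone before starting the preclear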

Link to comment

Okay, I got the pre-clear script running for the better half of a day, but then I had to restart the computer that was hosting the telnet session running the script.

 

How can I tell if the preclear_disk.sh script is still running?  I ran a "ps -ef" command but don't see it listed; since it's a script, I'm not sure how it would show up in the process list.
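
(For the record, something like this would have caught it; the bracketed letter keeps the grep process itself out of the results:)

ps -ef | grep [p]reclear     # any surviving preclear_disk.sh processes will show up here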

 

The script had definitely finished the first phase, and was at 2% on the second phase.

 

BTW, during at least the first phase, the array was not accessible from any remote clients over SMB or NFS, whether PCs or media players (e.g. PCH), even though the web GUI was up and responding to all selections, and so was unMenu.

 

EDIT: I think closing the remote telnet session terminated the running pre-clear script before it finished, because when I went ahead and stopped the array and then added the target drive, unRAID still offered only the Clear option.

 

(sigh) I've now initiated the script again via IPMI, so it's truly running off a session on the machine itself.  I'll have to wait until tomorrow morning to see if this whole process succeeds in getting the target drive formatted.

Link to comment

If you run it from a TELNET session and are not using SCREEN (a plugin for unRAID), then the preclear will terminate when you close the TELNET window, as you found out.  If you run it from the unRAID console (keyboard and monitor attached to your unRAID server) or an IPMI connection, it will continue.  I use the MyMenu tab in unMenu to monitor my preclears remotely, and I always preclear from the unRAID console on my preclear station, or from the ESXi VM console window when preclearing on the server.
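
(For what it's worth, plain GNU screen gives the same protection if you don't use the plugin; the preclear keeps running after the telnet window closes.  Roughly:)

screen -S preclear                 # start a named screen session
preclear_disk.sh /dev/sdX          # run the preclear inside it
# detach with Ctrl-A d; later, reattach from any telnet session with:
screen -r preclear
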
Link to comment

Yea, lesson learned!  But I thought I was past having to manually pre-clear drives with unRAID 5.  :(

 

Another thing of note: the first time around I couldn't access the server's shares, but after stopping and restarting the array and then reinitiating the pre-clear, I can now access them.  Weird bug, but nothing to write home about presently, as I'm not sure exactly what prevented the shares from being accessible the first time; could it have been the pre-clear script, or was it a more obscure bug in unRAID 5?

 

On step 2 of the 10-step pre-clearing process after a whole night of running...

Link to comment

Yea, lesson learned!  But I thought I was past having to manually pre-clear drives with unRAID 5.  :(

Supposedly the error where you had to preclear a drive on one of the RCs was fixed, but you may have found another bug.  In any case, I like to preclear my drives so that my array is not down while unRAID is clearing the drive, since that takes a while.  If it is a new drive, I also run multiple passes of preclear to break it in.  If it is a drive I am transferring from one unRAID box to another, I just run preclear with the -n parameter to skip the pre- and post-read steps, so it is only writing zeros.
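
(Roughly what that looks like on the command line; -n is as described above, and if I remember right the cycle count is set with -c, so check the script's help output to be sure:)

preclear_disk.sh -c 3 /dev/sdX     # multiple full cycles to break in a new drive (-c per my recollection)
preclear_disk.sh -n /dev/sdX       # skip the pre/post reads and just write zeros, for a known-good drive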

Another thing of note: the first time around I couldn't access the server's shares, but after stopping and restarting the array and then reinitiating the pre-clear, I can now access them.  Weird bug, but nothing to write home about presently, as I'm not sure exactly what prevented the shares from being accessible the first time; could it have been the pre-clear script, or was it a more obscure bug in unRAID 5?
Not sure what to tell you here.

On step 2 of the 10-step pre-clearing process after a whole night of running...
Sounds like you could use the -n parameter to save time, but only use it with known-good drives.  I limit it to drives that have already been through 3 cycles of preclear and that I am just transferring from one server to another.
Link to comment

After an exhausting 43 hours of waiting for the pre-clear script to finish its work on a 2TB drive: success!  I was able to add it to the array, and unRAID went straight to the option to Format the drive.

 

So there appears to be a bug in v5.0-rc16c's built-in clearing module; perhaps it involves a "used" drive formatted with NTFS.

 

Anyway, if I come across it again with the remaining 3 drives I need to add to bring the array to 24 drives, I will remember to invoke the -n switch.

Link to comment
