Question thread for a new prospective unRAID user



The preclear script writes a special signature to the drive so that Unraid can recognise that it has been precleared when you add it as a data drive.

 

If the drive does not have this special signature when it is added to the Unraid system then it will take many hours to update parity to take the new disk into account, with the system unavailable for normal use during this time, which is why adding a precleared disk is preferable.

 

In both cases Unraid will create its own partition table on the data disk.

Link to comment

... If the drive does not have this special signature when it is added to the Unraid system then it will take many hours to update parity to take the new disk into account, with the system unavailable for normal use during this time.

 

NO.  This is not correct.  While it's correct that it will take many hours to add a disk if it has not been pre-cleared, it is NOT because it is updating parity.  UnRAID simply writes all zeroes to the new disk [i.e. clears the disk => which is why the script JoeL wrote a long time ago to avoid this is called PRE-clear  :) ]

 

Link to comment
NO.  This is not correct.  While it's correct that it will take many hours to add a disk if it has not been pre-cleared, it is NOT because it is updating parity.  UnRAID simply writes all zeroes to the new disk [i.e. clears the disk => which is why the script JoeL wrote a long time ago to avoid this is called PRE-clear  :) ]

 

Is it true however that the preclear script writes a special signature to the disk, and that unRAID was programmed to recognize this flag and trust it as a cleared drive?

Link to comment

NO.  This is not correct.  While it's correct that it will take many hours to add a disk if it has not been pre-cleared, it is NOT because it is updating parity.  UnRAID simply writes all zeroes to the new disk [i.e. clears the disk => which is why the script JoeL wrote a long time ago to avoid this is called PRE-clear  :) ]

 

Is it true however that the preclear script writes a special signature to the disk, and that unRAID was programmed to recognize this flag and trust it as a cleared drive?

Yes
Link to comment

You cannot safely use a drive with unRAID while it has a value of pending sectors that is anything other than 0.

 

Thank you for the input.

 

I just read the FAQ on parity and I have a question on it.  Since the parity bit on the parity drive is calculated by comparing the same bit across all disks, does that mean that any time I write anything to any drive, all other drives must spin up as well in order for the parity drive to modify the parity bits of the affected blocks?  And if not, then why not?

 

edit:  Never mind, I think I answered my own question.  I suppose it only needs to know the current value of the byte being overwritten and the new value being written.  If a bit goes 0 -> 1 or 1 -> 0, the corresponding parity bit must be toggled.  So there is no need to spin up the other drives.
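
For anyone following along, here is a minimal sketch of that read-modify-write idea in Python. It is purely illustrative (the function name and the tiny example buffers are made up, and this is not unRAID's actual code), but it shows why only the written-to data disk and the parity disk need to be touched:

```python
# Illustrative only: the read-modify-write parity update described above.
# This is NOT unRAID's code; it just shows why only the target data disk
# and the parity disk need to be read and written (and spun up).

def update_parity(old_data: bytes, new_data: bytes, old_parity: bytes) -> bytes:
    """Return the new parity block for a single-block write.

    Parity is the XOR of the same block across all data disks, so
    new_parity = old_parity XOR old_data XOR new_data: any bit that flips
    in the data flips the matching parity bit, and unchanged bits leave
    parity alone.  The other data disks never have to be read.
    """
    return bytes(p ^ od ^ nd for p, od, nd in zip(old_parity, old_data, new_data))

# A tiny 4-byte "block" as an example.
old_data   = bytes([0x0A, 0x00, 0xFF, 0x12])
new_data   = bytes([0x0B, 0x00, 0x00, 0x12])
old_parity = bytes([0x06, 0x0F, 0xAA, 0x34])

new_parity = update_parity(old_data, new_data, old_parity)
print(old_parity.hex(), "->", new_parity.hex())   # 060faa34 -> 070f5534
```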

I would just like to congratulate itsrumsey for taking the trouble to understand parity. I wish many more users would do so.

 

If you understand unRAID parity and keep it in mind it will go a long way to help prevent mistakes when making changes to your array configuration, such as replacing a drive.

Link to comment

NO.  This is not correct.  While it's correct that it will take many hours to add a disk if it has not been pre-cleared, it is NOT because it is updating parity.  UnRAID simply writes all zeroes to the new disk [i.e. clears the disk => which is why the script JoeL wrote a long time ago to avoid this is called PRE-clear  :) ]

 

Is it true however that the preclear script writes a special signature to the disk, and that unRAID was programmed to recognize this flag and trust it as a cleared drive?

 

Of course => that's why it's called PREclear.  It clears the drive BEFORE you add it to UnRAID, and UnRAID recognizes that it's already been cleared, so it can skip the many-hours-long clearing process for the drive.    I think the main reason Joe wrote this script was because during the clearing process the UnRAID array is not available.  Since the pre-clear runs "outside of UnRAID" (it's just a Linux script), it has no impact on the operation of the array, and avoids that effective downtime.

 

When you add an already-cleared drive, all that UnRAID needs to do is format the drive ... which just takes a couple minutes.

 

Link to comment

It might help to put the preclear script into historic context.  The preclear script didn't always exist and adding a drive to your array meant that it would be unavailable while the newly added drive was being zeroed.  A community member (Joe L.) wrote the preclear script in order to minimize the downtime needed to add a new drive to the array by allowing the zeroing process to be done before the drive was added.

Link to comment

What a game changer that must have been.  I can't imagine having to shut down my server for a day or more just to add some additional storage.  At the current rate the 2x preclear script on my 2TB test drives looks like it will take close to two full days.

 

[screenshot: preclear progress on the 2TB test drives]

 

I would just like to congratulate itsrumsey for taking the trouble to understand parity. I wish many more users would do so

 

Thank you for the kind words, I believe it's important to have at least a rudimentary understanding of any system you are going to depend on to house all of your data!  The community has been very quick to answer any of my questions too, which inspires confidence that I'll have support if and when the time comes that I need it.

Link to comment

When the array did the clearing I don't think it took as long as the preclear script.  I'm not sure about this, but the array version of the preclear didn't do the pre-read and post-read (the step you are on now) since they are technically unnecessary (and can be disabled by a command line parameter).  The preclear script adds a few steps to the process, since doing the preclear outside the array removes the time pressure to bring the array back up.  These additional steps are done to test the disk surface and mechanical components in order to weed out any drives that may fail early; the post-read, for example, reads a chunk of data linearly, then a random block, the first block of the drive, another random block, the last sector of the drive and finally another random block, in order to "torture" the drive and expose any weakness.  This is the reason a SMART report is collected when the process begins and then compared to another at the end of the process.  The comparison can be confusing because any lines that differ are displayed, and some of these can be ignored since they don't represent a problem (such as power-on hours).  The ones you are typically after are attribute #5 (Reallocated Sector Ct) and #197 (Current Pending Sector).  There are others too (and they can vary between drives), but those are usually the two you want to see at zero (low numbers for #5 are OK too).
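
To picture the before/after SMART comparison described above, here is a rough Python sketch. The attribute names follow typical smartctl output, the values are invented for the example, and this is not the preclear script's own code:

```python
# Rough sketch of the before/after SMART comparison described above.
# This is NOT the preclear script's code; the attribute names mirror typical
# smartctl output and the values are made up for the example.

smart_before = {
    "Power_On_Hours": 12,
    "Reallocated_Sector_Ct": 0,     # attribute #5
    "Current_Pending_Sector": 0,    # attribute #197
}
smart_after = {
    "Power_On_Hours": 60,           # expected to change, safe to ignore
    "Reallocated_Sector_Ct": 0,
    "Current_Pending_Sector": 2,    # any non-zero value here is a red flag
}

IGNORABLE = {"Power_On_Hours"}      # differences that don't indicate a problem

for attr, before in smart_before.items():
    after = smart_after[attr]
    if after != before and attr not in IGNORABLE:
        print(f"{attr}: {before} -> {after}  (investigate before trusting this drive)")
```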

Link to comment
What a game changer that must have been.  I can't imagine having to shut down my server for a day or more just to add some additional storage.  At the current rate the 2x preclear script on my 2TB test drives looks like it will take close to two full days.
To be fair, preclear does a LOT more than just write zeroes to the drive, which is what the clearing operation built into unraid does. The downtime to add a drive without using preclear is substantial, but nowhere near the time a preclear cycle takes. Most people feel that the extra testing and verifying that the preclear script does is more important than the end result of having a drive that unraid will add more quickly. Preclear pulls smart reports before and after and compares them for you, and it verifies that the zeros are actually read back successfully, which the stock clearing process does not do.

 

(Heh, sniped by WizADSL, posted anyway just because)

Link to comment

Definitely true that JoeL's pre-clear script does more than just zero the drive.    The actual zeroing takes roughly 1/4th of the time that the pre-clear script runs => THAT is how long the array will be unavailable if you simply add a drive without pre-clearing it [still many hours; but not nearly as many as the pre-clear runs].

 

In addition, many folks run more than one pass of pre-clear, which of course takes even longer.

 

The extra steps the pre-clear does are, as noted above, designed to give the drive a good workout and hopefully isolate any infant mortality issues BEFORE you actually use the drive.    This can help identify drives with problems so you won't find those problems after you've added the drive to your array.

 

Link to comment

When the array did the clearing I don't think it took as long as the preclear script.  I'm not sure about this but the array version of the preclear...

I always refer to the "array version" as clearing, not pre-clearing. Preclearing is what you do so the array won't have to clear.
Link to comment

Preclear came out in 2008, when I believe the largest disks were 1TB. And it only wrote the binary zeros - it did not do any drive testing. But even with the smaller sizes and limited I/O, it still took quite a while to add a new disk, during which the entire array was unavailable. It was definitely a PITA. Joe L.'s script certainly was well received by the community!

Link to comment

When the array did the clearing I don't think it took as long as the preclear script.  I'm not sure about this but the array version of the preclear...

I always refer to the "array version" as clearing, not pre-clearing. Preclearing is what you do so the array won't have to clear.

 

That's because it IS clearing  :)    The PRE- in JoeL's script means you're clearing BEFORE you add the drive to the array.  It's not really an "array version" of pre-clear => clearing is simply an array function that you can avoid by doing it BEFORE you add the drive (thus the "Pre").

 

 

Link to comment

So my drives and new case have arrived and I just built the server and purchased my license.  Preclear is running on all the new disks, which will take some time.  Before that finishes I need to find a solution for copying data from USB drives to my unRAID shares.  I do believe there are several plugins that would be sufficient for this task; does anyone have suggestions?

 

I'd like to be able to perform the move from the console using something like MC.

 

Also, is anyone familiar with what the icon on the top drive in this preclear plugin indicates?  The plus sign:

 

[screenshot: preclear plugin drive list showing the plus-sign icon on the top drive]

Link to comment

First, I presume you mean 15TB, not 15GB.

 

Second, why not just copy it across the network from a PC that natively supports NTFS?    Assuming a gigabit network it will be just as fast as directly connecting the drives.

 

You're correct, I meant 15TB.  There is not an available machine with a gigabit connection to the server; most of the devices in the house are wireless.  It looks like the Unassigned Devices plugin will do the trick.

 

I have another question regarding allocation method and split level.  What happens if I have split level set so that a show will keep all seasons on one disk and that disk then fills to capacity but new files are copied to the share for that show?

Link to comment

The copy will fail.

 

I don't know if the min free logic can be used to force a move to a different disk before the next copy is started -- but the last time I experimented with that (in v5) it failed if the split level was "forcing" a show to stay on the current disk.    I've long since (years ago) quit using overly-restrictive split levels for exactly that reason.  [Other than keeping multiple files associated with the same recording together, there's no reason to artificially keep multiple related shows together ... that's the beauty of the user shares -- they "look" like a single large storage volume externally; and where the files are physically located really doesn't matter.]

 

Link to comment

The reason I wanted to keep the shows on the same disk is because all of my metadata is stored on disk (fanart, nfo, etc).  Theoretically this would prevent multiple disks spinning up to grab art for a single show.  Ultimately though you are right it may not be worth the future headaches to impose such restrictions.

 

Is there a way to change the web UI banner?  I am using the dark theme and the golden beach sunset just does not mesh well.

Link to comment

The reason I wanted to keep the shows on the same disk is because all of my metadata is stored on disk (fanart, nfo, etc).  Theoretically this would prevent multiple disks spinning up to grab art for a single show.  Ultimately though you are right it may not be worth the future headaches to impose such restrictions.

 

Is there a way to change the web UI banner?  I am using the dark theme and the golden beach sunset just does not mesh well.

 

You can define "spin-up groups", which will cause all drives in a group to spin up when one does -- for shares that need more than one drive to hold all their content, this ensures you don't get delays when data that spans the drives is needed.

 

As for changing the banner -- I believe that's a "coming attraction" (not sure, but I think I've read that in one of the release threads) ... but in any event you can turn off the banner altogether, which will get rid of the "golden beach sunset"  :)

Link to comment

The reason I wanted to keep the shows on the same disk is because all of my metadata is stored on disk (fanart, nfo, etc).  Theoretically this would prevent multiple disks spinning up to grab art for a single show.  Ultimately though you are right it may not be worth the future headaches to impose such restrictions.

If you get your split levels correct then you can apply this sort of limit.    However if the disk selected by the split level gets full then you will start getting out-of-space errors when trying to add additional material.

 

One split level that is not mentioned that often is level 0 (manual).  At this level you create the top level folders on specific disks.    The way this works is that unRAID tries to match as much of the path as it can against the top level folders on each disk (including the share folder as the first part).  If only one disk matches the final criteria then that is selected as the target.  If multiple disks match then the allocation method is used to select from the possible choices.  Once a folder is created this way it is never split across disks without you taking manual action.  This can provide a nice mix of manual and automatic control for those who want a bit more control over where media is stored.

 

For instance I store TV shows under letter folders (e.g. A, B, C etc).  If a letter contains too much media for one disk I can create the same letter on another disk and unRAID starts using that as well, applying the allocation method (I use Most Free).  This does mean that if a disk gets full I may need to manually move a show to a different disk, but that is relatively rare.
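
A toy Python sketch of the matching idea described above (the disk layout, folder names and tie-break rule are hypothetical, and this is not unRAID's allocation code):

```python
# Toy sketch of the split-level-0 idea described above: match as much of the
# target path as possible against the top-level folders that already exist on
# each disk, then let the allocation method (here: most free space) break ties.
# This is NOT unRAID's code; the disks, folders and sizes are hypothetical.

disks = {
    "disk1": {"folders": {("TV", "A"), ("TV", "B")}, "free_gb": 120},
    "disk2": {"folders": {("TV", "B"), ("TV", "C")}, "free_gb": 900},
}

def pick_disk(path_parts):
    """Return the disk whose existing folders match the longest prefix of the path."""
    best_len, candidates = -1, []
    for name, info in disks.items():
        # Longest prefix of the path that already exists as a folder on this disk.
        match = max((len(f) for f in info["folders"]
                     if tuple(path_parts[:len(f)]) == f), default=0)
        if match > best_len:
            best_len, candidates = match, [name]
        elif match == best_len:
            candidates.append(name)
    # Allocation method breaks the tie ("most free" in this sketch).
    return max(candidates, key=lambda n: disks[n]["free_gb"])

print(pick_disk(["TV", "B", "Some Show", "Season 01"]))   # disk2: B exists on both, more free
print(pick_disk(["TV", "A", "Another Show"]))             # disk1: only disk with TV/A
```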

 

Is there a way to change the web UI banner?  I am using the dark theme and the golden beach sunset just does not mesh well.

At the moment the choice via the GUI is to have it on or off.  You can change it by delving under the hood and bypassing the GUI, but this is more effort than most want.  A user-selected banner via the GUI has been promised as something that is forthcoming.
Link to comment

Thank you for the extra information, I may use a manual splitting method like that.  I'll be sure to check around the options for disabling the banner.

 

I just completed my preclears and attempted to create the array.  Is there a reason that the array is trying to build parity even though all disks in the system should have the "preclear" flag on them?  Does this mean all disks will be spinning another 12 hours and I should hold off on copying anything over to them?

Link to comment

Thank you for the extra information, I may use a manual splitting method like that.  I'll be sure to check around the options for disabling the banner.

 

I just completed my preclears and attempted to create the array.  Is there a reason that the array is trying to build parity even though all disks in the system should have the "preclear" flag on them?  Does this mean all disks will be spinning another 12 hours and I should hold off on copying anything over to them?

The pre-clear flag is ignored as far as the parity disk is concerned, so yes, I'm afraid the disks will keep spinning while parity is built.  Ideally you should then run a parity check to make sure it was built OK, although that can probably wait.

 

The case of having all disks precleared is a rather special one and there is no special treatment for it.  When you first build the array unRAID does not care whether disks have been precleared or not - it simply builds the array and then builds parity.  The preclear becomes important if you later want to add an additional disk without breaking parity.

Link to comment

To build on the above, parity doesn't know what's on those drives; it's only doing the math based on the 1's and 0's it sees at the bit level to calculate the parity.
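
As a purely illustrative sketch (again, not unRAID's code), single parity can be pictured as a byte-wise XOR across every data disk, and the build has to read every byte of every disk to compute it, zeroed or not:

```python
# Illustrative only: single parity as a byte-wise XOR across every data disk.
# The build reads every disk end to end regardless of whether the bytes are
# real data or preclear zeroes, which is why an array of freshly precleared
# disks still gets a full parity build.

from functools import reduce

disk1 = bytes([0x00, 0x00, 0x00, 0x00])   # a precleared (all-zero) disk
disk2 = bytes([0xDE, 0xAD, 0xBE, 0xEF])
disk3 = bytes([0x12, 0x34, 0x56, 0x78])

parity = bytes(reduce(lambda a, b: a ^ b, column)
               for column in zip(disk1, disk2, disk3))
print(parity.hex())   # cc99e897: the zeroed disk contributes nothing to the XOR,
                      # but unRAID can't know that without reading it.
```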

 

I suspect it's pretty rare that anyone adds all precleared drives to start an array, so it would be an edge case to handle this situation differently than when someone adds drives which already have some data/media on them, which definitely requires parity to be built.

 

Anyway, it sounds like you're getting close now; enjoy :)

Link to comment
