[SOLVED] Advice for a N00b



Hello to the group.  

 

UnRAID Newbie here, and well not quite even that far.  Right now I have a media server based on Windows 7, and am very close to making the decision to swap over to UnRAID.  I have spent the past 2-3 days of my spare time reading about UnRAID and have corresponded with some users over at AVS Forum about what they are doing.  I have a few general operation questions that have come up as I'm reading.  I think I know the answer to most, but just want to make sure I'm understanding correctly.

 

My Current Server:

Windows 7 64 Ultimate

FlexRAID

Antec 1200

Biostar TA790GXE 128M

AMD Athlon II X2 3.0 GHz

Intel EXPI9301CTBLK 10/100/1000 NIC (Realtek on MB disabled)

4 GB OCZ DDR2 1066

OCZ ModXStream 600W PSU

1 - 250 GB Drive for OS (old 5-6 yrs / noisy, probably will go away)

1 - WD10EALS 1.0 TB 7200 RPM Blue drive (currently holds everything but media: scanned documents, photos, software, etc.; backed up to an external hard drive)

1- WD20EARS 2.0 TB 5400 RPM Green drive (currently runs parity for FlexRAID)

2 - Hitachi 7200 RPM 2.0 TB drives, 7220 models (media drives; I don't have the whole model # at my fingertips right now)

1 - Hitachi H3IK20003277SP 2 TB 7200 drive (media drive)

 

All the drives except the 1st one are around a year old +/-.

I will probably put the 1.0 TB Blue drive into the array (which it currently isn't in) and still keep the data backed up to the external drive.

 

I am down to a little over 1 TB of space on the 3 media drives so I will probably be adding an additional data drive when I make any changeover.  I probably have a couple hardware related questions about drives, and a controller I now need to add (MB is tapped out), but will wait and do this in the hardware sections of the forum after I do a little more reading here.

 

So on to my questions:

 

1. I don't want to have to change my main system hardware at the moment if I don't have to.  I saw at least one Biostar MB listed in the compatible hardware list, but my specific model was not.  Do you see a problem with anything in this list being compatible?

2. Pre-clearing drives obviously takes a while.  Can you Preclear multiple drives at one time, or do you have to do them one at a time?  Does preclear format the drive at the same time or is it two steps?

3. Cache drives – How much flexibility do you have in how often data is copied over to the parity protected drive?  If you set this to occur regularly what happens if I start copying something to the array while it is relocating already cached files?

4. One of the issues I have with FlexRAID, though it is a good product, is that it is snapshot RAID.  Utilizing a cache drive that only copies over at night feels like the same thing, though not quite as bad, because with FlexRAID you copy onto the array, which is then invalid until the parity update is done.  At least with a cache drive the array is not at risk to drive failure, just the data on the cache drive.

5. With FlexRAID, because parity only updates at night rather than running constantly, I used my faster drives for data and a green drive for parity.  With UnRAID is this still the correct strategy, or do I need a fast parity drive?

6. Can a few of you tell me what copy speeds you really see on current builds?  I have read many tests and FAQs that note write speeds of 15-30 MB/sec.  Is this really the best case?  I have had a few people tell me they keep write speeds at 50 MB/sec +/- and up to 60 or so with a cache drive.  I currently get speeds of 50-65 MB/sec copying to the Win7 machine.  I used to get speeds in the 90s across the network, but that is a rare occurrence now (not even sure what has caused it to change).  I'd really like to keep speeds in the 50+ range if that's possible when copying over 20-50 GB rips (sometimes several at once).  Would it really cut my write speed down that much from what it is now?

7. I see drive shares are faster than user shares for write speed.  If I understand this correctly, a drive share is just a share of a physical disk.  A user share is a shared folder that could span multiple disks, correct?  If this is correct, can you set up drive shares with parity protection turned off initially, copy over the data, then set up parity protection and set up user shares to put it out on the network and get rid of the drive shares?  I was figuring it would take a weekend to build and set this up, but with pre-clears and 4+ TB of data to copy, I'm not sure it can be done that fast anymore.

8. Can drives formatted in UnRAID be installed in a Win7 machine and be read?  Or is the format not compatible?

9. With only 4 or so protected drives, I'm not too worried about 2-drive failure protection.  However, down the road I can see this being something I would care about.  I saw it mentioned in a couple places and saw one older thread where it was discussed at length.  Is this something that is planned?  I don't see it in the future road map listing planned features.  There seemed to be some disagreement whether or not it was feasible at all, but I think FlexRAID does offer it (in snapshot) and that setup is similar (basically JBOD with parity).

10. I'm pretty IT literate when it comes to function and networking, but am not a Linux person and not an IT professional.  I learned computers on old DOS-based PCs, Apples, etc. before Windows 3.0 was in development, so I'm not bothered by command-line functions, but I am just not familiar with Linux at all.  How difficult is it to pick up the basics for customization if needed?

 

Well I think that covers the big ones for now.  I imagine a few more will come up as I keep digging and reading.  The further I go the more I like it.  

Thanks for the time from anyone who can respond.

 

 

STARTED A NEW THREAD SINCE I STARTED THE BUILD

 

Link to comment

Hello to the group. 

 

UnRAID Newbie here, and well not quite even that far.  Right now I have a media server based on Windows 7, and am very close to making the decision to swap over to UnRAID.  I have spent the past 2-3 days of my spare time reading about UnRAID and have corresponded with some users over at AVS Forum about what they are doing.  I have a few general operation questions that have come up as I'm reading.  I think I know the answer to most, but just want to make sure I'm understanding correctly.

 

My Current Server:

Windows 7 64 Ultimate

FlexRAID

Antec 1200

Biostar TA790GXE 128M

AMD Athlon II X2 3.0 GHz

Intel EXPI9301CTBLK 10/100/1000 NIC (Realtek on MB disabled)

4 GB OCZ DDR2 1066

OCZ ModXStream 600W PSU

1 - 250 GB Drive for OS (old 5-6 yrs / noisy, probably will go away)

1 - WD10EALS 1.0 TB 7200 RPM Blue drive (currently holds everything but media: scanned documents, photos, software, etc.; backed up to an external hard drive)

1- WD20EARS 2.0 TB 5400 RPM Green drive (currently runs parity for FlexRAID)

2 - Hitachi 7200 RPM 2.0 TB drives, 7220 models (media drives; I don't have the whole model # at my fingertips right now)

1 - Hitachi H3IK20003277SP 2 TB 7200 drive (media drive)

 

All the drives except the 1st one are around a year old +/-.

I will probably put the 1.0 TB Blue drive into the array (which it currently isn't in) and still keep the data backed up to the external drive.

 

I am down to a little over 1 TB of space on the 3 media drives so I will probably be adding an additional data drive when I make any changeover.  I probably have a couple hardware related questions about drives, and a controller I now need to add (MB is tapped out), but will wait and do this in the hardware sections of the forum after I do a little more reading here.

 

So on to my questions:

 

1. I don't want to have to change my main system hardware at the moment if I don't have to.  I saw at least one Biostar MB listed in the compatible hardware list, but my specific model was not.  Do you see a problem with anything in this list being compatible?

Don't see a problem. You can test with the free version of unRAID.

2. Pre-clearing drives obviously takes a while.  Can you Preclear multiple drives at one time, or do you have to do them one at a time?  Does preclear format the drive at the same time or is it two steps?

You can clear multiple drives. Six is a safe maximum. See this guide with links to preclear: http://lime-technology.com/wiki/index.php?title=Building_an_unRAID_Server
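
For reference, here is a minimal sketch of how parallel preclears are typically launched from the unRAID console or separate telnet sessions. This assumes Joe L.'s preclear_disk.sh has been copied to the flash drive at /boot; the device names are just examples, so check yours first:

    cd /boot
    ./preclear_disk.sh /dev/sdb      # session 1
    ./preclear_disk.sh /dev/sdc      # session 2 (a second telnet login, running at the same time)
    # if the screen utility is installed, a run can instead be detached and left unattended:
    #   screen -S preclear_sdc ./preclear_disk.sh /dev/sdc

Each drive gets its own session so a dropped connection does not kill the run.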

3. Cache drives – How much flexibility do you have in how often data is copied over to the parity protected drive?  If you set this to occur regularly what happens if I start copying something to the array while it is relocating already cached files?

This is not an issue. Open files will not be copied. You can write to the cache while the mover is running.

4. One of the issues I have with FlexRAID, though it is a good product, is that it is snapshot RAID.  Utilizing a cache drive that only copies over at night feels like the same thing, though not quite as bad, because with FlexRAID you copy onto the array, which is then invalid until the parity update is done.  At least with a cache drive the array is not at risk to drive failure, just the data on the cache drive.

UnRAID maintains parity in real time. I do not delete originals for at least 24 hours so I'm sure the data has been moved from the cache to the array. You can have the mover run more often or start it manually.
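
As a rough sketch of what "more often" can look like: the mover is just a script run on a schedule, so the cron entry can be changed. This assumes the stock mover script lives at /usr/local/sbin/mover, as it does on typical releases of this era; on some releases the schedule is simply set from the web GUI instead:

    # crontab entry: run the mover every 4 hours instead of once per night
    0 */4 * * * /usr/local/sbin/mover >/dev/null 2>&1
    # or kick it off by hand at any time from the console:
    /usr/local/sbin/mover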

5. With FlexRAID, because parity only updates at night rather than running constantly, I used my faster drives for data and a green drive for parity.  With UnRAID is this still the correct strategy, or do I need a fast parity drive?

The target data drive and parity are accessed during array writes. Writes to the array will be limited by the slowest drive involved. Use of a cache drive mitigates slow write performance.

6. Can a few of you tell me what copy speeds you really see on current builds?  I have read many tests and FAQs that note write speeds of 15-30 MB/sec.  Is this really the best case?  I have had a few people tell me they keep write speeds at 50 MB/sec +/- and up to 60 or so with a cache drive.  I currently get speeds of 50-65 MB/sec copying to the Win7 machine.  I used to get speeds in the 90s across the network, but that is a rare occurrence now (not even sure what has caused it to change).  I'd really like to keep speeds in the 50+ range if that's possible when copying over 20-50 GB rips (sometimes several at once).  Would it really cut my write speed down that much from what it is now?

The speed should be what you're seeing now with a cache drive. Writing to the outer drive cylinders is significantly slower than writing to the inner, so drive performance suffers as a disk gets full.

   

7. I see drive shares are faster than user shares for write speed.  If I understand this correctly, a drive share is just a share of a physical disk.  A user share is a shared folder that could span multiple disks, correct?  If this is correct, can you set up drive shares with parity protection turned off initially, copy over the data, then set up parity protection and set up user shares to put it out on the network and get rid of the drive shares?  I was figuring it would take a weekend to build and set this up, but with pre-clears and 4+ TB of data to copy, I'm not sure it can be done that fast anymore.

You can configure either Disk or User shares and write to them before adding parity. This is faster but less safe. If data is written that subsequently cannot be read you will wish parity had been in place during the initial write.

8. Can drives formatted in UnRAID be installed in a Win7 machine and be read?  Or is the format not compatible?

Drives can be read and written in Windows if reiserfs drivers are installed. However, they must be formatted by unRAID in either case.

9. With only 4 or so protected drives, I'm not too worried about 2-drive failure protection.  However, down the road I can see this being something I would care about.  I saw it mentioned in a couple places and saw one older thread where it was discussed at length.  Is this something that is planned?  I don't see it in the future road map listing planned features.  There seemed to be some disagreement whether or not it was feasible at all, but I think FlexRAID does offer it (in snapshot) and that setup is similar (basically JBOD with parity).

This is feasible and planned for unRAID.

10. I'm pretty IT literate when it comes to function and networking, but am not a Linux person and not an IT professional.  I learned computers on old DOS-based PCs, Apples, etc. before Windows 3.0 was in development, so I'm not bothered by command-line functions, but I am just not familiar with Linux at all.  How difficult is it to pick up the basics for customization if needed?

You'll get lots of help here. Just ask before doing anything you're unsure of.

 

Well I think that covers the big ones for now.  I imagine a few more will come up as I keep digging and reading.  The further I go the more I like it. 

Thanks for the time from anyone who can respond.

 

 

You're welcome, and good luck.

Link to comment

preclear does not format a drive.  unRAID will format the drive once a drive is assigned to the array.  In almost no case will you ever need to do this using anything other than the "Format" button on the unRAID user interface unless you are doing something entirely outside of the unRAID protected array.

 

Also, I know of no Windows driver that will write to a reiserfs-formatted drive.  There are several that will allow you to read one.

 

Since BOTH the parity drive and the data drive MUST revolve at least one revolution when writing a sector, the slower of them dictates the maximum "write" speed to the array.  Some will recommend a "fast" parity drive, but it has absolutely no impact on performance unless multiple data drives are being written at the same time.  If you are only writing to one drive at a time (99% of most use of the array) you'll not see any improvement.
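
To make the "both drives must turn" point concrete: unRAID parity is a plain XOR across the data disks, so writing one sector is a read-modify-write of both the target data disk and the parity disk.  A toy illustration of the arithmetic with made-up byte values (nothing here is unRAID-specific):

    old_data=0xA7; new_data=0x3C; old_parity=0x5F        # example bytes at the same sector offset
    # read old data + old parity, fold the change into parity, write both back
    new_parity=$(( old_parity ^ old_data ^ new_data ))
    printf 'new parity byte: 0x%02X\n' "$new_parity"     # prints 0xC4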

 

If you think you might install 7200RPM data drives in the future, you might install a 7200 RPM parity drive now, but it will not help if all your data drives are "green" 5400 RPM drives.  It will only help when writing to a 7200 RPM data drive.

Link to comment

Thanks for the help. 

 

I've pretty much decided I'm going to give this a shot, just need to figure out when.  It's time to add a drive, so I'll probably get a blank and toy around with the free version to make sure everything works out before getting committed to reformatting and copying all my existing data.

 

I'm sure you will be hearing from me again :)

Link to comment

Thanks for the help. 

 

I've pretty much decided I'm going to give this a shot, just need to figure out when.  It's time to add a drive, so I'll probably get a blank and toy around with the free version to make sure everything works out before getting committed to reformatting and copying all my existing data.

 

I'm sure you will be hearing from me again :)

 

That's a good course of action - I suspect you'll be joining the OCD club in no time at all!

Link to comment

The speed should be what you're seeing now with a cache drive. Writing to the outer drive cylinders is significantly slower than writing to the inner, so drive performance suffers as a disk gets full.

 

Actually, reading or writing to the outer cylinders is faster (not slower) for sustained transfers because the drive makers fit in more sectors on a track with the rotational speed remaining a constant.  The disks fill from the outer cylinder to the inner one, which is why the speed drops off as the disk gets full, as you suggest.
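
A quick back-of-the-envelope example of why: at a fixed RPM the longer outer tracks hold more sectors, so more data passes under the head per revolution.  The sectors-per-track counts below are made up for illustration, not any real drive's geometry:

    rpm=7200; sectors_outer=1600; sectors_inner=900      # illustrative numbers only
    echo "outer: $(( rpm * sectors_outer * 512 / 60 / 1000000 )) MB/s"   # ~98 MB/s
    echo "inner: $(( rpm * sectors_inner * 512 / 60 / 1000000 )) MB/s"   # ~55 MB/s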

Link to comment
  • 2 weeks later...

One step closer today as I ordered 2 more hard drives to use when I convert.  Went with Samsung 2 TB F4's.  I think I'm going to use my 1.0 TB 7200 RPM WD Blue drive for a cache drive after the array is up and running.

 

I haven't decided if I'm going to continue to use my WD Green drive for parity, or use that for data and use one of my Hitachi 7200 RPM drives for parity.

 

On a separate note:

 

I've been reading and following a few threads just to learn more about the system, pitfalls, do's and don'ts.  I ran across one where someone looked like they were going to wind up with data loss due to a controller error that may have turned into multiple drive failures.  I think the verdict is still out, but in following the story it brings up something I may not be clear on.

 

In the event of a hardware loss other than a hard drive (i.e. a controller card or MB), do you have to replace it with identical hardware for the array to properly recognize the drives?  Or, for example, if I have a drive connected to a 2-port SATA card and it dies, can I buy someone else's 4-port or 8-port card, plug in the two drives, and have it pick up where it left off?  Is it just a matter of re-assigning the drives in the array?

 

To make it more complicated, let's assume that the controller card goes with no warning and the drives are not visible, so I can't make any changes until they are visible attached to a new controller.

Link to comment

One more question.  How large is the OS?  How long does it take to load? 

 

I have a barely used SanDisk 2 GB USB flash drive that is currently unused.  SanDisk is on the suggested list; however, they always seem awfully slow to me.  I typically use Patriot eXporter XT USB drives, but don't want to buy another unless I really need to.

Link to comment

One more question.  How large is the OS?  How long does it take to load? 

 

I have a barely used SanDisk 2 GB USB flash drive that is currently unused.  SanDisk is on the suggested list; however, they always seem awfully slow to me.  I typically use Patriot eXporter XT USB drives, but don't want to buy another unless I really need to.

It will fit on a 128 MB flash drive.  Most people use a 1 or 2 GB drive.  Your 2 GB drive would work fine.

The flash drive is only used when booting and when changing settings.  As long as it is not USB 1.0 the speed will be fine.  (USB 1.0 will still work, but boot time might be 5 minutes or more.)

Link to comment

One more question.  How large is the OS?  How long does it take to load? 

 

I have a barely used SanDisk 2 GB USB flash drive that is currently unused.  SanDisk is on the suggested list; however, they always seem awfully slow to me.  I typically use Patriot eXporter XT USB drives, but don't want to buy another unless I really need to.

It will fit on a 128 MB flash drive.  Most people use a 1 or 2 GB drive.  Your 2 GB drive would work fine.

The flash drive is only used when booting and when changing settings.  As long as it is not USB 1.0 the speed will be fine.  (USB 1.0 will still work, but boot time might be 5 minutes or more.)

 

99.99% certain it's USB 2.0.  It's less than a year old.  I just always think Christmas will come before a write job finishes when I've used SanDisk.  If the whole OS is only 128 MB, I can't see it being an issue though.

Link to comment

One step closer today as I ordered 2 more hard drives to use when I convert.  Went with Samsung 2 TB F4's.  I think I'm going to use my 1.0 TB 7200 RPM WD Blue drive for a cache drive after the array is up and running.

 

I haven't decided if I'm going to continue to use my WD Green drive for parity, or use that for data and use one of my Hitachi 7200 RPM drives for parity.

 

On a separate note:

 

I've been reading and following a few threads just to learn more about the system, pitfalls, do's and don'ts.  I ran across one where someone looked like they were going to wind up with data loss due to a controller error that may have turned into multiple drive failures.  I think the verdict is still out, but in following the story it brings up something I may not be clear on.

 

In the event of a hardware loss other than a hard drive (i.e. a controller card or MB), do you have to replace it with identical hardware for the array to properly recognize the drives?  Or, for example, if I have a drive connected to a 2-port SATA card and it dies, can I buy someone else's 4-port or 8-port card, plug in the two drives, and have it pick up where it left off?  Is it just a matter of re-assigning the drives in the array?

 

To make it more complicated, let's assume that the controller card goes with no warning and the drives are not visible, so I can't make any changes until they are visible attached to a new controller.

 

UnRAID cares very little about hardware. As long as it's supported with a driver, anything will work. You can change cards or even the MB without trouble.

Link to comment

Just to touch on the cache drive question some more. You 100% do not want to be moving files over to the cache while the cache is moving files over to the array. That just causes the cache drive to start seeking a lot, which slows both operations down. You might as well copy directly to the array if you need it protected that quickly, because that will work better than having the mover running all the time. I guess you could install a good SSD to make it workable, but it's pointless to attempt with a platter drive.

 

Peter

 

Link to comment

Just to touch on the cache drive question some more. You 100% do not want to be moving files over to the cache while the cache is moving files over to the array. That just causes the cache drive to start seeking a lot, which slows both operations down. You might as well copy directly to the array if you need it protected that quickly, because that will work better than having the mover running all the time. I guess you could install a good SSD to make it workable, but it's pointless to attempt with a platter drive.

 

Peter

 

The cache drive can be read FAR faster than the protected array can be written.  I would think the seeks between the file being moved (read) off the cache drive and the one being written to it would barely slow each other down.  (Combined with the disk read-ahead buffer and the Linux disk buffer cache, you probably will not even notice any slowdown.)

 

Joe L.

Link to comment

My plan was to set up the cache drive after the initial copy over of my existing data.  I have about 4.2 (ish) TB of existing data and as noted I just bought two more drives. 

 

Here is my plan.  I consolidate my data to 2 - 2 TB drives and the 1 TB drive.  They will be pretty full, but there is only about 400 GB of truly irreplaceable data, and that will be backed up in 2 places prior to the migration.  (This data stays backed up for protection.)  The rest of it is replaceable from media; it would just be a very time-consuming pain in the rear.

 

This will leave me 3 - 2 TB drives and the parity drive to build the array with, and then I can add the others when emptied.  In the end I will have 5 - 2 TB data drives on the array, so I'll be 40-50% full, and I would like the data spread across the drives.  The options I haven't decided on:

1.  Copy all data to the array - mount and pre-clear drives together (to save time) - spread data back out to all drives.

2.  Copy data clearing drives one at a time, then mount and preclear each.  I think the result will take longer overall due to preclear time.  However, I will not have much to spread out at the end, if any.  The last drive to be migrated will be the 1 TB drive that becomes the cache drive, so nothing needs to be copied back to it.

 

The other question is where to copy from. 

1. I can put the drives in my Win 7 machine and copy across the network, then move the drives over.

2. I have seen the instructions to copy off an NTFS drive inside Linux.  I would only go through that process if it really was a lot faster.  Otherwise it seems to be more complicated and less "safe".

 

Any thoughts/experience on those options is appreciated.

 

 

Link to comment

My plan was to set up the cache drive after the initial copy over of my existing data.  I have about 4.2 (ish) TB of existing data and as noted I just bought two more drives. 

 

Here is my plan.  I consolidate my data to 2 - 2 TB drives and the 1 TB drive.  They will be pretty full, but there is only about 400 GB of truly irreplaceable data, and that will be backed up in 2 places prior to the migration.  (This data stays backed up for protection.)  The rest of it is replaceable from media; it would just be a very time-consuming pain in the rear.

 

This will leave me 3 - 2 TB drives and the parity drive to build the array with, and then I can add the others when emptied.  In the end I will have 5 - 2 TB data drives on the array, so I'll be 40-50% full, and I would like the data spread across the drives.  The options I haven't decided on:

1.  Copy all data to the array - mount and pre-clear drives together (to save time) - spread data back out to all drives.

2.  Copy data clearing drives one at a time, then mount and preclear each.  I think the result will take longer overall due to preclear time.  However, I will not have much to spread out at the end, if any.  The last drive to be migrated will be the 1 TB drive that becomes the cache drive, so nothing needs to be copied back to it.

 

The other question is where to copy from. 

1. I can put the drives in my Win 7 machine and copy across the network, then move the drives over.

2. I have seen the instructions to copy off an NTFS drive inside Linux.  I would only go through that process if it really was a lot faster.  Otherwise it seems to be more complicated and less "safe".

 

Any thoughts/experience on those options is appreciated.

 

 

I would copy the data over the LAN.  Yes, it is slower, but less risky.

I would assign the parity drive before I load the array with data.  Yes, writing with it is slower, but safer.  If you do not, and a data disk is subsequently un-readable, you'll lose the data.  If parity is present, it can re-construct an un-readable sector.  You are less likely to lose data due to a new disk that is not yet parity protected.  (Yes, I know the pre-clear script should detect most un-readable sectors, but even it can only do so much.  A disk can still fail early in its life after being burned in for 30 hours.  Since you are re-using your source drives, a parity disk is pretty much mandatory in my mind.)

 

Your choice... speed vs. safety.

 

If you transfer with no parity disk, at least use something like Teracopy which will do a checksum when the data is copied to ensure it got there OK.
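
Teracopy does the verification on the Windows side; the same check can also be done by hand from a Linux shell with md5sum, roughly like this. The paths are only examples, and the manifest has to be built wherever the source files are readable:

    # build a checksum manifest of the source files
    cd /mnt/source && find . -type f -exec md5sum {} + > /tmp/manifest.md5
    # after the copy, verify every file on the array against the manifest; only mismatches are printed
    cd /mnt/disk1 && md5sum -c /tmp/manifest.md5 | grep -v ': OK$'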

Link to comment

The other question is where to copy from.

I would copy the data over the LAN.  Yes, it is slower, but less risky.

I would assign the parity drive before I load the array with data.  Yes, writing with it is slower, but safer.  If you do not, and a data disk is subsequently un-readable, you'll lose the data.  If parity is present, it can re-construct an un-readable sector.  You are less likely to lose data due to a new disk that is not yet parity protected.  (Yes, I know the pre-clear script should detect most un-readable sectors, but even it can only do so much.  A disk can still fail early in its life after being burned in for 30 hours.  Since you are re-using your source drives, a parity disk is pretty much mandatory in my mind.)

 

Your choice... speed vs. safety.

 

If you transfer with no parity disk, at least use something like Teracopy which will do a checksum when the data is copied to ensure it got there OK.

There is no way for my source to maintain parity, but I can enable the parity on the new media so it is protected once copied over. 

 

The remaining question:

Do I copy all the data at once, then preclear the remaining drives, and finally move data to spread it across all the drives?

or

do I copy over the last 3 drives one at a time: copy one, preclear it, then copy the next, then preclear it, etc.?  The last drive to be copied over would become the cache drive.

 

Link to comment

The other question is where to copy from.

I would copy the data over the LAN.  Yes, it is slower, but less risky.

I would assign the parity drive before I load the array with data.  Yes, writing with it is slower, but safer.  If you do not, and a data disk is subsequently un-readable, you'll lose the data.  If parity is present, it can re-construct an un-readable sector.  You are less likely to lose data due to a new disk that is not yet parity protected.  (Yes, I know the pre-clear script should detect most un-readable sectors, but even it can only do so much.  A disk can still fail early in its life after being burned in for 30 hours.  Since you are re-using your source drives, a parity disk is pretty much mandatory in my mind.)

 

Your choice... speed vs. safety.

 

If you transfer with no parity disk, at least use something like Teracopy which will do a checksum when the data is copied to ensure it got there OK.

There is no way for my source to maintain parity, but I can enable the parity on the new media so it is protected once copied over. 

 

The remaining question:

Do I copy all the data at once, then preclear the remaining drives, and finally move data to spread it across all the drives?

or

do I copy over the last 3 drives one at a time: copy one, preclear it, then copy the next, then preclear it, etc.?  The last drive to be copied over would become the cache drive.

 

Honestly, if it fits on two disks, leave it there.  Less spinning spindles, less power, less to spin up when searching for a file.
Link to comment

How long does it take to preclear 2 - 2 TB drives simultaneously?

 

Got my 2 new drives in today and was going to load up the free version and look around, preclear those new drives, etc., without removing the Windows OS or working with the existing drives yet.

 

I'm going to be home the week of July 11 so will probably build the permanent server then. 

Link to comment
  • 2 weeks later...

Hoping to get to my conversion next week, maybe start Monday.

 

What version should I install?  4.7? or 5.0beta7? or 8a?

 

Since I'm a noob... my instinct is to go with the stable 4.7.  However, is there anything really important in the 5.0 releases I should consider, or any reasons I'm really better off avoiding them?

Link to comment

Since I'm building a new rig with existing drives, there is not a reason to "pre-clear" instead of just having unRAID clear the drives during the initial setup, right?  If I understand correctly, the benefit to preclear is you can do it without taking the array offline.  Since I can't be online anyway... does it matter?

 

My existing data drives are about a year old, and my two new drives have at least been through a full disk test by Hitachi (8 hours or so per disk).  I know that's not as robust as preclear, but they aren't just out of the box.

 

Won't the clearing process do what the preclear does, or is preclear that much better?  

Link to comment

Pre-clear is recommended for all new disks. About 1 in 5 disks will fail within 6 months. Pre-clear burns disks in to cause them to fail early if they will. The failed disks can be RMAed before you place data on them. I use it on all of my new disks, even those I don't use in unRAID. I don't trust a disk with my data until it has passed 3 pre-clear cycles.
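
For what it's worth, the preclear script can run several cycles back to back, so three passes on one drive looks roughly like this (the device name is an example; check the script's built-in help for the exact option letters):

    cd /boot
    ./preclear_disk.sh -c 3 /dev/sdd     # three consecutive clear/read-verify cycles on /dev/sdd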

Link to comment

Pre-clear is recommended for all new disks. About 1 in 5 disks will fail within 6 months. Pre-clear burns disks in to cause them to fail early if they will. The failed disks can be RMAed before you place data on them. I use it on all of my new disks, even those I don't use in unRAID. I don't trust a disk with my data until it has passed 3 pre-clear cycles.

 

What additional check is it doing that the standard "clear" does not?  Is it just multiple cycles?

Link to comment
