
Raid 0 to unRAID server - Data Migration - Source drive has bad sectors


nikoB


First off, thanks to all of the forum participants!  I'm almost up and running on my first unRAID build due largely to all the useful information in this forum!

 

My dilemma: copy data from a Windows HTPC to a new unRAID server (running 4.7).  The Windows box has two 1.5 TB drives set up in hardware RAID 0, and one of the two drives has bad sector(s).

 

unRAID: currently has two drives, a 3 TB to be used for parity and a 2 TB to be used for data.  The drives have been precleared and I'm ready to build my array (neither has been assigned to an array yet).

 

I have read posts suggesting that the parity drive be added to the array prior to any copying, but I'd like to copy the bulk of the data before adding the parity drive so the copy goes as quickly as possible.  I would then add the parity drive to the array before deleting any data from the source drives.

 

1) Can I add a drive to an array, format it to ReiserFS using unRAID, pull the drive out, connect it to the HTPC, and copy files directly (most likely using a live Ubuntu drive)? Would this cause issues with the preclear or problems when adding the drive back to my unRAID box?

 

2) Is it better to copy directly over the network and wait it out? Alternatively, should I copy using an external drive as an intermediary?

 

3) Since one of the RAID 0 drives has caused issues in my Windows HTPC, I'm a bit worried about the system hanging while copying files. Is there a recommended program for copying files from a drive with bad sectors? I use TeraCopy in Windows and have read about Ultracopier for Linux, but I'm not sure how either will do when/if it encounters bad sectors.

 

4) If possible, I would like to use both 1.5 TB drives from my HTPC in this setup. I realize this may not be a good idea given the bad sectors on one of them, and I may be left with only one good drive.  Depending on the number of bad sectors, could the bad disk still pass preclearing and be used in the unRAID array? Am I crazy to consider using a disk that already has bad sectors? I'm not sure how badly the drive is doing at the moment.

 

Sorry if the post was too wordy...  :-\

Thanks again!

Link to comment

If you allow unRAID to format your data disk and then move it to your HTPC to migrate files BEFORE you add a parity disk, you will be fine unless the target disk has any bad sectors.  You will want to run a preclear cycle on it (or several) to minimize that risk.
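For example, something like this would run three preclear cycles back to back (the device name here is just a placeholder; double-check which device is which before running it):

preclear_disk.sh -c 3 /dev/sdb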

 

It is certainly not the best approach, but it will be faster.  I personally would put the parity disk into place from the beginning and copy directly over the network.  (I cloned my first unRAID server to my second in exactly that way...  I let it run overnight.)
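If you do copy over the network from a Linux live environment on the HTPC, a rough sketch would be to mount an unRAID share with CIFS and rsync into it (the server name, share name, and paths below are only examples, and they assume the RAID 0 volume is already mounted at /mnt/htpc_raid):

mkdir -p /mnt/unraid
mount -t cifs //tower/media /mnt/unraid -o guest
rsync -avh --progress /mnt/htpc_raid/ /mnt/unraid/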

 

Before you use your potentially defective disks in your array, run them through the preclear process.  It will give you an idea of their health.  A few re-allocated sectors are generally not an issue.  A few hundred are, as is an increasing count, or sectors still pending re-allocation after a preclear...  I'd not use a drive in those cases.
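If smartctl is available, you can also read the relevant SMART counters directly; something along these lines (the device name is a placeholder):

smartctl -A /dev/sdb | grep -iE 'Reallocated_Sector|Current_Pending_Sector|Offline_Uncorrectable'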

 

Joe L.

 

 

Link to comment

Per your recommendations, I decided to go ahead and start copying over the network.  I'm seeing slightly slower speeds than I expected, but that is for another thread.

 

I decided to hold off on the parity for the time being, but I'll be sure not to delete any data until the parity drive is in place.

 

Thanks for the responses!  ;D

Link to comment

I would use the badblocks program to test your disk out.

 

Do the 4-pass write test. It writes 4 patterns: 0xaa, 0x55, 0xff, and 0x00.

I've had drives recover bad sectors, and drives remap bad sectors and become useful again.

 

After that, run the preclear script for 1 pass.

 

While preclear is good, it's not as thorough as a 4-pattern write test.

 

badblocks will test and document each sector that is unreadable.

If you have any pending sectors, or any sectors listed in the badblocks -o (output) file after the 4th pass, the drive should not be used for unRAID.

 

While you can use it and get away with it temporarily, later on, when you try to recover a failed disk, you will hit the bad sector and risk a failed recovery.  Every bad sector must be remapped first, i.e. no pending sectors.
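Put another way, once the 4th pass finishes you want the -o output file to be empty and the SMART pending count at zero. A quick check could look like this (the file name and device are placeholders):

wc -l /boot/badblocks_sdb.txt    # should report 0 lines, i.e. no unreadable sectors logged
smartctl -A /dev/sdb | grep -i Current_Pending_Sector    # raw value should be 0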

 

Link to comment
  • 2 weeks later...


Thanks for the tip! I'm currently running badblocks on two 1.5 TB WD Green drives.  However, I'm a bit concerned with the time it's taking to run.  I started the test approximately 36 hours ago and it's still in the reading-and-comparing phase of what appears to be the first pass (0xaa).

 

I took a screenshot last night and one disk was at 12686860/366284646.  This morning, about 12 hours later, the same disk is at 15581348/366284646.  Is this normal speed?  Based on what I've read, I was expecting it to take a while, just not THIS long.  Thoughts?

 

Thanks!

Link to comment

Just for comparison, I started badblocks on a 2 TB drive yesterday.  It is currently 50% through writing the second of the 4 patterns.  (For each pattern in turn, it first writes the whole disk and then goes back and reads it.)

 

The time to write 0xaa, then read it back, then write 50% of 0x55 has taken 23 1/2 hours so far.

 

The drive is a 7200 RPM Hitachi drive.  I did not set the block size, so it is writing 64 blocks of 1024 bytes at a time.
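Written out with those defaults made explicit, the equivalent command would be roughly this (the output file and device are placeholders):

badblocks -wsv -b 1024 -c 64 -o /boot/badblocks_out.txt /dev/sdX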

 

Your drive does seem to be running slower, but not that much slower considering it is a  "green" drive and not spinning as fast.

 

Joe L.

Link to comment

Thanks for the feedback, Joe.  Your drive appears to be completing the process significantly faster than mine.  I did set the block size to 4096; bad idea?  Am I better off cancelling the current run and restarting with the default option(s)?  If I remember correctly, I ran the following command:

badblocks -wsvb 4096 -o

Is there a better/faster way?

Link to comment

I really do not know what is fastest.  This is the first time I'm using badblocks.  It is on a drive I'm going to RMA at that, since it has just over 1500 re-allocated sectors and every pass writing to it uncovers about 20 or 30 more in the reading phase.

 

I'm now at 26 hours, with 79% through comparing 0x55.  I simply typed:

badblocks -vsw -o /boot/badblocks_out_sdl.txt /dev/sdl

 

You are right though, I did not consider the size difference in our drives; yours is smaller.  A lot might have to do with other activity on the server.  Mine is completely idle otherwise.  (It is my second server, used primarily to back up my first.)

 

You, on the other hand, are running two badblocks commands at the same time on two disks... possibly having to share the same disk controller.  (Twice as I/O- and CPU-intensive.)
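If you want to experiment, the -c option (the number of blocks badblocks processes at a time) is one knob that can sometimes improve throughput at the cost of a larger buffer; for example (the values, output file, and device here are only illustrative):

badblocks -wsv -b 4096 -c 1024 -o /boot/badblocks_out_sdb.txt /dev/sdb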

Link to comment
