3T -> 2T Possible? If so, how?


SSD


Below are the steps I plan to take to add these drives to my array.  Input is welcome:

 

1. I will have to zero out the MBR and partition table of each of the 3T drives (dd if=/dev/zero count=200 of=/dev/sdX)

2. Make the usable space 2.2T on each of the 3T drives (hdparm -N p4294963168 /dev/sdX)

3. Reboot (not sure if necessary)

4. Reset the array configuration

5. Add the 2 2T disks as parity and disk1 (add other disks, which were formatted with unRAID)

6. Start the array.  unRAID should partition the new disks and start building parity.  Only the 3T disk should appear unformatted - the rest are all already loaded with data and should just be included in the parity calculation.

 

Am I missing anything?
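As a sanity check on step 2 before running hdparm, the sector count can be verified arithmetically (a sketch using the numbers from the steps above; sdX is a placeholder for the real device):

```shell
# Verify step 2's sector count: 4294963168 sectors * 512 bytes/sector
# should land just under the 2^32-sector (2.2TB) MBR ceiling.
SECTORS=4294963168
MBR_MAX=4294967296          # 2^32 sectors, the MBR addressing limit
BYTES=$((SECTORS * 512))
echo "$BYTES"               # usable bytes with the HPA in place
[ "$SECTORS" -lt "$MBR_MAX" ] && echo "under MBR limit"
```

If both lines print, the count is safe to feed to hdparm -N.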

 

So based on all of your testing, I took the plunge and picked up 2 Hitachi 3TB drives! One of my 2TB EARS drives is dying, so I interpreted that as a sign that I need to go bigger! ;D

 

2 questions:

 

Q1: Why do you need step 1? Once I preclear these 2 drives, can't I go right to step 2 and Fake-HPA them to the correct size?

 

Q2: How would I figure out how large to make them in hdparm if I want to make them 2TB even? I'm not hurting for space, so I figured it would be less hassle/risk to just use them as 2TB until they are fully supported, since that way I won't need to rebuild my parity drive twice...

 

[EDIT] Just re-read the thread and noticed Joe L mentioned 3907029168 as the number for 2TBs. Would that be safe to go with? I wouldn't want to be off by 1 too large and upset my parity drive... :) [/EDIT]
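For what it's worth, Joe L's figure checks out as the standard advertised capacity of a 2TB drive; a quick shell calculation (using only the number quoted above):

```shell
# 3907029168 sectors * 512 bytes/sector = 2,000,398,934,016 bytes,
# i.e. the ~2.0TB (decimal) capacity a 2TB drive reports.
SECTORS=3907029168
echo $((SECTORS * 512))
```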


4294963168  * 512 = 2,199,021,142,016

 

And 4294963168 is kind of a special number that conforms to the rules for sector counts used by manufacturers since the 250G drives.

 

It is very close to the limit of MBR, but under.

 

If you'd rather limit your drive size to 2T that is up to you.

 

This count works, I have used it.
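To put a number on "very close to the limit of MBR, but under" (a sketch using the figures above):

```shell
# MBR sector counts are 32-bit values, so the hard ceiling is 2^32 sectors.
MBR_MAX=4294967296        # 2^32
HPA=4294963168
echo $((MBR_MAX - HPA))   # sectors of headroom left under the limit
```

Only 4128 sectors (about 2MB) of slack, but still safely under the ceiling.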


I'd be fine using your calculated number (especially since you've tested it already!), but as I was hoping to avoid messing about with my parity drive until 3TB drives have been approved, I'm limited to the number of sectors for a 2TB drive....


Before you'll be able to use a 3T as an array disk, you will have to have a 3T parity drive.  It would make sense to replace your existing parity with the 3T parity, and use your 2T parity as a data disk.  Otherwise, when 3T drives are supported, you'd either need a new 3T for parity or would have major data shuffling to do. 

 

If you plan to get a different 3T drive for parity (for example, a 7200 vs 5400 RPM), then I could see some logic in waiting to update parity.

 

But to answer your question, the number of sectors in a 2T drive is 3907029168.

 

If anyone is planning to replicate what I have done (and WeeboTech) with the Areca ARC-1200 and creating a RAID0 pair for parity, PM me.  There are some special tricks I can share to save you time and aggravation.


Thanks for the tips guys, have 4 hitachi 3tb 5400s running.  Parity drive is @ 3tb and building fine.

Used sector count of 4280000000 for 2.19TB on the other 3.
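That round count works out as stated (a quick check on the figure quoted above):

```shell
# 4280000000 sectors * 512 bytes/sector = 2,191,360,000,000 bytes (~2.19TB),
# comfortably under the 2^32-sector MBR limit.
echo $((4280000000 * 512))
```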

 

What's the appropriate way to reverse hdparm -N?  I set the size back to normal value for the parity drive and it's back to 3TB, but wondering if there's a more proper way to set it back to default.

 

 


you will not want to change the apparent size of the parity drive back until unRAID supports the larger size.  It will only extend the time it takes to do a parity calc/check by a third and have no other benefit.  Additionally, the partition in the MBR will be grossly mis-calculated, as it cannot define a partition greater than 2.2TB.  The drive will likely be identified as an invalid disk when upgrading unRAID.

 

You've been warned...  you are completely in uncharted waters.  Put the HPA in place on the parity drive too.  Remove it ONLY when unRAID supports drives > 2.2TB.

 

(Yes, you can remove it simply by using the hdparm -N with the full size of the drive.)

 

Joe L.


Hi Joe,

 

Can I just confirm that by this statement you mean: once unRAID supports 3TB drives, the hdparm command is all that is required to 'reset' 3TB drives back to their original size?  No need to rebuild data on content or parity drives that are being resized?

 

 


No, a data rebuild will be required, as the drive will appear "new" to unRAID because of the change in size.  Not to mention the fact that the MBR-style partition table will have to be rewritten as a GPT-style partition to actually be able to access the full 3TB drives.
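The GPT point can be seen numerically. Assuming the common native count of 5860533168 sectors for a 3TB drive (an assumed example figure; check your own drive with hdparm -N), the full drive overflows MBR's 32-bit sector field:

```shell
# A full 3TB drive has more sectors than a 32-bit MBR field can describe,
# hence the required rewrite to a GPT partition table.
NATIVE=5860533168        # assumed native sector count of a 3TB drive
MBR_MAX=4294967296       # 2^32, the MBR addressing limit
if [ "$NATIVE" -gt "$MBR_MAX" ]; then
  echo "needs GPT"
else
  echo "MBR is fine"
fi
```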


edit: This is on 5 Beta6a btw

 

Experiencing much nastiness with this technique.  Was having trouble at 2.2TB so brought them down under 2TB.  Also followed advice on parity drive and it's now below 2TB.  Thought I'd be smart and skip a future step there without really thinking it through.

 

Anyways, parity builds fine, shares show up fine.  Writing directly to disk shares murders the server in about 15 seconds flat... nasty crash.  Attached is a picture of the terminal after the crash; I was unable to telnet in or retrieve the syslog.  Also attached is the syslog after reboot, if that's any help.

 

I would like to have unRAID write a new 4K aligned partition and reformat the drives, how can I do that? I have a SystemRescueCD (sysresccd.org) on flash I can also use if it's easier to use that.  I know it seems like a silly question, but even after destroying the partitions and running initconfig unRAID thinks the disks are ready for use.

 

Just messing around right now, I know this is all unsupported but if anyone has an idea or two I'd be happy to hear it.

 

EDIT: NM, discovered the beautiful thing called preclear, giving it a shot.

Attachments: death.jpg, syslog.txt


Electroglyph -

 

Sorry you are having problems getting your array set up.

 

I have run beta 5.0b6a almost since it came out.  I created the 3T -> 2.2T process, and have not had any crashes.  The array has been solid.  Besides a few minor bugs, like a spindown problem with my controller, I'd almost say "rock solid".  I do not think the instructions in this thread to set up an HPA on 3T disks are your problem.  But until we understand what you have done, we won't know what the problem is.

 

It is easy to get frustrated, try things, and get more frustrated.  But the truth is, there are right ways and wrong ways to set up an array, and to recover from errors.  Randomly trying things is a good way to lose data and seldom helps narrow down a problem.  I need you to slow down, take a deep breath, and give a step-by-step of what you did.

 

First there are the basic questions.  What hardware (MB, controllers, and disks) are you using?  How much memory?  Have you run a memory test?

 

In your first sentence you say "Was having trouble at 2.2TB so brought them down under 2TB."  What does "trouble" mean?  What was the command you used to bring the 3T down to 2.2TB?  Did the array report 2.2T drives?  Were you able to format the disks?  Did parity build?  How long did it take?  Any errors in the syslog after building parity?  What trouble did you have?  Were there errors?  What were they?  Did your problems present when writing to user shares?  If so, how are the user shares set up?  Did you try writing to disk shares?  How did you bring the size to 2TB?  What error did you see? ...

 

Based on your update, it sounds like you may be starting over.  Not a bad option.  I'd recommend resetting the HPAs to full size before beginning.  Here is a (somewhat) step by step of setting up a new array.  This is basically what I did setting up my new array.

 

The best way to begin with a new server is to create your memory stick, boot, and select the memory test.  Let it run at least 8 hours.  Then boot unRAID, and capture a syslog.  Look for unusual errors in the syslog.  You may not know what unusual is.  Post any questions.

 

If your memory was good and your syslog looks normal, then download the preclear script and run it on all of your disks simultaneously from the console - up to 6 at a time.  Alt-F1 - Alt-F6 on the console will let you switch between console sessions.  If you are using 3T drives, you can preclear them without creating the HPA first.  Running preclears on several disks at once is encouraged, as it stresses the system while it preclears the disks.  Disk temps should stay under 45C.  If they go over 50C, stop the preclears, improve cooling in the case, then restart.  After you are done, look carefully at the preclear results.  Make sure you aren't seeing reallocated sectors or pending sectors.  Also check your syslog - you should not be seeing errors in the syslog from the preclears.  If you had any problems, you could rerun your memory test to see if something went bad.

 

Now, if you have the 3T drives, you will need to create the HPAs.  I document how that is done earlier in this thread (see HERE).  If there are questions, let me know.  Creating HPAs can be a little tricky.  Sometimes you get an error and have to power cycle the server.  The HPA can only be altered once per power cycle.  I always power cycled after setting the HPAs, and confirmed they were set with the hdparm command and with the unRAID GUI.

 

Now it is time to define your array.  Assign parity and your data disks.  Press the start button.  Parity should start to build.  Wait 2-3 minutes, refresh the browser, and note the parity build speed.  You should be getting > 30 MB/sec.  If you are not, wait a few more minutes and try again.  Report to the forums if your speeds stay < 30 MB/sec.  If speed looks good, press the format button to format the data disks.  Formatting while building parity takes 5-10 minutes.  (You can wait until parity is built before formatting - but I don't).  Let the parity build.  Check the syslog afterwards.  Look for errors.  Run a smartctl report on parity.  Look for signs of reallocated or pending sectors. Report questions or problems.

 

Do not create user shares.  Copy some data to your data disks - 100G+.  Big files are good.  Don't try to copy everything.  Copy at least 10G to each disk.  Copy files from a variety of the machines you plan to use to copy files to the array.  Run md5 on the source and destination files, and compare them.  If they aren't identical, post your results.  Run a parity check.  If you are not getting >45MB/sec speeds after 3-4 minutes of running, you may be having a problem.  (Constantly checking the speed slows it down, so wait the 3-4 minutes and then check it once.)  If you get ANY sync errors, report them.  Check your syslog for errors.  Anything unusual, report it.
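The md5 comparison step above can be sketched like this (the temp files are hypothetical stand-ins for a source file and its copy on an array disk):

```shell
# Verify a copy by comparing md5 checksums of source and destination.
SRC=$(mktemp)
DST=$(mktemp)
echo "sample payload" > "$SRC"
cp "$SRC" "$DST"                        # stands in for the copy to the array
a=$(md5sum "$SRC" | awk '{print $1}')
b=$(md5sum "$DST" | awk '{print $1}')
if [ "$a" = "$b" ]; then echo "MATCH"; else echo "MISMATCH"; fi
rm -f "$SRC" "$DST"
```

In practice you would point the md5sum commands at a real source file and its copy under /mnt/diskN.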

 

If all is clean, congratulations!  Your array is now in a trustworthy state.  It is now time to load your data.  I recommend loading your data directly to the disk shares.  Many users insist on creating the user shares early, and this is the earliest point I'd consider doing it.  But writing to user shares is slower than writing to disk shares.  And if you f*ck up the user share settings, which is incredibly easy to do, files that you want to stay together may be spread across disks.  My advice is to set up your directory structure and copy your files to the disk shares.  Set up the user shares later and experiment with them when you don't have as much data to copy.

 

You can use teracopy or rsync to copy the data and run verification on the copies.  I don't do this.  I already had you md5 over 100G of files.  If that worked, I tend to trust the network to copy files reliably.  I realize I can get a bit error once in a blue moon.  And that once in a blue moon of blue moons, that bit error may affect something important.  It's a risk I'm willing to take rather than read-verifying everything.  This is a personal decision.

 

Many users have to copy some data, and then add the disks that used to hold that data to the array, before they can copy more data.  If that is your situation, double and triple check that all your files copied.  Compare total byte counts and file counts between source and target.  Linux has limitations on directory depth, and if you exceed them, some files won't copy and you may not know it.  I would be extra anal before having confidence that all is copied.  Run a few md5 or file comparisons if you like.  Only when I was absolutely sure would I pull the disks, load them into the unRAID server, and preclear them so they can be added.
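The double-checking described above can be sketched as well (temp directories as hypothetical stand-ins for the source and target trees):

```shell
# Compare file counts and total byte counts between source and target trees.
SRC=$(mktemp -d)
DST=$(mktemp -d)
printf 'aaa' > "$SRC/f1"
printf 'bb'  > "$SRC/f2"
cp -r "$SRC/." "$DST/"                   # stands in for the bulk copy
src_files=$(find "$SRC" -type f | wc -l)
dst_files=$(find "$DST" -type f | wc -l)
src_bytes=$(find "$SRC" -type f -exec cat {} + | wc -c)
dst_bytes=$(find "$DST" -type f -exec cat {} + | wc -c)
if [ "$src_files" -eq "$dst_files" ] && [ "$src_bytes" -eq "$dst_bytes" ]; then
  echo "counts match"
else
  echo "MISMATCH - investigate before wiping the source"
fi
rm -rf "$SRC" "$DST"
```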

 

Your array is now loaded.  Run a parity check.  Check the syslog for errors.  Run smart reports on all of your disks.  Save the syslog and smartctl reports in a permanent location (not on the array) to be used for comparison purposes if problems occur in the future.  Also, back up your memory stick.

 

Now is the time to start playing with addons.  I'd recommend starting with the more basic ones, like unmenu (its package manager helps a lot with installing addons), the Powerdown script, and the UPS program (if you have a UPS).  Add these and (I probably sound like a broken record) check the syslog for errors.  Don't try to add every addon in the world at once - 1 or 2 at a time is plenty (1 for the complicated ones).  Get each fully working before adding another.  Back up your memory stick after every major success, so you can restore if something goes wrong.

 

Hope this helps get you started.  Let me know if you have any questions.  Post the answers to my questions early in this post if you have them.  Track progress through these steps and keep us updated.  Don't be shy about asking questions.  And if something goes wrong, don't do anything until you understand what has happened and what the possible causes are, and can make an informed decision about a next step.  The worst thing to do is guess.

 

Good luck!


From what I've read, it might be that you can only alter the HPA once per power-cycle of the drive.  I think you must remove power, and then re-apply power (by turning the server off, then back on).

 

It might not be enough to simply "reboot" without removing power.

 

Joe L.


I did HPA from 3TB to 2.19TB, parity built fine, array was good to go by all appearances, had same crash that I reported.  HPA all drives down to 2TB (was first thing I thought of doing), rebuilt parity, same crash.  (Taking note of Joe's input now, there wasn't a power cycle between)  Now I just precleared all drives just to rule out hardware issues since it's all new gear (Yes I should've done that first but I'm a noob here).  All 4 precleared fine.  Now that I've ruled that out I'm going to start over from scratch and report back.  I also just started with a fresh unRAID install, so we'll see how things go from here.  Back to 3TB, gonna power cycle, HPA to 2.19TB, power cycle, and get the ball rolling again.  I'm not frustrated... not worried about anything right now, no data involved... I'm just learning how everything works.

 

MB:  Gigabyte E350N-USB3

HD:  Hitachi 3TB 5400s x 4

 

See you guys in a day or two, not gonna start it right yet, got to go out right now.


I do not see any signs that the motherboard you have selected has EVER been used successfully by an unRAID user.

 

After a very little bit of research, it seems that the network chipset is the Realtek RTL8111E.  HERE is a thread of someone having a problem with that network chipset.  This may or may not be your problem, but it could be.  We have no idea of what your crash was, but I highly suspect it has something to do with your motherboard.  Linux support of new motherboards comes more slowly than for Windows.

 

It is very highly recommended that users select a motherboard that others have verified to be compatible, and if you select a new motherboard that no one else has used, realize you may run into compatibility problems that no one here can solve.

 

I highly doubt the HPA on the 3T drives has anything to do with your problem.


Hmm, you're spot on.  Thanks for the link.  Will update... though I did realize a hilarious little tidbit that was confusing the hell outta me: somehow along the way I started making temporary HPAs, lol.  ::)  That p is rather important.

 

UPDATE:  BIOS is the latest version, the crash still happens with the flash share on an unstarted array, and there's definitely strange network behaviour, so I'm chalking it up to the network chipset ATM.  With only a single PCIe expansion slot, adding an ethernet card is not an option - it's reserved for extra SATA.  Not giving up quite yet.  I'll stop posting about it in this thread though, since it seems it's not 3TB related.  Have plenty of things to try, but it'll have to wait until tomorrow.

  • 2 months later...

Do you only need to worry about compatibility of the motherboard when you are plugging in to the sata ports on it?

 

I bought one of the recommended boards when doing my initial build (Gigabyte GA-MA74GM-S2) and then later added the Supermicro AOC-SASLP-MV8 (Marvell 6480).  If I am correct, the Supermicro AOC-SASLP-MV8 should be OK for this.

 

 

 

