
Server Consolidation/Upgrade Questions...


david81

Recommended Posts

So I've decided that the old Windows box in the closet doesn't need to exist anymore, since I can do everything it does from the Unraid box. To that end, I'm considering a couple of things with my current setup.

 

Currently the Unraid system consists of an LE-1660 2.8GHz single core and 4GB RAM on a GA-MA74GM-S2, with a Corsair CX400 powering 2x2TB WD Greens and 4x1TB drives (a mix of WD and Samsung 54k drives). One drive is currently used as a cache drive. A 2 port PCI SATA card is present (the cache drive lives there currently). All is wrapped up in a Centurion 590 case. The box currently does nothing except serve up files and receive backups for myself and family through CrashPlan.

 

The end goal is for the Unraid box to take over as my Usenet box (Sabnzbd, SickBeard, CouchPotato) and web dev box (PHP + MySQL). I'm assuming that a CPU upgrade will be needed, and I know I'll need at least another 1TB of storage. So the questions are...

 

1. Do I go with an X2, X3 or X4 CPU upgrade? Will more than 2 cores really make a difference with the intended purpose? I know that Sabnzbd can really hit the CPU when the PAR check comes up.

 

2. For the extra 1TB do I add a 7th drive (I've got another 1TB sitting in the closet that was replaced by one of the 2TB drives) or do I replace another 1TB with a 2TB? Technically I have the space and the SATA ports available, but I'm wondering if it just makes more sense to upgrade the existing drives.

 

3. Related to 2, am I approaching the limits of the current PSU with my setup? I originally didn't intend for it to get this big, but hey, I'm a packrat.

 

4. Anything else that I'm completely missing here?

 

Thanks in advance,

 

David

Link to comment

54k drives

Unless you have extremely fast drives, I expect you mean 5.4k or 5400 rpm drives.

 

1. Do I go with an X2, X3 or X4 CPU upgrade? Will more than 2 cores really make a difference with the intended purpose? I know that Sabnzbd can really hit the CPU when the PAR check comes up.

I can't answer this very well as I don't run these types of high-end add-ons on my server.  However, I conjecture that the quad and triple cores will be overkill and a dual core will work just fine.  In fact, you may want to try running it all on your current single core and then just upgrade if you actually run into issues.  Maybe your single core will be fast enough and you won't have to upgrade at all.

 

2. For the extra 1TB do I add a 7th drive (I've got another 1TB sitting in the closet that was replaced by one of the 2TB drives) or do I replace another 1TB with a 2TB? Technically I have the space and the SATA ports available, but I'm wondering if it just makes more sense to upgrade the existing drives.

Ultimately up to you.  With your current setup, I count only 6 SATA ports (4 on the mobo, and 2 on the PCI card).  So how would you support the 7th drive?

 

Upgrading a 1 TB drive to 2 TB gives you more space without impacting your power consumption.  If you jump on the current deal, you can get a 2 TB WD Green for $70.  By comparison, the 1 TB WD Green is $65.  I think it is pretty obvious that the 2 TB is a much better deal.  Personally I wouldn't buy a 1 TB drive today.
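To put quick numbers on it:

$70 / 2 TB = $35 per TB (the 2 TB Green)
$65 / 1 TB = $65 per TB (the 1 TB Green)

Nearly double the cost per terabyte for the smaller drive.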

 

I would also recommend that you upgrade your PCI card to one of these cheap 2 port PCIe models.  You should even be able to run two of them in your server (one per PCIe slot) for a total of 8 SATA ports, giving you future expansion options.

 

Of course you could also upgrade to an 8 port Supermicro AOC-SASLP-MV8 card, but I don't get the impression that you need that kind of expansion at the moment.

 

3. Related to 2, am I approaching the limits of the current PSU with my setup? I originally didn't intend for it to get this big, but hey, I'm a packrat.

Your PSU has 30A on a single +12V rail.  It should be good for up to about 13 or 14 green drives.  That's plenty of headroom, no need to replace it.
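A rough sketch of the math behind that, assuming a ballpark spin-up draw of around 2A per green drive on the +12V rail (check your drive labels for the exact figure):

6 drives x 2A = 12A peak at spin-up
30A rail minus a few amps for the board, CPU and fans = roughly 26-28A usable
26A / 2A per drive = about 13-14 drives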

 

4. Anything else that I'm completely missing here?

Not that I can see.

 

By the way, any HPA issues with that motherboard?

Link to comment

Unless you have extremely fast drives, I expect you mean 5.4k or 5400 rpm drives.

 

Oops. You got me. I wasn't supposed to leak that super secret info about upcoming drives from WD. There goes my NDA. ;-)

 

I can't answer this very well as I don't run these types of high-end add-ons on my server.  However, I conjecture that the quad and triple cores will be overkill and a dual core will work just fine.  In fact, you may want to try running it all on your current single core and then just upgrade if you actually run into issues.  Maybe your single core will be fast enough and you won't have to upgrade at all.

 

I installed the trio of usenet apps last night, so maybe I'll do that and see if I run into any CPU issues. I'll have to try a torture test somehow...
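Assuming the par2 command-line tool is on the PATH, something like this crude test should load the CPU the same way a real repair does (the file names are just placeholders):

dd if=/dev/urandom of=/tmp/testfile bs=1M count=1024          # 1 GB dummy file
par2 create -r10 /tmp/testfile.par2 /tmp/testfile             # build 10% recovery data
dd if=/dev/zero of=/tmp/testfile bs=1M count=10 conv=notrunc  # corrupt the first 10 MB
time par2 repair /tmp/testfile.par2                           # watch CPU while it repairs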

 

Ultimately up to you.  With your current setup, I count only 6 SATA ports (4 on the mobo, and 2 on the PCI card).  So how would you support the 7th drive?

 

The mobo has 6 onboard, actually. I'll have to double-check the model number when I get home. I pulled the one in my original post from my Newegg order history, and I see that one only has 4. That's a bit of a mystery now...

 

EDIT: Mystery solved. Looks like I have an older version of the board. They seem to have dropped 2 SATA ports from newer versions...

 

http://www.gigabyte.com/products/product-page.aspx?pid=3063#sp

or

http://www.gigabyte.com/products/product-page.aspx?pid=3153#sp

I would also recommend that you upgrade your PCI card to one of these cheap 2 port PCIe models.  You should even be able to run two of them in your server (one per PCIe slot) for a total of 8 SATA ports, giving you future expansion options.

 

Once again, I apologize. The SATA card is indeed the PCIe card you referenced.

 

 

By the way, any HPA issues with that motherboard?

 

Funny you should bring that up, actually. I haven't experienced any issues, but I knew about it going into the build and planned accordingly (parity is on the last physical onboard SATA port). Now that I've come back to the forums after a bit of a break, I see that HPA can be disabled with some newer BIOSes, so I planned on checking out the situation on the server tonight to see if I can get rid of it on mine. Of course, I'll actually have to confirm the MOBO model first :)

 

I'll know more when I can get home and dig into the server a bit more. I guess it's nice that I haven't had to mess with the server much (if at all) since it was built so I don't know this stuff off the top of my head.

Link to comment

In that case, why are you using a PCIe card at all?  Why not just use all the onboard SATA ports?  You would save a bit of power that way.

 

Anyway, whether you decide to replace a 1 TB drive or add a new drive, I would still recommend buying a 2 TB drive instead of a 1 TB drive, for the reasons I explained above.

Link to comment

In that case, why are you using a PCIe card at all?  Why not just use all the onboard SATA ports?  You would save a bit of power that way.

 

I've been using it for situations such as new drive preclears in swaps/upgrades. When I added in my cache disk, I put it on the PCIe card due to physical location in the server.

 

I've got 2x 4-in-3 racks to hold my drives. To make sure I always know which drive is which, I place the drives in the physical location that corresponds to the port they are plugged into on the MOBO, starting with SATA0 on the bottom and working my way up. My parity drive has always been in the sixth (SATA5) position due to the whole HPA thing. I put the parity in that last spot and filled in my array from SATA0 to SATA4.

 

When I recently added the cache drive, I kept it physically separated and put it in the 8th slot (PCIe2). There's probably no real benefit to setting things up the way I did, but it made sense at the time and it works for me :)

Link to comment

I've got 2x 4-in-3 racks to hold my drives. To make sure I always know which drive is which, I place the drives in the physical location that corresponds to the port they are plugged into on the MOBO, starting with SATA0 on the bottom and working my way up.

 

I understand this desire to keep things organized, and there's nothing wrong with doing this.  However, there's another layer of abstraction between this hardware configuration and unRAID that you can't control.  unRAID will assign the drives as sda, sdb, etc., depending on the order in which the drives are recognized by the motherboard.  So if your drive in slot SATA0 takes slightly longer to initialize than your drive in SATA1, then the SATA0 drive may end up as sdb and the SATA1 drive may be sda.  I wish there were a way to force the drive letter for each drive to always be the same, but I don't think there is.  Luckily, unRAID is very tolerant of these drive letters switching around, so from your end you never even see it as a problem.

 

My parity drive has always been in the sixth (SATA5) position due to the whole HPA thing. I put the parity in that last spot and filled in my array from SATA0 to SATA4.

 

Are you saying that your motherboard always writes HPA to the lowest numbered SATA ports?  While that may be true, I wouldn't trust it to always work like that.  Just like above, if one or more of your disks takes longer to initialize for any reason, then the motherboard may write HPA to any other disk (including the parity).  If it is possible to update your board's BIOS such that HPA is disabled by default, then I would HIGHLY recommend doing that.  If it is not possible, then you might consider replacing the motherboard altogether.  HPA is a nasty thing, and it is most likely to hit you at the worst possible time (such as when you have a drive failure).

Link to comment

Good points. I had a feeling something like that was happening with the drive letters. I, too, wish I could more easily identify which drive is which. Something like the little light they put on servers in the rack that you can turn on from the console to identify the unit in question.
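Short of a blink light, matching serial numbers is probably the next best thing. Assuming the unRAID build includes udev (so /dev/disk/by-id exists), something like this should map the sdX letters back to physical drives:

ls -l /dev/disk/by-id/ | grep -v part   # links named by model and serial, pointing at each sdX
hdparm -i /dev/sdc | grep SerialNo      # serial number for one specific device

Then it's just a matter of reading the sticker on each drive.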

 

I remember that when I first looked into the HPA thing, it was advised to do things similarly to how I did (data drives first, then parity last).

 

I certainly plan on checking on a BIOS upgrade as soon as I get home tonight to see if I can get it disabled for good.

Link to comment

Sounds good.  While HPA on the parity drive is almost always disastrous, it can be harmful on a data drive too.  When the motherboard writes the HPA, it overwrites whatever was in that spot before, without moving the data first.  Generally the HPA is so small that there isn't any problem, but it is possible for it to overwrite a few sectors of an important file and corrupt it.

 

You can see why we all try to avoid these older Gigabyte boards altogether.

Link to comment

That's good news.  Definitely update that BIOS asap.

 

Removing the existing HPAs is up to you.  They only take up a few kbs.  I've still got HPA on some of my old drives which I never bothered to remove.  The HPA sitting on the drive doesn't do any harm, it is the motherboard's ability to create new ones that is dangerous.

Link to comment

That's good news.  Definitely update that BIOS asap.

 

Removing the existing HPAs is up to you.  They only take up a few kbs.  I've still got HPA on some of my old drives which I never bothered to remove.  The HPA sitting on the drive doesn't do any harm, it is the motherboard's ability to create new ones that is dangerous.

Having an HPA on a drive will complicate the upgrade to unRAID 4.7 (and potentially any others that follow).

It does not stop you from using 4.7, but the disk will be detected as having changed in size compared to the 4.6 series and earlier, and the array will not start until you set a new initial disk configuration.  This has happened twice that I'm aware of; in both cases I guided the user through removing the HPA as they were preparing to upgrade.

 

Having an HPA on a data disk is NOT a safe way to handle the existence of an HPA. Eventually that disk will fail (all disks fail; it is just a matter of time).  When the disk fails, the BIOS will choose a different disk to add the HPA to.  When it does, that drive will change in size and be considered a different disk than the one originally assigned.  At that point you have one failed drive and one wrong disk.  The array will not start with two invalid disks.  You've effectively lost the data on them unless you can convince unRAID to accept the disk with the changed size.

 

Please do update your BIOS.  It is the only way to be safe (other than replacing your motherboard).

 

Joe L.

 

Link to comment
2. For the extra 1TB do I add a 7th drive (I've got another 1TB sitting in the closet that was replaced by one of the 2TB drives) or do I replace another 1TB with a 2TB? Technically I have the space and the SATA ports available, but I'm wondering if it just makes more sense to upgrade the existing drives.

 

Since you have a 1TB disk in the cupboard, this is (licence allowing) the cheapest way to add another 1TB of storage. It does depend on your unRAID licence status, though, and in an ideal world you want the minimum number of disks for the amount of storage you need.

 

The PSU is fine, BTW (the CX400 has a 30A 12V rail); you're using around 30% of that currently, and it will drop marginally as you replace more 1TB drives with 2TB green drives. The 2TB WD Green, Black, RE-GP and RE4 series all use approximately the same as or less than your current 1TB Samsung and WD drives (depending on the model). You might even make a big saving if you have 4-platter 1TB WD drives (~1A per drive).

Link to comment

Please do update your BIOS.  It is the only way to be safe (other than replacing your motherboard).

 

BIOS was updated successfully last night. HPA is disabled by default now.

 

It sounds like removing the HPA would be a good thing before upgrading to 4.7, yes?

Link to comment

Please do update your BIOS.  It is the only way to be safe (other than replacing your motherboard).

 

BIOS was updated successfully last night. HPA is disabled by default now.

 

It sounds like removing the HPA would be a good thing before upgrading to 4.7, yes?

 

I think it is required to be done before the 4.7 upgrade if you want the array to function correctly from the get-go.

Link to comment
2. For the extra 1TB do I add a 7th drive (I've got another 1TB sitting in the closet that was replaced by one of the 2TB drives) or do I replace another 1TB with a 2TB? Technically I have the space and the SATA ports available, but I'm wondering if it just makes more sense to upgrade the existing drives.

 

I'd recommend using the 1T in the closet for now, unless you have some other use for that drive.  And in that case, I'd recommend getting another disk and adding it rather than replacing an existing one.

 

But once you are full, the decision of when to upgrade gets harder. First, it is hard to part with a disk that you spent a significant amount on only a year or two ago. Second, the new disk only nets you the delta in capacity for what you are paying (a 1T to 2T upgrade nets you only 1T), so the cost per gig of the added space is really quite a bit higher. Third, you need to weigh your ability to wait a little longer for the next higher capacity to come out and drop to a reasonable price (going from 1T to 3T is equivalent to upgrading two 1T drives to 2T in terms of additional space). Fourth, consider whether you can repurpose or easily sell the disk you are replacing.  Trumping everything, however, is urgency of need: if you need the space and upgrading is the only option, you have to upgrade.  In my opinion, the energy savings is not a primary consideration.
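To put rough numbers on that second point, using the prices quoted earlier in the thread:

upgrade a 1T to a 2T:     $70 spent for +1T net = $70 per added TB
add a 2T to an open port: $70 spent for +2T net = $35 per added TB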

 

Right now 2T drives are well priced, but 3T drives are coming. It's a tough call whether to wait or buy now. For new drives to add to the array, or to upgrade a 500G or even a 750G, I'd consider the 2T if I needed the space; but for upgrades of 1T or larger, if your need is not urgent, you might want to wait for the 3Ts (or even 4Ts).

Link to comment

Alright, so after the BIOS upgrade, I decided it was time to remove the HPA from the offending disk.

 

I read the thread where Joe walked someone through the hdparm command and I think I'm pretty comfortable with that. Before proceeding, I ran the command without setting a size to confirm that I had the right disk and received this:

 

root@Vault:~# hdparm -N /dev/sdc

/dev/sdc:
max sectors   = 1953523055/7368112(1953525168?), HPA setting seems invalid (buggy kernel device driver?)

 

As expected the numbers match what I see in the syslog.

 

I ran it on another couple disks to see what a "good" disk looks like and got a couple interesting results:

 

root@Vault:~# hdparm -N /dev/sde

/dev/sde:
max sectors   = 3907029168/14715056(18446744073321613488?), HPA setting seems invalid (buggy kernel device driver?)
root@Vault:~# hdparm -N /dev/sdd

/dev/sdd:
max sectors   = 1953525168/7368112(1953525168?), HPA setting seems invalid (buggy kernel device driver?)

 

sde is a recertified WD20EARS that I just received through an RMA with WD that appears to be working OK, but I'm not sure what to make of that output.

 

sdd is a WD10EADS that has been running great for some time.

 

Do I have anything to worry about here? Every disk I run the hdparm -N command on gives me the HPA invalid message.

Link to comment

As the hdparm message says, it is a buggy kernel driver that mis-reports the native size on some disks/disk controllers.

 

Don't worry about the first disk, which is working, or the recertified disk. 

 

One huge clue is the reported size.  If it ends in "168", it does not have the HPA added by the BIOS. 

If it ends in "055", as in your /dev/sdc, it does.

 

/dev/sdc:

max sectors  = 1953523055/7368112(1953525168?), HPA setting seems invalid (buggy kernel device driver?)

 

/dev/sde:

max sectors  = 3907029168/14715056(18446744073321613488?), HPA setting seems invalid (buggy kernel device driver?)

root@Vault:~# hdparm -N /dev/sdd

 

/dev/sdd:

max sectors  = 1953525168/7368112(1953525168?), HPA setting seems invalid (buggy kernel device driver?)
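For reference, once the BIOS can no longer recreate it, removing the HPA comes down to a single hdparm call; the leading "p" makes the restored native max permanent. Treat this as a sketch only, and double-check the device letter and sector count against your own syslog first:

hdparm -N p1953525168 /dev/sdc   # restore the full native size, permanently

A reboot afterwards is a good idea so the kernel re-reads the new capacity.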

Link to comment

Well, I finally got around to seeing what this little AMD processor can do, and, well, I think I'll need an upgrade. I had a download needing repair through SABNZBD/par2 and once the repair kicked in, the CPU hit 100% and the download speed tanked. Given that nothing else was going on at the time, this was OK, but I wouldn't mind having a bit more headroom on the CPU.

 

Any recommendations on X2 vs. X3 vs. X4 these days?

Link to comment

On the Intel side I can absolutely recommend something like an Intel i3 530 CPU for a solid balance of price, power, and performance.

 

Since you already have an AMD motherboard, first make sure it can handle any CPU upgrade you're thinking of. Generally the more cores the better. Last I looked there wasn't any significant difference in price between the X3 and X4, so I'd opt for at least the X4.

Link to comment

Archived

This topic is now archived and is closed to further replies.
