Would this be a good board to use for an up-to-20-drive project?



Hello, I have this board from an old build: http://www.newegg.com/product/product.aspx?Item=N82E16813121059

 

I have a Q6600 from the old build as well, with 8GB of RAM on it. Would it be a good option for a somewhat big array?

 

I was thinking of growing to 20 hard drives in a Norco or Antec 1200 case with it. It comes with 8 SATA ports already (on 2 different controllers), but only has 2 PCI Express slots.

 

Would it be a solid bet?

 

Thank you very much

Link to comment

I think it has a good shot, looking at the specs on paper.

 

It would consume slightly more electricity than a newer motherboard/CPU combo, but not enough to throw it away and start over. It still has good life in it.

 

It would be a lot more horsepower than unRAID needs. You could almost downgrade the CPU if you already have one lying about.

 

I would disable the sound in the BIOS, and anything else you're not using.

 

It has no onboard video; that could be a slight issue.

The PCIe bus is 3 x16 slots (1 electrically x16 or x8, 1 electrically x8, 1 electrically x4); keep that in mind when putting HBAs on it.

I would look for a cheapo video card for it.
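Regarding the lane split above, here is a rough back-of-envelope check of what each slot can feed an 8-port HBA. The per-lane and per-drive numbers are assumptions typical of PCIe 1.x hardware of that era, not measurements:

```python
# Back-of-envelope slot bandwidth check (assumed figures, not measured):
#   - PCIe 1.x: roughly 250 MB/s usable per lane
#   - older 7200 RPM drives: roughly 100 MB/s sustained
PCIE1_MB_PER_LANE = 250
DRIVES_PER_HBA = 8

slots = {"x16 (or x8) electrical": 16, "x8 electrical": 8, "x4 electrical": 4}

for name, lanes in slots.items():
    slot_bw = lanes * PCIE1_MB_PER_LANE
    per_drive = slot_bw / DRIVES_PER_HBA  # all 8 drives reading at once
    print(f"{name}: ~{slot_bw} MB/s total, ~{per_drive:.0f} MB/s per drive")
```

Even the x4 slot works out to roughly 1000 MB/s on paper, so an 8-port card there isn't starved during a parity check; the x16/x8 slots have plenty of headroom.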

 

 

EDIT: I got ninja'd while posting. There is your answer: yes, with a minor mod.

 

Link to comment

I will try to boot it with no video card. I will only need 2 PCI Express RAID cards, so that should be no problem, right? I'm not sure of the implications of the 3 x16 slots (1 electrically x16 or x8, 1 electrically x8, 1 electrically x4) when it comes to RAID cards, but I'm figuring I will need 2 max from what I saw in the build thread.

 

I'm just finishing reinstalling Ubuntu Server on mine to see if everything is working 100%, and I will see if it POSTs without a graphics card. But yeah, like adammerkley said, if worst comes to worst I'll get a cheap PCI video card.

 

Thanks a lot for the help.

 

I will also need a PCI LAN adapter since the onboard one is fried. The Intel PCI ones are good enough to take advantage of gigabit, correct?

Link to comment

Well, it turns out it booted fine without a graphics card. :)

 

So I guess it's still good to buy a PCI one for the first-time install and emergency stuff, but it does fine without it. It's good to take that heat out of the equation (I still had an 8800 GTX in mine, which obviously was going to be replaced, heh).

 

Also, my PCI Express LAN card was choking in the last PCI Express slot (doing git clone xxx to get something to compile would reach 17% and then the LAN died; I switched it to the second PCI Express slot and had no problem), so I guess I wouldn't want a RAID card in the bottom one.

Link to comment

Hmm... so you have to buy a PCI video card and a PCI network card because something is fried on the board.

Plus running certain cards in particular slots gives questionable performance.

 

OK, now I wouldn't use this board. I wouldn't spend much money on technology I know has issues from the start.

Maybe if I could work a few trades with someone, I would use it.

 

When you build a 20-drive server you start to really depend on it, and when it fails, you'll be without it and have to deal with fixing it. It can be a PITA when you have to replace the board later; assigning drives to the proper slots will need to be done carefully.

 

As a testbed and something to experiment with, sure. However, I would not run my server long term on this board.

 

I think the PCI network card did it for me. LOL. When you start to move data and depend on a file server, you don't want to be choked there. But that's me. Don't let it stop you. It might be OK for you.

Link to comment

I will use it at the start as I'm growing the array, then switch it out if it gives problems. I have another server, so I won't depend too much on this one at the start while I see how it behaves.

 

Anyway, I've always read that PCI is enough for gigabit connections (and reading the PCI specs, it seems like it), and yeah, that bottom PCI Express port is choking, but that one would stay unused anyway.
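For what it's worth, the arithmetic behind the "PCI is enough for gigabit" rule of thumb looks like this; both numbers are theoretical ceilings, and the shared PCI bus never quite reaches its maximum in practice:

```python
# Theoretical ceilings only; real-world throughput is lower on both sides.
pci_bus_MBps = 32 * 33.33e6 / 8 / 1e6   # 32-bit bus at 33 MHz: about 133 MB/s, shared
gigabit_MBps = 1000e6 / 8 / 1e6         # 1000 Mb/s line rate = 125 MB/s

print(f"PCI bus ceiling:      ~{pci_bus_MBps:.0f} MB/s (shared by every PCI device)")
print(f"Gigabit Ethernet max: ~{gigabit_MBps:.0f} MB/s")
```

So a single Intel PCI NIC can get close to saturating a gigabit link, as long as nothing else busy is sharing that PCI bus.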

 

I will keep it in mind though; I'm just aiming for a smaller initial investment, as the Norco case alone will be a good chunk, plus hard drives at the prices they are at the moment.

 

But yeah, I won't want to skimp on a motherboard when investing so much in the rest; I just want to see how it goes at the start, since this board has been with me for some years now and I don't like seeing it collect dust.

Link to comment

Hmm... so you have to buy a PCI video card and a PCI network card because something is fried on the board.

Plus running certain cards in particular slots gives questionable performance.

 

Unlike server boards, shared PCIe slot performance has been pretty common on desktops since the dawn of SLI.

It is all marketing "smoke" to make you "pee your pants" over your new board with 4-way SLI and 4 pretty x16 ports.

You then get home to find out the shared bandwidth is about x16 for the entire PCIe subsystem.

You end up with some horrid combo like x8-x4-x4-x2 once fully populated with 4 x16 cards.

To make it worse, you might find out the last port or two are shared with things like the USB3 controller and you can't use both at once.

This is why true server boards are better for RAID/HBA cards.

 

For home server builds on desktop boards, this is usually not a big issue as long as the add-on SATA/SAS cards are in ports 1 and 2.

 

 

 

As far as the fried NIC goes... while I tend to agree with you in general that a fried NIC is a sign of what's to come...

I will point out that at my work in the web farm, we have literally hundreds upon hundreds of Intel desktop-quality boards in "Rackables" racks.

Some of these boards are even of Pentium 4 vintage (not many of those left; mostly failover and high-load usage now).

It's not uncommon for a NIC to fry on these things.

When that happens, we just pop a PCI 10/100 card in the PCI/PCIe slot and move on... (10GbE to the load balancer, 1GbE to each rack, 100Mb to the web host).

It is not uncommon at this point in our farm to see 1 or 2 per rack like this (some of these are 5+ years old now).

They then usually run just fine like this for years in a production environment... then again, some don't make it.

 

We are phasing out the Rackables and moving to virtual clusters on HP blades for even higher density; we have just always run these things to their death with band-aids.

(I find it funny that we are pulling out HP G6 blades and putting in G7s while the 7-year-old boxes keep humming along.)

 

Now that I have gotten way off topic... sorry, bored at work.

 

I think he'll be fine running a home server on what he has for a while (minus any defective hardware), as long as he is aware of the limitations. Maybe Santa will bring him an X8 or X9?

 

 

Link to comment

It's fine, and I was being very honest and direct.

 

Having to purchase two cards: one I agree on, and one because something died on the motherboard.

 

I wouldn't put my valuable data on it long term, much less have to deal with swapping it out later on.

My time is precious and I hate having to waste it doing the same thing over again.

 

So, to re-quote the original post:

Would it be a solid bet?

 

My answer is now: no. I would not bet my data on it.

You're starting with trouble on the board right from the get-go.

 

I once had an old VP6 board that did strange things. It worked like a champ 99% of the time.

What I ended up discovering was that a very specific pattern of data would lock up the board, somewhere in the memory controller.

Everything else worked fine, but God forbid this pattern showed up: lock city.

After careful inspection I found a capacitor that was marginal and starting to dry out.

Little by little, it was corrupting data on the hard drive as time went on.

 

Would I throw it in a junk case and test with it, beat the hell out of it to really test an environment out? Sure.

 

Everyone has different priorities; I would certainly tell you to test it out well.

Do all kinds of md5sum checks and/or use TeraCopy in test mode to ensure that what you put on the server is what you get off it.
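If you'd rather script the check than run md5sum by hand, here is a minimal sketch of the same verify-after-copy idea in Python; the source and destination paths are placeholders, not anything specific to this build:

```python
import hashlib
from pathlib import Path

def md5_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Hash a file in 1 MB chunks so large media files don't need to fit in RAM."""
    h = hashlib.md5()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Placeholder paths: the local source directory vs. the copy on the server share.
source = Path("/mnt/source")
dest = Path("/mnt/server/share")

for src_file in source.rglob("*"):
    if not src_file.is_file():
        continue
    dst_file = dest / src_file.relative_to(source)
    if not dst_file.exists() or md5_of(src_file) != md5_of(dst_file):
        print(f"MISMATCH or missing: {src_file}")
```

Anything it prints is a file worth re-copying and re-checking before you trust the server with the only copy.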

 

We've seen issues in the past where people put data on a server and what they pulled back did not match.

It took a long time to decipher those issues.

Link to comment

It's fine, and I was being very honest and direct.

 

I totally hear you.

I also know that working with a budget sucks.

I also try to leave all my old data on its source drives for months to a year after I bring up a new storage server.

I am only just now starting to reclaim my WHS drives from a box I copied to unRAID and decommissioned in July.

And that's only due to the drive shortage.

Link to comment

Yes, I appreciate the honesty. The LAN port burned out a long time ago, before I had UPSes, and since it was my main PC at the time I didn't want to RMA it and lose the board when I could just use a replacement PCI card.

 

But yeah, I will be moving to another board as soon as possible; I just need to figure out which one, and where I will be able to find one in Europe. I saw the ECS one and it looked interesting, but I can't find it at European retailers.

 

Thanks for your help

Link to comment

I suppose if you know the reason the port died and that's been taken care of, then you're OK.

But running a card in the PCIe slot and getting questionable performance, as you indicated, is another point of concern.

 

If there is a problem with the bus, memory, northbridge, PSU, or SATA cables, unRAID will bring it out at the worst time, i.e. during a data recovery operation.

 

When everything is fine, unRAID is light on the machine (except for initial spin-ups).

But when a recovery issue pops up, all drives will spin up and be read in order to emulate the missing drive and/or rebuild it.
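To put a rough number on how long that worst case lasts, here is a quick estimate; the drive size and rebuild rate are assumed figures for illustration, not benchmarks of this board:

```python
# Rough parity-rebuild duration estimate (assumed numbers, not a benchmark):
drive_size_gb = 2000        # e.g. a 2 TB data drive being rebuilt
rebuild_rate_mb_s = 80      # average rate across the whole platter

hours = drive_size_gb * 1000 / rebuild_rate_mb_s / 3600
print(f"~{hours:.1f} hours with every drive in the array spinning and being read")
```

For that entire window every drive is being read flat out, which is exactly when a marginal slot, cable, or PSU will show itself.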

Link to comment
