
Can an inexpensive build include stability?


JP


So I feel like I've made a mistake building my current unRAID server.  Where I went wrong was that instead of just buying new equipment, I tried to take some dated components I had around and make them work well with unRAID.  Don't get me wrong, it can certainly be done, but personally I feel like I'm throwing more money into a system that doesn't look very elegant.  Basically, the motherboard I was planning to use was an ASUS A78NX-VM, which required me to purchase a PCI SATA controller card and an Intel Gigabit Ethernet controller card.  Not a huge investment, but now it appears I'll need to buy some RAM as well, since all I have is 256 MB.

 

So as I start spending more money, I don't know why I didn't just put this money into new equipment to start with.  I guess part of the problem is I already built a test unRAID server with this equipment and it worked fine.  However, now that I've purchased some 2TB drives, I need more RAM to format them.

 

So, I've tried looking through the hardware compatibility information, and it is hard for me to tell what is a good price/performer in regards to a motherboard, CPU, and RAM.  Has anyone had good luck with a particular setup?  Are there any combos out there that people know of that seem to be working well?  Naturally, I'm trying to keep most of my needs on the motherboard and not add additional controller cards.  I won't have a huge number of drives in this machine, so it isn't necessary for now.  Thanks for any recommendations.


Look at the Recommended Builds page.

 

How many hard drives are you planning on?  I'd recommend you start with a mobo with 6 SATA ports.

 

Here are a couple of combos, for example.  They come with a dual-core CPU, which is overkill.  You could probably get them separately and buy a Sempron 140 for about $100.  You could buy a 4-SATA-port board somewhere in the $40-$50 range and use the PCI SATA card to add 2 ports as well.

 

http://www.newegg.com/Product/ComboDealDetails.aspx?ItemList=Combo.436272

 

http://www.newegg.com/Product/ComboDealDetails.aspx?ItemList=Combo.436273

 

The budget build mobo is available at some Fry's for $50.

 

Edit:  I bought a combo from Fry's recently for $50 that was an Intel E3400 CPU + some G31 mobo.  They run specials frequently.


Thank you!  This helps a great deal.  After reading this I decided I would go ahead and buy some RAM for my somewhat antiquated build.  I'm a little disheartened I spent $50 on RAM that is so old (PC2700), but oh well.  Once I dropped the RAM in, the disk formatted right away without a hitch.  However, I probably need to do a bit more research, because I did rather expect faster transfers.  Without the parity disk assigned, my transfer rates plateau at around 37 MB/s.  Thanks again.


No, it's a gigabit network.  Only two 2TB drives on the PCI card, and actually on the whole server.  I want to do some testing like this before I commit to expanding the array at all.  Is 37 MB/s across the network really that bad if the parity drive isn't assigned?  I'm giving some thought now to just scrapping this build altogether and building the Biostar budget build.  I'm doing a parity sync at the moment to see what kind of transfer speeds I get after doing that.  Any chance you might know what transfer rates most people get with and without the parity drive assigned?
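As a rough sanity check on that 37 MB/s figure (a sketch with assumed numbers, not a measurement of this network): raw gigabit line rate works out to about 125 MB/s before protocol overhead, so the network itself is unlikely to be the ceiling here.

```python
# Sanity check: how far below the gigabit ceiling is a 37 MB/s transfer?
# Assumed numbers: raw line rate only; real-world TCP/SMB overhead
# typically leaves well under this usable, but still far above 37 MB/s.

GIGABIT_BITS_PER_SEC = 1_000_000_000
line_rate_mb_s = GIGABIT_BITS_PER_SEC / 8 / 1_000_000  # bits -> megabytes per second

observed_mb_s = 37
utilization = observed_mb_s / line_rate_mb_s

print(f"Raw gigabit line rate: {line_rate_mb_s:.0f} MB/s")
print(f"Observed transfer:     {observed_mb_s} MB/s ({utilization:.0%} of line rate)")
```

At roughly 30% of the raw line rate, the bottleneck is more plausibly on the server side (disks or the PCI bus) than in the switch or cabling.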



With parity I get somewhere around 30-35 MB/s with 7200 RPM drives.  5400 RPM drives with parity enabled will probably get between 25 and 30 MB/s. 

 

Sorry, but I can't advise about a rate without a parity drive...

 

Joe L.


I think I've gotten myself confused, and here is why.  If the parity sync rate is indeed indicative of what my write performance will be, as you mentioned, then at first glance I should probably be happy, because it is reading ~40,000 KB/s (~39 MB/s).  However, if this is the case, then why in the world was my transfer speed without a parity drive assigned 37 MB/s?  I thought without the parity drive assigned you should see an increase in write performance, not a decrease.

 

I'll be interested to see what my write performance is once the parity sync is over (6 more hours) with the parity drive left assigned.  If it is indeed anywhere in the 25-40 MB/s range then I should be happy (right? :)), since Joe L. already mentioned that is reasonable for 5400 RPM drives.  However, I can't say I'm optimistic, since I seemed to top out at those speeds without a parity drive assigned.


Parity sync just finished.  Writing to the server across the network plateaus at 19 MB/s, and reading from the server across the network is 41 MB/s.  Is this what you would expect given the limitations of the following hardware?

 

Motherboard:  ASUS A78NX-VM

CPU:  AMD 2500+

RAM:  1 gig PC2700

Drives:  2 TB Samsung 5400 RPM (Parity) and 2TB WD 5400 RPM (Data)

PCI SATA Controller Card:  Rosewill 4-port

PCI Gigabit Ethernet Adapter:  Intel

 

Others who are more knowledgeable can probably tell me if I'm wrong, but from what I've been reading, it is the PCI SATA controller card that is probably causing some choking here (bottleneck).  I'm giving some thought to just scrapping this server and buying the "Budget Build" components which would allow me much better expandability since all the SATA ports would be on the motherboard as well as the Gigabit Ethernet Adapter.  I have a bad feeling that if I were to add another drive to this SATA controller card I would really see performance drop, since they would all have to go through the same PCI slot.  Am I wrong?


it is the PCI SATA controller card that is probably causing some choking here (bottleneck)

Correct.

 

I'm giving some thought to just scrapping this server and buying the "Budget Build" components which would allow me much better expandability since all the SATA ports would be on the motherboard as well as the Gigabit Ethernet Adapter.

Your call.  Your performance actually isn't too shabby considering you are using older parts.  But if you want more speed and expandability, the Budget Build is a great option.

 

I have a bad feeling that if I were to add another drive to this SATA controller card I would really see performance drop, since they would all have to go through the same PCI slot.  Am I wrong?

This is also correct.  You will likely see a drop in performance if you add more drives to the PCI card.  A single PCI card can generally handle 2 drives without much impact on performance.  Any more than that and you will notice the slowdown.
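The shared-bus math behind this can be sketched as follows.  The ~100 MB/s practical figure for classic 32-bit/33 MHz PCI (133 MB/s theoretical) is an assumption for illustration, not a measured value for this particular card or board:

```python
# Rough model of a shared 32-bit/33 MHz PCI bus: every drive on the
# card splits one ~133 MB/s theoretical (~100 MB/s practical) bus.
# The 100 MB/s figure is an assumed typical value, not a measurement.

PCI_PRACTICAL_MB_S = 100  # assumed usable bandwidth of classic PCI

def per_drive_bandwidth(drives_on_card: int) -> float:
    """Bandwidth each drive gets if all drives on the card are active at once."""
    return PCI_PRACTICAL_MB_S / drives_on_card

for n in (2, 3, 4):
    print(f"{n} active drives on the card -> ~{per_drive_bandwidth(n):.0f} MB/s each")
```

With two active drives each still gets roughly 50 MB/s, which is above what a single 5400 RPM drive can sustain anyway; by four drives the per-drive share drops to around 25 MB/s, which is where the slowdown becomes noticeable.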


You now have a system up and running that is pretty fast for just repurposing old parts.  Unless you need more hard drives right now, I'd suggest waiting to buy.  In the next 60 days there will be lots of back-to-school marketing campaigns, and some nicely discounted combos are surely going to be available.

And if you can wait until the November sales, stuff really gets discounted.


Thank you for all the comments.  It helps me a great deal.  I guess my biggest question now is how much performance gain I should expect to see if this same system were on the "Budget Build" (BIOSTAR A760G, AMD Sempron, 2 GB of RAM).  If anyone is using something like the "Budget Build", would it be possible for you to tell me what kind of speeds you are seeing when writing to the server without the parity drive assigned, the write transfer rate with parity, and the read rate?  This would give me at least an idea of what I should expect to gain with the Budget Build.  Naturally, I am only looking for estimates here since I am aware there are other factors to consider (drives being used, network factors, etc.).

 

Ultimately, I probably see myself still moving to the Budget Build now.  It is the expandability that makes me think it is worth it.  I would just rather have something I know I can grow into by adding more drives, rather than having to worry about changing out the motherboard, CPU, and RAM because I am forced to later on when expanding the array.  I probably spent $100 on new components just to get this dated equipment to a state where unRAID could use it.  Part of me wishes I had just started by spending this money on the budget build.  I am shocked the budget build really only costs $100 for the motherboard, CPU, and RAM.  Just an awesome package in my mind.  Anyway, not a total loss.  I've learned a great deal along the way, and I can take most of this equipment and build a system that will be fine for my six-year-old son.


I designed the Budget Build around my server because it was cheap and it works well.  See my sig for the complete specs.  I generally see transfer speeds of about 25 MB/s writing directly to the parity-protected array.  Sometimes it will spike up to 40 MB/s and sometimes it will dip as low as 10 MB/s, but 25 MB/s is about the average.  Using a cache drive I get 60-70 MB/s writes.  My read speeds average 40 MB/s (low of around 30 MB/s, peak of around 45 MB/s).  I can't tell you transfer speeds without a parity drive assigned because, well, I'm not going to risk running without parity just to get those numbers.  I expect they would be around the same as my cache drive transfers, though.

 

Other considerations: my network is fully Gigabit LAN, my data disks are mostly 5400 rpm green drives, and my parity disk is a 7200 rpm non-green drive.  I also have two drives hooked up via a PCI card - if I had any more hooked up, I would expect slower speeds.


Thanks.  This is a huge help.  One last question, if you don't mind.  Using real-world statistics makes things much clearer to me.  The fact that the write speeds with the Biostar are somewhat similar to my own helps set my expectations, but as I mentioned, I think it might be a smart move for me just for the expandability alone.

 

So this has me thinking.  Would moving to a completely different motherboard, like the recommended SuperMicro, offer any performance gain (given all other items are constant)?  My assumption is it wouldn't, but I thought it would be worth asking.  It seems to me that once you have a motherboard with onboard SATA controllers, it is the hard drives that become the bottleneck.  Does that sound accurate?

 

Finally, thanks for mentioning your write speeds with a cache drive.  Wow, what a difference.  I saw this mentioned a few times in the forums but hadn't fully had time to research or understand it.  If you are seeing that kind of difference using one, then it is certainly something I want to look into.

 

Addendum:  Rajahal, I just wanted to thank you for your posts around the cache drive.  As I started researching it I found that much of the consolidated information was done by you.  I feel like I'm covering quite a bit of ground because of your effort here.  Thanks again.


It looks like I have an added wrinkle with this current build as well.  I assumed that since I got smooth playback from DVD and Blu-ray when this build was on a 10/100 network, using IDE drives, and with only 256 MB of RAM, it would do just as well on a gigabit network, SATA drives, and 1.25 GB of RAM.  Well, I just transferred some test DVDs and Blu-rays, and they all exhibit slight stuttering from the server.  My guess is the issue resides with the PCI SATA controller card, but of course I can't be sure.  This certainly gives me another reason to move to the Budget Build rather than staying with this current build.  It just seems a little odd that my read speeds are still around 40 MB/s when actually transferring a file, yet simple DVD playback stutters.



If you had no issue on a 100Mbit LAN and you do with a 1000Mbit LAN, the issue is your LAN, not the disk.

If you had no issue on a 100Mbit LAN and you do with a 1000Mbit LAN, the issue is your LAN, not the disk.

 

Thanks Joe.  However, please keep in mind it was more changes than just the LAN: I upgraded to a 1000Mbit switch, added a SATA controller card, added a 1000Mbit Ethernet card, added a gig of RAM, and added two 2TB SATA drives.  It indeed improved performance for file transfers, but for whatever reason normal DVD and Blu-ray playback is inhibited.  I moved everything back to the 10/100 LAN just to test, and indeed the issue persists, so I can only guess the issue resides with the server itself.


I don't know.  I still feel like it is not the LAN, and here is why: it is all new equipment, very simple, and when I revert back to the old LAN, where Blu-rays and DVDs streamed fine in the beginning, the issue persists.

 

I am currently using a new D-Link 8-port gigabit switch, all new CAT6 LAN cables from Monoprice, and there are only three connections to the switch (one from the router, one from the HTPC, and one from the unRAID server).  The switch shows that the HTPC and unRAID server are both connected at gigabit, whereas the router connection is 100Mbit.

 

I elected to change everything and take things back to my 10/100 network, using only my router essentially as the switch, as well as using all my previous CAT5E cables.  I also enabled the onboard LAN on the motherboard, again in an attempt to get everything back to its original state, where DVD and Blu-ray playback was fine.  The stuttering still exists.

 

Ultimately, it is frustrating not knowing exactly what the issue is, but I don't see myself putting much more time into this, since I've pretty much decided to move to the budget build anyway.  I suspect now that the Rosewill card isn't the issue.  It may just be that the PCI bus isn't very stable on this motherboard.  I do believe it uses the nForce2 chipset, which I think others have said isn't very stable with unRAID.  Who knows.


I have the same system that Rajahal recommended.  It's not speedy because I'm running WD green drives, but honestly I only write to it now and then.  Most of the time I read off it for my HTPC, and I've had 3 or 4 connections going at the same time and it hasn't skipped a beat yet:

 

- one 480p rip going
- two 720p rips going
- somebody viewing photos

 

It's super quiet, and other than a glowing case, I forget half the time it's even in my office.  I have 4 wired connections that can connect to it at any time, as well as 4 wireless connections, and so far I haven't seen it glitch once, all over gigabit wired and wireless G and N.

 

For me the speed of writing was not the biggest deal.  For me it was lower power, silence, and expandability.  I wanted a system that would more or less sit there and answer when I asked it to.  I didn't want to babysit a loud, power-hungry machine that reminded me "hey, I'm here."


So this has me thinking.  Would moving to a completely different motherboard, like the recommended SuperMicro, offer any performance gain (given all other items are constant)?  My assumption is it wouldn't, but I thought it would be worth asking.  It seems to me that once you have a motherboard with onboard SATA controllers, it is the hard drives that become the bottleneck.  Does that sound accurate?

 

Simply switching from the Budget Build motherboard (the Biostar A760G M2+ - is it sad that I have that memorized?) to the recommended SuperMicro will not improve performance.  However, switching to a board that has multiple PCIe x4 or faster slots will allow for more expandability without losing performance.  The Biostar board maxes out around 14 drives (6 onboard + 8 on PCIe) before you have to start resorting to the PCI bus.  A board with two PCIe x4 or faster slots will allow you to go up to 22 drives (6 onboard + 16 on two PCIe cards).  Of course, this will also add around $330 to the cost of your server (the mobo will likely be in the $150 range, each SuperMicro AOC-SASLP-MV8 card costs around $100, and you'll also need breakout cables at $15-30 each).  If money is a concern and 14 drives sounds reasonable to you, I don't think you can do much better than the Budget Build.
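The port-count and cost arithmetic above can be sketched like this.  The figures (6 onboard ports, 8-port cards, ~$100 per card, ~$20 per cable) are the rough numbers quoted in this post, not current prices:

```python
# Expandability math: onboard SATA ports plus one 8-port PCIe card per
# fast slot.  All prices are the approximate figures from the post.

ONBOARD_PORTS = 6
PORTS_PER_CARD = 8          # e.g. an 8-port card like the AOC-SASLP-MV8
CARD_COST = 100             # per card, approximate
CABLES_PER_CARD = 2         # one breakout cable per 4 ports
CABLE_COST = 20             # midpoint of the quoted $15-30 range

def max_drives(pcie_slots: int) -> int:
    """Drives supported using onboard ports plus one 8-port card per slot."""
    return ONBOARD_PORTS + pcie_slots * PORTS_PER_CARD

def expansion_cost(cards: int) -> int:
    """Approximate cost of the add-on cards and their cables."""
    return cards * (CARD_COST + CABLES_PER_CARD * CABLE_COST)

print(max_drives(1), expansion_cost(1))  # Budget Build ceiling: 14 drives
print(max_drives(2), expansion_cost(2))  # two-PCIe-slot board: 22 drives
```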

 

To be honest, I'm a bit confused myself about the bottleneck when you are using only onboard SATA ports.  I've heard some people say it is the hard drives, and others say it is the network.  It also depends on what kind of performance you are talking about - 5400 rpm green drives could be a bottleneck if we are talking about write speeds, for example.  For read speeds they are adequate.  Basically, don't worry about it.  ;D

 

Finally, thanks for mentioning your write speeds with a cache drive.  Wow, what a difference.  I saw this mentioned a few times in the forums but hadn't fully had time to research or understand it.  If you are seeing that kind of difference using one, then it is certainly something I want to look into.

 

The cache drive is BY FAR the cheapest and easiest way to increase your server's perceived write performance.  In case you haven't come across it yet, you may find this thread of interest:

To Cache drive or not to Cache drive

 

Addendum:  Rajahal, I just wanted to thank you for your posts around the cache drive.  As I started researching it I found that much of the consolidated information was done by you.  I feel like I'm covering quite a bit of ground because of your effort here.  Thanks again.

You are quite welcome.  Glad you have found my efforts useful.


Archived

This topic is now archived and is closed to further replies.
