megalodon

New 24 Bay Rackmount Build


Hi All,

INTRODUCTION

 

For the last 6 months I've been looking at building a new home server. At the moment I have a QNAP 8-bay NAS and have now filled its 16TB capacity. The NAS stores my large collection of Blu-ray ISO backups along with some photos and lossless music files, but it is primarily used for movie storage; my front-end HTPC is a passive ZENO build that runs My Movies for Windows Media Center.

 

MY OPTIONS

 

So I have purchased a 24-bay rackmount case from a UK supplier that is based around the Norco RPC-4224 but is only 550mm deep as opposed to the Norco's 600mm, so it can be mounted in my USpace 7210 Midi cabinet, which is 600mm deep. The cabinet has a mesh front and back, so cooling should not be an issue, but it will be in my guest bedroom so I need it as quiet as possible. My problem is choosing the hardware, because I'm going to run either unRAID or Windows Home Server 2011 with FlexRAID. I know this forum is based around unRAID, but I have to look at all my options: I may be running 3 HTPCs, and because I'm a big fan of My Movies, Windows Home Server 2011 and FlexRAID may be my only option. If I'm just using 1 HTPC I could run unRAID, although I won't get the automatic disc-copy feature of Windows Home Server 2011. Money is always important, but I will pay it to get good performance, reliability and future-proofing.

 

THE BUILD

 

My build is fairly similar to what Johnm and others on this forum have been having success with, but with a few changes, mainly because I live in the UK and some parts are easier to obtain than others here. I'm not a seasoned server builder, but I built my passive HTPC, I'm fairly competent with most things, and I will spend time studying and reading about builds and components. Power consumption of this server is an issue due to the ever-increasing cost of electricity here in the UK and the fact it will be on 24/7.

 

CPU: Intel Xeon E3-1265LV2 (45W TDP), Intel Xeon E3-1220LV2 (17W TDP) or Intel Xeon E3-1240V2 (69W TDP). (I need a balance between performance and low power.)

 

Motherboard: Supermicro X9SCM-IIF mATX. My case is 50mm shorter than most but will still take a full ATX motherboard. (This seems to be the most current and best-performing motherboard around, and it has IPMI and KVM built in. I've spent a long time reading Johnm's posts, which have been a fantastic source of information to me, along with a lot of information on servethehome.com. If I didn't need KVM then I would maybe look at an i3 processor and motherboard.)

 

Case: X-Case RM 424s, 550mm deep. For a review of this case, see this link:

 

Storage Adapters: I have spent a long time reading about the M1015 and could use three of them, but there seems to be an issue with the Supermicro X9SCM-IIF running more than two of them, and as the M1015 is now expensive in the UK I looked at getting an LSI 9201-16i and an LSI 9211-8i. I only have 12 hard drives at present, so I could just get the LSI 9201-16i for now and the LSI 9211-8i at a later date. They would cost me £180 more than 3 x M1015 cards but will have a warranty as well. I'm not sure about using 1 x M1015 and 1 x Intel RES2SV240 SAS expander; I don't want issues with bottlenecks.

 

HDD: 12 x (so far) Western Digital Red 3TB EFRX.

 

PSU: Corsair AX760; it's made by Seasonic and easier to get in the UK.

 

Fans: 3 x Noctua NF-P12 PWM running from 3 of the motherboard headers, and 2 x Noctua NF-R8 PWM joined and running from 1 of the motherboard headers.

 

CPU Cooler: Noctua NH-C12P. I would run this on the ULNA, but it is big, and depending on the CPU I may change it to something smaller. Recommendations?

 

RAM: Not sure; it seems like Kingston is the most common here. I will use ECC RAM, so should I go for 2 x 8GB DDR3-1333 or DDR3-1600, and should I up it to 4 x 8GB?

 

Cables: 6 x Ware multi-lane SFF-8087 to SFF-8087 cables, 0.5 metres. I know most people say 0.6m, but my case is 2" shorter at the rear, so I'm hoping 0.5m will be long enough for any HBA.

 

OS Drive: 128GB or 256GB. My current thought is a Samsung 128GB 840 Pro Series SSD. Any recommendations?

 

Blu-ray Drive: StarTech slimline USB 2.0 external case and 10ft USB lead, with a Sony BD-5750H-01 slimline Blu-ray writer.

 

Comments:

 

So I already have these parts: 12 x HDD, Noctua NH-C12P CPU cooler, 3 x Noctua NF-P12 PWM and 2 x Noctua NF-R8 PWM. The rest I will order bit by bit, as money is a bit short straight after Christmas, and I also want to see people's comments and advice before I order anything else. As the parts come through I will post photos for you to see.

 

Anyway, any advice on my hardware and software choices would be greatly appreciated.

Server01.jpg

Server02.jpg


Okay guys, perhaps you could help me with a few questions. I'm really stuck on which CPU to go for; some of you are using the E3-1240V2. Should I go for this CPU or the lower-TDP E3-1220LV2 with a TDP of just 17 watts? Most of the work the server will be doing is serving movies, but is the increase in CPU speed worth the extra power consumption? Also, it is cheaper for me to buy 1600 RAM as opposed to 1333 (why, I don't know), but would this make any difference to performance, and should I stick with 16GB or go with 32GB?

 

Thanks


Storage Adapters: I have spent a long time reading about the M1015 and could use three of them, but there seems to be an issue with the Supermicro X9SCM-IIF running more than two of them, and as the M1015 is now expensive in the UK I looked at getting an LSI 9201-16i and an LSI 9211-8i. I only have 12 hard drives at present, so I could just get the LSI 9201-16i for now and the LSI 9211-8i at a later date. They would cost me £180 more than 3 x M1015 cards but will have a warranty as well. I'm not sure about using 1 x M1015 and 1 x Intel RES2SV240 SAS expander; I don't want issues with bottlenecks.

 

The problem is actually a little more complicated than that. I just switched my box over to an X9SCM-F and the machine would not recognize any HBA I tried in slots three or four (the x4 slots). I have 1 x 9201-16i and 2 x M1015s flashed to 9211-8i IT firmware. In no combination or order would an HBA in those slots be recognized. The issue, however, is addressed by using a recent BIOS (2.0a, I believe) and tweaking a couple of BIOS settings (enabling "Detect Non-Compliant Devices" was an important one, IIRC). Once you are using that BIOS and those settings, it's not a problem to boot with HBAs in slots three and four, or with more than two in total.

 

I would choose whatever combination of LSI HBAs and SAS expanders is least expensive. I recently moved from 2 HBAs passed through to my unRAID VM to one HBA, with 4 of my disks in my RPC-4220 and the rest in a DAS with a Chenbro CK12803 expander, and I haven't noticed any real performance difference. Is the expander a theoretical bottleneck? Yes, but for media storage and serving (which it sounds like is almost all of what you're looking to do), you shouldn't ever approach that limit. You'll saturate the network links to your 3 HTPCs before you saturate the links between the HBA and SAS expander if you go that way.

 

HDD: 12 x (so far) Western Digital Red 3TB EFRX. I will use 2 for parity.

 

Right now unRAID only supports a single parity drive (apparently dual parity is on the roadmap for a future version). It may be possible to use RAID1 to present two drives as a single drive and use that for parity, but there's no real advantage to doing so, and there are real disadvantages (silent corruption) to introducing hardware RAID into a situation that doesn't require it. Now, if you were going to use a ZFS-based solution instead of unRAID, RAIDZ2 is definitely an option worth considering.
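The single-parity scheme being discussed can be illustrated with a toy sketch (this is just the XOR principle behind unRAID-style parity, not unRAID's actual code): any one lost disk can be rebuilt by XOR-ing the parity with the surviving disks, which is why a single parity drive protects against exactly one failure.

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR a list of equal-length byte strings together, column by column."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

# Three hypothetical data "disks" (tiny byte strings standing in for sectors)
disks = [b"\x01\x02\x03", b"\x10\x20\x30", b"\xaa\xbb\xcc"]
parity = xor_blocks(disks)

# Disk 1 fails; rebuild it from the parity plus the remaining disks
rebuilt = xor_blocks([parity, disks[0], disks[2]])
assert rebuilt == disks[1]  # the lost disk is recovered exactly
```

Two simultaneous failures can't be recovered this way, which is exactly why dual parity (or RAIDZ2) needs a second, independent parity calculation.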

 

PSU: Corsair AX760 or Corsair AX760i. Is 760W going to be enough for 24 drives, and is it worth paying the extra £25 for the 760i model?

 

There are some good power consumption calculators out there. Grab one of the 3TB Red reviews, find how much they draw when 100% loaded (just under 16W in AnandTech's review), and then multiply by 24 to know your needs for drives. 760W should be more than sufficient given that you are not using 7200RPM drives. I am a huge fan of SeaSonic PSUs (I know others around here are too), and if you're not going to use one of their branded PSUs, I'd recommend one that they manufacture for someone else. In this case, the AX760 is manufactured by SeaSonic for Corsair, so I'd say go with that (the hybrid fanless mode is a nice feature). I don't think you're going to need the additional capabilities of the "i" series, but if it fascinates you, go for it (that's what this hobby is all about).
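As a rough illustration of that multiply-it-out approach, here's a minimal sketch; the ~16W loaded figure for the 3TB Red comes from the review mentioned above, while the CPU and motherboard numbers are placeholder estimates, not measured values:

```python
# Back-of-the-envelope PSU sizing for the 24-bay build (all estimates).
DRIVE_LOAD_W = 16        # WD Red 3TB at 100% load (per the review above)
NUM_DRIVES = 24
CPU_TDP_W = 69           # E3-1240V2, worst case
BOARD_MISC_W = 60        # motherboard, RAM, HBAs, fans, IPMI (rough estimate)

total_w = DRIVE_LOAD_W * NUM_DRIVES + CPU_TDP_W + BOARD_MISC_W
psu_headroom = 760 / total_w   # AX760 capacity vs worst-case steady draw

print(f"Worst-case draw: {total_w} W; PSU headroom factor: {psu_headroom:.2f}")
```

Note that the real sizing concern is usually spin-up current rather than steady-state load, so staggered spin-up support on the HBA buys extra margin on top of this.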

 

Fans: 3 x Noctua NF-P12 PWM running from 3 of the motherboard headers, and 2 x Noctua NF-R8 PWM joined and running from 1 of the motherboard headers.

 

CPU Cooler: Noctua NH-C12P. I would run this on the ULNA, but it is big, and depending on the CPU I may change it to something smaller. Recommendations?

 

I'd say big and slow wherever possible; it saves you power and saves you noise. Noctuas can be a little pricey, but they're excellent, and there are other options out there if you want to save. If you don't care about noise, you can get away with using the stock Intel HSF for the proc, as those Xeons won't run hot with what you're doing.

 

 

OS Drive: 128GB or 256GB. My current thought is a Samsung 128GB 840 Pro Series SSD. Any recommendations?

 

If you used unRAID only, you wouldn't need an OS drive (just a flash drive for the OS), but you could use the SSD as a cache drive. If you virtualize or choose WHS only, that's an excellent SSD choice for the OS (or VMs). I want to move to a 256GB 840 Pro in my laptop; I just wish the prices would drop a little faster.

 

Given that you are buying VT-d capable hardware, have you considered running ESXi and virtualizing both unRAID and WHS 2011 on the same box? That's what I'm doing and I've been extremely happy with it.


Thanks Zuhkov, there's some interesting information you have given me here.

The Supermicro X9SCM-IIF should have BIOS 2.0a onboard, so it will run the Ivy Bridge E3-1240V2, which is what I'm leaning towards, so hopefully there will be no issues with the HBA cards. It's interesting what you're saying about the network being saturated before the HBA and SAS expander, so I may look at that, although there is really very little between them. It's a shame people have started charging silly prices for the M1015 cards, as 3 of them would have done fine, but maybe one of them plus the Intel or Chenbro SAS expander would draw less power and create less heat.

I will look more into the SeaSonic PSUs; I know many forum members rate them very highly. I know the AX760 is manufactured by SeaSonic for Corsair and it has good reviews; to me it's enough, and the hybrid mode is a bonus for keeping the noise down. But I'm going to look at the equivalent model by SeaSonic.

Given that you are buying VT-d capable hardware, have you considered running ESXi and virtualizing both unRAID and WHS 2011 on the same box? That's what I'm doing and I've been extremely happy with it.

All I read about on this forum is ESXi, and I need to spend some time learning about it, as all I really need is drive pooling for the 24 drives and WHS2011. A few people have suggested WHS2011 and FlexRAID, but the unRAID forum is second to none for information, and the drive pooling of unRAID seems much more stable from what I have read. I don't want to have to re-rip 80 ISOs for each 3TB disk that fails, and that's another reason for the 2 parity drives with the FlexRAID option.

I think the motherboard with IPMI I'm looking at is a good choice, and Johnm seems to know plenty about server systems and has given many members good advice. My only worry before I go ahead and buy more hardware is whether to go for the E3-1220LV2 at 17W TDP, the E3-1240V2 at 69W TDP, or maybe even the E3-1265LV2 at 45W. It's a question of how much CPU power I need now and in the future, although it's only really a media server and won't be used for anything else from what I can see at the moment. I know that TDP has nothing to do with idle power, but I can't find any specs on this for these processors.

 


If you're going the ESXi route, keep in mind that at least one member here on the forums was unable to pass 3 M1015s through to unRAID and have the third working correctly while using an Ivy Bridge Xeon on that board, no matter which BIOS version he tried. His problem only disappeared when switching to an older Sandy Bridge Xeon (http://lime-technology.com/forum/index.php?topic=22327.0).

 

I'm currently facing the same conundrum. I would like an Ivy Bridge build, but I'm unsure whether to just go with it and hope future BIOS and/or driver updates will fix whatever issues may exist at the moment, or play it safe and stay with the known-to-work Sandy Bridge build.

 


I guess the big question is what you're using it for. Let's say that you just run unRAID with no plugins. If you're literally just serving files (a glorified NAS), then you're not really taxing a CPU. If that were the case, the E3-1220LV2 would be more than sufficient. There are plenty of people who run their unRAID boxes off single-core AMD Semprons and Intel Atoms (I wouldn't choose that myself, but it's certainly an option). Even if you were going to virtualize both unRAID and WHS 2011, you could probably get away with the E3-1220LV2. The big drivers would be if you were doing transcoding (Plex, AirVideo, etc.) especially, or a lot of usenet or torrent downloading (SABnzbd, Sick Beard, CouchPotato, etc.). That might require a little more oomph.

 

Having said all that, I think you're going to find that the difference in idle draw between the E3-1220LV2 and something like the E3-1230V2 is going to get lost in the noise of a system with plenty of disks, fans, the onboard GPU, and misc motherboard items. The IPMI setup with its NIC will draw 15-20 watts just on its own (STH article with some useful power numbers: http://www.servethehome.com/intel-xeon-e31230-v2-ivy-bridge-xeon-review-4c8t-33ghz/). So it comes back to what you want to be able to do, now and in the future. If you are happy with serving files and running My Movies and want the lowest cost and lowest power, you should be fine with the E3-1220LV2. If you want maximum flexibility (perhaps you might want to do more with ESXi down the road, or want to do significant transcoding), the E3-1230V2 would be my recommendation (it's cheaper than the 1240V2, but still has 4 cores and hyperthreading). That still gives you very low idle power and a significantly higher ceiling than the E3-1220LV2. The E3-1265LV2 may give you the best compromise between power and performance, but it is the most expensive.
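To put the idle-draw question in money terms, here's a back-of-the-envelope sketch; the wattages and the pence-per-kWh tariff are assumptions for illustration, not measured figures:

```python
# Rough annual running cost of a 24/7 server on an assumed UK tariff.
def annual_cost_gbp(watts, pence_per_kwh=15.0):
    """Cost in GBP of running a constant load for one year."""
    kwh_per_year = watts / 1000 * 24 * 365
    return kwh_per_year * pence_per_kwh / 100

idle_low = annual_cost_gbp(60)    # assumed whole-system idle with E3-1220LV2
idle_high = annual_cost_gbp(70)   # assumed whole-system idle with E3-1230V2

print(f"~GBP {idle_low:.0f}/yr vs ~GBP {idle_high:.0f}/yr "
      f"(difference ~GBP {idle_high - idle_low:.0f}/yr)")
```

On these assumed numbers a 10W idle difference is on the order of GBP 13 a year, which supports the point that the CPU choice mostly matters for headroom, not for the electricity bill.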


If you're going the ESXi route, keep in mind that at least one member here on the forums was unable to pass 3 M1015s through to unRAID and have the third working correctly while using an Ivy Bridge Xeon on that board, no matter which BIOS version he tried. His problem only disappeared when switching to an older Sandy Bridge Xeon (http://lime-technology.com/forum/index.php?topic=22327.0).

I'm currently facing the same conundrum. I would like an Ivy Bridge build, but I'm unsure whether to just go with it and hope future BIOS and/or driver updates will fix whatever issues may exist at the moment, or play it safe and stay with the known-to-work Sandy Bridge build.

 

Hmm...I'm virtually certain I've gotten ESXi to boot with 3 LSI HBAs. I've never tried passing three through to unRAID. I don't have 3 M1015s, but I do have two flashed to IT and a 9201-16i. I'll give it a try tonight and see if my results differ from RockDawg's.


If I remember correctly, his issue was that unRAID didn't see any drives on the third passed-through HBA, while all three HBAs themselves showed up fine in both ESXi and unRAID.


Yes, I remember reading that section of the forum about running 3 x M1015s with Ivy Bridge. I want to be future-proof, and there's no point going backwards; as Zuhkov said, the E3-1230V2 keeps me future-proof, so I will go that route even though I'm just running a glorified NAS for the moment. Well, actually I will spend an extra £15 and go for the E3-1240V2. As for the HBA cards, this is something I need to think about more. I could just run a single 9201-16i for now, as I only have 12 disks and this build means I will wait before getting any more; then I could face that dilemma further down the road. Or I could just buy 2 x M1015s for now in the hope the problem is sorted if I decide to use ESXi. I'm not sure. I have spent the last 6 hours reading about my two PSU options, and although the SeaSonic Platinum is only £5 more than the AX760, the AX760 is supported much better in the UK and is much easier to obtain. The SeaSonic Platinum should be a higher spec, but I really can't see where, or if, it would be of any advantage to me. So now I will look at either 16GB or 32GB RAM, and either 1333 or 1600, although the 1600 is cheaper over here.


Zuhkov, can I ask what advantage you get by running ESXi and virtualizing both unRAID and WHS2011 on the same box? I'm not 100% sure what advantage this would give you, except that unRAID is Linux and WHS2011 is MS. Is it so you can keep using unRAID and don't have to change your drive-pooling software?

Zuhkov, can I ask what advantage you get by running ESXi and virtualizing both unRAID and WHS2011 on the same box? I'm not 100% sure what advantage this would give you, except that unRAID is Linux and WHS2011 is MS. Is it so you can keep using unRAID and don't have to change your drive-pooling software?

I keep my WHSv1 VM for the backups; otherwise unRAID has replaced everything else. I don't currently allow access to my network from the internet, but I might use WHS to provide that, so that it has another use that unRAID isn't designed for.


I used to have several different small physical servers that got consolidated on my ESXi machine. I started out with an Acer WHS v1 box providing backup, media storage and serving, and usenet access. When I suffered my second drive failure and was dealing with the pain of recovering and dealing with the combination of WHS's drive extender and appliance-like hardware (not in a good way), I started looking for an alternative that was more flexible, powerful, and provided better protection. That's how I found unRAID. I use SageTV for DVR purposes (the one server that hasn't been consolidated) and there's overlap between the community here and the community there, so I found both the product and the community a good fit. To get back to your question, I wanted to get away from WHS for storage, but I wanted to maintain the WHS backup capability (which is simple, hands-off, and effective). I also had an Atom-based PBX server that was having hardware issues, and my wife had just gotten pregnant and I wanted to set up an IP camera server for the nursery. So on my ESXi box, I run unRAID for all media storage and serving (plus usenet and a couple of misc plugins), WHS for backups only, PBX in a Flash for my IP phones, a Windows 7 VM with Blue Iris for my IP cameras, and a few misc VMs for trying things out. All this in one efficient box. The other reason I suggested you might want both WHS and unRAID is that you mentioned using My Movies. You could use the WHS VM as the My Movies server and then unRAID as the media server.


That's interesting. I used to have a large collection of Blu-ray films, and I decided that I wanted an easier way to sort them and stop my children damaging them. I bought a small NAS and then a larger NAS as my collection grew. My NAS cost a lot of money, and after filling its 16TB I decided I needed something larger rather than run two systems; the guys on the My Movies forum suggested a rackmount system or tower, and hence that's why I'm building it, as I like a challenge. As my children get older they will want a TV, Xbox etc. in their own rooms, so I decided I would build another 2 passive HTPC clients and use WHS2011 for My Movies on a rackmount server, as it has an automatic ripper built in for My Movies. I do love the system, and I have tried Plex and XBMC, and this seems to work best for me. As My Movies for WHS2011 and WHS2011 itself have no drive pooling, I decided to look around and was told about unRAID and FlexRAID. Hence my questions, and the fact I want software RAID with parity: if I lose a few drives, it's only the movies on those drives I lose, and I won't have to re-rip my whole collection, unless of course I have a fire or something else happens, and then it would probably be the least of my worries. The iPhone side is worth me thinking about, as I have an iPhone and use a MacBook Pro for downloading my torrents (a few US series I like that I want to watch before they come out on Sky TV). So I may look at other options, and running ESXi may be a future option.


The problem people seem to be having with multiple HBA/RAID cards in the X9 series of boards is when mixing in an Ivy Bridge CPU.

 

The culprit might be the fact that the PCI architecture for Ivy Bridge is completely different from that of Sandy Bridge; something is wrong in the PCI communication area.

 

 

I run both WHSv2 and unRAID on the same box. unRAID is my file server and WHS is strictly for client backups (and WHS is then backed up to unRAID).

 


So John, would you suggest leaving the Ivy Bridge CPU for the time being and using a Sandy Bridge, although it seems a shame when I'm starting a new build? Or change the motherboard, or just see if an update will cure the problem? As I stated, I could just run two M1015s for now, or one 9201-16i, as I only have 12 hard drives at the moment and won't be adding any more for at least a few months.


So John, would you suggest leaving the Ivy Bridge CPU for the time being and using a Sandy Bridge, although it seems a shame when I'm starting a new build? Or change the motherboard, or just see if an update will cure the problem? As I stated, I could just run two M1015s for now, or one 9201-16i, as I only have 12 hard drives at the moment and won't be adding any more for at least a few months.

 

I am not saying that I have the answer of all answers. I will say that both at home and at work we are running X9SCMs and X9SCLs with 2.0a on some of the boards. They all have Sandy Bridge CPUs and no issues. The one Ivy we had, we swapped to a Sandy.

 

 

I am running 4 HBAs (I am also running an X9SCM with 4 NICs, all passed through) on an ESXi box with 2.0a with Sandys, all passed through to the same VM, no issues.

 

 

The use of an Ivy CPU is obviously an afterthought from Intel and was intended for reference boards. I am sure that this is not 100% stable in 100% of testing; there are many major changes between the 2 CPUs, and I think we are pushing its intended upgrade path.

 

 

I am seeing PCI bus issues when intermixed on the server side.

 

I am also seeing video issues putting the HD 4000 on the HD 3000 boards, and I am seeing sleep issues and some odd USB 3.0 issues when intermixing on the desktop side.

 

Maybe this can be fixed with software or BIOS upgrades in the future..

I'd rather buy what works now, out of the box, than hope for a possible future fix.

 

Maybe Intel should have come out with a new socket for Ivy. Idk..

 

 


Yes, I think you may be correct that Intel should have used a new socket for Ivy.

I suppose for now it's down to whether I'm using ESXi, as RockDawg stated in his post about his 3 x M1015 HBAs:

http://lime-technology.com/forum/index.php?topic=24632.msg215520#msg215520

I could never get all three to work in unRAID under ESXi with the Ivy Bridge CPU. All three would work with the Ivy Bridge CPU in bare-metal unRAID, but not running in ESXi.

So I either go Sandy Bridge if I want to use ESXi and 3 x M1015s, or Ivy Bridge with them and bare-metal unRAID, and look at a Sandy Bridge CPU if I ever need ESXi in the future. Maybe Supermicro will have sorted out the PCI bus issues by then, or maybe not, but this motherboard (X9SCM-IIF) seems to be a good choice according to you and many others on the board, especially as I want IPMI after reading your posts. Hell, I didn't even know it existed until I read all your build posts.


Yes, I think you may be correct that Intel should have used a new socket for Ivy.

I suppose for now it's down to whether I'm using ESXi, as RockDawg stated in his post about his 3 x M1015 HBAs:

http://lime-technology.com/forum/index.php?topic=24632.msg215520#msg215520

I could never get all three to work in unRAID under ESXi with the Ivy Bridge CPU. All three would work with the Ivy Bridge CPU in bare-metal unRAID, but not running in ESXi.

So I either go Sandy Bridge if I want to use ESXi and 3 x M1015s, or Ivy Bridge with them and bare-metal unRAID, and look at a Sandy Bridge CPU if I ever need ESXi in the future. Maybe Supermicro will have sorted out the PCI bus issues by then, or maybe not, but this motherboard (X9SCM-IIF) seems to be a good choice according to you and many others on the board, especially as I want IPMI after reading your posts. Hell, I didn't even know it existed until I read all your build posts.

 

It might be an ESXi glitch: it's trying to access the PCIe 2.0 ports with PCIe 3.0 instructions.


Well, one thing to keep in mind is that Ivy Bridge seems to be the end of the line for LGA 1155. LGA 1150 looks to be the new socket for Haswell (which may or may not end up being delayed for desktop and server processors given the focus on ultrabooks). Is Ivy Bridge better than Sandy Bridge for desktop and workstation/server processors? Yes, but the difference is significantly more noticeable in mobile processors and marginal compared to the jump from Lynnfield/Clarkdale to Sandy Bridge (which has got to be one of the more remarkable new architectures of the past decade). Is Ivy Bridge worth the potential associated issues? I don't know. I run three Sandy Bridge procs and two Ivy Bridge at home and I'm happy with all of them, so maybe I'm not the best person to answer.

 

IPMI is absolutely worth it and if I had to choose between Sandy Bridge and IPMI or Ivy Bridge and no IPMI, I'd probably save the couple bucks on the Sandy Bridge and be a happy man.

 

Don't give up on the notion of a SAS expander or the 9201-16i. I swapped one of my M1015s (which will eventually make its way to another server) for a 9201-16i and it's great. They're both x8, but you get twice the ports. That card seems to me to be a potentially more futureproof investment than a CPU: PCIe is not going away imminently, and it makes maximum use of a single slot. When you only have two x8 slots and two x4 slots, it's nice to make maximum use of each of them. My change was also driven by a desire to experiment with 10 gigabit ethernet, but that's neither here nor there. The SAS expander is also a good option (one of the ports on my remaining M1015 is hooked up to a SAS expander in another chassis, and if I didn't know, I'd be hard-pressed to tell), as it typically requires no slot (though many of the cards can use one either for power or merely for mounting). As discussed, you're unlikely to saturate even a single HBA just serving media to 3 clients. Let's go through a brief example. Say each client is streaming a raw Blu-ray rip, and each rip is hitting the Blu-ray ceiling of 50 megabits per second. An M1015 supports 8 SATA/SAS disks and delivers up to 600 megabytes per second per SAS lane (though PCIe 2.0 x8 is not going to give you more than 4000 MB/s in aggregate, and a spinning disk is not going to approach 600 MB/s). You may not be able to stream all three off one disk, but you should have no trouble streaming them off two or three disks on the same SAS port. Even if you are streaming 8 Blu-ray rips off 8 disks to 8 clients, you are still closer to saturating a network link (especially if it's not wired gigabit all the way to the client) than you are the HBA. Here's an old thread that may be of interest:

http://lime-technology.com/forum/index.php?topic=9746.0
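The streaming arithmetic above can be sketched out like this (all figures are the same rough assumptions used in the example, converted to a common unit of megabits per second):

```python
# Compare aggregate streaming demand against the network and HBA links.
BLURAY_MBPS = 50            # assumed worst-case Blu-ray stream, megabits/s
CLIENTS = 3

stream_mbps = BLURAY_MBPS * CLIENTS    # total demand from 3 clients
gigabit_mbps = 1000                    # one wired GbE link
sas_lane_mbps = 600 * 8                # one 6Gb/s SAS lane ~= 600 MB/s

print(f"Streams: {stream_mbps} Mb/s, "
      f"GbE: {gigabit_mbps} Mb/s, "
      f"SAS lane: {sas_lane_mbps} Mb/s")
# Even 8 simultaneous streams (400 Mb/s) fit inside a single SAS lane,
# so the gigabit network saturates long before the HBA does.
```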


It might be an ESXi glitch: it's trying to access the PCIe 2.0 ports with PCIe 3.0 instructions.

 

Fascinating. PCIe 3.0 seems to be a pretty significant departure from PCIe 1.x and 2.0 in a lot of areas despite being backwards compatible. I wish there were more common and affordable PCIe 3.0 equipment out there to play with. Well, something to hope for in 2013.


Do NOT buy a 1265 chip for a mainboard with a C204 chipset; it is not advised by Supermicro. I tried it and it will give you unpredictable results.

 


Yes I think you may be correct about Intel using a new socket for Ivy.

I suppose for now its down to if Im using ESXi as RockDawg stated in his post about his 3 x M1015 HBA's:

http://lime-technology.com/forum/index.php?topic=24632.msg215520#msg215520

I could never get all three to work in unRAID under ESXi with the Ivy Bridge CPU. All three would work with the Ivy Bridge CPU in bare metal unRAID, but not running in ESXi.

So I either go Sandy Bridge if I want to use ESXi and 3 x M1015s, or Ivy Bridge with them and bare-metal unRAID, and look at a Sandy Bridge CPU if I ever need ESXi in the future. Maybe Supermicro will have sorted out the PCIe bus issues by then, maybe not, but this X9SCM-iiF motherboard seems to be a good choice according to you and many others on the board, especially as I want IPMI after reading your posts. Hell, I didn't even know it existed until I read all your build posts.

 

It might be an ESXi glitch: it's trying to access the PCIe 2 ports with PCIe 3 instructions.

 

Yes, this is what I was thinking; it's something around ESXi.


Well, one thing to keep in mind is that Ivy Bridge seems to be the end of the line for LGA 1155. LGA 1150 looks to be the new socket for Haswell (which may or may not end up being delayed for desktop and server processors given the focus on ultrabooks). Is Ivy Bridge better than Sandy Bridge for desktop and workstation/server processors? Yes, but the difference is significantly more noticeable in mobile processors and marginal compared to the jump from Lynnfield/Clarkdale to Sandy Bridge (which has got to be one of the more remarkable new architectures of the past decade). Is Ivy Bridge worth the potential associated issues? I don't know. I run three Sandy Bridge procs and two Ivy Bridge at home and I'm happy with all of them, so maybe I'm not the best person to answer.

 

IPMI is absolutely worth it and if I had to choose between Sandy Bridge and IPMI or Ivy Bridge and no IPMI, I'd probably save the couple bucks on the Sandy Bridge and be a happy man.

 

Don't give up on the notion of a SAS expander or the 9201-16i. I swapped one of my M1015s (which will eventually make its way to another server) for a 9201-16i and it's great. They're both x8, but you get twice the ports. That card seems to me a potentially more future-proof investment than a CPU. PCIe is not going away imminently, and that card makes maximum use of a single slot. When you only have two x8 slots and two x4 slots, it's nice to make maximum use of each of them. My change was also driven by a desire to experiment with 10 gigabit Ethernet, but that's neither here nor there. The SAS expander is also a good option (one of the ports on my remaining M1015 is hooked up to a SAS expander in another chassis, and if I didn't know, I'd be hard pressed to tell), as it typically requires no slot (though many of the cards can use one either for power or merely for mounting purposes).

As discussed, you're unlikely to saturate even a single HBA just serving media to 3 clients. Let's go through a brief example. Say each client is streaming a raw rip of a Blu-ray, and each rip is hitting the Blu-ray ceiling of 50 megabits per second. An M1015 supports 8 SATA/SAS disks and delivers up to 600 megabytes per second per SAS lane (though PCIe 2.0 x8 is not going to give you more than 4000 MB/s in aggregate, and a spinning disk is not going to approach 600 MB/s). You may not be able to stream all three off one disk, but you should have no trouble streaming them off two or three disks on the same SAS port. Even if you are streaming 8 Blu-ray rips off 8 disks to 8 clients, you are still closer to saturating a network link (especially if it's not wired gigabit all the way to the client) than you are the HBA. Here's an old thread that may be of interest:

http://lime-technology.com/forum/index.php?topic=9746.0

Well, the 9201-16i is £292 here and I have found a UK source that can get me 3 x M1015s new for £240. So it looks like I'm going the M1015 route unless I can find a second-hand 9201-16i, but I doubt that, and to be honest everyone has great things to say about the M1015. So now I decide whether it's Sandy Bridge or Ivy, without the ESXi option for the moment.


Do NOT buy a 1265 chip for a mainboard with a C204 chipset. It is not advised by Supermicro. I tried it and it gives unpredictable results.

This is useful information, thanks for that. I'm interested: how do you find the low-power (35W) i3-2100T CPU in your setup? Is it fast enough? I noticed you're running the E3-1265 V2 on a different board. Does that board have IPMI, and why did you go for the 1265 V2 over other CPUs?

