Pleiades - an ESXi based unRAID server based on the ATLAS build



The build is heavily influenced by Johnm's excellent ATLAS build: a proven success, based on many of the same components that I'm now sourcing.

 

A BIG thank you for the high level of documentation that went into the ATLAS build. That was the deciding factor for me to embark on an ESXi build: proven hardware and a very nice walkthrough.

 

Why Pleiades:

  • Pleiades are, according to mythology, the seven daughters of Atlas; and as per the above, Atlas is arguably the father of this setup
  • The server will in time serve seven(ish) purposes
  • Pleiades is the logo of Subaru  - I'm a Subaru owner and board member of the Danish Subaru Owners Club
  • It will be mythical: just as Zeus placed the Pleiades among the stars upon the death of the seven sisters, this build will be created as a living memory of the soon-to-be-retired machines it replaces

::)

 

Intended use:

 

OS at time of building: ESXi 5.0 + unRAID 4.7 (to be migrated to 5.0.xx)

CPU: Xeon E3-1230 or E3-1260L (I will measure the system power draw to see if the “L” is saving enough to warrant the extra purchase price)

Motherboard: SuperMicro X9SCM-F

RAM:

Kingston 16GB 2x (2 x 4GB) 240-Pin DDR3 SDRAM ECC Unbuffered DDR3 1333 (PC3 10600) Server Memory Model KVR1333D3E9SK2/8G

or

4*Crucial 4GB DDR3-1333 PC3-10600

RAID Card: 1 or 2 IBM M1015 to be reflashed. Purchased one here and one on eBay.

Case: My trusted Chenbro 10303 case

Drive Cage(s): Built in the case

Power Supply: Input very welcome - something power efficient and available in Europe/Denmark. Preferably detachable cables. Currently considering:

SeaSonic M12II-620 Bronze

Corsair TX 650W M

SATA Expansion Card(s): IBM M1015 intended to be reflashed to LSI

Cables: From the drawer

Fans: What I have already thrown in there

 

ESXi System drive: 60 GB OCZ Agility 3 Series SSD

Parity Drive: SEAGATE BARRACUDA GREEN 2TB 5900RPM SATA/600

Data Drives:

Parity: ST2000DL003

2*WDC_WD20EARS

3*WDC_WD10EADS

WDC_WD10EVDS

SAMSUNG_HD103UJ

Incoming:

2*SEAGATE BARRACUDA GREEN 2TB 5900RPM SATA/600

 

Cache Drive: No plans for implementing cache drive

Total Drive Capacity: 9TB (to be increased)

 

Primary Use: unRAID, nPVR, WeatherDisplay, Torrents, News ...

Likes: None yet

Dislikes: None yet

Add Ons Used: unMenu, APCupsd, Hamachi, S.N.A.P.

Future Plans: Expand with more VM’s. Maybe for VPN services, Torrents, News

 

Power use TBD after build is complete

Boot (peak):

Idle (avg):

Active (avg):

Light use (avg):

 

The MoBo, M1015 and Case are fixed decisions (already owned). Some of the rest is on order and can be refunded 14 days after reception.

 

Any input on the build is appreciated.  :)

 

 

Link to comment

I would like some input on how to connect the disks in the new setup.

 

I currently have 7 data disks + parity. I will most likely expand beyond that.

 

In particular, I'm in doubt about how to allocate disks across controllers.

 

Will it be best to have the parity disk on the M1015 controller with most (and to start with all) of the data disks; or is it better to attach it to the SuperMicro SATA-3 port and have the data disks on the M1015?

 

And going beyond that - will it be best to expand the array on the motherboard ports or the second M1015 controller?

 

 

Link to comment

I also intend to build a server, based on John's ATLAS build.

 

I intend to use 3 M1015s. All will be used exclusively by unRAID, configured as VMDirectPath hardware passthrough.

 

This seems to be the best way, if you want to use ESXi and unraid (much more flexible, easier to change/swap HD's).

 

So all unRAID disks (including cache and parity) will be installed on the M1015s, and the motherboard SATA ports will be used for VM datastore drives, ISO storage and VM backup, direct-access disks (RDM) for other guest VMs (e.g. SABnzbd's own VM and disk, not part of unRAID), etc...

 

I will bring some motherboard SATA connections outside to use an eSATA enclosure. But I will probably first try to install as many drives inside as possible.

 

Just my 2 cents...

 

 

 

Link to comment

A big you're welcome...

Looks really nice...

 

This just reminds me that I need to make massive updates to my thread now that I'm in production.

 

 

I would like some input on how to connect the disks in the new setup.

 

I currently have 7 data disks + parity. I will most likely expand beyond that.

 

In particular, I'm in doubt about how to allocate disks across controllers.

 

Will it be best to have the parity disk on the M1015 controller with most (and to start with all) of the data disks; or is it better to attach it to the SuperMicro SATA-3 port and have the data disks on the M1015?

 

And going beyond that - will it be best to expand the array on the motherboard ports or the second M1015 controller?

 

 

Honestly, I would just leave all of the unRAID disks on the M1015s.

As long as you have them in the two PCIe x8 slots, you should get full speed to all mechanical drives.

In theory, you can't saturate the x8 SAS2 controller with plain SATA drives. If you want, you can split the drives between the two controllers for maximum performance (it really won't make a difference in your case).

The other option is to put the cache/parity on one controller and add all your array drives to the second, then fill in the first as you need more disks. Once you cross the 16-drive barrier, you will need to assess other options, most likely adding a third controller.
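A quick sanity check of the saturation claim above. The throughput figures are rule-of-thumb assumptions, not measurements from this build: roughly 500 MB/s of usable bandwidth per PCIe 2.0 lane, and roughly 130 MB/s sustained from a 5400/5900 rpm "green" drive.

```python
# Can 8 spinning SATA drives saturate an M1015 (LSI SAS2008) in a PCIe 2.0 x8 slot?
# Assumed figures (not measured): ~500 MB/s usable per PCIe 2.0 lane,
# ~130 MB/s sustained per 5400/5900 rpm drive.
slot_bw = 8 * 500          # x8 slot bandwidth, MB/s
demand = 8 * 130           # one fully populated M1015, MB/s
print(slot_bw, demand, demand < slot_bw)  # 4000 1040 True
```

So even with every port busy, a single M1015 in an x8 slot has roughly 4x headroom over what the spinners can deliver, which is why splitting drives across the two controllers shouldn't matter here.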

 

I did this on my build for 3 reasons.

 

1 It is just easier. Any time you want to swap out an unRAID drive that sits on a mobo port, you have to redo the single-drive passthrough hack/assignment for each drive you swap or add. That gets old quickly for large builds. It turns changing a drive from a simple "plug it in" (with controller passthrough) into a 15-30 minute ordeal of chasing down serial numbers and applying passthrough commands in the ESXi console (for single-drive passthrough). This gets even more complex with unRAID 4.7, where drives are assigned by controller port, rather than by serial number as in unRAID 5.x.

Not to mention the possibility of having to reboot your entire ESXi box with each disk change.
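For reference, the single-drive passthrough "hack" mentioned in point 1 boils down to creating one raw device mapping (RDM) file per physical disk. A hedged sketch only: the device identifier and datastore path below are invented examples, and the command is echoed as a dry run since it needs a real ESXi host.

```shell
# Made-up device id; list real ones on an ESXi host with: ls /vmfs/devices/disks/
DISK="/vmfs/devices/disks/vml.0100000000EXAMPLE"
MAP="/vmfs/volumes/Datastore1/unRAID/disk3-rdm.vmdk"

# -z creates a physical-mode RDM; echoed here instead of executed
echo vmkfstools -z "$DISK" "$MAP"
```

Repeating this (plus a VM settings edit) for every swapped disk is exactly the per-drive chore that whole-controller passthrough avoids.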

 

2 If my ESXi bombs/blows up/crashes etc., I can rebuild it in minutes instead of spending hours dealing with pass-through drives. It takes as little as 3 mouse clicks and 2 reboots to pass entire controllers through in this configuration.

 

3 I can boot straight to unRAID instead of ESXi if I had to, and it will still work fine. (In theory you should be able to in ur5.x anyway, but not in ur4.x.)

 

I reserved my mobo ports for datastore and single-machine pass-through disks.

This is especially important for me as I have SATA3 6Gb/s SSDs as datastore drives on the SATA3 ports.

 

One problem I ran into just this weekend with Atlas: I can't preclear drives on my M1015s!! At least not Hitachi 3TB drives.

I posted in the IT-firmware thread to see if this is normal or if I missed a configuration step.

If I don't get an answer there I'll have to cross-post it elsewhere.

As of right now I still have an MV8 in Atlas, so I can preclear drives on that controller (I also have other boxes I can preclear on).

Using a mobo port would suck for a simple preclear, as you would run into issue #1 above.

 

I have no idea of the drive limit of that case (for all guests, not just unRAID). If you do go for a Xeon, 16GB RAM and a massive number of drives, you might consider something larger than a 650 watt PSU.

It is better to spend a few extra bucks now than to buy a larger PSU twice over when you come up short on power later.

Then again, a 650 might be fine for your final build; you might want to run a PSU calculator app.
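In the spirit of the PSU-calculator suggestion above, a rough budget can be sketched by hand. Every wattage below is a generic rule-of-thumb assumption, not a measured value from this build.

```python
# Rough worst-case PSU budget: all drives spinning up at once.
cpu_tdp = 80            # Xeon E3-1230 TDP
drives = 10             # array + parity + SSD; adjust to taste
spinup_per_drive = 25   # assumed worst-case spin-up draw per spinner
base = 60               # board, RAM, HBAs, fans (lumped guess)

peak = cpu_tdp + drives * spinup_per_drive + base
recommended = round(peak * 1.3)   # ~30% margin for aging/efficiency sweet spot
print(peak, recommended)          # 390 507
```

On these assumptions a quality 650W unit has comfortable margin for a build of this size.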

 

 

 

 

Link to comment

I also intend to build a server, based on John's ATLAS build.

 

I intend to use 3 M1015s. All will be used exclusively by unRAID, configured as VMDirectPath hardware passthrough.

 

This seems to be the best way, if you want to use ESXi and unraid (much more flexible, easier to change/swap HD's).

 

So all unRAID disks (including cache and parity) will be installed on the M1015s, and the motherboard SATA ports will be used for VM datastore drives, ISO storage and VM backup, direct-access disks (RDM) for other guest VMs (e.g. SABnzbd's own VM and disk, not part of unRAID), etc...

 

I will bring some motherboard SATA connections outside to use an eSATA enclosure. But I will probably first try to install as many drives inside as possible.

 

Just my 2 cents...

 

 

 

I completely agree...

 

Keep an eye on my M1015 preclear issue; this might be a problem for all three of us.

 

Also, your 3rd M1015 will only run at x4 speed, assuming you have a C202 or C204 based mobo (like the X9SCM)...

Fill that one up last; after 5 or 6 drives, parity checks will start to saturate that controller.

Maybe use only a single channel from that controller for the array, and the second channel for eSATA SNAP drives/preclears [if we get it to work]/non-array app drives.

Link to comment

 

Keep an eye on my M1015 preclear issue; this might be a problem for all three of us.

 

 

I read about it, but I can't test it because I'm still waiting for my M1015s to be delivered. I'll test and let you know the results as soon as I have an M1015.

 

 

I still have another PC that I can use to preclear, so no biggie for me either.

 

 

 

Also, your 3rd M1015 will only run at x4 speed, assuming you have a C202 or C204 based mobo (like the X9SCM)...

Fill that one up last; after 5 or 6 drives, parity checks will start to saturate that controller.

Maybe use only a single channel from that controller for the array, and the second channel for eSATA SNAP drives/preclears [if we get it to work]/non-array app drives.

 

Yes, the 3rd M1015 will be PCIe 2.0 x4 (I think?), faster than a PCIe x4 MV8, and fast enough for me.

 

Since unRAID only supports 20 data drives plus a cache and a parity drive at the moment, I still have 2 ports left for preclear/S.N.A.P. drives.
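The port arithmetic above works out as follows (8 ports per M1015, i.e. two SFF-8087 connectors of 4 drives each):

```python
controllers = 3
ports_per_m1015 = 8           # 2 x SFF-8087, 4 drives each
total_ports = controllers * ports_per_m1015
unraid_max = 20 + 1 + 1       # data + parity + cache, the unRAID limit at the time
print(total_ports - unraid_max)  # 2 ports left over
```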

 

I probably won't use a non-array app drive; that's what VMs are for :-) ...

 

 

Link to comment

Thank you both  ;D . Controller passthrough for unRAID makes perfect sense. That's really valuable input.

 

I wasn't really expecting to saturate both controllers with unRAID, so I was/am thinking controller 2 could be individually assigned.

 

To start I think I will use the local SATA ports for all the rest, and the controllers for unRAID.

 

'Interesting' observation regarding preclear. But I also have the option to use other hardware for preclearing, so I will be OK for now.

 

Regarding the power supply, I do not intend to go up to a 16+ drive build. Still, I was not aware that there were PSU calculators out there. I will find a couple, put in some margin, and see how it comes out.

 

It'll be fun when the rest of my stuff arrives, allowing me to start playing and come up with more stupid questions  ::)

Link to comment

It'll be fun when the rest of my stuff arrives, allowing me to start playing and come up with more stupid questions  ::)

 

In my experience, the stupid question was the one I didn't ask.

 

If your build is 16 drives for all guests, I would guess that you'll be fine. I just like to have a margin for error.

Just make sure your PSU is EPS12V with that motherboard; both of the ones you listed should be.

Link to comment

I'm still missing some parts, and others I have in excess. But it's powered up and testing has begun. So far - it's progressing nicely  :)

 

I had some 'fun' trying to upgrade the OCZ Agility 3 60GB to firmware 2.15, but succeeded in the end (in the third PC I tried). The temperature fix was also applied, so now it displays 0 degrees instead of 128. I guess that's better in some ways. OCZ: how about displaying the correct temperature... :P

 

Currently considering returning it and instead getting one that is cheaper per GB and potentially (?) not as error-prone as the Agility 3:

  - OCZ Vertex Plus Series Solid State Disk 120GB

  - Kingston SSDNow V100 128 GB

 

Anyway, I have done a successful preclear of one of the 5900 rpm 2TB drives with my test unRAID stick on bare metal. That was a walk in the park: unRAID booted and worked immediately. Logs attached. It took close to 24 hours though; that's about the time I remember from the last time I did it.

 

Then I installed ESXi 5, got the vSphere client, and then off to the real stuff.

 

I have now successfully installed W7 Ultimate in a VM from an ISO. It took literally 6 minutes from when I initiated the install on a fresh datastore until I was at the final prompted choices.  :o A couple of tweaks, and now I have a perfect model machine that I have copied to new datastore folders. To achieve this, I used Tips #1 and #2 in the Atlas thread. Thanks again.  :-*

 

I also have my unRAID test booting from PLOP. I currently have (at least) two things to figure out.

  a) Why can't I access the unRAID server on its IP address like I could when it ran on bare metal?

  b) Why can't I save the PLOP settings? It tells me up front that it won't, and... (oh wait, maybe it doesn't have a proper place to store them yet; gotta check that...)

 

The next thing to test on my very prototype-style test bench is power consumption; most importantly, the idle and full-load power use of the 3 CPUs I have available here now:

  - E3-1220 (currently mounted)

  - E3-1230

  - E3-1260L

 

And then I also need to test the SeaSonic M12II-620 vs the currently mounted Corsair TX650M.

 

[glow=yellow,2,300]Any suggestions for a good (USB-)bootable test app that can put a relevant full load on the CPUs?[/glow]

 

This is my testbed as it sits right now in the middle of my home office: a mess, but it works...

[Attached image: SRTvU.jpg]

[Attachment: syslog-2011-10-21_1.zip]

Link to comment

OK - got some stuff done.

 

Power supply decision

The battle between the SeaSonic M12II-620 Bronze and the Corsair TX 650W M ended in a narrow victory for the SeaSonic, for the following reasons:

  • Power draw: 3-4W less at 65 watts, 5-6W less at 85 watts. I guess the margin will increase with load
  • More modular: the Corsair has a strand of SATA connectors fixed in the bundle, while the SeaSonic only has the MoBo + graphics card cables fixed, plus one more native SATA connector
  • The Corsair is almost silent; the SeaSonic is completely quiet

 

CPU testing

Well, that threw a spanner in the works. My first round of testing was primarily aimed at power consumption at idle and under load.

 

I have been running a 12GB configuration for some days with the E3-1220 and E3-1260L. But when I dropped in the E3-1230, the rig would not boot at all; it never presented the BIOS boot screen. After some removing/reinserting, BIOS resetting etc., the conclusion is:

 

  a) The E3-1230 somehow triggered the board into no longer being able to boot with 12GB RAM

  b) This behaviour now also applies to the E3-1260L (I haven't tried the 1220)

 

[glow=yellow,2,300]Why did this happen, and why has it been contagious to previously functioning setups?[/glow]

 

I have another 4GB module incoming, so I guess I'll see if 16GB works. Still quite odd.

 

Software

Got unRAID visible. The "Flexible" network adapter that a VM defaults to did not work; I configured an E1000, and all is good  ::) I still need to fix the permanent PLOP config so I don't have to choose from the PLOP menu every time. And the unRAID boot time is terrible.

 

The Windows VMs look promising. Preparing them for their future tasks. Really fast performers; a pleasure  8)

 

Next up: Get nPVR and Weather-Display up and running while waiting for the remaining parts that will enable migration...

Link to comment
Power supply decision

The battle between the SeaSonic M12II-620 Bronze and the Corsair TX 650W M ended in a narrow victory for the SeaSonic, for the following reasons:

  • Power draw: 3-4W less at 65 watts, 5-6W less at 85 watts. I guess the margin will increase with load
  • More modular: the Corsair has a strand of SATA connectors fixed in the bundle, while the SeaSonic only has the MoBo + graphics card cables fixed, plus one more native SATA connector
  • The Corsair is almost silent; the SeaSonic is completely quiet

 

Just stumbled upon a reasonable offer on a SeaSonic X-560. Completely modular, highly efficient, and apparently at least one level up on all parameters. According to the eXtreme Power Supply Calculator it has plenty of power. Methinks me likealot  :D

 

Any experience with this one? Am I simply in shopaholic mode, or does it make sense?

 

 

Link to comment

Unfortunately, the 4th RAM module did not help the problem quoted below. The board still only boots with 8GB RAM  :(

 

Seeking help in the X9SCM mobo thread: http://lime-technology.com/forum/index.php?topic=15046.0

 

CPU testing

Well, that threw a spanner in the works. My first round of testing was primarily aimed at power consumption at idle and under load.

 

I have been running a 12GB configuration for some days with the E3-1220 and E3-1260L. But when I dropped in the E3-1230, the rig would not boot at all; it never presented the BIOS boot screen. After some removing/reinserting, BIOS resetting etc., the conclusion is:

 

  a) The E3-1230 somehow triggered the board into no longer being able to boot with 12GB RAM

  b) This behaviour now also applies to the E3-1260L (I haven't tried the 1220)

 

[glow=yellow,2,300]Why did this happen, and why has it been contagious to previously functioning setups?[/glow]

 

I have another 4GB module incoming, so I guess I'll see if 16GB works. Still quite odd.

Link to comment

OK: a bent contact in the CPU socket caused all my troubles with RAM. SuperMicro support in NL was most helpful in identifying that  ::) Bent it back, and the server is running fine with 16GB.

 

So now I just need to find a machine to reflash my M1015, after unsuccessfully trying my own pool of machines (anyone here in the Copenhagen, Denmark area?), and then migration can begin. And a fresh beta 13 to test as well. This is a day of good progress  ;D

 

 

Link to comment

sosdk: Thanks, but I have my uncle's HP available here to use for reflashing if needed.

 

Besides that, I have now successfully precleared the 2 new 2TB drives on the reflashed M1015 controller. Went without a hitch.

 

Meanwhile, the b13 got some kind of Münchhausen-by-proxy syndrome and claimed that the 400GB drive was now 756TB  :o .

 

I will now try a simple install with 3×400GB drives on b12a, and add some of the add-ons like Zeron's VMware Tools and SimpleFeatures.

 

My other VMs are moving forward as planned. I have now prepared the nPVR and WeatherDisplay machines, and I have a 'clean' machine that I have copied to my 400GB datastore. It will be my template for creating new machines.

 

I found out that it was really easy to log on to the command-line interface and do an internal copy of my template machine from the SSD to my 400GB secondary datastore. I'm sure there are other ways, but this is quick, clean and easy.

 

1. SSH into the server using PuTTY or a similar product.
2. Go to /vmfs/volumes, then into the volume you want to copy the data to, and create a directory for the VM:
   cd /vmfs/volumes/Datastore2
   mkdir <Name of VM directory>
3. Switch to the source VM's directory:
   cd /vmfs/volumes/Datastore1/<Name of VM directory>
4. Copy the files across:
   cp *.* /vmfs/volumes/Datastore2/<Name of newly created VM directory>

Remember to put quotes around datastore names with spaces; e.g. my SSD is called "60GB SSD", which I would use instead of Datastore1 above.

Ref: http://www.n2networksolutions.com/2009/02/26/moving-a-virtual-machine-from-one-datastore-to-another/

 

If needed, you then register the newly copied machine as described in the Atlas thread.
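As an alternative to raw cp, vmkfstools can clone a vmdk (it handles the descriptor/flat-file pair for you). A sketch only: the datastore and VM names below are hypothetical, and the commands are echoed as a dry run since they need a real ESXi shell.

```shell
# Hypothetical names; adjust to your own datastores and VM
SRC="/vmfs/volumes/Datastore1/Template"
DST="/vmfs/volumes/Datastore2/Template"

# Dry run: print the commands instead of executing them
echo mkdir -p "$DST"
echo vmkfstools -i "$SRC/Template.vmdk" "$DST/Template.vmdk"   # clone the disk
echo cp "$SRC/Template.vmx" "$DST/"                            # copy the VM config
```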

Link to comment

Final pieces are coming together.

 

Datastore

I realised that a 60GB SSD for the main datastore was really not enough, so I have moved the VMs to a 120GB OCZ Agility 3 SSD (after upgrading the firmware to 2.15). I used the built-in copy function in the datastore browser. One disk was locked, and after several attempts (even after reboots), the only way I was able to copy it was using cp from the command-line interface. That worked.

 

CPU

Surprisingly, the E3-1230 idles at 3-4 watts LESS than the E3-1260L, and performance is significantly higher. Maybe it peaks at higher watts, but it gets the job done FAST! The 1230 stays in.

 

PSU

The gold-rated SeaSonic X660 has yet to start its fan. And it is completely modular, which is something I like, in particular with my cabinet. The Corsair and M12II both have the motherboard and CPU cables fixed, and one of them even has a SATA string.

 

And even though it idles at 60W, just 1W lower than the M12II, at a load where the M12II showed 103-104W the X-series shows 96-97W. I guess at higher loads it's even more. The X-series stays.
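For what it's worth, the readings above imply roughly this saving at that load point (using midpoints of the quoted ranges, and assuming the same DC load on both units):

```python
m12ii_wall = 103.5      # W, midpoint of the 103-104W reading
x_wall = 96.5           # W, midpoint of the 96-97W reading
savings_w = m12ii_wall - x_wall
savings_pct = 100 * savings_w / m12ii_wall
print(round(savings_w, 1), round(savings_pct, 1))  # 7.0 6.8
```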

 

 

 

Link to comment

Nice..

 

I was wondering about that single 60GB; it still has some good uses though.

I have 2×120GB datastore drives for higher-level guests and wish I had double that space.

I also have a 7200rpm spinner datastore for low-impact guests.

 

 

 

I should test my CPU

 

I love those PSUs. Expensive but godlike.

Link to comment
CPU

Surprisingly, the E3-1230 idles at 3-4 watts LESS than the E3-1260L, and performance is significantly higher. Maybe it peaks at higher watts, but it gets the job done FAST! The 1230 stays in.

 

 

The E3-1260L has a lower clock speed and integrated graphics.

I think your power observations make sense:

More idle power -> integrated graphics.

Less peak performance -> lower clock.

Processor Number      E3-1230   E3-1260L
# of Cores            4         4
# of Threads          8         8
Clock Speed           3.2 GHz   2.4 GHz
Max Turbo Frequency   3.6 GHz   3.3 GHz
Max TDP               80 W      45 W
Processor Graphics    No        Yes

I think the 1260L is intended for a workstation application.

 

The E3-1220L might be more interesting for a server application (20W TDP).

But they are quite expensive and not very available.

Processor Number      E3-1220L
# of Cores            2
# of Threads          4
Clock Speed           2.2 GHz
Max Turbo Frequency   3.4 GHz
Intel® Smart Cache    3 MB

 

 

BTW, I have an E3-1230 also and it spends most of its time idling. I have not yet had time to install ESXi, but I look forward to it.

For me the small increment over the E3-1220 was worth it for the extra 4 threads.

 

All 4 processors discussed above in a comparison here:  http://ark.intel.com/compare/53401,52275,52271,52269

 

Kind Regards

 

Link to comment
The E3-1260L has a lower clock speed and integrated graphics.

(...)

BTW, I have an E3-1230 also and it spends most of its time idling. I have not yet had time to install ESXi, but I look forward to it.

For me the small increment over the E3-1220 was worth it for the extra 4 threads.

 

Ah, I completely missed that the 1260L has graphics; it's supposed to be the 12x5 models... That certainly explains the power consumption  ::)

 

I agree with you on the 1220/1230. I also tested the E3-1220, but it really makes no sense to me, at least not in this configuration and at its price tag.

Link to comment

OK, so today I pulled apart the old unRAID server and prepared the case for its new occupants. I will build tomorrow. It will be great  ;D I'll take some pictures along the way.

 

I believe I will boot the 4.7 server first, and then do the upgrade. Objections or comments?

 

I also did some preliminary testing with the UPS plugin for v5. Unfortunately, there seems to be something not quite right with the passthrough: it sees the UPS, but reports odd data. I haven't tested the actual shutdown yet; the beeping would wake the whole house. So that will be for tomorrow.

Link to comment
