ATLAS: My Virtualized unRAID Server



The issues I had heard reported with 2.0 were limited to ESXi 5: it was dropping the HBA card in passthrough. I have never personally upgraded to 2.0. I don't think it will cause an issue in bare-metal unRAID.

 

I am pretty sure 2.0 is in EFI mode by default.

You must run 2.0 if you have a V2 E3 Xeon.

 

Sorry, I have been a bit out of it the last few weeks. Movement in Detroit and Spring Awakening in Chicago have drained me...

 

I have yet to see a "dropped HBA" under BIOS 2.0 and ESXi 5 U1.

Link to comment

The issues I had heard reported with 2.0 were limited to ESXi 5: it was dropping the HBA card in passthrough. I have never personally upgraded to 2.0. I don't think it will cause an issue in bare-metal unRAID.

 

I am pretty sure 2.0 is in EFI mode by default.

You must run 2.0 if you have a V2 E3 Xeon.

 

Sorry, I have been a bit out of it the last few weeks. Movement in Detroit and Spring Awakening in Chicago have drained me...

 

I have yet to see a "dropped HBA" under BIOS 2.0 and ESXi 5 U1.

I have not seen it personally, but I have heard people on other forums saying that VT-d with multiple HBAs stopped working or became unreliable with BIOS v2. It's possible that has been patched, or that it only affects certain HBAs? I do not have a definitive answer.

 

It would be nice if you posted the BIOS version you have and which PCIe cards are in your server, for the benefit of others. That might help people know what has been tested.
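If you are at a Linux shell (the unRAID console should work, though I have not checked exactly which tools the stock unRAID image ships), something along these lines will pull that info; treat it as a sketch and adjust as needed:

    dmidecode -s bios-version                 # motherboard BIOS version
    dmidecode -s bios-release-date            # BIOS build date
    lspci | grep -i -E 'sas|raid|ethernet'    # the HBAs/NICs the OS can see

Paste the output in your post and everything tested is in one place.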

Link to comment

I got slightly thrown off... it is V1.01A that breaks PCIe in ESXi; it was V2.0 that breaks the onboard NICs.

(I have not checked, but there was also a mention of OPROM not disabling correctly in V2.0, which prevented parity checks in unRAID.)

 

Usually I do not mess with my BIOS unless I need to...

Unfortunately there were many "rumors" and people asking "what do I do if I want to run the new V2 chips?"

I decided to upgrade ATLAS a few days ago.

 

I upgraded to BIOS V2.0a and it "seems" completely stable.

 

BIOS 1.01a had some "reported" issues with PCIe passthrough under ESXi.

BIOS v2.0 has had "reports" of the onboard NICs getting their ROMs corrupted when upgrading to 2.0.

(Supermicro has a fix to re-flash the NICs)

 

A few quick notes on the 2.0a upgrade.

I had to re-enable VT-d in the BIOS.

I also had to set the PCIe slots' OPROM to disabled for my build.

 

 

Link to comment

I had Atlas out of the rack today.

 

I thought I might get a current photo of its innards for those that like server pr0n.

 

[Image: current photo of ATLAS's innards]

 

I still need to replace the rear fans; only one is hooked up right now.

 

I would still like to mod the chassis to hold my Supermicro 4-in-1 and relocate the SSDs to it.

Link to comment

Hi, excuse what everyone will probably think is a stupid question, but: the ESXi datastore drives. You have two of them, both 200+ GB. What are they actually for, why two, and could I use smaller ones?

The recommended SSDs seem to be the only SSD drives that have not fallen in price, which is typical!

 

I ask this because I keep putting off building a copy of your server. I am not overly technical at this stuff, so I hope to breeze through it using your guide! (Many thanks for the great post, all 40+ pages of it.)

 

Link to comment

Hi, excuse what everyone will probably think is a stupid question, but: the ESXi datastore drives. You have two of them, both 200+ GB. What are they actually for, why two, and could I use smaller ones?

The recommended SSDs seem to be the only SSD drives that have not fallen in price, which is typical!

 

I ask this because I keep putting off building a copy of your server. I am not overly technical at this stuff, so I hope to breeze through it using your guide! (Many thanks for the great post, all 40+ pages of it.)

 

The two SSDs are the datastores for the guest VMs: a few Windows machines that handle downloading, torrents, cataloging, streaming, etc.

Link to comment

Hi, excuse what everyone will probably think is a stupid question, but: the ESXi datastore drives. You have two of them, both 200+ GB. What are they actually for, why two, and could I use smaller ones?

The recommended SSDs seem to be the only SSD drives that have not fallen in price, which is typical!

 

I ask this because I keep putting off building a copy of your server. I am not overly technical at this stuff, so I hope to breeze through it using your guide! (Many thanks for the great post, all 40+ pages of it.)

 

The dumb questions are the ones you don't ask... usually.

 

The datastores are where you store the "virtual disk images" for any guest OS you have.

You do not need to buy SSDs. In most cases you can just use a mechanical drive, depending on your use.

I chose SSDs for pure performance. (I also have a ZFS storage array on the server for more datastore storage; that's a bit advanced for most people.)

 

It is usually recommended that you use a "RAID array" on a SAN or NAS and have ESXi map to it via NFS or iSCSI for the datastore.

I chose to keep it more of an all-in-one build. It is much simpler, plus it is one less server to have powered on, and cheaper in both parts and electricity.
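(For what it's worth, if you ever do go the NAS route, mapping an NFS export as a datastore is a one-liner from the ESXi shell, or a few clicks in the vSphere Client's "Add Storage" wizard. Just a sketch; the host name, export path, and datastore name below are made-up examples:

    esxcli storage nfs add --host=mynas.local --share=/export/vmstore --volume-name=nfs-datastore1
    esxcli storage nfs list

iSCSI is a bit more involved, so I won't go into it here.)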

 

If you only plan to have unRAID and maybe a few Windows/*nix guests, a 120-250GB mechanical drive might be all you need.

Link to comment

Hopefully this is not one of those dumb questions as well. I purchased a Supermicro MBD-X9SCM-F-O motherboard and 16 GB of memory with the Intel Xeon E3-1240 (Sandy Bridge, 3.3 GHz, 4 x 256 KB L2 cache, 8 MB L3 cache, LGA 1155, 80 W, quad-core) based on Johnm's setup, have 2 BR10i's installed, and unRAID is working fine. I am thinking of setting up an ESXi server and was wondering about the passthrough setup on this motherboard. If I read this right, the BR10i's pass through fine; it's just the MBD-X9SCM-F-O motherboard's SATA ports you have to set up correctly for unRAID to see them in passthrough? Please correct me if I am wrong. I am using all the SATA ports on the motherboard and the BR10i's as well. Thanks again to Johnm for taking the time to post his setup!

Link to comment

I think you lost me in your explanation of your setup.

 

If I am understanding your situation correctly, you have 8+8+6 drives, for 22 drives in unRAID?

 

You must have at least one datastore drive for ESXi, and you have all ports in use.

 

If that is the case, you will need one more SATA or SAS controller. You have two options at this point.

 

The recommended option would be to get another 8-port controller for unRAID (they don't make a 6-port... so 8 it is).

Move your motherboard drives to the new HBA, boot up unRAID, and make sure it all works fine...

If all is good, you can then pull the unRAID HBAs or drives and install ESXi.

Create a VM for unRAID per the instructions.

Power down, plug the drives in, and go... you should now have unRAID as a guest with all your old settings.

The best part is you can then pull the ESXi flash drive and reboot, and it should boot right back into bare-metal unRAID if you don't like it.

 

 

The second method (not so recommended, but it should work; I have not tested it):

Buy a cheap 2-port SATA card and put your datastore drive(s) on it.

Pass through your entire SATA bus to unRAID (you can't split it; it's all or nothing).

The rest is the same as above:

Pull the unRAID HBAs or drives and install ESXi.

Create a VM for unRAID per the instructions.

Power down, plug the drives in, and go... you should now have unRAID as a guest with all your old settings.

The best part is you can then pull the ESXi flash drive and reboot, and it should boot right back into bare-metal unRAID if you don't like it.

 

Just be careful not to leave your unRAID drives installed while installing ESXi; it will format them all...

 

Link to comment

I think you lost me in your explanation of your setup.

 

If I am understanding your situation correctly, you have 8+8+6 drives, for 22 drives in unRAID?

 

You must have at least one datastore drive for ESXi, and you have all ports in use.

 

If that is the case, you will need one more SATA or SAS controller. You have two options at this point.

 

The recommended option would be to get another 8-port controller for unRAID (they don't make a 6-port... so 8 it is).

Move your motherboard drives to the new HBA, boot up unRAID, and make sure it all works fine...

If all is good, you can then pull the unRAID HBAs or drives and install ESXi.

Create a VM for unRAID per the instructions.

Power down, plug the drives in, and go... you should now have unRAID as a guest with all your old settings.

The best part is you can then pull the ESXi flash drive and reboot, and it should boot right back into bare-metal unRAID if you don't like it.

 

 

The second method (not so recommended, but it should work; I have not tested it):

Buy a cheap 2-port SATA card and put your datastore drive(s) on it.

Pass through your entire SATA bus to unRAID (you can't split it; it's all or nothing).

The rest is the same as above:

Pull the unRAID HBAs or drives and install ESXi.

Create a VM for unRAID per the instructions.

Power down, plug the drives in, and go... you should now have unRAID as a guest with all your old settings.

The best part is you can then pull the ESXi flash drive and reboot, and it should boot right back into bare-metal unRAID if you don't like it.

 

Just be careful not to leave your unRAID drives installed while installing ESXi; it will format them all...

 

OK, I confused myself as well, sorry about that! I have the 2 white SATA ports open on the motherboard. How do I set up passthrough on those ports?

 

Thanks!

Link to comment

 

OK, I confused myself as well, sorry about that! I have the 2 white SATA ports open on the motherboard. How do I set up passthrough on those ports?

 

Thanks!

Ah..

Short answer: you can't.

 

Long answer: you can't. The Cougar Point SATA controller is a single 6-port controller, and it is all 6 ports or nothing... (all single-controller-chip motherboards are like this).

 

If you look at this picture, it is the whole 6-port controller.

[Image: device listing showing the onboard controller as a single 6-port SATA device]

 

You are stuck with the instructions in my last post, sorry.

 

 

EDIT... let me rephrase that... you can't pass only the 4 ports through with direct passthrough.

However... you can pass the other 4 unRAID drives in using RDM (raw device mapping). It is a little more complicated, and quite a bit more work if you need to swap out a drive later, but you could do it without buying any more hardware (assuming you have a disk for a datastore and a flash drive for ESXi).
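The rough idea with RDM: from the ESXi console you create a small mapping file on a datastore that points at the physical disk, then attach that file to the unRAID VM as an "existing disk". A quick sketch only; the disk identifier and paths below are placeholders you would swap for your own:

    ls /vmfs/devices/disks/                    # find the identifiers of the physical disks
    vmkfstools -z /vmfs/devices/disks/t10.ATA_____EXAMPLE_DISK_ID /vmfs/volumes/datastore1/unRAID/disk1-rdm.vmdk
    # then add disk1-rdm.vmdk to the unRAID guest as an existing hard disk

Repeat for each drive. That is also why swapping a drive later means redoing the mapping file for the new disk.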

Link to comment

HBAs

  • 1) M1015
    Replace the SASLP-MV8s with IBM M1015s (LSI SAS9220-8i), about $65-$85 on eBay.
    (You would need 3 for more than 16 drives: put the first 16 drives on the cards in the 8x slots, then fill in the rest on the card in the 4x slot.)
    This upgrade will get you faster parity checks. The M1015 is a PCIe 2.0 8x card with 8 SAS2 ports (SATA3, 6Gb/s).
    They natively support 3TB and larger drives.
    If you ever dump your unRAID and move to a ZFS solution, these should be compatible, unlike the MV8s.
    If you do get these cards, you will need longer cables than those listed above in a Norco case.
    I recommend the 1m ones from Monoprice at $9.49 each.
    [Warning! These cards come with an IBM RAID BIOS; you have to re-flash them to the LSI IT-mode BIOS for them to work. You cannot flash them on the X9SCM. You need to do it on another motherboard.]

 

I have a correction for this part of your guide. I just successfully flashed 3 M1015s to IT mode on an X9SCM.

 

Details here: http://lime-technology.com/forum/index.php?topic=20761.0
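For anyone else about to do the flash, the sequence in most of the guides boils down to the following from a DOS boot disk. Rough sketch only: the exact file names depend on the LSI firmware package you download, and the SAS address is the one printed on the sticker on your own card.

    megarec -writesbr 0 sbrempty.bin           (wipe the IBM SBR so the card will accept LSI firmware)
    megarec -cleanflash 0                      (erase the existing flash, then reboot)
    sas2flsh -o -f 2118it.bin -b mptsas2.rom   (flash the 9211-8i IT firmware plus boot ROM)
    sas2flsh -o -sasadd 500605bxxxxxxxxx       (put your card's SAS address back)

If you skip the boot ROM (-b), the card will not be bootable, which is fine for an unRAID data-drive HBA.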

Link to comment

Hi, excuse what everyone will probably think is a stupid question, but: the ESXi datastore drives. You have two of them, both 200+ GB. What are they actually for, why two, and could I use smaller ones?

The recommended SSDs seem to be the only SSD drives that have not fallen in price, which is typical!

 

I ask this because I keep putting off building a copy of your server. I am not overly technical at this stuff, so I hope to breeze through it using your guide! (Many thanks for the great post, all 40+ pages of it.)

 

The dumb questions are the ones you don't ask... usually.

 

The datastores are where you store the "virtual disk images" for any guest OS you have.

You do not need to buy SSDs. In most cases you can just use a mechanical drive, depending on your use.

I chose SSDs for pure performance. (I also have a ZFS storage array on the server for more datastore storage; that's a bit advanced for most people.)

 

It is usually recommended that you use a "RAID array" on a SAN or NAS and have ESXi map to it via NFS or iSCSI for the datastore.

I chose to keep it more of an all-in-one build. It is much simpler, plus it is one less server to have powered on, and cheaper in both parts and electricity.

 

If you only plan to have unRAID and maybe a few Windows/*nix guests, a 120-250GB mechanical drive might be all you need.

 

Thanks for this. As the WHS will only have MyMovies, an iTunes server, and probably not much else on it, I guess a 128GB-ish SSD will do just fine, and it will have the speed to boot.

 

Now I just need to stomach buying all the stuff. I don't suppose anyone has the winning lotto numbers they wish to share?

 

Regards.

Link to comment

I think you lost me in your explanation of your setup.

 

If I am understanding your situation correctly, you have 8+8+6 drives, for 22 drives in unRAID?

 

You must have at least one datastore drive for ESXi, and you have all ports in use.

 

If that is the case, you will need one more SATA or SAS controller. You have two options at this point.

 

The recommended option would be to get another 8-port controller for unRAID (they don't make a 6-port... so 8 it is).

Move your motherboard drives to the new HBA, boot up unRAID, and make sure it all works fine...

If all is good, you can then pull the unRAID HBAs or drives and install ESXi.

Create a VM for unRAID per the instructions.

Power down, plug the drives in, and go... you should now have unRAID as a guest with all your old settings.

The best part is you can then pull the ESXi flash drive and reboot, and it should boot right back into bare-metal unRAID if you don't like it.

 

 

The second method (not so recommended, but it should work; I have not tested it):

Buy a cheap 2-port SATA card and put your datastore drive(s) on it.

Pass through your entire SATA bus to unRAID (you can't split it; it's all or nothing).

The rest is the same as above:

Pull the unRAID HBAs or drives and install ESXi.

Create a VM for unRAID per the instructions.

Power down, plug the drives in, and go... you should now have unRAID as a guest with all your old settings.

The best part is you can then pull the ESXi flash drive and reboot, and it should boot right back into bare-metal unRAID if you don't like it.

 

Just be careful not to leave your unRAID drives installed while installing ESXi; it will format them all...

 

So, SAS expanders are out? I have found no difference between a single LSI 2008 + RES2SV240 and two LSI 2008 cards, even under ZFS.

Link to comment

That looks right at a glance...

 

How does your unRAID boot? PLoP or VMDK?

If it is PLoP, did you set the CD to auto-connect on startup?

 

What is your datastore? 30 seconds might be too fast if it is not available yet.

 

Look in your "Recent Tasks" status bar at the bottom; it should give you an error if it failed.

Link to comment

My unRAID guest went down hard last night...

My OCZ SSD cache drive fried. It took the expander with it... (or the other way around).

I could not get it to come back up, even after I pulled the SSD.

 

I had to gut the server to get it to reboot. I also had to:

switch back to Molex power to get the expander back online,

turn OPROM back on,

reset the BIOS on the M1015,

run several new SAS cables (not sure if that helped at all),

and re-add it in ESXi (even ESXi had kicked it out).

 

It is back up and running for now without the SSD... but I think it is just a bandage.

 

I need to test the SSD, the M1015, and the expander on my test rig as soon as I can get the time.

The lights on the M1015 and the expander are both lit differently than before.

(Help me out here: what on the Intel expander is lit in normal operation? I only have one, so I can't check against another unit.)

Luckily I have plenty of spare HBAs if I do need to RMA anything.

Link to comment

My OCZ SSD cache drive fried. It took the expander with it... (or the other way around).

that's awful :(

 

That looks right at a glance...

 

How does your unRAID boot? PLoP or VMDK?

PLoP.

If it is PLoP, did you set the CD to auto-connect on startup?

Yes.

 

What is your datastore? 30 seconds might be too fast if it is not available yet.

The OCZ SSD :-\ I've changed the value to 120 seconds and it's the same thing.

 

Look in your "Recent Tasks" status bar at the bottom; it should give you an error if it failed.

"Auto power On" says "Completed".

 

[Image: screenshot of the Recent Tasks pane showing "Auto power On" completed]

 

One weird thing is that every time I change a value in the auto-start menu and press OK, my VMs are multiplied by 3. Then, when I click on any other menu and come back to the auto-start menu, everything is back to normal. I even reinstalled my whole ESXi from scratch because of this multiplication and the VMs not auto-starting, but they're still not auto-starting :(

 

[Image: screenshot of the auto-start settings showing the duplicated VM entries]

 

[Image: screenshot of the auto-start settings after switching menus, back to normal]

Link to comment

My unRAID guest went down hard last night...

My OCZ SSD cache drive fried. It took the expander with it... (or the other way around).

I could not get it to come back up, even after I pulled the SSD.

 

I had to gut the server to get it to reboot. I also had to:

switch back to Molex power to get the expander back online,

turn OPROM back on,

reset the BIOS on the M1015,

run several new SAS cables (not sure if that helped at all),

and re-add it in ESXi (even ESXi had kicked it out).

 

It is back up and running for now without the SSD... but I think it is just a bandage.

 

I need to test the SSD, the M1015, and the expander on my test rig as soon as I can get the time.

The lights on the M1015 and the expander are both lit differently than before.

(Help me out here: what on the Intel expander is lit in normal operation? I only have one, so I can't check against another unit.)

Luckily I have plenty of spare HBAs if I do need to RMA anything.

 

This is disturbing. I use OCZ drives for my datastores, attached to an M1015, and I kept having trouble where they would just disappear and I couldn't get them back without reformatting them. I finally figured out that somehow the partition table was getting lost, and I now use the "partedUtil setptbl" ESXi command to reset the partition table (on my Agility 3 I have to redo the partition table every 2-3 days; on the Vertex 3 it is about once a week). They were also doing this on the onboard LSI 2008 controller.
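For anyone else who hits this, the invocation is roughly along these lines from the ESXi shell. The device name and end sector are examples you have to read off your own disk, and the long GUID is the standard VMFS partition type (double-check it against a healthy disk's partedUtil getptbl output rather than trusting my memory):

    partedUtil getptbl /vmfs/devices/disks/t10.ATA_____EXAMPLE_OCZ_DISK            # see what is left of the table
    partedUtil setptbl /vmfs/devices/disks/t10.ATA_____EXAMPLE_OCZ_DISK gpt "1 2048 468862094 AA31E02A400F11DB9590000C2911D1B8 0"

After that, the VMFS volume shows back up after a rescan (assuming the data itself was not damaged).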

 

I recently got a Plextor M3, and for the life of me I couldn't get it to work right on the LSI 2008 controllers (either of them). I could format it and copy data to it, but I kept getting random I/O and/or corruption issues. For example, when trying to install Ubuntu I would either get a message saying that my CD was corrupt, or it would refuse to boot due to a corrupt install.

 

So I moved the Plextor and the OCZ to the onboard Intel ICH10 controllers, and all the problems have disappeared.

 

I have nothing to back this up with, but my impression is that the LSI 2008 HBA doesn't like SSDs very much.

Link to comment

This will look crazy, but I can't get my VMs to auto-start :'(

 

I am interested in this as well. I read that auto-start was broken in 5.0 Update 1:

http://blogs.vmware.com/vsphere/2012/03/free-esxi-hypervisor-auto-start-breaks-with-50-update-1.html

 

THANK YOU, I may not be crazy! I am indeed running 5.0 Update 1 (VMware-VMvisor-Installer-5.0.0.update01-623860.x86_64.iso).

 

Update 1 was released in March, more than 3 months ago... Why isn't this fixed yet? It's kind of ridiculous!?

 

I can't revert as explained in the blog post, as I have a clean install of 5.0 Update 1. There's a sketchy workaround in the comments, but I would prefer not to mess with that. So I guess I will have to start all over from scratch with a 5.0 install (VMware-VMvisor-Installer-5.0.0-469512.x86_64.iso).
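(From what I understood, the workaround in the comments boils down to skipping the broken auto-start manager and powering the VMs on yourself from a startup script on the host; something like this from the ESXi shell, untested by me, with the VM IDs being examples:

    vim-cmd vmsvc/getallvms        # list the registered VMs and their IDs
    vim-cmd vmsvc/power.on 1       # power on a VM by ID, e.g. called from a boot script such as /etc/rc.local
    vim-cmd vmsvc/power.on 2

That feels too hacky for my taste, hence the reinstall.)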

 

Should I be careful about anything in doing this? (Other than unplugging all the drives when re-installing ESXi.)

 

Looking at the Update 1 release notes, I should be fine downgrading; my OS X VM wasn't working anyway...

 

And then VMware is gonna release Update 2 / a fix in a week haha

Link to comment

This will look crazy, but I can't get my VMs to auto-start :'(

 

I am interested in this as well. I read that auto-start was broken in 5.0 Update 1:

http://blogs.vmware.com/vsphere/2012/03/free-esxi-hypervisor-auto-start-breaks-with-50-update-1.html

 

THANK YOU, I may not be crazy! I am indeed running 5.0 Update 1 (VMware-VMvisor-Installer-5.0.0.update01-623860.x86_64.iso).

 

Update 1 was released in March, more than 3 months ago... Why isn't this fixed yet? It's kind of ridiculous!?

 

I can't revert as explained in the blog post, as I have a clean install of 5.0 Update 1. There's a sketchy workaround in the comments, but I would prefer not to mess with that. So I guess I will have to start all over from scratch with a 5.0 install (VMware-VMvisor-Installer-5.0.0-469512.x86_64.iso).

 

Should I be careful about anything in doing this? (Other than unplugging all the drives when re-installing ESXi.)

 

Looking at the Update 1 release notes, I should be fine downgrading; my OS X VM wasn't working anyway...

 

And then VMware is gonna release Update 2 / a fix in a week haha

I am in the same boat: a clean install of Update 1.

I am still new to ESXi and haven't tried the downgrade described in the blog.

I don't NEED auto-start, but it would be nice to get it fixed, as I believe it may be tied to the auto shutdown (from the UPS), which I have not figured out completely.

Link to comment
