ATLAS: My Virtualized unRAID Server



I think he'd start over with that board because you can actually get 32GB of RAM for it.  You can't currently find 4x8GB UDIMM modules (or at least I haven't been able to).

 

BTW, I have the M1015's passed through and haven't had any problems.  I haven't tried to preclear any drives while virtualized but I didn't have problems with preclearing on these cards before virtualizing.

 

EDIT:

Just tried preclearing, and it started just fine.  We'll see.

Screen_shot_2011-10-25_at_2.42.38_PM.png

Link to comment

Thanks so much for your time and effort on this. It's inspirational for me - I would like to do the same thing.

 

I was just looking at hardware on Newegg, which I will need to start acquiring. One thing that occurred to me was to check with you what hardware you would buy now if you were starting again.

 

I will need a new Mobo, CPU and disk controller. What would you look at now given your experiences?

 

 

If I had to start over, I would consider the X8SIL.

 

Why?

 

And with the controller, on your first post you said you would get a couple of LSI controllers. Which ones? And why did you go with the MV8? Was it just cost?

 

Many thanks again

Aaron

It is just easier to get 32GB running on the X8SIL.

When I got the X9SCM, there was only one company offering 8GB sticks, and they have since stopped making them.

Although I did recently find some 8GB sticks; while they are cost-prohibitive right now, it shows there may be more soon.

There is demand for these sticks, so I am hoping that other companies start making some. The C202/C204 is a popular chipset, and I believe Apple now uses these sticks, increasing the demand.

That is my only gripe about the board.

 

I had the MV8s already.  LSI is usually the most compatible with ESX.

I was also thinking about RAIDing my datastore; for that I needed an LSI card, or to hack in a driver for my Areca.

That, and at the time, M1015s were going for $75.

Link to comment

I think he'd start over with that board because you can actually get 32GB of RAM for it.  You can't currently find 4x8GB UDIMM modules (or at least I haven't been able to).

 

BTW, I have the M1015's passed through and haven't had any problems.  I haven't tried to preclear any drives while virtualized but I didn't have problems with preclearing on these cards before virtualizing.

 

EDIT:

Just tried preclearing, and it started just fine.  We'll see.

I am going to reflash my M1015s this week and see if that makes a difference. If not, I'll live.

I didn't flash my own M1015s; I got them pre-flashed, so it might be an older firmware or a bad flash.

Link to comment

I'm about to reflash mine also. What exact firmware and version are you intending to use?

 

Unknown... the latest, I would assume.

I also have an odd glitch on this build I just discovered: trying to IPMI into the BIOS, I just get white blocks like Tetris.

If I pull the M1015s, I get in fine. I hope the re-flash fixes this.

 

 

So, my first steps into ESXi are complete.  I had to get another board and another SASLP, but everything is up and running and seems to be going well for now.  I still have some testing to do, but I'm crossing my fingers that nothing weird happens.

:D I love it when a plan comes together.

Link to comment

I added Recommended Upgrades: and Recommended Alternate parts: to my build list.

 

I strongly recommend the #2 SAS Expander option if you are going with more than 16 drives in VM'd unRAID.

 

Would love to get the RES2SV240 combo'd with one of my M1015s (I have 3 of those) to free up some PCI-E slots.  I just don't see how this clips onto the existing M1015. Do you have a picture?  It looks like it takes up another slot itself.

Link to comment

I added Recommended Upgrades: and Recommended Alternate parts: to my build list.

 

I strongly recommend the #2 SAS Expander option if you are going with more than 16 drives in VM'd unRAID.

 

Would love to get the RES2SV240 combo'd with one of my M1015s (I have 3 of those) to free up some PCI-E slots.  I just don't see how this clips onto the existing M1015. Do you have a picture?  It looks like it takes up another slot itself.

 

The RES2SV240 does need power; you can use an x4 PCIe slot or the Molex power plug on top.

 

How you hook it up depends on the performance you want and the number of drives.

Option 1: You hook BOTH SAS ports from the M1015 to the expander. This gives you 16 full-speed SATA ports (4 SAS ports out of the expander).

Option 2: You hook a single SAS port from your M1015 to the expander. You then have up to 24 SATA ports (4 full speed and up to 20 slowed down): 1 free full-speed SAS port off the M1015, and 5 SAS ports off the RES2SV240. While you are splitting a single SAS2 port into 5 SAS2 ports, it should perform about the same as a SASLP-MV8, quite possibly better.
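As a sanity check on option 2's "about the same as a SASLP-MV8" claim, here is the back-of-envelope math (nominal link rates only; real-world throughput is lower, and the 10 Gb/s figure for the MV8 is the raw PCIe x4 gen1 rate):

```shell
# Option 2: one SAS2 wide port (4 lanes x 6 Gb/s) feeds the expander,
# shared by up to 20 drives if every drive streams at once.
lane_mbps=6000                          # one SAS2 lane: 6 Gb/s
port_mbps=$((lane_mbps * 4))            # one wide SAS port = 4 lanes
drives=20
per_drive_mbps=$((port_mbps / drives))
echo "expander: ${per_drive_mbps} Mb/s per busy drive"   # 1200 Mb/s (~150 MB/s)

# SASLP-MV8 on PCIe x4 gen1 (10 Gb/s raw) with all 8 ports busy:
mv8_per_drive=$((10000 / 8))
echo "MV8: ${mv8_per_drive} Mb/s per busy drive"         # 1250 Mb/s
```

So the slowed-down side of the expander lands within a few percent of a fully loaded MV8, which matches the claim above.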

 

See this thread for a picture of fade23's nice setup with a RES2SV240.

 

If you already have the m1015, I would not bother with this path.

 

One advantage of SAS expanders would be to put two, three, or four 20-drive unRAID servers on the same single ESXi box. Drool.

 

Link to comment

So, about an hour ago I booted up my unRAID VM for the first time, reconfigured the drive assignments, and started the array...

Hooray!

So far all is working well!

Now I need to transfer all my VMware Server VMs over to ESXi.

One question though: I only have 4GB in the machine. I couldn't set unRAID to 2GB of RAM when I enabled passthrough for my SATA controllers; it would come up with a weird error about RAM allocation, and on the resource allocation screen my total memory is apparently 2GB...

Link to comment

Johnm,

 

about your setup.

I finally got around and am able to do some more testing before re-deploying my main unraid server.

I was wondering: instead of using Plop, why don't you install unRAID on a hard drive, or even a vmdk?

(http://lime-technology.com/forum/index.php?topic=15417.0 or http://lime-technology.com/forum/index.php?topic=3846.0)

Wouldn't it make our lives easier?

I tested Plop a while back for another application... and booting was so ridiculously slow that I gave up.

I do not know about the new version, so this is only a question.

 

R

Link to comment

Johnm,

 

about your setup.

I finally got around and am able to do some more testing before re-deploying my main unraid server.

I was wondering: instead of using Plop, why don't you install unRAID on a hard drive, or even a vmdk?

(http://lime-technology.com/forum/index.php?topic=15417.0 or http://lime-technology.com/forum/index.php?topic=3846.0)

Wouldn't it make our lives easier?

I tested Plop a while back for another application... and booting was so ridiculously slow that I gave up.

I do not know about the new version, so this is only a question.

 

R

I didn't do a writeup on it, but yes, I thought I mentioned in this thread that I made a vmdk of my flash drive and boot from that.

Booting from SSD is wicked fast. Once you power on the guest, the image unpacking (dots) to kernel boot takes about a second.

I believe the steps are in this thread: http://lime-technology.com/forum/index.php?topic=7914.0

Although for the last few days I have been booting from Plop because of the beta13 testing.

Plop is still very convenient (although very slow). If you boot from your flash with Plop, you still have the option to reboot your entire ESXi server straight into unRAID instead of ESX in case of a problem. I also have the option of pulling my flash and unRAID drives, putting them into my spare Norco, and booting from that. Not that I would unless I had a massive hardware failure, but it leaves the option.

I do need to do a lot of editing to this setup, and time is just not on my side right now.

I never posted a few of the promised tutorials either, but it is on my to-do list.

Thanks for reminding me.

 

Link to comment

Johnm,

 

Thanks for your answer.

I do understand your rationale, and it makes total sense. In case of a crash, it is true that having the USB key available makes things easy.

But how hard would it be to export the vmdk whenever a modification is applied? This way it would be easy to rebuild the key in no time in case of a crash ;)

For test purposes you are 100% right, Plop is so much easier.

I wish VMware would go a step further and allow us to boot from USB.

One thing keeping me away from Plop is really not only the speed: I also noticed that sometimes it would not pick up my USB key, no matter how long a delay I gave it.

 

once again your comments and experience are highly appreciated.

 

Cheers,

R

 

 

Link to comment

The easiest way to get a really fast boot of unRAID in ESXi is a hybrid boot:

Boot from a hard drive/SSD, then have it switch to the flash after extracting the image and starting the kernel in RAM.

This is a handy tip I picked up from brian89gp.

This is from memory; I might have missed a step.

 

These steps assume you already booted unRAID in ESXi using Plop.

 

On a Windows box (any NT flavor):

  • Take your working (Plop-bootable) unRAID flash drive and plug it into your Windows box.
  • Rename your unRAID flash to a new name (I named mine "unraidboot5b13"; "root" would work also).
  • Using WinImage, make a vmdk drive image of the flash drive. (I named my image "unraidboot5b13.vmdk")
    This should give you a bootable drive image the size of your flash.
  • Rename your unRAID flash BACK to "UNRAID" (so that it mounts the USB for the config files and license).
  • Plug your unRAID flash back into your ESXi box.
  • Add the "unraidboot5b13.vmdk" hard drive to your unRAID guest.
  • If you had a Plop virtual drive mounted, replace it with this new drive image.

 

Assuming this is your only "virtual drive", unRAID will now boot from this drive.
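If you'd rather do the last two bullets from the command line instead of the vSphere client, something like this should work from the ESXi Tech Support Mode shell (the datastore, host, and file names here are examples, not from the original steps):

```
# Copy the WinImage-made image onto a datastore (run from your PC):
scp unraidboot5b13.vmdk root@esxi-host:/vmfs/volumes/datastore1/unRAID/

# On the ESXi host: optionally clone it thin-provisioned, which also
# rebuilds the descriptor in the format ESXi expects:
vmkfstools -i /vmfs/volumes/datastore1/unRAID/unraidboot5b13.vmdk \
           /vmfs/volumes/datastore1/unRAID/unraidboot.vmdk -d thin

# Find the unRAID guest, then reload it after pointing its .vmx at
# the new disk:
vim-cmd vmsvc/getallvms
vim-cmd vmsvc/reload <vmid>
```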

 

Optional:

Instead of using your actual unRAID flash, you can copy your working unRAID flash drive to a second (smaller) flash drive.

Then image the "copy" drive.

[That was what I did. I used a Win7 guest with a spare flash drive passed through. I mapped to \\tower\flash, copied all the files over, ran "make_bootable.bat", and imaged it per the instructions above. This was slower, but I no longer have a licensed copy of WinImage and did not want to install trialware on my main PC.]

 

Optional 2:

Instead of using WinImage, create a new 8GB VMDK for one of your Windows guests.

After adding that drive to your Windows guest, format it FAT32 and name it UNRAID.

Install the version of unRAID that you are running on your server to that drive.

Run the syslinux bat file.

Rename the VMDK to something else: "root", "boot", "UR_Ver#".

Shut down the Windows guest.

Unmount the drive from the Windows guest and assign it to the unRAID guest.

As long as your unRAID flash is installed... fast boot.

 

Optional 3:

Use a Windows guest.

(I just copy one of my basic Win7 guests and install the WinImage demo, then erase the guest afterwards or go back to a snapshot.)

Map the same USB that your ESXi uses.

Power down your unRAID guest (both guests cannot use the flash at the same time).

After unRAID is off, power up this Windows guest.

Use WinImage on the actual flash drive.

Copy the vmdk to the datastore.

Power off the guest.

Power on the unRAID guest.

 

PROS:

At some point during the boot (right after image extraction), it will mount your unRAID pass-through USB flash.

It will read all configuration files and the license file from the flash.

It will save all changes you make to "flash"/"boot" to the USB drive, not the virtual drive.

You can still boot from your USB if you need to.

 

CONS:

If you upgrade to a new version of unRAID, you will have to create a new image file.

You can accidentally mount your virtual boot drive in unRAID, adding it to the array and erasing it.

 

 

[I tend to keep a backup of the flash and the vmdk together. This came in really handy when I had to roll back to B12 when B13 was not playing nice.]

 

I'll try to add this as a "Tip" later once I get time to make a new image and test the walk-through.

Link to comment

EDIT: Please ignore this. I formatted my 4.7 flash drive, re-created it, and now it works... beats me.

 

Johnm,

 

OK, I am going to sound like an idiot... or ignorant.

I am starting my tests (for now using Plop).

When I use a 4.7 key, although it boots perfectly, I do not get any network.

If I switch to beta5, it works like a charm.

I had the feeling that it should work with 4.7...

So, two things:

1. I was once again wrong... it does not work with 4.7.

2. I am having a problem.

My bets are on 1... but once again I am turning to you.

 

Cheers,

R

 

PS: and the million dollar question: which beta do you consider the most stable?

Link to comment

PS: and the million dollar question: which beta do you consider the most stable?

That is the 64 million dollar question, and much of it is based on opinion.

I run 5.0b6a on my production server and have the latest beta on my test server(s).  The test servers have nowhere near the capacity of my production box, though.

Link to comment

EDIT: Please ignore this. I formatted my 4.7 flash drive, re-created it, and now it works... beats me.

 

Johnm,

 

OK, I am going to sound like an idiot... or ignorant.

I am starting my tests (for now using Plop).

When I use a 4.7 key, although it boots perfectly, I do not get any network.

If I switch to beta5, it works like a charm.

I had the feeling that it should work with 4.7...

So, two things:

1. I was once again wrong... it does not work with 4.7.

2. I am having a problem.

My bets are on 1... but once again I am turning to you.

 

Cheers,

R

 

PS: and the million dollar question: which beta do you consider the most stable?

4.7 should work fine. Use the E1000 adapter when you make the guest (I assume you are not passing through an adapter?).
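For reference, selecting the E1000 amounts to a two-line setting in the guest's .vmx file (a config fragment; you can also just pick the adapter type in the New VM wizard):

```
ethernet0.present = "TRUE"
ethernet0.virtualDev = "e1000"
```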

 

Err... all the betas have some bug somewhere.

I am on 5B12 (not 12a) right now. 13 was redballing drives left and right on me due to the LSI driver/kernel issue. You're not running MV8s, so 12/12a should be OK?

As always, read each beta thread before you beta; learn its limits/bugs.

5B6a was stable for me also, but it does not have 3TB or full LSI support.

Link to comment

4.7 should work fine. Use the E1000 adapter when you make the guest (I assume you are not passing through an adapter?).

 

Err... all the betas have some bug somewhere.

I am on 5B12 (not 12a) right now. 13 was redballing drives left and right on me due to the LSI driver/kernel issue. You're not running MV8s, so 12/12a should be OK?

As always, read each beta thread before you beta; learn its limits/bugs.

5B6a was stable for me also, but it does not have 3TB or full LSI support.

 

As I mentioned in my previous post, I got 4.7 to work. I have no clue what happened, and I was in a hurry so I forgot to keep a syslog... duh!! ;)

I have been following each beta, and there are pros and cons to all of them; that's why I asked you. But as 4.7 works, I will patiently wait :)

You are right, I am currently using 2 MV8s, but I am thinking of getting a RES2SV240 / 2x M1015 combo to get 24 drives on passthrough, while I keep the motherboard ports for other uses... [Even though I would like to see if I can dedicate a few ports as passthrough. I will see if I can manage that without crashing my ESX; if it happens, I don't care, as I am running it from a USB key, so it is easy to revert ;)]

I need two more cards for another machine, so I can use my MV8s there, as I won't be going ESX on it (no VT-d on the processor there).

Currently I am doing my tests on a workstation where I am passing through a sil3231. lol

 

Cheers,

R

Link to comment

I thought you had M1015s. If you have MV8s, 5B13 might be good for you; it is a bit early to tell if you went beta.

I think I confused your build with Nia's.

5B6a would be my recommendation then, if you do not run Macs.

You are sticking to 4.7, so... that is irrelevant.

I just created a new Win7 VM. I'll see if I can update that booting tip later today.

 

Link to comment

I thought you had M1015s. If you have MV8s, 5B13 might be good for you; it is a bit early to tell if you went beta.

I think I confused your build with Nia's.

5B6a would be my recommendation then, if you do not run Macs.

You are sticking to 4.7, so... that is irrelevant.

I just created a new Win7 VM. I'll see if I can update that booting tip later today.

 

 

About updates:

I wrote up a quick ESX setup (NTP) and update (patches) procedure done entirely through the CLI, so it is pretty simple.

Do you want me to post it here so you can include it?

 

Cheers,

R

Link to comment

I thought you had M1015s. If you have MV8s, 5B13 might be good for you; it is a bit early to tell if you went beta.

I think I confused your build with Nia's.

5B6a would be my recommendation then, if you do not run Macs.

You are sticking to 4.7, so... that is irrelevant.

I just created a new Win7 VM. I'll see if I can update that booting tip later today.

 

 

About updates:

I wrote up a quick ESX setup (NTP) and update (patches) procedure done entirely through the CLI, so it is pretty simple.

Do you want me to post it here so you can include it?

 

Cheers,

R

 

Anything that you feel can help others. Even I come back to the build notes when I forget something.
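Until R posts his writeup, here is a rough sketch of what such a CLI pass looks like on ESXi 5.0 (from memory, not R's actual script; the NTP server and patch bundle name are placeholders):

```
# NTP: point the host at a time source and restart the daemon
echo "server pool.ntp.org" >> /etc/ntp.conf
/etc/init.d/ntpd restart

# Patching: enter maintenance mode, apply an offline bundle, reboot
vim-cmd hostsvc/maintenance_mode_enter
esxcli software vib update -d /vmfs/volumes/datastore1/<patch-bundle>.zip
reboot
```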

Link to comment

So I'm getting close to purchasing some hardware. Because I currently live in Dubai and have to order equipment from the US and get it freight-forwarded (Dubai's crap for this sort of thing), I am trying to cover all the bases and get it right the first time. Much easier than trying to return/get new hardware at a later date.

 

From your earlier recommendation I am thinking of getting the X8SIL instead of the board you are using. I understand this makes it easier to get 32GB of memory now, but I'm also sacrificing the onboard SATA3 connections, which I was thinking I could use for some fast SSDs for running VMs. Do you think this is a reasonable tradeoff, and would you still recommend the X8SIL instead of the X9SCM?

 

Also, I'm still unsure which RAID card/controller card to go with for passthrough. I don't have anything, so I will be starting from scratch either way. I see people frequently talking about the MV8s and the M1015. Which would you recommend if you had neither, and why? I see you recommend the M1015 on your first page now as a recommended upgrade, but I'm still confused about the discussion in the thread of potential compatibility issues with these cards. Am I going to have a bumpy ride with these?

 

I don't understand the RAID cards at all. Are the M1015s an end-of-life product? Is the only way to buy these to get second-hand stock? If that's the case and I want to buy new, is there a different card you could recommend? I searched on Newegg for LSI cards and there are a ton, but I really don't know what I should look for. How do you navigate the specs on these cards? For example, would the following work?

 

http://www.newegg.com/Product/Product.aspx?Item=N82E16816118112

 

Haha... the more I look and think, the more confused I get  :)

Link to comment

So I'm getting close to purchasing some hardware. Because I currently live in Dubai and have to order equipment from the US and get it freight-forwarded (Dubai's crap for this sort of thing), I am trying to cover all the bases and get it right the first time. Much easier than trying to return/get new hardware at a later date.

That's a tough one. You almost need to find a person to build it, burn it in, and ship it to you.

I completely understand the "get it right the first time" thing.

 

From your earlier recommendation I am thinking of getting the X8SIL instead of the board you are using. I understand this makes it easier to get 32GB of memory now, but I'm also sacrificing the onboard SATA3 connections, which I was thinking I could use for some fast SSDs for running VMs. Do you think this is a reasonable tradeoff, and would you still recommend the X8SIL instead of the X9SCM?

I like my X9SCM just fine. I was just putting it out there that the X8SIL can run ESXi with 32GiB of RAM now (for a hefty price), while the X9SCM is lacking 8GiB RAM sticks. Honestly, for casual use, 16GiB is OK.

(Note: I have not personally tested the RAM I recommended for the X8SIL; I went by what Kingston said.)

 

I have some databases running on my ESXi box that want more RAM. More RAM can always be good.

There are other options where you can get up to 128GiB if you're building production boxes.

ESXi tends to run out of RAM before CPU.

Again, for home use, unless you are going crazy, 16GiB is fine.

 

As far as using SSDs for a datastore, I am sort of an oddball in that I do use SSDs for mine.

Most people use 7200 RPM mechanical drives for their home servers, and businesses use hardware RAID arrays.

I'll admit the SSD route is fast, very fast, but it is not necessarily "safe".

Some people will say I am setting myself up for an SSD to die and losing my datastore.

I am also running a database on one of my guests that does massive disk I/O, in conjunction with the fact that I am not doing TRIM or garbage collection (not supported by ESXi).

So, yes, I am going to burn these SSDs out. (I am sure I'll get several years, but it will happen.)

So yes... backups.

 

Also, I'm still unsure which RAID card/controller card to go with for passthrough. I don't have anything, so I will be starting from scratch either way. I see people frequently talking about the MV8s and the M1015. Which would you recommend if you had neither, and why? I see you recommend the M1015 on your first page now as a recommended upgrade, but I'm still confused about the discussion in the thread of potential compatibility issues with these cards. Am I going to have a bumpy ride with these?

That is a tough call. I think the M1015 is a better card for this build for 2 simple reasons:

1. It is natively supported by ESXi (while the MV8 has to be "hacked" to work).

2. It is a faster card with more bandwidth. While it will not make a big difference in unRAID itself, if you decide to use a port expander, the M1015 will be able to keep up while the MV8 would slow way down.

Be aware the M1015 needs to be reflashed, and you cannot do it on a lot of motherboards, including the X9SCM (or X8SIL?). (I updated my recommended upgrades to show this.)

 

I don't understand the RAID cards at all. Are the M1015s an end-of-life product? Is the only way to buy these to get second-hand stock? If that's the case and I want to buy new, is there a different card you could recommend? I searched on Newegg for LSI cards and there are a ton, but I really don't know what I should look for. How do you navigate the specs on these cards? For example, would the following work?

 

http://www.newegg.com/Product/Product.aspx?Item=N82E16816118112

 

Haha... the more I look and think, the more confused I get  :)

The M1015 is an LSI card made for IBM. It is far from end of life, and it is very similar to the one you linked. We buy them off eBay because a lot of people do not need them in their IBM servers and pull them out. We can get them second hand (yet still new) for $65 instead of buying the retail LSI version for $250.

 

You can then reflash the M1015 with the official LSI BIOS, effectively turning the card into an LSI 9210-8i 6Gb/s.
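The usual crossflash goes roughly like this, from a DOS boot disk (tool and firmware file names are the commonly circulated ones from LSI's downloads; treat this as a sketch, not a tested walkthrough):

```
megarec -writesbr 0 sbrempty.bin            :: blank the SBR
megarec -cleanflash 0                       :: wipe the IBM firmware, then reboot
sas2flsh -o -f 2118it.bin -b mptsas2.rom    :: flash LSI IT firmware + boot ROM
sas2flsh -o -sasadd <address from the card's sticker>
```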

 

For you, I would assume the MV8 would be a better solution due to possible issues with the reflash.

Link to comment
