ATLAS: My Virtualized unRAID Server



Your PLOP boot manager method is inefficient.  You can mount an ISO and have that ISO boot right to USB with no delay.  You can grab a pre-created one from the ESXi thread:

Hey, thanks.

I could not find any tutorial for PLOP.

I did try the ISO (CD-boots-to-USB) method.

It was not 100% reliable at finding the USB drive; it would only catch it about 4 out of 5 times. The same thing happened if my boot countdown was set low: I would get the red error (no device found). Right now it "seems" to boot 100% of the time.

While I don't reboot unRAID that often, it is still a PITA to log into the ESXi console and reset the VM when that happens. I prefer stability over 15 extra seconds of boot time.

 

I did not try that pre-created ISO. I'll try it out later today.

I am running a parity check now.

I'll report back once I do.

 

It would be better to use VMDirectPath for the USB drive as it will run at full USB 2.0 speeds, not the 1.1 you are seeing with passthrough.  I am unable to get USB via VMDirectPath working properly (hangs during unRAID boot), so I am curious to see if others are successful.

 

Keep up the thread, I think this will get several people to take the plunge.

 

I totally agree with you. On my board, the individual USB devices are not showing up for VMDirectPath (see my tutorial; the USB drives are plugged in but don't show). If they did, I would try it.

 

I looked at this page before I gave up and used the normal USB passthrough method.

As far as speed goes, it is slow, but not USB 1.1 speeds. I would not put a USB hard drive on it, though. I don't reboot unRAID often enough to worry about the boot time (within reason); I am worried about performance once it is booted.


I was able to set it up through VMDirectPath.  It shows 8 USB devices for my motherboard (X8DTH-6F): 6 UHCI and 2 EHCI.  I set VMDirectPath on half of them (3 UHCI and 1 EHCI) since they all seemed to be attached to the same hub and group of ports (a keyboard attached to the same USB port was passed through to the guest OS no matter which of the 4 PCI devices I set to pass through).  I then passed the EHCI PCI device through, and it boots fine.

 

I am having problems with the fastpath fail state, but that is a problem with ESXi 4.0 that was fixed in 4.1.  It is important that you set every USB PCI device that can see a particular USB port for VMDirectPath.  If you do not, ESXi and the guest OS will fight for ownership and you get into the fastpath fail state problems: it completely freezes 4.0, and while the issue was fixed in 4.1, it can still cause problems in the guest OS.
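As an aside, if you want to see which USB controllers your host exposes before deciding what to pass through, you can list them from an ESXi shell session. A small sketch (assumes Tech Support Mode/SSH is enabled; the exact output format varies by ESXi version):

# list the PCI USB controllers ESXi can see (UHCI = USB 1.1 ports, EHCI = USB 2.0)
lspci | grep -i usb

Each UHCI/EHCI function listed is one of the candidate devices you would toggle for VMDirectPath under Configuration > Advanced Settings.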

 

Thanks for the tip, I will try passing all USB ports through on the controller. I tried this initially and got a PSOD as my Keyboard/mouse were plugged into the same bus.  I'll see if I can identify a different bus and pass the other set of USB devices through.

 


Thanks for the tip, I will try passing all USB ports through on the controller. I tried this initially and got a PSOD as my Keyboard/mouse were plugged into the same bus.  I'll see if I can identify a different bus and pass the other set of USB devices through.

 

If you use VMDirectPath on the USB controller that your ESXi boot flash drive is on, your ESXi server will never boot again.

You will have to reinstall.

 

PS.

Some of this conversation might be best in this thread: http://lime-technology.com/forum/index.php?topic=7914.0 just so that others can find it more easily.


The parity check completed fine.

I used three 3TB Hitachi drives: two data and one parity.

 

Drive temps and spin-down work.

 

 


 

 

 

The next test will be to shut it down, swap USB flash drives with a production unRAID, swap in its hard drives (god I love pull-out drive bays), boot, and run a parity check (without correction) on that array.

 

I am still up in the air about giving unRAID a dedicated NIC. Right now, even with 6 or so VMs, I do not have a lot of simultaneous network traffic.

 

Here is where the Tyan C204 board would have an advantage: it has 3x 1Gb NICs.

I looked at it, but chose Supermicro to keep a standard.

Hindsight... I love it.


Next up...

 

Putting the ESXi box on an APC USB UPS with clean(ish) shutdown of the VMs.

 

If you have more than one physical server on this UPS that gets shut down by the APCUPSd host/slave setup, you also need a hub/switch plugged into the UPS so they can keep communicating.

This will be a real ghetto hack.

This all sounds long and crazy, but it should only take 15-30 minutes to set up.

 

The sequence of events I'm going for:

Power loss! > APCUPSd host (unRAID) sends the on-battery alert

3 min on battery > the second unRAID and the Win2k8 RAID array shut down (if on) [this is optional; you might not have other servers outside of ESXi on your UPS]

4 min on battery > the script PC sends the shutdown command to ESXi

ESXi sends graceful shutdown commands to all VMs, including unRAID, then powers off the server.

7 min on battery > the unRAID host shuts itself down if something went wrong

 

This will require some testing and fine tuning.

In addition, all APCUPSd clients are also set with a fail-safe to shut down if the battery is not going to last long enough for this disaster plan.

 

 

There is more than one way to do this.

You can create your own flavor of this using several different methods.

The basic idea is to have APCUPSd call a shutdown script that shuts down the entire ESXi box instead of just one VM. Also, if you have an APC Pro with network broadcast, the instructions will be different. That is something you will have to set up on your own, but it should be similar.

 

Quick background on my setup.

On my ESXi server, I have a "light" Windows 2k8r2 VM. I use this VM for all kinds of automation; all it does is kick off various scripts. This includes backup jobs, automated file transfers, downloading WSUS files for offline updates, waking and shutting down PCs and servers, and now the ESXi shutdown.

 

Technically, you should have the UPS plugged into the Windows/script PC. I tried this, but for some reason I could not get it to see the UPS. I didn't try too hard to fix it; that is OK though.

I have 2 other file servers on the same UPS, plus a PC that is not on the UPS that monitors the UPS. I would have to edit the config files on all of those physical computers if I moved the UPS off my unRAID box.

Remember, I just moved "Goliath's" flash drive and its hard drives to the ESXi server; I never installed a new unRAID. APCUPSd is already set up on that unRAID server (now a VM).

 

What we need:

APC brand UPS with USB

Zeron's VMware tools for unRAID

Putty http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html

Plink http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html

winSCP: http://winscp.net/eng/download.php

unMENU: ... I hope you know this one.

APCUPSd for both unRAID and Windows http://www.apcupsd.org/ (you only need to download the Windows version)

A Windows-based VM (or *nix; I'm showing the Windows method though)

[I use 2k8r2; you can use XP/Vista/Win7 also, the instructions will be the same]

 

OK, before we get started, we need to present the UPS to the unRAID VM.

Power down the unRAID VM.

Open "Edit Settings" for the VM properties (this will be just like how we added the flash drive before).

"Add" > USB Device > Next > select your APC UPS > Next > Finish

 


 

 

Go ahead and exit out of the VM settings and restart your unRAID VM.

 

OK. Now the fun part, making this all work.

 

 

1) Install VMware Tools for unRAID if you have not already.

You can place a line in the go script: "installpkg /boot/open-vm-tools-2011.07.19-450511-unRaid5.0-beta11-Zeron1.01.tgz" (edit this to reflect the VMware Tools version you downloaded).

That installs the build for 5.0-beta11. In that example the file sits in the root of the flash drive; modify the path if you need to. A sketch of the go script is shown below.

You can also just place the .tgz into the "extras" folder; it should then auto-install on each boot.

Or, you can use the new plugin install if you have beta11+ (this didn't work for me at the time of writing).
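For reference, here is roughly what the go script (/boot/config/go) ends up looking like with that line added; the package filename is just the example from above, so substitute whatever version you actually downloaded:

#!/bin/bash
# install Zeron's open-vm-tools build at every boot (filename is an example; adjust to your download)
installpkg /boot/open-vm-tools-2011.07.19-450511-unRaid5.0-beta11-Zeron1.01.tgz
# start the unRAID management GUI as usual
/usr/local/sbin/emhttp &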

 

If the install worked, the "Summary/General" section for the unRAID VM will switch from "not installed" to "unmanaged".

 


 

2) Install unMENU into your unRAID

 

3) Install APCUPSd from the unMENU Package Manager.

 

I installed APCUPSd version 3.14.8; 3.14.3 should work just fine also, and some people prefer that one on unRAID.

I set mine to trigger on:

20% remaining battery

10 min remaining battery

7 min into the battery cycle

Whichever comes first is what it acts on.

I turned OFF "power down the UPS".
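For reference, the equivalent directives in apcupsd.conf on the unRAID side would look roughly like this (a sketch only; the unMENU package sets these through its settings page, so treat the values, not the exact file layout, as the point):

UPSCABLE usb
UPSTYPE usb
DEVICE
BATTERYLEVEL 20    # shut down at 20% battery remaining
MINUTES 10         # or at 10 minutes of estimated runtime left
TIMEOUT 420        # or after 7 minutes (420 seconds) on battery, whichever comes first
NETSERVER on       # let the other APCUPSd slaves on the LAN listen to this host
NISPORT 3551
KILLDELAY 0        # 0 = do not try to power the UPS itself off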

 

As much as I would love to have it power down the UPS, the kill-power timer in the unRAID build fires too soon; it would cut power to ESXi before all my VMs were shut down. If unRAID were the last host to shut down, it might work.

 

*Note: my second physical unRAID server is plugged into the same UPS. It is set to listen to this server and to power off after 3 minutes on battery.

 


 

Next:

In unMENU, install the "clean powerdown" package (powerdown-1.02-noarch-unRAID.tgz at the time of writing).

Make sure you enable re-install on boot.

 

4)

(You can also do all of this through telnet if you prefer the command line.)

Fire up winSCP and log into your ESXi server.

I'll assume you don't have a folder for scripts on your ESXi server; we'll make a new one.

 

Change your directory to /vmfs/volumes/~yourdatastorename~

Make a new folder for scripts if you don't have one (I called mine scriptfiles).

Right-click empty space and Make New Folder "scriptfiles" (or whatever you wish).

Go into that folder, right-click empty space, and Make New File "shutdownesxi.sh".

 

In the example you can see I used "/vmfs/volumes/15TB7200/scriptfiles"


 

Right click shutdownesxi.sh > Edit

 

We will add only one line to the script:

"/sbin/poweroff" (without the quotes)

 


 

Go ahead and X out of the edit box; it will ask to save and upload the edits: "Yes".

 

Now right-click shutdownesxi.sh and select Properties.

Set the octal permissions to "0777" > OK, and close winSCP.
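If you would rather do this step from the command line, the same thing can be done in an SSH/Tech Support Mode session on the ESXi host (a sketch, assuming your datastore is named 15TB7200 as in my example):

mkdir -p /vmfs/volumes/15TB7200/scriptfiles
echo '/sbin/poweroff' > /vmfs/volumes/15TB7200/scriptfiles/shutdownesxi.sh
chmod 0777 /vmfs/volumes/15TB7200/scriptfiles/shutdownesxi.sh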

 


 

5)

Launch the vSphere client.

Edit the shutdown timeout for the unRAID VM (and only the unRAID VM).

I set it to something ridiculously high, in this case 10 min (I bumped it up to 15 min while writing this, in case the VM launch fails and the internal script is mid-run).

 

The reason for this: I want the VM to be shut down by the ESXi "guest shutdown" request when we trigger the shutdown from the Windows box.

In testing, I have noticed that because my drives are asleep and I have spin-up groups, it can take unRAID several minutes just to "wake up" in order to shut down, and then it might take a few more minutes to actually shut down.

 

Once that shutdown timeout expires, ESXi just hits the (virtual) power switch on the unRAID VM.

 

The other reason it is set so high: don't forget, we set APCUPSd inside unRAID to shut down at 7 minutes. If for some reason VMware Tools misses the "graceful shutdown" request, we still have a second request to shut it down from inside the VM via APCUPSd.

 

Once the VM is powered off, ESXi moves on to the next VM; it won't wait out the rest of the timeout countdown, so you won't be sitting there for 10-15 minutes.

 

I also put the unRAID VM last in the boot order; that way it is the first VM to power off.

If you have other VMs that map to unRAID, you might want to put them after unRAID in the startup sequence. That way unRAID is online before them, and they shut down before it in case they have any file locks.

 

It was pointed out by fade23 that if unRAID is set as first boot/last shutdown, the ESXi box fails to wait for a clean powerdown and powers the server off. This = bad.

 


 

 

 

6)

Let's move to the Windows VM and put all of this together.

Install APCUPSd for Windows with all default features except the UPS driver. No, not that guy who brings all the goodies from Newegg; the device driver for the UPS, which we don't have plugged into this VM.

 

Modify the config file (apcupsd.conf, normally under C:\apcupsd\etc\apcupsd):

Change the following lines in the config.

 

UPSCABLE ether

UPSTYPE net

DEVICE goliath:3551 (use your own server)

POLLTIME 30 (optional 60 works)

SCRIPTDIR C:\apcupsd\etc\apcupsd

BATTERYLEVEL 20

MINUTES 10

TIMEOUT 240

# I don't think the next two lines work from a slave, but we can try:

WAKEUP 60

SLEEP 600

 

 

Set up the launch scripts:

On the Windows script VM, install PuTTY and plink to c:\putty.

 

Open C:\apcupsd\etc\apcupsd\apccontrol.bat and find the line:

SET SHUTDOWN="%sbindir%\shutdown"

Change it to:

SET SHUTDOWN="%sbindir%\shutdown-esxi.bat"

Save.

 

In the C:\apcupsd\bin folder, create a new batch file named shutdown-esxi.bat containing a single line:

C:\scripts\shutdown_atlas.bat

*Replace atlas with your ESXi server name.

I like to keep all my scripts in one folder, c:\scripts.

 

Go to (or create) c:\scripts and create a new batch file:

shutdown_atlas.bat (name it whatever you called it in your other batch file)

The batch file will have only one line in it:

"c:\Putty\plink" [email protected] -pw yourpassword "sh /vmfs/volumes/15TB7200/scriptfiles/shutdownesxi.sh"

Change the IP to your server, yourpassword to your root password, and 15TB7200 to your own datastore.

 

If you do not want your root password in a plain text file, you can save a PuTTY session called ESXi and use:

"c:\Putty\plink" -load esxi "sh /vmfs/volumes/15TB7200/scriptfiles/shutdownesxi.sh" 

Or, you can use a private key.

"c:\Putty\plink" -i /locationToPrivateKey/key.ppk "sh /vmfs/volumes/15TB7200/scriptfiles/shutdownesxi.sh" 

 

Launching either of the last two batch files will shut down your ESXi box, so test at your own risk.
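To recap the full chain on a power failure (filenames and paths are the ones from my example):

APCUPSd on-battery threshold > apccontrol.bat (SHUTDOWN) > C:\apcupsd\bin\shutdown-esxi.bat > C:\scripts\shutdown_atlas.bat > plink > sh /vmfs/volumes/15TB7200/scriptfiles/shutdownesxi.sh on the ESXi host > /sbin/poweroff > ESXi shuts the guests down in order (unRAID first) and then powers off.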

 

Important step I forgot!

Set the Windows guest that is running the script to auto-login! If you do not do this step, your guest will apparently be sitting at a login screen instead of listening for a power failure. (A registry-based way to script this is sketched after the steps below.)

1. Open the Run command window.
2. Enter control userpasswords2 in the Run window.
3. This will open the User Accounts window.
4. Uncheck the option "Users must enter a user name and password to use this computer" and click Apply. It will ask for the password for the account; enter the password and click OK.
5. Next time you reboot the machine, it will not ask for a username and password.
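If you would rather script the auto-logon, the same setting can be made through the registry (a sketch; the user name and password below are placeholders, and note this stores the password in plain text, the same trade-off as the plink password above):

rem run from an elevated command prompt on the script VM
reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /v AutoAdminLogon /t REG_SZ /d 1 /f
reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /v DefaultUserName /t REG_SZ /d YourUser /f
reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /v DefaultPassword /t REG_SZ /d YourPassword /f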

 

 

If you did it all correctly, you should now have a power-fail disaster plan in place.


I was getting inconsistent results scripting a clean shutdown of ESXi on power loss from the UPS; I found that if unRAID was set as first to boot, last to shutdown, ESXi was initiating the guest powerdown but then powering off before the guest had completely shut down, regardless of the timeout set. I think it would have worked ok if, as you've done, unRAID was the last to boot, first to shutdown, but as it was I ended up with unclean shutdown that resulted in lengthy parity checks on resumption of power.

 

If you need your VMs to shut down in a different order than they start up, you can script your own shutdown sequence to call them one at a time using this Perl library. Here's an example of the shutdown script my Windows 7 VM calls when the UPS reports a power loss:

 

perl c:\scripts\esxi\esxi-control.pl --server 192.168.1.2 --username *** --password *** --action shutdown --vmname indra
perl c:\scripts\esxi\esxi-control.pl --server 192.168.1.2 --username *** --password *** --action shutdown --vmname xbmc
ping 192.168.99.99 -n 1 -w 120000 > nul
perl c:\scripts\esxi\esxi-control.pl --server 192.168.1.2 --username *** --password *** --action shutdown --vmname unraid
ping 192.168.99.99 -n 1 -w 360000 > nul
perl c:\scripts\esxi\esxi-control.pl --server 192.168.1.2 --username *** --password *** --action host-shutdown

 

The ping statements in there are a surrogate for a SLEEP-type command, which Windows batch scripting does not support. The '-w xxxxxx' argument is the number of milliseconds to wait between each command.  Using this script I can shut down my VMs in any arbitrary order, and then issue the shutdown command to ESXi at the end. The only drawback is that you don't have feedback that the prior VM has completed its shutdown (though the Perl library does support querying the status, so one probably could build that in if it were important).


I was getting inconsistent results scripting a clean shutdown of ESXi on power loss from the UPS; I found that if unRAID was set as first to boot, last to shutdown, ESXi was initiating the guest powerdown but then powering off before the guest had completely shut down, regardless of the timeout set. I think it would have worked ok if, as you've done, unRAID was the last to boot, first to shutdown, but as it was I ended up with unclean shutdown that resulted in lengthy parity checks on resumption of power.

 

If you need your VMs to shut down in a different order than they start up, you can script your own shutdown sequence to call them one at a time using this Perl library.

 

Good info thanks.

 

I did look at the Perl script before I did it my way; it was going to be my next step if the way I set it up didn't work.

In testing, I had no issue with the way I did it.

But my unRAID was first to shut down, so I didn't run into this issue. Thanks for the heads-up for others, though!

If I run into issues I'll switch methods.

 

I intentionally set my unRAID to shut down first because I wanted to make sure it was off before the battery ran out of juice, and to avoid the issue you mention. Out of all my VMs, I only need unRAID and Newsbin (running a massive Usenet database that could corrupt) to shut down cleanly; the rest could die a painful death and survive.

 

 

perl c:\scripts\esxi\esxi-control.pl --server 192.168.1.2 --username *** --password *** --action shutdown --vmname indra
perl c:\scripts\esxi\esxi-control.pl --server 192.168.1.2 --username *** --password *** --action shutdown --vmname xbmc
ping 192.168.99.99 -n 1 -w 120000 > nul
perl c:\scripts\esxi\esxi-control.pl --server 192.168.1.2 --username *** --password *** --action shutdown --vmname unraid
ping 192.168.99.99 -n 1 -w 360000 > nul
perl c:\scripts\esxi\esxi-control.pl --server 192.168.1.2 --username *** --password *** --action host-shutdown

 

The ping statements in there are a surrogate for a SLEEP-type command, which Windows batch scripting does not support. The '-w xxxxxx' argument is the number of milliseconds to wait between each command.  Using this script I can shut down my VMs in any arbitrary order, and then issue the shutdown command to ESXi at the end. The only drawback is that you don't have feedback that the prior VM has completed its shutdown (though the Perl library does support querying the status, so one probably could build that in if it were important).

 

I just want to point out that the ping trick only waits if it CAN'T find a machine at that IP; otherwise it won't wait...

Using a bogus address like 1.1.1.1 would be better, especially for me, since 192.168.1.2 is my secondary router and it would respond.

I used that trick in my early testing script..
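As a sketch, two other ways to get a fixed delay in a Windows batch file are the built-in timeout command (Vista/2008 and newer) and a loopback ping, which waits roughly one second between echo requests:

rem wait about 120 seconds (note: timeout needs an interactive console, so it may not work when launched from a service)
timeout /t 120 /nobreak > nul

rem or: ~1 second per extra echo request, so -n 121 is roughly 120 seconds
ping -n 121 127.0.0.1 > nul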

 

Ideally, it would be better if we could have APCUPSd inside unRAID execute the command on power failure; then we would not even need the scripting/Windows host.

Personally, I would not be sure where to look, and I already have the script box running for other duties, so that was the route I chose.
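For anyone who wants to try it, APCUPSd does let you hook events directly: if an executable named after the event (for example doshutdown) exists in its SCRIPTDIR, apccontrol runs it. A rough, untested sketch from inside the unRAID VM, assuming an ssh client is available there and passwordless key login to the ESXi host has been set up (both assumptions, not part of my build):

#!/bin/bash
# /etc/apcupsd/doshutdown - apcupsd runs this when it decides to shut down
# tell the ESXi host (IP and datastore are examples) to run the poweroff script
ssh [email protected] 'sh /vmfs/volumes/15TB7200/scriptfiles/shutdownesxi.sh'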

 

 

Edit:

PS.

In testing my scripts, I found it easier to plug everything that was on the UPS into the wall; then, when I pulled the UPS's power, there was no load on it, so I could keep testing quickly without waiting for the battery to recover.

Once I had it all working, I put everything back on the UPS and did it for real.

The best part is:

The next day, the power DID go out, and it all worked.

 

I am going to look into getting the UPS to auto power off sometime this week. That part failed me.


I found some typos/room for confusion in your instructions and made the fixes below. I haven't done a proper test yet but the script functionality seems to work fine. I just need to test whether the UPS will actually send out the right commands to initiate the entire process (which it looks like it will).

 

Next up...

 

...

 

4)

(you can also do all of this though telnet if you prefer command line)

Fire up winSCP and log into your esxi server.

I'll assume you don't  have a folder for scripts on your esxi server. We'll make a new one.

 

Change your directory to /vmfs/volumes/~yourdatastorename~

make a new folder for scripts if you don't have one (I called mine scriptfolder).

right click empty space and Make New folder "scriptfolder"

go into that folder and right click empty space and Make New File "shutdownesxi.sh"

 

...

 

Go to (or create c:\scripts)

Create a new Batch file

 

shutdown_atlas.bat Name it what you called it in your other batch file

the batch file will have only one line in it:

I changed scriptfiles to read as scriptfolder since that is what you had used above.

"c:\Putty\plink" [email protected] -pw yourpassword "sh /vmfs/volumes/15TB7200/scriptfolder/shutdownesxi.sh"

Change the IP to your server, and yourpassword to your root password, change the 15TB7200 to your own Datastore

 

If you do not want your root password in a plain text file, you can save a Putty session called ESXi and use.

Same change here.

"c:\Putty\plink" -load esxi "sh /vmfs/volumes/15TB7200/scriptfolder/shutdownesxi.sh" 

Or, you can use a private key.

Same change here.

"c:\Putty\plink" -i /locationToPrivateKey/key.ppk "sh /vmfs/volumes/15TB7200/scriptfolder/shutdownesxi.sh" 

 

Launching either of the last two batch files will shut down your ESXi box, so test at your own risk.

 

if you did it all correctly.. you should now have a power fail disaster plan in place.

ESXi Datastore Drives:

2x OCZ Solid 3 SLD3-25SAT3-120G 2.5" 120GB SATA III MLC $155 Each Newegg.

1x 1TB, 1.5TB or 2TB 7200RPM Drive (For ISO's and Backups) (Free from junk pile)

 

So how much of a performance boost is using the SSD drives?

Does it have any impact on the size/number of VMs run?

How much of a hit would it be using regular HDD drives for the ESXi Datastore Drives?


ESXi Datastore Drives:

2x OCZ Solid 3 SLD3-25SAT3-120G 2.5" 120GB SATA III MLC $155 Each Newegg.

1x 1TB, 1.5TB or 2TB 7200RPM Drive (For ISO's and Backups) (Free from junk pile)

 

So how much of a performance boost is using the SSD drives?

Does it have any impact on the size/number of VMs run?

How much of a hit would it be using regular HDD drives for the ESXi Datastore Drives?

Those drives read at about 550MB/s and write at 500MB/s, and I have no head-seek lag.

Running multiple VMs on one SSD is seamless from inside the VMs, especially since some VMs are doing heavy disk I/O.

 

A mechanical drive might balk at heavy-I/O VMs or start lagging after a few VMs are on it. If you do run mechanical drives, I would RAID them in a way that gets good transfer speed (then again, I personally would RAID the SSDs too).

 

Think about how your desktop's mechanical drive starts to lag when you unrar lots of large files at the same time to your C: drive.

 

The limitation of the SSDs I got is that they are smallish, since I was thinking I would RAID them. So three, maybe four small (30GB and under) VMs max per SSD.

 

Short answer:

They fly and are very snappy.

From the green start-VM arrow to the auto-login desktop is about 8 seconds in 2K8r2.

 


I found some typos/room for confusion in your instructions and made the fixes below.

 

Hey, thanks. I was typing it with my laptop's low-battery alarm ringing in one ear and my GF's hunger alarm in the other. I never got to proof it, add the rest of the screen captures, or add the hyperlinks to the downloads.

 

I just hit post, walked away, and never came back to it.

 

I'll go back and clean it up...

 

Oops, and it is "scriptfiles"..

 

Edit:

I made quite a few edits and added the step for adding the UPS to unRAID. Oops, that's sort of an important step. I'll re-read it again tomorrow.


I actually upgraded to 5.0 now that it is available, using the upgrade option.

 

Since I did not do a clean install, I cannot say whether it would be the same line by line; it should be about the same, if not identical.

 

I do not see a real reason to use 5 over 4 at this point other than better memory management and SSD support.

There might be some new device driver support, but nothing that affected my build.

 

4.1u1 is stable and most bugs are known. 5 is a new animal. Many people are choosing to wait and see.

 

I might try to install ESXi 5.0 as a guest later and see if there is any difference.

 

 

 

Build update:

I added one of my M1015s. Now my unRAID has 1 MV8 and 1 M1015 (in IT mode), and I have 12 3TB drives in the box.

 

I am going to have to order new SAS cables; the Norco OEM cables are too short to go from the LSI card to the backplane.

 

I can get Port0 plugged in, but it is strung tight like a bowstring; not good. I cannot get Port1 plugged in at all.

 

As soon as Monoprice gets them back in stock, I need to order 4, I guess.

Can anyone confirm these are longer?

 

I have also noticed that the Noctua fan config keeps my drives cooler than the stock fan config, not to mention the Noctua config is silent in comparison.

 

My UPS took a dump last night. No clue what happened. I swapped it out and did a "brain dead" reset on the dead one.


 

Build update:

I added one of my M1015's. now my unraid has 1 MV8 and 1 M1015(in IT mode). I now have 12 3TB drives in the box.

 

 

Johnm,

 

I would love to hear from you about your results with the M1015.

What are the reasons you opted for them?

 

Here the prices are identical for the MV8 and M1015...

 

R


 

Build update:

I added one of my M1015's. now my unraid has 1 MV8 and 1 M1015(in IT mode). I now have 12 3TB drives in the box.

 

 

Johnm,

 

I would love to hear from you on the result with the m1015.

what are the reasons you opted for them ?

 

here the price are identical for MV8 and m1015...

 

R

I went with the M1015 for several reasons:

 

1. They are now supported in the newer betas (temps and spin-down).

2. Price. I got "new pulls" from eBay for $85 shipped, so $20-$25 cheaper.

3. Faster card (x8 PCIe 2.0, 6Gbps vs. x4 PCIe 1.0, 3Gbps) (important for #4).

4. Expander aware. You can connect up to 32 drives to each one. (If I decide to add my second unRAID to the same ESXi server, all I need is 1 or 2 expanders and one of my other Norco boxes as a DAS.) See Here

5. RAID0. In the back of my head, I wanted to use one for my SSD RAID0. I have not tested this yet.

 

 

 

 


I went with the M1015 for several reasons.

 

1 being that they are now supported in the newer betas. (Temp and spindowns)

2 price. I got "new pulls" from ebay for $85 shipped. so $20-$25 cheaper

3 Faster card (x8 PCIe 2.0 6Gbps vs. 4X PCIe 1 3Gbps) (important for #4)

4 Expander aware. You can connect up to 32 drives to each one. (If i decide to add my second unRAID to the same ESXi server. all I need is 1 or 2 expanders and use one of my other Norco boxes as a DAS) See Here

5 raid0. In the back of my head, wanted to use one for my SSD raid0. I have not tested this yet.

 

 

 

 

 

Johnm,

 

is there any question you can't answer ?? ;)

 

lol

R


Yes..

 

ok.

I got a couple more for you. ;)

 

1.

Have you looked into expander cards?

Which one would make sense with an M1015?

 

I am still trying to figure out why the use of an expander would be useful.

Price-wise, maybe buying only one HBA + 1 expander makes it worth it, but the performance will be lower than getting 3 SATA cards... won't it?

 

The only scenario I could think of is if you plan to dupe the boxes with only one mobo (i.e., 2 unRAID instances running on ESX with two enclosures).

 

Am I right?

 

2.

What about running the M1015 on 4.7? Any hints on that?

because you mentioned that they are rec

 

3.

You are running ESX on a flash drive... hmm, never thought of that.

But definitely smart if the whole thing is in memory.

I have to look into that.

I do not like the idea of having it running on a thumb drive, but I am pretty sure I can find an old USB hard drive lying in a drawer...

 

R

 

PS. Your help is really appreciated.


I do not like the idea of having it running on a thumbdrive...

 

Why not?  That was the intention.  Just like unRAID, it's read into memory at boot time and that's it... no frequent updates unless you make a lot of config changes!

 

 

 

Jimwhite,

 

thanks for your comment.

I am testing this fact, but I believe you and Johnm are right.

I don't know why I had the misconception that there was some writing to the ESX partition for memory management,

but obviously, from all the reading I am doing, I was wrong ;)

 

I found some details on the partitions (and therefore the minimum size for the key) for ESX 4 (http://www.vcritical.com/2009/08/if-vmware-esxi-4-is-so-small-why-is-it-so-big/)

but not yet for 5.0

 

I compared it to the footprint I had for a 4.1 and it is pretty similar.

I will see if I can quickly test a 5.0...

 

Cheers,

R


ok.

I got a couple more for you. ;)

 

1.

have you looked into expander cards ?

which one would make sense with an M1015 ?

 

I am still trying to make figure out why the use of an expander would be useful.

pricewise maybe buying only one hba + 1 expander makes it worth it. But the perf will be lower than getting 3 sata cards... won't it ?

 

the only scenario I could think of is if you plan to dupe the boxes with only one mobo... (i.e. 2 unraid running on esx with two enclosures)

 

am I right ?

There is no advantage in unRAID. Expanders were designed for building massive arrays for SAN-type storage; many allow you to build 128-drive SAS arrays off one controller. That would be needed for hardware or ZFS arrays.

 

Since unRAID has a 22-drive limit (D+P+C), and unRAID also sees the drives as individual drives, not an array, it is usually cheaper to buy 3 cheap controller cards than an expander-aware HBA plus an expander.

 

The only advantage it has in unRAID is the number of PCIe slots it takes: one.

In theory, I could use my Supermicro Atom server board that has only 1 PCIe slot with an HBA and expander and build a 22-drive, low-power unRAID.

 

For ESX, where you might have a lot of VMs that you want to assign passthrough PCI(e) cards, the expander saves slots.

 

 

Which expander do I like? I like both the HP and the Chenbro, especially if I need external connectors.

For internal use only, the Intel is a great buy. It has fewer ports, but it is cheaper and comes with all the cables; that saves $80 right there.

 

 

2.

what about running the m1015 on 4.7 ? any hints on that ?

because you mentioned that they are rec

No clue.

I think the SAS2008 driver is in 4.7, but it won't support drive spin-down or drive temps; I think it was 5beta7 where that got enabled.

 

3.

you are running esx on a flashdrive... mmm. never thought of that.

but definitely smart if the whole thing is in memory.

I have to look into that.

I do not like the idea of having it running on a thumbdrive.. but I am pretty sure I can find an old usb harddrive laying in a drawer...

 

R

 

ps .  your help is really appreciated.

I think you got the answer to this..


Johnm,

 

I will follow your lead on the USB.

 

I've been messing around this afternoon with a couple of USB keys (4.1 & 5.0),

just to see how it works.

You are absolutely right.

 

Now I have to check on the MV8... I did not have time to look into that, but they do not appear under Configuration/Storage Adapters... I had to run before I could check the pass-through ;)

 

I will try to see if I can find anything about the M1015 running on unRAID 4.7.

I would rather keep it on 4.x until the final 5 is out.

 

Cheers,

R


now I have to see for the MV8... I did not have time to look into that, but they do not appear in the configuration/storage adapter... I had to run before I could check the pass-through ;)

 

You might have to enable VT-d in the BIOS.

It is usually disabled by default.


now I have to see for the MV8... I did not have time to look into that, but they do not appear in the configuration/storage adapter... I had to run before I could check the pass-through ;)

 

You might have to enable VT-d in the BIOS.

It is usually disabled by default.

 

Johnm,

 

Do you realize I am going to end up either hating you or marrying you!! lol

Being a dude, I guess I have no choice but to hate you!

 

I had indeed left VT-d disabled (I love IPMI; I could check it from home)... I had set up the BIOS correctly on the computer that I used to prepare the keys, not on the one I actually tested them on.. duh!!!!!

Unfortunately, the key that is connected is the VMware one, so I will have to wait until tomorrow to see if it works ;)

 

How many times will I have to thank you?

 

Cheers,

R

 

 
