unRAID on VMware ESXi with Raw Device Mapping



SK, you're currently using the ESXi LSI controller with RDM (as the topic of this post says), correct?  I'm guessing I could always use the 'soft' controller and skip PCIe passthrough for the LSI card I'm actually using right now, especially if spin-up/spin-down works correctly through the VM with it.

 

I really need to get a decent NIC for my ESX box; the Realtek 8111C that's on the mobo now drops its connection frequently under ESX, and at this point I think it's doing the same thing under unRAID without a VM as well (I can't seem to get an rsync from my old box to complete; the new machine hangs at a random point in the transfer). If RDM works correctly with this version, that may be the last piece of the puzzle before I actually start my migration.

 

 

I used ESXi with the LSI controller before, but have now switched to the PVSCSI (paravirtualized SCSI) adapter - that requires the relevant kernel module, which is not included in stock unRAID (the ISO I posted above has it). As far as spin-up/spin-down goes, there are no errors in the logs, but I still need to physically check whether the drives actually spin down as reported.
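
For anyone wanting to do that physical check, here is roughly what I would run from the unRAID console (a sketch; it assumes hdparm is present and /dev/sdb is one of the RDM disks - adjust device names for your setup):

    # confirm the paravirtual SCSI module is actually loaded inside the VM
    lsmod | grep vmw_pvscsi

    # query the drive's power state; "standby" means it really is spun down
    hdparm -C /dev/sdb

Comparing that output against what the web GUI reports should show whether the spin-down commands actually reach the disks.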

 

BTW, my mobo (GIGABYTE GA-MA780G-UD3H) also has the Realtek 8111C, but so far I haven't experienced any network drops.

 

 

 

 

 

 

Link to comment

I used ESXi with the LSI controller before, but have now switched to the PVSCSI (paravirtualized SCSI) adapter - that requires the relevant kernel module, which is not included in stock unRAID (the ISO I posted above has it). As far as spin-up/spin-down goes, there are no errors in the logs, but I still need to physically check whether the drives actually spin down as reported.

 

BTW, my mobo (GIGABYTE GA-MA780G-UD3H) also has the Realtek 8111C, but so far I haven't experienced any network drops.

 

Any advantages with PVSCSI over LSI?

 

I wonder, if unRAID were to officially support drives in ESXi, which adapter they would most likely support first.

 

Link to comment

I used ESXi with the LSI controller before, but have now switched to the PVSCSI (paravirtualized SCSI) adapter - that requires the relevant kernel module, which is not included in stock unRAID (the ISO I posted above has it). As far as spin-up/spin-down goes, there are no errors in the logs, but I still need to physically check whether the drives actually spin down as reported.

 

BTW, my mobo (GIGABYTE GA-MA780G-UD3H) also has the Realtek 8111C, but so far I haven't experienced any network drops.

 

Any advantages with PVSCSI over LSI?

 

I wonder, if unRAID were to officially support drives in ESXi, which adapter they would most likely support first.

 

 

Generally, PVSCSI gives better performance and lower CPU utilization compared to LSI SAS, especially with I/O-intensive VMs. In vSphere 4.1 VMware fixed the bug where latency was slightly higher with PVSCSI under relatively light I/O (< 2K IOPS). But that matters for enterprise loads; for home-system loads it's probably not significant. For those interested: http://www.thelowercasew.com/more-vsphere-4-1-enhancements-welcome-back-pvscsi-driver

 

For unRAID to support PVSCSI, it's just a matter of enabling another module during the kernel build process. The bigger question is unRAID _fully_ supporting VMware ESXi.
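
For reference, enabling it looks roughly like this (a sketch against a plain kernel tree; the exact paths and the unRAID build scripts will differ):

    # in the kernel source used to build the unRAID bzimage
    grep VMWARE_PVSCSI .config        # want CONFIG_VMWARE_PVSCSI=m (or =y)
    make oldconfig
    make bzImage modules              # rebuilds the kernel plus the vmw_pvscsi module

On the ESXi side, the VM's SCSI adapter type just has to be set to "VMware Paravirtual" (scsi0.virtualDev = "pvscsi" in the .vmx).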

 

 

 

Link to comment

Hello,

 

I'm currently running my unRAID server under ESXi 4.1 (hardware: see sig.) using VMDirectPath for all unRAID drives - 6 via the onboard SATA and the other 6 via the BR10i; the ESXi drive is connected to the Promise SATA300 TX4 card.

 

Using the official unRAID version, the configuration works apart from the device IDs, temps and spin-down.

 

Using the first bzimage/bzroot files from SK, I received a lot of error messages when files were written to the drives connected to the BR10i (the cache drive is attached to the onboard SATA). When the files are moved to the array, the "errors" counter starts to rise. Device IDs and temps work.

 

Using the second bzimage/bzroot files from SK, all the drives attached to the BR10i came up as unformatted, plus I'm still getting errors and no temps.

 

Both times I copied the bzimage/bzroot files directly to the USB drive.

 

Syslog is attached. If any further information is required, just ask - and thanks for the great forum.

syslog-2010-12-05.zip

Link to comment

Hello,

 

I wanted to post an update with my experiences so far with unRAID inside ESXi. I had it working nicely on my single ESXi box with vmDirectPath, but then went on to learn more and more about ESXi and its power features. This led me to want to create a SAN on a separate box, so I purchased a new motherboard for the purpose of running unRAID and Openfiler inside their own ESXi and then having my other ESXi store its datastores on the SAN. The config below works very nicely; drive temps, spin-downs, etc. are all passed through as though unRAID were booted natively.

 

Mobo Intel S3210SHLX

http://www.newegg.com/Product/Product.aspx?Item=N82E16813121345&Tpk=S3210SHLX

 

unRAID card - Supermicro AOC-SAT2-MV8

http://www.newegg.com/Product/Product.aspx?Item=N82E16815121009&Tpk=AOC-SAT2-MV8

 

OpenFiler card - LSI MegaRAID SAS 9260

http://www.newegg.com/Product/Product.aspx?Item=N82E16816118105&cm_re=LSI_MEGARAID_SAS_9260-8I-_-16-118-105-_-Product

 

For Openfiler, ESXi sees this card in RAID mode (RAID 10), and I then made an RDM of the array, which Openfiler sees.

 

For unRAID to work, you need vmDirectPath turned on, which this mobo supports. The mobo is also on the VMware HCL. It has 2 x PCI-X slots, so if you grow to 16+ unRAID disks it will accommodate two AOC-SAT2-MV8 cards.
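
If anyone is unsure whether their board/CPU combo really exposes VT-d, one quick sanity check is to boot any Linux live CD on the bare hardware first (a sketch, nothing unRAID-specific):

    # look for Intel VT-d (DMAR) / AMD IOMMU detection messages
    dmesg | grep -i -e DMAR -e IOMMU

Under ESXi itself the eligible devices should show up in the vSphere Client under Configuration > Advanced Settings; if that list is empty, vmDirectPath isn't going to work on that hardware.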

 

Jon

 

 

 

Link to comment

Why not make a VM of unRAID running on a full Slackware distro?

 

 

I think the issue with that is the drives aren't exposed to unRAID inside the VM.

 

In ESXi, drives can be exposed to a VM either via physical RDM or via controller passthrough (which requires the relevant supporting hardware with certain CPU features, etc.). In the first case not all features may be available, such as temps and spin-down.
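
For the RDM route, the mapping file is created on the ESXi console with vmkfstools; a minimal sketch (the datastore path and disk identifier below are only examples - substitute your own):

    # list the physical disks ESXi can see
    ls -l /vmfs/devices/disks/

    # create a physical-compatibility RDM pointer for one of them (-z = physical, -r = virtual)
    vmkfstools -z /vmfs/devices/disks/t10.ATA_____EXAMPLE_DISK_ID /vmfs/volumes/datastore1/unraid/disk1-rdm.vmdk

The resulting disk1-rdm.vmdk is then attached to the unRAID VM like any other virtual disk; with -z the guest gets near-raw access, which is why temps and spin-down depend so much on the virtual controller in front of it.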

 

As far as a VM of unRAID running on a full Slackware distro goes - my preference is to keep the unRAID VM as small as possible, have it do just the storage piece, and leave other functionality to other VMs running on better-suited OS distros such as Ubuntu, CentOS, Windows and so on.

 

Link to comment

Hello,

 

I'm currently running my unRAID server under ESXi 4.1 (hardware: see sig.) using VMDirectPath for all unRAID drives - 6 via the onboard SATA and the other 6 via the BR10i; the ESXi drive is connected to the Promise SATA300 TX4 card.

 

Using the official unRAID version, the configuration works apart from the device IDs, temps and spin-down.

 

Using the first bzimage/bzroot files from SK, I received a lot of error messages when files were written to the drives connected to the BR10i (the cache drive is attached to the onboard SATA). When the files are moved to the array, the "errors" counter starts to rise. Device IDs and temps work.

 

Using the second bzimage/bzroot files from SK, all the drives attached to the BR10i came up as unformatted, plus I'm still getting errors and no temps.

 

Both times I copied the bzimage/bzroot files directly to the USB drive.

 

Syslog is attached. If any further information is required, just ask - and thanks for the great forum.

 

The "device not ready" errors are the same issue jamerson9 experienced; they're caused by the BR10i (based on the LSI 1068E chip) not correctly supporting the part of the T10 SAT-2 standard that covers drive spin-down. As you noticed, only devices managed by the BR10i (sdh to sdo) have those errors; the ones on internal SATA do not. I wonder if the LSI1068E chip supports spin-down correctly on any card at all - so far I have not seen such evidence.
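
If anyone wants to confirm it really is the spin-down path, a quick way to poke at it from the unRAID console (a sketch; sdh is just the example device from the syslog):

    # force one BR10i-attached drive into standby
    hdparm -y /dev/sdh

    # then access the array and watch for the controller complaining on wake-up
    tail -f /var/log/syslog | grep -i "not ready"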

 

 

 

 

Link to comment

Why not make a VM of unRAID running on a full Slackware distro?

 

 

I think the issue with that is the drives aren't exposed to unRAID inside the VM.

As far as a VM of unRAID running on a full Slackware distro goes - my preference is to keep the unRAID VM as small as possible, have it do just the storage piece, and leave other functionality to other VMs running on better-suited OS distros such as Ubuntu, CentOS, Windows and so on.

 

 

Good Point.  :o

 

Link to comment

Update for ESXi users (who use physical RDM disks), with VMware Tools installed (which doubles the distro size) and some minor improvements, available at http://www.mediafire.com/?2710vppr8ne43

 

Plus, for owners of cards such as the IBM BR10i based on the LSI1068E chip, a version with extended spin-down disabled (to avoid "device not ready" errors) is available at http://www.mediafire.com/?zeajy4mmk8j868k

 

Cheers for this, I'll install it tonight and report back :)

 

OK, it's installed. So far there have been no errors when files are written to the array; what's missing are the drive temps for the drives connected to the BR10i - they were present in the first version.
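
One way to check whether the temps are readable at all through that path, independent of the web GUI (a sketch, assuming smartctl is available on the box):

    # ask the drive directly via SAT passthrough; adjust the device name
    smartctl -d sat -A /dev/sdh | grep -i temperature

If that returns nothing or errors out, the controller/driver isn't passing the ATA commands through, and it's unlikely any unRAID build could show temps for those drives.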

Link to comment

Update for ESXi users (who use physical RDM disks), with VMware Tools installed (which doubles the distro size) and some minor improvements, available at http://www.mediafire.com/?2710vppr8ne43

 

Plus, for owners of cards such as the IBM BR10i based on the LSI1068E chip, a version with extended spin-down disabled (to avoid "device not ready" errors) is available at http://www.mediafire.com/?zeajy4mmk8j868k

 

Cheers for this, I'll install it tonight and report back :)

 

OK, it's installed. So far there have been no errors when files are written to the array; what's missing are the drive temps for the drives connected to the BR10i - they were present in the first version.

Are you sure about the drive temperatures? They have been missing on all the versions I have tested on my BR10i.

Link to comment

Update for ESXi users (who use physical RDM disks), with VMware Tools installed (which doubles the distro size) and some minor improvements, available at http://www.mediafire.com/?2710vppr8ne43

 

Plus, for owners of cards such as the IBM BR10i based on the LSI1068E chip, a version with extended spin-down disabled (to avoid "device not ready" errors) is available at http://www.mediafire.com/?zeajy4mmk8j868k

 

I still need to get a decent NIC, but I'll give it a go.  Now for a 4.6 version :D.

 

Link to comment

Update for ESXi users (who use physical RDM disks), with VMware Tools installed (which doubles the distro size) and some minor improvements, available at http://www.mediafire.com/?2710vppr8ne43

 

Plus, for owners of cards such as the IBM BR10i based on the LSI1068E chip, a version with extended spin-down disabled (to avoid "device not ready" errors) is available at http://www.mediafire.com/?zeajy4mmk8j868k

 

Cheers for this, I'll install it tonight and report back :)

 

OK, it's installed. So far there have been no errors when files are written to the array; what's missing are the drive temps for the drives connected to the BR10i - they were present in the first version.

Are you sure about the drive temperatures? They have been missing on all the versions I have tested on my BR10i.

 

Pretty positive the temps were there but I'll switch back to the first version tonight just to make sure.

 

Well, I've just checked the first version and I take everything back - no temps there, nothing to see, my mistake :)

 

Thanks once again to SK

Link to comment

Update for ESXi users (who use physical RDM disks), with VMware Tools installed (which doubles the distro size) and some minor improvements, available at http://www.mediafire.com/?2710vppr8ne43

 

Plus, for owners of cards such as the IBM BR10i based on the LSI1068E chip, a version with extended spin-down disabled (to avoid "device not ready" errors) is available at http://www.mediafire.com/?zeajy4mmk8j868k

 

I downloaded the second version as I have an LSI1068E-based controller, but this one removes support for "mvsas" (the AOC-SASLP-MV8 cards), so it's a no-go for me at this point.

 

I have 6 HDs on the motherboard, 3 HDs on an AOC-SASLP-MV8 card, one Dell-branded LSI1068E card with a single HD (not in the array, for testing), and an additional RAIDCore BC4852 PCI/PCI-X controller just for testing.

 

 

Link to comment

I downloaded the second version as I have an LSI1068E-based controller, but this one removes support for "mvsas" (the AOC-SASLP-MV8 cards), so it's a no-go for me at this point.

 

I have 6 HDs on the motherboard, 3 HDs on an AOC-SASLP-MV8 card, one Dell-branded LSI1068E card with a single HD (not in the array, for testing), and an additional RAIDCore BC4852 PCI/PCI-X controller just for testing.

 

...the AOC-SASLP-MV8 is useless in an ESXi environment (no native support in ESXi, and vmdirectpath doesn't work with it).

Since that image is tuned for ESXi, it's only natural that those features/modules get removed.

 

..my 2 cents.

Link to comment

I downloaded the second version as I have an LSI1068E-based controller, but this one removes support for "mvsas" (the AOC-SASLP-MV8 cards), so it's a no-go for me at this point.

 

I have 6 HDs on the motherboard, 3 HDs on an AOC-SASLP-MV8 card, one Dell-branded LSI1068E card with a single HD (not in the array, for testing), and an additional RAIDCore BC4852 PCI/PCI-X controller just for testing.

 

...the AOC-SASLP-MV8 is useless in an ESXi environment (no native support in ESXi, and vmdirectpath doesn't work with it).

Since that image is tuned for ESXi, it's only natural that those features/modules get removed.

 

..my 2 cents.

 

I believe the 2nd version was intended to be run outside ESXi, since it looks to have been built to specifically address problems with running the older supplied version directly on the hardware.

 

Link to comment

Update for ESXi users (who use physical RDM disks), with VMware Tools installed (which doubles the distro size) and some minor improvements, available at http://www.mediafire.com/?2710vppr8ne43

 

Plus, for owners of cards such as the IBM BR10i based on the LSI1068E chip, a version with extended spin-down disabled (to avoid "device not ready" errors) is available at http://www.mediafire.com/?zeajy4mmk8j868k

 

I downloaded the second version as I have an LSI1068E-based controller, but this one removes support for "mvsas" (the AOC-SASLP-MV8 cards), so it's a no-go for me at this point.

 

I have 6 HDs on the motherboard, 3 HDs on an AOC-SASLP-MV8 card, one Dell-branded LSI1068E card with a single HD (not in the array, for testing), and an additional RAIDCore BC4852 PCI/PCI-X controller just for testing.

 

Both versions are the same except for the unRAID driver; the second ISO disables the extended spin-down code for the BR10i's LSI1068E, which does not conform to the T10 standard.

 

As far as mvsas goes, the only relevant change was the upgrade of the Fusion MPT driver from 3.04.15 (Linux kernel stock) to the latest 4.24.00.00 (from the LSI site), which compiles and works under my ESXi configuration without issues (when using the virtualized LSI SAS controller with SATA physical RDM disks).
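
For anyone comparing the two images, the driver actually in use is easy to check from the console (a sketch):

    # version of the Fusion MPT SAS module on the running system
    modinfo mptsas | grep -i ^version

    # what the driver reported when it loaded
    dmesg | grep -i "fusion mpt"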

 

Having a look at the logs would help; if this is an issue with the updated Fusion driver, we can certainly go back to the previous one.

Given that the 4.6 stable release is out, I need to update to it anyway.

 

 

 

Link to comment

I downloaded the second version as I have an LSI1068E-based controller, but this one removes support for "mvsas" (the AOC-SASLP-MV8 cards), so it's a no-go for me at this point.

 

I have 6 HDs on the motherboard, 3 HDs on an AOC-SASLP-MV8 card, one Dell-branded LSI1068E card with a single HD (not in the array, for testing), and an additional RAIDCore BC4852 PCI/PCI-X controller just for testing.

 

...the AOC-SASLP-MV8 is useless in an ESXi environment (no native support in ESXi, and vmdirectpath doesn't work with it).

Since that image is tuned for ESXi, it's only natural that those features/modules get removed.

 

..my 2 cents.

 

I believe the 2nd version was intended to be run outside ESXi, since it looks to have been built to specifically address problems with running the older supplied version directly on the hardware.

 

 

Either outside of ESXi, or under ESXi with VMDirectPath I/O for the LSI1068E controller.

Link to comment

I downloaded the second version as I have an LSI1068E-based controller, but this one removes support for "mvsas" (the AOC-SASLP-MV8 cards), so it's a no-go for me at this point.

 

I have 6 HDs on the motherboard, 3 HDs on an AOC-SASLP-MV8 card, one Dell-branded LSI1068E card with a single HD (not in the array, for testing), and an additional RAIDCore BC4852 PCI/PCI-X controller just for testing.

 

...the AOC-SASLP-MV8 is useless in an ESXi environment (no native support in ESXi, and vmdirectpath doesn't work with it).

Since that image is tuned for ESXi, it's only natural that those features/modules get removed.

 

..my 2 cents.

 

I believe the 2nd version was intended to be run outside ESXi, since it looks to have been built to specifically address problems with running the older supplied version directly on the hardware.

 

 

Either outside of ESXi, or under ESXi with VMDirectPath I/O for the LSI1068E controller.

 

Hmmm...yes, mea culpa.

Too many paths to follow....

Link to comment

Got a good Intel NIC, and so far I've been running fine with the last (v1.0.4) supplied files (the non-VM, or direct-PCI-connected, ISO). I'm currently running it native (not under ESXi), and it's happily transferring files much faster than the onboard POS NIC.

 

When I get a chance, I plan on giving it a go with ESX, but probably won't get to it until next week.

 

**EDIT**  I'm up to about 1.7 TB total transferred, without any issues. Looks like the new NIC fixed my problems... I might try the onboard NIC when I set up my firewall, for the red interface, and see if that works with passthrough.

 

**EDIT 2**  Got 4 TB of data moved over, then set it up under ESXi, and it booted just fine. I did have to re-configure the drives list, but once I did, it came back up fine and the data was all there. After that I added the parity drive to the configuration, and now it's getting about 58 MB/sec on the parity generation. Not bad compared to my old system... That's with the controller on passthrough.

 

 

 

 

 

Link to comment

Hi folks,

 

The content of this thread has always tempted me to actually try something...

Now, this weekend I took the chance ... Here is what I can share:

 

- installed ESXi on an SM X8SIL-F with an L3426 Xeon and 8GB of RAM, running off an SSD as boot + datastore

- added an LSI 9240-8i (actually an IBM ServeRAID M1015, but they are identical) and three SATA drives as JBOD for testing.

 

The LSI is natively supported under ESXi, so the "ESXi - RDM - unRAID" scenario should work as desired.

Therefore I started by attempting to use vmdirectpath with the LSI controller.

The LSI should work with most modern distros that provide the "megaraid_sas" module (e.g. CentOS 5.5, RHEL6, FC14, Ubuntu 10.10).

...yes, I know unRAID does not support that controller (yet?:-)

 

Activating vmdirectpath and assigning the controller to a VM was not a problem.

However, I could not get any of the Linux distros to boot inside a VM.

Using the distro on bare metal, without ESXi, the LSI and the drives are properly seen.

 

So my first assumption was that the LSI is not working with vmdirectpath.

But finally, as a last resort, I tried ESXi and a Win7_HomePremium_64 inside a VM.

Guess what? ...Win7 could see the controller, and after installing the drivers from the LSI website I was able to use the drives: I formatted them under Win7 and copied over some 100 GB from my external unRAID just fine.

I could also restart the VM, and the LSI and drives showed up every time... I redid the copy tests fine.

 

I shut down the Win7 VM, installed a Debian Lenny inside a VM, and baked a new kernel with the latest drivers (v4.31) from LSI.

The megaraid_sas module would not load automatically upon reboot of the VM.

Loading the module manually worked without error; however, the drives were not seen.
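
For what it's worth, the obvious things to check at that point are generic (a sketch, nothing ESXi- or unRAID-specific):

    # is the passed-through controller visible on the guest's PCI bus?
    lspci -nn | grep -i lsi

    # did the driver bind to it, and does the SCSI layer see any disks?
    dmesg | tail -n 30
    cat /proc/scsi/scsi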

After shutting down the Linux VM and bringing up Win7 again, the controller was gone inside Win7 ;-(

After a reboot of the complete host, Win7 could see the LSI again.

Maybe this is an issue with the Linux driver initializing the controller the wrong way?

How can I further debug and dig into this?

Any suggestions welcome.

 

regards,

      Ford

 

Link to comment

Got a good Intel NIC, and so far I've been running fine with the last (v1.0.4) supplied files (the non-VM, or direct-PCI-connected, ISO). I'm currently running it native (not under ESXi), and it's happily transferring files much faster than the onboard POS NIC.

 

When I get a chance, I plan on giving it a go with ESX, but probably won't get to it until next week.

 

**EDIT**  I'm up to about 1.7 TB total transferred, without any issues. Looks like the new NIC fixed my problems... I might try the onboard NIC when I set up my firewall, for the red interface, and see if that works with passthrough.

 

**EDIT 2**  Got 4 TB of data moved over, then set it up under ESXi, and it booted just fine. I did have to re-configure the drives list, but once I did, it came back up fine and the data was all there. After that I added the parity drive to the configuration, and now it's getting about 58 MB/sec on the parity generation. Not bad compared to my old system... That's with the controller on passthrough.

 

Can you post your syslog for native mode (not under ESXi) and your hardware config, if it's not too much trouble?  ;)

Link to comment
