UnRAID on VMWare ESXi with Raw Device Mapping



Well, my cables still didn't arrive today, but I got impatient so I went to Fry's and managed to find a suitable cable (along with a pair of low-profile USB sticks to use for my UnRaid license).

 

I've got the drive attached and it is recognized by the controller at power-on.....firmware is version 3.1.0.15N

 

I'm still able to boot into ESXi just fine without any PSOD.  For reference, I'm running ESXi off of a USB stick and I had the controller already configured for pass-through (which should basically "hide" it from the ESX OS) before I connected any drives (Ford Prefect: I remember from your earlier post that you had PSOD problems after connecting a drive, did you already have the controller configured for passthrough at that point?)

 

I have UnRaid up and running, but on this initial pass I'm not seeing the new drive as attached. I'm not sure whether there is anything else I need to do beyond configuring the controller card for passthrough.

 

Edit:

I checked for the spindown latency and I was definitely able to notice it....thanks for the tip.

 

Looking at /sbin/lspci -v, I see:

13:00.0 SCSI storage controller: Marvell Technology Group Ltd. MV64460/64461/64462 System Controller, Revision B (rev 01)
       Subsystem: Super Micro Computer Inc Unknown device 0500
       Flags: fast devsel, IRQ 11
       I/O ports at 6000 [size=128]
       Memory at d9e10000 (64-bit, non-prefetchable) [size=64K]
       Capabilities: [48] Power Management version 2
       Capabilities: [50] Message Signalled Interrupts: Mask- 64bit+ Queue=0/0 Enable-
       Capabilities: [e0] Express Endpoint, MSI 00

 

So it looks like it is finding the controller, but I'm still not seeing the drive.

I (currently) only have the unlicensed version, although I'm assuming I should have my license in the next day or two.

I thought that would only prevent me from adding additional drives to the array, not from simply detecting drives...but I'm not sure.
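
For what it's worth, the license should not affect whether the kernel detects drives at all; a couple of quick checks from the unRAID console can confirm whether Linux sees anything behind the passed-through controller (a sketch -- the mvsas driver name is my assumption based on the Marvell controller shown above):

       /sbin/lspci -nn | grep -i marvell              # -nn shows raw vendor:device IDs, easier to match against driver lists than the description string
       dmesg | grep -i -e mvsas -e "attached scsi"    # did a driver bind to the card and enumerate any disks?
       cat /proc/partitions                           # any sd* entries besides the USB flash drive?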

 

I attached my syslog if anyone wants to take a look.

syslog-2011-01-31.zip


I'm still able to boot into ESXi just fine without any PSOD.  For reference, I'm running ESXi off of a USB stick and I had the controller already configured for pass-through (which should basically "hide" it from the ESX OS) before I connected any drives (Ford Prefect: I remember from your earlier post that you had PSOD problems after connecting a drive, did you already have the controller configured for passthrough at that point?)

 

...ESXi does not have a problem with that card; it simply doesn't support it and therefore ignores it.

You can still assign the physical device under ESXi for vmdirectpath.

For me, ESXi always booted OK; it was the VM that had the problems.

User nojstevens reported PSOD problems; in my case the VM would either not boot at all or the drives would not show up.

I also tried other distros, e.g. Ubuntu 10.10 and Fedora 14 (a simple thing to do since they offer Live-CD ISOs), with the same results.


Good Morning!  I've been following this thread for some time now and thought I'd post my current status and way ahead.

 

Intent: Get unRaid, WHS & Win 7 running well on ESXi.

 

Current Equipment: AMD Athlon II X2 250 Regor 3.0GHz, Asus M4A78-E, 4GB RAM, WHS, (2) 1TB Western Digital Caviar Black Drives, (3) 1TB HITACHI Deskstar 7200 RPM Drives, Sil3132 eSATA Card connected to Rosewill RSV-S4-X External Enclosure with (4) 2TB Western Digital EARS Drives (properly pre-cleared & aligned).

 

Current Status: unRaid is set up and working fine when running directly on the server -- NOTE: this is without going through ESXi.  The Sil3132 BIOS only sees the first 2TB drive, but once unRaid has booted it sees all 4 connected drives.  I've loaded ESXi onto an unused HD; I can boot into ESXi fine, load up vSphere on my desktop, properly configure VMs, etc.  Using the instructions in this post, I have created a vmdk of my unRaid install, and I have also used the boot-from-CD-ROM image to load the directly connected USB stick; both work.  HOWEVER, once in unRaid, I cannot get it to see the Sil3132 or the attached drives.

 

I created a new ESXi install image using a custom OEM.tgz that has support for the Sil3132, and now ESXi sees the Sil3132 card, but only the first 2TB hard drive is recognized, just like in the Sil3132 BIOS.  So at this point I could only set up the first drive with RDM, which does not jibe with how unRaid is configured without ESXi.
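
A quick way to double-check what ESXi itself can see behind the Sil3132 is from the ESXi console or an unsupported-mode SSH session (a sketch; these esxcfg commands exist on ESXi 4.x, though the output format may vary):

       esxcfg-scsidevs -a    # list the storage adapters the host has registered (the Sil3132 should appear here)
       esxcfg-scsidevs -c    # compact list of every disk/LUN ESXi can actually see

If only the first 2TB drive shows up there, the limitation is most likely in the ESXi driver from the OEM.tgz rather than anything RDM- or unRaid-related.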

 

Solution: Get one of the new AMD motherboards that support IOMMU (some have been mentioned here already); a good list of confirmed working IOMMU motherboards can be found here: http://forums.amd.com/forum/messageview.cfm?catid=383&threadid=134410 (6th post down).  I have also confirmed separately on the VM-Help.com forums (http://www.vm-help.com/forum/viewtopic.php?f=22&t=2758) that a motherboard supporting IOMMU will simply pass the Sil3132 card through; if it's passed through to the unRaid VM and unRaid natively recognizes it and the attached drives, I should be good to go.
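
If you want to confirm IOMMU is actually working before committing to the unRaid VM, one option is to boot any recent Linux live CD on the new board with IOMMU enabled in the BIOS and grep the kernel log (a sketch; the exact message text varies by kernel version):

       dmesg | grep -i -e "AMD-Vi" -e iommu    # look for lines like "AMD-Vi: Enabling IOMMU at ..."

In ESXi itself, passthrough-capable devices show up under Configuration > Advanced Settings in the vSphere Client once the host supports DirectPath I/O.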

 

Here & Now: So before I pull the trigger on purchasing a new motherboard, I wanted to bounce this approach off the community.  Am I missing anything?  Is the MB purchase necessary?  If I can get ESXi to see the Sil3132 & the first HD, am I overlooking something that will allow it to see the other connected drives in the enclosure?

 

Thank you in advance for any comments/suggestions and I have to echo the previous comment, this is a great thread in a great forum!

 

- Cha


This post describes an IOMMU-capable board -- a Biostar TA890FXE with a six-core AMD Phenom II Black and a 3ware 9690SA -- used in an ESXi build in a Norco 4220.  The board looks interesting with 4 x16 PCIe slots.  Sure would be nice if the 3ware adapters were cheaper.

Here is the board on newegg.

 

Maybe a new strategy is in order.  Instead of using an 8-way adapter, maybe get one of the new 890FX boards and use 2-port adapters to fill out the drives.  Most of the 890FX ATX boards come with six 6Gb/s SATA ports + two 3Gb/s ports (one Gigabyte board has an additional 2 eSATA ports) and 6 PCIe slots.  So 8 + 12 SATA ports seems like the way to go.  Six Sil3132 cards is only about $80.  With a $200 890FX board, that price might be in line with a more commonly used six-port board + two 8-port adapters.
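
Spelling that tally out with the prices mentioned above (rough estimates, not quotes):

       onboard:  six 6Gb/s + two 3Gb/s ports        =  8 SATA ports
       add-in:   6 PCIe slots x 2 ports per Sil3132 = 12 ports, roughly 6 x $13 = $80 in cards
       total:    20 ports for roughly $280 including a $200 890FX board, about $14 per port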

 

Edit:  Here is a review of 5 890FX boards with a nice summary table.


Interesting idea, but I suspect a lot of people interested in ESXi would rather keep some slots available for passing through TV tuners, video cards, etc. than use 6 slots just for SATA controllers.  Also, I think there is a limit of two devices passed through to each VM?


Interesting idea, but I suspect a lot of people interested in ESXi would rather keep some slots available for passing through TV tuners, video cards, etc. than use 6 slots just for SATA controllers.  Also, I think there is a limit of two devices passed through to each VM?

 

I don't know about limits, but I am passing through the 2 on-board controllers (SATA and IDE) plus 2 PCIe HBAs to my unRAID VM. Asus M4A89TD Pro.


I guess everybody has a specific set of features they want to use.  I'm just thinking that the Sil3132 chipsets are already supported by ESXi and unRAID.  There are probably others as well.  It's easier to use those than to try to get new ones supported, especially in ESXi.  Wouldn't the TV tuner be a PCI card?  OK, well then leave one PCIe slot for a graphics card and have 5 with SATA controllers.  8 + 10 is still a pretty good number of SATA ports.  Maybe they can all be passed in to the unRAID VM.

 

Am I wrong in thinking that the only 8-port SATA cards supported by ESXi that can be passed into a VM are really expensive ones, i.e. not the SASLP-MV8?


The BR10i's can be snagged for like $50 (I got 2 for $45 each) on eBay depending on when you're looking. They're not "fully" supported by unRaid yet but are getting very very close especially with the new driver included in the latest beta. They are 8 port cards.


Strange, I've read in many places that the limit is two (with the exception of PCI devices on the same bus, which count as one device).  Including somewhere on this forum, I'm sure...

 

I found something elsewhere today describing a limit of 6 devices.  Maybe it was increased in very recent versions of ESXi?

 

I believe it was increased in 4.1. There is a bit of confusion about the exact limit. I can tell you for sure it supports 4. I have to work through an interrupt sharing issue (it kills network performance under certain conditions), so I may try to pass through a 5th device (a PCI NIC) tonight.
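
For anyone chasing a similar interrupt-sharing problem, the quickest check from inside a Linux guest such as unRAID is (a sketch; device names will differ):

       cat /proc/interrupts    # devices listed on the same IRQ line are sharing that interrupt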


The BR10i's can be snagged for like $50 (I got 2 for $45 each) on eBay depending on when you're looking. They're not "fully" supported by unRaid yet but are getting very very close especially with the new driver included in the latest beta. They are 8 port cards.

 

Alright, this is great.  I'll be reading about these.  Here's a quick link just to be helpful.  http://www.servethehome.com/ibm-serveraid-br10i-lsi-sas3082e-r-pciexpress-sas-raid-controller/


The BR10i's can be snagged for like $50 (I got 2 for $45 each) on eBay depending on when you're looking. They're not "fully" supported by unRaid yet but are getting very very close especially with the new driver included in the latest beta. They are 8 port cards.

 

Alright, this is great.  I'll be reading about these.  Here's a quick link just to be helpful.  http://www.servethehome.com/ibm-serveraid-br10i-lsi-sas3082e-r-pciexpress-sas-raid-controller/

 

Here's an existing thread on them in the Controller forum: http://lime-technology.com/forum/index.php?topic=7451.0

 


The BR10i's can be snagged for like $50 (I got 2 for $45 each) on eBay depending on when you're looking. They're not "fully" supported by unRaid yet but are getting very very close especially with the new driver included in the latest beta. They are 8 port cards.

 

Alright, this is great.  I'll be reading about these.  Here's a quick link just to be helpful.  http://www.servethehome.com/ibm-serveraid-br10i-lsi-sas3082e-r-pciexpress-sas-raid-controller/

 

Here's an existing thread on them in the Controller forum: http://lime-technology.com/forum/index.php?topic=7451.0

 

 

Up to this point I'd not really been terribly interested in unRAID 5 but now I want it badly.  I just ordered a BR10i adapter. 


If the BR10i's will be nearly fully supported once 5.0 comes out of beta that would be great.  I will probably attempt to use one of these cards along with VMWare ESXi on my Gigabyte board.

 

I have been doing some reading on the subject and it looks like I will have to jump through some hoops to get it working but I figure if I am going to go through the process of redoing my server in a couple of months I might as well do it "the correct way" (meaning way over the top).

  • 2 weeks later...

I've finally got my hardware assembled. SuperMicro C2SBX, E8400, 4GB, 3x2TB, 1x60GB SSD, in a Define XL case. I've got VMware ESXi 4.1 installed on the SSD. Created an unRAID VM with a 1MB disk, just to install PLOP so I can boot from the USB stick. That works. But I can't create RDM mappings for my 3x2TB disks in VMware. The RDM option is greyed out. I've googled and apparently that happens when there is no unassigned/unformatted LUNs available. I'm not really sure what that means. I can see the three disks in Configuration > Storage > Devices. VMware initially formatted two of them as datastores (the third was already pre-cleared, so I think it left that one alone). I deleted the datastores, but I still can't create the RDMs. Can anyone who is using ESXi give me a tip on what I need to do to get VMware to see my disks as unassigned and available to create RDMs?

 

Cheers.
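
For reference, when the vSphere Client greys out the RDM option for local SATA disks, the mapping file can usually still be created by hand with vmkfstools from the ESXi console or an SSH session; a sketch (the disk identifier and datastore path below are placeholders -- use the ones listed under /vmfs/devices/disks/ and /vmfs/volumes/ on your host):

       ls /vmfs/devices/disks/
       vmkfstools -z /vmfs/devices/disks/t10.ATA_____<disk_model>_____<serial> /vmfs/volumes/datastore1/unraid/disk1-rdm.vmdk

The -z flag creates a physical-compatibility RDM (-r would create a virtual-compatibility one); the resulting disk1-rdm.vmdk is then added to the unRAID VM as an existing disk.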

 


That worked. But I couldn't pre-clear the disks when running under ESXi: it gave me an error from smartctl about the drive not being an ATA device, or something like that. If I don't preclear, will I get the same end result when starting the array (it will just take longer and the array will be offline), or is the preclear script more exhaustive and worth doing anyway to catch any errors on the drive? I can boot from the USB stick and preclear the disks, but then I won't be able to use VMware for anything else on the machine for 30 hours or so.

 


That worked. But I couldn't pre-clear the disks when running under ESXi: it gave me an error from smartctl about the drive not being an ATA device, or something like that. If I don't preclear, will I get the same end result when starting the array (it will just take longer and the array will be offline), or is the preclear script more exhaustive and worth doing anyway to catch any errors on the drive? I can boot from the USB stick and preclear the disks, but then I won't be able to use VMware for anything else on the machine for 30 hours or so.

 

 

Have a look at SKs patched 4.6 ISO:

 

http://lime-technology.com/forum/index.php?topic=7914.msg91181#msg91181

 

That might provide you with a bit more compatibility with unRAID.


That worked. But I couldn't pre-clear the disks when running under ESXi: it gave me an error from smartctl about the drive not being an ATA device, or something like that.

The preclear_disk.sh script now has a -D option to eliminate the "-d ata" fed to smartctl, and a "-d type" option to feed the "type" to smartctl as an alternative to "-d ata".

 

Joe L.
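
In practice those options would look something like this (a sketch; /dev/sdX is a placeholder for the drive being cleared, and "sat" is just one example of a smartctl device type):

       preclear_disk.sh -D /dev/sdX        # drop the "-d ata" argument to smartctl entirely
       preclear_disk.sh -d sat /dev/sdX    # or feed an explicit device type to smartctl instead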
