UnRAID on VMWare ESXi with Raw Device Mapping



On ESXi 4.1 you can pass through the AOC-SASLP-MV8 card and unRAID sees it natively - no need for raw device mapping. This also lets temperature readings and spin-downs work.

 

Jon

 

Are you sure? If so, this is fantastic news! I didn't go this route because I couldn't use the AOC-SASLP-MV8 or spin down drives.

 

Can you also pass through the controller on your motherboard?


Yes, you can pass through the motherboard's controller - but there are some caveats with this and other devices. It appears you pass through the entire bus that the PCI device (or other device) is on. In my example, my AOC-SATA8-MV8 is on the PCI bus; if I put in another PCI card, it cannot be assigned to a different guest than the one that has the SATA card. Similarly, the onboard graphics of the mobo appear to be on the PCI bus too, so I lose my graphics. Not that you need it with ESXi.
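The "whole bus goes together" behavior can be sketched in a few lines: VT-d isolation is only as granular as the group a device lands in, and on this board everything behind the single legacy PCI bus ends up in one group. A toy illustration - the device list and PCI addresses below are hypothetical:

```python
# Sketch: why passthrough can drag neighbouring devices along.
# Devices sharing a legacy PCI bus number (the 'bb' in bb:dd.f) must all
# go to the same guest; a PCIe device sits on its own bus and is isolated.
from collections import defaultdict

def passthrough_groups(devices):
    """Group devices by PCI bus number."""
    groups = defaultdict(list)
    for addr, name in devices:
        bus = addr.split(":")[0]
        groups[bus].append(name)
    return dict(groups)

devices = [
    ("03:04.0", "AOC-SATA8-MV8 (SATA)"),  # PCI slot 1
    ("03:05.0", "FireWire controller"),   # PCI slot 2 - same bus!
    ("03:06.0", "Onboard VGA"),           # also hangs off the PCI bus
    ("01:00.0", "LSI SAS (PCIe x8)"),     # PCIe: its own bus, its own group
]

groups = passthrough_groups(devices)
# All three legacy-PCI devices share bus 03, so passing the SATA card to
# the unRAID guest takes the FireWire card and the VGA with it.
print(groups["03"])
print(groups["01"])
```

This matches the symptom described: assigning the SATA card to the unRAID guest also claims every other device on bus 03.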

 

See attached my unRAID guest settings with the SM card (it has a Marvell controller).

 

The SAS card should arrive in the next few hours; I will update if that works too.

 

Jon

[Screenshot: unRAID guest settings - Screen_shot_2010-11-02_at_10_05.51_AM.png]


 

Thanks for your fast reply!

 

I'm now exploring this possibility, and yes, I'm excited. :D Maybe now I can consolidate my machines into one ESXi box...


I've been eyeballing the X8SIA-F and X8SIL-F myself. I'd love to go i7, but the cost of that route (tri-channel memory, ugh, although prices are dropping fast), not to mention the extra electrical cost (at least with the more affordable processors), has pushed me away. One thing working against both is the introduction of the new sockets next year (already?)!


Yes, the power use is high, but I'm saving money with this mobo and ESXi. Prior to this I had 5 devices in my rack at home. Now I have 1 ESXi host running the following VMs:

 

unRAID 5 (much less power use than WHS, and none of its inefficient use of HD space)

pfSense (Firewall/Router)

Mythbuntu (DVR/TV backend)

HomeSeer (inside Windows 7 for home automation)

Milestone (inside Windows 2008r2 for security cameras)

PBXinaFlash (VOIP/Asterisk/FreePBX)

 

Not to mention it's much quieter in the basement now that I'm only running one box and its associated fans.

 

Jon


 

Are you sure this is how the pass-through works? I was under the impression that you could assign specific devices to specific VMs no matter what bus the devices were on. For example, on my system I'd like to pass 2 HVR-2250s to a Win7 VM, and possibly 1 or 2 LSI 8-port RAID cards to an unRAID VM, with all 4 of those cards being PCIe. Or does that bus-sharing limitation apply only to regular PCI devices? (As far as I know right now, I'll have none other than a PCI video card, which won't really be shared.)

 

I was originally going to go with a Slackware + unRAID + Sage Linux install, non-VM, but found out my tuner cards currently only work with digital signals, meaning I'd need at least 2 more tuner cards for the analog channels. So I went back to my idea of running everything in ESXi VMs - but if you can't assign another device on the same bus to a separate VM, that kind of defeats the purpose.

 


I'm not really sure... For example, I have a FireWire PCI card in one PCI slot and a SuperMicro SATA card in another. I cannot share the FireWire card with a different VM than the one with the SATA card. Also, the onboard VGA is on that bus, and it goes off when I start the guest that uses the SATA card. For PCI-E I don't know if the behavior is the same - but I'll know in a few hours when the UPS van arrives... he's always late when you wait for him...

 

Jon


All the extra pieces/parts I'm waiting on are due in tomorrow, so I'll get to play that waiting game :D. Worst part is, some are coming UPS, some FedEx, and one USPS, lol.

 

One HUGE benefit of doing this is getting the mess my current array is in sorted out (high-water mark allocation has caused stuff to be scattered and very fragmented). I'm just hopeful that I can dedicate the LSI SAS3081E-S 8-port SAS controller to the unRAID VM and the tuners to the Sage VM (I WISH you could dedicate more than 2 to a VM, but I'll work with what I have, lol).

 

I just assumed that I could assign those 2 PCIe tuners to the Sage VM and 2 of those 8-porters to the unRAID side, but if it blocks the other devices on the same bus, that will definitely be a HUGE problem.

 


Bad news I'm afraid - it sees it, it boots, but during the boot process it scans the drives, I get the PSOD, and have to reboot. 1 time in 10 it works and I can add disks to the array, but when I start the array it PSODs. It came with firmware .15N; I found new .21 firmware and applied it, but same issue. If I boot directly from the unRAID flash (i.e. not in a VM) then it works great...

 

I'm gutted - not sure if I should do plan B or plan C (plan B being no more ESXi, plan C being no more unRAID).

 

Jon


 

Why not get a motherboard that supports PCI-X and use more of the AOC-SATA8-MV8 cards to get the full speed out of them? I know Supermicro makes some boards that have PCI-X on them.


I have that card already - it works in ESXi on pass-through (PCI, not PCI-X), nice and fast, but if I do a parity sync it tells me it's going to take 171 days. So my array is unprotected.
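For scale, a back-of-envelope sketch of what the shared PCI bus alone would predict for a parity sync, versus the 171-day estimate. The drive size and bus bandwidth are assumed round numbers, not measured values:

```python
# Back-of-envelope: bus-limited parity-sync time vs. the reported estimate.
PCI_BUS_BYTES_PER_SEC = 133e6   # classic 32-bit/33 MHz PCI, theoretical peak
DRIVES = 8                      # all read concurrently during a parity sync
DRIVE_BYTES = 1e12              # assume 1 TB data drives

# The sync advances only as fast as each stream's share of the bus.
advance_rate = PCI_BUS_BYTES_PER_SEC / DRIVES          # bytes/sec of progress
days = DRIVE_BYTES / advance_rate / 86400
print(f"bus-limited sync: ~{days:.1f} days")           # well under a day

# Working backwards from the reported estimate:
observed_rate = DRIVE_BYTES / (171 * 86400)
print(f"171 days implies ~{observed_rate / 1e3:.0f} kB/s per drive")
```

Even a fully saturated legacy PCI bus shared eight ways should finish in under a day, so a 171-day estimate points to a pathological per-I/O overhead in the virtualized path, not simple bus contention.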

 

Next up is the AOC-USASLP-L8i - it's apparently natively supported in ESXi, so I will try both RDM and pass-through and report back.

 

Jon


 

OK, I am a little confused now. I understand that you have the Supermicro PCI-X card in a plain PCI slot, which will make it ungodly slow with 8 drives attached. But if you had a motherboard with a PCI-X slot and put the card in it, could you not pass through the PCI-X bus and get the full speed of your PCI-X card?

 

This board may fit your needs perfectly (short of requiring you to buy some more parts). It has an x4, an x8, and 4 PCI-X slots. The 4 PCI-X slots are split into 2 groups of 2, so if you want two PCI-X cards in unRAID, put one card on the 100 MHz bus and one on the 133 MHz bus. Pass through both of those buses and attach up to 16 drives. That still leaves the PCIe x4 and x8 buses for pass-through to another machine.


 

I wonder if it's the pass-through, or unRAID in a VM, that's causing the PSOD problem? The card I've got coming was specifically mentioned in the pass-through tech bulletin (LSI 3801E 8-port SAS), so I'm hoping I'll have more success with it. I thought it was due in today with the drives I ordered, but apparently I can't read a calendar, as it plainly said both parts were due on the 4th, lol. I'll try to give it a go tomorrow on the bench here, and I'll post results either way.

 


 

I'm trying that with my X7SBE and two PCI-X MV8s right now. That system is now my backup server after I finally swapped my "production" server to a more efficient G33 setup with 2 TB drives. (120 W idle with 21 drives vs. 80 W idle with 11; a smaller PSU is part of the savings.)
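Those idle figures translate into a modest but real saving; a quick sketch, with the electricity rate being an assumed figure rather than anything from the post:

```python
# Rough annual running-cost comparison for the two builds' idle draws.
OLD_IDLE_W = 120      # 21-drive X7SBE build
NEW_IDLE_W = 80       # 11-drive G33 build
PRICE_PER_KWH = 0.12  # assumed rate; adjust to your utility's tariff

hours_per_year = 24 * 365
saved_kwh = (OLD_IDLE_W - NEW_IDLE_W) * hours_per_year / 1000
print(f"~{saved_kwh:.0f} kWh/year saved, about ${saved_kwh * PRICE_PER_KWH:.0f}/year at idle")
```

Idle-only numbers understate the real difference once parity checks and drive activity are included, but they give a floor for the comparison.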

 

Now I'm going to be tempted to revert back.

 

Although truthfully, it's what I had in mind for the hardware when I bought it: a reasonably affordable, somewhat efficient, VMware-hardware-compatible server to run any and every thing.


Thank you all for your advice so far. I could get a PCI-X mobo, but I'm kinda attached to my X8ST3-F and I love the IPMI, so I'd rather get a PCI-E solution working. The PCI-X card is OLD too - its BIOS is from 2005, so I always blame it when I have issues. If the new PCI-E card I ordered yesterday doesn't work, then I will run 2 boxes for a while - I have an old Xeon PCI-X mobo that just needs a bigger PSU, and that can be my unRAID box. I really want 1 box though; I won't rest till I have it.

 

I guess I could use Raw Device Mapping, but I don't want 8 drives spinning all the time.
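For reference, the RDM fallback maps each physical disk into the guest via a per-disk .vmdk pointer created with vmkfstools on the ESXi console. A sketch of the ESXi 4.x commands - the device identifier and datastore path below are placeholders, not values from the post:

```shell
# List the local disk device identifiers
ls -l /vmfs/devices/disks/

# Create a physical-compatibility RDM pointer for one data disk on a
# datastore; repeat per disk, then attach each .vmdk to the unRAID VM.
vmkfstools -z /vmfs/devices/disks/<device-id> \
    /vmfs/volumes/<datastore>/unraid/disk1-rdm.vmdk
```

`-z` creates a physical-mode mapping (the guest sees more of the raw device; `-r` would create a virtual-mode one), but even then SMART data and spin-down control typically don't pass through cleanly - which is exactly the drawback of keeping 8 drives spinning.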

 

Looking forward to hearing your progress; I'll report mine at the weekend when I get my AOC-USASLP-L8i.

 

Jon


 

Let me know how it goes.

 

I probably won't do something like this in the near future but would love to within the next year.

 

If you get it working, would you be kind enough to write up a guide? It does not have to be overly detailed.


Unfortunately I wasn't able to get PCI device pass-through working on the X7SBE tonight. VT and VT-d are both enabled in the BIOS, so I don't know what else to do.

 

A quick search found a mailing list conversation about X7SBE VT-d problems in Xen:

after a few tests, i can confirm that VT-d on Supermicro X7SBE/X7SB4 doesn't work at all. I assume this has to do with invalid ACPI table content.

 

Searching for X7SB4 & VT-d also yields a few posts about it not working on that similar board.
