ESXi 5.x - pre-built VMDK for unRAID



You can't get spindown with RDM'ed drives - you need to go with the controller passthrough option, something like an M1015 or a Supermicro SASLP-MV8.

 

Beta, have you or anyone else seen the SASLP-MV8 working on ESXi? Some googling around seems to indicate that it is not well supported on ESXi 5.1. I noticed that your Orion build was originally going to have the MV8.

 

For me, I need something that is PCIe x4. The mobo I have and want to use does not have an x8 slot.

 

Thanks.

 

I'm using an MV8 with my ESXi setup now. Make sure you follow the steps outlined here to get it to pass through properly:

 

http://lime-technology.com/forum/index.php?topic=7914.msg128847#msg128847
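For the archives, in case that link ever dies: the usual trick for cards that refuse to pass through is an extra entry in /etc/vmware/passthru.map on the ESXi host, followed by a reboot. This is only a sketch of the file's format - the vendor/device IDs below are placeholders you would look up yourself with "lspci -n" or "esxcli hardware pci list" on the host:

```
# /etc/vmware/passthru.map -- one line per quirky device, then reboot the host
# columns: vendor-id  device-id  reset-method  fptShareable
# (placeholder IDs below -- substitute the MV8's actual IDs from "lspci -n")
<vendor-id>  <device-id>  d3d0  false
```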


Also using the MV8s (two of them). Rock-stable since the initial setup a few weeks back.

 

Thanks mrow and helmonder for the votes of confidence. I have one on the way from Newegg along with the cables. Sometime next week I'll have my VT-d-capable i5 CPU, the card, cables and new drives. I'll be able to start benching and testing then! Quite excited to get this going.

 

Thanks!


I've got everything working (I think) except I can't figure out how to map the USB key. I have my ESXi USB key and the unRAID key both plugged into the ESXi box. The computer boots to the ESXi key. How exactly do I map the unRAID key now?

 

You just add it as a USB device in your VM.

 

Go to the settings screen of your VM and click "Add...".

 

Add a USB Controller, then add a USB Device.

 

See this post: http://lime-technology.com/forum/index.php?topic=14695.msg138465#msg138465

Scroll down to "VM#3 unRAID VMDirectPath Hardware Passthrough"
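A related trick, in case the key ever fails to reattach after a host reboot: you can pin the USB device to the VM in its .vmx file. This is a sketch - the 0xVVVV:0xPPPP pair is a placeholder for the flash drive's vendor:product ID, which you can read from the vSphere client's device list:

```
usb.present = "TRUE"
usb.autoConnect.device0 = "0xVVVV:0xPPPP"
```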


 

Thanks for the reply. I went through the steps, but under Configuration > Advanced Settings it says "Host does not support passthrough configuration." I'm using a Dell Precision 690 with an integrated Broadcom 5752 Gigabit Ethernet controller. I'm assuming that doesn't support passthrough? If I check the settings of the unRAID VM, the network adapter says it's Inactive. Its info bubble says "The state of the attached network prevents DirectPath I/O."

 

Is this just a limitation of the built-in NIC?


 

You don't have to pass through the NIC; you can just add a virtual NIC that's provided by ESXi.

It's normal for it to say Inactive under DirectPath I/O if you have a virtual NIC.

 

I don't know if the Precision 690 supports hardware passthrough. If you go into the BIOS, do you have VT-d or some kind of virtualization setting?

If it doesn't support it, you're going to have to use RDM for all your HDDs, which is not too convenient.
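If it does come to that, RDM mapping files are created from the ESXi console with vmkfstools and then attached to the VM as existing disks. A sketch with placeholder paths (-z makes a physical-compatibility RDM; -r would make a virtual-compatibility one):

```
# List the raw devices first
ls /vmfs/devices/disks/

# Create a physical-compatibility RDM pointer file for one disk
# (device and datastore names below are placeholders)
vmkfstools -z /vmfs/devices/disks/<device-id> /vmfs/volumes/<datastore>/unRAID/disk1-rdm.vmdk
```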


Gotcha. Thanks. Last question. Is it possible to take an existing unRAID system and move it in under ESXi? If so, is there a link you know of with the info? Thanks for all the help.

 

Yes, it's possible, since your array configuration is actually stored on the flash drive and you use that flash drive to boot the VM.

That link should be a good guide on how to do this.

 

However, without being able to do hardware passthrough, like I said before, you have to do RDM for all the HDDs you have; there is no way around it.

I am not a big fan of RDM personally, so I wouldn't go with ESXi if my hardware didn't support hardware passthrough.



 

Can someone comment on the performance gain (or not) between the M1015 and SASLP-MV8 controllers? The M1015 uses PCIe x8 and a faster chip, which allows for SATA III ratings. However, with 7200rpm HDDs, are you going to see a big difference?

 

Thanks!



 

I can comment on this. I recently switched from 1 x MV8 to 2 x M1015.

 

First of all, HDDs can't even reach the max speed of SATA II (3 Gb/s), so SATA III vs SATA II doesn't make much difference.

 

And there is almost no performance gain when copying stuff to or from unRAID with either the M1015 or the MV8.

 

HOWEVER, you see the improvement when it comes to a parity check or data rebuild.

 

The M1015 uses PCIe x8, which is able to do about 1.6 GB/s in a single direction.

The MV8 uses x4, which is able to do only about 800 MB/s.

 

So let's say you have 8 drives plugged into one of these cards, and let's say your HDDs are able to read 110 MB/s max (this is what I saw for most drives during the pre-read phase of preclear).

 

If you were to use the MV8, your speed would be limited to 800 MB/s or less; it can never actually reach that high, that's just the theoretical speed.

With the SimpleFeatures stats plugin, you can actually see the transfer speed of all drives during a data rebuild / parity check.

I was getting about 500 MB/s total there - again, this is from the SimpleFeatures stats page. On the index page, I was seeing about 80 MB/s for the parity check.

 

When I switched over to the M1015, I was getting about 700 - 800 MB/s total,

and about 100 - 120 MB/s during a parity check. (I also removed the green drives from my array, which increased my speed as well.)

 

So with the MV8 a parity check took 10 - 11 hours; with the M1015 it took about 7 - 8 hours or less. That is a big improvement in my opinion.

 

 

So, to sum it up:

In regular usage there is no difference, because you're accessing 1 or 2 drives at a time, so you don't hit the PCIe limit.

However, when you're doing a parity check or data rebuild that uses all 8 drives at the same time, you see the improvement.
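To put numbers on that, here's the back-of-the-envelope math, using the figures assumed in this post (~800 MB/s usable on the MV8's x4 link, ~110 MB/s sustained per drive, 8 drives active). The script is just illustrative arithmetic:

```shell
# Per-drive ceiling when 8 drives hammer one controller link at once.
# Figures are the ones assumed in the post above.
link_mbps=800   # usable bandwidth of the MV8's PCIe x4 link, in MB/s
drives=8        # drives active during a parity check
per_drive=$((link_mbps / drives))
echo "per-drive ceiling: ${per_drive} MB/s"   # prints: per-drive ceiling: 100 MB/s
```

100 MB/s is below the ~110 MB/s the drives themselves can sustain, so on the MV8 the link, not the disks, is the parity-check bottleneck - which matches the speeds reported above.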



 

Looking at a sampling of processors that were available in the Dell Precision 690 (from listings on eBay), those CPUs do not support VT-d, which is required for DirectPath I/O in ESXi. Your only option then would be to RDM all your disks, which can be a major headache. IMO you're better off just running unRAID on bare metal and not trying to virtualize with your current system.


 

I just realized that after looking it up further. Shame, because this sucker has dual Xeons in it. I'm trying to get the number of systems I have down to a minimum here. I also have an Optiplex 755 with a Core2 Quad in it, which I think supports VT-d. I'll probably just keep the unRAID box separate and use the 690 as my ESXi box. I'd be too afraid of trying to move my existing unRAID array to ESXi anyway.

 

Thanks everyone for the help. This may be cheesy to say, but this forum is seriously amazing. Always someone there to help and offer advice. You don't find that too often.



 

The free hypervisor only supports one physical CPU anyway so the other one would have been doing nothing except eating up electricity.

 

Which Core2Quad do you have?

 

 

EDIT: It looks like the Optiplex 755 was offered with these four Core2 Quads: the Q9650, Q9550, Q9400 and the Q6700. The first three support VT-d; the last one doesn't.


 

This is the output I get. Does this support VT-d?

 

root@Tank01:~# cat /proc/cpuinfo
processor       : 0
vendor_id       : GenuineIntel
cpu family      : 6
model           : 15
model name      : Intel(R) Core2 Quad CPU Q6600 @ 2.40GHz
stepping        : 11
microcode       : 0xba
cpu MHz         : 2400.003
cache size      : 4096 KB
physical id     : 0
siblings        : 4
core id         : 0
cpu cores       : 4
apicid          : 0
initial apicid  : 0
fdiv_bug        : no
hlt_bug         : no
f00f_bug        : no
coma_bug        : no
fpu             : yes
fpu_exception   : yes
cpuid level     : 10
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe nx lm constant_tsc arch_perfmon pebs bts aperfmperf pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm lahf_lm dtherm tpr_shadow vnmi flexpriority
bogomips        : 4788.19
clflush size    : 64
cache_alignment : 64
address sizes   : 36 bits physical, 48 bits virtual
power management:
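For what it's worth, the flags line above can only answer half the question: the vmx flag means the CPU has VT-x, but /proc/cpuinfo never reports VT-d at all; that has to be checked against Intel's spec pages or the BIOS. Per Intel's listings, the Q6600 has VT-x but not VT-d, so it won't do DirectPath I/O. A quick way to test for the flag (the flags string below is abbreviated from the output above):

```shell
# Look for "vmx" (VT-x) in a cpuinfo flags line.
# flags is abbreviated from the Q6600 output above.
flags="monitor ds_cpl vmx est tm2 ssse3 cx16"
case " $flags " in
  *" vmx "*) vtx=yes ;;
  *)         vtx=no  ;;
esac
echo "VT-x (vmx flag): $vtx"   # prints: VT-x (vmx flag): yes
```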

  • 2 weeks later...

I am not getting the option to assign a VMXNET3 NIC to my unRAID VM. Furthermore, although not directly unRAID-related, if I assign VMXNET3 NICs to my pfSense VM they do not show up.

 

For reference the NICs on this mobo are both Intel.

82574L and 82579LM

 

When I look in the vSwitch properties under the Network Adapters tab, I see that the driver is listed as e1000e. Curious if anyone has thoughts on what I need to fix.

 

Thanks!


...for the 82579LM to be recognized in ESXi, you will need to install extra, non-standard drivers.

 

Check.

 

...for a VMXNET3 device attached to a VM to show up inside the VM, the VMware Tools need to be installed in that VM.

 

Check.

 

Still no joy.
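If it helps anyone else debugging this, the host's view of the physical NICs and their drivers can be dumped from the ESXi 5.x console; both of these are standard commands (vmnic names will vary per host):

```
# List physical NICs with driver, link state and speed
esxcli network nic list

# Older-style equivalent
esxcfg-nics -l
```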

