BetaQuasi

ESXi 5.x - pre-built VMDK for unRAID

461 posts in this topic


@pjneder, to elaborate further on what WeeboTech said:

 

- VT-d is a feature of the memory controller.  On pre-Nehalem (Nehalem was first introduced with the first-gen i7 series) systems, this required the chipset to support it as the chipset contained the memory controller.  On post-Nehalem systems, the CPU is required to support it, as Intel moved the memory controller onto the CPU.

 

- The Q6600 does not support VT-d as it is pre-Nehalem, but if coupled with a VT-d-capable chipset, you can do controller passthrough.

 

So in your case, the question will be whether your board supports it or not - I'd update it to the latest BIOS from Asus (Beta or otherwise) and see if you have the option.  Note Intel Virtualisation Technology (VT-x) and VT-d are two separate things.  You'll often find the former without the latter, which will prevent you from using passthrough. 
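As a quick sanity check of the VT-x half from a Linux shell on the box in question (a rough sketch only -- VT-d detection via dmesg depends on the BIOS setting and kernel, so treat a negative result as inconclusive):

```shell
# VT-x shows up as the "vmx" CPU flag; VT-d (the IOMMU) leaves
# DMAR/IOMMU messages in the kernel log once enabled in the BIOS.
if grep -q vmx /proc/cpuinfo; then
  echo "VT-x: supported"
else
  echo "VT-x: not found"
fi
dmesg 2>/dev/null | grep -iE "DMAR|IOMMU" || echo "no IOMMU messages (VT-d off or unsupported)"
```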


@pjneder, to elaborate further on what WeeboTech said:

 

- VT-d is a feature of the memory controller.  On pre-Nehalem (Nehalem was first introduced with the first-gen i7 series) systems, this required the chipset to support it as the chipset contained the memory controller.  On post-Nehalem systems, the CPU is required to support it, as Intel moved the memory controller onto the CPU.

 

- The Q6600 does not support VT-d as it is pre-Nehalem, but if coupled with a VT-d-capable chipset, you can do controller passthrough.

 

So in your case, the question will be whether your board supports it or not - I'd update it to the latest BIOS from Asus (Beta or otherwise) and see if you have the option.  Note Intel Virtualisation Technology (VT-x) and VT-d are two separate things.  You'll often find the former without the latter, which will prevent you from using passthrough.

 

I'm all cleared up now. A bit sad because I didn't do all of the research I needed to do last fall, but I know what the deal is now.

 

For clarification, the old P5Q and Q6600 were purely test rig parts. I'm trying to lower my machine count and lower the operating costs, which is why I don't want 24x7 spinning disks. I'm not worried about the disks failing nearly as much as the power/heat/noise component.

 

I have some pieces in place, but not the whole build for what I want to do. The board I got for my router, with thoughts of eventually going virtualized, will work fine. It is a DQ77KB mini-ITX. VT-d will work fine on that chipset: http://www.intel.com/support/motherboards/desktop/sb/CS-030922.htm?wapkw=%28vt-d%29

 

Of course I screwed up when I put the i3-3225 on there as it does NOT support VT-d. Sigh... :'(

That board with a SASLP-MV8 would be great for the virtualization needs I imagine. However, this is now turning into an expensive project. I think I'll keep the boxes separate for a while longer as I contemplate what I want to do.

 

Thanks for all the great info on this thread!


Sell the i3 on eBay and buy another used processor on eBay. I've been doing this for years.

 

I thought I was going to lower my machine count after Sandy.

Turns out I ended up just buying more laptops until I did all the research for buying new towers.

At least with ESX I will be minimizing my box count somewhat, since I can boot Slackware, CentOS, RHEL, Solaris, XP and other dev environments on demand without using separate boxes.

 

I can't say how much VMware Workstation to ESX deployment saves me. I got a massive laptop with 32GB of RAM, dual 512GB SSDs, a 256GB mSATA drive and a 17" monitor for building.. then deploy it to ESX. Really nice.  BetaQuasi's assistance with this VMDK saved me a lot of work. Thanks B!!

 


Has anyone tested putting an ESXi server into sleep mode?

 

I was thinking that if I only run unRAID as a VM, then when unRAID goes into sleep mode (via the S3 script) it could send a sleep command to the ESXi server. Later I'd send a WOL command to the ESXi server, and when that server is up it would automatically wake unRAID. Is this possible?

 

 

Maybe we should have our own section for ESXi questions in this forum ;)

 

//Peter


Has anyone tested putting an ESXi server into sleep mode?

 

I was thinking that if I only run unRAID as a VM, then when unRAID goes into sleep mode (via the S3 script) it could send a sleep command to the ESXi server. Later I'd send a WOL command to the ESXi server, and when that server is up it would automatically wake unRAID. Is this possible?

 

Highly unlikely.  unRAID may go into suspend or sleep mode, but it will not tell ESX to do the same. ESX is designed to be running and available all the time. Although it may be possible, I've not seen anything to support ESX going into low power sleep/suspend mode.

 

Maybe we should have our own section for ESXi questions in this forum ;)

//Peter

 

We asked Tom for this, we'll see how it goes.


Lots of good info here, except that it's not what I want to hear... :o

 

I really want to consolidate my pfSense router, unraid, and then maybe VM a few other handy things. I guess maybe I didn't plan well enough when I was spec'ing my router box. It is based on an Intel DQ77 board with an i3. Way overkill for pfSense, but I was thinking about future needs.

 

I'm going to look more into the cards you mentioned. Actually, the IBM card has an x8 connector and the Supermicro only needs x4.

 

Does everyone else using ESXi really tolerate their drives spinning 24/7? Is there no other way?

 

Thanks!

 

I would do a little more research to see if your board supports VT-d and if ESX will handle the passthrough. I don't know enough to guide you, but I'm not sure the i3 will support it either.

 

What we know works is the popular Supermicro X9SCM boards and the Sandy Bridge Xeon CPUs, along with the IBM M1015 controller. I believe there is a hack for the Supermicro controller. Again, I'm not advanced on the topic. I would suggest reading JohnM's Atlas thread.

 

 

Supermicro works, and it's not really a "hack", it's just a specific piece of configuration you need to do for a lot of passthrough devices..

 


Lots of good info here, except that it's not what I want to hear... :o

 

I really want to consolidate my pfSense router, unraid, and then maybe VM a few other handy things. I guess maybe I didn't plan well enough when I was spec'ing my router box. It is based on an Intel DQ77 board with an i3. Way overkill for pfSense, but I was thinking about future needs.

 

I'm going to look more into the cards you mentioned. Actually, the IBM card has an x8 connector and the Supermicro only needs x4.

 

Does everyone else using ESXi really tolerate their drives spinning 24/7? Is there no other way?

 

Thanks!

 

I would do a little more research to see if your board supports VT-d and if ESX will handle the passthrough. I don't know enough to guide you, but I'm not sure the i3 will support it either.

 

What we know works is the popular Supermicro X9SCM boards and the Sandy Bridge Xeon CPUs, along with the IBM M1015 controller. I believe there is a hack for the Supermicro controller. Again, I'm not advanced on the topic. I would suggest reading JohnM's Atlas thread.

 

 

Supermicro works, and it's not really a "hack", it's just a specific piece of configuration you need to do for a lot of passthrough devices..

 

Is there a source of this information for 'other' devices?

 

Link for the SASLP-MV8, http://lime-technology.com/forum/index.php?topic=7914.msg128847#msg128847

however I'm interested in where to obtain information on other controllers that may support pass through with this sort of information.


Not that I know of. I googled around and found it on several other forums, even in threads totally unrelated to unRAID or the MV8s... easiest thing to do is google for pciPassthru0.msiEnabled = "FALSE", I guess.

 

I remember reading it is some kind of switch telling ESXi how to do the passthrough..
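For reference, that setting goes into the guest's .vmx file. A sketch (the 0 index is illustrative -- pciPassthru0 refers to the first passthrough device, and each passthrough device gets its own pciPassthruN prefix):

```
# In the VM's .vmx file: disable MSI interrupts for the first
# passthrough device, which is what reportedly makes the MV8 behave.
pciPassthru0.msiEnabled = "FALSE"
```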


Lots of good info here, except that it's not what I want to hear... :o

 

I really want to consolidate my pfSense router, unraid, and then maybe VM a few other handy things. I guess maybe I didn't plan well enough when I was spec'ing my router box. It is based on an Intel DQ77 board with an i3. Way overkill for pfSense, but I was thinking about future needs.

 

I'm going to look more into the cards you mentioned. Actually, the IBM card has an x8 connector and the Supermicro only needs x4.

 

Does everyone else using ESXi really tolerate their drives spinning 24/7? Is there no other way?

 

Thanks!

 

I would do a little more research to see if your board supports VT-d and if ESX will handle the passthrough. I don't know enough to guide you, but I'm not sure the i3 will support it either.

 

What we know works is the popular Supermicro X9SCM boards and the Sandy Bridge Xeon CPUs, along with the IBM M1015 controller. I believe there is a hack for the Supermicro controller. Again, I'm not advanced on the topic. I would suggest reading JohnM's Atlas thread.

 

 

Supermicro works, and it's not really a "hack", it's just a specific piece of configuration you need to do for a lot of passthrough devices..

 

Is there a source of this information for 'other' devices?

 

Link for the SASLP-MV8, http://lime-technology.com/forum/index.php?topic=7914.msg128847#msg128847

however I'm interested in where to obtain information on other controllers that may support pass through with this sort of information.

I've not had to do the hack for anything but the MV8, if I remember correctly. I've got AVerMedia Duet tuners, Hauppauge HVR-2250 tuners, USB 2.0 cards (PCI and PCIe), HighPoint 1742 & 622 HDD controllers and IBM M1015 HDD controllers all on passthrough. Only the MV8 has ultimately required the hack to the vmx files for the VMs.

I have put the HDD controllers in the "passthru.map" file if ESXi doesn't recognize them. All the information to put into the "passthru.map" file - namely the VENDOR ID and the DEVICE ID - can be obtained in the vSphere Client for ESXi if you don't want to use "lspci" from the command line (like me). I always set "fptShareable" to false so that the device is only used by one VM and not by any other VM or ESXi itself. A better explanation is here: http://www.vmware.com/pdf/vsp_4_vmdirectpath_host.pdf
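For anyone wanting a concrete sketch of what a "passthru.map" entry looks like (the device ID below is illustrative -- substitute the VENDOR ID and DEVICE ID for your own controller, from the vSphere Client or lspci):

```
# /etc/vmware/passthru.map
# vendor-id  device-id  resetMethod  fptShareable
11ab         6480       d3d0         false
```

11ab is Marvell's PCI vendor ID; verify the device ID against your particular card revision before relying on it.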


Definitely no way to put ESXi to sleep - it was never designed with this in mind, and I would imagine that, even if you could, putting a hypervisor to sleep would have some serious repercussions when you wake it up.

 

 


Has anyone tested putting an ESXi server into sleep mode?

 

I was thinking that if I only run unRAID as a VM, then when unRAID goes into sleep mode (via the S3 script) it could send a sleep command to the ESXi server. Later I'd send a WOL command to the ESXi server, and when that server is up it would automatically wake unRAID. Is this possible?

 

 

Maybe we should have our own section for ESXi questions in this forum ;)

 

//Peter

 

If the only VM you're going to run is unraid then what is the point of running ESXi to begin with? Just run it on bare metal and it'll go to sleep fine.


I must admit that the whole concept of snapshots is an absolute dream from a management point of view, so even with only one VM there is a benefit.. Although I now have 3 (plain unRAID, a mediabeast for all the NNTP/torrent stuff, and a test unRAID I use for preclears and such).


Definitely no way to put ESXi to sleep - it was never designed with this in mind, and I would imagine that, even if you could, putting a hypervisor to sleep would have some serious repercussions when you wake it up.

 

There is a standby mode built into it. If you are running VirtualCenter there is an option to manually put a host into standby and exit standby via either WOL or iLO.  They tie it into the DPM (Dynamic Power Management) feature in clusters to shut down and boot up hosts on demand to match resource demands and save on power.

 

You could probably find some method to trigger it through the remote CLI since I would assume they built it into the CLI API.  You would also need to suspend/shut down the guests manually by script since this is normally a task VirtualCenter does before the host standby (the function being to VMotion all guests off).

 

I've never used it so have no idea how similar it is to normal sleep mode.
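As a rough sketch of the guest-suspension half (run from the ESXi host's local shell; vim-cmd is present on ESXi, though whether host standby itself can then be triggered without VirtualCenter is the open question above):

```shell
# Suspend every powered-on VM before attempting host standby.
# "getallvms" prints a header row, so we skip line 1.
for vmid in $(vim-cmd vmsvc/getallvms | awk 'NR>1 {print $1}'); do
  state=$(vim-cmd vmsvc/power.getstate "$vmid" | tail -1)
  if [ "$state" = "Powered on" ]; then
    vim-cmd vmsvc/power.suspend "$vmid"
  fi
done
```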


Definitely no way to put ESXi to sleep - it was never designed with this in mind, and I would imagine that, even if you could, putting a hypervisor to sleep would have some serious repercussions when you wake it up.

 

There is a standby mode built into it. If you are running VirtualCenter there is an option to manually put a host into standby and exit standby via either WOL or iLO.  They tie it into the DPM (Dynamic Power Management) feature in clusters to shut down and boot up hosts on demand to match resource demands and save on power.

 

You could probably find some method to trigger it through the remote CLI since I would assume they built it into the CLI API.  You would also need to suspend/shut down the guests manually by script since this is normally a task VirtualCenter does before the host standby (the function being to VMotion all guests off).

 

I've never used it so have no idea how similar it is to normal sleep mode.

 

This is good and interesting news. If anyone finds out how to do it via cli, please post!


I must admit that the whole concept of snapshots is an absolute dream from a management point of view, so even with only one VM there is a benefit.. Although I now have 3 (plain unRAID, a mediabeast for all the NNTP/torrent stuff, and a test unRAID I use for preclears and such).

 

Indeed it is.  Just be sure not to forget about them; the snapshot locks the original disk and then starts creating a delta disk for each snapshot.  If it is a server with a high rate of change, these deltas can grow to be quite large, and if you run out of space on the volume ESXi will pause all disk IO until you free up some space (most guest OSes will eventually crash if left for any length of time).  The same thing happens if you over-provision many thin provisioned disks and they use up all the free space.  Having a 1-2GB 'dummy' file on your host volumes that you can delete in emergencies is very useful, as all operations other than a hard power off require free disk space to operate.
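On the 'dummy' file tip, pre-allocating such a ballast file is a one-liner. A sketch (on an ESXi host the target would be a datastore path like /vmfs/volumes/datastore1; /tmp and the 100MB size are used here only so the example runs anywhere -- in practice use the 1-2GB mentioned above):

```shell
# Pre-allocate a deletable ballast file; removing it later frees space
# instantly when snapshot deltas or thin disks fill the volume.
TARGET="/tmp/emergency-ballast.img"
dd if=/dev/zero of="$TARGET" bs=1M count=100 2>/dev/null
ls -lh "$TARGET"
```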

 

So, keep an eye on free disk space, and if you have a disk on a guest that doesn't need snapshots, make it an independent/persistent disk so that you don't create snapshots of it in the first place.

 

One last tip: when committing large snapshots they will sometimes fail, or report success but not get rid of the delta vmdk file, which also will no longer show up in Snapshot Manager.  What you've got is an orphaned snapshot.  If this happens, create another snapshot and then do a "delete all"; it will go through and commit all delta disks even if they are not showing up as snapshots in the manager.
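That recovery can also be done from the ESXi host shell with vim-cmd, for what it's worth (the VM id 42 and snapshot name below are placeholders; list your ids with "vim-cmd vmsvc/getallvms"):

```
# Create a throwaway snapshot, then commit ALL delta disks --
# including orphaned ones that no longer appear in Snapshot Manager.
vim-cmd vmsvc/snapshot.create 42 "cleanup" "temporary snapshot" 0 0
vim-cmd vmsvc/snapshot.removeall 42
```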


Hi Brian,

 

Thanks for the heads up - requires some further investigation I guess.  Not sure how far we'll get, considering your statement saying 'Vmotion guests to off', as that obviously requires a production license.

 

 

 


Quote from another user:

 

In the end the best solution would be to create and start / stop those machines using the vSphere CLI - which I already tested.

The only problem with the free license key for ESXi is that you cannot start / stop them anymore remotely from the command line and so we need a full license if we decide to go with a direct CLI remote start / stop:

"Fault detail: RestrictedVersionFault"
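For context, the remote start/stop being described would look roughly like this with the vSphere CLI's vmware-cmd (hostname, credentials and vmx path below are placeholders; on a free license these operations return the RestrictedVersionFault quoted above):

```
vmware-cmd -H esxi-host.local -U root -P 'secret' \
  "/vmfs/volumes/datastore1/unRAID/unRAID.vmx" start

vmware-cmd -H esxi-host.local -U root -P 'secret' \
  "/vmfs/volumes/datastore1/unRAID/unRAID.vmx" stop soft
```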


Here's a question for you guys. For anyone having built a VM unraid, have you migrated disks from one working array into the VM on passthrough interfaces?

 

I've got several steps I will need to accomplish and I'm trying to sort out which way I proceed. These are not in order yet:

 

- Make an unRAID 5.0 based VM
- Migrate from AMD-based hardware w/ 6 disks, plus license, to a VM on different hardware with a passthrough controller
- Upgrade the parity disk and at least 1 storage disk to 3TB WD Reds
- Re-structure my shares to remove the split-level stuff I was doing, which has become a real PITA

 

From a previous post, I ordered a used i5-3470S which has VT-d. On my DQ77KB it will work fine. I will probably add a SASLP-MV8, but I'm not sure if that comes in phase 1 or phase 2.

 

Hints?

 

Thanks.


Quote from another user:

 

In the end the best solution would be to create and start / stop those machines using the vSphere CLI - which I already tested.

The only problem with the free license key for ESXi is that you cannot start / stop them anymore remotely from the command line and so we need a full license if we decide to go with a direct CLI remote start / stop:

"Fault detail: RestrictedVersionFault"

 

Figures.  They dangle that damn carrot.


Here's a question for you guys. For anyone having built a VM unraid, have you migrated disks from one working array into the VM on passthrough interfaces?

 

I've got several steps I will need to accomplish and I'm trying to sort out which way I proceed. These are not in order yet:

 

- Make an unRAID 5.0 based VM
- Migrate from AMD-based hardware w/ 6 disks, plus license, to a VM on different hardware with a passthrough controller
- Upgrade the parity disk and at least 1 storage disk to 3TB WD Reds
- Re-structure my shares to remove the split-level stuff I was doing, which has become a real PITA

 

From a previous post, I ordered a used i5-3470S which has VT-d. On my DQ77KB it will work fine. I will probably add a SASLP-MV8, but I'm not sure if that comes in phase 1 or phase 2.

 

Hints?

 

Thanks.

 

Step 2 should be step 1. Don't create the VM until you have the hardware in place that actually works with it. The rest can go in any order you want.


You can't get spindown with RDM'ed drives - you need to go with the controller passthrough option, something like a M1015 or Supermicro SASLP-MV8. 

 

Beta, have you or anyone else seen the SASLP-MV8 working on ESXi? Some googling around seems to indicate that it is not well supported on ESXi 5.1. I noticed that your Orion build was originally going to have the MV8.

 

For me, I need something that is PCIe x4. The mobo I have and want to use does not have an x8 slot.

 

Thanks.


Plenty of folks on this forum use the MV8 - restrict searching to the forum and you should find some hits on 5.1 and the MV8.

 

There is a change you need to make to a text file somewhere, but that's about it afaik.


You can't get spindown with RDM'ed drives - you need to go with the controller passthrough option, something like a M1015 or Supermicro SASLP-MV8. 

 

Beta, have you or anyone else seen the SASLP-MV8 working on ESXi? Some googling around seems to indicate that it is not well supported on ESXi 5.1. I noticed that your Orion build was originally going to have the MV8.

 

For me, I need something that is PCIe x4. The mobo I have and want to use does not have an x8 slot.

 

Thanks.

 

Check out my build. Works like a charm and was a breeze to set up..

