ESXi 5.x - pre-built VMDK for unRAID



Sounds like the virtual BIOS is taking a while to POST... no idea why that would be.  Here's a screen recording of my VM booting from the 5.0rc12a .vmdk:

 

http://goo.gl/NV4Ws

 

I wonder if it's something to do with the multiple different types of adapters you have passed through, or maybe that 'unknown' adapter that I can see.  That's the only obvious difference between our VMs.

 


The "unknown" is my ASMedia SATA controller; there are two on this motherboard.

 

And your boot time is quick  :D compared to mine.

 

//Peter

 

A couple of things:

 

1. VMFS 5 is formatted with a 1 MB block size; this is normal.

2. Decrease your CPU count to 2 or 3.  Your sig lists a 4-core CPU, so you are likely to incur high CPU wait times just from the hypervisor process running, let alone any other VM.

3. Remove your floppy and CD-ROM drives unless you have a good reason to use them.  Disable them in the BIOS too.

4. Try removing your APC USB device.

 

It sounds vaguely like a resource scheduling issue, so try the CPU decrease first.  As a general rule, a VM guest will ONLY run faster with more CPUs if it actually needs them to run; in all other cases, adding more CPUs than are needed will slow it down.  VMware's official stance is even to use only 1 CPU if at all possible.
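If you'd rather make that change by hand than through the vSphere Client, the vCPU count is a one-line setting in the VM's .vmx file (edit with the VM powered off; values here are illustrative):

```
numvcpus = "2"
cpuid.coresPerSocket = "1"
```

After booting, esxtop's %RDY column for the VM is a quick way to see whether the guest is still waiting on the scheduler.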


OK, I am still stuck on not being able to add the pre-built .vmdk file.

 

I've got ESXi 5.1 running on my Asus P5Q Deluxe mobo. I put the .vmdk files in my "datastore1" in the VM-unraid directory after creating the VM using the Typical template and Ubuntu 32-bit.

 

Attached are snips showing the file attributes in the datastore, and showing me trying to add the .vmdk and it not showing up in the chooser.

 

Pulling my hair out over what should be pretty straightforward....

 

Thanks!

[Attachment: snip1.JPG]

[Attachment: snip2.JPG]


What controller did you choose? I think I remember fiddling around to pick a controller type.

 

To answer your question: the default, which is LSI Logic SCSI.

 

UPDATE! I completely deleted my datastore, then reformatted and recreated it. This time it was VMFS-5. (I didn't realize there were different versions until I started down the clean-and-clear path.)

 

So I started with a 100% clean slate, and this time it found the unRAID.vmdk file (which is 1 KB). I'm not sure what the 1 GB "flat" file is even for....

 

Another tidbit, which is probably documented a zillion times, is that you need the network adapter to be E1000, as the Flexible option results in no eth0 interface.
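For anyone hitting the same thing: the adapter type is a setting in the VM's .vmx file, so you can check it there too. A minimal sketch (the network name is whatever your host actually uses):

```
ethernet0.present = "TRUE"
ethernet0.virtualDev = "e1000"
ethernet0.networkName = "VM Network"
```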

 

Now I will attempt to map the two rotational HDDs into the VM. They exist on the same ICH10 SATA controller as the SSD datastore. I hope that works OK.

 

Thanks!


I followed the instructions in here http://lime-technology.com/forum/index.php?topic=7914.0 to map disks directly to the unRAID VM. It seems to be working. However, the SMART status is not coming through and I'm getting blinking green balls, so it doesn't even see them as spun up.

 

What is the preferred method of mapping disks to UnRAID on ESXi 5.1?

 

Thanks.

 

I've used a command like this.

vmkfstools -z /vmfs/devices/disk/vml.xxxx diskname.vmdk -a lsilogic

 

I ended up changing it to pvscsi; in my tests it was faster, but only if you have the hardware to support it.
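To spell that RDM step out a bit, here's a dry-run sketch; the device ID and datastore path are placeholders you'd replace with your own (real device IDs are listed under /vmfs/devices/disks/ on the host):

```shell
#!/bin/sh
# Sketch of creating a physical-mode RDM pointer file on ESXi.
# DEVICE and RDM below are placeholders, not real values.
DEVICE="/vmfs/devices/disks/vml.xxxx"
RDM="/vmfs/volumes/datastore1/unraid/disk1-rdm.vmdk"
# -z maps the raw disk in physical compatibility mode;
# -a picks the virtual adapter type presented to the guest.
CMD="vmkfstools -z $DEVICE $RDM -a lsilogic"
echo "$CMD"   # echoed instead of executed, since vml.xxxx is a placeholder
```

On a real host you would run the command itself, then attach the resulting .vmdk to the VM like any other virtual disk.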

 

 

Blinking green balls and no spin down. Yes, that's my case too.

Actually, they are probably spinning, but the status cannot be determined, so they cannot be spun down correctly either.

You should be able to get SMART status, though.


That is the same as I used. I changed the controller in ESXi to be pvscsi, but I did not re-issue the vmkfstools commands. I get no SMART info and no spin control. Somehow I gotta fix that, or the whole virtualized unRAID deal flies out the window. I'm sure I will get there. More tips are greatly appreciated!

 

Thanks!


You can't get spindown with RDM'ed drives - you need to go with the controller passthrough option, something like an M1015 or Supermicro SASLP-MV8.  I wouldn't be surprised if you can't get SMART either, as with RDM you are essentially putting an extra layer between the VM and your drives.

 

Also, while the E1000 adapter certainly works, the VMXNET3 adapter is even better, as it allows (theoretically) 10 Gbps traffic internally between VMs.

 

Lastly, the '-flat' file is the actual virtual hard disk itself; the 1 KB file is just a configuration (descriptor) file for it.  You need both, though you'll only ever see the 1 KB file in the datastore when you go looking (as that's what VMware needs to look at to mount the virtual hard disk).
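To make that concrete, the small file is a plain-text descriptor that points at the -flat extent. A minimal sketch for a 1 GB disk (values illustrative):

```
# Disk DescriptorFile
version=1
CID=fffffffe
parentCID=ffffffff
createType="vmfs"

# Extent description: 2097152 sectors x 512 bytes = 1 GB, stored in the -flat file
RW 2097152 VMFS "unRAID-flat.vmdk"

# The Disk Data Base
ddb.adapterType = "lsilogic"
ddb.virtualHWVersion = "8"
```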

I wouldn't be surprised if you can't get SMART either, as with RDM, you are essentially putting an extra layer between the VM and your drives.

With pvscsi and the LSI SAS controller you can get smartctl information, although it won't spin down.

 

I'm going to try a test where I set the spindown time on the drive with hdparm -S241 /dev/sd? on bare-metal unRAID.

Then boot into VMware and see if it does actually spin down.

 

Since I cannot tell directly, I can try to see if there is a delay when I access the drive.

From what I saw, ESX probes the drives every now and then, so they may never spin down.
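As a side note on that -S241 value: hdparm encodes the standby timeout oddly, so it's worth sanity-checking what you're actually setting. A small sketch of the encoding, per the hdparm man page (1-240 means value * 5 seconds; 241-251 means (value - 240) * 30 minutes):

```shell
#!/bin/sh
# Decode an hdparm -S standby value into a human-readable timeout.
s=241
if [ "$s" -le 240 ]; then
  secs=$(( s * 5 ))
else
  secs=$(( (s - 240) * 30 * 60 ))
fi
echo "-S$s = $(( secs / 60 )) minutes"
```

So -S241 is a 30-minute spindown timeout.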


Yeah, a .vmdk is probed for status fairly regularly, so I think you will see the same occur for an RDM drive and it not being spun down, unfortunately.  I think there was a discussion about this a while back.. buggers me if I can find it though!

 

I bet it works for pass through.

 

Although I wonder how it would work with an Areca controller.

They have a BIOS setting to spin down the drives during idle time.

I know unRAID could not spin down my RAID0/RAID1 array, but the controller would do it for me.


Lots of good info here, except that it's not what I want to hear... :o

 

I really want to consolidate my pfSense router, unraid, and then maybe VM a few other handy things. I guess maybe I didn't plan well enough when I was spec'ing my router box. It is based on an Intel DQ77 board with an i3. Way overkill for pfSense, but I was thinking about future needs.

 

I'm going to look more into the cards you mentioned. Actually, the IBM card has an x8 connector and the Supermicro only needs x4.

 

Does everyone else using ESXi really tolerate their drives spinning 24/7? Is there no other way?

 

Thanks!



I would do a little more research to see if your board supports VT-d and if ESX will handle the passthrough. I don't know enough to guide you, but I'm not sure the i3 will support it either.

 

What we know works is the popular Supermicro X9SCM boards and a Sandy Bridge Xeon CPU, along with the IBM M1015 controller. I believe there is a hack for the Supermicro controller. Again, I'm not advanced on the topic; I would suggest reading JohnM's Atlas thread.

 

As far as 24x7 drive spinning goes, I will be tolerating it for my small ESX micro servers. At least one of them is an ESX host, a download file server, and my main repository for driver/ISO downloads and my home folder, along with my music.

 

My other workstation used to have four 10,000 RPM drives spinning 24x7x365. Those drives lasted 6 years, until Hurricane Sandy. I'm sure they would have lasted longer.

 

For my current workstations, I went with laptops and swapped almost all of the drives out for 256 GB SSDs. The only ones I did not swap out were my DJ and download laptops, which need massive storage.

 

A few drives spinning all the time is not going to kill them; you just have to manage heat and monitor the SMART stats.


I know I'm derailing my own thread, but I just set up OpenIndiana/napp-it in 30 minutes.  The guy's PDF instructions were spot-on, and so far I really like the look of it.

 

I am going to back up the contents of my ZFS datastore overnight (there's a little under a TB to do), then kill my FreeNAS install and migrate the disks to OI/napp-it, I think.

