ESXi 5.0 pointers



This forum is great!  Such great help.

Beta

Did you edit the file in question from the ESXi command line, or with a Linux-compatible text editor?  It could be that the file wasn't saved in the correct format after editing.

 

I SSHed into the ESXi host and used vi to edit the file.

 

Bob

I always edit the vmx file from the vCenter client GUI to avoid any problems with a non-compatible editor when I add the "... .msiEnabled='FALSE'" setting to the VM.

 

Not sure how to do that.  Can you provide a link on how to do it?

 

Thanks for the quick response.


Bob

I always edit the vmx file from the vCenter client GUI to avoid any problems with a non-compatible editor when I add the "... .msiEnabled='FALSE'" setting to the VM.

 

Not sure how to do that.  Can you provide a link on how to do it?

Here are the instructions I posted way back in December (took quite a while to find it) on how to do it - plus the graphics from another post:

1. Go to the Options tab when editing settings for the VM.
2. Select "Advanced", sub-type "General".  You will see a Configuration Parameters button on the right-hand side of the dialog.
3. Click the Configuration Parameters button and a dialog box pops up - adjust the columns so that you can read them.
4. Find the entries for the MV8 card (if it is the first passthrough card, look for pciPassthru0 in the left column) and press F2 while one of its parameter names is selected, as if you were going to rename it, but just cut the name instead.
5. Press Escape to cancel the edit.
6. Press the Add Row button at the bottom.
7. Paste the name you cut previously into the left-hand column and change it to pciPassthru0.msiEnabled.
8. Go to the Value column (right column) and type FALSE, without quotes.
9. Press the OK button - you are finished (the resulting .vmx entry is shown right after this list).
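For reference, here is a sketch of the line the VM's .vmx file should end up with after step 9, assuming the MV8 really is the first passthrough device (pciPassthru0) as in step 4:

pciPassthru0.msiEnabled = "FALSE"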

 

How_To_Add_VM_Configuration_Parameters.zip


Just one thing, from that screenshot - it looks like you have the SAS2LP in the bottom slot? i.e. the furthest one from the motherboard connectors (LAN, USB etc).

 

If that is the case, move it to the slot closest to the connectors, as that 4th slot is a little odd in how it works.  It's also only x4, so you are bottlenecking your SAS card.

 

I'm not 100% sure on this, as the IIF presents the passthrough slots in a different manner to the F, but it's worth a check.

 

 

 


Just one thing, from that screenshot - it looks like you have the SAS2LP in the bottom slot? i.e. the furthest one from the motherboard connectors (LAN, USB etc).

 

If that is the case, move it to the slot closest to the connectors, as that 4th slot is a little odd in how it works.  It's also only x4, so you are bottlenecking your SAS card.

 

I'm not 100% sure on this, as the IIF presents the passthrough slots in a different manner to the F, but it's worth a check.

 

Beta

Yes, I have it in the bottom slot.  The ASUS P8H77-I motherboard has a single "PCIe 3.0/2.0 x16" slot.  Hopefully that is not a show stopper.  :-\


Bob

 

Success!!  I was able to transfer a 7G file to unRAID and on to the cache drive and no PSOD.  :D

 

I need to do more testing.  It used to crash when Time Machine ran on my MacBook.

 

Is this the best way to move the drives from the MB SATA ports (RDM) to the SASLP controller (pass-through)?

1. Verify that unRAID parity is good.
2. Bring down the array.
3. Shut down the unRAID guest.
4. Move the RDM files in the datastore/unraid folder to a temp folder (done via ssh - see the sketch after this list).
5. Shut down the ESXi host.
6. Disconnect the MB SATA cables from the drives.
7. Connect the SASLP cables to the drives.
8. Power up the ESXi host.
9. Power up the unRAID guest (array off).
10. Verify that unRAID sees ALL the drives.
11. Start the array.
12. Verify parity and test.
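For step 4, a minimal sketch of the move over ssh - the datastore name, folder, and RDM file names below are placeholders, so substitute whatever your datastore browser actually shows:

# placeholder paths - adjust the datastore, folder and RDM descriptor names to your setup
mkdir /vmfs/volumes/datastore1/rdm_temp
mv /vmfs/volumes/datastore1/unraid/disk1*.vmdk /vmfs/volumes/datastore1/rdm_temp/

(As the reply below points out, simply detaching the RDMs from the VM and leaving the files in place may be enough.)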

 

Thanks again for everyone's help.


Bob

 

Success!!  I was able to transfer a 7G file to unRAID and on to the cache drive and no PSOD.  :D

 

I need to do more testing.  It used to crash when Time Machine ran on my MacBook.

 

Is this the best way to move the drives from the MB SATA ports (RDM) to the SASLP controller (pass-through)?

1. Verify that unRAID parity is good.
2. Bring down the array.
3. Shut down the unRAID guest.
4. Move the RDM files in the datastore/unraid folder to a temp folder (done via ssh).
5. Shut down the ESXi host.
6. Disconnect the MB SATA cables from the drives.
7. Connect the SASLP cables to the drives.
8. Power up the ESXi host.
9. Power up the unRAID guest (array off).
10. Verify that unRAID sees ALL the drives.
11. Start the array.
12. Verify parity and test.

 

Thanks again for everyone's help.

That might work.  The only potential problem I see is MOVING the RDM files.  Not sure you even need to worry about that - as long as you disconnect the RDM files from all VMs, you could probably just leave them where they are for your steps above.  Make sure you have a record of how each one was connected - which SCSI/SAS port it was attached to, and which MB port and drive it mapped to - so that you can reattach it if necessary.

It's been a week and I haven't gotten any replies in the UCD Atlas thread, so I figured I'd cross-post this here.  See my issue at the bottom of the post.

 

So my new M1015 card came in and I set up a ZFS RAID-Z array with OpenIndiana/napp-it. I'm using four Hitachi 500GB 7200RPM laptop drives. I'm getting nice performance. I may get an SSD to add as a ZIL and see if that improves performance even further.

 

From the dd read/write benchmark in napp-it:

 

Memory size: 8192 Megabytes

write 16.777216 GB via dd, please wait...
time dd if=/dev/zero of=/pool1/dd.tst bs=2048000 count=8192

8192+0 records in
8192+0 records out

real       55.7
user        0.0
sys         3.6

16.777216 GB in 55.7s = 301.21 MB/s Write

wait 40 s
________________________________________
read 16.777216 GB via dd, please wait...
time dd if=/pool1/dd.tst of=/dev/null bs=2048000

8192+0 records in
8192+0 records out

real       59.6
user        0.0
sys         2.9

16.777216 GB in 59.6s = 281.50 MB/s Read

 

 

And over NFS from an Ubuntu VM:

 

mike@Workhorse:/nfs/test$ sudo dd if=/dev/zero of=dd.tst bs=2048000 count=2048
2048+0 records in
2048+0 records out
4194304000 bytes (4.2 GB) copied, 14.1594 s, 296 MB/s

mike@Workhorse:/nfs/test$ sudo dd if=dd.tst of=/dev/null bs=2048000
2048+0 records in
2048+0 records out
4194304000 bytes (4.2 GB) copied, 18.7457 s, 224 MB/s

 

 

 

However, my problem comes when I try to use this ZFS array as a cache drive for my unRAID array.  I added the NFS share as a datastore, changed the SCSI controller for the unRAID guest to the Paravirtual adapter, created a new virtual disk in SCSI mode on the NFS share, booted up unRAID, and added it as my cache drive.  The problem is that the virtual disk always appears to be spun down, so it gets bypassed: writes go directly to the unRAID array disks and I can't create any cache-only shares.  However, I can manually browse to the cache from the terminal and write data to it.  Did I do something wrong when adding this disk?
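A rough sketch of how the add-the-NFS-share-as-a-datastore step can also be done from the ESXi shell instead of the vSphere client - the IP address, export path, and datastore label here are placeholders:

# add the ZFS NFS export as a datastore (IP, export path and label are placeholders)
esxcfg-nas -a -o 192.168.1.10 -s /pool1/nfs zfs_nfs
# list the configured NAS datastores to confirm it mounted
esxcfg-nas -l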


Can't see any issues with what you did - my setup is essentially identical and cache drive writes work fine.  The cache drive appears spun up all the time (which is how I would expect it to appear), as opposed to spun down.

 

 

The only difference I can see is that I set up my datastore via NFS, versus the iSCSI setup you have.

 

That was a whole other issue.  I could not get the iSCSI-shared array to appear in the list to add as a datastore, even though it was showing up as it should under the storage adapters.  I spent a few hours on it and could not get it to work.  I finally just gave up and exported via NFS, as that is actually what the developer of napp-it suggests over iSCSI for use as a datastore in ESXi.


I have actually switched to NFS, so that even eliminates that as a potential issue.  I wanted to regain the ability to bond the 4 ports on my Intel NIC, and NFS was the answer to that (otherwise you have to bind NICs to iSCSI interfaces and have multiple paths to the iSCSI target; I wanted this NIC to do more than that, and the traffic was all internal anyway, so it didn't really make sense to leave it set up as iSCSI).

 

 


I have actually switched to NFS, so that even eliminates that as a potential issue.  I wanted to regain the ability to bond the 4 ports on my Intel NIC, and NFS was the answer to that (otherwise you have to bind NICs to iSCSI interfaces and have multiple paths to the iSCSI target; I wanted this NIC to do more than that, and the traffic was all internal anyway, so it didn't really make sense to leave it set up as iSCSI).

 

Good to know. I'll give this another go and see what happens.

 

One last remaining difference is I believe you are using FreeNAS? I'm using OpenIndiana and napp-it.


Yeah, that's correct - an NFS datastore should be an NFS datastore, though, regardless of platform.  I have an OpenIndiana/napp-it VM at work with a ZFS pool attached; we were using it to evaluate array speeds, so it's in a dev/test environment.  I'll see if I can spin up a free unRAID VM there with a few virtual disks and see what happens... probably won't be until next week though.


Can't see any issues with what you did - my setup is essentially identical and cache drive writes work fine.  The cache drive appears spun up all the time (which is how I would expect it to appear), as opposed to spun down.

 

Finally got around to giving this another go.  After you add the vmdk disk to your array and format it, you have to reboot before writes will start going to it as your cache disk.

 

Not getting very good speeds considering I'm using four 7200RPM 500GB laptop disks in RAID-Z.  I'm only getting a max of about 118 MB/s to the cache, with 70-90 MB/s average.  I remember in another thread you mentioned you were getting in the 180-230 MB/s range.  If I write directly to the NFS share from the same VM I used to write to the cache disk, I get more than double what I get when writing to the cache disk.

 

 

EDIT: Simultaneous reads/writes seem to be a problem too.  I can download from usenet at 13.7 MB/s with my ISP.  If one download is running while another just-completed download is extracting, the speed of whatever is downloading drops to between 2 and 4 MB/s and then picks back up when the extraction finishes.  When I was using a single 320GB 7200RPM laptop disk as my cache drive this didn't even cause it to break a sweat.  The same slowdown occurs if I use dd to benchmark the cache drive while a download is going.  This is really disappointing...


When you provision a cache drive, I'd recommend doing Thick Provision - Eager Zero.  Either of the other two options will result in lower performance.

 

Thick Provision - Eager Zero essentially writes out the entire .vmdk ready to read/write to.

 

I can't comment on Sab's performance on top of a cache drive (if that's what you are using) as I use a separate VM for sab/sick/couch.. writes to the cache drive are only at the final stage when it moves the file across to the array.


When you provision a cache drive, I'd recommend doing Thick Provision - Eager Zero.  Either of the other two options will result in lower performance.

 

Thick Provision - Eager Zero essentially writes out the entire .vmdk ready to read/write to.

 

I can't comment on Sab's performance on top of a cache drive (if that's what you are using) as I use a separate VM for sab/sick/couch.. writes to the cache drive are only at the final stage when it moves the file across to the array.

 

Thanks for the pointers.  I did choose thin provisioning, so I'll give that a try.  I also have SAB, SB, CP, etc. in a separate VM, but I have them download directly to the unRAID VM (so to the cache drive).  When they extract, for instance, the files are read from the cache drive by the Ubuntu VM that hosts the downloaders and actually performs the extraction, and then written straight back to the cache drive.  Like I said before, with the 7200RPM laptop disk I was using previously this didn't cause any performance issues, but I'll try what you suggested and see if that improves things.

 

I hesitate to have everything download directly to the Ubuntu VM and then move it to the unRAID VM once all the post-processing is done, because occasionally I'll download 300-400GB in a single day (gotta love 110Mbps Internet service) and I'd like to avoid having to set aside that much space for the Ubuntu VM.  If the transfer speeds to the cache pick up a good amount when I recreate the cache drive as you suggested, that might not be necessary, as long as it can move the data quickly enough that it doesn't fill up the Ubuntu VM's allocated space (currently 128GB).  I guess I could also move the Ubuntu VM off the ZFS pool and onto its own SSD that I have lying around, for speed, and then back the SSD up once a week to the ZFS pool for redundancy.

 

I also want to pick up an SLC SSD to use as a ZIL, but I'm trying to hold out for a deal.  It's hard to bring myself to spend $110 on a 20GB SSD, which is what SLC SSDs are going for these days.
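For the ZIL idea, once an SSD is in, attaching it as a separate log device is a one-liner - a sketch assuming the pool is named pool1 as in the dd test above, with c3t1d0 as a hypothetical device name for the SSD:

# attach the SSD as a dedicated ZIL (log) device to the existing pool
zpool add pool1 log c3t1d0
# confirm the log vdev shows up
zpool status pool1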


Interesting.  I was curious about the ZIL, and I started to wonder if reiserfs could keep its journal on another drive, perhaps the cache drive or an SSD.  Here's what I found:

 

http://linux.about.com/library/cmd/blcmdl8_reiserfstune.htm

 

I bet that would speed up writes to the unRAID array, since all journal operations would be done elsewhere and wouldn't require any parity calculations.

 

Anyone ever play with this idea?
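A sketch of what relocating the journal might look like with reiserfstune - the option name is taken from the man page linked above and the device names are placeholders; I haven't tried this against an unRAID data disk, so treat it strictly as an assumption to test on something disposable first:

# filesystem must be unmounted; /dev/md1 (data disk) and /dev/sdb1 (new journal device) are placeholders
reiserfstune --journal-new-device /dev/sdb1 /dev/md1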

 


For some reason the GUI for adding a disk to that VM won't let me create a Thick Provision Eager Zeroed disk; it will only allow thin.  When I get to the step in the wizard to choose the provisioning method, thin is selected and the other options are greyed out.  I have to use the command-line vmkfstools utility to manually create it and then add it as an existing disk.
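For anyone else hitting this, the vmkfstools invocation for an eager-zeroed thick disk looks roughly like the line below, run from the ESXi shell - the size, datastore name, and path are placeholders.  The disk can then be attached to the unRAID VM as an existing virtual disk, as described above.

# create a 100GB eager-zeroed thick vmdk on the NFS datastore (size and paths are placeholders)
vmkfstools -c 100G -d eagerzeroedthick /vmfs/volumes/zfs_nfs/unraid/cache.vmdk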


And it works!  The minimal Ubuntu did it, and I am actually glad I went with it.  I'm fine with just a prompt; all I used the Ubuntu desktop for was to start the terminal (and it took me half an hour to find that :-)

 

Installing SABnzbd, CouchPotato and Sick Beard actually was extremely easy.  For the benefit of others, here are the short instructions (put together from several webpages that all had their own shortcomings):

 

This is in real shorthand:

 

sudo add-apt-repository ppa:jcfp/ppa

sudo apt-get install -y sabnzbdplus sabnzbdplus-theme-smpl sabnzbdplus-theme-plush sabnzbdplus-theme-iphone

sudo vi /etc/default/sabnzbdplus
set the host field to 0.0.0.0 and set the port to 8080
sudo service sabnzbdplus start

wget https://github.com/midgetspy/Sick-Beard/tarball/master -O sickbeard.tar.gz
tar xf sickbeard.tar.gz
mv midgetspy-Sick-Beard-8d7484d .sickbeard
sudo mv .sickbeard/init.ubuntu /etc/init.d/sickbeard
sudo vi /etc/init.d/sickbeard
Edit the APP_PATH to point to /home/user/.sickbeard, where "user" is your username and set RUN_AS to your username
sudo update-rc.d sickbeard defaults
sudo service sickbeard start

sudo apt-get install git-core python
git clone git://github.com/RuudBurger/CouchPotatoServer.git .couchpotato
cd ~/.couchpotato/init
sudo cp ubuntu /etc/init.d/couchpotato
sudo vi /etc/init.d/couchpotato
Edit the APP_PATH to point to /home/user/.couchpotato, where "user" is your username, and set RUN_AS to your username
sudo chmod +x /etc/init.d/couchpotato
sudo update-rc.d couchpotato defaults
sudo service couchpotato start

sudo apt-get install cifs-utils
sudo mkdir /mnt/movies
sudo mkdir /mnt/series

Now add the following two lines to /etc/fstab (sudo vi /etc/fstab):
//192.168.1.13/Movies /mnt/movies cifs user=<user>,password=<password>,file_mode=0777,dir_mode=0777 0 0
//192.168.1.13/Series /mnt/series cifs user=<user>,password=<password>,file_mode=0777,dir_mode=0777 0 0
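To test those entries without rebooting (cifs-utils is already installed above), something like this should do it:

# mount everything in fstab that isn't mounted yet, then confirm the cifs mounts
sudo mount -a
mount | grep cifs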

 

The correct syntax to mount read/write took me several hours of trying out several "Yeah, it works" solutions that didn't ;-)

 

BobPhoenix: Yes, I found that last night - with a powered-down VM you need to remove the interface and then re-assign it.  I tried it with another VM but have still not done it with the unRAID VM; everything is pretty speedy as it is.

 

jangjong: I am still using the USB flash drive to boot unRAID, but I am using the SSD to boot ESXi (and ESXi then boots the VM using the flash drive).  The nice thing is that I can now choose between running bare metal and ESXi by changing the boot option on the system: if I boot off the SSD I get ESXi, and if I boot off the USB flash drive I get bare-metal unRAID.

 

I must say I am really loving this setup...  As far as I can tell after one day, the unRAID VM is rock stable; I only have airvideo and cache-dirs running as plugins.  I even found a (free) app for my iPhone that controls the ESXi setup and lets me start/stop the VMs and view the performance of the VMs and the system itself.

 

All appears to be working with the exception of CouchPotato; for some reason it refuses to scan all my movies.

 

I have a question: during installation of Ubuntu minimal, what software should I select if the only purpose (at this point) of the Ubuntu-minimal VM is to run SABnzbd, CouchPotato and Sick Beard?  See the attached screenshot of the software package selections during installation.  Would I want to select "Samba file server" to move files?

Ubuntu_minimal_32bit_-_software_selection.png
