ESXi 5 / unRaid 5b13 - Slow transfers / High CPU Usage


punksux


I am using unRAID 5 beta 13 as a virtual machine in ESXi 5 and I am seeing very slow transfers with very high CPU usage. When I transfer data to unRAID, from either a physical machine or another virtual machine, I get around 1 MB/s or slower. If I run the "top" command in unRAID during a transfer, it shows 100% CPU usage, mostly from the "smbd" process. Access to the web UI or shares is also slow or non-existent while transferring. On long transfers, I usually get a lot of "hangcheck value past margin!" errors in the syslog, and after the transfer completes I have to restart unRAID to restore the connection.

 

The server is:

Biostar A880G+

Athlon II X2 255 3.1GHz

4 GB RAM

Intel Pro 1000 GT

 

unRAID is assigned 1 core and 2 GB of RAM, with 66% of the CPU shares.

I have tried running unRAID in its own vSwitch with its own NIC, but that did not help. Switching the vNIC to VMXNET 3 made no difference.

 

Any help would be appreciated.

Thanks

syslog-2011-11-14.txt


 

Hrmm....

 

I'd be interested in seeing your ESXi usage when you have this issue.

 

I never use more than 20% of my total ESXi CPU except when I have HandBrake running.

 

These screenshots were taken with several guests running. The newsbin and torrent guests are downloading at about 50 Mb/s straight to my unRAID from the internet, and I am streaming 3 Blu-rays and par/raring files on the array.

As you can see, nothing (I am hurting badly on RAM, though)... I'll wager it is your hardware or BIOS.

[Screenshots: ESXi CPU and memory usage graphs]

 

If I had to guess...

 

the problem is a result of a low-end desktop board without virtualization technology, combined with a low-powered CPU.

 

Looking through the spec sheet, it appears that board does not support AMD's I/O virtualization technology, "AMD-Vi" (IOMMU).

 

Without VT, ESXi falls back to binary translation (emulating the hardware in software). That, combined with a lower-end dual-core CPU without hyperthreading, results in ESXi burning a lot of overhead.
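As a quick sanity check, you can look for the hardware virtualization CPU flags from any Linux console booted on that hardware (a sketch, not ESXi-specific; note that inside a guest the hypervisor may hide these flags even when the host CPU has them):

```shell
# svm = AMD-V, vmx = Intel VT-x; if neither flag is present,
# the hypervisor has to fall back to binary translation.
if grep -q -E 'svm|vmx' /proc/cpuinfo; then
    echo "hardware virtualization flags present"
else
    echo "no hardware virtualization flags found"
fi
```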

 

Again, I am only guessing at this point...

 

The same thing would happen with a Celeron CPU or even a P4.

It is like taking a scooter onto the autobahn: you can't keep up with traffic...

 

It is hard to find a desktop that runs ESXi; the HCL is small. And once you do, you want it as beefy as possible.

 


Thank you for your reply. You are probably right that it is my hardware. While transferring, CPU usage only gets up to 1 GHz and the RAM is almost full. The weird thing is, I can stream 3-4 movies simultaneously to different computers; it is only transfers to the array that kill it. This is my first time using ESXi, so it might also be a configuration error.

[Screenshot: ESXi performance graph]


I had/have an issue with my ESXi board and IRQs. My SASLP and Ethernet controllers are on the same bus. There is an IRQ problem that can cause a "Disabling IRQ" message for the IRQ shared by the SASLP and the Ethernet controller. When that happens, my I/O wait skyrockets and my read/write speed to the array drops off dramatically.
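A rough way to check for that condition from the unRAID console (assuming a standard Linux /proc layout; the exact device names will differ per board):

```shell
# The kernel logs "Disabling IRQ #N" when it gives up on a misbehaving interrupt line
dmesg 2>/dev/null | grep -i "Disabling IRQ" || echo "no disabled IRQs logged"

# /proc/interrupts shows which devices share an interrupt line; the SASLP and
# the NIC appearing on the same IRQ row is the conflict described above
cat /proc/interrupts
```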

 

Check for a BIOS update to your board first.

 

If that does not fix it, then add irqfixup or irqpoll to the syslinux.cfg file on your flash drive.

 

My syslinux.cfg file:

cat /boot/syslinux.cfg
default menu.c32
menu title Lime Technology LLC
prompt 0
timeout 50
label unRAID OS
  menu default
  kernel bzimage
  append initrd=bzroot irqfixup rootdelay=10
label FreeDOS
  kernel memdisk
  append iso initrd=fdbasecd.iso
label Memtest86+
  kernel memtest

I had irqpoll in there first and that worked fine. I read up a little, and irqfixup is supposed to be less "intensive" on the system, so I have changed it to irqfixup for now, but I have not yet rebooted to see if irqfixup will do the job without the overhead of irqpoll.


Your hardware is the same as mine except my MB is an Asus M4A88T-M. It's interesting to see that you have Lion and Lion Server as your VMs. Can you include a link to instructions for getting those running? Also, is passthrough working?

 

Sorry to derail your thread here.


prostuff1 :

 

I tried what you said with irqfixup in syslinux.cfg, and it works for the first transfer (over 40 MB/s). After that it's back to what it was (slow, with high CPU usage). After the transfer the CPU usage goes back down, but transfers never speed up until I reset unRAID. I attached a picture of unRAID running top in case it helps.

 

I have the latest BIOS and have disabled everything I'm not using. I haven't disabled the onboard NIC, since I have dedicated one NIC per VM, but if you think it will help, I will try.

 

Other than that I don't know what to do. Thanks for all your help.

[Screenshot: unRAID running top]


Quote: "Your hardware is the same as mine except my MB is an Asus M4A88T-M. It's interesting to see that you have Lion and Lion Server as your VMs. Can you include a link to instructions for getting those running? Also, is passthrough working? Sorry to derail your thread here."

 

I assume you meant me. If so, our hardware is pretty different.

 

As far as OS X goes, it is supported by ESXi 5 (and the latest VMware family).

But it is only supported (i.e. unlocked) on Apple hardware... (as far as this forum is concerned.)

 

That said, a little google-fu will get you the answer you want.


Quote: "I have the latest BIOS and have disabled everything I'm not using. I haven't disabled the onboard NIC, since I have dedicated one NIC per VM, but if you think it will help, I will try."

 

You don't have to; it was just a thought.

It won't hurt to test without it and see whether there is a hardware conflict.

 

Also, do you have anything in your smb.conf?
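For context, "anything in smb.conf" usually means Samba tuning entries. A hypothetical example of the kind of options people add (illustrative only, not a recommendation specific to this setup):

```
[global]
  # larger socket buffers can help sustained transfers on gigabit links
  socket options = TCP_NODELAY SO_RCVBUF=65536 SO_SNDBUF=65536
  # raw reads/writes reduce per-request overhead for older SMB1 clients
  read raw = yes
  write raw = yes
  max xmit = 65535
```

If no such file exists, Samba is simply running with its defaults, which is normal on a stock unRAID install.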

 


 

Quote: "You don't have to; it was just a thought. It won't hurt to test without it and see whether there is a hardware conflict. Also, do you have anything in your smb.conf?"

 

 

I do not have an smb.conf; should I?

I actually got 2 hangchecks while just sitting there.


Try irqpoll and see what happens; it can't hurt at this point.

 

The I/O wait (wa%) looks fine. Are you trying to write to the same disk that Transmission is saving/downloading to? If so, try stopping Transmission entirely and see if the speed picks back up a bit.

 

I would probably also not dedicate NICs per VM in this case. A virtual e1000 NIC works perfectly fine for my unRAID build.


 

One additional tweak you might try with your existing setup: go into the unRAID guest's settings in the vSphere client

and set a CPU reservation for unRAID to something fairly beefy. See if you can get ESXi to steal CPU from the Win 7 guest instead.

 

If that fails, you might be able to get away with what you have by adding a larger CPU.

I can't guarantee that, but if you have a newer quad-core lying about in another system, it might be worth a try.

 

If you do go the new hardware route and are on a budget...

I don't know if you saw this post, http://lime-technology.com/forum/index.php?topic=16622.0 , but it is a pretty slick and inexpensive (for ESXi, anyway) high-power option.

 

You should be able to recycle your current case, drives, and maybe the RAM (4 GB is still pretty low, but it's a start).


SATA 1.5? Most (if not all) HDDs can't sustain 150 MB/s, so you will not hit the SATA 1.5 Gb/s bottleneck.

 

 

The only way you would is if you add an SSD to the mix. If you plan to do that, then SATA 1.5 might be a little limiting... but even then, random access times are so much lower on an SSD that even on SATA 1.5 the SSD will be fine.
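The arithmetic behind that: SATA uses 8b/10b encoding, so only 80% of the 1.5 Gb/s line rate is payload, which works out to about 150 MB/s. A quick sketch:

```shell
line_rate_mbps=1500                        # SATA "1.5 Gb/s" line rate, in Mb/s
payload_mbps=$((line_rate_mbps * 8 / 10))  # 8b/10b encoding: 80% is payload
usable_MBps=$((payload_mbps / 8))          # bits -> bytes
echo "SATA 1.5 usable bandwidth: ~${usable_MBps} MB/s"
# -> SATA 1.5 usable bandwidth: ~150 MB/s
```

A platter drive of that era tops out well under 150 MB/s sequential, which is why the interface is not the bottleneck.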

 

You can also try out ESXi 4.1 and see if it works better.


Archived

This topic is now archived and is closed to further replies.
