punksux Posted November 15, 2011

I am using unRAID 5 beta 13 as a virtual machine in ESXi 5 and I am getting very slow transfers with very high CPU usage. When I transfer data to unRAID, from either a physical machine or another virtual machine, I get around 1 MB/s or slower. If I run the "top" command in unRAID while transferring, it shows 100% CPU usage, mostly from the "smbd" process. Access to the webGUI or shares is also slow or non-existent during a transfer. On long transfers I usually get a lot of "hangcheck value past margin!" errors in the syslog, and after the transfer completes I have to restart unRAID to fix the connection.

The server is:
Biostar A880G+
Athlon II X2 255 3.1GHz
4 GB RAM
Intel Pro/1000 GT

The unRAID VM is set to 1 core and 2 GB RAM with 66% of the shares. I have tried running unRAID in its own vSwitch with its own NIC, but that did not help. Switching the vNIC to VMXNET 3 made no difference.

Any help would be appreciated. Thanks.

syslog-2011-11-14.txt
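A quick way to capture what top is showing here, for the record, is a one-shot process listing sorted by CPU. This is a generic Linux (procps) sketch, nothing unRAID-specific; run it on the unRAID console during a transfer.

```shell
# Snapshot the top CPU consumers once, instead of watching top interactively.
# Generic procps invocation; during a transfer this should show smbd near
# the top of the list if it is the process pinning the CPU.
ps -eo pcpu,pmem,comm --sort=-pcpu | head -n 6
```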
Johnm Posted November 15, 2011

Hrmm... I'd be interested in seeing your ESXi usage when you have this issue. I never use more than 20% of my total ESXi CPU except when I have HandBrake running. These screenshots were taken with several guests running: the newsbin and torrent guests are downloading at about 50 Mb/s straight to my unRAID from the internet, and I am streaming 3 Blu-rays and par/raring files on the array. As you can see, nothing (I am hurting badly on RAM, though)...

I'll wager it is your hardware or BIOS. If I had to guess, the problem is a result of the low-end desktop board without virtualization technology, along with a low-powered CPU. Looking through the spec sheet, it looks like that board does not support AMD's I/O virtualization technology, "AMD-Vi" or "IOMMU". Without VT, ESXi falls back to binary translation (emulating the hardware). That, paired with a lower-end dual-core CPU without hyperthreading, results in ESXi burning a lot of overhead. Again, I am only guessing at this point... The same thing would happen with a Celeron CPU or even a P4. It is like taking a scooter onto the autobahn: you can't keep up with traffic.

It is hard to find a desktop that runs ESXi; the HCL is small. And once you do, you want it as beefy as possible.
punksux Posted November 15, 2011

Thank you for your reply. You are probably right that it is my hardware. While transferring, the CPU usage only gets up to 1 GHz and the RAM is almost full. The weird thing is, I can stream 3-4 movies simultaneously to different computers; it's just transferring to the server that kills it. This is my first time using ESXi, so it might also be a configuration error.
prostuff1 Posted November 15, 2011

I had/have an issue with my ESXi board and the IRQs. My SASLP and Ethernet are on the same bus. There is an IRQ problem that can cause a "Disabling IRQ" message for the IRQ shared by the SASLP and the Ethernet controller. When that happens, my I/O wait skyrockets and my read/write to the array drops off dramatically.

Check for a BIOS update for your board first. If that does not fix it, add irqfixup or irqpoll to the syslinux.cfg file on your flash drive. My syslinux.cfg file (cat /boot/syslinux.cfg):

default menu.c32
menu title Lime Technology LLC
prompt 0
timeout 50
label unRAID OS
  menu default
  kernel bzimage
  append initrd=bzroot irqfixup rootdelay=10
label FreeDOS
  kernel memdisk
  append iso initrd=fdbasecd.iso
label Memtest86+
  kernel memtest

I had irqpoll in there first and that was working fine. I read up a little and irqfixup is supposed to be less "intensive" on the system. I have changed it to irqfixup for now, but I have not yet rebooted to see whether irqfixup does the job without the overhead of irqpoll.
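For anyone who would rather script the change than hand-edit the flash drive, here is a sketch of adding irqfixup to the append line with sed. It works on a throwaway copy with illustrative contents; on a real server the file is /boot/syslinux.cfg and should be backed up first.

```shell
# Build a throwaway copy of the relevant syslinux.cfg section, then add
# irqfixup to the unRAID append line. Paths and contents are illustrative;
# back up the real /boot/syslinux.cfg before editing it in place.
cat > /tmp/syslinux.cfg <<'EOF'
label unRAID OS
  menu default
  kernel bzimage
  append initrd=bzroot rootdelay=10
EOF
sed -i 's/append initrd=bzroot/& irqfixup/' /tmp/syslinux.cfg
grep append /tmp/syslinux.cfg
```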
Johnm Posted November 15, 2011

That is a good point about the IRQ. At 2 AM I had just assumed it was the lack of VT. Perhaps the OP should also try disabling everything in the BIOS that is not needed: audio, onboard NIC, etc.
SCSI Posted November 16, 2011

Your hardware is the same as mine except my MB is an Asus M4A88T-M. It's interesting to see that you have Lion and Lion Server as your VMs. Can you include a link to instructions for getting those running? Also, is pass-through working? Sorry to derail your thread here.
punksux Posted November 16, 2011

prostuff1: I tried what you said with irqfixup in syslinux.cfg, and it works for the first transfer (over 40 MB/s). After that it's back to what it was (slow, with high CPU usage). After the transfer the CPU usage goes back down, but the transfers never speed up until I reset unRAID. I attached a picture of unRAID running top in case it helps.

I have the latest BIOS and have disabled everything I'm not using. I haven't disabled the onboard NIC, as I have dedicated one NIC per VM, but if you think it will help, I will try. Other than that I don't know what to do. Thanks for all your help.
Johnm Posted November 16, 2011

(in reply to SCSI) I assume you meant me. If so, our hardware is pretty different.

As for OS X: it is supported by ESXi 5 (and the latest VMware family), but it is only supported (i.e. unlocked) on Apple hardware... (as far as this forum is concerned). That said, a little google-fu will get you the answer you want.
Johnm Posted November 16, 2011

(in reply to punksux) You don't have to; it was just a thought. It won't hurt to test without it and see if that is a possible hardware conflict. Also, do you have anything in your smb.conf?
punksux Posted November 16, 2011

I do not have an smb.conf; should I? I actually got 2 hangchecks while just sitting there.
prostuff1 Posted November 16, 2011

Try irqpoll and see what happens; it can't hurt at this point. The I/O wait (%wa) looks fine. Are you trying to write to the same disk that Transmission is saving/downloading to? If so, try stopping Transmission completely and see if the speed picks back up a bit. I would probably also not dedicate a NIC per VM in this case; the virtual e1000 NIC works perfectly fine for my unRAID build.
punksux Posted November 16, 2011

I will try irqpoll. Yes, Transmission is saving to the same disk; I'll try stopping that too. Is there something better than transferring a file for testing the network?
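On testing the network without involving the array disks: if you can get iperf onto both ends, it measures raw TCP throughput directly (iperf -s on one machine, iperf -c <host> on the other). A cruder disk-free sanity check is timing a synthetic stream with dd; the sketch below only measures local pipe speed as written, but pointed at an SMB-mounted path it exercises the network instead. Sizes here are arbitrary.

```shell
# Push 100 MB of zeros through a pipe and let the reading dd report the
# byte count and throughput on stderr. Locally this is a CPU/pipe test;
# change of=/dev/null to a path on a mounted share to test the network path.
dd if=/dev/zero bs=1M count=100 2>/dev/null | dd of=/dev/null bs=1M
```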
Johnm Posted November 16, 2011

Not having an smb.conf is fine. I just wanted to make sure you did not have some sort of funky custom settings.
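For reference, the "funky custom settings" in question would be Samba overrides, which on unRAID typically live in an SMB extras file on the flash drive (commonly /boot/config/smb-extra.conf, merged into the generated smb.conf). A purely illustrative fragment of the kind of tuning people drop in there; the option values are assumptions for illustration, not recommendations:

```ini
# Hypothetical smb-extra.conf fragment -- example only, values are
# illustrative assumptions, not tuning advice for this server.
[global]
  socket options = TCP_NODELAY SO_RCVBUF=65536 SO_SNDBUF=65536
  max xmit = 65536
```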
punksux Posted November 16, 2011

Could setting up the unRAID VM as an Ubuntu 64-bit guest cause the problems? I just selected that as the operating system during setup because I wasn't sure what else to use.
punksux Posted November 16, 2011

A weird thing: with the Windows 7 VM turned off, I can transfer from my physical computer at 80 MB/s with none of the bad effects. Now I am really wondering if it is just my low-end hardware.
Johnm Posted November 17, 2011

Err... OK, let's look at your ESXi setup. How are your drives set up?
punksux Posted November 17, 2011

I have one 160 GB drive as my datastore drive, and one 1.5 TB drive RAW-mapped as my unRAID storage drive. ESXi and unRAID are on USB sticks, and the unRAID stick is passed through to the VM.
Johnm Posted November 17, 2011

So the only drive in unRAID is the 1.5 TB, and the Win7 guest is on the datastore?
punksux Posted November 17, 2011

Yes, that is right.
Johnm Posted November 17, 2011

Hrmm... sounds like you might have found something there. I was asking to see if you were over-using one drive, but you split the guests. So... it might be physical resources after all.
punksux Posted November 17, 2011

Yeah, that's what I was afraid of. Thanks for all your help. I will just wait until I can get a real virtualization-capable server.
Johnm Posted November 17, 2011

One additional tweak you might try with your existing setup: go into the unRAID guest's settings in the vSphere client and set a fairly beefy CPU reservation for unRAID. See if maybe you can get ESXi to steal the CPU from Win7 instead.

If that fails, you might be able to get away with what you have by adding a larger CPU. I can't guarantee that, but if you have a newer quad core lying about in another system, it might be worth a try.

If you do go the new hardware route and are on a budget... I don't know if you saw this post, http://lime-technology.com/forum/index.php?topic=16622.0 , but it is a pretty slick and inexpensive (for ESXi, anyway) high-power option. You should be able to recycle your current case, drives, and maybe the RAM (4 GB is still pretty low, but a start).
punksux Posted November 17, 2011

Thanks for that link; I'll look into it. I have been eyeing an old server on eBay with 2 quad-core Xeons and 8 GB RAM. It has VT-d, and the motherboard is on the ESXi HCL. I just wonder if the first-gen SATA will be too slow.
prostuff1 Posted November 17, 2011

SATA 1.5? Most/all spinning HDDs don't sustain the roughly 150 MB/s that SATA 1.5 Gb/s provides, so you will not hit the SATA 1.5 bottleneck. The only way you would is if you add an SSD to the mix. If you plan to do that, then 1.5 might be a little limiting... but even then, random access times are so much lower on an SSD that it will be fine even on SATA 1.5.

You can also try out ESXi 4.1 and see if something works better under that.
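The link-speed arithmetic, for anyone curious: SATA 1 signals at 1.5 Gb/s, and its 8b/10b encoding spends 10 line bits per data byte, which is where the roughly 150 MB/s figure comes from.

```shell
# SATA 1 line rate is 1.5 Gb/s; 8b/10b encoding means 10 line bits per
# data byte, so usable bandwidth is about 150 MB/s.
echo "$(( 1500000000 / 10 / 1000000 )) MB/s"   # prints: 150 MB/s
```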
punksux Posted November 17, 2011

OK, I will just plan on new hardware then. Thank you guys for all your help.
This topic is now archived and is closed to further replies.