art-informa.pl Posted May 26, 2017

Hello Experts, I just installed Proxmox as a guest on unRaid. The problem is that the performance of the Proxmox VMs is terrible. In comparison, when Proxmox was installed directly on the same server, everything ran 5-20 times faster (system boot time, response time, etc.). I know that this is nested virtualization, but for certain reasons (existing licensed software) I have to keep running my existing Proxmox VMs under unRaid, so I thought installing Proxmox as a guest would be a good starting point. Can anybody please help me understand what I am doing wrong, or what should be changed to get reasonable I/O performance? The Proxmox VM drive is on an unRaid share, and here is the configuration:

<domain type='kvm'>
  <name>Proxmox</name>
  <uuid>cf02a52b-8d17-11fb-0a7d-d5db142d75c4</uuid>
  <description>Proxmox VE</description>
  <metadata>
    <vmtemplate xmlns="unraid" name="Linux" icon="default.png" os="linux"/>
  </metadata>
  <memory unit='KiB'>67108864</memory>
  <currentMemory unit='KiB'>33554432</currentMemory>
  <memoryBacking>
    <nosharepages/>
  </memoryBacking>
  <vcpu placement='static'>24</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='0'/>
    <vcpupin vcpu='1' cpuset='1'/>
    <vcpupin vcpu='2' cpuset='2'/>
    <vcpupin vcpu='3' cpuset='3'/>
    <vcpupin vcpu='4' cpuset='4'/>
    <vcpupin vcpu='5' cpuset='5'/>
    <vcpupin vcpu='6' cpuset='6'/>
    <vcpupin vcpu='7' cpuset='7'/>
    <vcpupin vcpu='8' cpuset='8'/>
    <vcpupin vcpu='9' cpuset='9'/>
    <vcpupin vcpu='10' cpuset='10'/>
    <vcpupin vcpu='11' cpuset='11'/>
    <vcpupin vcpu='12' cpuset='12'/>
    <vcpupin vcpu='13' cpuset='13'/>
    <vcpupin vcpu='14' cpuset='14'/>
    <vcpupin vcpu='15' cpuset='15'/>
    <vcpupin vcpu='16' cpuset='16'/>
    <vcpupin vcpu='17' cpuset='17'/>
    <vcpupin vcpu='18' cpuset='18'/>
    <vcpupin vcpu='19' cpuset='19'/>
    <vcpupin vcpu='20' cpuset='20'/>
    <vcpupin vcpu='21' cpuset='21'/>
    <vcpupin vcpu='22' cpuset='22'/>
    <vcpupin vcpu='23' cpuset='23'/>
  </cputune>
  <os>
    <type arch='x86_64' machine='pc-q35-2.7'>hvm</type>
  </os>
  <features>
    <acpi/>
    <apic/>
  </features>
  <cpu mode='host-passthrough'>
    <topology sockets='1' cores='12' threads='2'/>
  </cpu>
  <clock offset='utc'>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/local/sbin/qemu</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/mnt/user/ai_proxmox/Proxmox/vdisk1.img'/>
      <target dev='hdc' bus='virtio'/>
      <boot order='1'/>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x03' function='0x0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/isos/Hypervisors/proxmox-ve_4.4-eb2d6f1e-2.iso'/>
      <target dev='hda' bus='sata'/>
      <readonly/>
      <boot order='2'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <controller type='usb' index='0' model='nec-xhci'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </controller>
    <controller type='sata' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pcie-root'/>
    <controller type='pci' index='1' model='dmi-to-pci-bridge'>
      <model name='i82801b11-bridge'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1e' function='0x0'/>
    </controller>
    <controller type='pci' index='2' model='pci-bridge'>
      <model name='pci-bridge'/>
      <target chassisNr='2'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x02' function='0x0'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:65:0f:57'/>
      <source bridge='br0'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x01' function='0x0'/>
    </interface>
    <serial type='pty'>
      <target port='0'/>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <channel type='unix'>
      <target type='virtio' name='org.qemu.guest_agent.0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <input type='tablet' bus='usb'>
      <address type='usb' bus='0' port='1'/>
    </input>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <graphics type='vnc' port='-1' autoport='yes' websocket='-1' listen='0.0.0.0' keymap='en-us'>
      <listen type='address' address='0.0.0.0'/>
    </graphics>
    <video>
      <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1' primary='yes'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
    </video>
    <memballoon model='virtio'>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x04' function='0x0'/>
    </memballoon>
  </devices>
</domain>
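One generic KVM tuning step worth trying with a config like the one above (a sketch from common libvirt practice, not something suggested in this thread): changing the virtio disk's cache mode from writeback to none with native AIO, which bypasses the host page cache and often gives more consistent guest I/O. Note that cache='none' needs O_DIRECT support from the underlying filesystem, which unRaid's /mnt/user FUSE layer may not provide, so test against a direct disk path first. Only the driver line differs from the disk definition above:

```xml
<disk type='file' device='disk'>
  <!-- cache='none' bypasses the host page cache; io='native' uses Linux AIO.
       Generic KVM tuning suggestion; verify O_DIRECT works on this path first. -->
  <driver name='qemu' type='raw' cache='none' io='native'/>
  <source file='/mnt/user/ai_proxmox/Proxmox/vdisk1.img'/>
  <target dev='hdc' bus='virtio'/>
  <boot order='1'/>
</disk>
```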
1812 Posted May 26, 2017

5 minutes ago, art-informa.pl said: Proxmox VM drive is on a share of unRaid

If this is on a spinning disk in a parity-protected array, that's the bulk of your problem. Put the disk image on the cache or on a disk via unassigned devices.
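To illustrate that suggestion: moving the vdisk means copying the image to faster storage and pointing the domain's source element at the new location. The /mnt/cache path below is a hypothetical example; an unassigned-devices mount would typically live under /mnt/disks/ instead.

```xml
<disk type='file' device='disk'>
  <driver name='qemu' type='raw' cache='writeback'/>
  <!-- hypothetical path: image copied off the parity-protected array onto the
       cache pool (or under /mnt/disks/ for an unassigned device) -->
  <source file='/mnt/cache/ai_proxmox/Proxmox/vdisk1.img'/>
  <target dev='hdc' bus='virtio'/>
  <boot order='1'/>
</disk>
```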
art-informa.pl Posted May 27, 2017 (Author)

40 minutes ago, 1812 said: If this is on a spinning disk in a parity-protected array, that's the bulk of your problem. Put the disk image on the cache or on a disk via unassigned devices.

I'll try this, but the same thing happens when the image is on an external NFS share coming from external NAS storage. Now I have to free one of the unRaid array disks, so I will be back with this tomorrow. Thanks
1812 Posted May 27, 2017

57 minutes ago, art-informa.pl said: I'll try this, but the same thing happens when the image is on an external NFS share coming from external NAS storage. Now I have to free one of the unRaid array disks, so I will be back with this tomorrow. Thanks

Of course it's going to be slow on an external share. SMB/NFS and similar network protocols have higher latency than even a single disk outside the array, and latency is what matters when running an operating system. I played around with running VM images over SMB/NFS for fun; they were mostly unusable even over 10GbE direct-attach copper to a remote SSD. The only way to get very fast network-hosted drive images is with a SAN setup of some sort, allowing block-level access to the disks.

Be aware that even when you use a spinning disk for Proxmox, it will be limited by the I/O of that single disk, and it will only get slower the more VMs you add to it. This is why many people use unassigned SSDs for VMs: you can run a couple with little noticeable slowdown if they aren't all demanding maximum disk bandwidth.
art-informa.pl Posted May 27, 2017 (Author, edited)

6 hours ago, 1812 said: Be aware that even when you use a spinning disk for Proxmox, it will be limited by the I/O of that single disk, and it will only get slower the more VMs you add to it. This is why many people use unassigned SSDs for VMs: you can run a couple with little noticeable slowdown if they aren't all demanding maximum disk bandwidth.

Hi 1812, thank you for your engagement. I have 24 drives in my unRaid. What about taking e.g. 4 disks out of the array and putting them on an additional H310 controller (not flashed)? The idea is to have 4x2TB disks in RAID5 (hardware RAID via the H310) and then use the resulting 6TB virtual disk as an unassigned device. Will this be recognized under unRaid? Will it work? I understand, of course, that this unassigned device will have no parity protection from unRaid (but it will from the controller, which uses one disk's worth of capacity for parity in the VD definition). In this setup I will of course auto-backup the VMs from Proxmox to an external array regularly. Maybe this will add more I/O throughput? Am I thinking in the right direction? An additional question: would iSCSI over gigabit also not be effective enough?

Edited May 27, 2017 by art-informa.pl
1812 Posted May 27, 2017

16 hours ago, art-informa.pl said: What about taking e.g. 4 disks out of the array and putting them on an additional H310 controller (not flashed)? The idea is to have 4x2TB disks in RAID5 (hardware RAID via the H310) and then use the resulting 6TB virtual disk as an unassigned device. Will this be recognized under unRaid? Will it work? [...] An additional question: would iSCSI over gigabit also not be effective enough?

It might work, not sure. You could always run the disks as cache disks in RAID10 and get good speed and redundancy without worrying about another RAID controller. iSCSI might work, but I don't know anyone doing it. That doesn't mean it isn't done, just that I can't help you there.
art-informa.pl Posted June 7, 2017 (Author)

I completely gave up on using unRaid array disks as Proxmox drives. I took 4 drives out of the unRaid array, put them in RAID10 on the H310, and added the resulting virtual disk to unRaid as an unassigned device. A guest virtual machine with Proxmox was then put on this unassigned device, and nested virtualization support was added to unRaid's boot options.

The result: perfect speed of the virtual machines on Proxmox, with all the benefits of running the Proxmox system on the same server as unRaid. Thank you all for the hints.
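For anyone wondering what "added to the boot options" typically looks like: on Intel hosts, nested virtualization is usually enabled by passing kvm-intel.nested=1 on the kernel command line (kvm-amd.nested=1 on AMD), which on unRaid generally means editing the append line in syslinux.cfg on the flash drive. The fragment below is a generic sketch, not the poster's exact config; label names and initrd paths vary by unRaid version.

```
label unRAID OS
  kernel /bzimage
  append kvm-intel.nested=1 initrd=/bzroot
```

After rebooting, `cat /sys/module/kvm_intel/parameters/nested` should report Y (or 1) if the option took effect.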
maternal-effrontery7688 Posted November 25, 2023

Hi there, I'm interested in trying this out. Let me check if I understood correctly: you're booting unRaid, and on unRaid you're booting a VM with Proxmox, with a dedicated disk and controller for Proxmox, and it works like a charm? I'm kind of a noob here, so what do you mean by "nested virtualization support" in unRaid? Thanks in advance for your time and help.