jonp Posted June 18, 2014

IMPORTANT NOTE RE: VIRSH / LIBVIRT AND XEN

For those with existing Xen VMs from a previous beta, please note that using libvirt / virsh to create and manage Xen VMs in beta 6 is experimental. We completed some testing with success, but other tests performed today, specifically when trying to convert an ArchVM to run under virsh, resulted in libvirt crashing. This doesn't crash the host or affect your Docker-managed containers, but it does prevent further use of libvirt without a reboot. In the meantime, we encourage Xen users not to become reliant on libvirt / virsh for day-to-day use, but rather to treat it as a new management tool to experiment with. You can continue to use Xen with xl create and the webGUI management tool until we can resolve these issues. Thanks!

VM Quick-Start Guide

First and foremost, there are two ways to create and manage VMs in Beta 6: Xen's xl tool and libvirt's virsh tool. While xl is specific to creating Xen-based VMs, virsh can be used to create both Xen- and KVM-based virtual machines. This guide, however, focuses only on how to use the new virsh tool to create your VMs. There are three essential steps to go through for the creation of each virtual machine: create a vDisk, create a domain XML file, and create your VM.

Step 1: Create a vDisk

Creating your vDisk in beta 6 becomes even easier and more efficient. While folks have previously used the truncate command for this, things change a bit in beta 6. Using the new qemu-img tool, we can create virtual disks in newly supported image formats. Let's say we wanted to create a 40GB image file for a new virtual machine called vm1:

qemu-img create -f raw vm1.img 40G

Running this command will create a 40GB virtual disk in the RAW format in your current directory. Alternatively, if we want to try the new qcow2 image format, we can use this:

qemu-img create -f qcow2 vm1.qcow2 40G

NOTE: When creating a QCOW2 vDisk, you do not have to use the file extension "qcow2"; you can still specify just "img" while using the qcow2 format. That said, we recommend the qcow2 extension because it will serve as a reminder of the format you chose to use. Additional information about this image format and its benefits over RAW can be found online and will also eventually be discussed here on our forums.

Step 2: Create a Domain XML File

In order to create a VM, you ultimately need two things: a disk image and a domain configuration file. Since we already have our disk image, we're ready to create the configuration file. As mentioned previously, libvirt requires that domain configuration files be written in XML. The full details of what can go into a domain XML file can be found on the libvirt website here: http://libvirt.org/formatdomain.html.
For now, here's a sample of an XML file we used with virsh to create a basic KVM virtual machine without any type of hardware pass through:

<domain type='kvm'>
  <name>vm1</name>
  <uuid>554cbf6b-aa75-4044-b1b3-c1005bea6062</uuid>
  <memory unit='GB'>8</memory>
  <currentMemory unit='GB'>8</currentMemory>
  <vcpu>2</vcpu>
  <os>
    <type arch='x86_64' machine='q35'>hvm</type>
    <loader>/usr/share/qemu/bios-256k.bin</loader>
    <boot dev='cdrom'/>
  </os>
  <features>
    <acpi/>
  </features>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <!-- VIRTUAL DISK (IMG) -->
    <disk type='file' device='disk'>
      <source file='/mnt/path/to/vms/vm1.img'/>
      <target dev='hda'/>
    </disk>
    <!-- VIRTUAL CD-ROM (ISO) -->
    <disk type='file' device='cdrom'>
      <source file='/mnt/path/to/isos/boot.iso'/>
      <target dev='hdc' bus='ide'/>
      <readonly/>
    </disk>
    <graphics type='vnc' port='5900'>
      <listen type='address' address='0.0.0.0'/>
    </graphics>
    <interface type='bridge'>
      <source bridge='xenbr0'/>
    </interface>
    <emulator>/usr/bin/qemu-system-x86_64</emulator>
    <input type='mouse' bus='ps2'/>
  </devices>
</domain>

Remember to save the file in XML format, and be sure to check the Virtualization forums for more in-depth guides on domain XML configuration and advanced features, including hardware pass through!

NOTE: QCOW2 usage has only been tested in conjunction with VirtIO drivers for that disk device. Feel free to test with the above XML code, but it may not work. Additional guides on using QCOW2 and VirtIO will be posted to this forum eventually.

Step 3: Create Your VM

Now that we have both a virtual disk and a domain XML file, we are ready to create our virtual machine! From the directory containing both the img and xml files, type the following command:

virsh create vm1.xml

If successful, the console will print back "Domain vm1 created from vm1.xml". In addition, you can type virsh list to see your running VMs. Using the XML configuration example from before, we also enabled connecting to the VM over VNC. To do so, specify the IP of the unRAID server and use the port specified in the XML file (in our example, 5900). To safely shut down your VM from an SSH console, you can type:

virsh shutdown vm1

This is just the beginning of how to take advantage of libvirt and virsh in unRAID 6 Beta 6. This guide will be updated over time with more content, and separate guides will be created for more advanced concepts such as hardware pass through support.
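For reference, virsh create as used above starts a transient domain directly from the XML file; it disappears from libvirt's list once it shuts down. If you would rather have libvirt keep the definition around, libvirt's usual define/start workflow is the alternative. A minimal sketch (not covered in the guide above, so treat it as untested here), assuming the same vm1.xml:

virsh define vm1.xml    # register the domain with libvirt (persistent definition)
virsh start vm1         # boot the registered domain
virsh list --all        # show running as well as defined-but-stopped domains
virsh shutdown vm1      # request a clean guest shutdown
virsh undefine vm1      # remove the registration (the vDisk file itself is left alone)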
JustinChase Posted June 18, 2014

Nice guide, thanks. A couple of things I noticed...

1. You refer to version 6d, but we only have version 6 currently. I assume 6d refers to an internal version you used prior to releasing to us, but it may refer to a yet-to-be-released version, so I just wanted to get clarification, or suggest you update your guide.
2. You have an extra [/i] just above step 3.

And a couple of questions...

1. In the XML example you posted, it shows <uuid>554cbf6b-aa75-4044-b1b3-c1005bea6062</uuid>, but I'm not sure if this is just a made-up number or is determined from someplace. i.e. how do we find/determine this number?
2. If we create a new image in the qcow2 format, could we copy or move a current VM into this new image, or would we need to recreate the VM from scratch?
3. Do the XML files get stored in the same location as the current Xen cfg files, and do they have to be registered the same way?

I think that's it for now, thanks!
jonp Posted June 18, 2014

Nice guide, thanks. A couple of things I noticed... 1. You refer to version 6d, but we only have version 6 currently. I assume 6d refers to an internal version you used prior to releasing to us, but it may refer to a yet-to-be-released version, so I just wanted to get clarification, or suggest you update your guide.

Thanks for the proofreading ;-). I've removed the "d"; you are correct, that was an internal version.

2. You have an extra [/i] just above step 3.

;-) Fixed.

1. In the XML example you posted, it shows <uuid>554cbf6b-aa75-4044-b1b3-c1005bea6062</uuid>, but I'm not sure if this is just a made-up number or is determined from someplace. i.e. how do we find/determine this number?

It can be whatever (it may have to be that exact length of characters, though). I don't even think this field is actually required for "virsh create" to work.

2. If we create a new image in the qcow2 format, could we copy or move a current VM into this new image, or would we need to recreate the VM from scratch?

I think there are ways to convert an existing image from RAW to QCOW2, but I'm not 100% sure on the process for that. I would think Ionix may be updating his tool to also help with this, but my guess is that will take him a little time to complete.

3. Do the XML files get stored in the same location as the current Xen cfg files, and do they have to be registered the same way?

Similar to how Xen works, the .xml file can technically be located anywhere, but for the sake of your own sanity we recommend the following practice:

- Keep your domain configuration files (xml or cfg) and your virtual disk images (.img or .qcow2) in the same folder
- Name these two files the same way (e.g. win81.xml, win81.qcow2)
- Name the folder the same way as well (e.g. /mnt/cache/domains/win81)

NOTE: There is no "xenman register" equivalent for KVM yet. Everything is 100% command line right now. This will change in the future...
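To make that convention concrete, here is one possible on-disk layout; the win81 names and the /mnt/cache/domains path are just the examples from the post above, not anything unRAID creates for you:

mkdir -p /mnt/cache/domains/win81
# keep the domain config and its vDisk together, both named after the VM:
#   /mnt/cache/domains/win81/win81.xml     <- domain configuration (or win81.cfg for Xen/xl)
#   /mnt/cache/domains/win81/win81.qcow2   <- virtual disk image (or win81.img for RAW)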
peter_sm Posted June 18, 2014

1. In the XML example you posted, it shows <uuid>554cbf6b-aa75-4044-b1b3-c1005bea6062</uuid>, but I'm not sure if this is just a made-up number or is determined from someplace. i.e. how do we find/determine this number?

It can be whatever (it may have to be that exact length of characters, though). I don't even think this field is actually required for "virsh create" to work.

You can create a unique ID with uuidgen.
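For example (the generated value is random on each run, so yours will differ from the one in the guide):

uuidgen                     # prints something like 554cbf6b-aa75-4044-b1b3-c1005bea6062
UUID=$(uuidgen)
echo "<uuid>$UUID</uuid>"   # paste the resulting line into the domain XML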
BobPhoenix Posted June 18, 2014

2. If we create a new image in the qcow2 format, could we copy or move a current VM into this new image, or would we need to recreate the VM from scratch?

I think there are ways to convert an existing image from RAW to QCOW2, but I'm not 100% sure on the process for that. I would think Ionix may be updating his tool to also help with this, but my guess is that will take him a little time to complete.

Would what is talked about here work? http://lnx.cx/docs/vdg/html/ch02s04.html
jonp Posted June 18, 2014

2. If we create a new image in the qcow2 format, could we copy or move a current VM into this new image, or would we need to recreate the VM from scratch?

I think there are ways to convert an existing image from RAW to QCOW2, but I'm not 100% sure on the process for that. I would think Ionix may be updating his tool to also help with this, but my guess is that will take him a little time to complete.

Would what is talked about here work? http://lnx.cx/docs/vdg/html/ch02s04.html

Quite possibly! I'll have to test this out sometime in the near future.
JustinChase Posted June 18, 2014

2. If we create a new image in the qcow2 format, could we copy or move a current VM into this new image, or would we need to recreate the VM from scratch?

I think there are ways to convert an existing image from RAW to QCOW2, but I'm not 100% sure on the process for that. I would think Ionix may be updating his tool to also help with this, but my guess is that will take him a little time to complete.

Would what is talked about here work? http://lnx.cx/docs/vdg/html/ch02s04.html

I'm copying my Windows8.img right now, and will test on the copy when it finishes.
JustinChase Posted June 18, 2014

Would what is talked about here work? http://lnx.cx/docs/vdg/html/ch02s04.html

It seems not, unless I'm doing it wrong (which is very likely). I putty'd into unRAID, changed to my /mnt/cache/VM directory, which contains my copied Windows8.img file, then entered the command from your linked page; no luck. Tried changing .raw to .img, but still no luck:

root@media:/mnt/cache/VM# qemu-img convert -O qcow2 Windows8a.raw Windows8.qcow2
qemu-img: Could not open 'Windows8a.raw': Could not open 'Windows8a.raw': No such file or directory
qemu-img: Could not open 'Windows8a.raw'
root@media:/mnt/cache/VM# qemu-img convert -O qcow2 Windows8a.img Windows8.qcow2
qemu-img: Could not open 'Windows8a.img': Could not open 'Windows8a.img': No such file or directory
qemu-img: Could not open 'Windows8a.img'
MyKroFt Posted June 18, 2014

Worked for me:

qemu-img convert -O qcow2 ArchVM-data.img ArchVM-Data.qcow

qemu-img info ArchVM-Data.qcow
image: ArchVM-Data.qcow
file format: qcow2
virtual size: 75G (80530636800 bytes)
disk size: 72G
cluster_size: 65536
Format specific information:
    compat: 1.1
    lazy refcounts: false

Myk
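A hedged variant, untested in this thread: qemu-img convert can also compress the qcow2 output if saving space matters more than later I/O speed:

qemu-img convert -c -O qcow2 ArchVM-data.img ArchVM-Data.qcow2   # -c writes compressed qcow2 clusters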
JustinChase Posted June 18, 2014

That's a data image, meaning a blank storage 'drive', correct? It's not the actual VM image, or am I misunderstanding? I don't have a -data.img file, I only have a Windows8.img file and an Arch5.img file. I only tried the Windows8.img file, which failed. Perhaps I'll stop my Arch5 VM, then try again with that.

*** Might be user error, will try again in a couple minutes.
JustinChase Posted June 18, 2014

yep, user error, it's converting as we speak
JustinChase Posted June 18, 2014

Worked for me

While I wait for the conversion to finish, and before I try converting my Arch image, I wanted to ask if you're still using Xen, or have you moved to KVM? If still using Xen, can you confirm the converted image worked fine? Also, I think I remember you're using NFS shares, not SMB, correct? I'm having some issues with my Arch.img with SMB shares, so I just changed to use NFS, but it's starting as I type this, so I still have to test whether that's any better.
MyKroFt Posted June 18, 2014

Am reconverting now, as I made some changes to the original .img file. I am still using Xen, and CIFS shares. Will post here shortly whether both the VM and DATA img files convert and work correctly.

Myk
MyKroFt Posted June 18, 2014

Am going to assume we have to use "virsh" to be able to use this new container format?

Myk
JustinChase Posted June 18, 2014

re: virsh - good point, I'll have to re-read that section.

re: my conversion - it's at 38GB now, so almost done (the original is 40GB). However, I thought one of the advantages of the qcow2 format was that it didn't automatically allocate all the space, only what was actually required by the VM. I may not be explaining that right, but I expected the new image to be smaller. Not a big deal, just an unexpected thing I saw while waiting.

** Just finished...

root@media:/mnt/cache/VM/Windows8a# qemu-img info windows8.qcow
image: windows8.qcow
file format: qcow2
virtual size: 40G (42949672960 bytes)
disk size: 37G
cluster_size: 65536
Format specific information:
    compat: 1.1
    lazy refcounts: false

It seems my disk size is almost the full 40GB; not sure why, as it's a pretty basic Windows 8 install. I'll have to look into it in more detail later. Anyway, off to investigate the virsh info...
MyKroFt Posted June 18, 2014

I wondered about that also, but it might depend on how the .img was created.

I tried making a 180GB blank data container and added it to the .cfg as a second data drive, made the partition and formatted it, but it was only 195kb (the same size as the file on the SSD) and didn't auto-grow, so I don't know if I set it up correctly. Does Xen know this container format by default?

Myk
jonp Posted June 18, 2014

I wondered about that also, but it might depend on how the .img was created. I tried making a 180GB blank data container and added it to the .cfg as a second data drive, made the partition and formatted it, but it was only 195kb (the same size as the file on the SSD) and didn't auto-grow, so I don't know if I set it up correctly. Does Xen know this container format by default? Myk

You only manage VMs with virsh in beta 6, not containers. Containers are managed by Docker. We didn't include the LXC management tools with libvirt as they would conflict with Docker...

Sent from my Nexus 5 using Tapatalk
MyKroFt Posted June 18, 2014

my data image is a RAW .img - I take it that has to stay the same then?

disk = [ 'phy:/mnt/appdisk/VM/ArchVM/ArchVM.img,xvda,w', 'phy:/mnt/appdisk/VM/ArchVM/ArchVM-data.img,xvdb,w' ]

Myk
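For anyone trying this later: the phy: prefix in that disk line is the raw/block backend, so a qcow2 file most likely cannot just be dropped in there. With xl, a qcow2 image would instead go through the qemu disk backend with the format spelled out. A hedged sketch based on the xl disk-configuration syntax, untested on this beta (the .qcow2 filename is illustrative):

disk = [ 'phy:/mnt/appdisk/VM/ArchVM/ArchVM.img,xvda,w',
         'format=qcow2, vdev=xvdb, access=rw, target=/mnt/appdisk/VM/ArchVM/ArchVM-Data.qcow2' ]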
JustinChase Posted June 19, 2014

with the change in qemu, do I need to change my cfg file for my windows VM?

name = 'windows8'
builder = 'hvm'
vcpus = '4'
memory = '2048'
#maxmem = '6144'
localtime = 1
device_model_version="qemu-xen-traditional"
disk = [ 'phy:/mnt/cache/VM/Windows8/windows8.img,hda,w' ]
boot = 'c'
xen_platform_pci='1'
on_poweroff = 'destroy'
on_reboot = 'restart'
on_crash = 'restart'
vif = [ 'mac=00:16:3E:C8:C8:C8,bridge=br0,model=e1000' ]
acpi = '1'
apic = '1'
sdl = '0'
stdvga = '0'
viridian = '1'
vnc = '1'
vnclisten = '0.0.0.0'
vncpasswd = ''
vncdisplay = 1
usb = '1'
usbdevice = ['tablet','host:0a12:0001','host:045e:0745','host:174c:5106','host:045e:00db','host:147a:e03e']
pci = [ '00:14.0','01:00.0','01:00.1','09:00.0','00:1b.0' ]

or, do I need to convert to virsh right now to get the windows.img file working with this new version? Family wants to watch a movie, and the windowsVM seems to have no internet access, so I'm dead in the water right now.
jonp Posted June 19, 2014

with the change in qemu, do I need to change my cfg file for my windows VM?

name = 'windows8'
builder = 'hvm'
vcpus = '4'
memory = '2048'
#maxmem = '6144'
localtime = 1
device_model_version="qemu-xen-traditional"
disk = [ 'phy:/mnt/cache/VM/Windows8/windows8.img,hda,w' ]
boot = 'c'
xen_platform_pci='1'
on_poweroff = 'destroy'
on_reboot = 'restart'
on_crash = 'restart'
vif = [ 'mac=00:16:3E:C8:C8:C8,bridge=br0,model=e1000' ]
acpi = '1'
apic = '1'
sdl = '0'
stdvga = '0'
viridian = '1'
vnc = '1'
vnclisten = '0.0.0.0'
vncpasswd = ''
vncdisplay = 1
usb = '1'
usbdevice = ['tablet','host:0a12:0001','host:045e:0745','host:174c:5106','host:045e:00db','host:147a:e03e']
pci = [ '00:14.0','01:00.0','01:00.1','09:00.0','00:1b.0' ]

or, do I need to convert to virsh right now to get the windows.img file working with this new version? Family wants to watch a movie, and the windowsVM seems to have no internet access, so I'm dead in the water right now.

Justin, drop this:

device_model_version="qemu-xen-traditional"

Not necessary anymore. Still reviewing the rest, but start with that. I'll help you through this crisis! Also, you do NOT need to use virsh with Xen in this beta. You can use either virsh or xl to control VMs. You may also be able to drop the "e1000" designation from your network card. Did this VM have the GPLPV drivers installed?
JustinChase Posted June 19, 2014

thanks for the late evening help!!! I'll make the changes now, and report back. yes, this VM has the GPLPV drivers installed already.
JustinChase Posted June 19, 2014

You can create a unique ID with uuidgen.

Worked great, thanks!
jonp Posted June 19, 2014

thanks for the late evening help!!! I'll make the changes now, and report back. yes, this VM has the GPLPV drivers installed already.

No problem. Let me know how this goes. Also, you can probably drop a bunch of other stuff from the config as well and it'd work just the same, if not better. Here's what I would use (I edited yours):

name = 'windows8'
builder = 'hvm'
vcpus = '4'
memory = '2048'
#maxmem = '6144'
localtime = 1
disk = [ 'phy:/mnt/cache/VM/Windows8/windows8.img,hda,w' ]
boot = 'c'
xen_platform_pci='1'
on_poweroff = 'destroy'
on_reboot = 'restart'
on_crash = 'restart'
vif = [ 'mac=00:16:3E:C8:C8:C8,bridge=br0' ]
viridian = '1'
vnc = '1'
vnclisten = '0.0.0.0'
vncpasswd = ''
vncdisplay = 1
usb = '1'
usbdevice = ['tablet','host:0a12:0001','host:045e:0745','host:174c:5106','host:045e:00db','host:147a:e03e']
pci = [ '00:14.0','01:00.0','01:00.1','09:00.0','00:1b.0' ]

Also, I'm curious about your USB pass through: you have usb=1 and VNC, plus are you doing a GPU pass through? Is this working properly for both VNC access as well as your USB input devices like mouse and keyboard? Or are you passing through the PCI controller for the USB devices themselves?
JustinChase Posted June 19, 2014

Nope, now the VM won't start at all. When I do xl top, I see this message repeating every few seconds...

Found interface vif1.0 but domain 1 does not exist.

Should I replace device_model_version="qemu-xen-traditional" with a different 'version' instead? Perhaps with device_model_version="qemu-xen"? I'll try that, to see what happens.
bkastner Posted June 19, 2014

re: virsh - good point, I'll have to re-read that section. re: my conversion - it's at 38GB now, so almost done (the original is 40GB). However, I thought one of the advantages of the qcow2 format was that it didn't automatically allocate all the space, only what was actually required by the VM. I may not be explaining that right, but I expected the new image to be smaller. Not a big deal, just an unexpected thing I saw while waiting. ** Just finished...

root@media:/mnt/cache/VM/Windows8a# qemu-img info windows8.qcow
image: windows8.qcow
file format: qcow2
virtual size: 40G (42949672960 bytes)
disk size: 37G
cluster_size: 65536
Format specific information:
    compat: 1.1
    lazy refcounts: false

It seems my disk size is almost the full 40GB; not sure why, as it's a pretty basic Windows 8 install. I'll have to look into it in more detail later. Anyway, off to investigate the virsh info...

I am going to guess that while the CoW format allows for thin provisioning, it is not going to dynamically check the existing volume and collapse the free space when converting. Since you had a 40GB disk image that was using 40GB of space, the new file will do the same. It's only when you are creating a brand new file that it's going to start tiny and grow with usage.
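One likely explanation, offered as a hedged follow-up rather than anything tested in this thread: qemu-img convert only leaves out blocks that actually read as zeros, and free space inside a guest that once held data is not zeroed, so it gets copied across. Zeroing the guest's free space before converting should let the qcow2 come out closer to the real usage. A rough sketch, with the tool names as suggestions rather than something from the posts above:

# inside a Windows guest, zero the free space first (SDelete is a Sysinternals tool):
#   sdelete -z C:
# inside a Linux guest, one crude equivalent:
#   dd if=/dev/zero of=/zerofile bs=1M; rm /zerofile; sync
# then shut the VM down and convert again on the unRAID host:
qemu-img convert -O qcow2 Windows8a.img Windows8.qcow2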
Archived
This topic is now archived and is closed to further replies.