Calling All Xen Users...Feedback Needed...


jonp


OK guys, here's the quick-and-dirty process for converting a Windows Xen VM to KVM.  Please test this for me.  I will be adding it to the wiki, and I just recorded a video of me doing it, so I will upload that later if desired.  Here's the procedure.

 

BEFORE STARTING

Make a copy of your existing virtual disk(s) and use the copy to test the conversion process.  If anything goes wrong, you can revert to the original VM under Xen mode.  In addition, I recommend removing any PCI device pass through from your configuration before doing the conversion.  Your devices can then be re-added in KVM after the conversion is complete.

 

STARTING POINT

Your server should be booted in Xen boot mode and your VM started.  Connect to your VM over VNC to perform the steps described.

 

Step 1:  Determine if your VM uses GPLPV drivers

1 - From within your Xen VM, open Windows Device Manager (click Start -> right-click on Computer -> click Manage)

2 - Expand the node for Network adapters and note the name.  If the name of the network device contains "Xen", then you are using GPLPV drivers.  Anything else means you are not.

 

NOTE:  IF YOU ARE NOT USING GPLPV DRIVERS, YOU CAN SKIP THE NEXT SEVERAL STEPS AND RESUME THE PROCEDURE FROM REBOOTING INTO KVM MODE.

 

Step 2:  Prepare Windows 7 for driver removal

1 - Open a command prompt, running it as administrator (click Start -> click All Programs -> click Accessories -> right-click Command Prompt -> click Run as administrator)

2 - Type the following command from the prompt:

bcdedit -set loadoptions nogplpv

3 - Reboot the VM

 

Step 3:  Download the uninstaller and remove the GPLPV drivers

1 - Once rebooted, open a browser and download the following zip file:  http://www.meadowcourt.org/downloads/gplpv_uninstall_bat.zip

2 - Extract the uninstall_0.10.x.bat file to your desktop

3 - Right-click on this file and click Run as administrator (this will finish very quickly)

4 - Reboot your VM

5 - After rebooting, open up Windows Device Manager again.

6 - Under the System devices section, right-click on Xen PCI Device Driver and select Uninstall; in the confirmation dialog, check the box to Delete the device driver software for this device.

7 - Shut down the VM

 

Step 4:  Reboot your server into KVM mode

1 - Navigate your browser to the unRAID webGui, click on Main, then click on Flash under the Devices column.

2 - Under Syslinux Configuration, move the line "menu default" from under "label Xen/unRAID OS" to be under "label unRAID OS".

3 - Click Apply

4 - Reboot your unRAID server
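For reference, here is a sketch of what the relevant section of syslinux.cfg typically looks like before and after step 2.  The kernel/append lines are illustrative and may differ on your flash drive; the only change you make is which label carries "menu default":

```text
# Before (Xen is the default boot mode)
label unRAID OS
  kernel /bzimage
  append initrd=/bzroot

label Xen/unRAID OS
  menu default
  kernel /syslinux/mboot.c32
  append /xen --- /bzimage --- /bzroot

# After (plain unRAID OS, which supports KVM, is the default)
label unRAID OS
  menu default
  kernel /bzimage
  append initrd=/bzroot

label Xen/unRAID OS
  kernel /syslinux/mboot.c32
  append /xen --- /bzimage --- /bzroot
```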

 

Step 5:  Create a new VM with the VM Manager

1 - If you haven't already, follow the procedure documented here to enable VM Manager

2 - Click on the VMs tab and click Add VM.

3 - Give the VM a name and if you haven't already, download the VirtIO drivers ISO and specify it

4 - Under Operating System be sure Windows is selected.

5 - Under Primary vDisk Location, select your Xen virtual disk.

6 - Add an additional vdisk and give it a size of 1M (you can put this vdisk anywhere; it is temporary).

7 - Leave graphics, sound, etc. at their defaults and click Create.

8 - Upon creation, immediately force-shutdown the VM (click the eject symbol from the VMs page)

9 - Click the </> symbol from the VMs page next to the VM to edit the XML.

10 - Locate the following section of code:

 

    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='none' io='native'/>
      <source file='/mnt/cache/domains/yourxenvm/yourxenvm.img'/>
      <target dev='hdb' bus='virtio'/>
      <boot order='1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </disk>

 

11 - Delete the entire <address> line and change the bus in the <target> from virtio to ide.  From the above example, our new <disk> would look like this:

 

    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='none' io='native'/>
      <source file='/mnt/cache/domains/yourxenvm/yourxenvm.img'/>
      <target dev='hdb' bus='ide'/>
      <boot order='1'/>
    </disk>

12 - Click Update
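If you prefer to script step 11, the same edit can be made programmatically.  A minimal sketch in Python's standard xml.etree, operating on the example <disk> snippet from above (the file path shown is the placeholder from the example, not a real path):

```python
import xml.etree.ElementTree as ET

# The example <disk> element as shown in the procedure above.
disk_xml = """
<disk type='file' device='disk'>
  <driver name='qemu' type='raw' cache='none' io='native'/>
  <source file='/mnt/cache/domains/yourxenvm/yourxenvm.img'/>
  <target dev='hdb' bus='virtio'/>
  <boot order='1'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</disk>
"""

disk = ET.fromstring(disk_xml)

# Step 11a: delete the entire <address> element; libvirt will
# generate a fresh address appropriate for the new bus type.
addr = disk.find('address')
if addr is not None:
    disk.remove(addr)

# Step 11b: change the <target> bus from virtio to ide, so Windows
# can boot before the VirtIO storage driver is installed.
disk.find('target').set('bus', 'ide')

print(ET.tostring(disk, encoding='unicode'))
```

This only transforms the <disk> fragment; you would still paste the result back through the </> XML editor and click Update.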

 

Step 6:  Starting your new VM and loading the VirtIO drivers

1 - From the VMs page, click the play symbol to start the VM.

2 - Click the eye symbol to open a VNC connection through the browser.

3 - When the VM boots up, it will install several drivers and prompt for a reboot; select Reboot later

4 - Open Device Manager again and you'll notice 3 warnings under Other devices (Ethernet Controller, PCI Device, SCSI Controller)

5 - For each device, double click the device, click Update Driver, then select Browse my computer for driver software

6 - Specify a path of d:\WIN7\AMD64\ (or browse to it) and click Next

7 - Select to Always trust Red Hat if prompted.

8 - When all 3 drivers have been loaded, shut down your VM

 

Step 7:  Remove the temporary vdisk and start the VM

1 - Click to edit the VM using the form-based editor (the pencil symbol)

2 - Remove the secondary vdisk

3 - Ensure the primary vdisk is pointing to your original vdisk file (it may be pointing to the secondary vdisk, but you can update this)

4 - When completed, click Update

5 - Start your VM

6 - Verify your device manager shows no warnings

7 - DONE!

 

Final Notes

After this is done, you can shut down the VM and use the form-based editor to add graphics/sound devices to your VM.

 

FYI, I recorded a video of me doing this and even with rebooting the server from Xen to KVM, the entire video is 12 minutes in length.  That's from beginning to end.

Link to comment

I just wanted to share some actual experience since I was able to find some time to switch one of my VMs (in this case an Ubuntu 14.04 desktop).  I do understand the desire to eliminate supporting 2 hypervisors, so please don't consider this just my complaining.  I just wanted to add this to give an example of what switching is like given the current state of unRAID/the tools (and I'm admittedly no unix guru, so I'm sure that added to the time/frustration level).

 

The conversion was, to say the least, painful.

 

After going through the install, I had to search through the forums to find out why my host-mapped (passthrough) drives weren't showing up.  I finally found that I had to add the 9P shares in the guest VM.  Then I ran into the next issue, which was that only one of my CPUs was showing up, so I had to once again search to find the line in the KVM XML to change (through the advanced view), and then manually assign the vCPUs via XML since the GUI wasn't showing all of the CPUs to assign.  After that I tried a few tests and discovered that all new files on the passthrough shares were showing up with a user/group of 1000 because of the UID/GID of the guest user.  After spending a ton of time searching, and trying to modify fstab to fix this issue, I was unable to find a solution there, so I ended up changing the UID/GID of the guest user to match unRAID's nobody/users (99/100).  This introduced a new issue, since users with a UID under 1000 don't show up on the desktop login screen.  The only thing I could come up with to solve this was creating a second user to allow me to log in to a desktop.

 

So after all of that, I have a VM up and running, and I can say that performance seems slightly worse (still running some tests using CPU-heavy video re-encoding).  One of the interesting things is that the stats page now reflects true CPU usage: with a Xen VM, any CPU use was outside of unRAID, so the system stats plugin didn't show it; now, with KVM being part of the kernel, system stats shows the actual CPU utilization.
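For anyone hitting the same first hurdle, adding a 9P share inside a Linux guest is typically a single fstab line.  A hedged sketch, where the mount tag "unraidshare" and the mount point are placeholders (the actual tag comes from the <filesystem> section of your VM's XML):

```text
# /etc/fstab in the guest: mount the 9p export tagged "unraidshare"
unraidshare  /mnt/unraid  9p  trans=virtio,version=9p2000.L,rw  0  0
```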

Were you doing filesystem pass through with Xen (9p)???  How?

 

Under Xen I was just setting it up as an SMB share in the guest VM's fstab.  The thought was that 9P sharing was supposed to be a huge performance boost, so I should use it since it was an option in KVM.  I'm not sure if it's 9P vs SMB sharing that caused the pain with the user or not.

 

9P does not necessarily improve performance, and the permissions issues are 100% the result of 9P sharing.  9P was something we included for folks that wanted to test with it, but from the testing that folks have done, it doesn't yield a performance improvement over SMB/NFS.  It just makes things easier in terms of not needing to traverse a network layer.

 

Please retest your VM using SMB instead of 9P and report your findings on performance.

 

Performance seems significantly better, and it eliminates the whole UID/desktop login issue.  Thanks for the clarification about 9P.

Link to comment

I'm preparing to give this a try.  Lots to backup and prepare for.

 

Where is the best place to get the correct virtio drivers?

 

When you boot into KVM mode and click "add vm", you'll notice that the text for VirtIO Drivers ISO is a hyperlink.  Will take you right to the page to download them.

Link to comment

Performance seems significantly better, and it eliminates the whole UID/desktop login issue.  Thanks for the clarification about 9P.

 

Excellent.  Note that you are stating the performance is better than using VirtFS, but what about noticing any difference now between KVM and Xen?

Link to comment

I am using 9P shares on my Ubuntu server, and CouchPotato always puts the movies as admin:1000.  Do you think if I used SMB shares it would be nobody:users?

Link to comment

I am using 9P shares on my Ubuntu server and CouchPotato always puts the movies as admin:1000, Do you think if I used SMB shares that it would be nobody:users?

 

Yes.

Link to comment

On my Ubuntu VM that I use for building Dockers, I changed the UID and GID for nobody to match unRAID's before I installed anything, on a clean install.

 

I don't know if that's an advisable thing to do, but I've not had any problems.

Link to comment

Failing that, though, you can connect via SMB using uid and gid mount options.

 

It won't look right in the VM but will be fine in unRAID itself.
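A hedged sketch of that kind of mount in the guest's fstab (server name, share, and credentials path are placeholders; uid=99/gid=100 correspond to unRAID's nobody/users):

```text
# /etc/fstab in the guest: CIFS mount forcing unRAID's nobody:users ownership
//tower/media  /mnt/media  cifs  credentials=/root/.smbcreds,uid=99,gid=100  0  0
```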

Link to comment

The only issue with changing the UID/GID is if you want the user to show up for logging into a desktop: with a UID of 99, the user won't show up on the Ubuntu desktop login screen.

Link to comment

Excellent.  Note that you are stating the performance is better than using VirtFS, but what about noticing any difference now between KVM and Xen?

 

I would say performance is comparable (hard to tell if there's a difference now that I've moved away from VirtFS).  One other thing I noticed: unRAID performance on shares seems back to normal now (for example, renaming a file on an unRAID SMB share via my Windows machine used to take a bit of extra time, just enough to notice, probably a half second or so).

Link to comment

Alright, just made the change to SMB and the folders are mounted correctly as nobody:users! I cannot tell you how much of a pain in the butt it has been to have to SSH into unRAID to modify folders due to unRAID not having the correct permissions. Should have looked into this a long time ago. Thanks.

Link to comment
Please post a link to the video.  I'm planning on trying this tomorrow morning when the server is not in use.

Link to comment

Please post a link to the video.  I'm planning on trying this tomorrow morning when the server is not in use.

 

Need to finish editing video tomorrow for upload.  About half done.

Link to comment

Really appreciate you putting this together - makes life very easy!

 

Only 2 things;

 

It was unclear what network bridge I needed to set. xenbr0 already existed, but I left the field blank and all seems well.

 

Step 5.6 had me stumped; it took me a while to figure out how to add in the extra vdisk. That could be clarified.

 

Otherwise, all good. Now to see if it works with Win8 and Linux.

 

(however, I'm having trouble re-setting up my device passthroughs, would be grateful if you could have a quick look: https://lime-technology.com/forum/index.php?topic=38259.msg369036#msg369036)

 

Peter

Link to comment

Step 5.6 had me stumped, it took me a while to figure out  how to add in the extra vdisk, could be clarified.

 

Yeah, the video would make that step easier.  I will get that uploaded.

 

Otherwise, all good. Now to see if it works with Win8 and Linux.

 

For Win8, the procedure SHOULD be the same, but I haven't tested it yet.  With Linux, if your Linux VM had a PCI device assigned to it in Xen, then it's super easy.  Just define your VM in KVM and start it up.  No second vdisk, no bat files to run, no drivers to uninstall.  Just create a new VM using VM Manager in KVM and then point to the existing vdisk.  If the Linux VM did NOT have a PCI device assigned and you created it as a paravirtual machine, that's a little uglier and I do not have the process for that documented.  It is specific to the distro of the Linux OS you are using, so I don't see myself documenting that at all (you'll just have to build a new VM for KVM or search for "Xen to KVM <distro name>" on Google).

 

(however, I'm having trouble re-setting up my device passthroughs, would be grateful if you could have a quick look: https://lime-technology.com/forum/index.php?topic=38259.msg369036#msg369036)

 

Peter

 

Will take a look shortly...

Link to comment

 

 


So a couple more updates....

 

The Windows 7 VM has told me that I must re-activate due to hardware changes, and I have 3 days to do so!!

 

I also tried to migrate my ArchLinux/Manjaro/Netrunner VM (which had VGA and USB controller passthrough). Creating a KVM VM and pointing to the disk image didn't work: the VM complained about missing UUIDs just after GRUB. I then tried to reinstall from the original ISO but encountered CPU panics and blank screens.

 

Will keep trying...

Link to comment

 

 

 

 


the windows 7 vm has told me that I must re-activate due to hardware changes and I have 3 days to do so!!

 

I also tried to migrate my ArchLinux/Manjaro/Netrunner VM (which had VGA and usb controller passthrough). Creating a KVM VM and pointing to the disk image didn't work - the VM complained about missing UUIDs just after GRUB. I then tried to reinstall from the original ISO but encountered CPU panics and blank screens.

 


Windows should let you reactivate it. At worst you may have to call them and just tell them you upgraded a few parts.

 

That's interesting about your issue with Arch.  Can you send me a pm with your xen cfg file contents for the VM?

Link to comment

I think I found a reason I need Xen :( ... Booting the non-Xen bzroot, I get a ton of these errors: "amd-vi event logged io_page_fault unraid", and despite my SATA card posting, I don't have access to the drives attached to it (aka my cache).  Log attached and hardware in sig.

 

EDIT: rebooted into Xen mode. Works as it did before. So what's different??? That log is attached for comparison.

syslog.zip

syslog-xen.zip

Link to comment
