Calling All Xen Users...Feedback Needed...


jonp


I think staying with a single hypervisor is indeed the best approach ... although I certainly sympathize with those who have significant investment in building Xen VM's ... especially if they're using paravirtualized OS's where some of the virtualized services are embedded in the guest OS.

 

But I know jonp has put a LOT of time into exploring both of these hypervisors and trust that KVM is the best choice.

 


It seems there's general, but not universal, acceptance of the KVM-only approach.

 

I agree that it makes sense.

 

For my part, I look forward to learning something new and will no doubt find workarounds to any problems that might arise.

 

The only thing I'm looking for is time to manage the process.

 

@Garycase, maybe we'll take the PVR discussion to another thread?


@Garycase, maybe we'll take the PVR discussion to another thread?

 

We can do that via PM's ... just send me a note with your reply and I'll respond via PM

 

I would like to see what you guys come up with, as I do enjoy reading and learning about what others use to record shows.  Maybe an unRAID PVR section to post in?

 

JM


@Garycase, maybe we'll take the PVR discussion to another thread?

 

We can do that via PM's ... just send me a note with your reply and I'll respond via PM

 

I would like to see what you guys come up with, as I do enjoy reading and learning about what others use to record shows.  Maybe an unRAID PVR section to post in?

 

JM

 

Okay, instead of doing this via PM, we'll have the discussion here:

http://lime-technology.com/forum/index.php?topic=39423.0

 

Peter -- I created a separate thread just for our discussion on this.

 


As an FYI, I was able to create a Windows 7 Xen-based VM today, then got that same VM to spin up under KVM without Windows reactivation required (not as daunting as it might seem).  I'm still working out the driver swap between Xen and KVM so that the previous Xen VM can use VirtIO for network and storage access, but I'm confident that'll be figured out soon.


So how difficult will it be to convert my Ubuntu Server with PCI-E NIC pass through to KVM?

 

That actually should be super easy because it's a Linux-based VM with PCI pass through.  This means that your Linux VM is an HVM (not paravirtualized).

 

In theory, you should be able to restart in KVM mode and create a new VM with the VM Manager, but for the primary vdisk, specify the same image file you had with Xen (make a backup of that file before doing this).  The one thing you will have to do is edit your XML to specify the NIC pass through, as we do not offer that capability through the webGui VM editor yet.  To do this, can you give me the PCI address of your NIC device?  It should be in your domain CFG file for Xen.
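For reference, the XML edit being described usually amounts to adding a <hostdev> element under <devices> in the domain XML. A minimal sketch, assuming a hypothetical NIC at host PCI address 02:00.0 (substitute the bus/slot/function that lspci reports for your card):

```xml
<!-- Sketch only: pass through the host PCI device at 02:00.0.
     The address values are placeholders; get yours from lspci. -->
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
  </source>
</hostdev>
```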


 

The reality is that converting from Xen to KVM is dependent on whether we're talking about HVM guests (mainly Windows) or paravirtualized guests (Linux-only).  With PV guests, converting is more involved.  I added the steps required to the wiki:

 

http://lime-technology.com/wiki/index.php/UnRAID_Manual_6#Converting_VMs_from_Xen_to_KVM

 

It ultimately links to a guide on OpenSUSE's site, but as I indicate in the wiki, only certain steps need be performed.  For Windows-based Xen VMs, the process is going to be a little more involved, but I'm working on a guide.

 

I attempted to convert a Xen Ubuntu 14 server image to KVM and found that the instructions in the above link (as far as I can tell) don't apply to Ubuntu.  I couldn't find the /etc/config/kernel file in my paravirtualized installation.  If I'm mistaken, let me know, as I would love to convert my current image to KVM.


 

The reality is that converting from Xen to KVM is dependent on whether we're talking about HVM guests (mainly Windows) or paravirtualized guests (Linux-only).  With PV guests, converting is more involved.  I added the steps required to the wiki:

 

http://lime-technology.com/wiki/index.php/UnRAID_Manual_6#Converting_VMs_from_Xen_to_KVM

 

It ultimately links to a guide on OpenSUSE's site, but as I indicate in the wiki, only certain steps need be performed.  For Windows-based Xen VMs, the process is going to be a little more involved, but I'm working on a guide.

 

I attempted to convert a Xen Ubuntu 14 server image to KVM and found that the instructions in the above link (as far as I can tell) don't apply to Ubuntu.  I couldn't find the /etc/config/kernel file in my paravirtualized installation.  If I'm mistaken, let me know, as I would love to convert my current image to KVM.

 

Ok, I'll let you know what I find.


I attempted to convert a Xen Ubuntu 14 server image to KVM and found that the instructions in the above link (as far as I can tell) don't apply to Ubuntu.  I couldn't find the /etc/config/kernel file in my paravirtualized installation.  If I'm mistaken, let me know, as I would love to convert my current image to KVM.

 

That should be "/etc/sysconfig/kernel", which I couldn't find either.
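For anyone who does have that file (openSUSE-style distros), my understanding is the edit boils down to swapping the Xen PV modules listed in INITRD_MODULES for their VirtIO counterparts and then rebuilding the initrd. A sketch of just the substitution, run against a scratch copy rather than a live system (the exact module names are my assumption, not taken from the wiki):

```shell
# Demonstrate the /etc/sysconfig/kernel edit on a temp copy.
# (Sketch only -- the module lists here are illustrative.)
cfg=$(mktemp)
printf 'INITRD_MODULES="xenblk xennet"\n' > "$cfg"

# Replace the Xen PV disk/net modules with VirtIO equivalents;
# on a real system you would then rerun mkinitrd.
sed -i 's/xenblk xennet/virtio_blk virtio_pci virtio_net/' "$cfg"
cat "$cfg"
rm -f "$cfg"
```

On Ubuntu there is no such file, which is likely why the guide's steps don't map over; Ubuntu's stock kernels generally already ship the VirtIO modules in the initramfs.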


What does it offer? It's up and running.

 

What would it take to switch?  Overcoming inertia!  And time.  But really I want Transmission working; it is not working now on Xen.  So really I just need the time to build a new VM or convert.  All I use is Plex and Transmission, but I want the option of having my VM kick off background Handbrake jobs, and a VM seems better for that.


Here are a couple of use cases (and I realize this is a limited case for sure). 

 

The first case: my test server doesn't have the hardware capability for KVM (it's an Intel Atom).  Having Xen allows me to move to the latest beta and test on a real server, including Docker and PV VMs, to make sure it's stable enough to move to my primary server.  This allowed me to start much earlier in the beta cycle, and to learn about VMs without having to risk downtime on my primary server.

 

The second case is similar to the first.  On my primary server, the CPU supports AMD-V virtualization, but the motherboard does not.  Will this work with KVM?  I'm not sure (and finding time to test while the server is down and my 4 year old can't watch his TV is hard to do), but since Xen has been so stable I've been able to use both HVM (Windows XP for some home automation) and PV VMs (Ubuntu, to familiarize myself with it, try the different desktops, and use Handbrake to re-encode video), including passing through USB devices without an issue.  I also use Docker, but find that for playing with the OS and doing things that don't have a Web UI, my Xen VM is better.  The other advantage of a VM, in my mind, is that if I want to try out different versions of apps, I'm not reliant on someone else to update a Docker container; I can just get the app, install it myself, and make changes to it.

 

So I'm with some of the others who hope that support can be left in for Xen even if there is no additional development.


(and finding time to test while the server is down and my 4 year old can't watch his TV, is hard to do)

Parenting Pro Tip of the Day: "Don't let the inmates run the asylum"

 

Now if you had said wife, then yes, there is nothing to be done  ;D

 

True, but with a 4 year old and 4 month old twins, you gotta pick your battles :)


(and finding time to test while the server is down and my 4 year old can't watch his TV, is hard to do)

Parenting Pro Tip of the Day: "Don't let the inmates run the asylum"

 

Now if you had said wife, then yes, there is nothing to be done  ;D

 

True, but with a 4 year old and 4 month old twins, you gotta pick your battles :)

 

Understand that for sure -- occupying the 4 year old so you can deal with the twins is clearly helpful!

 

I understand this problem pretty well -- we had 4 kids, and I was the oldest of 7, so I often had to help with the younger ones  :)


Here are a couple of use cases (and I realize this is a limited case for sure). 

 

The first case: my test server doesn't have the hardware capability for KVM (it's an Intel Atom).  Having Xen allows me to move to the latest beta and test on a real server, including Docker and PV VMs, to make sure it's stable enough to move to my primary server.  This allowed me to start much earlier in the beta cycle, and to learn about VMs without having to risk downtime on my primary server.

 

The second case is similar to the first.  On my primary server, the CPU supports AMD-V virtualization, but the motherboard does not.  Will this work with KVM?  I'm not sure (and finding time to test while the server is down and my 4 year old can't watch his TV is hard to do), but since Xen has been so stable I've been able to use both HVM (Windows XP for some home automation) and PV VMs (Ubuntu, to familiarize myself with it, try the different desktops, and use Handbrake to re-encode video), including passing through USB devices without an issue.  I also use Docker, but find that for playing with the OS and doing things that don't have a Web UI, my Xen VM is better.  The other advantage of a VM, in my mind, is that if I want to try out different versions of apps, I'm not reliant on someone else to update a Docker container; I can just get the app, install it myself, and make changes to it.

 

So I'm with some of the others who hope that support can be left in for Xen even if there is no additional development.

 

Playing with VMs as an alternative to Docker on systems without hardware-assisted virtualization capability is definitely one use case, but a niche one.  While technology enthusiasts are a key market for us, asking folks to have higher-class hardware for virtual machines isn't unreasonable when there is a viable alternative in Docker containers, which do not require virtualization-capable hardware.

 

The point is that "playing with the OS" isn't a necessity to support running applications such as Plex, ownCloud, Crashplan, BT Sync, and the thousands of others available on the Docker Registry.  If Xen had never been added to begin with and all you had was Docker and KVM, you'd either upgrade your hardware to support KVM or use Docker to achieve your goals.

 

The problem here is simply this:  leaving a major technology in a supported release (when we go RC/final) that we do not plan to support long-term is counter-productive.  We would have to document why there is a Xen boot mode, what it's for, and why new users shouldn't start using it now (because of its planned removal).  Those that don't bother asking may see this boot mode and assume, "oh, I guess I can use Xen for my needs," and when we later remove it, we've upset another user who's become reliant on a deprecated technology.  I realize that for those already using it, it's easy to think solely about your use case and the effort conversion will require, but from our standpoint, that effort pales in comparison to the work involved in supporting two hypervisors in a commercial product.  It comes down to only including technologies in our build that we intend to officially support.  For those that cannot afford the time required to convert their VMs to Docker or KVM, remaining on the current beta build is a viable option.


I started out using XenCenter as a host for unRAID and switched to v6 around beta 6.  I used Xen initially, but in order to use the VM Manager I switched to KVM.  There was a learning curve, but I am comfortable now.  I am also more and more comfortable with Docker containers doing things I used to use VMs for.  I think moving to KVM only is a reasonable decision for LT to make given their position.  Just keep making KVM easier (the increase in ease of use from b14 to b15 should be applauded) and all will be well eventually.

The problem here is simply this:  leaving a major technology in a supported release (when we go RC/final) that we do not plan to support long-term is counter-productive.  We would have to document why there is a Xen boot mode, what it's for, and why new users shouldn't start using it now (because of its planned removal).

 

Jonp,

 

This is pretty lame.  Just remove the Xen boot mode from the syslinux.cfg file and no one would be the wiser except those of us in the know.
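To illustrate what's being proposed, here's a sketch against a mocked-up syslinux.cfg (the stanza below is illustrative, not unRAID's actual file):

```shell
# Strip a "label Xen" stanza from a scratch copy of a mock syslinux.cfg.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
label unRAID OS
  kernel /bzimage
  append initrd=/bzroot
label Xen/unRAID OS
  kernel /syslinux/mboot.c32
  append /xen --- /bzimage --- /bzroot
EOF
# Delete from the Xen label line through the end of its stanza:
sed -i '/^label Xen/,/^  append/d' "$cfg"
grep -c '^label' "$cfg"   # prints 1: only the non-Xen boot entry remains
rm -f "$cfg"
```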

 

Stop trying to justify the move to drop Xen.  We get it.  Just do what is best for LT and those of us on Xen will adapt.  My issue is with the abruptness of the decision.

 

I did invest some time yesterday in converting my ownCloud VM to the ownCloud Docker.  As I suspected, it was not particularly difficult; I just had a learning curve to get over in using ownCloud with MariaDB, but I got it.  I also had to spend a lot of time reloading contacts and the photos we stored on ownCloud.  I then had to update all our calendar, contact, and photo synchronization apps.  All of them on our Android cell phones broke.  The one positive is that I removed a 30GB VM and replaced it with a much smaller-footprint Docker container.

 

This is not how I want or need to spend my time.  I don't know what I will do yet with the Windows VMs.  I see some issues that others are having with KVM implementations so I may wait for the VM Manager to become more mature.  I'm also quite concerned about the performance of my Windows 7 Media Center on KVM.  I've had a difficult time tuning the VM and Xen to get decent performance.  I know your response will be that the performance will be better,  but I won't know until I try it.  Xen actually performs quite well for me.


The problem here is simply this:  leaving a major technology in a supported release (when we go RC/final) that we do not plan to support long-term is counter-productive.  We would have to document why there is a Xen boot mode, what it's for, and why new users shouldn't start using it now (because of its planned removal).

 

Just remove the Xen boot mode from the syslinux.cfg file and no one would be the wiser except those of us in the know.

 

We discussed this and it's a non-starter.  Removing Xen means we can also remove python and perl from our build.  This cuts our image size basically in half.  Xen was the only component we had in the build that had those dependencies.  Going from a 400+MB image to ~230MB image is a huge savings and will be important for folks that have low system requirements.  Previously we had considered making two bzroot files, but with Xen going away, we won't need to do this.  The net difference between a build with KVM tools and without is ~ 32MB.

 

My issue is with the abruptness of the decision.

 

As far as the abruptness, in hindsight, we should have probably dropped Xen a while back, but we were holding out in hope that 4.5 would greatly improve things.  It didn't, which is why we are pulling it.  We didn't come to this decision lightly.  Like I've said to you in PMs, this sucks, but there's no way around it.

 

I did invest some time yesterday in converting my ownCloud VM to the ownCloud Docker.  As I suspected, it was not particularly difficult; I just had a learning curve to get over in using ownCloud with MariaDB, but I got it.  I also had to spend a lot of time reloading contacts and the photos we stored on ownCloud.  I then had to update all our calendar, contact, and photo synchronization apps.  All of them on our Android cell phones broke.  The one positive is that I removed a 30GB VM and replaced it with a much smaller-footprint Docker container.

 

Glad to hear you were able to do this without much difficulty.  Not sure why you had to reload contacts and whatnot, but then again, I'm not an ownCloud expert.

 

I don't know what I will do yet with the Windows VMs.  I see some issues that others are having with KVM implementations so I may wait for the VM Manager to become more mature.  I'm also quite concerned about the performance of my Windows 7 Media Center on KVM.  I've had a difficult time tuning the VM and Xen to get decent performance.  I know your response will be that the performance will be better,  but I won't know until I try it.  Xen actually performs quite well for me.

 

I've spent the last several days working on converting Windows-based Xen VMs to KVM, trying to find a way to make this super easy for folks.  If you didn't install the GPLPV drivers, converting is a piece of cake.  Getting the drivers removed if you did install them, though?  That's proving to be a nightmare.  No matter what I try, the Xen PCI driver sticks around and confuses KVM.  This isn't a KVM issue; it's an issue with the GPLPV uninstaller.  Quite frankly, it doesn't work.  That said, I can report that today I was able to get a new Windows 7 VM set up in KVM without much difficulty (and that's with GPU pass through and VirtIO drivers for network and storage).  I used to have more driver-related issues with Win7, but they seem to have disappeared, probably due to newer VirtIO drivers and the virtualization enhancements in the kernel / QEMU.

 

There is one more solution I'm looking into for converting Windows-based Xen VMs to KVM: virt-v2v.  It could potentially provide a basic command-line tool for converting a vdisk from Xen to KVM, but I need to test it first.  If it doesn't work, then I'll have to move on to documenting KVM and give up on the Xen conversion documentation.
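If virt-v2v pans out, the invocation would presumably look something like the following (an untested sketch based on virt-v2v's general usage; the paths are placeholders, and the real option set should be checked against its man page):

```shell
# Hypothetical virt-v2v run (do not treat as a tested unRAID recipe):
#
#   virt-v2v -i disk /mnt/user/domains/winvm/vdisk1.img \
#            -o local -os /mnt/user/domains/winvm-kvm/
#
# "-i disk" reads a bare disk image; "-o local -os DIR" writes the
# converted image plus a libvirt XML into the output directory.
```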


We discussed this and it's a non-starter.  Removing Xen means we can also remove python and perl from our build.  This cuts our image size basically in half.  Xen was the only component we had in the build that had those dependencies.  Going from a 400+MB image to ~230MB image is a huge savings and will be important for folks that have low system requirements.

 

How low a system requirement could they really have on a system that already requires 64-bit hardware, where a few megs makes any difference at all?  That point seems to be a strawman argument.

 

Fwiw, I agree that you should have dropped xen long long ago.


We discussed this and it's a non-starter.  Removing Xen means we can also remove python and perl from our build.  This cuts our image size basically in half.  Xen was the only component we had in the build that had those dependencies.  Going from a 400+MB image to ~230MB image is a huge savings and will be important for folks that have low system requirements.

 

How low a system requirement could they really have on a system that already requires 64-bit hardware, where a few megs makes any difference at all?  That point seems to be a strawman argument.

 

Our minimum memory requirement for unRAID 5 was 1GB.  We're going from consuming half of that to about a quarter.  I don't know why you're bolding the 64-bit requirement; 64-bit is a CPU requirement, not a memory requirement.  The last 32-bit Intel CPU was released in 2004, so if you bought a system in the last 10 years, unless you got second-hand hardware, you have a 64-bit capable processor.  That said, you may still be using a low amount of system RAM.


We discussed this and it's a non-starter.  Removing Xen means we can also remove python and perl from our build.  This cuts our image size basically in half.  Xen was the only component we had in the build that had those dependencies.  Going from a 400+MB image to ~230MB image is a huge savings and will be important for folks that have low system requirements.

 

How low of a system requirement could they really have on a 64bit required system where a few megs makes any difference at all? That point there seems to be a strawman argument.

 

Our minimum memory requirement for unRAID 5 was 1GB.  We're going from consuming half of that to about a quarter.  I don't know why you're bolding the 64-bit requirement; 64-bit is a CPU requirement, not a memory requirement.  The last 32-bit Intel CPU was released in 2004, so if you bought a system in the last 10 years, unless you got second-hand hardware, you have a 64-bit capable processor.  That said, you may still be using a low amount of system RAM.

 

Now, this is just how I see things, and I suspect you won't agree, but...

 

That only applies if they're also trying to run a system with drives smaller than 2TB, since supporting larger-than-2TB drives forces the system onto newer motherboards, which use newer memory modules, which raises the minimum amount of memory a system can even be built with.  They simply don't make modules small enough for the few megs saved on your bzimages to matter.

 

If you're going to run a system with drives under 2TB and barely any physical memory, then your best bet is to simply remain on 5.x.

