Calling All Xen Users...Feedback Needed...


jonp


We discussed this and it's a non-starter.  Removing Xen means we can also remove Python and Perl from our build.  This cuts our image size basically in half.  Xen was the only component in our build that had those dependencies.  Going from a 400+MB image to a ~230MB image is a huge savings and will be important for folks with low-spec systems.

 

How low could their system requirements really be on a 64-bit-required system where a few megs make any difference at all?  That point seems like a strawman argument.

 

Our minimum memory requirement for unRAID 5 was 1GB.  We're going from consuming half of that to about a quarter.  I don't know why you're bolding the 64-bit requirement.  64-bit is a CPU requirement, not a memory requirement.  The last 32-bit Intel CPU was released in 2004, so if you bought a system in the last 10 years, unless you got second-hand hardware, you have a 64-bit-capable processor.  That said, you may still be running a low amount of system RAM.

 

Now, this is just how I see things, and I suspect you won't agree, but...

 

That only applies if they're also trying to run a system with drives smaller than 2TB, since the requirement to support larger-than-2TB drives forces the system onto newer motherboards, which use newer memory modules, which raises the minimum amount of memory a system can even be built with.  They simply don't make modules that small anymore.  Not small enough for the few megs saved on your bzimages to matter.

 

If you're going to run a system with drives under 2TB and barely any physical memory, then your best bet is simply to remain on 5.x.

 

There are still lots of folks on 5.x using 2TB or smaller drives in their array.  You'd be shocked how many.  For new users this isn't an issue, as I suspect new users wouldn't start off with 2TB drives.  The issue here isn't new users; it's trying to make v6 viable for existing 5.x users as well.


This cuts our image size basically in half.  Xen was the only component in our build that had those dependencies.  Going from a 400+MB image to a ~230MB image is a huge savings and will be important for folks with low-spec systems.

 

Wow.  Let me see if I understand what you are saying.  You are saying that a person with an old 1GB-of-memory system, probably not even 64-bit compatible and not going to use the new V6 features like VMs or Dockers, is so important that you are dropping Xen immediately on those of us, like myself, who have contributed through all the changes of the V6 beta cycle?  All this to save 170MB on a 16GB flash drive, or in the in-memory file system of an outdated machine?  If I were LT I'd rethink my priorities.  I am re-evaluating my own priorities when it comes to unRAID and how much time I'm willing to invest in your efforts.

 

Now you're saying that we will basically have to build new VMs under KVM and start over.  That is one big PIA for me.  I will stay on Xen and possibly move to KVM when I have the desire and time to invest in the effort, or when I am forced to move on.  Suffice it to say that at this point I will wait for LT to sort out the issues with KVM.

 

You are set in this decision.  Just move on and stop coming up with more lame reasons to drop Xen with the RC and/or final release.


... Just remove the Xen boot mode from the syslinux.cfg file and no one would be the wiser except those of us in the know.

 

I don't use Xen, but I certainly agree with this suggestion => or, for that matter, the option could still be there, with an ALL-CAPS caveat that "This feature will be removed in v6.1 -- it is included in this release to allow time for users to move Xen VM's to KVM"
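For what it's worth, that caveat could be carried by the boot menu entry itself.  A rough sketch only, since the label text in /boot/syslinux/syslinux.cfg varies by release; the label string below is an assumption to check against your own file first:

# Hypothetical: flag the Xen boot entry as deprecated instead of deleting it.
# Verify the actual label text in your syslinux.cfg before running anything like this.
grep -n "^label" /boot/syslinux/syslinux.cfg
sed -i 's|^label Xen/unRAID OS.*|label Xen/unRAID OS (DEPRECATED - WILL BE REMOVED IN v6.1)|' \
  /boot/syslinux/syslinux.cfg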

 

... for that matter, a lot of users never look at the UnRAID boot screen after they've first got everything set up (many run headless)

 

 

The decision to only support one hypervisor doesn't surprise me -- in fact, I think it's a good idea.  But, as I noted earlier in this thread, I certainly didn't expect the change to be so abrupt.  At the time, I noted:

...  I'm sure that any changes in the hypervisors will be AFTER v6 final is released; so as a minimum you should be able to keep everything as it is for the foreseeable future.

 

 

But clearly I was wrong ... and I suspect most folks using Xen agree with you when you note:

... My issue is with the abruptness of the decision.

 


... Forgot to mention:  I absolutely agree that doing this just to save ~170MB on a flash drive is irrelevant.

 

Anyone who's booting to v6 is going to have enough memory that this is certainly NOT an issue !!

Although I agree that the RAM is probably not that relevant, it does mean that the packages that would otherwise be included no longer need testing and that could be a significant reduction in testing resource requirements for getting a release out.

Although I agree that the RAM is probably not that relevant, it does mean that the packages that would otherwise be included no longer need testing and that could be a significant reduction in testing resource requirements for getting a release out.

 

True -- IF there are any Xen-related changes.    Otherwise it could simply be left in with a caveat (as I noted above) about no future support and that it was being removed with the next point release.

 

As far as I know, based on Jonp's earlier comment, the only Xen-related issue is they aren't able to integrate it into the new VM Manager => something I doubt any current Xen users care about.

 


What does it offer? It's up and running.

 

This. 

 

If you are going to remove Xen, then it would be nice to have good documentation of the process to convert to KVM.

 

Based on earlier comments and what I've seen on the Internet, removing the GPLPV drivers is not that easy, so LT may have a difficult time providing documentation that will do the job.

 

There is one more solution I'm looking into for converting Windows-based Xen VMs to KVM: virt-v2v.  It could potentially provide a basic command-line tool for converting a vdisk from Xen to KVM, but I need to test it first.  If it doesn't work, then I'll have to move on to documenting KVM and give up on the Xen conversion documentation.
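If it pans out, the invocation would look something like the sketch below -- untested, and the paths and image names are placeholders, not a confirmed procedure:

# Untested sketch of a virt-v2v run against a Xen vdisk; every path here is hypothetical.
# -i disk reads a bare disk image; -o local / -os write the converted guest to a local directory.
virt-v2v -i disk /mnt/user/vms/xen-win7/vdisk1.img \
         -of raw -o local -os /mnt/user/vms/kvm-win7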

 

Jonp has already suggested that he may not be able to come up with a viable solution.


I just wanted to share some actual experience, since I was able to find some time to switch one of my VMs (in this case an Ubuntu 14.04 desktop).  I do understand the desire to eliminate supporting two hypervisors, so please don't consider this just complaining.  I just wanted to give an example of what switching is like given the current state of unRAID and the tools (and I'm admittedly no unix guru, so I'm sure that added to the time/frustration level).

 

The conversion was, to say the least, painful.

 

After going through the install, I had to search through the forums to find out why my host-mapped (passthrough) drives weren't showing up.  I finally found that I had to add the 9P shares inside the guest VM.  Then I ran into the next issue: only one of my CPUs was showing up, so I had to once again search to find the line in the KVM XML to change (through the advanced view) and manually assign the vCPUs via XML, since the GUI wasn't showing all of the CPUs to assign.  After that I tried a few tests and discovered that all new files on the passthrough shares were showing up with a user/group of 1000, because of the UID/GID of the guest user.  After spending a ton of time searching and trying to modify fstab to fix this, I was unable to find a solution there, so I ended up changing the UID/GID of the guest user to match unRAID's nobody/users (99/100).  This introduced a new issue, since users with a UID under 1000 don't show up on the desktop login screen.  The only solution I could come up with was creating a second user so I could still log in to the desktop.
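For anyone following along, this is roughly what the guest-side pieces ended up looking like (a sketch from memory; the mount tag "hostshare", the user name "mediauser", and the VM name are examples, not necessarily what you'd use):

# 1) Make the 9P (virtio) share show up in the guest by adding it to /etc/fstab.
#    "hostshare" is whatever mount tag the share was given in the VM's XML.
echo 'hostshare /mnt/hostshare 9p trans=virtio,version=9p2000.L,rw,_netdev 0 0' >> /etc/fstab
mount /mnt/hostshare

# 2) Expose all vCPUs by editing the libvirt XML directly,
#    e.g. setting <vcpu placement='static'>4</vcpu>:
virsh edit Ubuntu-14.04

# 3) Match the guest user's UID/GID to unRAID's nobody/users (99/100) so new files
#    on the share stop showing up as 1000:1000; note that most desktops then hide
#    the user from the login screen because its UID is below 1000.
usermod -u 99 mediauser
usermod -g 100 mediauser
chown -R 99:100 /home/mediauser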

 

So after all of that, I have a VM up and running, and I can say that performance seems slightly worse (still running some tests using high-CPU video re-encoding).  One of the interesting things is that the stats page now reflects true CPU usage: with a Xen VM, any CPU use was outside of unRAID, so the system stats plugin didn't show it; now, with KVM being part of the kernel, system stats shows the actual CPU utilization.



Were you doing filesystem pass through with Xen (9p)???  How?


Python and Perl are extremely relevant, so even if you remove Xen from unRAID, these must stay.

They are gone. We have a build without them. They can be readded via the boot/extra folder.

 

Unraid OS is not dependent on these. Just plugins.
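For anyone who still needs them, re-adding them should be as simple as dropping the matching Slackware packages into the flash drive's extra folder.  A sketch only; the package file names below are placeholders, so use the versions that match your release:

# Packages placed in /boot/extra are installed automatically at boot.
# File names here are illustrative only.
mkdir -p /boot/extra
cp python-2.7.x-x86_64-1.txz perl-5.xx.x-x86_64-1.txz /boot/extra/
installpkg /boot/extra/*.txz   # or just reboot and let unRAID install them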

This would make my life worse.  :-\

 

Please advertise this when you release the update.

 

I feel a Nerd Pack 2 coming on....



Were you doing filesystem pass through with Xen (9p)???  How?

 

I was just setting it up as an SMB share in the guest VM's fstab.  The thought was that 9P sharing was supposed to be a huge performance boost, so I figured I should use it since it was an option in KVM.  I'm not sure if it's the 9P vs. SMB sharing that caused the pain with the user or not.
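Under Xen it was just a plain CIFS line in the guest's fstab, something like the sketch below (the server/share names and the credentials file are made-up examples):

# Example only: the old SMB mount in the guest's /etc/fstab.
echo '//tower/media /mnt/media cifs credentials=/root/.smbcred,uid=1000,gid=1000,_netdev 0 0' >> /etc/fstab
mount /mnt/media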



 

9p does not necessarily improve performance and the permissions issues are 100% the result of 9P sharing.  9P was something we included for folks that wanted to test with it, but from the testing that folks have done, it doesn't yield a performance improvement over SMB/NFS.  It just makes things easier in terms of not needing to traverse a network layer.

 

Please retest your VM using SMB instead of 9P and report your findings on performance.

Then I ran into the next issue: only one of my CPUs was showing up, so I had to once again search to find the line in the KVM XML to change (through the advanced view) and manually assign the vCPUs via XML, since the GUI wasn't showing all of the CPUs to assign.

 

This is a bug that is fixed for the next release.  It was in how dmacias' code was looking for the number of CPUs.  It did a calculation of cores x threads, but there is an easier method: just count the CPUs directly.  This should work appropriately in the next release.
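In other words, instead of multiplying cores by threads, just ask the kernel for the logical CPU count directly.  For example:

# Two equivalent ways to get the number of logical CPUs the host exposes:
nproc
grep -c ^processor /proc/cpuinfo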

 

There were a few bugs with VM manager, but nothing too crazy.  The big ones are an issue with OVMF mode (which we have fixed internally and I've provided a guide on how to work around it in the meantime while we prep for the next release;  see here:  http://lime-technology.com/forum/index.php?topic=39493.0).  The other one was the number of CPUs reported available on the system.  Also fixed.

 

I would expect your experience with KVM to be much better with these issues resolved.


jonp

 

Can you assure that at the very least there will be a beta version that has BOTH a working/well-tested KVM implementation and toolset AND Xen?

 

This would allow us to boot back and forth between the two when transitioning, rather than having to switch unRAID versions as well.

 

Your post above, indicating the Xen dependencies have been removed and some KVM bugs have already been fixed, suggests to me that even the next release might not have Xen, which would indeed be bad news for us switchers :-(

 

Thanks

 

Peter

 



Sounds like b15 is it.  Again I say the issue I have is the absolute abruptness of this move without any other considerations like Dynamix and plugin dependencies on the removed packages, and how we can migrate Xen VMs.

 

Maybe a good idea to drop Xen, but terrible in its implementation.


 

 


 

This is a bug that is fixed for the next release.  It was in how dmacias' code was looking for the number of CPUs.  It did a calculation of cores x threads, but there is an easier method: just count the CPUs directly.  This should work appropriately in the next release.

Hey, George W. provided me with those calculations. :)


... Sounds like b15 is it.

 

Agree ... EXCEPT that there are clearly some issues in the VM Manager that aren't yet resolved (per Jon's comments)

 

 

... Maybe a good idea to drop Xen, but terrible in its implementation.

 

Agree again -- to do this so abruptly is an unfortunate move.    I'm just very glad I didn't decide to move my virtualization work to UnRAID while it was still in Beta.    My VM's are all in VMware ... and likely to stay there until well after v6 is released and the VM Manager has evolved a good bit.    I'll build a couple to play around with once v6 final is released, but won't move those that I really use for a good while.

 



 

FWIW, I've been using the KVM VM Manager since dmacias first created it.  It worked really well then, and the VM Manager now is really robust.  I wouldn't even consider the issues "issues": you can still pick the CPUs you need even if it doesn't show you all of them, and the OVMF mode issue can easily be solved by following what jonp posted.

 

The VM Manager currently works very well and I have had zero issues with my KVM VMs moving from beta to beta (other than the issues I caused myself  ::)), and I run some complicated VMs.  Sorry for the mostly off-topic rant, but I felt like the VM Manager needed some defending.


 

I will be thinking long and hard before committing myself to any unraid features like VMs again.  I will definitely wait for stability in unraid.  I got rid of two physical computers and put them into Xen VMs.  I thought it was a pretty good idea at the time.  I still do, but I am a bit overwhelmed with the idea of configuring two Windows 7 computers from scratch.  Geez, it takes a day to just apply updates.

 

Maybe KVM will fall out of favor with LT, and then it'll be on to the next bleeding-edge VM technology.


... the VM Manager now is really robust. I wouldn't even consider the issues "issues": you can still pick the CPUs you need even if it doesn't show you all of them, and the OVMF mode issue can easily be solved by following what jonp posted.

 

Whether they're major or minor, they're still issues =>  they'll be fixed in the NEXT release ... and as JonP noted:

I would expect your experience with KVM to be much better with these issues resolved.

 

In other words, Beta 15 is NOT the "... beta version that has BOTH a working/well-tested KVM implementation and toolset AND Xen ..."  that meep is hoping for.    And at this point, it seems that the version with the fixes to KVM JonP noted will NOT include Xen.

 

 

The bugs are relatively minor, but they still require workarounds that make it a lot less convenient.  Hopefully the two JonP noted are the only ones (although there may be others, as he calls the two he mentions the "big ones"):

 

There were a few bugs with VM manager ...

 

... The big ones are an issue with OVMF mode (which we have fixed internally and I've provided a guide on how to work around it in the meantime while we prep for the next release)

 

...The other one was the number of CPUs reported available on the system.  Also fixed.

 

 


 

Maybe KVM will fall out of favor with LT, and then it'll be on to the next bleeding-edge VM technology.

 

A few updates:

 

1)  I was able to successfully convert a Windows-based Xen VM to KVM today without using virt-v2v.  This was a Windows 7 Xen workload (using dlandon's Windows7 domain cfg file as the source template) that had the GPLPV drivers installed.  I was able to remove the drivers, copy the VM to my other test machine running KVM, boot it up, install the VirtIO drivers, and voila, everything works.  The entire process shouldn't take more than 15-30 minutes (probably less if I tried a speed run).  I am documenting it right now and hope some folks like you and meep will test it; there's a rough sketch of the steps after this list.

 

2)  As far as KVM falling out of favor with LT, that's not going to happen.  We have been testing with both hypervisors and the story on Xen / KVM is documented in the blog post in my forum signature, so I won't reiterate how we came to the decision to focus on KVM over Xen.  Read the blog if you want that insight.  I also posted details about the value of KVM as a hypervisor built into the Linux kernel directly in the KVM forum.  The info is out there on why we are supporting KVM and why we like it so much.

 

3)  VMs with KVM are very stable.  More stable than Xen.  I don't have memory issues like I did with Xen and GPU pass through.  Windows task manager natively reports the disk IO and network statistics whereas GPLPV drivers mess that all up with Xen.  I don't have issues with GPU driver installs for the most part.  The only thing you are going to lose with KVM is iGPU pass through and that's because of the sheer hackery the Xen team had to go through to get that to work in the first place.  Alex Williamson from Red Hat has commented on that here just yesterday:  https://bbs.archlinux.org/viewtopic.php?pid=1522474#p1522474 (on why IGD pass through works with Xen and not KVM).
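As promised in (1), here is a very rough sketch of the conversion steps until the proper write-up is posted.  Every path and VM name below is hypothetical, and the GPLPV removal itself happens inside Windows before the Xen VM is shut down for the last time:

# 1) With the GPLPV drivers uninstalled, shut the Xen VM down and copy its vdisk
#    to wherever the new KVM VM will live (paths are examples):
cp /mnt/user/vms/xen/win7/vdisk1.img /mnt/user/vms/kvm/win7/vdisk1.img

# 2) Create a new VM in VM Manager pointing at the copied vdisk.  Keep the disk on
#    emulated IDE/SATA for the first boot so Windows starts without VirtIO storage
#    drivers, then install the VirtIO drivers from the driver ISO and switch the
#    disk and NIC over to virtio afterwards.
virsh list --all   # confirm the new guest is defined and starts cleanly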



Just a point of note: the bugs with OVMF and CPU count detection are issues purely with the webGui side of things and have nothing to do with the underpinnings (KVM/QEMU).  Those components are 100% solid, as proven by the fact that editing the XML works around both issues.

 

Keep in mind that with Xen as it exists right now, the only supported method to install a VM is via a plg file on the Xen Manager built into the webGui.  Otherwise you have to drop to the command line to create the vdisk and the domain cfg file and to register the VM with Xen Manager.  That is FAR more work than what VM Manager and KVM require right now to overcome two minor bugs, one of which doesn't even affect all boot modes and the other of which doesn't affect all users.
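To put that in perspective, the manual Xen route looks roughly like this (file names and cfg contents are illustrative only):

# Sketch of the manual Xen workflow: sparse vdisk, hand-written domain cfg, xl.
truncate -s 30G /mnt/user/vms/xen/win7/vdisk1.img
nano /mnt/user/vms/xen/win7/win7.cfg        # write the domain cfg by hand
xl create /mnt/user/vms/xen/win7/win7.cfg   # register and start the domU
xl list                                     # confirm it's running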

 

So when comparing the VM Manager in beta 15 to Xen Manager in the current implementation, VM Manager is LIGHTYEARS ahead of Xen.


I don't think any of us was suggesting that someone would implement a new Xen.  We just want to be able to move on without enormous amounts of time, effort, and learning curve.

 

1)  I was able to successfully convert a Windows-based Xen VM to KVM today without using virt-v2v. ...

 

This will go a long way to help us transition.


The bugs are relatively minor, but they still require workarounds that make it a lot less convenient.  Hopefully the two JonP noted are the only ones (although there may be others, as he calls the two he mentions the "big ones") ...

 

The reason I posted was that the last couple of posts before mine seemed a little dramatic, IMHO, and if newer users were reading them they might come to the conclusion that the VM Manager is not user-friendly or has major bugs.  I disagree that these bugs make the VM Manager "a lot less convenient" to use; maybe slightly less convenient if you happen to have a CPU that doesn't get calculated correctly or if you really wanted to try OVMF.  Either way, they are not a reason to avoid the VM Manager, again IMHO.  Honestly, if those "bugs" seem like a reason not to try the VM Manager, I would be asking myself whether I should really be testing a beta version of unRAID.

 

So when comparing the VM Manager in beta 15 to Xen Manager in the current implementation, VM Manager is LIGHTYEARS ahead of Xen.


 

Amen!

