[XEN VM IMG] ArchVM <--- deprecated 01/07/2014



New version is out. No need for existing people to upgrade, there are no major changes.

 

https://drive.google.com/file/d/0Bz5njPYYj26iaDNFb1JXZk8xcEU/edit?usp=sharing

http://unraidrepo.ktz.me/archVM/ArchVM_v3.zip

 

Changes:

 

  • Reduced rootfs to 770M
  • Introduced a naming scheme for releases
  • Discovered an NFS bug in unRAID v6; the temporary workaround for now is to mount shares over Samba manually

 

Mount your Samba shares like this:

 

mount -t cifs //Tower/data /mnt/tmp -o guest,rw


Hey all!

 

Thanks to everyone for getting this going. I was just embarking on trying to set this up on my own when you guys (Tom included!) jumped in and got the ball rolling.

 

Regarding OpenVPN: it was actually quite easy to set up. I don't remember all the steps, but I'll have a look through my logs and see if I still have the details. I'm using it to make an outbound VPN connection (via privateinternetaccess.com). I'm still testing, but it seems solid. It was actually quicker to get working than some of the other packages, probably because I built it from scratch and knew where everything was going (versus the packages, where I had to dig to find out).

 

I have one suggestion for the sabnzbd package: can we change the default in the conf file to allow access from remote hosts (i.e. host = 0.0.0.0 instead of host = localhost)? Since this is destined for a headless server, it was a bit frustrating to install the package, see it running, but not know why the web page wouldn't open to configure it.
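For reference, the setting being suggested lives in SABnzbd's ini file (commonly named sabnzbd.ini; the exact path varies by install, so treat the location and the port value here as assumptions):

```ini
[misc]
# Bind the web UI to all interfaces so a headless server
# can be configured from another machine on the LAN
host = 0.0.0.0
port = 8080
```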

 

Anyway, thanks again!

 

 

 

Hi there, how difficult would an OpenVPN server and SQL be to install?

 

 

Thornwood


Ironic, how can you permanently mount the share? This gets lost on reboot.

 

 


Edit /etc/fstab with this info, but because the network takes a while to come up (about 30-60 seconds) this may adversely affect your boot time, as Arch will probably just sit there appearing to hang.
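A sketch of such an fstab entry, using the share from the earlier example. The noauto,x-systemd.automount pair defers the mount until first access, which is one way to avoid the boot-time hang (this assumes systemd is handling mounts, which is the case on Arch):

```
# /etc/fstab - mount on first access rather than at boot,
# so the slow network bring-up doesn't stall startup
//Tower/data  /mnt/tmp  cifs  guest,rw,noauto,x-systemd.automount,_netdev  0  0
```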

 

I'll work on this some more when I get back on Friday. This was the main reason autofs was so attractive; if another one of you can get it working, that would be fantastic.

 


After mounting the share:  mount -t cifs //Tower/data /mnt/tmp -o guest,rw

 

Trying to change the temp directory in SABnzbd to /mnt/tmp, I get this error:

 

2014-02-04 17:52:31,893 ERROR: download_dir directory: /mnt/tmp error accessing

 

Not sure if that is what I should put in SABnzbd for the temporary folder.
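That error just means SABnzbd could not access the directory. A minimal sketch of the same check (the helper name and the nonexistent path are made up for illustration):

```shell
# Mimic the check sabnzbd effectively performs: download_dir must
# both exist and be writable by the user running the daemon.
check_dir() {
    if [ -d "$1" ] && [ -w "$1" ]; then
        echo "ok: $1"
    else
        echo "error accessing: $1"
    fi
}

# Demonstrate with a directory that exists and one that does not
demo_dir=$(mktemp -d)
check_dir "$demo_dir"
check_dir /mnt/no_such_dir
rm -rf "$demo_dir"
```

If /mnt/tmp exists but fails the writable test, one thing to try (an untested suggestion, not confirmed in this thread) is remounting with uid=/gid= options set to the user that runs SABnzbd, since a guest CIFS mount typically maps files to root.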

 

 


Just curious. Because we all seem to agree that there is no need for a cache drive, is it possible to instead just have a pseudo cache drive for Xen? Meaning an extra drive (cache-like, but not a cache drive) that we then assign to Xen VMs completely outside of unRAID? Make sense? This would be the same as any other server setup. I have Arch running, added a domU VM, then added a second disk used as storage, partitioned it, and added the configuration to my VM for use.

 

I would think similarly with unRAID, where unRAID knows about the drive but doesn't touch it at all; instead only our domUs care about it. Not sure if it is a good or bad idea, so just throwing it out there more as a thought.


Just exclude your VM drive from the shares on the array. I've adopted this method for my dev machine and it works great.

 

I'm also going to investigate different image container formats for the VM over the weekend. Again, if someone else would like to save me some effort and make researched suggestions...

 


Hi ironicbadger, any idea if blktap2 is included in dom0? If so, you could investigate using the VHD disk format. Unfortunately I don't have a test unRAID system so I can't check this myself. Taken from a post I found:

 

"You can use upstream qemu to convert VHDs into raw images. For example:

 

qemu-img convert -f vpc -O raw win7.vhd win7.raw

 

Also you can use VHD images directly even with open source Xen; something like the following line in your VM config file should work:

 

disk = [ 'tap:vhd:/root/images/win7.vhd,hda,w' ]

 

Keep in mind that in order to use VHDs you need a dom0 kernel with blktap2 support. Blktap2 is not upstream yet, so if you download Linux 3.0.0 and compile it yourself, blktap2 is not going to be present. However, most Linux distros that provide a dom0 kernel, like Debian, SUSE and Gentoo, also provide blktap2 in their kernel package." - Stefano (on an unrelated private thread)


You can also use VBox utility functions to convert if Xen can't handle it.

 

What tools are you referring to? If I could move my Ubuntu server VM over, that would be great.

VBoxManage has some commands you can use, but I don't have my text printout handy at the moment to tell you. If I remember correctly it will convert from any format it supports to raw, which I think is what Xen wants. If I'm wrong on that, I'm sure Xen supports another format that VBox can convert to. But quite frankly, I would expect Xen to support the same formats as VBox anyway, so you may not have to convert in the first place. I'm just suggesting the conversion as an option if importing it into Xen doesn't work.
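For reference, a sketch of the conversion being described (the filenames are hypothetical; VBoxManage's clonehd subcommand can write out a raw image that Xen attaches directly):

```shell
# Convert a VirtualBox disk to a raw image for Xen.
# "ubuntu-server.vdi" is a placeholder for the actual disk file.
VBoxManage clonehd ubuntu-server.vdi ubuntu-server.img --format RAW
```

This obviously requires VirtualBox to be installed wherever the conversion runs.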

I will save you all a bunch of time / trouble.

 

1. You can use VHD, QCOW2, thin-provisioning, CoW, etc.

 

2. You need qemu installed to create qcow2.

 

3. There are other tools for VHDs.

 

4. You need to boot them with pvgrub.

 

5. You are going to need multipath and other packages if you ever want to mount / access the partitions within the VM.

 

6. Yes, you can convert images from VHD, QCOW2, VMDK to RAW format, but it isn't the greatest tool. There are other open-source ones that are much better. Also, you will be running your VMs in HVM and not PV, which means unRAID (the host) has to work a lot harder and your VMs are slower.

 

7. It's best to create a fresh VM from scratch with the Xen PV drivers installed (like ironic's). It's very easy to install a Linux VM with the Xen drivers... Do not bother creating images in VMware and then using QEMU tools that are designed for KVM to convert them over.

 

8. Leave the VM Appliance images small.

 

Let the users create their own "data drive" image at the size they want and need. You can mount more than one "drive" in your VM using the VM cfg file. Mount the "data" drive (the image file you create) as "xvdb" and leave the VM Appliance as "xvda". Your "data" drive is where all your stuff is downloaded / stored before your apps or a cron job move it to unRAID for you. Instead of people arguing over what size the VM Appliance should be, let your users decide for themselves.

 

9. Jumping through hoops for QCOW2, VHD, etc. isn't worth the trouble, and using step 8 above you avoid the necessity of creating thin-provisioned images that are designed for KVM. Xen DOES NOT recommend you use them in a production environment.

 

10. Why go spend $100+ on an even bigger cache drive to store downloads that you are going to copy over to your unRAID anyway? Why not just download them to your unRAID instead? Do you value this data?

 

  a) Having your drive(s) on your unRAID server spin down does not increase how long they will last.

 

  b) The amount of money you spend on a new cache drive is probably 4X the cost of what you would spend for electricity for a year.

 

  c) When your files are done downloading, renamed, copied where they should be... The unRAID drive(s) will spin down.

 

11. Instead of using file-based protocols like NFS or Samba... it's time to look at block-based protocols too.
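A sketch of the two-disk layout point 8 describes, as it might appear in a Xen domU cfg file (the paths and image names are hypothetical; the data image would be created beforehand, e.g. with qemu-img):

```
# Hypothetical domU cfg fragment: the appliance image stays small as xvda,
# while a user-sized data image is attached as xvdb
disk = [
    'file:/mnt/vmstore/archvm.img,xvda,w',
    'file:/mnt/vmstore/data.img,xvdb,w',
]
```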


From what I'm reading, this image is paravirtualized; is it true that no motherboard support for virtualization is needed? I have hardware that supports it but my motherboard does not. I do not plan to pass through any devices, as I would think that would still require the support, or is that incorrect?

 


http://lime-technology.com/forum/index.php?topic=31716.0

 


Ironic, this is great and I'm looking forward to trying it out when I have some time. I have been planning on building something similar myself as a learning exercise (and still probably will), but this is a nice way to get my feet wet with v6 and Arch.

 

One question, though: is Arch a good long-term option for an appliance VM? Maybe I'm way off here, but I would think a stable, server-oriented operating system would be better suited to this type of application (CentOS minimal, or similar): far less likely to break when updating, with a guaranteed long support life. Would that be better suited to the average unRAID user looking to offload extended functionality into a VM?

 

Perhaps it is trivial for the maintainer of the VM appliance to update the image monthly, but I don't think it would be practical for each user to start from scratch and reconfigure or migrate settings into a new image. Maybe I'm missing something simple; it's possible, I'm learning.

 

The beauty here is that there's nothing stopping someone from creating another image. Arch may be a nice starting point for those technically inclined people who prefer the customization, control and cleanliness it brings, but a system with better long-term support and simpler/safer upgrades for non-technical users could prove useful long term.

 

Keep up the awesome work.  Thanks!


Great points. As you say, the future is bright! This image, at least for now, is designed to be a simple, proof-of-concept, get-your-feet-wet type of deal. Once the betas are moving along towards RCs, the image itself will likely change little. I'm going to explore having a separate /home partition set up so that users don't lose their configurations.

 

As for Arch vs whatever else, that's up to you. I find Arch a piece of cake; sure, there may be the odd breakage, but we can deal with those as / when / if they occur. I myself only moved onto Xen in August and learned everything I know about Arch in that time (before this I was seriously Windows / Mac only). I now consider myself trilingual. The choice is yours.

 

 

NFS being read-only is a bug in unRAID, not my image. Tom is aware and working on it.

 


One question though:  Is Arch a good long term option for an appliance VM?

 

Arch is a good choice because many of the applications we use are "cutting edge".

 

Maybe I'm way off here, but I would think a stable server-oriented operating system would be better suited to this type of application (Centos minimal, or similar). Far less likely to break when updating, guaranteed long support life.

 

Depends what you plan on running in your VM.

 

If you choose CentOS and want XBMC, Sickbeard, CouchPotato, ownCloud, etc., or any number of the apps that people here typically use... good luck finding a guide / package to install them. XBMC there is stuck at version 11, and you will be compiling all those apps and updating them manually.

 

Now, if you are going to put up a web server / blog / forum / etc. and expose it to the internet... I think something like CentOS (that you secure / harden) is a great choice.

 

Would that be better suited to the average unraid user looking to offload extended functionality into a VM?

 

Once you have a VM appliance you pretty much just go to the various web GUIs to access the apps; you don't really need to fool with the OS much. Also, there will be tons of stuff on here about Arch, and their wikis are probably the best out there, beside maybe Gentoo's.

 

Go look up how to install Sickbeard or turn on NFS in Ubuntu... you are going to get 100+ different answers, and 50 of them are going to be incorrect or for the wrong version of Ubuntu.

 

Another thing about Arch... once you know it, you know CentOS, Fedora, Red Hat, openSUSE, etc., because they are "true" to Linux and all work / use the same programs (aside from the package manager). Whereas Ubuntu is what I consider a "fork" of Linux, and custom in many ways.

 

Perhaps it is trivial for the maintainer of the VM appliance to update the image monthly, but I don't think it would be practical for each user to start from scratch and reconfigure or migrate settings into a new image.  Maybe I'm missing something simple, it's possible;  I'm learning.

 

The beauty of Arch is that it's a rolling release and there are no versions / releases.

 

To update the system, the user just has to type "pacman -Syyu" and it will update Arch and the apps.

 

The beauty here is that there's nothing stopping someone from creating another image.  Arch may be a nice starting point for those technically inclined people that prefer the customization, control and cleanliness it brings, but a system with better long term support, simpler/safer upgrades for non-technical users could prove useful long term.

 

When you say long-term support, you have to explain. Arch isn't a flash-in-the-pan Linux distro that is going away. It's just as stable as any other distro if Ironic sets it up / maintains it that way.

 

You mentioned CentOS, but with the partnership with Red Hat it is now going to be a "development / testing" distro for Red Hat, so I would start to consider it about as stable as Fedora (Red Hat's other development / testing distro).

 

Think of it this way... fast forward 2+ years and suppose Ironic had chosen to use Ubuntu.

 

In that time there will be 4 releases of Ubuntu. Can you imagine having to update, maintain and upgrade all the packages for 5 separate versions of Ubuntu? He would have to, because there are plenty of users here still running unRAID 4.7 and betas / release candidates of 5.0, and they probably won't update / upgrade Ubuntu 13.10 till Ubuntu 23.10 is out.

 

Not to mention that in one of those releases Ubuntu is going to switch to systemd. That in and of itself is a MAJOR change, and pretty much everything you have ever read / seen on the internet about installing apps in Ubuntu will not work anymore.
