Virtualizing unRAID with Xen on Arch Linux, XBMC & Windows with VGA Passthrough



LVM is the BOMB once you understand it and all the features it offers.

 

No you do not need a separate disk. If I were you, I would install Arch into a 10GB partition and leave the rest of the drive(s) blank. This way you can play around with and experiment with LVM without having to reinstall Arch.

 

All awesome info, but being at work, I can't mess with the rest of this, so for now, I will just discuss this point.

 

LVM does sound cool and as noted I will play. That's all this current installation was actually for: playing. I have some VMs running on ESXi, but wanted to change it up; before blowing that away, I wanted to test and play. I am intrigued by the thin provisioning and the snapshot capabilities noted, but before that I just need to get running with the basics.

 

You say to install Arch into a 10GB partition.  I don't know if I can fix it, but I used way more than 10GB.

 

I went off IconicBadger's blog:

 

    Do you want to use a GUID partition table (GPT)? Yes

    /boot partition: 512mb is enough

    /swap: 256mb (or whatever you want)

    / :7500mb

    /home : use the remaining free space, this is where your data will live.

    format as ext4, answering each question as you see fit. make sure you are sure that you’re not going to lose data!!!!

    Select the device name scheme….: PARTUUID

 

So my /home is the whole rest of the disk, leaving no free space, which might be the reason a few of the LVM commands I ran were not working: it is a mounted partition. I assume that to get to the suggested 10GB, /home should be 10GB-ish (10240).

 

Lastly, just thinking as I read: can a PV "pool" have multiple partitions and disks? Or is it just the volume groups that can be made of multiple PVs? I assume the latter.

 

Found this while posting this message.  I got to page 2 and with what you wrote and what is on here, it is becoming much clearer as to what and how the basics work. http://www.howtoforge.com/linux_lvm
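If I am reading that right, it is the latter: a PV is a single partition or disk, and it is the volume group that can span multiple PVs. Something like this (made-up device names):

pvcreate /dev/sdb1 /dev/sdd1          # each partition (or whole disk) becomes its own PV
vgcreate testvg /dev/sdb1 /dev/sdd1   # the volume group is what spans multiple PVs
lvcreate -L 5G -n testlv testvg       # LVs then draw their space from the whole group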

 

looking forward to more messing

 

Link to comment

Resizing the /home partition is easy; my instructions are that way because archboot doesn't allow you to do it any other way without going into manual partitioning (which isn't hard, but was beyond the scope of the end user I wrote the guide for). Load up any Linux distro from a USB key and use fdisk or whatever to resize it. I prefer cgdisk (part of the gptfdisk package) myself, directly on Arch.
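A rough sketch of that resize, assuming /dev/sdc5 is the /home partition and using 10GB as an example target; the filesystem has to be shrunk before the partition is recreated smaller:

umount /home                     # /home must not be mounted while you resize it
e2fsck -f /dev/sdc5              # a forced fsck is required before resize2fs will shrink
resize2fs /dev/sdc5 10G          # shrink the ext4 filesystem first
cgdisk /dev/sdc                  # then delete sdc5 and recreate it at the new, smaller size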

 

That said, there are some nice instructions in this thread that explain it pretty well.

 

https://bbs.archlinux.org/viewtopic.php?id=63313

 

https://wiki.archlinux.org/index.php/GUID_Partition_Table

Link to comment

Haha, might really be starting over this time... I started messing with parted, but /home was mounted. I installed webmin and told it to unmount /home; it wanted me to make sure, so I said to force it, and now /home is gone altogether.

 

If I go into parted now, I only see 4 partitions vs. the previous 5, and fdisk -l is the same way, missing /home... Is it really gone, or can it be saved?

Link to comment

That's called learning! Enjoy.

 

Sent from my Nexus 5 using Tapatalk

 

 

Link to comment

haha, yes.

 

Just curious, before I go getting all crazy, is there any real recovering from this?

 

Disk /dev/sdc: 186.3 GiB, 200049647616 bytes, 390721968 sectors

Units: sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disklabel type: gpt

Disk identifier: 4F0EFB21-51BF-47BB-AA59-A8F375729201

 

Device          Start          End  Size Type

/dev/sdc1        2048        6143    2M BIOS boot partition

/dev/sdc2        8192      2105343    1G EFI System

/dev/sdc3      2107392      2631679  256M Linux swap

/dev/sdc4      2633728    17993727  7.3G Linux filesystem

/dev/sdc5    17995776    29296639  5.4G Microsoft basic data

 

 

 

sdc5 is obviously not right, and during creation I get errors complaining about the superblock. I am OK with learning some more, but I figured if I can save a few steps, I wouldn't mind doing so.

Link to comment

Just curious, before I go getting all crazy, is there any real recovering from this?

 

Since you have a GPT partition table, go to gdisk:

 

gdisk /dev/sdc

 

Change the type of sdc5:

 

t

 

Select Partition 5

 

5

 

Change the type to:

 

8300

 

Format the partition:

 

mkfs.ext4 /dev/sdc5

 

Update your fstab so it points your home folder to the correct UUID:

 

blkid

 

Cut and Paste the UUID for /dev/sdc5 into fstab where you have home listed.

 

nano /etc/fstab
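Putting the whole sequence together (a sketch; note that mkfs.ext4, blkid and the fstab edit are shell commands, so write the new type code with w and quit gdisk first):

gdisk /dev/sdc         # t, 5, 8300, then w to write the table and exit
mkfs.ext4 /dev/sdc5    # format the recreated /home partition
blkid                  # grab the new UUID for /dev/sdc5
nano /etc/fstab        # point the /home entry at that UUID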

Link to comment

Is this normal?

 

Changed type of partition to 'Linux filesystem'

 

Command (? for help): mkfs.ext4 /dev/sdc5

b      back up GPT data to a file

c      change a partition's name

d      delete a partition

i      show detailed information on a partition

l      list known partition types

n      add a new partition

o      create a new empty GUID partition table (GPT)

p      print the partition table

q      quit without saving changes

r      recovery and transformation options (experts only)

s      sort partitions

t      change a partition's type code

v      verify disk

w      write table to disk and exit

x      extra functionality (experts only)

?      print this menu

 

 

 

--- Never mind, needed to quit gdisk first.

 

Link to comment

OK, last step to get back online. Here is what my fstab looks like now:

 

 

#
# /etc/fstab: static file system information
#
# <file system> <dir>   <type>  <options>       <dump>  <pass>
# DEVICE DETAILS: /dev/sdc2 PARTUUID=fc4a690e-b565-45d3-88be-7e22efb7b016 PARTLABEL=UEFI_SYSTEM UUID=5427-4A08 LABEL=EFISYS
# DEVICE DETAILS: /dev/sdc3 PARTUUID=7fb27645-477e-43d6-9aad-68a918f1c02d PARTLABEL=ARCHLINUX_SWAP UUID= LABEL=
# DEVICE DETAILS: /dev/sdc4 PARTUUID=12ecbfba-ad6d-4df4-b6bf-574547c64055 PARTLABEL=ARCHLINUX_ROOT UUID=2f3f945a-2cf8-48ac-b34f-86d91254aaf2 LABEL=ROOT_ARCH
# DEVICE DETAILS: /dev/sdc5 PARTUUID=f2c16585-9041-4b9d-9d39-e9191b826727 PARTLABEL=ARCHLINUX_HOME UUID=86d8d441-186a-4c09-afc0-63af9afa69e9 LABEL=HOME_ARCH
PARTUUID=fc4a690e-b565-45d3-88be-7e22efb7b016 /boot vfat defaults 0 1
tmpfs   /tmp         tmpfs   nodev,nosuid,size=4G          0  0

 

 

 

Link to comment
/dev/sr0: UUID="2011-03-02-19-19-23-00" LABEL="GRMSHSXFREO_EN_DVD" TYPE="udf"
/dev/sdb1: LABEL="DATA" UUID="903C7A033C79E49E" TYPE="ntfs" PARTUUID="f916b64c-01"
/dev/sda1: UUID="A08A758F8A7562A8" TYPE="ntfs" PARTUUID="e05cad40-01"
/dev/sda2: UUID="2AE6787EE6784BD7" TYPE="ntfs" PARTUUID="e05cad40-02"
/dev/sda3: UUID="88FA7BBCFA7BA4DA" TYPE="ntfs" PARTUUID="e05cad40-03"
/dev/sdc2: LABEL="EFISYS" UUID="5427-4A08" TYPE="vfat" PARTLABEL="UEFI_SYSTEM" PARTUUID="fc4a690e-b565-45d3-88be-7e22efb7b016"
/dev/sdc3: LABEL="SWAP_ARCH" UUID="7f0e4741-738b-4368-aafd-498a80afb421" TYPE="swap" PARTLABEL="ARCHLINUX_SWAP" PARTUUID="7fb27645-477e-43d6-9aad-68a918f1c02d"
/dev/sdc4: LABEL="ROOT_ARCH" UUID="2f3f945a-2cf8-48ac-b34f-86d91254aaf2" TYPE="ext4" PARTLABEL="ARCHLINUX_ROOT" PARTUUID="12ecbfba-ad6d-4df4-b6bf-574547c64055"
/dev/sdc5: UUID="50c55d3e-9fe9-488c-95dc-24c5602e0f50" TYPE="ext4" PARTUUID="e507f6e8-2ed7-4ba5-b8c0-3da4fa7c0d49"
/dev/sdc1: PARTLABEL="BIOS_GRUB" PARTUUID="988c675b-1013-4230-a5cb-a57e5abb2c4e"

Link to comment

Comment out everything in your fstab and add the following:

 

UUID=5427-4A08                                   /boot   vfat     defaults 0 2
UUID=7fb27645-477e-43d6-9aad-68a918f1c02d        none    swap     defaults 0 0 
UUID=2f3f945a-2cf8-48ac-b34f-86d91254aaf2        /       ext4     defaults 0 1
UUID=50c55d3e-9fe9-488c-95dc-24c5602e0f50        /home   ext4     defaults 0 2

 

Then run the following command:

 

grub-mkconfig -o /boot/grub/grub.cfg

 

Report back any errors.
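As an extra sanity check before rebooting (just a suggestion, not part of the steps above):

mount -a      # tries every fstab entry; errors here mean a bad UUID or mount point
swapon -a     # the same check for the swap entry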

Link to comment

seems I may be back in business...thanks!

 

Disk /dev/sdc: 186.3 GiB, 200049647616 bytes, 390721968 sectors

Units: sectors of 1 * 512 = 512 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disklabel type: gpt

Disk identifier: 4F0EFB21-51BF-47BB-AA59-A8F375729201

 

Device          Start          End  Size Type

/dev/sdc1        2048        6143    2M BIOS boot partition

/dev/sdc2        8192      2105343    1G EFI System

/dev/sdc3      2107392      2631679  256M Linux swap

/dev/sdc4      2633728    17993727  7.3G Linux filesystem

/dev/sdc5    17995776    29296639  5.4G Linux filesystem

 

 

Is that enough space for what we are doing and testing, or should I have gone bigger?

 

I assume since it won't be doing anything but holding an ISO, it should be OK.

Link to comment

I assume since it won't be doing anything but holding an ISO, it should be OK.

 

My total Arch partition is 10GB. So between your root and home partitions you have more allocated than I do. I consider myself a power user and I am not close to filling up my space.

 

You will play around, learn stuff, break stuff and reload Arch again. When you learn about LVM... you will want to put Arch in an LV as I do.

Link to comment

Grumpy, do you think we still need LVM if we use btrfs?

I thought the main strength of btrfs is that it should eliminate the need for LVM/RAID, as it has built-in functionality for that.

 

1. BTRFS is a game changer.

 

2. However, btrfs RAID 5/6 is not stable yet. When it is, I will be switching to it.

 

3. libvirt (how you manage Xen and KVM) has LVM built in. Until btrfs RAID 5/6 is stable and libvirt adds it in mainline... Not worth it.

 

4. Wait till you check out 9P with virtio. It's BLAZING FAST and close to "bare metal" speeds when accessing shares.

Link to comment

I guess I will just go ahead and ask what you mean by running Arch in an LV vs. the way I have it, and the benefits of one way or the other.

 

With running Arch on an LV...

 

1. You can do Snapshots. (Similar to Apple's Time Machine)

 

Example: I take a "snapshot" of Arch. I can then do upgrades, load new software, etc., and should any of that break it, I simply "go back in time" and revert back to my snapshot.

 

2. It's much easier to back up my system and restore it. I take a snapshot, compress it and copy it to my server, and can restore it within minutes. No need to go through an install and reconfiguring my system.

 

3. Again, using the backup example above: in the VM world, I have fresh installs of Windows, Arch, Ubuntu, etc. that I previously backed up. If I want to create another VM with any of those... I simply create a new LV and restore any of those "images" I have. Saves me the time of having to reinstall and configure them.

 

4. If I add drives, remove drives, etc., it's a cinch to move LVs around from one drive to another.

 

5. Mirroring (RAID 1), which you can add at any time.

 

Example: If I have two drives in my system... I could mirror the entire drives (or have 2 LVM partitions on each and mirror them). Should one drive (or LVM partition) die, be deleted, etc., my system continues to run without me having to do a thing. It can run this way until I finally decide to replace the drive (or LVM partition) in the mirrored set and restore the mirror.

 

6. Growing, shrinking and expanding LVs is a snap (there is a quick sketch of items 5 and 6 after this list).

 

Example: If I fill up or need more space in Arch or one of my VMs (running on an LV)... I can easily increase the size (or shrink it). Doing this outside of an LV is a major pain in the ass.

 

7. There are more benefits but those are the main reasons I do it.
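A quick sketch of what items 5 and 6 look like in practice, using vg0/Data as example names and a hypothetical new drive (/dev/sdd1):

pvcreate /dev/sdd1            # initialise the new drive (or partition) as a PV
vgextend vg0 /dev/sdd1        # bring it into the volume group
lvconvert -m1 vg0/Data        # item 5: convert the Data LV into a two-way mirror
lvextend -L +10G vg0/Data     # item 6: grow the LV by 10GB...
resize2fs /dev/vg0/Data       # ...then grow the ext4 filesystem to match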

Link to comment

All of that sounds pretty sweet. I think of it sort of like VMware (where I have a little more knowledge because of daily use). Snapshots, check; oh, your 20 gig drive is full, OK, let me make it 30, and in Windows it is a simple extend command, etc. Obviously similar, but not exactly the same.

 

The real question that remains: where is the LV living/running? I ask because my thinking is that an LV is a member of a VG, which is built on one or more PVs. So I can load Arch, install/run LVM and create all of this. However, you mention that you run Arch in an LV, so how do you get there? Are you saying that you run a base Linux system and from there control LVM, creating an LV that then has your Arch/Xen master in it? Make sense? I am confused a bit by how you have an LV with your Arch/Xen system when it was my understanding that I needed Arch to get an LV via LVM in the first place. Or am I missing something?

Link to comment

All of that sounds pretty sweet. I think of it sort of like VMware (where I have a little more knowledge because of daily use). Snapshots, check; oh, your 20 gig drive is full, OK, let me make it 30, and in Windows it is a simple extend command, etc. Obviously similar, but not exactly the same.

 

ESXi and XenServer are doing that via LVM. You just don't know / see it.

 

The real question that remains: where is the LV living/running? I ask because my thinking is that an LV is a member of a VG, which is built on one or more PVs. So I can load Arch, install/run LVM and create all of this. However, you mention that you run Arch in an LV, so how do you get there? Are you saying that you run a base Linux system and from there control LVM, creating an LV that then has your Arch/Xen master in it? Make sense? I am confused a bit by how you have an LV with your Arch/Xen system when it was my understanding that I needed Arch to get an LV via LVM in the first place. Or am I missing something?

 

Partition Layout

 

1. 500MB EFI Partition

 

You have to do this if using EFI. If not doing EFI, I still separate out /boot because it makes fixing any GRUB / boot issues 10000% easier. This partition also needs to be vfat if doing EFI.

 

2. 20 GB Partition dedicated to Arch. Type: LVM

 

3. The rest of the drive is a partition. Type: LVM

 

This is where I store my VMs, Data, ISOs, Etc.

 

Note: I do not use a swap partition because my server never sleeps. In fact, I have not used swap because my PCs have always had enough RAM to sleep that way.

 

Another note: for those of you who are anal... you could separate your LVM partitions even further than I did (each of those would also be Type: LVM).

 

For Example:  ISOs Partition, VMs Partition, Data Partition, Back Up Partition, Etc.

 

For me, I am not anal and moving LVs around or getting confused isn't an issue.

 

Physical Volumes

 

/dev/sda2

 

/dev/sda3

 

Volume Groups

 

Arch - /dev/sda2

 

vg0 - /dev/sda3

 

I do have other Linux Distros and vg1 and vg2 using physical volumes on other drives but I am keeping it simple.

 

Logical Volumes

 

Root - is on the Arch Volume Group - It's my Arch Linux - 10GB in size with 10GB free.

 

Data - is on the vg0 Volume Group - It's where all my VMs, ISOs LV, Data, etc. stay.
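For reference, building that layout from scratch looks roughly like this (a sketch; the 100GB Data size is just a placeholder, and you would leave free space in both groups):

pvcreate /dev/sda2 /dev/sda3
vgcreate Arch /dev/sda2            # the 20GB LVM partition dedicated to Arch
vgcreate vg0 /dev/sda3             # the rest of the drive for VMs, data, ISOs
lvcreate -L 10G -n Root Arch       # leaves ~10GB free in the Arch group for snapshots
lvcreate -L 100G -n Data vg0       # size to taste, leaving room in vg0 as well
mkfs.ext4 /dev/Arch/Root
mkfs.ext4 /dev/vg0/Data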

 

You want to leave yourself some room for Snapshots and growth. I left myself 10GB on my Arch-Root and 30GB free on my Data Volume Group.

 

Here is why...

 

1. You want to leave yourself some room to grow LVs and add new ones. Sure, you could add another drive and expand your LV that way or shrink one and grow another but that is a pain.

 

2. When you make a snapshot you assign how much "stuff" you can change before it "fills up".

 

Snapshots Explained with an Example:

 

lvcreate -L5G -s -n Snap /dev/mapper/Arch-Root

 

Everything I do going forward in Arch is now limited to the new 5GB of space I allotted for it (it doesn't matter what I had left before the Snapshot). If I were to install Owncloud and XBMC, that would take up X space in my snapshot. I can run on a Snapshot for months if I wanted. I could at any time revert back to what my system was when I took the snapshot or apply all the things I did to my original Arch.

 

To see that Arch-Root has a snapshot and how much space is left in it:

 

lvs

 

If I took a snapshot and updated my system and messed it up or if I didn't want Owncloud or XBMC...

 

lvconvert --merge /dev/mapper/Arch-Snap

 

After I reboot, my system will look EXACTLY like it did when I took the snapshot.

 

If I did an upgrade and everything is working great, and I want to apply these changes to Arch:

 

lvremove /dev/mapper/Arch-Snap

 

I now have my 5GB back and all the changes I made are now permanently in my Arch-Root LV.

 

Using this as an example, you can see why I use LVs and why I leave myself room for snapshots. I could have 4 or 5 snapshots with the various Linux Distros I run or VMs. I usually run a few days or weeks before I "commit" the changes.

 

Thin Pools, Thin LVs, Thin Snapshots

 

Of course you could also do thin pools, thin LVs and thin snapshots. Those do not consume all the space you assign up front; they grow in size as needed. You will have to google that, but now that you have an idea of what LVM does and how to use it, you can easily pick this up as well.
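To save a search, a minimal thin-provisioning sketch on the vg0 volume group (the pool and LV names and sizes are made up):

lvcreate --type thin-pool -L 100G -n ThinPool vg0    # carve a 100GB thin pool out of vg0
lvcreate --thin -V 50G -n WinVM vg0/ThinPool         # a 50GB thin LV; space is only consumed as data is written
lvcreate -s -n WinVM-Snap vg0/WinVM                  # thin snapshot, no size argument needed
lvs vg0                                              # the Data% column shows real pool usage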

 

BTRFS

 

This is an incredible new file system that will eventually replace EXT4, ZFS, LVM, etc. It's ready to go for partitions... just not RAID 5/6. It's easier to do snapshots with it than with LVM. I use it on other machines, but I am not switching to it on my virtual server until it's in mainline libvirt and RAID 5/6 is stable.

 

Here is a great tool and example of Partitioning and using Snapshots in BTRFS:

 

Snapshots/Rollback with Snapper.

 

To learn more about all of this...

 

Advanced Disk Setup

Link to comment

This might sound silly, but does that mean there is no real OS on the primary EFI partition?  Instead, Grub just loads up and starts your VMs?  Like me currently: I have a hard drive, I booted to a USB stick with the Arch ISO, I installed Arch to my HD, and I have now used LVM to create my volumes where VMs will live. If I understand your setup, your Arch is one of those LVM-housed VMs, so I am confused slightly by how you got there. Some of what I tried reading mentioned UEFI with Grub to load the OS... so your OS is your VM, while mine would be my Arch physical machine install, if that makes any sense at all.

 

Second: I am now ready for the next steps, VM creation. Do I need to create a bridge for the guest network, or is that automatic? And then it is on to creating that first test VM and getting all that straight.

Link to comment

This might sound silly, but does that mean there is no real OS on the primary EFI partition?

UEFI needs a VFAT partition because that is how your motherboard's firmware knows where the boot files are.

 

Instead, Grub just loads up and starts your VMs?

 

Linux and Grub are smart. You can load Linux from an LV, vFAT, EXT2/3/4, BTRFS, XFS, Reiser, JFS, iSCSI, Fibre Channel, NFS, etc.

 

Grub will "probe" your drives and find out where things are. Using me as an example, I have about 6 or 7 LVs which are all different Linux Distros that I test / learn / play with. My grub menu has a total of about 10 options when you factor in Windows 7 and Windows 8 partitions too.

 

An LVM partition is not exclusive to VMs.

 

Like me currently: I have a hard drive, I booted to a USB stick with the Arch ISO, I installed Arch to my HD, and I have now used LVM to create my volumes where VMs will live. If I understand your setup, your Arch is one of those LVM-housed VMs, so I am confused slightly by how you got there. Some of what I tried reading mentioned UEFI with Grub to load the OS... so your OS is your VM, while mine would be my Arch physical machine install, if that makes any sense at all.

 

I create an LVM partition for Arch.

 

I also created an LVM partition where all my VMs and other things hang out.

 

You can have a bunch of LVM partitions if you so desire. Some people have several. VMs, ISOs, Data, Back ups, etc.

 

Using my example above, I showed you I had a total of 3 Partitions.

 

1 - UEFI

 

2 - Arch - Which is its own LVM partition.

 

3 - vg0 - Which is where my VMs, Data, ISO, etc. LVs go. It's a separate LVM partition and not associated with Arch.

 

If I were to blow away Arch and install CentOS.... CentOS would pick up the vg0 LVM and all the LVs on it.

 

Second: I am now ready for the next steps, VM creation. Do I need to create a bridge for the guest network, or is that automatic? And then it is on to creating that first test VM and getting all that straight.

 

You need to create a bridge, and Xen will use it and create the network for the guest for you. You will want to assign a MAC address, or else you will get a new IP address every time you start up.

 

When you do:

 

ip add

 

You should have a xenbr0 device and an IP address assigned to it. If you go to the Xen page on the Arch wiki, it will walk you through how to do it. If you need help, feel free to ask.
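For the netctl route, the bridge profile looks roughly like this (a sketch based on the Arch wiki approach; the profile name, bridge name and NIC name are examples, check your NIC with ip add first):

# /etc/netctl/xenbridge-dhcp
Description="Xen bridge with DHCP"
Interface=xenbr0
Connection=bridge
BindsToInterfaces=(eth0)
IP=dhcp

Then netctl enable xenbridge-dhcp makes it come up at boot. In the guest config you can pin the MAC so DHCP keeps handing back the same address, e.g. vif = [ 'mac=00:16:3e:xx:xx:xx,bridge=xenbr0' ] (the xx bytes are placeholders).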

Link to comment

As opposed to making myself too nuts, I am going to table the EFI discussion. Figure I'll get things running first and then worry about the fun stuff once I understand it more.

 

So, on to something else I broke.  Surely I can fix it when I get to the console, but I created the bridge following the document and broke the network connection.  The strange thing was that my options were not eth0 and eth1, but instead:

 

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default

    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

    inet 127.0.0.1/8 scope host lo

      valid_lft forever preferred_lft forever

    inet6 ::1/128 scope host

      valid_lft forever preferred_lft forever

2: enp1s5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000

    link/ether 6c:f0:49:1f:52:31 brd ff:ff:ff:ff:ff:ff

    inet 192.168.0.210/24 brd 192.168.0.255 scope global enp1s5

      valid_lft forever preferred_lft forever

    inet6 fe80::6ef0:49ff:fe1f:5231/64 scope link

      valid_lft forever preferred_lft forever

 

So I went with enp1s5, and when I did "netctl start xenbridge-dhcp" per the document, my PuTTY session went away, so now the question is why...

 

 

On another point that I saw on the Xen page of the Arch wiki while doing the network: it says to edit GRUB to allocate RAM to dom0 as part of the hypervisor installation. Is this still needed, or does the installation handle it? I would check what my grub config looks like, but since the network is currently gone, that proves difficult...
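For reference, that edit usually ends up as a line like the following in /etc/default/grub (a sketch; 2048M is just an example amount, and the current Xen page on the Arch wiki is the authority here), followed by regenerating the config with grub-mkconfig -o /boot/grub/grub.cfg:

GRUB_CMDLINE_XEN_DEFAULT="dom0_mem=2048M,max:2048M"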

Never mind the network part. Apparently the system pulled a new IP. Need to set it static...

 

Link to comment
