BetaQuasi

Orion - ESXi/unRaid build - now v6 and KVM


Had a day all to myself today (Saturday) for the first time in ages, so I did the following:

 

  • Upgraded my ESXi 5.0u1 install to 5.1, and then applied all 3 patches (ESXi510-201210001, ESXi510-201212001, ESXi510-201303001)
  • Upgraded M1015's, which were running P13, to P15 via EFI shell
  • Upgraded X9SCM BIOS from 1.1a to 2.0b, as well as IPMI to latest version
  • Upgraded HP 1810G switch to latest version
  • Built a Server 2012-based VM to replace my 2008 R2 DC (promoted it to DC, transferred FSMO roles etc., then demoted and decommissioned the old 2008 R2 VM)

 

Everything went swimmingly, not one issue to speak of (touch wood!).  Fun times.


I'm not sure which part of what I did yesterday made the difference, but everything is now performing noticeably quicker than before.  The bzroot/bzimage extract when booting unRAID takes ~1 second (used to be 4-5) and everything on the VMs just seems snappier.  Good times!


Another VM is running the following:

 

- Shared MySQL DB for the XBMC instances

- Latest Plex Media Server - used for transcoding to phones/tablets.  Main use is actually to sync content to my tablet for the 50 minute train ride to work.

 

One of these days we'll get a product that has the best of XBMC and Plex, and I'll only need one.  We're almost there now that there are some Frodo-based builds of the Plex Home Theater client, and an Ubuntu PPA for the same...  what I'd love to see, though, is the Plex HT client in an OpenELEC-style build - then I could ditch XBMC completely.

 

Or.... maybe the rumoured XBMC server will replace Plex and I can go that way instead.

 

Anyways.. the hardware itself I've yet to have a single problem with (touch wood).  No misbehaving disks, no RAM issues, no controller issues, nothing.. it's been almost a year running 24/7 now, let's hope things stay that way!

 

There are only a couple of things I'd like to do, but I'm kinda taking the 'if it ain't broke....' approach:

 

- update the BIOS on the X9SCM to 2.0b (currently on 1.1a).  Have gone to do this several times but haven't actually gone through with it

- update ESXi to 5.1 w/ the Dec 2012 patch

- add some more drives to the FreeNAS VM and change to a RAIDZ2 pool, and a couple of SSD's for cache (I don't really need it but I still want to do it lol).
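For reference, a RAIDZ2 pool with an SSD cache device can be created in one command on FreeNAS/FreeBSD; this is only a sketch, and the pool name and device names below are illustrative, not from my actual box:

```shell
# Six-disk RAIDZ2 pool named 'tank' with one SSD as an L2ARC cache device
# (device names are illustrative FreeBSD-style names - check yours first)
zpool create tank raidz2 da0 da1 da2 da3 da4 da5 cache ada0

# Verify the layout
zpool status tank
```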

 

Are you running Plex on Ubuntu Server? How many cores did you assign to this VM?

 

I am going to try it tonight :)


Yep, it has 2 cores and seems to have no issues streaming to multiple tablets etc. Installing on Ubuntu is very simple: there is an apt repository, but I prefer to just download the .deb file and run "dpkg -i plex*.deb" and you're done.  There might be a few dependencies you need to install first, like avahi, but dpkg will tell you about those.
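For anyone following along, the install boils down to something like this - grab the current Ubuntu .deb from plex.tv first (the package filename below is illustrative):

```shell
# Install the downloaded Plex Media Server package (filename is illustrative)
sudo dpkg -i plexmediaserver_*.deb

# If dpkg reports missing dependencies (e.g. avahi-daemon),
# let apt pull them in and finish the configuration:
sudo apt-get -f install
```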

 

 

 

 


Cool.

 

When you download something via SABnzbd, how do you transfer it to unRAID? Does it go to the unRAID cache?


Sickbeard takes care of all of that via the postprocessing script, transferring the file to unRAID after it's downloaded/unpacked etc on the Sab/Sick/Couch VM.

 

 

 


If you manually upload an NZB file to Sab that has nothing to do with Sick/Couch, it will obviously download on that Linux VM.

 

How would you manually move files to unRAID via Windows?

 

I am thinking of installing Samba on a VM to create a share of the /home/username/ dir...
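A minimal smb.conf share along those lines might look like this (the share name is illustrative, and you'd still need to add a matching Samba user with smbpasswd):

```ini
; Illustrative share definition - append to /etc/samba/smb.conf
[home-share]
   path = /home/username
   browseable = yes
   read only = no
   valid users = username
```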


A quick update..  my rig hasn't missed a beat, except for one drive kicking the bucket - swiftly replaced and rebuilt about 6 months ago.  I've swapped out some 2TB drives for 3's and 4's, and parity is now running off a 4TB.

 

Things with unRAID 6 started looking appealing, so I bit the bullet and upgraded, and went back to bare bones unRAID as part of the process.  Firstly, I uninstalled VMware tools on all my VM's.  I decided to ditch my Windows VM, as it seemed more trouble than it was worth to migrate it, and it wasn't doing that much anyway.

 

For the Linux VM's, the process was surprisingly easy - shut them all down, SFTP into ESXi and copy the VM's off onto a laptop.  After bringing up the unRAID bare bones installation, and installing VM Manager from dmacias, I used dlandon's v6 SNAP plugin to mount a non-array SSD, and popped the line in the go script to bring it online during boot up.  I then FTP'ed across the .vmdk's to this location, and converted the .vmdk's using qemu-img:

 

qemu-img convert -f vmdk vmware_image.vmdk -O qcow2 kvm_image.qcow2
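A quick sanity check after the conversion (assuming qemu-utils is installed) is:

```shell
# 'file format: qcow2' in the output confirms the image converted cleanly
qemu-img info kvm_image.qcow2
```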

 

I then created a dummy VM to get the .xml structure (very new to KVM), and edited it to suit the VM's I was porting across, including their UUID's from the VMFS folder (probably not necessary for Linux but still...).  I added them via VM Manager, and clicked start.. to be honest I thought I'd have issues, and there was no way it'd work straight off the bat.. but it did!  Both Linux VM's up and running.
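The dummy-VM approach produced XML roughly along these lines - this is a heavily trimmed sketch rather than a complete libvirt domain definition, and the name, UUID and disk path below are illustrative placeholders:

```xml
<domain type='kvm'>
  <name>ubuntu-vm</name>
  <uuid>00000000-0000-0000-0000-000000000000</uuid>
  <memory unit='GiB'>2</memory>
  <vcpu>2</vcpu>
  <os>
    <type arch='x86_64'>hvm</type>
  </os>
  <devices>
    <!-- Point the disk at the converted qcow2 image (path is illustrative) -->
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/mnt/cache_ssd/vms/kvm_image.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <interface type='bridge'>
      <source bridge='br0'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>
```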

 

I've added a bunch of plugins (powerdown/apcupsd and others) and all went very smoothly, no issues to speak of.

 

To do list:

 

- Work out bonding.  I've got this far:  http://lime-technology.com/forum/index.php?topic=37642.msg348032#msg348032

- Play with docker to see if it's worth me starting to use docker apps rather than VM's.  The biggest issue I see here is that you run a little behind with updates, as it appears the docker maintainers are the ones that need to keep them updated.  I prefer to do this myself, so will probably stick with the KVM VM's.
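On the bonding item above, the generic Linux version (outside unRAID's own network config) looks something like this; the interface names are illustrative and the commands need root:

```shell
# Create an active-backup bond from two NICs using iproute2
# (interfaces must be down before they can be enslaved)
ip link add bond0 type bond mode active-backup
ip link set eth0 down && ip link set eth0 master bond0
ip link set eth1 down && ip link set eth1 master bond0
ip link set bond0 up
```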

 

Anyways, so far so good.  Very impressed with v6!


So 2 years down the track, and almost 5 (!) years total with this server, and it is still kicking along like a champion.  Kudos to the components, that's for sure.  A couple of drives have died (to be expected) and I'm now on 6.3.2, and have migrated away from plugins completely to dockers where appropriate (and using linuxserver.io dockers wherever possible.)  

Next step is to get to dual parity - I've got a couple of 6TB disks coming for that, so that I can also increase array disk sizes beyond the current 4TB.  Other than that, not a great deal to report.  This thing has been more reliable than just about any other PC-based device I've owned in the last 25 years or so!
