64 Bit unRAID running natively on Arch Linux with full hypervisor support




Then, for my usage at least, I don't see the downside of not having a VT-d CPU.

No VT-d, no passthrough; if you don't need it, you're good to go :-)

You could still virtualize a Plex server, a Usenet box, a Dropbox/CrashPlan server, etc.

I was thinking of my own needs; I want to pass through a GPU and ditch my desktop ;-)
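If you're not sure what your CPU supports, a quick sketch like this can check for VT-x/AMD-V and for active IOMMU (VT-d) groups; the paths are standard Linux ones, but IOMMU only shows up once it is enabled in the BIOS and on the kernel command line (e.g. intel_iommu=on):

```shell
# Check for hardware virtualization extensions (vmx = Intel VT-x, svm = AMD-V)
if grep -qE 'vmx|svm' /proc/cpuinfo; then
    echo "VT-x/AMD-V: supported"
else
    echo "VT-x/AMD-V: not supported"
fi

# VT-d/IOMMU shows up as populated IOMMU groups once enabled
echo "IOMMU groups: $(ls /sys/kernel/iommu_groups 2>/dev/null | wc -l)"
```

A count of zero IOMMU groups usually means no VT-d (or it's simply not enabled), so no device passthrough.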

 


Yeah, that was what I was thinking! But maybe if my hands get itchy later on, I might just grab a 4670 or something :P

For now, I just want a fully Linux-based unRAID so I can easily run all sorts of apps without any worries. I'll need to read up on Xen and KVM.

 


So this all brings up a point I've been pondering. Sorry if this will be vague, but I think the heavy hitters are all in this thread, so they should remember.

 

But I am reminded of the very recent past when Tom was chasing bugs in different kernels which seemed to only pop up under certain load conditions [or it was some other reason I don't remember].  The result of a bunch of community effort was a battery of tests that helped ID the problem as well as give the system a good shakedown to validate the fix.

 

So this brings me to my point: beyond the standard tests, such as writing a bunch of data, failing, swapping, and adding drives, running the cache mover, and doing a parity swap, it might be good to start a thread to discuss and share scripts/processes that beat on the system, to ensure there are no problems inherent in the kernel + ReiserFS or anything else that could pop up as kernels and drivers change.
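Such a beat-on-it script could start as simple as a write-and-verify loop; this is only a sketch, and the target directory and sizes are examples (point it at an array disk, e.g. /mnt/disk1, to actually exercise the array):

```shell
#!/bin/sh
# Rough write-and-verify shakedown pass; target dir and file sizes are examples
DIR="${1:-/tmp/shakedown}"
mkdir -p "$DIR"
: > "$DIR/sums.md5"

for i in 1 2 3; do
    # write random data, then record its checksum
    dd if=/dev/urandom of="$DIR/test$i.bin" bs=1M count=8 2>/dev/null
    md5sum "$DIR/test$i.bin" >> "$DIR/sums.md5"
done

sync    # flush caches so the verify pass actually hits the disk
md5sum -c --quiet "$DIR/sums.md5" && echo "shakedown OK"
rm -rf "$DIR"
```

Scale the counts up, run several copies in parallel, and mix in a parity check, and you have a crude load generator for exactly the kind of shakedown described above.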

 

Make sense? Or should I go back to failing at installing VirtualBox? :(


There are no better beta testers than the real world. For that reason I think we'll release the extreme version in a similar fashion to how 5.0 was released. Although, it must be said, from my side I don't think the RC/beta window will be anywhere near as lengthy. I'm quite liberal with 1.0 versions, preferring to name updates .1 releases once the RC stage is out of the way. It's still Tom's baby, and I'm really just a guest for now, so it's ultimately up to him.

 

Right, must crack on!


There is little difference between Linux kernel 3.9 and 3.10. Plus, Tom knows the various patches already.

 

The issues you are referring to were drivers (which are ironed out and would go into 3.10) and Slackware, not the Linux kernel. Assuming this is CentOS, systemd will go a long way toward making scripts and things work/function better too.
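For example, under systemd an app like SABnzbd becomes a small declarative unit file instead of a hand-rolled rc script. This is only an illustrative sketch: the binary path, user, and install location are assumptions, not a documented unRAID setup:

```ini
# /etc/systemd/system/sabnzbd.service (example only)
[Unit]
Description=SABnzbd Usenet client (example)
After=network.target

[Service]
User=media
ExecStart=/usr/bin/sabnzbd --browser 0
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

After dropping the file in place, `systemctl enable --now sabnzbd` would start it at boot, and systemd handles restarts and logging for you.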

 

The Linux Kernel and Drivers will be solid. I suspect Tom and Ironic will be focused on CentOS, start up, services, scripts and emhttp.

 

 

 


Excellent news!  Count me as one that has an i3 (so no VT-d) but doesn't really need to do passthrough (at least at this point in time).  I'd love to move all my plugins off to a VM though, and this fits the bill perfectly!

 

Props to all involved for helping to move our favorite NAS product forward!

 

Pretty much the same as me.  Move the plugins off onto a VM or 3 and I'm happy with my little i3 server.

 

 



I doubt your server, doing all the things you mention, gets above 20% CPU utilization.



In its current form, or are you referring to after it is running what is being discussed here?  If you mean in its current form, you are correct. Pretty much the only time it gets over 20% CPU Utilization is when it's doing a Plex transcode, and that's relatively rare.


 


After, if it's set up correctly.

 

+1 for this type of plugin virtualisation setup... expect some questions, grumpy, on setting it up correctly :)


 


The thing is, if you have unRAID in its own distro running as the host, you don't really need plugins. You can run the fully functional apps :-)



 

Good to know.

 

I've read up a lot on virtualization after your and Badger's posts here. Here's how I understand it as far as my system goes, running the proposed unRAID build. Please correct me if I'm wrong.

 

I have an i3, so no VT-d, but it does have VT-x. unRAID would run native "bare metal" as it's part of the host OS. Any VMs I created would not be able to pass through devices, but would run at near-native speeds using virtualized network and disks. If I were to repurpose my existing cache drive as the host OS drive (as I would no longer need unRAID to have a cache for app storage), the VM storage would reside on the same host OS drive. Does that about sum it up?

 

I have read that virtualized network is plenty fast and not an issue, but what about the virtualized disks for the VMs?


 


Indeed, this is my intention. I'm thinking the only plugin I'll keep is APC, and perhaps Plex, as it's precompiled. BUT, and this is where the questions would come in, someone like grumpy is clearly more knowledgeable... if it were their server, how would they do it?

 

I have a Xeon CPU

 

PS: it's refreshing to see this forum with so much activity.


I have an i3, so no VT-d, but it does have VT-x. unRAID would run native "bare metal" as it's part of the host OS. Any VMs I created would not be able to pass through devices, but would run at near-native speeds using virtualized network and disks. If I were to repurpose my existing cache drive as the host OS drive (as I would no longer need unRAID to have a cache for app storage), the VM storage would reside on the same host OS drive. Does that about sum it up?

 

Yes, but you are able to freely define a path for the virtual storage.

It is common to define a local datastore, i.e. the same path for all your VMs.

So you are not limited to the OS disk.

 

I have read that virtualized network is plenty fast and not an issue, but what about the virtualized disks for the VMs?

 

...pretty much the same... it'll depend on the OS inside the VM and whether "accelerated" drivers are available: virtio-net for the vNICs, and for the virtual disks virtio-scsi/virtio-blk.

But unlike with the vNICs, where the virtio-net driver basically opens a "secure shared-memory block of some sort" between VM and host for the NIC traffic, you obviously can't get faster than your real disk here.

Physically, the virtual disk is a file on your host, so there is some lag.

Rule: if you have apps in the VM that require good disk performance, use iSCSI and use a datastore with enough IOPS.
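As a concrete illustration of those accelerated drivers, a KVM guest might be started along these lines; this is only a sketch, and the image path, bridge name, and sizing are assumptions:

```shell
# Sketch: boot an app VM under KVM with a virtio disk and virtio NIC
qemu-system-x86_64 \
    -enable-kvm -m 2048 -smp 2 \
    -drive file=/var/lib/vms/apps.qcow2,if=virtio,cache=none \
    -netdev bridge,id=net0,br=br0 \
    -device virtio-net-pci,netdev=net0 \
    -nographic
```

The guest then needs the virtio drivers installed (most Linux distros ship them in the kernel; Windows needs the separate virtio driver ISO).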



 

Right, so you wouldn't want to run a massive SQL database on the VM, but Plex, SABnzbd, Sickbeard, CouchPotato, etc. should be fine? If, say, an unrar or par repair took a bit longer than "bare metal", that's not that big of a deal.



Yes.

...besides, you can create an iSCSI target on your host with no problem and use that via the accelerated NICs in the VM.

...or simply use an SSD as the datastore... make a snapshot of the VM disks to your array and you aren't required to have RAID for datastore safety either... rolling back or restoring the VM is easy.
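The snapshot-to-array idea can be sketched with qemu-img, assuming qcow2 disk images; the paths and snapshot name here are examples, and the VM should be shut down before copying the image:

```shell
# Sketch: snapshot a qcow2 VM disk and copy it to the array for safekeeping
qemu-img snapshot -c pre-upgrade /var/lib/vms/apps.qcow2   # create internal snapshot
qemu-img snapshot -l /var/lib/vms/apps.qcow2               # list snapshots

# with the VM powered off, keep an offline copy on the array
cp /var/lib/vms/apps.qcow2 /mnt/user/backups/

# roll back later:
# qemu-img snapshot -a pre-upgrade /var/lib/vms/apps.qcow2
```

If the backup copy lives on the parity-protected array, the datastore itself doesn't need RAID, which is exactly the point being made above.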


Entirely true, but I'd rather run those fully functional apps in a VM to keep the host OS tidy, unless someone can convince me it's better to do otherwise.

 

Because my server boots up so fast and straight into XBMC, and because several apps on various VMs access a MySQL database... in my case, I loaded MySQL on my host.
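For reference, pointing XBMC at a shared MySQL library is done via advancedsettings.xml; the host, port, and credentials below are placeholders for whatever the MySQL instance on the host actually uses:

```xml
<!-- Example advancedsettings.xml for a shared XBMC video database -->
<advancedsettings>
  <videodatabase>
    <type>mysql</type>
    <host>192.168.1.10</host>
    <port>3306</port>
    <user>xbmc</user>
    <pass>xbmc</pass>
  </videodatabase>
</advancedsettings>
```

With every XBMC VM pointed at the same database, watched status and library contents stay in sync across all of them.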

 

I was like a kid with a new toy on Christmas for a while. I ran Couchpotato, Sickbeard, Owncloud, etc. in a separate VM and had other apps in other VMs. Eventually the novelty wore off and, since I am not anal... I just loaded all that stuff on the server host itself. The host can handle it, those apps have worked flawlessly, and if I need to restart or upgrade one (I don't recall the last time I did), it's easy to do. Basically, those apps on my server host... work as advertised. If you have run those apps outside of a plugin on unRAID, you will understand what I am talking about (I even let them upgrade themselves).

 

Now I am going to bet that most people are not as cavalier as I am but the nice thing about all of this... You can "geek out" and take it as far and wide as you want to go. 

 

The only VMs I run now are XBMC and Windows (with video cards and USB controllers passed through). I also have several "development" VMs. For example, I have a duplicate Arch Linux (in your case CentOS) VM that "mirrors" my host OS, where I try out new software that I may or may not want to load on my host. I also have some CentOS and Debian VMs where I am testing out several server reporting/graphing/stats apps and several web GUIs for managing virtual machines that have easy installs for those specific Linux distros.

 

I have crashed VMs, reloaded them, had them "lock up", rebooted, turned them off and on, etc. countless times and it had no effect on the other VMs, the Host or the Apps running on both.

 

This is a PROVEN and RELIABLE technology and, like I said earlier, if Fortune 500 companies are spending/investing billions into virtualizing millions of mission-critical servers onto ESXi, XenServer, Hyper-V and KVM, you are going to be okay with your stuff too.


make a snapshot of the VM disks to your array and you aren't required to have RAID for datastore safety either... rolling back or restoring the VM is easy.

 

This is another reason why running all the apps in a VM sounds like the way to go to me.  That said, I may find myself migrating to the way grumpy does it over time.  But then that's the beauty of this project, the flexibility.

