unRAID Project Update, Core Features, Virtualization, and Thank You's


Recommended Posts

EDIT:  Wanted to add a foreword here to clarify something with the community.  The team at Lime Tech is a "divide and conquer" team: Eric and I focus on the development efforts related to virtualization technologies like Docker and KVM/Xen, while Tom focuses on core NAS features.  We jointly test our combined work.  In addition, it should be noted that the next beta release will be out as soon as we complete our enhancements to the cache pool feature.  Thereafter, Tom will shift focus to the other features on the roadmap.  Our goal is to continue to advance on both fronts of development and bring you more amazing features in unRAID.

 

Hey guys, we have been hard at work testing both the enhancements to the underlying virtualization capabilities and a new firmware type called OVMF.  With OVMF, we can give guests UEFI firmware, which brings additional advantages including support for Secure Boot technologies.  We have JUST begun experimenting with OVMF per the recommendation of Alex Williamson over at Red Hat (THANK YOU ALEX), who is pretty much THE MAN when it comes to VFIO (as in he wrote it).  In addition, we will have support for a simplified method of passing through these devices in an upcoming beta release.  Hardware support is still a bit finicky, but when you have the right combination and settings, it's pretty amazing.  And it doesn't just mean new hardware, either.  In one current test system, I have an AMD HD3870 (OLD CARD) and an NVIDIA GTX 780 (NEW CARD). I can have two VMs running simultaneously (one with each GPU), and my setup is to use one for XBMCbuntu and the other for Windows 8.1, Steam OS, or whatever else I want.
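
As an aside for the adventurous: GPU passthrough under KVM rides on VFIO, and binding a card to the vfio-pci driver by hand today looks roughly like the sketch below. This is just an illustration of what happens under the hood, not our upcoming simplified method; the PCI address and vendor:device ID are placeholders you would replace with values from your own `lspci -nn` output.

```shell
# Illustrative sketch only -- the PCI address (0000:02:00.0) and the
# vendor:device ID (10de:1004) are placeholders; read yours from `lspci -nn`.
modprobe vfio-pci

# Detach the GPU from whatever driver currently owns it...
echo 0000:02:00.0 > /sys/bus/pci/devices/0000:02:00.0/driver/unbind

# ...then tell vfio-pci to claim devices with this vendor:device ID.
echo 10de:1004 > /sys/bus/pci/drivers/vfio-pci/new_id
```

Requires root and IOMMU-capable hardware (Intel VT-d / AMD-Vi) with the IOMMU enabled in the BIOS and on the kernel command line.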

 

For those who don't understand our obsession with this type of technology, we'll be posting videos soon to showcase the power it brings to using either a PC OR a NAS with unRAID.  Given that one of the primary use cases for mass storage is media content; that users typically have some type of HTPC investment coupled to their NAS to make use of it (XBMC, Plex, Chromecast, Apple TV, etc.); and that many also have an interest in gaming or Windows desktop PC applications, wouldn't it be great if you could run those as well and dynamically switch between them?  And wouldn't it be even GREATER if running these applications and hardware passthrough scenarios had ZERO impact on the stability or reliability of the underlying unRAID NAS operating system?  Meaning that if a VM with GPU passthrough suffers a crash, there is ZERO risk of your NAS services crashing?  And what if Docker could run alongside that NAS OS at that "higher operating level" with ZERO network impact, thanks to its ability to directly attach to host storage?  And let's not forget that at the same time we're also continuing to add core NAS features, such as cache pool support, to protect your cache/application data and support fast IO for protected write operations.  Notifications and UPS support are things that should have been in unRAID for a long time, I agree, but at the same time, they have been available thanks to community support, for which we at Lime Tech are truly grateful.

 

I'm truly excited right now about where we are headed with this project.  I think we are differentiating ourselves from competitors that are focused on being "just plain storage" into something more and more fun and capable.  In addition, we are making headway in automating some monotonous business process functions, which will allow more and more time dedicated to development and hopefully more frequent releases.

 

In addition, I want to take a moment to personally thank a few of our community members out there.  gfjardim, who has slaved away at making some AMAZING enhancements to the dockerMan Plugin.  needo, who has built an insane number of docker containers!  In addition, thanks to sacretagent for the docker requests thread, dalben, for the XBMC headless server idea, NAS for the early vision and foresight on how powerful Docker truly can be, and frankly everyone who's been pitching in with both requests and development work.  You guys are truly amazing...

 

Stay tuned for more information in the weeks ahead; we're working hard to get the next release out to you guys very soon!

 

Update:  Early Demo of Capabilities

 

Ok, I know this video doesn't look the greatest, but it was late last night and I didn't have time to do proper lighting.  In short, here's an internal build of us running 3 different operating systems on two different GPUs without ever rebooting the host.  When I change inputs, that's me going from the AMD card to the NVIDIA card.  The first two VMs boot automatically with unRAID.  The third VM I boot up with a single command after shutting it down.
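
For the curious, under KVM with libvirt that "single command" is typically `virsh start`. The post doesn't say which tooling our build uses (under Xen it would be `xl` instead), and the domain name below is a hypothetical placeholder:

```shell
# Hypothetical example -- "xbmc-vm" is a placeholder domain name.
virsh start xbmc-vm       # boot the defined guest
virsh list --all          # show all defined guests and their current state
virsh shutdown xbmc-vm    # request a graceful (ACPI) shutdown
```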

 

http://youtu.be/mkAGyzNL9jA

Link to comment

I can have two VMs running simultaneously (one with each GPU) and my setup is to use one for XBMCbuntu and the other for Windows 8.1, Steam OS, or whatever else I want.

 

Do you have Hardware Video Acceleration working in your XBMCbuntu VM? For many, that is going to be a deal breaker unless you have a beefy server / CPU(s).

 

In addition, I want to take a moment to personally thank a few of our community members out there.  gfjardim, who has slaved away at making some AMAZING enhancements to the dockerMan Plugin.  needo, who has built an insane number of docker containers!  In addition, thanks to sacretagent for the docker requests thread, dalben, for the XBMC headless server idea, NAS for the early vision and foresight on how powerful Docker truly can be, and frankly everyone who's been pitching in with both requests and development work.

 

No love, huh? Why do I even bother....


I can have two VMs running simultaneously (one with each GPU) and my setup is to use one for XBMCbuntu and the other for Windows 8.1, Steam OS, or whatever else I want.

 

Do you have Hardware Video Acceleration working in your XBMCbuntu VM? For many, that is going to be a deal breaker unless you have a beefy server / CPU(s).

 

Why yes I do ;-).  I am using only 1 vCPU assigned to that VM and 2GB of RAM.  I am thinking I can even lower the RAM further.  I've tested some pretty high-res stuff and so far, so good.  Great way to repurpose an old GPU:  give it to an XBMCbuntu VM (or OpenELEC, or XBMC on Windows, or whatever you want).

 

Also got SteamOS working ;-)  Metro Last Light is probably the best game on there right now though...


A question regarding the btrfs cache pooling ...

 

So right now there are a lot of people running SSDs in their systems for a number of reasons.  The two most popular that I know of are that Plex won't allow an HDD to spin down, and faster VMs. They are also mounted outside the array because they are too small to act as a cache drive and because they need a filesystem that supports DISCARD (TRIM); this is accomplished via the go script, which I know Tom wants to avoid whenever possible.

 

Will btrfs cache pooling help this situation?

 

Will there be a way to mount a drive outside the array via the gui?

 

Can we run a cache pool with a large spinner + smaller SSD and dictate that a specific cache-only share resides on the SSD of the pool, like for Plex and VM images?
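
For context, the go-script workaround being described is usually just a couple of lines like the sketch below (the device name and mount point are examples, not taken from anyone's actual setup):

```shell
# Example go-script fragment: mount an SSD outside the array with TRIM
# enabled via the discard mount option. /dev/sdc1 and /mnt/ssd are
# placeholders -- substitute your own device and mount point.
mkdir -p /mnt/ssd
mount -o discard,noatime /dev/sdc1 /mnt/ssd
```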


A question regarding the btrfs cache pooling ...

 

So right now there are a lot of people running SSDs in their systems for a number of reasons.  The two most popular that I know of are that Plex won't allow an HDD to spin down, and faster VMs. They are also mounted outside the array because they are too small to act as a cache drive and because they need a filesystem that supports DISCARD (TRIM); this is accomplished via the go script, which I know Tom wants to avoid whenever possible.

 

Will btrfs cache pooling help this situation?

 

Will there be a way to mount a drive outside the array via the gui?

 

Can we run a cache pool with a large spinner + smaller SSD and dictate that a specific cache-only share resides on the SSD of the pool, like for Plex and VM images?

We will be experimenting with a variety of setups with the cache pool feature.  Right now in our test environments we are exclusively using SSDs for this, but we will test with spinners as well.
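
For readers unfamiliar with btrfs pooling, a mirrored two-device pool is created along these lines. This is an illustrative sketch only: the device names are examples, and this is not necessarily how the unRAID feature will be exposed in the GUI.

```shell
# Illustrative only: build a two-SSD btrfs pool with both data and
# metadata mirrored (raid1). /dev/sdb and /dev/sdc are placeholders.
mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc
mount /dev/sdb /mnt/cache            # mounting either member brings up the pool
btrfs filesystem show /mnt/cache     # inspect pool membership and usage
```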


Anyone want to translate the OP to English, and to explain which items matter to the average user?

 

If you buy very specific hardware, go through a 40+ hour training course on KVM or Xen, another 40+ hour course on Linux command line, another 40+ hour course on PCI Passthrough, run 2 cables through your house to a few TVs, plus some cat 5 to USB converters (for a remote)... You could put two video cards in your unRAID server and use those instead of buying a cheap Pi or other cheap ARM plex / xbmc device.

 

Or if you want to run things like a router / firewall (like pfSense) or Windows... You could run it on your unRAID machine.

 

Simply put, it's not for normal users unless they have the time and ability to learn some complex things.

 

If you ask me, there are other free virtualization products out there that are years ahead of what unRAID can do. With two hands tied behind their back due to Slackware, using a root RAM file system, and 100,000+ fewer resources (people) than the competition... I don't see them catching up anytime in the near future.

 

Jonp is a bright guy, so pulling it off in Slackware isn't out of the question. That still leaves out the whole WebGUI part, which is 10,000+ times more complicated than what we do with unRAID. If they can't have a virtualization WebGUI like the competition and do things like snapshots, VM migration, iSCSI, etc., I don't see the point in half-assing it, because you aren't going to win new business if you aren't on par with those other products (which are free).

 

I just think they should focus on NAS stuff in version 6, switch to a modern Linux server distro in unRAID 7, and then focus on virtualization. Once they switch distros, KVM and Xen will be FAR LESS complicated. ClarkOS, Proxmox, Open Media Vault, XenServer, Neth Server, etc. focus on their core competencies and spend their time implementing the stuff that other Linux distros and Linux dorks like myself figure out for them.

 

It would make much more business sense to look at distributed file systems, since a lot of people have more than one unRAID server, but what do I know...


Anyone want to translate the OP to English, and to explain which items matter to the average user?

 

If you buy very specific hardware, go through a 40+ hour training course on KVM or Xen, another 40+ hour course on Linux command line, another 40+ hour course on PCI Passthrough, run 2 cables through your house to a few TVs, plus some cat 5 to USB converters (for a remote)... You could put two video cards in your unRAID server and use those instead of buying a cheap Pi or other cheap ARM plex / xbmc device.

 

Or if you want to run things like a router / firewall (like pfSense) or Windows... You could run it on your unRAID machine.

 

Simply put, it's not for normal users unless they have the time and ability to learn some complex things.

 

If you ask me, there are other free virtualization products out there that are years ahead of what unRAID can do. With two hands tied behind their back due to Slackware, using a root RAM file system, and 100,000+ fewer resources (people) than the competition... I don't see them catching up anytime in the near future.

 

Jonp is a bright guy, so pulling it off in Slackware isn't out of the question. That still leaves out the whole WebGUI part, which is 10,000+ times more complicated than what we do with unRAID. If they can't have a virtualization WebGUI like the competition and do things like snapshots, VM migration, iSCSI, etc., I don't see the point in half-assing it, because you aren't going to win new business if you aren't on par with those other products (which are free).

 

I just think they should focus on NAS stuff in version 6, switch to a modern Linux server distro in unRAID 7, and then focus on virtualization. Once they switch distros, KVM and Xen will be FAR LESS complicated. ClarkOS, Proxmox, Open Media Vault, XenServer, Neth Server, etc. focus on their core competencies and spend their time implementing the stuff that other Linux distros and Linux dorks like myself figure out for them.

 

It would make much more business sense to look at distributed file systems, since a lot of people have more than one unRAID server, but what do I know...

In actuality, a lot of these statements are incorrect.  You won't have to go through a whole bunch of craziness to make a Linux VM with passthrough functional and useful.  We've actually gotten the process down to be fairly repeatable without much work from the user at all.  It still requires the right hardware, but so what.

 

We don't have to build a VM manager if we can make VMs like downloading an app.  We also can make it so that if you are an advanced user, you'll like it too.

 

In addition, I don't see anyone else in our space focusing on what we are doing.  Definitely open to seeing links to others focusing on it in a product offering.

 

I'd be careful how quickly you try to translate my posts, grumpy.  You really don't know how far along and how solid we have this working right now, and Slackware has actually been more helpful than hurtful in our cause...


In actuality, a lot of these statements are incorrect.  You won't have to go through a whole bunch of craziness to make a Linux VM with passthrough functional and useful.  We've actually gotten the process down to be fairly repeatable without much work from the user at all.  It still requires the right hardware, but so what.

 

Great to hear. I have all the respect in the world for you, and you without a doubt know your stuff. I look forward to seeing what the end result is.

 

We don't have to build a VM manager if we can make VMs like downloading an app.  We also can make it so that if you are an advanced user, you'll like it too.

 

It sounds like you plan on having a VM Store. Are you going to be housing / maintaining several ISOs / images for us?

 

and Slackware has actually been more helpful than hurtful in our cause...

 

<facepalm>


In actuality, a lot of these statements are incorrect.  You won't have to go through a whole bunch of craziness to make a Linux VM with passthrough functional and useful.  We've actually gotten the process down to be fairly repeatable without much work from the user at all.  It still requires the right hardware, but so what.

 

Great to hear. I have all the respect in the world for you, and you without a doubt know your stuff. I look forward to seeing what the end result is.

 

We don't have to build a VM manager if we can make VMs like downloading an app.  We also can make it so that if you are an advanced user, you'll like it too.

 

It sounds like you plan on having a VM Store. Are you going to be housing / maintaining several ISOs / images for us?

 

and Slackware has actually been more helpful than hurtful in our cause...

 

<facepalm>

Answering questions like these is why we have VMs slated further down the pipe as "official" features.  What we at Lime Tech should provide directly (e.g. VMs themselves) and what we should provide in terms of a platform for others (e.g. plugin authors) is something we have to handle delicately.  The phase we are at now was all about proving the concept.  Optimization, distribution, monetization, and maintenance are additional phases we will reach in due time...


Question, from the OP it sounds like the next beta to be published will only have virtualization updates, and no core NAS updates until further down the road? Did I read that correct? If that's not the case what should we be looking forward to seeing in this next beta release containing core NAS additions/updates?


Question, from the OP it sounds like the next beta to be published will only have virtualization updates, and no core NAS updates until further down the road? Did I read that correct? If that's not the case what should we be looking forward to seeing in this next beta release containing core NAS additions/updates?

The primary core NAS feature in development right now is cache pooling.  The virtualization stuff mentioned here isn't all going into the next beta; rather, this is just an update on where we stand with that work.


A question regarding the btrfs cache pooling ...

 

So right now there are a lot of people running SSDs in their systems for a number of reasons.  The two most popular that I know of are that Plex won't allow an HDD to spin down, and faster VMs. They are also mounted outside the array because they are too small to act as a cache drive and because they need a filesystem that supports DISCARD (TRIM); this is accomplished via the go script, which I know Tom wants to avoid whenever possible.

 

Will btrfs cache pooling help this situation?

 

Will there be a way to mount a drive outside the array via the gui?

 

Can we run a cache pool with a large spinner + smaller SSD and dictate that a specific cache-only share resides on the SSD of the pool, like for Plex and VM images?

We will be experimenting with a variety of setups with the cache pool feature.  Right now in our test environments we are exclusively using SSDs for this, but we will test with spinners as well.

 

Sounds good.  But keep in mind the reason for the HDD/SSD setup is that a non-zero number of people have SSDs too small to act as their cache, let alone pull double duty as cache + VM/Docker store.  So the issue isn't cache pooling with SSDs or HDDs; it is a question of whether the system you're setting up can accommodate the desire to simultaneously have:

 

- a large spinner for cache duty / non-latency-dependent storage

- a smaller SSD for Plex (because it never stops accessing its logs) and latency-dependent VMs/Dockers

 

I mean, the simplest solution is to have a GUI element that lets us mount any drive, outside the array, formatted with an SSD-friendly filesystem.  Shoehorning it into the cache pool situation might be an overcomplication [shrug].

 

I'm harping on this a bit because, as you said, media serving is a big part of your target user base, and if they are using Plex then they will likely want to put their library index onto an SSD, because otherwise it keeps an HDD spinning.


A question regarding the btrfs cache pooling ...

 

So right now there are a lot of people running SSDs in their systems for a number of reasons.  The two most popular that I know of are that Plex won't allow an HDD to spin down, and faster VMs. They are also mounted outside the array because they are too small to act as a cache drive and because they need a filesystem that supports DISCARD (TRIM); this is accomplished via the go script, which I know Tom wants to avoid whenever possible.

 

Will btrfs cache pooling help this situation?

 

Will there be a way to mount a drive outside the array via the gui?

 

Can we run a cache pool with a large spinner + smaller SSD and dictate that a specific cache-only share resides on the SSD of the pool, like for Plex and VM images?

We will be experimenting with a variety of setups with the cache pool feature.  Right now in our test environments we are exclusively using SSDs for this, but we will test with spinners as well.

 

Sounds good.  But keep in mind the reason for the HDD/SSD setup is that a non-zero number of people have SSDs too small to act as their cache, let alone pull double duty as cache + VM/Docker store.  So the issue isn't cache pooling with SSDs or HDDs; it is a question of whether the system you're setting up can accommodate the desire to simultaneously have:

 

- a large spinner for cache duty / non-latency-dependent storage

- a smaller SSD for Plex (because it never stops accessing its logs) and latency-dependent VMs/Dockers

 

I mean, the simplest solution is to have a GUI element that lets us mount any drive, outside the array, formatted with an SSD-friendly filesystem.  Shoehorning it into the cache pool situation might be an overcomplication [shrug].

 

I'm harping on this a bit because, as you said, media serving is a big part of your target user base, and if they are using Plex then they will likely want to put their library index onto an SSD, because otherwise it keeps an HDD spinning.

OK, maybe I misunderstood your original question.  So you want to be able to use an HDD for cache drive functions where you want the speed benefit of a non-array device but don't need the write performance of an SSD.  Then you want an SSD for appdata, where latency is sensitive but your data footprint is smaller because it's really just metadata, right?

 


I understand this OP is geared toward virtualization, thus the thank you's mentioning contributors in that area (my understanding). It wouldn't hurt to have a separate post touching on unRAID contributions, thanking all the people who have contributed so much (non-virtualization based), who have been here a long time and put in a tremendous amount of time.

 

By no means an exhaustive list:

 

JoeL.

WeeboTech/and many other mods work

Speeding_ant

A few that have departed but are not nameless

The bitchers that painfully pushed on Tom

Madburg, for asking why only Supermicro controllers :)

....

So many more (not being near a PC at home, I am leaving out many, many others, as well as a tremendous number of testers).

 

But since virtualization is what counts here (this thread), you nailed this properly, I'm sure. I do feel there are quite a few who have read and will read this OP and just be speechless for a brief moment, as if unRAID didn't exist until the v6 beta and a few new LT members arrived. Bravo; countless others will feel like dogsh^t on the bottom of a shoe.

 

Not being negative, just that your posts are insensitive to anything that predates your arrival. Take it for what it's worth and do a bit better in the future.

 


Thanks for this, JonP; you're making roughly the same points as I was about the real benefits of virtualisation in keeping things manageable and minimising interfaces/dependencies. Needless to say, I agree!

 

A few points arising:

- As far as virtualised Windows etc. goes, I trust you'll remember that there are those of us with microservers, thus no IOMMU, thus no passthrough, who only really want it for low-spec servers running TeamViewer occasionally - rather than playing games whilst serving up a movie at the same time.

- No moves towards being able to transition away from ReiserFS for the array disks?

- Are the notifications/UPS going to be constructed in such a way that they can be extended? I'm still thinking that push notifications to your phone might be a more forward-focused option for those of us with smartphones (98%?). Email is a nice fallback etc. - but ...

Cheers


Question, from the OP it sounds like the next beta to be published will only have virtualization updates, and no core NAS updates until further down the road? Did I read that correct? If that's not the case what should we be looking forward to seeing in this next beta release containing core NAS additions/updates?

The primary core NAS feature in development right now is cache pooling.  The virtualization stuff mentioned here isn't all going into the next beta; rather, this is just an update on where we stand with that work.

 

So UPS support, email support, AFP, etc.. are not making it into the next beta to be released for testing? If that's the case how will we get to test before final at Q3 2014?

 


Question, from the OP it sounds like the next beta to be published will only have virtualization updates, and no core NAS updates until further down the road? Did I read that correct? If that's not the case what should we be looking forward to seeing in this next beta release containing core NAS additions/updates?

The primary core NAS feature in development right now is cache pooling.  The virtualization stuff mentioned here isn't all going into the next beta; rather, this is just an update on where we stand with that work.

 

So UPS support, email support, AFP, etc.. are not making it into the next beta to be released for testing? If that's the case how will we get to test before final at Q3 2014?

All in time.  We are prepping for a more rapid release schedule.  I know it's sometimes hard to see why we approach things certain ways, but there is good reason for the slow and steady.  Know that things like UPS support and notifications will require less development time than things like additional file systems and cache pooling.

 

Keep in mind that even though we publicly beta test, we internally test a lot as well with respect to core features to make sure that our betas have a solid foundation.

 

And while I appreciate your comments on the other folks I need to thank, cut me a little slack, would ya?  I make it no secret that I've only been with LT since April.  I don't know all the contributions that have occurred over the years, but I surely do appreciate everyone's efforts here, virtualization or not.


Thanks for this, JonP; you're making roughly the same points as I was about the real benefits of virtualisation in keeping things manageable and minimising interfaces/dependencies. Needless to say, I agree!

 

A few points arising:

- As far as virtualised Windows etc. goes, I trust you'll remember that there are those of us with microservers, thus no IOMMU, thus no passthrough, who only really want it for low-spec servers running TeamViewer occasionally - rather than playing games whilst serving up a movie at the same time.

- No moves towards being able to transition away from ReiserFS for the array disks?

- Are the notifications/UPS going to be constructed in such a way that they can be extended? I'm still thinking that push notifications to your phone might be a more forward-focused option for those of us with smartphones (98%?). Email is a nice fallback etc. - but ...

Cheers

Absolutely.  I think you will find the majority of your needs fulfilled by Docker instead of headless Linux VMs.  That said, headless Linux VMs work just as well in our testing!

