unRAID Project Update, Core Features, Virtualization, and Thank You's



I've been a member of this community for a few years now and have stayed out of all of these discussions on unRAID 6, since I'm still rocking 5.0.5 and don't have a test machine.  I've been patiently waiting for the point where 6 is stable and the feature mix is finalized to take the plunge.  I've quietly watched while grumpy and ironic and others made great advances on their own with virtualization (and really led the charge in general), and the arrival of new LT employees like jonp, who has done such a great job updating us.  Perhaps I shouldn't have waited for so long to chime in, but now I feel I have to.

Thank you for taking the time to post.

 

Am I really in the minority in agreeing with grumpy, thinking that Slackware is a dead distro and unRAID needs to update to a modern distro?  At the very least, having quick security patches should be a priority for ANY Linux machine, no?

I'm not sure what the definition of a dead distro is, or should be.  Most of the F.U.D. posted regarding slack is completely unwarranted, especially considering how unRaid as a product is designed.

 

The mishmash of packages from various sources compiled for different kernels--it's a mess.  I only run 2 or 3 apps on unRAID nowadays, but making sure they're running efficiently and with recent packages is a constant struggle.  How nice would it be to have a legit package manager?  Or a distro with community support outside of these forums?

Here is a key concept: unRAID is not a distro in the sense you might think of a distro such as Ubuntu, Arch, CentOS, etc.  unRAID is better thought of as an appliance platform.  While there are plenty of hard-core linux experts out there who want to run everything they want on their unRAID server, that is not our target customer base.  My vision for unRAID has in fact always been to try to hide linux from the average user.

 

Like madburg, am I one of only a few that is shocked that basic core functions of a NAS like UPS support and email notifications are still dependent upon community plugins?

Having said the above about trying to create a NAS appliance platform, I've also wanted to set up a way for users to add value, and get kudos for doing so.  I don't mind that some components have been written by some of our community members.  If it does the job, why shouldn't they?

 

You can make a (good) case that I have fallen short of the goal of making it easy to create, integrate, and maintain community-written components.  With the inclusion of technologies such as virtualization and docker, we hope to make big strides in this area.

 

But to address your two examples of notifications and UPS support - yeah well sure, it is on the list to get into unRaid 6.

 

I see the amazing potential of Docker and virtualization (I've been running unRAID as a guest since Johnm shared his build with the community), but really question having GPU passthrough for gaming and video applications taking a development priority to glaring omissions still left out of unRAID.

Jon has a real passion for this and it shows in his posts - just imagine how he is in person!  ;D

 

It might appear that's all we're doing, but most assuredly not.  This is his baby for now and it takes some of his time and little bit of mine, but gotta have some fun and excitement too right?

 

I still don't find NFS stable for my uses, and AFP is pretty much useless as is.  Heck, my Asus router does TimeMachine a heck of a lot better than unRAID, and that's unfortunate.  Thankfully SMB still works well, or else even the NAS functionality of unRAID would be crippled.

Everything is, and always has been, optimized around SMB and that is how I use unRaid personally.  Reality is that both NFS and AFP have certain issues which are impossible to get around, but yes, we are trying to improve the support for those two protocols.

 

It's hard to meet everyone's expectations, but let's not forget what made unRAID what it is today: storage.  That should take at least an equal share of development and innovation as passing through 2 GPUs.  And let's be honest--is that really going to be the majority use case?  Don't most of us use unRAID for our media storage, and already have media player clients and such?

Very good points.  I guess all I would say is continue to watch unRaid 6 development; I am dedicated to making improvements in this area (storage).

 

Forgive the rant.  I just keep on seeing the same things over and over in the forums...

No worries, I appreciate your post, thanks.

Link to comment
  • Replies 102

I personally gave up on trying to get steam running as a guest with unRAID and just bought a PS4 :) Not the fault of unRAID; I just couldn't be bothered to spend that much time on a solution that seems like it is bound together with scotch tape.

 

Kryspy

Link to comment

Hi Guys,

 

Long time user of unRAID, and am in the midst of replacing a 2TB drive from 2010 with a 4TB drive.  I love the product, and have preached about it to anyone who asks about HTPC storage.  We have less wear and tear on our drives than a typical RAID 5 config, less parity overhead, and damn it, it just works well.

 

I love Virtualization, and use it on a daily basis both at home and professionally.  I purposely upgraded my rig to an i7 with 32gb of memory to be able to virtualize more, and it has served me well.  The worries I have revolve around release cycles.....

 

With the storage side, the release cycles don't need to be frequent, since, theoretically, once UPS support and AFP/NFS support are a bit better, the platform is not going to evolve rapidly.  It's not like this is a NetApp where we are working daily on coming up with new ways to solve issues.  Sure, down the road we can add deduplication and such, but even so, we are looking at release cycles of perhaps quarterly betas, and release candidates even less often?  The underlying storage hardware doesn't change often, and the changes that have to be supported at the hardware level are less frequent.

 

With the virtualization side there are much more frequent updates to Xen, KVM, etc.  The hardware changes constantly, with new video cards and drivers always being released.  For instance, the Intel HD GPUs have been a PITA to get working under Xen with PCI passthrough.  The HD4000s finally started working under a newer patch, and then the HD4600s came out and they would not work.  It's an issue I face right now with Beta 6 in that I cannot get the HD4600 running.  Even the ATI card that works in Beta 5 will not run in Beta 6.  From what I have read we just happen to be missing some Xen patches in Beta 6, which I totally get.

 

So, unless you make the virtualization piece modular, how do you work around the update cycles?  Will we just see more frequent updates that only have virtualization advances?  Or will the virtualization updates only come as the storage piece comes along?

 

The only reason I ask is that the approach you take has some impact on how I use my unRAID server in the future.  If virtualization updates for Xen, for instance, will be infrequent, it may just make more sense for me to use my unRAID server only for storage.

 

Does that make sense?  I honestly don't care which way this goes, as long as we have some heads up so I can plan accordingly.

 

Thanks!

 

-Marcus

Link to comment

I personally gave up on trying to get steam running as a guest with unRAID and just bought a PS4 :) Not the fault of unRAID; I just couldn't be bothered to spend that much time on a solution that seems like it is bound together with scotch tape.

 

Kryspy

I went back to a dedicated gaming rig too. As you insinuate, it's just a bit too fragile for even home usage.

Link to comment

I still don't find NFS stable for my uses, and AFP is pretty much useless as is.  Heck, my Asus router does TimeMachine a heck of a lot better than unRAID, and that's unfortunate.  Thankfully SMB still works well, or else even the NAS functionality of unRAID would be crippled.

Everything is, and always has been, optimized around SMB and that is how I use unRaid personally.  Reality is that both NFS and AFP have certain issues which are impossible to get around, but yes, we are trying to improve the support for those two protocols.

 

Thank you for the post.  I really appreciate the direction you're going with Docker instead of the old style plugins/addons that had a tendency to have collisions.

 

What you've said about your relationship with NFS, AFP, and SMB is very telling here.  I fully agree that SMB seems to be the future, but a lot of things still strongly favor NFS.

 

It does lead me to a question though.  What are your test platforms for each protocol, and do you do both short term tests (mounting NFS on XBMC to stream a single movie) and long term tests (mount an NFS share on an ubuntu box and test a week later for stale file handles)?
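A long-term test along those lines could be as simple as a cron-run script like this sketch (bash assumed; the default mount point is a placeholder, substitute your own NFS mount):

```shell
#!/bin/bash
# Sketch of a long-term stale-handle check.  Walks the share and counts
# paths that fail stat(2); on an NFS mount a failure here is typically
# ESTALE ("Stale file handle").  Run it daily for a week from cron to
# catch handles going stale between scans.
mountpoint="${1:-/mnt/nfs/media}"   # placeholder path
stale=0
checked=0
while IFS= read -r -d '' f; do
    checked=$((checked + 1))
    # stat fails with ESTALE if the server-side inode changed under us
    if ! stat -- "$f" >/dev/null 2>&1; then
        echo "STALE: $f"
        stale=$((stale + 1))
    fi
done < <(find "$mountpoint" -type f -print0 2>/dev/null)
echo "$checked paths checked, $stale stale"
```

Any ESTALE paths show up as STALE lines, so a week of logs makes it obvious whether the problem is creeping in.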

 

In my case, I'm pretty willing to bend to use what works best, but I need to know what those are in order to make the change.

 

Thanks,

 

-Ben

Link to comment

I still don't find NFS stable for my uses, and AFP is pretty much useless as is.  Heck, my Asus router does TimeMachine a heck of a lot better than unRAID, and that's unfortunate.  Thankfully SMB still works well, or else even the NAS functionality of unRAID would be crippled.

Everything is, and always has been, optimized around SMB and that is how I use unRaid personally.  Reality is that both NFS and AFP have certain issues which are impossible to get around, but yes, we are trying to improve the support for those two protocols.

 

Thank you for the post.  I really appreciate the direction you're going with Docker instead of the old style plugins/addons that had a tendency to have collisions.

 

What you've said about your relationship with NFS, AFP, and SMB is very telling here.  I fully agree that SMB seems to be the future, but a lot of things still strongly favor NFS.

 

It does lead me to a question though.  What are your test platforms for each protocol, and do you do both short term tests (mounting NFS on XBMC to stream a single movie) and long term tests (mount an NFS share on an ubuntu box and test a week later for stale file handles)?

 

In my case, I'm pretty willing to bend to use what works best, but I need to know what those are in order to make the change.

 

Thanks,

 

-Ben

 

I can tell you that in all my home usage and setup, I use NFS exclusively for media sharing.  Not because it's "better" in my mind, I just always have done it that way and don't seem to have the same issues as others do with stale file handles and the like.

Link to comment

What type of NFS clients are you using?  From my Mac, over about a week or so, I start losing directories and eventually get a stale file handle.  It used to work fine under v5, but v6 has never worked well for me under NFS for a Mac.  SMB worked well for the Mac under v6b1, but after the ACL entry, I get posix errors from Makemkv.  I have been commenting out that line in the smb.conf and it has been working pretty well, except I can't execute any files from my windows servers, but I don't do that very often...

Link to comment

I'm wondering about these supposed issues with NFS, as I've never had an issue either.

 

I use it exclusively, as that is what the Pi prefers.

I have 1 Raspberry Pi with RaspBMC and 2 Windows clients running Windows 8 with XBMC, and everything works very well.

 

 

I believe it has to do with user shares and files/directories that have not been accessed in a while.

If you have cache_dirs running on the user share, it may (I don't have a definitive answer on this) alleviate the problem by keeping those inodes/file handles busy.
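A stripped-down sketch of the cache_dirs idea (NOT the actual plugin) just re-walks the share on a timer so the directory inodes stay hot in the kernel's dentry cache; the share path, interval, and scan count below are placeholders:

```shell
#!/bin/bash
# Minimal sketch of what cache_dirs does: periodically re-walk a share
# so directory inodes stay cached, which (speculatively) keeps NFS file
# handles built on them from going stale.
share="${1:-/mnt/user/Media}"   # placeholder share path
interval="${2:-300}"            # seconds between scans
scans="${3:-1}"                 # pass 0 to loop forever, as the plugin does
i=0
while :; do
    # the inode lookups themselves are the point; the listing is discarded
    find "$share" -type d >/dev/null 2>&1
    i=$((i + 1))
    [ "$scans" -gt 0 ] && [ "$i" -ge "$scans" ] && break
    sleep "$interval"
done
echo "completed $i scan(s) of $share"
```

The real plugin is considerably smarter (adaptive depth, memory pressure checks), but the core trick is just this repeated walk.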

Link to comment

I just started running cache_dirs again a few days ago.  I will test again over the next week with NFS to see if I still see the issues.  I did test with versions 3 and 4 and that made no difference.  I have only noticed this with the Mac clients (they are the only ones using NFS, since I had issues with SMB).

Link to comment

unRAID is better thought of as an appliance platform.  ...My vision for unRAID has in fact always been to try to hide linux from the average user.

 

And that is one thing I LOVE about unRAID.  I'm very much a fan of appliance software.  I use OpenELEC for my XBMC installations because of that very aspect.  Easy installation.  Easy management.  Sometimes you just want your software or machine to just do one thing and do it well and easily.  The first time I installed unRAID I was amazed how fast and easily I was up and running.

 

I've also wanted to set up a way for users to add value, and get kudos for doing so.  I don't mind that some components have been written by some of our community members.  If it does the job, why shouldn't they?

 

I know you've already agreed to integrate some of the more important addons to unRAID 6 so I am not trying to beat a dead horse, but I wanted to add my perspective.  It's certainly a positive to allow users to develop and add components to unRAID.  A lot of software allows/takes advantage of that and it's generally a good thing for users.  The issue I have with that at times is when it comes to "core" or important features (subjective I know).  My fear is always that user Joe Blow gets bored and/or moves on and stops developing the important feature(s).  Then the users who relied on said feature are SOL.  When the primary software developer includes them into the product there is more security for users that they will be maintained and continue to function.

 

I appreciate the features that you guys have been adding like virtualization and Docker even though I don't see myself using them anytime soon.  I'm already into virtualization (ESXi since it's pretty easy) so it's even more my preference that unRAID remain simple and focused on storage since I can run my other programs in VM's.  It's my wish to see unRAID become the best home NAS application on the market.  I can't wait to see support for multiple parity drives, support for better file systems on array drives along with the inclusion of "core" components like UPS, email notifications and preclear.  I would also love to see automatic SMART testing and reporting to help notify users of impending failures without user intervention.

 

Thanks for all you've done, Tom.  There have been some bumps in the road, but recent developments are quite promising and I think most of us are excited to see what the future holds.  I know I am.

 

P.S. I hope the streaming interruption bug gets fixed soon too.  (yes, I am beating that dead horse.  LOL!)

 

 

Link to comment

My fear is always that user Joe Blow gets bored and/or moves on and stops developing the important feature(s).  Then the users who relied on said feature are SOL.  When the primary software developer includes them into the product there is more security for users that they will be maintained and continue to function.

 

Agree with that.  While it's great that the community is active, there is no real commitment from the community to continue development or support of features created by the community.

 

I look at the Dockerman plugin as an example.  It's so good at what it does that I think it would be silly for LT to spend time developing their own version.  Ideally they choose to adopt the community version, add it to the release, and own the long-term support of it.  Obviously LT would need to look at the code and decide if they can support it or not, but that's something I think needs to be looked at.  LT aren't like Netgear, QNAP, etc. where they have a stable of developers.  There are 3 of them.  Leverage the community to enhance the product, but make sure whatever is leveraged is supported by LT.

Link to comment

What type of NFS clients are you using?  From my Mac, over about a week or so, I start losing directories and eventually get a stale file handle.  It used to work fine under v5, but v6 has never worked well for me under NFS for a Mac.  SMB worked well for the Mac under v6b1, but after the ACL entry, I get posix errors from Makemkv.  I have been commenting out that line in the smb.conf and it has been working pretty well, except I can't execute any files from my windows servers, but I don't do that very often...

No Mac clients that's for sure.  I use other unRAID servers and XBMC to connect via NFS.

Link to comment

This is why, as users of FOSS, we should all take a moment to look at what license the code we love is running under. Not only is this good because it opens people's eyes to FOSS/GPL etc., it is something non-coding devs can actively get involved with. It is my experience that almost all code posted for people to use freely without a license lacks one just because the dev forgot or didn't think it mattered.

 

This doesn't solve the problem completely, but it does ensure there are no barriers stopping people from adopting abandoned code.

 

Join the campaign, look for the license ! :P

 

 

Link to comment

I can tell you that in all my home usage and setup, I use NFS exclusively for media sharing.  Not because it's "better" in my mind, I just always have done it that way and don't seem to have the same issues as others do with stale file handles and the like.

 

Where I've run into problems is with a Linux host (under Xen) mounting shares from the Dom0 unRAID host via NFS, and then *eventually* producing stale file handles.  CIFS/SMB, as it's evolving, handles things in a much friendlier, more userland-centric manner.

 

Tradeoffs either way, since CIFS is a bit more Windows-centric - at least the last time I dug into it, it only supported single-tier permissions instead of the 3-tier permissions we like to have in Linux and MacOS filesystems, and had some filename restrictions that aren't seen elsewhere.

 

Once I get some free time, I'm going to try to migrate over to the latest beta and get everything (OpenVPN, Mylar, Transmission, SabNZBd, Sickbeard, Couchpotato, Apache) working in docker, and at that point I expect things to go a bit more smoothly, mostly because with Docker I don't need to rely on a network filesystem at all.
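As a sketch of that last point: when the app runs in a container on the unRAID host itself, a bind mount hands it the array directly, so no network filesystem and no stale handles. The image name and paths here are illustrative, not a real published image:

```shell
docker run -d --name sabnzbd \
  -v /mnt/user/Downloads:/downloads \
  -v /mnt/user/appdata/sabnzbd:/config \
  -p 8080:8080 \
  example/sabnzbd   # illustrative image name
```

From inside the container, /downloads is just a local directory, even though it lives on the array.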

Link to comment

I'm wondering about these supposed issues with NFS, as I've never had an issue either.

 

I use it exclusively as that is what the Pi prefers,

I have 1 Raspberry pi with RaspBMC, and 2 Windows clients running Windows 8 and using XBMC and everything works very well.

 

 

I believe it has to do with user shares and files/directories that have not been accessed in a while.

If you have cache_dirs running on the user share this may (I don't have a definitive answer on this) alleviate this by keeping those inodes/file handles busy.

 

It helps quite a bit.  A lot of it has to do with the way NFS is implemented.  It's not userland, and the file handles (I believe this is correct) link directly to inodes.  If something about those inodes changes, the file handle goes stale and is supposed to refresh.  Unfortunately, that all happens at the kernel level, and is much harder to troubleshoot than if it were userland.

Link to comment

NFS works fine for me sharing to Popcorn Hours/WDTVs, but I ran into the stale file issue with a domU running Ubuntu/Plex.  The stale handles would cause items to disappear from the Plex library with each scan.  I tried tweaking a couple of mount parameters in fstab to fix this, but eventually went to SMB shares, and that solved the problem for VMs.
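For anyone making the same switch, the fstab change looked roughly like this; the server name, paths, and options are illustrative, not copied from my actual config:

```
# /etc/fstab on the Ubuntu domU
# before: the NFS v3 user-share mount that kept going stale
#   tower:/mnt/user/Media  /mnt/media  nfs   vers=3,soft  0  0
# after: the same share over CIFS/SMB, where handles are per-session
//tower/Media  /mnt/media  cifs  credentials=/root/.smbcred,iocharset=utf8,uid=1000  0  0
```

The credentials file just holds the share username/password so they stay out of fstab.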

Link to comment

Here is a key concept: unRAID is not a distro in the sense you might think of a distro such as Ubuntu, Arch, CentOS, etc.  unRAID is better thought of as an appliance platform.

 

Docker (unRAID bzimage) +  Image / Distro (bzroot)  = Appliance Platform

 

I'm not sure what the definition of a dead distro is, or should be.

 

Let me help you with that. Using Docker Hub (Appliances) as an example:

 

Slackware = 212 Downloads

That covers all of a whopping 5 Slackware images and just 1 app, and 4 of those images are by the same guy.

 

Debian =  226,298 Downloads

Using just 2 of the base Debian images and none of the THOUSANDS of Debian-based apps.

 

CentOS = 101,220 Downloads

Using just 2 of the base images and none of the THOUSANDS of CentOS based Apps.

 

Ubuntu = 657,913 Downloads

Using just 2 of the base images and none of the THOUSANDS of Ubuntu based Apps.

 

Of the 14,000+ images that all those developers, DevOps folks, and Linux experts (Redis, MySQL, WordPress, Node.js, etc.) created, only 1 chose Slackware for their appliance platform.  In 4+ months it has a whopping 14 downloads, and it's not even running the latest stable version of Apache.

 

THAT is what we in the business call the definition of DEAD, if you are Slackware.  Much like ReiserFS in the file-system world, which is why ReiserFS hasn't been included in Red Hat / CentOS in years.

 

If we are to believe that Slackware is a superior Appliance Platform... Shouldn't we at least eat our own dogfood and switch our Docker base image to it?

 

You can suck the unRAID bzroot (Slackware) image right into Docker without even having to download an image.  After you trim the bzroot down by removing the Linux kernel and a few other things, you can get it down to 275MB or so.  Upside = we save several MBs per container.  Downside = you will have one hell of a Dockerfile, and updating it will be a MFer.
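For the curious, the recipe would look something like this hypothetical sketch; the compression of bzroot has varied between releases, and the paths and image tag here are made up:

```shell
mkdir /tmp/rootfs && cd /tmp/rootfs
# bzroot is a compressed cpio archive; substitute zcat if xzcat fails
xzcat /boot/bzroot | cpio -idm 2>/dev/null
rm -rf lib/modules lib/firmware     # the container never needs these
tar -c . | docker import - local/unraid-slack:base
docker run --rm local/unraid-slack:base cat /etc/slackware-version
```

That gets you a Slackware base layer without touching Docker Hub, which is exactly the dogfooding experiment described above.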

Link to comment

Docker (unRAID bzimage) +  Image / Distro (bzroot)  = Appliance Platform

 

Using slackware for this makes sense because the kernel plus a relatively thin set of user-space tools, enough to run docker, is very easy to set up and very stable.  With slackware you know exactly what is being installed and how it all works, with minimal unneeded code.

 

What's a distro?  It's just a collection of packages.  All the "distros" out there, ubuntu, centos, arch, slackware, all use most of the same code.  In the case of slackware, these packages are actively maintained, e.g.:

http://slackware.cs.utah.edu/pub/slackware/slackware64-current/

 

Take a look at how it's put together and compare with other distros: same code.

 

The major difference in distros is how package management is done.  Other differences include how system startup/initialization takes place.  If you are running a full "distro" such as ubuntu with a desktop, multiple user logins, traditional file system layout on a system disk, sure it's nice to be able to install new stuff using 'apt-get' or equivalent.  We use ubuntu for other purposes and it's fine for that.

 

In the case of unRaid, we want to know exactly everything that's being installed, striving for stability.  You can disagree with that approach, but it works.  Sure, now people want to get more out of their hardware investment, that is, run more types of applications on their NAS device.  Ok, slackware is not great for that if you have to download numerous packages to get the job done.  This has been a big and growing issue for plugin developers and we recognize that.

 

But docker changes all that.  Jon and I recently met up with the docker guys in San Francisco last Tuesday and got a look at their road map.  Docker really is a disruptive technology and, in our case, it's ideal for letting people easily do more with their NAS hardware.

 

Of course slackware is not appropriate for a container base image.  For these you do want more of a kitchen-sink approach so that you can rapidly install all needed dependencies for your application running in the container.  What's great about this, though, is that using copy-on-write, multiple apps can use the same base image, so you minimize duplication.  And even better: these containers run independent of each other, so you could even have different versions of the same packages running in different containers on the same system without them fighting each other.
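To illustrate the copy-on-write point, two containers started from the same base image share that layer on disk while diverging only in their own writable layers. The container names and package choices below are placeholders:

```shell
# both containers start from the same base layer, stored once on disk
docker run -d --name app-a ubuntu:14.04 sleep 1d
docker run -d --name app-b ubuntu:14.04 sleep 1d
# each installs its own version of a package into its own writable
# layer; the two never see or conflict with each other
docker exec app-a sh -c 'apt-get update && apt-get install -y python2.7'
docker exec app-b sh -c 'apt-get update && apt-get install -y python3'
```

`docker images` still shows a single copy of the base, no matter how many containers are running from it.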

 

This issue of which "distro" is appropriate for unRaid platform in a docker world is largely irrelevant.  What matters is that it's stable and thin.

 

Finally, this is not the thread, and I don't have the time, to refute all the misinformation being put out here lately.  Take a look at BRiT's sig to see what I mean.

Link to comment
I use other unRAID servers and XBMC to connect via NFS.

 

Like you, JonP, I use nfs exclusively - my systems are all running Linux in one form or another, and I have no wish to add smb.

 

In its current state, unRAID nfs is almost totally solid with typical xbmc/movie streaming.  I also keep all my xbmc configuration and art files on unRAID nfs user shares - this is the part that's a little flaky, but not a significant problem.  AFAIAA, xbmc has no plans to add nfsv4 capability - I raised it on their discussion board, and one of the developers was adamant that there is no need for v4.

 

However, the major problem with stale file handles occurs when accessing the user shares from other Linux platforms (including VMs).  These are the platforms where it would be easy to use nfsv4, but I'm still not sure what is needed to add v4 support to unRAID.  Tom has already stated that nfsv4 would not suffer from the stale file handle problem.  I'm fairly certain that adding v4 support doesn't preclude the use of v3 (for the v3-only clients) - is it possible for users to add v4 capability to the current unRAID releases, or can LT add the necessary support to facilitate v4?
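On the client side, testing what a given release will negotiate is quick; assuming a server named tower exporting /mnt/user/Media (both illustrative):

```shell
# try v4 first, fall back to v3 if the server doesn't offer it
sudo mount -t nfs -o vers=4 tower:/mnt/user/Media /mnt/media ||
sudo mount -t nfs -o vers=3 tower:/mnt/user/Media /mnt/media
nfsstat -m    # shows which version was actually negotiated
```

Since v4 and v3 can be served side by side, the v3-only clients (xbmc and the like) keep working while the Linux boxes move to v4.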

Link to comment
