The (un)official unRAID 6.x plugin discussion thread




I'm lost as to what the debate here is. Putting code into VMs provides protection to unRAID from poorly behaving code. If I were running a P4 I'm not sure I'd be trying to run anything resource intensive at all to worry about crashing. I learned my lesson about "crossing the beams" long ago and keep my unRAID install clear of anything that might bring it down. Is this really debatable?

 

I could run a far less powerful machine but then I'd need more of them to do what I want, hence my reason for running the machine I do! I went from multiple machines and additional code running on my desktop to a single machine with less chance of my storage being affected by misbehaving "plugins". I can spin up additional VMs quickly to experiment and revert snapshots to correct screw-ups.

 

The hardware gets greater utilization this way, it's why corporations are virtualizing and it makes sense for many home users too.

 

Note: the entire machine is powerful; the VMs themselves are limited in the resources they're given. Is that perhaps not understood?

 

 

Sent from my iPad using Tapatalk


I'm lost as to what the debate here is. Putting code into VMs provides protection to unRAID from poorly behaving code. If I were running a P4 I'm not sure I'd be trying to run anything resource intensive at all to worry about crashing. I learned my lesson about "crossing the beams" long ago and keep my unRAID install clear of anything that might bring it down. Is this really debatable?

 

No debate. We need to qualify subjective statements like "it runs fine on my machine" because that person could have a massive amount of resources (as shown).

 

It's useful to know that VMs run fine with 32GB of RAM. It's not useful to know that they run fine without knowing on what.

 

I could run a far less powerful machine but then I'd need more of them to do what I want, hence my reason for running the machine I do! I went from multiple machines and additional code running on my desktop to a single machine with less chance of my storage being affected by misbehaving "plugins". I can spin up additional VMs quickly to experiment and revert snapshots to correct screw-ups.

 

The hardware gets greater utilization this way, it's why corporations are virtualizing and it makes sense for many home users too.

 

One of the issues that plugins have (as they stand today) is quality assurance. Wouldn't it be nice to have quality, reliable plugins instead of burying the problem in virtualization?

 

Plugin quality aside: many users may want to take advantage of virtualized environments. That's fine. What we need is a universal solution that works for all users and doesn't require a lot of peripheral knowledge to use. As cool as virtualization of plugins is, I'm not sure that it's a viable solution for modularity.

 

Note: the entire machine is powerful; the VMs themselves are limited in the resources they're given. Is that perhaps not understood?

 

FWIW your VMs have more resources than my entire machine.


Again I'm puzzled - you seem to think that my VMs have 32GB of memory. What the VMs run on makes little difference; what they are granted and consume DOES. How about "it runs fine in MY VM"? Does that work for you? My unRAID VM has been granted 4GB of memory and one CPU core - it's consuming less than a gig of memory while idle - 17%. Pushing a file to it I see 66% usage, peaking at 2.7GB used. Max CPU usage is 800MHz.  :o THOSE are the resources my VM is using; how you find this to be "massive" is beyond me. You're focused on the host machine and not on what the VM has been granted, and it's silly.

 

The VM that runs Sick, SAB, and the rest of the kitchen sink? It has 1GB of memory granted to it and 2 cores available. The CPUs average 16MHz apiece and have maxed out at 160MHz apiece; memory usage peaked at 17% in the last few hours, but it doesn't look like it's downloaded anything, so that's no biggie.

 

Yup, I've buried problems in virtualization, haven't I? More accurately, what I've done is separate out code that's potentially problematic into its own sandbox - that's FAR from a "hack". Any of those programs that I use to do things other than serve files could go crazy and crash - unRAID would remain stable and functional, none the wiser. My data is important to me, thus I don't mix in herds of other disparate programs with its operation.

 

Alternately I could've shoehorned these 6+ programs into the same operating system environment as unRAID and run it off a cheap 2 core or less Celeron - hardware I used for years. Then when some program freaked out and whacked emhttp or ate up all of my memory I'd have been pulling hair out trying to get it all sorted and resuscitated. Been there, done that - no thanks. If this hardware represents your machine then it's no wonder my VMs have more resources. Mine do because I chose to give them that and I could obviously have given them less but saw no reason. At some point trying to squeeze the last drop out of a lemon makes no sense - penny wise can be pound foolish.

 

I can agree that virtualization isn't for everyone; I didn't do it myself until about a year ago, and I invested a decent amount in hardware - I could've done it for far less. I disagree that we need a one-size-fits-all solution that's dumbed down to the lowest common denominator. If someone is hell-bent on running something as complex as Sick\SAB\Couch\Headphones\Maraschino together, then I'd contend they need to invest some time in learning how it works - jump-starting from a mostly pre-configured VM makes sense and is what I did on my second or third iteration. I certainly agree that we need to come up with a way to better manage what plug-ins we do use; I'll try Boiler on a test system eventually, but I run very few things alongside unRAID. The community will also need a way to manage and provision VMs if unRAID becomes a virtualization host using either KVM or Xen - I will certainly test and provide input where appropriate for this too once I've got test hardware ready.

 

Note that any CPU and RAM not being used by these VMs is available to others that request it, and that I have 4 other active VMs currently doing things I wouldn't want mixed in with unRAID either. At various times I spin up OSX in a VM to tinker with; I have a VM with a ClearOS firewall configured that I'm considering using, an OpenIndiana setup, and more. Stop focusing and sneering at the capabilities of the whole machine as if it's just been built for unRAID, and recognize that most of those resources are being used for other things. unRAID gets only token amounts of resources on my system, and I'd suspect much the same for Sparkly as he's even more resource-constrained than I am. Honestly, his system could likely run all of my stuff and not break a sweat either - I need to find more toys to run  ;D

 

P.S. Sparkly, thanks for mentioning HTPC Manager and XenServer. I hadn't heard of either of these! I seldom use Maraschino but will look at the manager you use. XenServer, on the other hand, looks like something I may find very useful, as I've been looking for an easy way to play with Xen to better understand it vs ESXi, and this looks easier than using a standard distro and gluing bits to it :-)

 

P.P.S. HTPC Manager kicks the crap out of Maraschino - thanks! :D


I have an i5 3470 and 16GB of non-ECC RAM in my setup; quite modest tbh.

 

With this I run Arch Linux as the host OS (which runs XBMC and the Xen hypervisor). The host has 2 vCPUs and 2GB of RAM.

 

Next is WINDOWS. THE most resource-hogging OS of them all. This has 4 vCPUs and 8GB of RAM; you've seen my YouTube video earlier in this thread for a performance indication of gaming (yes, gaming).

 

Then I have a Usenet VM (again Arch) which has 2 vCPUs and 2GB of RAM and runs everything else like SAB, Sick, Couch, Plex (including transcoding duties) and so on. PS: I have a 150Mb cable connection and it can easily max that out whilst doing a Plex stream concurrently.

 

Finally, unRAID as a VM. 1GB of RAM, 1 CPU. Parity checks in the 90MB/s region.

 

I have tried to outline the most stressful thing each VM does here and show real world performance. Compartmentalising components of your system is in fact what enterprise does. They have the storage on the host and then each VM has access to that storage so that if a VM falls over for any reason the other 20 VMs don't AND the storage is still available.
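For anyone wondering what "granting" a VM a small slice of the host looks like in practice, here's a rough sketch of a Xen `xl` guest config along the lines of the 1GB/1-CPU unRAID guest described above. The name, disk device, and bridge are made-up placeholders, not anyone's real setup:

```shell
#!/bin/sh
# Hypothetical Xen domU config for a deliberately small unRAID guest.
# Device paths and the bridge name are assumptions for illustration only.
cat > /tmp/unraid.cfg <<'EOF'
name   = "unraid"
memory = 1024    # MB granted to the guest, no matter how much the host has
vcpus  = 1       # one core is plenty for a file server
disk   = [ 'phy:/dev/sdb,xvda,w' ]
vif    = [ 'bridge=xenbr0' ]
EOF
# On a Xen host you would then start the guest with:
#   xl create /tmp/unraid.cfg
```

The point being: however beefy the host, the guest only ever sees what the `memory` and `vcpus` lines grant it.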

 

Do you get it now?

 

Sent from my Nexus 5 using Tapatalk

 


Both Plugins and VMs have their place.  My rock solid unRAID with sab/sb/cp/transmission/flexget/MySQL/headphones/headless xbmc would suggest I don't need a VM but when I have the time I will look to see if I can go that route without too much effort.

 

What does sound like the perfect hybrid of VMs and Plugins is Docker.  I'd love to see unRAID head down that path as I reckon it's the way to go for those that don't need a full VM.


My main issue with plugins is how they affect the ability to stop the array and thus perform a clean shutdown. If I can shift the applications to a VM then I can either hibernate the VM or force a shutdown without causing issues with the array shutting down. I have also seen memory issues, with SAB using as much as 3GB of RAM in one instance and thus causing problems for the entire system.

 

I would suspect it's actually unrar using the RAM, not SAB directly. Giant rar set?

 

zoggy, no it wasn't an unrar or par process that was causing the issue with SAB. I'm fairly confident the issue is with a large queue; at the time I was seeing excessive memory usage I had around 150 items queued, and whilst this does seem high I don't really see why SAB struggled so much. After all, a queue is just a list of items to process, right? I'm not asking SAB to concurrently process all items in the queue. It's a bit odd; I know SAB does try to prevent blocking items in the queue... but still.

 

I did do some googling and came across the flag in SAB "Only Get Articles for Top of Queue" to reduce memory usage. Unfortunately, my queue had gone down by the time I found this out, so it's untested as to whether it fixes my issue.

 

Having said all of the above, my brother-in-law and I STILL have shutdown issues with unRAID from time to time, normally caused by one of three Python-based plugins (CP, SB, SAB), and thus I take the rather harsh approach of killing all Python processes using a self-made script before attempting a shutdown/reboot of my server. Since I've taken this approach I've not had a single shutdown issue. Obviously this is NOT ideal  :-\
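For reference, a "kill the stuck daemons before shutdown" script of the kind described might look like this sketch. The process patterns in the comments are assumptions; adjust to whatever your plugins actually run as:

```shell
#!/bin/sh
# Rough sketch of a pre-shutdown cleanup: ask nicely with TERM first,
# wait a moment, then force-KILL anything still matching the pattern.
kill_matching() {
    pattern="$1"
    pkill -TERM -f "$pattern" 2>/dev/null
    sleep 2
    pkill -KILL -f "$pattern" 2>/dev/null
    return 0
}

# e.g. before a reboot (these daemon names are made up for illustration):
#   kill_matching 'SABnzbd.py'
#   kill_matching 'SickBeard.py'
#   kill_matching 'CouchPotato.py'
#   powerdown
```

Matching specific daemon scripts rather than every `python` process avoids taking down anything unrelated that happens to be written in Python.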


I'll chime in and risk the now infamous Lime Technology Forum flames to say this:  unRAID is pretty complex stuff for me already.  Adding a layer of complexity such as VMs, combined with a potential need for additional memory, processing power, etc, is going to be a show stopper.  I use my unRAID server as a media repository and provide an alternative location to backup some important files.  I'm not curing cancer or intercepting the world's emails.

 

While I'm certain some of you have a bona fide need to maintain absolute integrity for the unRAID OS with an absolutely clean install, I'm not sure that's the majority use case.  Forced introduction of more complexity will discourage adoption by new users and likely drive off existing customers who are happy enough with the current setup - especially considering the small-business nature of Lime Technology and the unpredictability of its development cycle.

 

My two cents.


Here's the important bit, phil:-

 

Forced introduction of more complexity....

 

There is no forced introduction; you can run VMs if YOU want, and if you don't want to do that then run plugins. It really is that simple. I don't see plugins going away any time soon, so you need not worry.


Here's the important bit, phil:-

 

Forced introduction of more complexity....

 

There is no forced introduction; you can run VMs if YOU want, and if you don't want to do that then run plugins. It really is that simple. I don't see plugins going away any time soon, so you need not worry.

 

The concern, though, is that plug-in writers - the technical people we less-technical users depend on for the plug-ins - might decide to go the VM route, leaving fewer plugin choices for the masses.

 

I guess we need to see how the landscape changes with the enablement of VMs and which direction the community takes the value-add services and tools.


Here's the important bit, phil:-

 

Forced introduction of more complexity....

 

There is no forced introduction; you can run VMs if YOU want, and if you don't want to do that then run plugins. It really is that simple. I don't see plugins going away any time soon, so you need not worry.

 

The concern, though, is that plug-in writers - the technical people we less-technical users depend on for the plug-ins - might decide to go the VM route, leaving fewer plugin choices for the masses.

 

I guess we need to see how the landscape changes with the enablement of VMs and which direction the community takes the value-add services and tools.

 

I understand your concern, dalben. I still believe that whilst there is a large user base who want the simplicity of plugins (and to be honest, plugins are quite user-friendly as far as installation goes), they will be actively worked on. There are other potential solutions people are looking at too, such as Docker, and also the use of Boiler to reduce the problems caused by conflicting Slackware packages, so it's all still very active right now. I think the only thing for sure is that plugins need to evolve into a more stable form; only time will tell what that ends up looking like.


SABnzbd is a resource hog (CPU/RAM) partly because it was written in Python (slow). nzbget was written in C and has extensive optimizations to reduce resource usage.

 

I was involved (somewhat) in the early development of SABnzbd and most people were running it on PCs with reasonable levels of CPU and RAM. Around January 2008, I got SABnzbd running on my LinkStation NAS, but it consumed a lot of resources.

 

About 6 months later, I got nzbget running on the LinkStation.

In 2009, I built a couple of unRAID servers, but still used the LinkStations for downloading as they were low-power and the unRAID servers I built from old components were power-hogs, so were switched on infrequently for archiving only.

 

In March of 2011, I bought my first HP MicroServer and installed unRAID on it. Because the MicroServer used so little power (20 to 30W), it meant I could run it 24/7 and sell the LinkStations.

 

About a month later, I got nzbget running in unRAID on the MicroServer. overbyrn has now taken over that project, as he's an actual programmer, rather than an amateur hacker like me.  :D

 

At that time, I did a little benchmarking on the Microserver (dual core 1.3GHz AMD CPU):

 

CPU usage (measured by top)

SABnzbd: 40-50%

nzbget: 7-14%

 

RAM usage when downloading (1GB installed)

SABnzbd: 5.6%

nzbget: 0.3%

 

nzbget was actually originally written to be run on a router (something like an Asus router IIRC), so was designed from the outset to consume low levels of resources.

 

If you have a high-powered machine, you probably won't care about using hefty resources for SABnzbd, but if you are running a low-power server, nzbget is the way to go.

I also found that if you have a fast download speed, SABnzbd may not be able to cope, even if running on a fast machine. I couldn't max out my 120Mbps line no matter what I did with SABnzbd, but I was able to do so with nzbget.

 

If you've looked at nzbget in the past and didn't favour it over SABnzbd, it's worth looking at again, as not only does it have most of the features SABnzbd has, it has some additional ones (e.g. fast de-obfuscation) that make it probably the best usenet downloader for Linux.


One of the issues that plugins have (as they stand today) is quality assurance. Wouldn't it be nice to have quality, reliable plugins instead of burying the problem in virtualization?

 

It would be - but I contend that the only way that will happen is if limetech takes the lead and provides an official, supported plugin framework.

 

I can't see that happening.

 

Otherwise it's left to the community and that only leads to the fragmentation we have today.

 

Where we *do* have high quality, reliable packages (/plugins) is in other linux distros so why reinvent the wheel?

 

I understand the non-technical user aspect but you have to be honest and say that as soon as you're using *any* plugin in unraid you're off the reservation and into technical land. There is no way for you to install them nicely from the webgui so you have to roll your sleeves up at that point.

 

So if you're installing a bunch of plugins on unraid or one plugin to deploy virtualisation I don't see the issue. The virtualisation can be kept completely hidden to you - no need to know whats going on under the hood in the same way as you probably don't with current plugins.

 

The fact that alot of (but not all) 'power' users of unraid seem to have switched to a virtualised unraid with companion virtualised instances of other distributions to run programs on top (using whatever flavour of virtualisation they choose) - instead of using native plugins should tell a big story. The inclusion of virtualisation into unraid only allows the potential for that sort of setup to be *hugely* simplified giving benefit to everyone - not just the power users.

 

 


@boof - yes. you are the man, that's a very sound argument.

 

I have just spent the afternoon looking at docker, it's very interesting indeed. I recommend a few of you take a look at it and see whether it's worth pursuing any further as this would potentially solve a lot of our issues without virtualisation.


SABnzbd is a resource hog (CPU/RAM) partly because it was written in Python (slow). nzbget was written in C and has extensive optimizations to reduce resource usage.

 

I was involved (somewhat) in the early development of SABnzbd and most people were running it on PCs with reasonable levels of CPU and RAM. Around January 2008, I got SABnzbd running on my LinkStation NAS, but it consumed a lot of resources.

 

About 6 months later, I got nzbget running on the LinkStation.

In 2009, I built a couple of unRAID servers, but still used the LinkStations for downloading as they were low-power and the unRAID servers I built from old components were power-hogs, so were switched on infrequently for archiving only.

 

In March of 2011, I bought my first HP MicroServer and installed unRAID on it. Because the MicroServer used so little power (20 to 30W), it meant I could run it 24/7 and sell the LinkStations.

 

About a month later, I got nzbget running in unRAID on the MicroServer. overbyrn has now taken over that project, as he's an actual programmer, rather than an amateur hacker like me.  :D

 

At that time, I did a little benchmarking on the Microserver (dual core 1.3GHz AMD CPU):

 

CPU usage (measured by top)

SABnzbd: 40-50%

nzbget: 7-14%

 

RAM usage when downloading (1GB installed)

SABnzbd: 5.6%

nzbget: 0.3%

 

nzbget was actually originally written to be run on a router (something like an Asus router IIRC), so was designed from the outset to consume low levels of resources.

 

If you have a high-powered machine, you probably won't care about using hefty resources for SABnzbd, but if you are running a low-power server, nzbget is the way to go.

I also found that if you have a fast download speed, SABnzbd may not be able to cope, even if running on a fast machine. I couldn't max out my 120Mbps line no matter what I did with SABnzbd, but I was able to do so with nzbget.

 

If you've looked at nzbget in the past and didn't favour it over SABnzbd, it's worth looking at again, as not only does it have most of the features SABnzbd has, it has some additional ones (e.g. fast de-obfuscation) that make it probably the best usenet downloader for Linux.

 

SAB is written in Python 2.x, which means that, yes, not everyone's install may be optimized, as we allow the user to install the dependencies: pyOpenSSL? OpenSSL? Python version? yEnc? etc.

Attacking Python as a performance thing is a bit silly though, since honestly most of the real work is being done by par/rar.

The downloading of articles and assembling them is the only real work SAB does, which is where the speed of Python comes into play. But if the user installs yEnc, then the whole encoding/decoding of articles is actually done by a C-based lib, not Python, for better performance where applicable.

 

Now, setting up SAB to be optimized correctly is probably more of the issue.

Some people do something stupid like putting their completed folder on a network share, meaning files have to copy across a network before they get post-processed.

People have their admin and incomplete folders on the same drive, so you have the same HD battling I/O (thrashing); you can tell SAB to pause downloading while extracting to minimize this.

A lot of people don't set up their providers correctly in SAB: giving it too many threads, or using SSL when their box really can't keep up. Is SAB wasting time opening and closing threads for connections more than needed? See http://wiki.sabnzbd.org/highspeed-downloading

 

There are just a lot of settings that may need to be tweaked to really get SAB optimized, and that's probably the biggest thing: SAB may just work out of the box, but may not work as well as it could.

 

We offer settings like pre-checking (issuing a STAT for each article to see if it really exists), which could prevent someone from wasting time downloading / eating up their bandwidth cap with their ISP. Do you really need this? If not, disable it, as it just delays downloading things and skews numbers, since we spend time checking first.

 

Yes, there are par apps that are optimized for Intel, optimized to rely on RAM, etc. Some of them are more stable than others; we tend to lean toward stability over performance because of the nature of how SAB is used. For OSX/Win users we do make some effort to include things that are optimized/newer (recently in dev we moved over to RAR5 for Win, for example).

 

Yes, I do agree that we are bound to having to run a UI, while nzbget can be much slimmer by offering a CLI-only approach. But the UI part doesn't take up THAT much memory on today's systems (40MB). Still, SAB was not designed to be installed on appliances with limited resources (there are people that do run it on RPi boxes though, which I feel is just crazy).

nzbget is definitely catching up on SAB features as well; it's still missing a few things, but I'm sure given enough time it will get there.

 

About your speed comment: SAB is capable (assuming you've got a decent usenet provider/pipe):

Downloaded in 26 seconds at an average of 48.5 MB/s

 

This was with: Python 2.7.3, pyOpenSSL 0.12-1ubuntu2, unrar-nonfree 1:4.0.3-1, par2cmdline 0.4-11build1.

 

Anyway, we can take this to another thread if you really want to go further into depth. I love hearing back from users on the subject, though.


One of the issues that plugins have (as they stand today) is quality assurance. Wouldn't it be nice to have quality, reliable plugins instead of burying the problem in virtualization?

 

It would be - but I contend that the only way that will happen is if limetech takes the lead and provides an official, supported plugin framework.

 

I can't see that happening.

 

Otherwise it's left to the community and that only leads to the fragmentation we have today.

 

Man, wouldn't that be nice. Fortunately, we have the next best thing: officially supported OS packages. Boiler isn't a "made up" system like plgs--it's fully rooted in the OS. It would be nice if limetech could put a word in, because right now we're looking a bit like this.

 

 

Where we *do* have high quality, reliable packages (/plugins) is in other linux distros so why reinvent the wheel?

 

I understand the non-technical user aspect but you have to be honest and say that as soon as you're using *any* plugin in unraid you're off the reservation and into technical land. There is no way for you to install them nicely from the webgui so you have to roll your sleeves up at that point.

 

That may not be entirely true (it is for plgs though). Boiler is an API-based client. A web based client could install packages just as easily.

 

"The non-technical user" argument has been pretty played out too, and is just a bit bogus. Unraid is an enthusiast-OS, not a super-polished Apple product. It requires a degree of technical knowhow to install and use. That argument still applies to package management through a VM though. I'm highly skeptical that it makes it "easier" (faster, less stress, easy to remember how to do, etc) for a user that only does it on setup or every few months.

 

So if you're installing a bunch of plugins on unraid or one plugin to deploy virtualisation I don't see the issue. The virtualisation can be kept completely hidden to you - no need to know whats going on under the hood in the same way as you probably don't with current plugins.

 

Keeping it tucked away is fine if it never breaks. If and when it does, that "non-technical user" is going to throw it out or abandon it.

 

The fact that alot of (but not all) 'power' users of unraid seem to have switched to a virtualised unraid with companion virtualised instances of other distributions to run programs on top (using whatever flavour of virtualisation they choose) - instead of using native plugins should tell a big story. The inclusion of virtualisation into unraid only allows the potential for that sort of setup to be *hugely* simplified giving benefit to everyone - not just the power users.

 

There have been a lot of benefits noted for using virtualization. I'd still like to see a distributable prototype of the "hugely simplified setup" because talking about it from a theoretical standpoint is a bit like measuring the length of a coastline.


Man, wouldn't that be nice. Fortunately, we have the next best thing: officially supported OS packages. Boiler isn't a "made up" system like plgs--it's fully rooted in the OS. It would be nice if limetech could put a word in, because right now we're looking a bit like this.

 

I agree - though ironically Boiler adds to that ;)

 

I'm not knocking your work though. As we both know, a Slackware package *isn't enough* for unraid due to the way the system runs. So there are extras; it's not just OS packages. Or at least I would presume that's correct? There must be wrappers to provide persistence across reboots for configs as necessary? Which boiler must know about and maintain?

 

That may not be entirely true (it is for plgs though). Boiler is an API-based client. A web based client could install packages just as easily.

 

"The non-technical user" argument has been pretty played out too, and is just a bit bogus. Unraid is an enthusiast-OS, not a super-polished Apple product. It requires a degree of technical knowhow to install and use. That argument still applies to package management through a VM though. I'm highly skeptical that it makes it "easier" (faster, less stress, easy to remember how to do, etc) for a user that only does it on setup or every few months.

 

You still need the boiler meta-data to get the packages. Who maintains that? What happens when you get bored? What happens when it dies off and doesn't support newer releases of unraid? These are all issues with other unraid package management systems. The function of the actual tool has never really been an issue. Boiler looks like a particularly elegant solution, but previous (and still current) frameworks still work and provide the end result of a package being installed; end-user usage was rarely the issue. And, regardless of any distributed nature of the meta-data, boiler is ultimately maintained only by you.

 

Unraid problems apply much less to package management through a VM because it's all 'just there' by default when you install whatever goes in the VM. And it's a known, consistent toolchain. You also don't need package management in that case. Why would you? Any plugin for unraid deploys a prebuilt VM image with everything you need inside it. What is there to manage other than initial deployment?

 

Keeping it tucked away is fine if it never breaks. If and when it does, that "non-technical user" is going to throw it out or abandon it.

 

Exactly the same as existing plugins then? And again note the trend for many users is to abandon them and virtualise to get back onto a stabler system.

 

There have been a lot of benefits noted for using virtualization. I'd still like to see a distributable prototype of the "hugely simplified setup" because talking about it from a theoretical standpoint is a bit like measuring the length of a coastline.

 

Do you mean specific to unraid or in general? It's not hard to find ones for the 'in general'. It's pretty much how large scale deployment is done these days thanks to the 'cloud'. Specific to unraid - well still early days, but all the tech exists.

 


Man, wouldn't that be nice. Fortunately, we have the next best thing: officially supported OS packages. Boiler isn't a "made up" system like plgs--it's fully rooted in the OS. It would be nice if limetech could put a word in, because right now we're looking a bit like this.

 

I agree - though ironically Boiler adds to that ;)

 

Kind of. The difference is that boiler is a layer on top of an official, existing system, rather than an entirely new system or protocol. Packages don't actually get installed with boiler, only fetched and compiled.

 

I'm not knocking your work though. As we both know, a Slackware package *isn't enough* for unraid due to the way the system runs. So there are extras; it's not just OS packages. Or at least I would presume that's correct? There must be wrappers to provide persistence across reboots for configs as necessary? Which boiler must know about and maintain?

 

I know you're not knocking (I hope I'm not being overly defensive. There's just a lot of misinformation.)

 

Close. When it makes a package (like with `boiler install NAME`) it takes some known directories and maps them specially for unraid (etc, bin, lib, config, and so on). Configs get this treatment, so they always end up in the same (namespaced) location that doesn't get wiped out by reboots or by reinstalling the package. This location is programmatically guessable, which means these configs can be dynamically loaded into a web interface. The entire package is standalone-compatible with Slackware. Running `installpkg NAME` installs it WITHOUT boiler, so we get all the benefits of how unRAID installs packages on reboots natively.
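As an illustration only (this is not boiler's actual code), the persistence idea boils down to keeping each package's configs under a predictable, namespaced path that survives reboots. Here `/tmp` stands in for the flash-backed location a real server would use, and all names are made up:

```shell
#!/bin/sh
# Toy sketch of namespaced, persistent config storage. The real system
# maps several directories (etc, bin, lib, config); this shows just one.
PERSIST="${PERSIST:-/tmp/persist/packages}"

persist_config() {   # usage: persist_config PKGNAME SRCDIR
    mkdir -p "$PERSIST/$1/config"
    cp -r "$2/." "$PERSIST/$1/config/"
}

config_path() {      # the location is programmatically guessable
    echo "$PERSIST/$1/config"
}
```

Because `config_path NAME` is deterministic, a web interface (or a reinstall of the package) can always find the configs without any extra bookkeeping.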

 

You *could* manually do everything that boiler does. It just automates a lot so you don't have to think about architectures, persistence, dependency conflicts, uploading, keeping dev files out of the release build, etc.

 

That may not be entirely true (it is for plgs though). Boiler is an API-based client. A web based client could install packages just as easily.

 

"The non-technical user" argument has been pretty played out too, and is just a bit bogus. Unraid is an enthusiast-OS, not a super-polished Apple product. It requires a degree of technical knowhow to install and use. That argument still applies to package management through a VM though. I'm highly skeptical that it makes it "easier" (faster, less stress, easy to remember how to do, etc) for a user that only does it on setup or every few months.

 

You still need the boiler meta-data to get the packages. Who maintains that? What happens when you get bored? What happens when it dies off and doesn't support newer releases of unraid? These are all issues with other unraid package management systems. The function of the actual tool has never really been an issue. Boiler looks like a particularly elegant solution, but previous (and still current) frameworks still work and provide the end result of a package being installed; end-user usage was rarely the issue. And, regardless of any distributed nature of the meta-data, boiler is ultimately maintained only by you.

 

The metadata is a configuration API used by boiler. Every package system has one (bower.json, Gruntfile.js, Gemfile, Homebrew formula, it goes on). It's maintained by individual package authors. The API spec is maintained by me, and the API is documented and versioned with boiler releases. The entire system is designed to function largely without my input. It's a distributed system maintained by the authors that publish and maintain their own packages.

 

It was designed with the "what if the maintainer abandons us" problem in mind (I don't intend to. I dogfood my own product, so I'm able to function as a user as well).

 

With the exception of the 64-bit update, there's little about boiler that makes it tied to a specific version of unRAID. By using version constraints with dependencies, we can ensure reasonable compatibility automatically.
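As a rough illustration of the constraint idea (this is not boiler's actual syntax, just the general mechanism any resolver uses), a minimum-version check needs nothing more exotic than `sort -V`:

```shell
# Made-up versions: pretend unRAID 6.2.4 is installed and a package
# declares it needs at least 6.1.0.
installed="6.2.4"
required="6.1.0"

# sort -V orders version strings naturally; if the required version sorts
# first (or equal), the installed version satisfies the constraint.
lowest=$(printf '%s\n%s\n' "$installed" "$required" | sort -V | head -n1)

if [ "$lowest" = "$required" ]; then
  echo "constraint satisfied"
else
  echo "needs upgrade"
fi
```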

 

Unraid problems apply much less to package management through a VM, because it's all 'just there' by default when you install whatever goes in the VM. And it's a known, consistent toolchain. You also don't need package management in that case. Why would you? Any plugin for unraid deploys a prebuilt VM image with everything you need inside it. What is there to manage other than initial deployment?

 

I think that's what is meant by "package manager". It's an end-to-end system for searching, installing, updating, and removing a package. A prebuilt VM, a la Docker, would give you the install part manually. How will you search, install, update, and remove it? You need a tool to glue it all together, lest you end up with a disconnected system (plgs, anyone?)
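For comparison, Docker's own CLI is exactly that kind of end-to-end glue; the image name below is a placeholder, but the lifecycle commands are the real ones:

```shell
# Search, install, run, update, remove -- one tool covers the whole cycle.
docker search someapp            # find an image
docker pull someuser/someapp     # install (fetch) it
docker run -d --name someapp someuser/someapp   # deploy it
docker pull someuser/someapp     # update: re-pull the latest image
docker stop someapp && docker rm someapp        # remove the container
docker rmi someuser/someapp      # remove the image itself
```

Without something filling that role, a pile of prebuilt VM images is back to being a disconnected system.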

 

Keeping it tucked away is fine if it never breaks. If and when it does, that "non-technical user" is going to throw it out or abandon it.

 

Exactly the same as existing plugins then? And again, note the trend for many users is to abandon them and virtualise to get back onto a more stable system.

 

Fair point.

 

There have been a lot of benefits noted for using virtualization. I'd still like to see a distributable prototype of the "hugely simplified setup" because talking about it from a theoretical standpoint is a bit like measuring the length of a coastline.

 

Do you mean specific to unraid or in general? It's not hard to find ones for the 'in general'. It's pretty much how large scale deployment is done these days thanks to the 'cloud'. Specific to unraid - well still early days, but all the tech exists.

 

Yeah, I think specific to unRAID.

Link to comment

I really hate to say it, but I hate the way plugins work.  I would so much rather have some basic VM support in unRaid (either KVM or OpenVZ) where I could drop a prebuilt Debian or CentOS based VM in, and let my applications live there, and have unRaid back up a snapshot to my protected storage once a day.

 

Just have my nice little stack of couchpotato/sickbeard/sab/transmission/flexget on a VM living on my cache drive, and I'd be so much happier.  Let my base install of unRaid stay clean, and not be affected at reboot by my own plugin fuckery.

Link to comment

I will always want to be able to Virtualize.  I have 3 ESXi servers currently.  Each has an unRAID VM and a Windows VM.  The Windows VMs are needed for SageTV which runs on Windows.  I never purchased the Linux version because it didn't support as many different tuners and support was minimal.  It isn't available for purchase now except through user transfer so I'm stuck with a Windows VM because I don't want 6 computers running in my basement (well it would be more like 8 to 10 if you count some standalones).  I might even consolidate those standalones into the 3 ESXi servers if I can put unRAID as the base OS with Xen or KVM - don't like where ESXi is heading with free version and am hoping some of the problems I had with a 2nd Windows VM will go away with Xen or KVM on unRAID.

Link to comment

I might even consolidate those standalones into the 3 ESXi servers if I can put unRAID as the base OS with Xen or KVM - don't like where ESXi is heading with free version and am hoping some of the problems I had with a 2nd Windows VM will go away with Xen or KVM on unRAID.

 

Both KVM and Xen are very stable and work well in the environments we are talking about, and in ones 1,000 times the scale of what anyone here would do.

 

The "issue" is the lack of management tools. Right now there are no good WebGUIs that I would recommend as yet (either still in development, or the ones that do work are way too complex and designed for enterprises, not home servers).

 

If setting up 1 - 10 VMs, it's really not that hard to do it via a command line / plugin (start, stop, configure X, Y and Z for the VM).
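For context, libvirt's `virsh` already covers that basic start/stop/configure cycle for KVM (and Xen via libvirt) from the command line; the VM name and XML path below are placeholders:

```shell
virsh list --all               # show every defined VM and its current state
virsh define /path/to/myvm.xml # register a VM from its XML definition
virsh start myvm               # boot it
virsh shutdown myvm            # clean ACPI shutdown
virsh autostart myvm           # have it start automatically with the host
```

A plugin wrapping a handful of these is a long way short of needing an enterprise WebGUI.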

 

Who knows, by the time we have this all sorted out, one of the WebGUI projects in development might be finished / stable.

 

As far as ESXi, from what I have heard from the various VMware sales reps for various clients I do work for, the free version is going to be more and more crippled with each new release. For a lot of people, running 5/5.1 is what they have to do for PCI passthrough support (5.1 or 5.5 breaks it), so this might not be an issue they care about anyway.

Link to comment
As far as ESXi, from what I have heard from the various VMWare Sales reps for various clients I do work for... The free version is going to be more and more crippled with each new version is what I am being told. For a lot of people, running 5/5.1 is what they have to do for PCI Passthrough support (5.1 or 5.5 breaks it) so this might not be an issue they care about anyway.

This is exactly what I meant about not liking the direction ESXi is going.  Upgrading to 5.5 might fix some of my problems but would introduce too many other problems for me to even think about it. 

 

Hope a GUI for Xen/KVM becomes available!  The desktop Java app I'm writing for VirtualBox is coming along (never have figured out why phpvirtualbox doesn't work for me, and it has been fun writing the Java app anyway).  If Xen/KVM have a SOAP webserver interface like VirtualBox, I might be able to adapt my app, but I doubt it would be possible, or that it would work as well as the existing ones you don't like anyway.

Link to comment

http://www.linux-kvm.org/page/Management_Tools

 

I love that I found a list of management tools, I hate what the list says.  The only one that seems interesting is kimchi.  Everything else reads as bloated or commercial.  Something lightweight to manage the VMs is what we need out of this. 

 

I'd love to launch a VM for the types of tools that Influencer wrote plugins for, and then another VM for something like my DHCP and DNS servers.  I just need something that's fairly easy to do it in.

Link to comment
