unRAID Project Update, Core Features, Virtualization, and Thank Yous



A question regarding the btrfs cache pooling ...

 

So right now a lot of people are running an SSD in their systems for a number of reasons.  The two most popular that I know of are that Plex won't allow an HDD to spin down, and faster VMs.  These SSDs are also mounted outside the array, both because they are too small to act as a cache drive and because they need a filesystem that supports DISCARD (TRIM); that is accomplished via the go script, which I know Tom wants to avoid whenever possible.

 

Will btrfs cache pooling help this situation?

 

Will there be a way to mount a drive outside the array via the gui?

 

Can we run a cache pool with a large spinner + smaller SSD and dictate that a specific cache-only share resides on the SSD of the pool, like for Plex and VM images?

We will be experimenting with a variety of setups with the cache pool feature.  Right now in our test environments we are exclusively using SSDs for this, but we will also test with spinners.

 

Sounds good.  But keep in mind the reason for the HDD / SSD setup is that a non-zero number of people have SSDs too small to act as their cache, let alone pull double duty as cache + VM/Docker store.  So the issue isn't cache pooling with SSD or HDD; it's whether the system you're setting up can accommodate the desire to simultaneously have:

 

- a large spinner for cache duty / non-latency dependent storage

- a smaller SSD for Plex (because it never stops accessing its logs) and latency-dependent VMs/Dockers

 

I mean, the simplest solution is to have a GUI element that lets us mount any drive, outside the array, formatted with an SSD-friendly filesystem.  Shoehorning it into the cache pool situation might be an over-complication [shrug].

 

I'm harping on this a bit because, as you said, media serving is a big part of your target user base, and if users are running Plex then they will likely want to put their library index onto an SSD, because otherwise it keeps an HDD spinning.

OK, maybe I misunderstood your original question.  So you want to be able to use an HDD for cache drive functions where you want the speed benefit of a non-array device but don't need the write performance of an SSD.  Then you want an SSD for appdata, where latency is sensitive but your data footprint is smaller because it's really just metadata, right?

 

I think you've got it, but let me just make it simple with WHAT and ignore the WHYs for a moment ...

 

What:

I want to be able to run an HDD for the array cache duties (bigger, cheaper) and a non-array SSD (faster r/w, lower power), and to be able to do it via the GUI rather than the manual go-script method my how-to post describes right now (see sig).

 

Why:

#1 is that Plex keeps an HDD spinning because it is constantly writing to its log, so you want to keep its index database on an SSD.  Given that fact, a person is led down a decision path.  If they write large amounts of data to their array regularly (I do), then there is a chance the remaining space on their SSD is not enough, given SSD price per GB.  Though in the last two years that has really changed, so I might quickly stop caring about this ;-)  Another factor is that many SSDs have horrible built-in garbage collection and ReiserFS doesn't support TRIM.  Those two facts dictated that the SSD is NOT the cache drive and that it is formatted EXT4.  I realize btrfs fixes that last problem, but not the first.

 

Putting things like VM images onto the SSD for faster booting / operation is just a bonus.  I don't know if Dockers benefit from being on an SSD vice an HDD, but I can't imagine it hurts [shrug].

 

So hopefully that makes it clear.  I don't know if there is any way cache pooling can be leveraged, or if it matters.  Like I said, just having a web GUI method to format and mount non-array drives would keep us from mucking about with the command line and the go file.
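
For anyone who hasn't seen the how-to, here's a minimal sketch of what the manual go-script approach amounts to (the device label, mount point, and filesystem are examples only - adjust to your own setup):

    # added to /boot/config/go
    mkdir -p /mnt/ssd
    # mount an ext4-formatted SSD outside the array with TRIM enabled
    mount -o discard,noatime /dev/disk/by-label/ssd /mnt/ssd
    # or leave discard off and run periodic TRIM instead:
    # fstrim -v /mnt/ssd

A GUI element that did the equivalent of this would cover the use case.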

 

Even people using Plex Dockers will find themselves in the position of a constantly spinning HDD, so this matters to them too.


Question: from the OP it sounds like the next beta to be published will only have virtualization updates, and no core NAS updates until further down the road?  Did I read that correctly?  If that's not the case, what should we be looking forward to seeing in this next beta release containing core NAS additions/updates?

The primary core NAS feature in development right now is cache pooling.  The virtualization stuff mentioned here isn't what's all going into the next beta, but rather, just an update on where we stand with that stuff.

 

So UPS support, email support, AFP, etc. are not making it into the next beta to be released for testing?  If that's the case, how will we get to test before final at Q3 2014?

All in time.  We are prepping for a more rapid release schedule.  I know it's sometimes hard to see why we approach things certain ways, but there is good reason for the slow and steady pace.  Know that things like UPS support and notifications will require less development time than things like additional file systems and cache pooling.

 

Keep in mind that even though we publicly beta test, we internally test a lot as well with respect to core features to make sure that our betas have a solid foundation.

 

And while I appreciate your comments on the other folks I need to thank, cut me a little slack, would yah?  I make it no secret that I've only been with LT since April.  I don't know all the contributions over the years that have occurred but I surely do appreciate everyone's efforts here, virtualization or not.

 

OK, I'm sorry to hear that the next release will not contain any of these; I was hoping to allocate time for a beta containing them.  And it sure sounded like that was going to be the case from Tom's last post.  Better to know now.  Thanks.  I would have taken it much harder once that beta came out and I saw that wasn't the case.

 

Slack given.

 

Hope LT cranks these releases out back to back.

 

I would like to point out that IF something is as simple to get in as you state, I (if it were me in this situation) would get it in sooner rather than later.  First, it makes many people happy; second, you get confirmation from the community that it works in the real world and whether anything was missed.

 

I'll bet money it won't be right the first time out.  Again, not being negative, it's just how things work.  E.g. 1) UPS support didn't work with xyz model; 2) preclear webGUI bug under xyz condition; 3) seems like you missed an alert based on xyz.

 

 


Thanks for this JonP; making roughly the same points as I was about the real benefits of virtualisation in keeping things manageable and minimising interfaces/dependencies. Needless to say, I agree!

 

A few points arising:

- As far as virtualised Windows etc. goes, I trust you'll remember that there are those of us with microservers, thus no IOMMU, thus no passthrough, who only really want it for low-spec servers running TeamViewer occasionally - rather than playing games whilst serving a movie up at the same time.

- No moves towards being able to transition away from ReiserFS for the array disks?

- Are the notifications/UPS going to be constructed in such a way that they can be extended? I'm still thinking that push notifications to your phone might be a more forward-focused option for those of us with smartphones (98%?). Email is a nice fallback etc. - but ...

Cheers

Absolutely.  I think you will find the majority of your needs for headless Linux VMs fulfilled by Docker.  That said, headless Linux VMs work just as well in our testing!

Thanks for that, but Linux VMs/containers aren't really the issue. It's more a case of "perfect being the enemy of good enough" - in that I hope you don't spend all the effort on getting Windows VMs to do passthrough in all possible (gaming) situations, rather than putting at least some of that effort into making those VMs idiot-proof to instantiate and shut down in the main (90% of cases?). Kind of 'click this to set things up for a Windows 9 VM'. Oh, and I'd really like an OS X VM if possible ....

 

And I note you don't mention the other points  ;) ....


Jonp, is oVirt something to look into for Slackware?

http://www.ovirt.org/Home

http://www.ovirt.org/Documentation

 

oVirt Supported Hosts:

 

Fedora

CentOS

Red Hat

Scientific Linux

 

CentOS and Scientific Linux are 100% clones of Red Hat. Fedora is Red Hat's testing distro.

 

Debian (experimental)

Gentoo (experimental)

 

They have been working on a port for Debian for years and some Linux Hacker has it sorta working in Gentoo (I did the same in Arch Linux).

 

I highly doubt oVirt is ever going to make a Slackware version. It makes no business sense because nobody uses Slackware.


oVirt is something we have looked at, as well as Open vSwitch.  We have also looked at other tools / solutions.  However, these tools, while powerful, would be overkill for the everyday unRAID user.  In addition, as grumpy alluded to, they are not pre-built to run on Slackware.  We could port them, but to what end?  These two particular technologies are on our backlog to be reviewed further down the road.  As we have indicated before, we are not moving away from Slackware at this time.  As such, our focus is on extracting the value we can from the tools we already have in place.


Could you elaborate on:

1. "more fun and capable"?  Is this being able to have addons and virtualisation, or something else?  I believe that what set you apart from the competition previously was exactly the ability to do "just plain storage" better than the competitors. But I sense a change in focus?

 

Tom is an expert in storage technology.  I think that goes without saying.  His focus is on adding powerful underlying NAS features that anyone who has a mass storage need can appreciate, irrespective of the use of any unRAID Plugins or use of VMs.  We have already committed in previous posts that unRAID 6 will feature plenty of core NAS functionality upgrades including cache pool support, multiple file system types for array devices, notifications, UPS shutdown, native pre-clear support, and much more.  But Tom recognizes the importance of virtualization and has designed unRAID over the years with this in mind. 

 

Quite early on in development, Eric and I realized that the software architecture of unRAID actually lends itself nicely to acting as a host for virtualization technologies in certain scenarios which is why we have had so much success with some rather complicated setups in a short period of time.  This will all become more clear as we finalize our testing and roll out both a new beta and additional content in the form of videos to help better showcase and explain how we achieved our results.

 

2. "automate some monotonous business process functions" - is that something in unRAID, or something in your internal development process?

 

/Parsec

 

Quite simply, there are a number of things in business operations for ANY company that take time away from focus on development.  We are working to eliminate these with automated systems to simplify how we handle commonly recurring customer service needs, accounting, billing, etc.  Sometimes folks forget that on top of all this fun development stuff, there is a company to be run here ;-).


Thanks for that, but Linux VMs/containers aren't really the issue. It's more a case of "perfect being the enemy of good enough" - in that I hope you don't spend all the effort on getting Windows VMs to do passthrough in all possible (gaming) situations, rather than putting at least some of that effort into making those VMs idiot-proof to instantiate and shut down in the main (90% of cases?). Kind of 'click this to set things up for a Windows 9 VM'. Oh, and I'd really like an OS X VM if possible ....

 

Here's how we went about testing/development for VM support, in a nutshell.  We first started with a "proof of concept" phase in which we focused on just making VMs work and remain stable with hardware pass-through.  A couple of key goals here were to make sure we could create VMs with a single command (virsh create vmname.xml) and also shut them down / reboot them with a single command (at the command line).  We did this with Windows 8.1, Ubuntu 14.04, SteamOS, and XBMCbuntu so far.  Our big concern was making sure the host remained stable while multiple VMs were running at the same time - and not just idle, either, but performing intense IO on complex PCI devices.  GPUs are a great test for that.  Some might say we should focus on things like SR-IOV, but I think that can be a goal for a little later down the road.  So in short, we bit off one of the most complicated problems with virtualization first, because if you can do that well, then it should be all downhill from there.
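
To be concrete, and assuming nothing beyond stock libvirt, that single-command workflow is along these lines (the domain name and XML path are placeholders):

    # start a VM from its definition file with one command
    virsh create /path/to/vmname.xml
    # graceful shutdown / reboot, also one command each
    virsh shutdown vmname
    virsh reboot vmname
    # see what's currently running on the host
    virsh list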

 

And to more directly answer one of your concerns, we have it so that all VMs can be shut down using a single command, which could easily be made to work like a single "button" in the webGUI.  Full disclosure: one optimization we haven't tweaked yet for Windows is that if the Windows VM has its monitor asleep, you may have to send the virsh shutdown command twice right now (the first wakes up the monitor, the second actually sends the poweroff).  I know we can fix this easily, but I'm just trying to be honest.  This is also an example of something that can get tossed in the backlog for fixing later.  Instead of fixing it right then, we jumped to getting SteamOS working.  We figure that is a better approach for development right now.
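
Such a "shut everything down" button could be little more than a wrapper around virsh.  A rough sketch (not our actual implementation), including the double shutdown for a sleeping Windows guest:

    #!/bin/bash
    # ask every running libvirt domain to shut down; send the request twice
    # to cover a Windows VM whose display is asleep (see caveat above)
    for vm in $(virsh list --name); do
        virsh shutdown "$vm"
        sleep 5
        virsh shutdown "$vm" 2>/dev/null || true
    done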

 

Now with respect to OS X, see this:  http://www.lockergnome.com/osx/2012/02/24/are-hackintosh-computers-legal/

 

In the article, they even cover Virtual Machines:

 

What About if I Install it On a Virtual Machine?

Virtual machines aren’t technically Apple-branded hardware, so do they count? Actually, Apple has taken virtual machines into account in the EULA. You are allowed to install up to two instances of your OS X license within a virtual operating environment, as long as that virtual machine is running on an existing copy of the same operating system.

 

From Apple’s EULA:

 

…to install, use and run up to two (2) additional copies or instances of the Apple Software within virtual operating system environments on each Mac Computer you own or control that is already running the Apple Software.

 

So quite simply, we are not going to support or condone the posting of any content on our forums with regards to being able to build an OS X VM on our software.  Whatever a user can do on their own is up to them, and we certainly aren't going to spend any effort trying to prevent a user from doing something on their own, but know that you won't find that information anywhere on these forums, and if you do, it will be removed and quickly.

 

And I note you don't mention the other points  ;) ....

 

Well your point regarding ReiserFS was already addressed in our other roadmap thread:

 

Support for Multiple File Systems

Assign various types of file systems to array devices including REISERFS, BTRFS, and XFS (potentially more!  stay tuned!)

 

And with respect to notifications, all in due time good sir...  Gotta get back to development now...


OK, I know this video doesn't look the greatest, but it was late last night and I didn't have time to do proper lighting.  In short, here's an internal build of ours running 3 different operating systems on two different GPUs without ever rebooting the host.  When I change inputs, that's me going from the AMD card to the nVIDIA card.  The first two VMs just automatically boot with unRAID.  The third VM I boot up with a single command after shutting it down.

 

http://youtu.be/mkAGyzNL9jA
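
In plain libvirt terms, auto-starting with the host and the single-command start look roughly like this (a sketch assuming a stock virsh setup, not necessarily exactly what this internal build does):

    # mark a domain to start automatically whenever libvirtd starts
    virsh autostart vm1
    virsh autostart vm2
    # the third VM is then started on demand with a single command
    virsh start vm3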


Tom is an expert in storage technology.  I think that goes without saying.  His focus is on adding powerful underlying NAS features that anyone who has a mass storage need can appreciate.

 

Are we getting dual (or more) parity drives in unRAID 6?

 

Are we getting metadata checks in unRAID 6?

 

Are we getting encryption in unRAID 6?

 

It's great you are catching up with the competition on multiple filesystems but if I have a drive with XFS or EXT4 stuff on it, will I be able to add it to unRAID without having to format it?

 


 

OK, I know this video doesn't look the greatest, but it was late last night and I didn't have time to do proper lighting.  In short, here's an internal build of ours running 3 different operating systems on two different GPUs without ever rebooting the host.  When I change inputs, that's me going from the AMD card to the nVIDIA card.  The first two VMs just automatically boot with unRAID.  The third VM I boot up with a single command after shutting it down.

 

I'm not dismissing what you have done, but this is 5+ year-old technology. Many of us were doing this with unRAID as a VM back in 2009 with ESXi 4 (including video card passthrough). Sure, there is a lot more support for hardware / video cards now, but this isn't blazing any new trails. You can do this in any Linux distro, or better yet in a best-of-breed virtualization platform like ESXi or XenServer (all of which are free).

 

VMs are cool and all, but I do not think you get why many of us have used ESXi, XenServer, VirtualBox, etc. for the last 5+ years. We needed access to another Linux distro so we could access a package manager. We did this because plugins have been a nightmare and oftentimes crashed our servers / broke other plugins / etc. Sure, a small group of us also ran Windows, Linux and XBMC VMs (like what you are demonstrating), but if you haven't noticed, a lot of the ESXi, XenServer, Xen, etc. people are ditching it for Docker (another way to give us easy access to any other Linux distro and its package manager).

 

Home Theater Installer Companies would probably be very interested in what you are doing here if you make it easy / support it.

 

As for the rest of us,  when it comes to NAS features / functionality... You are way behind the competition when it comes to some VERY IMPORTANT NAS functions / features like Security, Integrity and Fault Tolerance. Some you are addressing in unRAID 6 like multiple file systems, UPS, notifications, etc. but if you look above you still have some work to do.

 

I have invested a lot of time / energy / effort into Xen, XenServer, KVM, etc. all over this forum over the last several years, so I am a big fan of virtualization. However, I think most of us are here primarily for a NAS and to run a handful of applications on it. With the addition of Docker (which solved the plugin, Slackware, no-package-manager issues), virtualization for most people is no longer needed / wanted. It is still, no matter what you say, going to be complicated. There is ZERO chance you are going to be able to explain in English how my sister is to install Ubuntu (she has no clue what Linux is) or Windows in a VM. That doesn't even cover whether or not she has the correct hardware, and forget explaining PCI passthrough without her taking a week to first learn what VT-d is, which CPU to get, which motherboard, what IOMMU is, what PCI stands for, etc.

 

I'd focus on NAS stuff and catch up with your NAS competitors. The reason I say that: you are not going to be a best-of-breed virtualization server (you are missing a TON of tools and management features / functions that other FREE products provide). If you continue to lag behind your NAS competition, you will not be a best-of-breed NAS product either.


What some folks CAN do with their own equipment and for their own purposes is very different from what a company can and should do to support a wider audience of users with various usage requirements.  The fact of the matter is that when I look at all the screenshots you posted, I see a bunch of daisy-chaining of various management toolsets that others built to create some type of mastermind NAS.  What I don't see is any clear single management interface, any use-case scenario demonstrations, or simple use instructions for the everyday user.  I see a lot of capability, but no defined direction.

 

Our mission is to service the majority of our customer base in the best way we can.  I don't think adding all that complexity would accomplish anything for the majority of users except to add confusion to something that doesn't need to be that complicated.

 

What you write and post about is a worthy mission for what you want, I am sure, but it doesn't necessarily represent what we feel is in the best interests of Lime Tech or unRAID.  We want to make this stuff easier, and it is our opinion that "less is more" goes a long way.

 

The powerful part about the demo that you can't see is that we have this working on MULTIPLE hardware systems with various GPUs interchangeably.  It's pretty remarkable.  While not everyone's hardware will support it, if we can support a wide enough array of hardware, I think it'd be valuable.

 

And I already posted about how core feature development is focused on by Tom and how Eric and I are more experts in virtualization anyway, so we're not putting our efforts into areas where we don't have expertise.


One thing I don't see myself doing on my small form factor build is installing a video card. If iGPU is part of the plan then I might see what I could do with that capability. Just some of the usual media apps with a web interface is all I need.

 

No promises on iGPU.  It's an interesting challenge to solve and one that isn't getting much focus right now from upstream.  I think we actually have it "working", but we may just not have the right hardware to test it on.  We can bind the iGPU correctly and even start the VM, but there's no display to the monitor ;-(.  In addition, there are other ways to get graphics than relying on a PCI card.  I agree that iGPU pass-through would be a very nice feature and it's something we continually test as we advance our progress in other areas, but it isn't one for which we (Lime Tech) can provide a fix directly.
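
For the curious, "binding" a GPU for passthrough on a KVM host generally comes down to detaching it from its host driver and handing it to vfio-pci (or pci-stub on older kernels).  A generic sketch with made-up IDs, not necessarily the exact steps our build uses:

    # find the PCI address and vendor:device ID of the GPU
    lspci -nn | grep -i vga
    # say it reports 0000:00:02.0 with ID 8086:0412 (example values only)
    modprobe vfio-pci
    echo 0000:00:02.0 > /sys/bus/pci/devices/0000:00:02.0/driver/unbind
    echo 8086 0412 > /sys/bus/pci/drivers/vfio-pci/new_id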


What some folks CAN do with their own equipment and for their own purposes is very different from what a company can and should do to support a wider audience of users with various usage requirements.  The fact of the matter is that when I look at all the screenshots you posted, I see a bunch of daisy-chaining of various management toolsets that others built to create some type of mastermind NAS.

 

Is that so? How else could I show it without opening a bunch of windows to show all the various features / functions?

 

What I don't see is any clear single management interface, any use-case scenario demonstrations, or simple use instructions for the everyday user.  I see a lot of capability, but no defined direction.

 

Are you saying that Red Hat (CentOS) has no central management tools to manage the OS? If you believe that, you need to put down Slackware, download CentOS 7 and install it. Then get back to me.

 

There are also 20+ Linux OS webGUI control panels available, like Webmin, OpenPanel, Kloxo, Vesta, WebYaST, cPanel (the only non-free one), etc. Webmin / Virtualmin has 100+ server / management modules for 100+ apps / functions (90+ of which unRAID doesn't have). How is clicking on unRAID Shares better than clicking on Webmin Shares? Why do ISPs / hosting providers use all the various open source control panels and not emhttp? Last time I checked, I haven't had to drop to the command line to manage one of my remote servers. Pretty much everyone has to drop to the command line to do some very basic things in unRAID.

 

Also, I don't have to log in to / manage my server after I set it up. Once I add my disks, set my shares, and install my apps... I'm done. If I load more apps... WAY easier than Docker or a VM. If I want new shares... just as easy as unRAID. If I want virtualization... it blows what we have in unRAID away so far. If I want to run other fault tolerance, monitoring apps, a firewall, virus protection, etc... NO PROBLEM, and it can be done in 10 seconds or less with a package manager.

 

Unlike unRAID, I get status updates via push notifications / email if / when there is a problem (power, hard drives, package management system, UPS, network, heat, fans, etc.). I also have several layers of fault tolerance on my system, one of them being the system drives. Instead of a POS USB flash drive (a single point of failure that the license is attached to)... I have mirrored system drives. Should one of my SSDs go to crap, the other one takes over. In the background it adds one of my warm SSD spares automatically and replicates the data again, all on its own. I do not have to do a thing, but I get a notification / email letting me know a drive failed, that it was dropped from the mirror, the warm spare was added, the data was replicated, etc.

 

ALL OF THAT has been core in Linux for 10+ years, but for whatever reason Tom doesn't think we want / need that type of stuff (even though we have asked for it for YEARS).
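
To show how standard that is, here is what it looks like with plain Linux mdadm (just one of several ways to get it - device names are made up):

    # mirrored system volume with one warm spare
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1 \
          --spare-devices=1 /dev/sdc1
    # monitor daemon: a failed member is replaced by the spare automatically
    # and an alert email is sent
    mdadm --monitor --scan --daemonise --mail=admin@example.com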

 

Another Thing...

 

Do any of us go to http://localhost/super-awesome-all-in-one-media-center-control-panel on unRAID to manage all of the following:

 

And control Sickbeard, CouchPotato, SABnzbd, NZBmegasearch, Plex, XBMC, SickRage, Deluge, etc.?

 

NOPE!

 

Why is that? Every one of those apps is a best-in-breed application for what it does. I doubt someone is going to come along and combine all those functions / features / programs into one all-in-one application. Definitely not in the open source world (due to how the open source world works).

 

My point: we do not mind / care that we have to manage various apps / features / functions in various webGUIs (we already do it, and unRAID isn't going to solve that problem anytime soon either).

 

Our mission is to service the majority of our customer base in the best way we can.  I don't think adding all that complexity would accomplish anything for the majority of users except to add confusion to something that doesn't need to be that complicated.

 

But Virtualization, PCI, Xen, KVM, PCI-Stub, PCI-Device IDs, Docker / Dockerfiles / Docker Images / Docker Containers, plugins, etc. in unRAID is easier than Ubuntu? Webmin? A Desktop GUI?

 

I can use virt-manager to walk through a simple wizard to install a VM.... What are you going to do? Have the user create / edit a 50+ line XML file with all the various options / choices to create a VM?
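
For anyone who hasn't seen one, this is roughly what even a minimal hand-written libvirt domain file looks like (all names and paths are placeholders, and a real definition with passthrough is much longer):

    # /tmp/testvm.xml (trimmed to the essentials)
    <domain type='kvm'>
      <name>testvm</name>
      <memory unit='GiB'>2</memory>
      <vcpu>2</vcpu>
      <os>
        <type arch='x86_64'>hvm</type>
        <boot dev='hd'/>
      </os>
      <devices>
        <disk type='file' device='disk'>
          <driver name='qemu' type='raw'/>
          <source file='/mnt/cache/vms/testvm.img'/>
          <target dev='vda' bus='virtio'/>
        </disk>
        <interface type='bridge'>
          <source bridge='br0'/>
        </interface>
        <graphics type='vnc' port='-1'/>
      </devices>
    </domain>

    # then, from the command line:
    virsh define /tmp/testvm.xml
    virsh start testvm

Compare that with clicking through virt-manager's wizard.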

 

What you write and post about is a worthy mission for what you want, I am sure, but it doesn't necessarily represent what we feel is in the best interests of Lime Tech or unRAID.  We want to make this stuff easier, and it is our opinion that "less is more" goes a long way.

 

Perhaps you should take a peek at the unRAID "CentOS Edition" poll I created before you assume anything.

 

64-Bit unRAID "CentOS OS" Edition:

 

79.5% - Yes

11.5% - Maybe

9% - No

 

Again, same problem, different day. Tom / jonp decide what is good / important for us instead of listening to their actual customers.

 

For YEARS we have jumped up and down and begged, pleaded, and cried for UPS support, multiple filesystems, dual parity, 64-bit, documentation, a move off of Slackware, updated wikis, moving the 20 - 30 user-created utilities we all use / need into the webGUI, the ability to install unRAID onto a system drive (which is sort of what a cache drive is anyway), metadata file checks, encryption, etc. etc. etc.

 

The powerful part about the demo that you can't see is that we have this working on MULTIPLE hardware systems with various GPUs interchangeably.  It's pretty remarkable.  While not everyone's hardware will support it, if we can support a wide enough array of hardware, I think it'd be valuable.

 

We know what you think and you apparently are brand new to virtualization.

 

What about what we think? You remember us, right?

 

Look at this thread compared to any Docker one. It's crickets in here because not many people care about virtualization now that we can use Docker to get access to any Linux distro other than Slackware to install the apps we want. It's still a complicated process, and we are still dependent on other users who do this for free, but since LT has a love affair with Slackware... we have no choice.


I should have remembered, "if you give a mouse a cookie..."

 

Grumpy, we just don't have time to debate our development decisions with you or anyone for that matter.  I also don't appreciate having words put in my mouth.  Do you feel the average unRAID customer wants to manage their array using tools and utilities built for enterprise organizations?  I'm not saying that Red Hat's tools aren't good or usable, but not for an everyday user.  I've been in the enterprise virtualization/infrastructure industry for 10 years myself before joining Lime Tech, so no, I am not new to this space at all.  Eric, our other developer, has been coding for over 15 years on a variety of platforms.  Bottom line, what we're working on is directly in our wheelhouse.

 

My bigger issue at this point is the approach you take in these forums and the way you address people (not just Lime Tech, but anyone that disagrees with your viewpoint) is insulting, demeaning, condescending, egotistical, and unprofessional.  In addition, anyone that has as much time as you to post in these forums the same points over and over clearly doesn't have a business to run or a product to build, right?  I mean, seriously, just look at how many pages of content you've filled our forums with in just the last few days / weeks!

 

I can't find a nicer way to say this:  If you don't like where we are headed, leave.  We are moving forward to make unRAID better and better.  We are not just adding capabilities with virtualization either.  Building out the capabilities of a core NAS product is exactly what we're doing.  No matter how many times you and others remind us again and again that you've been asking for this for years doesn't actually help anyone.  If you don't like the order in which we're focusing on things or think that we should do X instead of Y, you are more than welcome to make your opinion known (that's what forums are for), but acting like a troll will get you treated like one.

 

All your rants in the complaint dept thread got a response, but all it did was slow down what we were working on for the next release.  We are upping our communication efforts and trying to do more, but it's never good enough for people like you.

 

We do NOT follow the grumpybutfun playbook for supporting this initiative.  And instead of coming on here every day and apologizing for any sins of the past, we're too busy focused on making a better future.  That path is about iterative steps from where we're at to achieve our goal, not this "throw the last 10 years of work out the door and start from scratch."

 

This is the last response you will get from me.  It takes too much time.


Here's how we went about testing/development for VM support, in a nutshell.  .... we jumped to getting SteamOS working.  We figure that is a better approach for development right now.

Thanks for that. I understand your decision to go for "hard case first", rather than "incremental support" - particularly since you seem so close to 'stable'. Are there really so many looking to play games on virtual machines (I'd have thought they were 'bare metal' types)?

 

Now with respect to OS X, see this:  http://www.lockergnome.com/osx/2012/02/24/are-hackintosh-computers-legal/

...

So quite simply, we are not going to support or condone the posting of any content on our forums with regards to being able to build an OS X VM on our software.

Hmm, well, needless to say I have little time for Apple's attempt to sell needless hardware or propagate their insular ecosystem. Not sure I agree either with your "don't mention the OS X, I mentioned it once but I think I got away with it" - if it's possible to get it running, people will - and they'll describe how here. Lots of times.

 

Still, it's a rod for your own back that you're making...

 

And I note you don't mention the other points  ;) ....

Well your point regarding ReiserFS was already addressed in our other roadmap thread:

Support for Multiple File Systems

Assign various types of file systems to array devices including REISERFS, BTRFS, and XFS (potentially more!  stay tuned!)

Sorry, must have missed it - won't mention it again.

 

And with respect to notifications, all in due time good sir...  Gotta get back to development now...

That's right, work on your weekend .....  ;D

I should have remembered, "if you give a mouse a cookie..."  ....  This is the last response you will get from me.  It takes too much time.

 

Thank you


...on iGPU... It's an interesting challenge to solve and one that isn't getting much focus right now from upstream.  I think we actually have it "working", but we may just not have the right hardware to test it on.  We can bind the iGPU correctly and even start the VM, but there's no display to the monitor ;-(.  In addition, there are other ways to get graphics than relying on a PCI card.

 

Maybe it's time to release a new beta to let us test your current progress on the iGPU front, and see if anyone has the right hardware for success ;)

 

Also, if not iGPU passthru, and no PCI card, how else might we get graphics on a VM?

