unRAID Server Release 6.0-beta2-x86_64 Available


limetech


 

In case of a reboot, what will be the default option: unRAID OS, Xen unRAID OS, or maybe the last option used before the reboot?

 

I foresee a plugin for setting default boot in the future! ;-)

 

 

Sent from my iPad using Tapatalk

 

Yes, a plugin to manage the /boot/syslinux/syslinux.cfg file would be nice.  Besides selecting the default boot option, some decisions also need to be made for Xen dom0, for example how much memory to give it.
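
For reference, the syslinux.cfg on the beta flash drive contains stanzas roughly like the ones below (the exact labels, paths and dom0 memory value may differ; treat this as a sketch, not the shipped file). The "menu default" line and the dom0_mem parameter are the two knobs such a plugin would manage:

    default /syslinux/menu.c32
    menu title Lime Technology
    prompt 0
    timeout 50
    label unRAID OS
      # move "menu default" under a different label to change the default boot entry
      menu default
      kernel /bzimage
      append initrd=/bzroot
    label Xen/unRAID OS
      kernel /syslinux/mboot.c32
      append /xen dom0_mem=2048M --- /bzimage --- /bzroot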

Link to comment

If 6.0 boots to dom0, isn't it really now just another layer between the controllers and unRAID?

 

If you do not want a hypervisor in 6.0, do not install it. All Tom did was enable the ability for the Linux kernel to act as a hypervisor, a capability that has been stable since Linux kernel 2.6.31 (2009).

 

To answer your question: NO. Running as a KVM or Xen host does not add another layer between your drives, data, or filesystem.

 

Does that even work?

 

It is a proven and reliable technology. Multi-billion-dollar enterprises virtualize THOUSANDS of servers, running MILLIONS of dollars' worth of ERP and other mission-critical applications, onto a few physical hosts, all of which access petabytes or more of data. Not to mention, 80% of the websites you go to (including this one) are virtual machines running on Hyper-V, ESXi, Xen, etc.

 

If it does, and there is a performance hit and/or bugs, is that OK for all those people?

 

There will not be a performance hit whether you enable a Hypervisor or not.

 

Hypervisors are stable, and BILLIONS of servers, desktops, etc. run on them. They do fix bugs, add features, etc., but those changes are WELL tested before they are rolled out. Using Xen as an example, Xen 4.4 has been in development since July 2013. They did a feature freeze in November 2013, and here it is already February and they are still testing and fixing bugs before it's rolled out. Not only do you have countless developers working on it and testing it... you also have companies like Red Hat, Citrix, Oracle, etc. putting all their money and resources into developing and testing it too.

 

If you think the entire IT industry and Large, Medium and Small companies are irresponsible for using a proven technology like Virtualization... Don't install / use it.

 

Whereas if that were the case, at least we would not be stuck with a 32-bit unRAID with this setup. We would have a 64-bit 6.0, and 6.1 would be for the others.

 

That does not even make sense. You will have a 64-Bit OS so I don't understand your problem.

 

How does enabling one of the 1,000+ features / functions of the Linux Kernel that has been stable for 5+ years prevent you from using unRAID 6.0 64-Bit?

 

Does the fact that your unRAID includes network / SATA controller kernel modules that you do not load on your particular server keep you up at night? Do those network / SATA controller kernel modules that you do not use / load fry your system or delete your data?

 

Secondly, even with off-loading various functions to our VMs today, we still have memory management issues.

 

What memory management issues are you referring to? Have you reported this bug to the Linux kernel people? How exactly have Amazon, Google, and every small, medium, and large company I have worked in that utilizes virtualization not run into these memory management issues you keep talking about?

 

How about the people who really wanted cache pool? It's not cool to roadmap something and then jump ship on it. As I expressed, it's not that all this new stuff isn't cool; it's a bit unfair not to offer a 64-bit counterpart to 5.0 for basic users who would benefit from a 32-bit to 64-bit swap.

 

That is a business decision that Tom made. I suspect that if you took a poll, 64-bit and virtualization would be at the top and running multiple cache drives would be at the bottom. As for multiple cache drives, I don't even see why people would want / need that. What is the point of having a RAID if you keep GBs / TBs of data on it instead of putting that data in your RAID?

 

I respectfully request some consideration from Tom on this. Move this to 6.1 and work on 5.x, 6.0 and 6.1 at the same time if need be, since this work has already started.

 

It is IMPOSSIBLE for Tom to accommodate your request.

 

How can Tom make a 64-Bit unRAID without making a 64-Bit version of unRAID? He can't go back in time to Slackware 13.1 and make it a 64-Bit version because the 64-Bit code doesn't exist. Therefore, he has to "fast forward" to when he can. Why would Tom choose to go back in time to 2011 and Slackware 13.37 to run into the exact same issues we are working on in Slackware 14.1? 

 

The plugins were ALWAYS going to have to be updated to 64-bit NO MATTER WHAT, and even if Tom decided to make 32-bit versions that run in a 64-bit unRAID, they were still all going to have to be updated NO MATTER WHAT.

 

Running a 64-bit Linux kernel does not cause memory management issues. Enabling hypervisor support (which you can choose to use or not) in the Linux kernel does not cause problems, break things, format your drives, create performance issues, etc.

 

For the record, you can run KVM / Xen as a host on a 32-bit Linux kernel and run VMs that are either 32-bit or 64-bit. Until the last few years, that is usually what most people did. Enabling / using a hypervisor on either a 32-bit or a 64-bit kernel isn't something new or different.

Link to comment

I'm very much under the impression that the practical difference between the two is whether or not the Xen functionality is available. Can someone a little more in the know say if I'm off the mark: would the Xen kernel have any disadvantages for someone who isn't going to run VMs on their system?

You bring up a good point.  The Xen/KVM options in the kernel itself don't require that much more memory.  The Xen toolset and required supporting packages require quite a bit more.  I haven't added it all up yet, but I'm guessing it's around 10MB or so.  I don't think it will be worth it to produce a "non-Xen/KVM-enabled unRaid" vs a "Xen/KVM-enabled unRaid".

Link to comment

Why Xen vs KVM?  Flipped a coin and it came up Xen  ;)

 

From your side of things... Getting Xen to work / maintaining / updating it will be easier than KVM.

 

Xen puts everything into a kernel and one package, whereas KVM is updated via the Linux kernel and you have all the separate packages to install / maintain / update.

 

I don't believe there is any speed degradation running as dom0, but I haven't run extensive tests yet.

 

There isn't.

 

No doubt there will be more issues to work out with Xen+unRaid and there is still a lot of work to do to make it "user friendly".  For example, starting VM's is really not difficult and would be straight-forward to code into a webGui plugin.

 

Agreed. We just need the various options (CPUs, MAC address, memory, PCI passthrough, etc.) in a plugin that creates the Xen config file and starts / stops the VM.
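
As a rough sketch of what such a plugin would generate (the VM name, paths and values below are placeholders, not anything shipped with the beta), a minimal xl-style config covers exactly those options:

    # /etc/xen/testvm.cfg  (hypothetical example)
    name       = "testvm"
    vcpus      = 2
    memory     = 2048
    bootloader = "pygrub"
    vif        = [ 'mac=00:16:3e:aa:bb:cc,bridge=xenbr0' ]
    disk       = [ 'file:/mnt/cache/vms/testvm.img,xvda,w' ]
    # optional PCI passthrough, using the host's bus:device.function
    pci        = [ '01:00.0' ]

The start / stop side is then just "xl create /etc/xen/testvm.cfg" and "xl shutdown testvm", which a webGui button can wrap.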

 

For PCI passthrough, Tom, you are going to need to decide between two approaches.

 

Option 1

 

1. Build the xen-pciback driver (pci-stub for KVM) into the kernel rather than as a module.

 

2. Users would then pass through their various devices via syslinux and kernel command-line parameters.

 

Or...

 

Option 2

 

1. Leave xen-pciback (pci-stub for KVM) as a module.

 

2. Have the plugin run some commands to pass through the devices that way.

 

Option 2 isn't a "sure thing" and can be "iffy".

 

I think option 1 is best. I'd build the xen-pciback driver (pci-stub for KVM) into the kernel and have the plugin add the PCI device IDs to syslinux.cfg.
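
To make the two options concrete (the PCI IDs and the dom0 memory value below are placeholders for whatever devices you actually want to hand to a guest):

    # Option 1: driver built into the kernel; hide the devices on the dom0 kernel
    # portion of the append line in syslinux.cfg
    append /xen dom0_mem=2048M --- /bzimage xen-pciback.hide=(01:00.0)(02:00.0) --- /bzroot

    # Option 2: driver left as a module; claim the devices from the running system
    modprobe xen-pciback
    xl pci-assignable-add 01:00.0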

Link to comment

I think that folks asking questions concerning virtualization who haven't had the benefit of experience with it, and have had no reason to immerse themselves in the technology, have got some of us spun up ;-) Not everyone follows Linux this closely or encounters virtualization in their daily lives. It's normal for them to have questions that might seem obvious to others. Had I not taken the plunge myself awhile ago on ESXi, and needed to learn other things for my job, I'd be further out of my depth as well. Can we please try to realize that not everyone eats and sleeps this stuff? ;-)

 

Tom, for now, if it's possible, I think keeping both an enabled and a disabled version available makes sense. Folks who aren't interested in virtualizing, or who have more resource-constrained systems, may need this. They can try booting the enabled version to test, and if no one reports problems, perhaps then it would make the most sense to drop it?

 

 

Sent from my iPad using Tapatalk

Link to comment

As for multiple cache drives, I don't even see why people would want / need that. What is the point of having a RAID if you keep GBs / TBs of data on it instead of putting that data in your RAID?

"cache pool" lets you assign any number of your storage devices to a btrfs pool.  From this pool a "cache" subvolume is created that functions like the current "cache disk".  You could define other subvolumes for other purposes; maybe you create one for a VM's virtualized system disk.

 

The btrfs pool can be set up to be 'raid1' so that there is redundancy.  Typically one would use SSD's for the pool, but you don't have to.
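
For anyone who wants to poke at this ahead of the webGui support, the underlying btrfs commands look roughly like the following (device names and mount point are examples only):

    # two-device pool with both data and metadata mirrored (raid1)
    mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc
    mount /dev/sdb /mnt/cache_pool
    # subvolumes: one playing the classic "cache disk" role, one for VM images
    btrfs subvolume create /mnt/cache_pool/cache
    btrfs subvolume create /mnt/cache_pool/vms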

 

Link to comment

Heh, I asked once before but I'll timidly ask again: any chance of moving from Reiser to BTRFS? Not with standard RAID under it, but unRAID. My understanding, subject to my ignorance, is that the snapshot capability using slack space would be an advantage? Perhaps a bridge too far, but I'm curious as to your thoughts ;-)

 

 

Sent from my iPad using Tapatalk

Link to comment

"cache pool" lets you assign any number of your storage devices to a btrfs pool.  From this pool a "cache" subvolume is created that functions like the current "cache disk".  You could define other subvolumes for other purposes; maybe you create one for a VM's virtualized system disk.

 

The btrfs pool can be set up to be 'raid1' so that there is redundancy.  Typically one would use SSD's for the pool, but you don't have to.

 

Now that you've laid it out like that, using BTRFS, with the addition of people running VMs... that makes a lot of sense. Had you "stuck" with Reiser for the cache "pool"... I would have suggested running it outside of it.

Link to comment

Heh, I asked once before but I'll timidly ask again: any chance of moving from Reiser to BTRFS? Not with true RAID under it. My understanding, subject to my ignorance, is that the snapshot capability using slack space would be an advantage? Perhaps a bridge too far, but I'm curious as to your thoughts ;-)

 

 

Sent from my iPad using Tapatalk

The plan is to have emhttp (or equivalent) invoke scripts to create/mount/check a disk file system.  The base name of the script will be configurable per disk.  This way we can support any file system you want.
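
Purely as a guess at what that interface might look like (the script names and arguments below are invented for illustration, not the actual design), each filesystem could ship a small set of helpers that emhttp invokes with the device and mount point:

    #!/bin/bash
    # hypothetical create_btrfs / mount_btrfs / check_btrfs helper
    # $1 = block device, $2 = mount point
    case "$(basename "$0")" in
      create_btrfs) mkfs.btrfs -f "$1" ;;
      mount_btrfs)  mount -t btrfs "$1" "$2" ;;
      check_btrfs)  btrfs check "$1" ;;
    esac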

Link to comment

 

"cache pool" lets you assign any number of your storage devices to a btrfs pool.  From this pool a "cache" subvolume is created that functions like the current "cache disk".  You could define other subvolumes for other purposes; maybe you create one for a VM's virtualized system disk.

 

The btrfs pool can be set up to be 'raid1' so that there is redundancy.  Typically one would use SSD's for the pool, but you don't have to.

 

Now that you've laid it out like that, using BTRFS, with the addition of people running VMs... that makes a lot of sense. Had you "stuck" with Reiser for the cache "pool"... I would have suggested running it outside of it.

 

I've been curious about BTRFS obviously. How many drives would you need to run in a cache like this for both VM and classic unRAID cache for it to be protected? To date the heavy redundancy has kept me away from using it - I'm unwilling to give up that many drives to prevent a dual failure from wiping out everything.

 

 

Sent from my iPad using Tapatalk

Link to comment

No doubt there will be more issues to work out with Xen+unRaid and there is still a lot of work to do to make it "user friendly".  For example, starting VM's is really not difficult and would be straight-forward to code into a webGui plugin.

 

Once this is released and it's mostly working, I will concentrate on finishing other features, chief of which is "cache pool".  I think this feature will be very handy for VM's as well.

 

I've been thinking about this a bit.  From my perspective, the big needs would be to allow it to ingest an image from a site like turnkey linux or stacklet, give it a target of where to live, some configuration options, and then start/stop mechanism. 

 

Additionally give it the ability to snapshot on a schedule.

 

In my mind, I'd like to be able to have a VM live on a cache drive, on its own share that isn't backed up directly.  The backup would be a stop, a snapshot, and a start in the middle of the night to coincide with a cache drive backup process.  Transfer the snapshot to the array with the rest of the cache drive every night as normal.
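
If the VM images sit on a btrfs subvolume as described earlier in the thread, that nightly job could be as small as this sketch (the VM name, config path and subvolume paths are assumptions for illustration):

    #!/bin/bash
    # stop the VM cleanly, take a read-only snapshot, start the VM again
    xl shutdown -w testvm
    btrfs subvolume snapshot -r /mnt/cache_pool/vms /mnt/cache_pool/vms_$(date +%Y%m%d)
    xl create /etc/xen/testvm.cfg
    # the snapshot can then be copied to the array by the normal cache backup job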

 

I don't know if this should be built around the API, or shell scripts to the XL command, or even abstracted further with libvirt and virsh, or if a separate web frontend that works with libvirt would be the better way.

 

Personally I find most of the front ends to be a bit overkill for what I'd expect to see here.  They seem to be aimed more at running small clouds or driving datacenters than at being a frontend for a single machine.

Link to comment

"cache pool" lets you assign any number of your storage devices to a btrfs pool.  From this pool a "cache" subvolume is created that functions like the current "cache disk".  You could define other subvolumes for other purposes; maybe you create one for a VM's virtualized system disk.

 

The btrfs pool can be set up to be 'raid1' so that there is redundancy.  Typically one would use SSD's for the pool, but you don't have to.

 

Now that you laid it out like, using BTRFS, with the addition of people running VMs... That makes a lot of sense. Had you "stuck" with Resier for the Cache "pool"... I would have suggested / run it outside of it.

I know there's a lot of "hate" on reiserfs, but on more than one occasion, I've had a user run a parity sync targeting a data disk by mistake, realize it after some time has passed, say "oh crap!" and hit Cancel.  In these cases we have always been able to recover at least some data (in some cases nearly all) - probably not many file systems would be that resilient.  But yeah, perception is reality these days.

Link to comment

No doubt there will be more issues to work out with Xen+unRaid and there is still a lot of work to do to make it "user friendly".  For example, starting VM's is really not difficult and would be straight-forward to code into a webGui plugin.

 

Once this is released and it's mostly working, I will concentrate on finishing other features, chief of which is "cache pool".  I think this feature will be very handy for VM's as well.

 

I've been thinking about this a bit.  From my perspective, the big needs would be to allow it to ingest an image from a site like turnkey linux or stacklet, give it a target of where to live, some configuration options, and then start/stop mechanism. 

 

Additionally give it the ability to snapshot on a schedule.

 

In my mind, I'd like to be able to have a VM live on a cache drive, on its own share that isn't backed up directly.  The backup would be a stop, a snapshot, and a start in the middle of the night to coincide with a cache drive backup process.  Transfer the snapshot to the array with the rest of the cache drive every night as normal.

 

I don't know if this should be built around the API, or shell scripts to the XL command, or even abstracted further with libvirt and virsh, or if a separate web frontend that works with libvirt would be the better way.

 

Personally I find most of the front ends to be a bit overkill for what I'd expect to see here.  They seem to be aimed more at running small clouds or driving datacenters than at being a frontend for a single machine.

 

Sounds like we have a VM Manager plugin author volunteer!  ;D 

Link to comment

No doubt there will be more issues to work out with Xen+unRaid and there is still a lot of work to do to make it "user friendly".  For example, starting VM's is really not difficult and would be straight-forward to code into a webGui plugin.

 

Once this is released and it's mostly working, I will concentrate on finishing other features, chief of which is "cache pool".  I think this feature will be very handy for VM's as well.

 

I've been thinking about this a bit.  From my perspective, the big needs would be to allow it to ingest an image from a site like turnkey linux or stacklet, give it a target of where to live, some configuration options, and then start/stop mechanism. 

 

Additionally give it the ability to snapshot on a schedule.

 

In my mind, I'd like to be able to have a VM live on a cache drive, on its own share that isn't backed up directly.  The backup would be a stop, a snapshot, and a start in the middle of the night to coincide with a cache drive backup process.  Transfer the snapshot to the array with the rest of the cache drive every night as normal.

 

I don't know if this should be built around the API, or shell scripts to the XL command, or even abstracted further with libvirt and virsh, or if a separate web frontend that works with libvirt would be the better way.

 

Personally I find most of the front ends to be a bit overkill for what I'd expect to see here.  They seem to be aimed more at running small clouds or driving datacenters than at being a frontend for a single machine.

 

Sounds like we have a VM Manager plugin author volunteer!  ;D

 

You might.  I imagine that it would be a very bash heavy plug-in.

Link to comment

I've been curious about BTRFS obviously. How many drives would you need to run in a cache like this for both VM and classic unRAID cache for it to be protected? To date the heavy redundancy has kept me away from using it - I'm unwilling to give up that many drives to prevent a dual failure from wiping out everything.

For redundancy: min is 2.  Eventually (they say) btrfs will have raid-5 capability.

Link to comment

If the cache subsystem has enough redundancy (since it doesn't sound like it needs to be a single drive), backing it up could simply be a snapshot using the file system, yes? ZFS can do this, and I think BTRFS can as well. The caveat, apparently, is that if you completely fill the drive it can be "bad", from what I've read... Truthfully, ZFS and BTRFS spin my head, so I could be off in the weeds :-(

 

 

Sent from my iPad using Tapatalk

Link to comment

 

I've been curious about BTRFS obviously. How many drives would you need to run in a cache like this for both VM and classic unRAID cache for it to be protected? To date the heavy redundancy has kept me away from using it - I'm unwilling to give up that many drives to prevent a dual failure from wiping out everything.

For redundancy: min is 2.  Eventually (they say) btrfs will have raid-5 capability.

 

Gotcha, a two-drive mirrored pool then? Perfect, and we can snapshot as well as do some sort of backup!

 

 

Sent from my iPad using Tapatalk

Link to comment

I know there's a lot of "hate" on reiserfs, but on more than one occasion, I've had a user run a parity sync targeting a data disk by mistake, realize it after some time has passed, say "oh crap!" and hit Cancel.  In these cases we have always been able to recover at least some data (in some cases nearly all) - probably not many file systems would be that resilient.  But yeah, perception is reality these days.

 

One of the MAIN reasons that people choose unRAID is that you have your data on your drives whether they are in or out of unRAID (hence the name).

 

Many Linux distros (CentOS, Debian, etc.) do not even include ReiserFS support anymore (you have to jump through several hoops to install it), and more have announced or are considering dropping it in upcoming releases. Ubuntu Developers Discuss Dropping ReiserFS

 

Like it or not... ReiserFS's days are numbered. If customers do not think they can easily access the data on their drives inside or outside of unRAID... you lose a major selling point.

Link to comment

I'm mostly file system agnostic. I care mostly that I have protection for my drives from a single failure, and that if a double failure occurred I wouldn't lose my entire array; that's important. It's also important that I'm able to use drives of disparate size efficiently and that I don't use half my drives for parity. In a discussion about ZFS on Ars I was told to make ZFS pools with pairs of mirrored drives - a 50% loss of storage space. Bonus: since those other systems stripe data, the drives spin ALL the time - unRAID's don't. It IS important to me that I can pull a drive out, diagnose it, and access it offline for data recovery, but that's not a complete show stopper. Snapshotting with BTRFS seems like a good idea, but having no experience with it I may be misunderstanding. The Ars comments thread was helpful but not all-inclusive, and home user needs like mine seemed dismissed lol.

 

Oh, and yeah, bitrot, which IMO seems overhyped. I did break down and hash my ISO and MKV files just to check. I did point out to those guys that, with their stats and my YEARS of using unRAID on machines without ECC, my parity checks should have spotted errors quite often for no explained reason, and yet they haven't :-)

 

 

Sent from my iPad using Tapatalk

Link to comment

In my mind, I'd like to be able to have a VM live on a cache drive, on its own share that isn't backed up directly.  The backup would be a stop, a snapshot, and a start in the middle of the night to coincide with a cache drive backup process.  Transfer the snapshot to the array with the rest of the cache drive every night as normal.

 

FWIW, I wouldn't like a backup plan that involves a mandatory shutdown of the VMs. In my mind the VMs need to be servers -- always up just like unRaid. So I was thinking of in-VM backup solutions. BTW, the proper way to back up Xen VMs seems to be to export them. It seems to be a nontrivial exercise to automate it. http://www.howtogeek.com/131181/how-to-backup-citrix-xen-vms-for-free-with-xen-pocalypse-bash/

 

I just bought a couple of 128GB SSDs, and was planning on using one for a cache drive and the other for the VMs (unprotected). The cache pool seems relevant, although I guess I don't see the benefit of unraid being aware of my VM drive?

Link to comment

I care mostly that I have protection for my drives from a single failure, and that if a double failure occurred I wouldn't lose my entire array; that's important. It's also important that I'm able to use drives of disparate size efficiently and that I don't use half my drives for parity. In a discussion about ZFS on Ars I was told to make ZFS pools with pairs of mirrored drives - a 50% loss of storage space. Bonus: since those other systems stripe data, the drives spin ALL the time - unRAID's don't. It IS important to me that I can pull a drive out, diagnose it, and access it offline for data recovery, but that's not a complete show stopper. Snapshotting with BTRFS seems like a good idea, but having no experience with it I may be misunderstanding. The Ars comments thread was helpful but not all-inclusive, and home user needs like mine seemed dismissed lol

 

What you were told about ZFS is incorrect, and most people set up ZFS incorrectly. You do not have to set up mirrored pairs for fault tolerance; you can run RAIDZ1, 2, or 3 and have single, double, or triple parity.

 

The problem is, most people only think RAID and apply that to ZFS. ZFS isn't just RAID... It's a self-healing File System with RAID capability too. 

 

The reason you see people bashing / having issues with ZFS is that they do not understand it. They put every single drive in their system into one vdev and one zpool. Enterprises wouldn't dream of doing that, so I find it funny that home users do it and think they are following some "Best of Breed" practice. It's only later that they realize the mistake they made, and by then... it's too late.
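
For example, instead of a stack of mirrored pairs, six disks can go into a single RAIDZ2 vdev and survive any two drive failures while giving up only two disks' worth of space. ZFS is not part of unRAID, so this is purely illustrative, and the pool and device names are placeholders:

    zpool create tank raidz2 sdb sdc sdd sde sdf sdg
    zpool status tank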

Link to comment

 

Oh, and yeah, bitrot, which IMO seems overhyped. I did break down and hash my ISO and MKV files just to check.

 

1.) Losing data can NEVER be overhyped.

2.) Not all of my data comes from torrents that I can hash-check. I have plenty of personal documents that would just get corrupted.

 

Umm, none of my movies are from torrents, and I have hundreds of hours invested in ripping and compressing them. I used a Windows-based program called HashCheck to create hashes by file type. It took more than two days to do it for all of my movies... I could just as easily do this for documents, but frankly, flipping a single bit in one of them is less likely to have disastrous consequences. HashCheck can also check against the stored hashes to find errors.
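
The same sort of manifest can be built on the server itself with standard Linux tools (the paths below are just examples):

    # build a checksum manifest for the movie share
    find /mnt/user/Movies -type f -name '*.mkv' -exec md5sum {} + > /boot/movies.md5
    # later: re-verify and print only the files whose hash no longer matches
    md5sum -c /boot/movies.md5 | grep -v ': OK$'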

 

 

Sent from my iPad using Tapatalk

Link to comment
