Clustering unRAID?


Recommended Posts

Hi, I am wondering whether it is possible to set up a scalable cluster with multiple nodes and then install unRAID across it to take advantage of the cluster's full processing power?

 

To the best of my knowledge this is not possible.

But could you give an example of what you would plan to accomplish if you could do it?

 

From a hypothetical standpoint, I'm pretty sure this is possible... but having said that, I have no idea how to do it.

Link to comment

Gonna have to follow this thread in hopes it gets more active. I wanted to do the same thing with Plex a while back and harness the transcoding power of multiple boxes, but received the same replies as the OP about it not really being possible. Would love to see unRAID have a feature to cluster multiple boxes as processing nodes.

  • Thanks 2
Link to comment

Gonna have to follow this thread in hopes it gets more active. I wanted to do the same thing with Plex a while back and harness the transcoding power of multiple boxes, but received the same replies as the OP about it not really being possible. Would love to see unRAID have a feature to cluster multiple boxes as processing nodes.

Not on the roadmap. Handling compute aggregation between disparate systems for real-time applications like transcoding over a 1 Gbps network is unrealistic and, frankly, not something I would imagine most consumers would leverage. This is more of an enterprise feature.

Link to comment

I didn't mean LT would do anything to cluster; I was referring to anyone else who comes up with something. For instance with Plex, their developers have no desire to make it easier to run multiple Plex servers, but some wild members have started to code their own add-in to do just that. So you never know what kind of coders are hiding in the unRAID community who might be working on a crazy thing such as this :)

  • Thanks 1
Link to comment
  • 2 years later...

I'm aware of a couple of remote transcoding projects for Plex where one instance of Plex is the master and the others work as slaves. Why not just get a couple of boxes that are good at transcoding, running your favorite distro, to act as slaves?
I had planned on doing something similar with a few cheap dual-CPU 1U servers I had lying around, but the WAF pretty much killed that project. And in the end, I don't transcode enough to warrant that setup anyway.
Now if Unraid itself could be clustered for replication and HA, that would be interesting. Following.


  • Like 1
  • Thanks 1
Link to comment
  • 2 years later...
On 1/9/2016 at 2:23 AM, CyberSkulls said:

Gonna have to follow this thread in hopes it gets more active. I wanted to do the same thing with Plex a while back and harness the transcoding power of multiple boxes, but received the same replies as the OP about it not really being possible. Would love to see unRAID have a feature to cluster multiple boxes as processing nodes.

Meeeeee TOO!!!

 

It's a few years later now. We just got multiple pools (mostly BTRFS), and I hear chatter of multi-unRAID pools.

 

There is LITERALLY nowhere to break EPIC ground at this point aside from #UnraidServerInterOperability / clusters / stacks / Etc.

 

Once that happens, people can spread development between audio/music production on one server, VSTs on smaller purpose-built servers to support those, video production on another with load-balanced security surveillance, entertainment media over virtual machines (maybe as new Docker-style implementations for VMs)...

 

Sky's the limit once we have unRAID servers locked in a productive, intimate embrace.  LOL

  • Like 1
Link to comment
  • 3 months later...

I could totally see use cases for this... 

A slave unRAID with an SSD on, say, a Pi that could be used to access running VMs... Or a resource-rich, power-hungry monster that a nice, quiet unRAID master could WOL if it needed the CPU/GPU... Even a deep-storage system, so old shares get moved to an archive unRAID machine that the master can wake to access the storage pool, so it can spin up an entire server 😂

  • Like 2
Link to comment
  • 2 months later...

Hello,

 

No movement for years. I'm looking for an answer to this question. Also, I'm a bit of a noob with cluster storage, so perhaps I'm barking up the wrong tree.

I have one box currently at 90% capacity on the shares, and I don't have any more room for drives in this box. It's a mid tower that has been heavily modified to support 15 HDDs and 5 SSDs.

Goal:

Build a new box with more storage and cluster the extra space. I would like the shares to stay the same and have any new data saved to the new drives on the second box.

 

Bonus: if possible, use the extra CPU and/or extra GPU to do some work too, e.g. transcoding.

 

Is this possible, or do I need to switch to something like Ceph, which is open-source software that runs on an enterprise OS?

[Attached screenshot: Unraid.PNG]

Link to comment
  • 3 months later...

I have some experience with building bespoke clusters using various technologies.

Forgive me here, but Proxmox does have this ability if you wanted to start experimenting using a GUI or something. Again, sorry for mentioning a competing product.

Back to UnRaid.

If you have 2 unRAID servers and separate shared storage, you could move all your data/Docker stuff there, but you have to have software to manage the system. One method is to provide quorum or fencing. Basically this keeps the slave machine(s) from accessing data that currently belongs to the master. It keeps the slave(s) from locking/writing to files while they are held open by the master, thus avoiding corruption and data loss.
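To make the fencing/quorum idea a bit more concrete, here's a minimal sketch (plain Python, hypothetical paths and names, nothing built into unRAID) of a lease file on the shared storage that a node must hold before it is allowed to touch the shared data. A real stack such as Pacemaker/Corosync with STONITH does this properly and atomically; this only shows the shape of the idea:

```python
import json, socket, time

# Hypothetical lease file on shared storage that both nodes can see.
LEASE_PATH = "/mnt/shared/.cluster/master.lease"
LEASE_TTL = 30  # seconds a lease stays valid without being renewed

def read_lease():
    try:
        with open(LEASE_PATH) as f:
            return json.load(f)
    except (FileNotFoundError, ValueError):
        return None

def try_acquire_lease():
    """Become master only if no other node holds a fresh lease."""
    lease = read_lease()
    me = socket.gethostname()
    if lease and lease["owner"] != me and time.time() - lease["renewed"] < LEASE_TTL:
        return False  # another node is master: stay fenced off the data
    with open(LEASE_PATH, "w") as f:
        json.dump({"owner": me, "renewed": time.time()}, f)
    return True

if __name__ == "__main__":
    if try_acquire_lease():
        print("Lease held: safe to mount shares and start services")
    else:
        print("Fenced: another node owns the data, staying passive")
```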

 

The shared storage can be accomplished over the network via iSCSI, NFS, or even SMB, as well as a number of other more advanced storage protocols. These are the ones commonly available in open source. You can also use older cabled SCSI or SAS methods as long as you have interfaces and controllers that support fencing to keep the hosts separated.

 

Now the networking bit. You would need a way of introducing a VIP (virtual IP) to the networking stack so that accessing it would take you to the current 'master' in the cluster. The idea is that when one member of the cluster becomes unavailable, code on the other machine takes over the VIP and takes over all the services that were being managed by the now-defunct old master. There are a couple of ways of accomplishing that.

 

One such piece of software is Pacemaker. It can handle switching a VIP, stopping/starting services, mounting/connecting storage, etc. It determines which machine is the master (assuming only 2 members in the cluster) by using a heartbeat signal, which is usually sent over a private ethernet link between them. I've just used a direct connection cable between them before and let the NICs sort out auto-MDI-X for themselves. The heartbeat allows each machine to keep track of the health of the other. It can include status info on several aspects of the opposite machine, which can all go into making decisions about what actions to take.
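As a rough illustration of the heartbeat/VIP idea (this is not Pacemaker itself; the peer address, port, interface and VIP below are all made up), a standby node could watch the master over the private link and claim the VIP only when the master stops answering:

```python
import socket, subprocess, time

PEER = "10.0.0.1"          # hypothetical heartbeat address of the current master
HEARTBEAT_PORT = 9999      # hypothetical port the master listens on over the private link
VIP = "192.168.1.200/24"   # hypothetical virtual IP that clients actually use
IFACE = "br0"              # hypothetical interface to attach the VIP to
MISSED_LIMIT = 3

def peer_alive(timeout=2):
    """Crude heartbeat: can we still open a TCP connection to the peer?"""
    try:
        with socket.create_connection((PEER, HEARTBEAT_PORT), timeout=timeout):
            return True
    except OSError:
        return False

missed = 0
while True:
    missed = 0 if peer_alive() else missed + 1
    if missed >= MISSED_LIMIT:
        # Master looks dead: claim the VIP so clients follow us,
        # then mount shared storage and start services here.
        subprocess.run(["ip", "addr", "add", VIP, "dev", IFACE], check=False)
        break
    time.sleep(5)
```

Pacemaker/Corosync do the same job with proper quorum, fencing and resource agents, which is why you'd use them over anything hand-rolled like this.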

Say you have multiple Docker containers on a bridge interface so they all have their own IP. Well, you could have certain containers 'moved' over to the other cluster member when, say, the overall CPU usage gets too high on the machine they are currently running on, as sketched below. I would shut down the container so that the config info and data shares are updated and flushed to the shared storage. Then you could import it, or already have it imported, and start it up on the other machine, and that container's IP should now be available there.
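A hand-rolled version of that hand-off might look roughly like this (the container name, peer address and threshold are hypothetical; it assumes the container's config and data already live on the shared storage and that the same container is defined on both hosts):

```python
import os, subprocess

CONTAINER = "plex"       # hypothetical container whose config/data live on shared storage
PEER = "root@10.0.0.2"   # hypothetical SSH address of the other cluster member
LOAD_THRESHOLD = 6.0     # 1-minute load average that triggers the hand-off

def move_container_to_peer():
    # Stop it locally so its state is flushed to the shared storage...
    subprocess.run(["docker", "stop", CONTAINER], check=True)
    # ...then start the same (already defined) container on the peer.
    subprocess.run(["ssh", PEER, "docker", "start", CONTAINER], check=True)

if os.getloadavg()[0] > LOAD_THRESHOLD:
    move_container_to_peer()
```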

 

I'm probably missing/glossing over some of the finer details, but proper failover clustering is a complicated subject. So it would be possible with unRAID, but it would be A LOT of hacking. You really have to know and understand your networking and shared storage concepts.

 

That said, I really love unRAID for its standalone abilities, and I feel that it does a lot of things very well. Proxmox has the clustering thing designed in pretty well, so that would be the better tool for the job if clustering is your aim. Use the correct tool for the job instead of making the one you have fit an odd-shaped hole.

  • Like 1
Link to comment
26 minutes ago, ghstridr said:

I have some experience with building bespoke clusters using various technologies. [...] Proxmox has the clustering thing designed in pretty well, so that would be the better tool for the job if clustering is your aim. Use the correct tool for the job instead of making the one you have fit an odd-shaped hole.

I could not agree with you more, and I would be surprised if anyone takes issue with your sharing such objective insights here. 

 

In the future, I would love to implement such a solution. However, that will be many moons from now, as I struggle with my ADHD (and the difficulty it brings me in absorbing text information... especially beyond a page or two of text).

 

Thank you so very much for sharing your thoughts; I have emailed your post to myself so that when I am ready, I can reference it.

 

Please enjoy your day.

Link to comment
3 minutes ago, sekrit said:

I could not agree with you more, and I would be surprised if anyone takes issue with your sharing such objective insights here. [...] Thank you so very much for sharing your thoughts; I have emailed your post to myself so that when I am ready, I can reference it.

You are very welcome. Glad to be of help.

  • Like 1
Link to comment
  • 1 year later...

I actually do this for transcoding. I have my unRAID server, which has a Docker instance of the Tdarr server running on it. The directories are shared across the network. All the other computers in the house have the Tdarr client running on them (and there is an instance of the Tdarr client on the unRAID server as well).

 

The clients are then pointed at the server directories and at the Tdarr server instance. The Tdarr server then controls farming out jobs to all the clients and itself at night when everyone is in bed, allowing distributed transcoding of my media library using the other PCs on my home network while everyone is asleep. Come morning, the Tdarr server instructs the PCs to stop transcoding so they aren't sapping system resources from my family members during the day.

 

As most of the household PCs are gaming PCs, I can take advantage of their GPUs for transcoding media. I have a thing about putting everything in H.265 to save space.

 

With respect to the unRAID server itself, what would be really, really cool (for home users) would be a way to "wake up" other slave PCs with unRAID running on them at times of peak use, for scheduled backups, etc.

 

I could easily see a use case (especially considering the cost of electricity at the moment) where you could build a very minimum-spec unRAID server running on low power, just trundling along minding its own business and not using a lot of electricity. Then, when the family come home at 5pm after work and all want to watch movies at the same time, or do a raft of other tasks, and that server gets bogged down, it wakes up another small server, and another (as it needs extra resources), or indeed shuts them off when not needed to save electricity.
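The wake-up half of that is just standard Wake-on-LAN, which doesn't need anything special from unRAID; here's a small sketch (MAC address and threshold made up) of sending the magic packet when the main box gets busy:

```python
import os, socket

HELPER_MAC = "aa:bb:cc:dd:ee:ff"  # hypothetical MAC of the low-power helper server
LOAD_THRESHOLD = 4.0              # wake the helper once the 1-minute load passes this

def send_magic_packet(mac):
    """Standard WoL magic packet: 6 x 0xFF followed by the MAC repeated 16 times."""
    payload = bytes.fromhex("ff" * 6 + mac.replace(":", "") * 16)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(payload, ("255.255.255.255", 9))

if os.getloadavg()[0] > LOAD_THRESHOLD:
    send_magic_packet(HELPER_MAC)
```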

 

The ability to save money by distributing unRAID across many small computers vs one large one, and only turning on what it needs, when it needs it, would be an absolute game changer for prosumers. Plus, if you have, say, 6 small PCs, you have 6 network cards, so you can serve 6 movies to 6 people at the same time at 1 Gbps each. Or, the idea I really like: the ability to transfer a very large file to unRAID and have it split up amongst all 6 machines into a cache pool over the network, taking advantage of the combined bandwidth, and then have a version of mover that would gradually recombine it on one server.
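The split/recombine idea is essentially striping a file into chunks and stitching them back together later; here's a toy sketch of both halves (paths are hypothetical, and no unRAID integration is implied):

```python
from pathlib import Path

CHUNK_SIZE = 1 << 30  # 1 GiB per chunk

def split_file(src, chunk_dirs):
    """Stripe successive chunks of src round-robin across the given cache dirs."""
    chunks = []
    with open(src, "rb") as f:
        for i, data in enumerate(iter(lambda: f.read(CHUNK_SIZE), b"")):
            dest = Path(chunk_dirs[i % len(chunk_dirs)]) / f"{Path(src).name}.part{i:04d}"
            dest.write_bytes(data)
            chunks.append(dest)
    return chunks

def recombine(chunks, dst):
    """The 'mover' half: stitch the chunks back together in order."""
    with open(dst, "wb") as out:
        for chunk in sorted(chunks, key=lambda p: p.name):
            out.write(chunk.read_bytes())

# Example: stripe one big file across two (hypothetical) network-mounted caches,
# then reassemble it onto the final share later.
parts = split_file("/mnt/user/incoming/big.mkv",
                   ["/mnt/remotes/node1_cache", "/mnt/remotes/node2_cache"])
recombine(parts, "/mnt/user/media/big.mkv")
```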

 

The possibilities for unraid with distributed computing are just freaking mind blowing.

 

And I mean, if you were to build the PCs it was running on to gaming spec? Woof! Can you imagine never needing a bloody tower in any room in the house again, whether it be for your TV, for gaming, or for work? A shed load of VMs that can use the system resources of multiple PCs in one central location?

 

Put simply, the possibilities of unraid with distributed computing in a household setting - YES PLEASE! SIGN ME UP! 

 

DO IT! LOL

Link to comment
  • 6 months later...

2 years down the line and I find myself back looking at a thread I found then!

Thought I would jump in again.

Firstly, if you have found yourself here because you are running out of storage space, my suggestion is to look at a disk shelf with an IT-mode HBA.

This sounds complex if you haven't done it, but actually it's pretty simple.

You add a PCIe HBA card, get the right cable, and plug it together. As far as you are concerned in the GUI, any drive you add to the disk shelf (I have a NetApp DS4243) looks like you added it to the original PC. I would advise putting your cache disks and maybe your parity in the machine and having the storage on the disk shelf - that minimises traffic through the cable bottleneck.

 

Re the clustering, my thoughts have changed a little. Storage is too complex; I think we should effectively be looking at slave CPUs. So: a PC with no storage in use (maybe a cache for its own use) running a slave unRAID version, which the main unRAID can farm apps/VMs out to. That way all the storage management is still on the main machine, but if powered on, the others can take some or all of the load off.

E.g. a download PC for pulling down and processing NZBs, or a media box that takes over Plex/Emby. Or just a straightforward secondary machine.

 

Any app moved to the slave machine could be proxied to the unraid IP and port.
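For the proxy piece, even a simple TCP forwarder running on the main box would do the trick; here's a minimal asyncio sketch (addresses and ports made up):

```python
import asyncio

LISTEN_PORT = 8989           # hypothetical port exposed on the main unRAID box
SLAVE_HOST = "192.168.1.51"  # hypothetical slave machine actually running the app
SLAVE_PORT = 8989

async def pipe(reader, writer):
    # Copy bytes in one direction until the sender closes its side.
    try:
        while data := await reader.read(65536):
            writer.write(data)
            await writer.drain()
    finally:
        writer.close()

async def handle(client_reader, client_writer):
    # Open a connection to the app on the slave and shuttle bytes both ways.
    upstream_reader, upstream_writer = await asyncio.open_connection(SLAVE_HOST, SLAVE_PORT)
    await asyncio.gather(pipe(client_reader, upstream_writer),
                         pipe(upstream_reader, client_writer))

async def main():
    server = await asyncio.start_server(handle, "0.0.0.0", LISTEN_PORT)
    async with server:
        await server.serve_forever()

asyncio.run(main())
```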

 

Yeah, it's not clustering, but it is to clustering what unRAID is to RAID.

 

The only exception to the storage thing is that I might suggest a backup slave that would spin up to take off-site/second-location backups, as we know redundancy is not backup!

 

 

Link to comment
  • 5 months later...

Hey y'all. Doing this natively with just unRAID is possible, since Slackware can cluster. You'd have to dig hard into the console and install things like Slurm to manage the workload. You're not going to get the nodes to present in the unRAID GUI properly, though.

 

However, if you want to make it much easier on yourself, set up a Proxmox cluster and make an unRAID VM on it. You can assign unRAID all the CPU cores and memory you want, and even pass through storage for unRAID to manage.

 

I love UNRaid. Keep thinking differently y'all. 

Link to comment
