
[Plugin] unRAID Replication



unRAID Replication Plugin
(Currently in early beta | only visible on unRAID 6.13.0-beta.2+)
 

 

DISCLAIMER: This plugin is under development and bugs may occur (if you encounter a bug, please let me know). Always make sure to have a backup of your data on another device; I can't rule out data loss!

 

!!! PLEASE USE THIS PLUGIN WITH CAUTION !!!

 

This plugin allows you to replicate your main applications (for now limited to Docker/LXC containers and chosen directories) to a second, unRAID-based backup machine.

With the inclusion of keepalived you can also create a virtual IP for your Main and Backup machines, so that the Backup machine automatically runs the replicated containers via the virtual IP when the Main server goes down.

If you don't want to use keepalived you can of course just replicate your applications to the Backup server.

 

Please let me know if you have feature requests or other ideas about what could be added or done differently.

The restriction to 6.13.0-beta.2 was done on purpose so that users can test the plugin and critical bugs can be fixed if found.

 

 

Prerequisites:

  1. A backup of your data and applications
  2. SSH enabled (Settings -> Management Access) on both the Master and Backup server:
    (screenshot)
  3. Pools and file paths set up identically on both the Master and Backup server
    (the pools/array must use the same filesystem too!)
  4. Network configured the same on both machines (bridge or no bridge)
  5. If you have custom Docker networks on your Master server, you have to create them on the Backup server before starting the replication, otherwise the replication will fail for the containers with custom networks (the name of the custom network must be the same).
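For prerequisite 5, the missing networks can be recreated from the Backup server's shell. This is a generic Docker CLI sketch (not something the plugin runs for you), and `my-custom-net` is a placeholder for your own network name:

```shell
# On the Master server: look up the custom network's settings
# ("my-custom-net" is a placeholder for your own network name).
docker network inspect my-custom-net

# On the Backup server: recreate it with exactly the same name
# (add the same --driver/--subnet options if you pinned them above).
docker network create my-custom-net
```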

 

 

Tutorial:

 

  1. Install the plugin on both servers through the CA App by searching for "Unraid Replication" and wait for the Done button:
    (screenshot)
     
  2. On the Master server (the Master server will be colored blue from now on) and on the Backup server (the Backup server will be colored orange from now on) go to Settings -> Unraid Replication:
    (screenshot)
     
  3. On both servers choose the appropriate Instance type for the machine and click Update:
    (screenshot)
     
  4. Generate a key pair on the Master server by entering the IP address of the Backup server and clicking Generate (in this case the Backup server has the IP address 10.0.0.139):
    (screenshot)
     
  5. Triple-click the now generated Public Key on the Master server, copy it to the clipboard, paste it into the field "Public Key from Host (Master)" on the Backup server and click Update:
    (screenshot)
     
  6. Let's test the connection on the Master server by clicking "Test":
    (screenshot)

    You should get a popup that says "Info: SSH connection working properly!" (if you get an error, make sure that you have SSH enabled on both the Master and Backup server):
    (screenshot)
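If the test fails, the same key-based connection can be checked by hand from the Master server's terminal. Note that the private key path below is an assumption (adjust it to wherever the plugin stored the generated key), and 10.0.0.139 is the example Backup server address:

```shell
# Hypothetical key path -- adjust to where the plugin keeps the
# generated private key; BatchMode makes ssh fail instead of
# falling back to a password prompt.
ssh -i /root/.ssh/unraid-replication -o BatchMode=yes root@10.0.0.139 'echo OK'
```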
     
  7. On the Master server select whether you want to replicate Docker and/or LXC and click Update (if you want, you can also change the Logging type from Syslog to File -> the log file will be written to: /var/log/unraid-replication):
    (screenshot)
    Note about the Temporary Path: If you leave the Temporary Path empty, the default path '/tmp/unraid-replication' will be used.
    In most cases the default path is sufficient. If you are low on RAM you should set a different path, since the Docker replication (if enabled) copies the container layers from the Master server to this path (one by one, so there will be at most one container backup in the temporary path at a time). Most containers should not consume more than 1GB of RAM, but please always check for exceptions; especially for LXC this can be a different story, and a container image can consume a few GB of RAM if you leave the default path.
    However, I would recommend leaving it empty for now until more testing is done. If you are brave enough, feel free to test it and let me know if you encounter any bugs/issues.
     
  8. Go to the Docker tab within the plugin on the Master server, select the containers that you want to replicate along with their paths, choose whether to enable Autostart on the client, and click Update (Autostart only applies if you use keepalived, since containers on the Backup server will not start automatically by default to avoid conflicts on your network):
    (screenshot)
    Replication: Enable if you want to replicate the container
    Stop Container: Will stop the container while replicating (Recommended! If you don't choose to stop the container, a snapshot will be taken <- this means the container will be paused for a few seconds so the snapshot can be taken)
    Host Autostart: Shows you whether Autostart on the Master server is enabled
    Client Autostart: Lets the Backup server know whether the container should be auto-started when the Master server is not available. You can also enable Autostart for a container that does not autostart on the Master server (Client Autostart only applies if you are using keepalived).
    Container paths: Choose the container paths that you want to sync (rsync is used to sync the data).
    Please note that the base path, in my case /mnt/cache (for /mnt/user/appdata) or /mnt/user (for /mnt/user/Serien), needs to exist on the Backup server before starting the sync, otherwise the path will be skipped while syncing.

    Note: If you choose to stop the container, it will be started again after the replication of the Docker image layers and the selected container paths. I strongly recommend stopping the container, since you can destroy a database when it's in the middle of writing data.
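As noted above, rsync does the actual path syncing. A roughly equivalent manual transfer of one selected path could look like this; the exact flags the plugin uses are an assumption, and the path/IP are the examples from this step:

```shell
# Sketch only: mirror one container path to the Backup server (10.0.0.139).
# -a preserves permissions, ownership and timestamps; --delete makes the
# destination an exact mirror of the source, so use it with care.
rsync -a --delete /mnt/cache/appdata/ root@10.0.0.139:/mnt/cache/appdata/
```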
     
  9. Select your LXC containers (if enabled) and click Update:
    !!! PLEASE NOTE THAT REPLICATION CURRENTLY ONLY WORKS IF YOU ARE USING BTRFS AS THE BACKING STORAGE TYPE !!!
    (ZFS will be implemented in one of the next releases)
    (screenshot)
    Replication: Enable if you want to replicate the container
    Stop Container: Will stop the container while replicating (Recommended! If you don't choose to stop the container, a snapshot will be taken in the running state, which could lead to data corruption)
    Host Autostart: Shows you whether Autostart on the Master server is enabled
    Client Autostart: Lets the Backup server know whether the container should be auto-started when the Master server is not available. You can also enable Autostart for a container that does not autostart on the Master server (Client Autostart only applies if you are using keepalived).
     
  10. On the Master server go back to the Unraid Replication tab and click Start at "Start Replication". This will initiate the replication and, if you didn't change the Logging Type above, you can watch the progress in the syslog (icon in the top right corner of the unRAID WebGUI):
    (screenshot)

    This should give you something like this in your syslog:
    (screenshot)

    After everything has finished on the Master server you will see that a task is started on the Backup server:
    (screenshot)

    After the task on the Backup server has finished you will see:
    (screenshot)

    When the replication is finished you get this message:
    (screenshot)
     
  11. After the replication task has finished you can go to the Docker page of the Backup server and you should see your synced containers:
    (screenshot)
    Please don't worry about the version saying "not available"; this is because the Docker layers are replicated from the Master server to the Backup server rather than pulled from Docker Hub.
    As you can see, Autostart is also disabled for all replicated containers. This is done on purpose and will be handled by keepalived if used; otherwise the containers would start and possibly cause conflicts on your network.
    (The container Gotiry-On-Start was not replicated, that's why you see Autostart enabled)
     
  12. The same applies to your LXC page on the Backup server:
    (screenshot)

 

 

 

 

Running on a schedule:

  1. On the Master server, go to the CA App and search for "User Scripts" from @Squid and install it:
    (screenshot)
     
  2. Go to Settings -> User Scripts:
    (screenshot)
     
  3. Click "Add new Script":
    (screenshot)
     
  4. Give it a good name:
    (screenshot)
     
  5. Hover over the Gear icon and click "Edit Script":
    (screenshot)
     
  6. Add the line "unraid-replication" (without double quotes) right below the line "#!/bin/bash" and click "Save Changes":
    (screenshot)
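The finished User Script from step 6 is just these two lines:

```shell
#!/bin/bash
unraid-replication
```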
     
  7. Select your preferred schedule or create a custom one:
    (screenshot)
    (I wouldn't recommend running it too often because replicating will obviously cause some strain on your disks/network. Personally I run it once a day, but you can of course run it every hour or however you prefer.)
     
  8. Click Apply:
    (screenshot)

    The script will now run on your set schedule.

 

 

 

 

This plugin was mainly designed to bridge short downtimes of the Master server so that your relevant services stay online on your Backup server via keepalived.

Personally I wouldn't recommend replicating databases without stopping the containers, because that can have fatal consequences. However, the plugin is also able to handle a controlled Shutdown or Reboot of the Master server (the latter may be removed) and then automatically replicate the applications/data back from the Backup server to the Master server when it is online again.

Please be even more careful when syncing data back from the Backup server to the Master server!

 

Again, always make sure to have a backup of your applications/data in case something goes wrong; I can't guarantee that no data loss will happen.

 

 

If you have any further questions please feel free to ask, and don't forget to report bugs/issues if you experience any.


keepalived (configuration)

 

Prerequisites:

  1. Fully configured unRAID-Replication plugin with replication tested and working

 


Tutorial
 

  1. On the Master server go to the plugin (Settings -> Unraid Replication):
    (screenshot)
     
  2. Enable keepalived and click Update:
    (screenshot)
     
  3. Go to the keepalived tab within the plugin:
    (screenshot)
     
  4. Click on "Show Host (Master) keepalived.conf example" to open the spoiler at the bottom of the page:
    (screenshot)
     
  5. Copy the whole example configuration and paste it into the text box above:
    (screenshot)
     
  6. Please make sure that you change the configuration example so that it fits your needs (there are descriptions on each line, so I would recommend that you read through all of them).
    Especially check the interface (configured by default to use br0); you have to define a virtual IP address (a free IP address on your network that is not used by anything - in my case 10.0.0.18) and set an auth_pass password (or remove the authentication section entirely - in my case testpassword).
    After you have configured everything, click Update:
    (screenshot)
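For orientation, a minimal keepalived.conf along those lines might look like the following. This is a generic keepalived sketch using the example values from this tutorial (br0, 10.0.0.18, testpassword), not the plugin's exact template; the Backup server uses the same block with state BACKUP and a lower priority:

```
vrrp_instance VI_1 {
    state MASTER             # Backup server: state BACKUP
    interface br0            # interface carrying your LAN traffic
    virtual_router_id 51     # must match on Master and Backup
    priority 100             # Backup server uses a lower value, e.g. 50
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass testpassword
    }
    virtual_ipaddress {
        10.0.0.18            # the shared virtual IP
    }
}
```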
     
  7. Shortly after you click Update you should get a notification from keepalived through the Unraid notification system that it has registered as the Master server:
    (screenshot)

    This should also be reflected in your syslog:
    (screenshot)
     
  8. Now move on to the Backup server, go to the Unraid Replication plugin, enable keepalived and click Update as described above for the Master server, and you should see something like this:
    (screenshot)
     
  9. Now enable Autostart for the services that you want to autostart and click Update:
    (screenshot)
     
  10. Now go to the keepalived tab on the Backup server, click "Show Client (Backup) keepalived.conf example" and copy and paste the whole example into the text box above:
    (screenshot)
     
  11. Please also make sure that you change the configuration example so that it fits your needs (there are descriptions on each line, so I would recommend that you read through all of them).
    Especially check the interface (configured by default to use br0); you have to define a virtual IP address (this MUST be the same IP address as you specified on the Master server - in my case 10.0.0.18) and set an auth_pass password (this MUST be the same password as you specified on the Master server - in my case testpassword).
    After you have configured everything, click Update:
    (screenshot)
     
  12. Shortly after you click Update you should get a notification from keepalived through the Unraid notification system that it has registered as the Backup server:
    (screenshot)
    If you don't get a message, check the logs on both the Master and Backup server.

 

 

 

With that you have now configured keepalived, and you should now be able to reach your services (if they are running on the default or a custom Docker bridge without a dedicated IP on the physical interface) via your virtual IP address - in my case 10.0.0.18 (for example binhex-jellyfin):

(screenshot)
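If you want to verify from the shell which machine currently holds the virtual IP, you can look for it on the keepalived interface (br0 and 10.0.0.18 are the example values from above):

```shell
# On the server currently acting as MASTER this prints the virtual IP;
# on the standby server it prints nothing.
ip -4 addr show br0 | grep 10.0.0.18
```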

 

When the Master server goes down for whatever reason, you will shortly afterwards get another notification from the Backup server that the Master server is not reachable:
(screenshot)

 

If you've enabled Autostart in step 9, keepalived will automatically start the containers that have Autostart enabled on the Backup server:
(screenshot)

 

And you will then still be able to connect to the container through the virtual IP address - in my case 10.0.0.18:
(screenshot)

 

Note: It might be necessary to re-login on web applications, because the connection drops for a few seconds and the container on the Backup server initially has no clue that something is/was connected.

 

Note: In the case of Jellyfin I had to set a static hostname for the container so that the WebGUI doesn't complain about a different hostname. Simply do this in the Docker template with Advanced View enabled and add:

--hostname=jellyfin

to the Extra Parameters:

(screenshot)

 

After the Master server is back online you will get a notification on the Backup server that it is available again; all containers on the Backup server will then be stopped and started as usual on the Master server:

(screenshot)

and on the Master server you'll get this notification:

(screenshot)

 

Your applications are now still available at 10.0.0.18 (if Autostart on the Master server is enabled).

 

 

With this you will be able to bridge short or longer periods of time when the Master server is offline and still be able to connect to your services through the virtual IP address, with only a few seconds of downtime.


Tutorial for replicating back to the Master server TBD since this can cause issues

(Hint if you want to try it: Keyboard combination CTRL+ALT+e on the main Unraid Replication plugin page on both the Master and Backup server)


Very nice plugin, my Backup server is now running as a full fallback server "just in case".

 

One cosmetic point as a note: when a server is in sleep mode and woken up by keepalived (3rd instance), it will produce an error message.

 

FAULT STATE ( assuming networking is ... when returning from sleep)

(screenshot)

 

But this is only cosmetic; all functions are working as expected.

 

My use case, as a note:

 

1/ MASTER Unraid Server 24/7

2/ BACKUP1 Unraid Server (weekly based backups from MASTER, now incl. Replication)

3/ BACKUP2 Server (RPi with keepalived setup)

 

Once MASTER is offline, BACKUP2 watches it, waiting 3 minutes since MASTER may just be rebooting. If it comes back online nothing happens; if it's still offline, BACKUP2 wakes up BACKUP1, and once woken up your plugin will start the Dockers and all systems are up and running.

 

Once MASTER is back online, your plugin(s) will take care of stopping the Dockers on BACKUP1, and when all is done, BACKUP1 goes back to sleep.

 

very nice ...

 

I had to put Home Assistant at the end of the startup order on MASTER, as I'm using custom networks for all Dockers; it looks like HA was up too fast while BACKUP1 was not done stopping them, and HA didn't like that ;) All other Dockers on custom networks (32 Dockers) had no issues with switching, probably something with Fritz ... ;)

 

Thanks again, great plugin, saved me some time writing something together myself ;)


Fantastic plugin. Just got it up and running on the new 7.0 beta. Can't wait for more development on this plugin; replication was one of the only things I missed from Proxmox, and now we are getting so close to it. Thanks again for all your hard work.

3 minutes ago, ffhelllskjdje said:

Working great! However, I installed LXC after installing this plugin and can't get the replication plugin to recognize that I now have LXCs for replication. I tried uninstalling and reinstalling this plugin, but it is still only detecting Dockers.

Can you please post your Diagnostics?

What backing storage type do you have configured in LXC (please note that currently only BTRFS is supported)?

52 minutes ago, ffhelllskjdje said:

Ah, yeah the host (master) FS is ZFS. On the backup it's btrfs. Docker replication works great.

Just as a side note, your default backing storage type is Directory and not ZFS; you can change that in the LXC settings:

(screenshot)

 

Keep in mind I don't plan to make it possible to copy from a ZFS to a BTRFS backing storage type, since that wouldn't be replication... :)

 

However, replication for both the ZFS and Directory backing storage types is coming, but it will still take a bit.

1 hour ago, Revan335 said:

Working with this?

Can you explain a bit more in detail what you mean exactly?

 

1 hour ago, Revan335 said:

For the SSH Service.

You have to enable SSH on Unraid itself; this is an Unraid-to-Unraid replication service, please see:

On 6/4/2024 at 3:24 PM, ich777 said:

This plugin allows you to replicate your main applications (for now limited to Docker/LXC containers and chosen directories) to a second, unRAID based, Backup machine.

 


Hello, could you please tell me how I can enable the version display of a replicated Docker container? I have split my Unraid into two Unraids, moving the essentials to a power-efficient node, and luckily I found out about this plugin, which saved me at least 4 hours of replicating the containers from the main server to the mini server by hand. But now that the replication has succeeded, I wish to abandon my essentials on my main server and fully use them on my mini server, but the container versions are "not available".
Thank you.

6 minutes ago, ALERT said:

But now that the replication has succeeded, I wish to abandon my essentials on my main server and fully use them on my mini server, but the container versions are "not available".

This is because the containers were not pulled from Docker Hub directly.

I think if you click "Force Update" with Advanced View enabled on the Docker page, it should hopefully update the status of the containers.

 

Otherwise you have to go into each container template, make a dummy change (to make the Apply button clickable), revert the dummy change and click Apply; that should also do the job.

 

What you can also try is to wait a day or two and see if the status changes <- if you have enabled the automatic update check.

(Please keep in mind that you are already on the latest release of the containers, since you've just replicated them)

4 minutes ago, ich777 said:

This is because the containers were not pulled from Docker Hub directly. [...]

Silly me, I didn't think of clicking Force Update myself. Thank you! It worked!

The dummy change didn't work, btw.

On 7/1/2024 at 1:41 PM, ich777 said:

Can you explain a bit more in detail what you mean exactly?

You used the Unraid default SSH port. I don't use this; I use mgutt's Rsync Server for the SSH functionality.

 

Does your plugin only work with the default SSH port that Unraid is using?

4 minutes ago, Revan335 said:

I used mguts Rsync Server for the SSH Functionality.

This won't work since you need to be on the Unraid host for that.

What is the benefit of using a dedicated Docker container? An added layer of security?

 

4 minutes ago, Revan335 said:

Working your work only with the Default SSH Port that Unraid using?

You could also use a different port, but as said above, this will only work with the default Unraid SSH service (which can use a different port) that is running directly on Unraid, because it obviously needs exclusive access to Unraid, to Docker, and so on.

 

If you are more comfortable using German, you can make a post in the German subforums and quote me, and I will respond when I get the notification.

16 minutes ago, ich777 said:

What is the benefit of using a dedicated Docker container? A added layer of security?

Yes, @mgutt has this description of it:

On 6/10/2022 at 12:36 PM, mgutt said:

The benefits of this container are:

  • you can define a non-default SSH port for rsync only (default of this container is 5533)
  • you can define specific paths instead of allowing access to the complete server (default is /mnt/user)
  • file access is read-only (protection against ransomware)
3 minutes ago, Revan335 said:

Yes, @mgutt have this Description about this:

The first one is a bit odd, since you can do that on Unraid too:
(screenshot)

(in Management Access)

 

7 minutes ago, Revan335 said:
  • you can define specific paths instead of allowing access to the complete server (default is /mnt/user)
  • file access is read-only (protection against ransomware)

Sure, that's partially true, although I assume you use SSH only in your local network and don't expose it to the Internet.

The bit about ransomware is also only partially true, because you would need to save the login data somewhere for that to even be possible.

 

 

Again, this plugin won't work with a third-party SSH container, since this plugin needs exclusive access to the second Unraid instance and vice versa; otherwise it won't be possible to replicate the containers, get the necessary configuration files, and so on.

 

Please also keep in mind that this plugin doesn't use a password to authenticate; it uses a keypair, which should be much safer than a password. However, you can still use your other container for SSH access to your Unraid machine.

