ich777 Posted June 4

unRAID Replication Plugin (now in very early BETA | only visible for unRAID 6.13.0-beta.2+)

DISCLAIMER: This plugin is under development and bugs may occur (if you encounter a bug please let me know). Always make sure to have a backup of your data on another device, I can't guarantee that there will be no data loss! !!! PLEASE USE THIS PLUGIN WITH CAUTION !!!

This plugin allows you to replicate your main applications (for now limited to Docker/LXC containers and chosen directories) to a second, unRAID based, Backup machine. With the inclusion of keepalived you can also create a virtual IP for your Main and Backup machine, so that the Backup machine automatically runs the replicated containers via the virtual IP when the Main server goes down. If you don't want to use keepalived you can of course just replicate your applications to the Backup server. Please let me know if you have feature requests or other ideas about what could be added or done differently. Releasing only for 6.13.0-beta.2 was done on purpose so that users can test and critical bugs can be fixed if found.

Prerequisites:
- Backup of your data and applications
- SSH enabled (Settings -> Management Access) on both the Master and the Backup server
- Pools and file paths set up the same on both your Master and Backup server (the pools/array must use the same filesystem too!)
- Network configured the same on both machines (bridge or no bridge)
- If you have custom Docker networks on your Master server you have to create them on the Backup server before starting the replication, otherwise the replication will fail for the containers with custom networks (the name of the custom network must be the same, a short example follows after the tutorial steps below).

Tutorial:
1. Install the plugin on both servers through the CA App by searching "Unraid Replication" and wait for the Done button.
2. On the Master server (colored blue from now on) and on the Backup server (colored orange from now on) go to Settings -> Unraid Replication.
3. On both servers choose the appropriate Instance type for the machine and click Update.
4. Generate a key pair on the Master server by entering the IP address of the Backup server and clicking Generate (in this case the Backup server has the IP address 10.0.0.139).
5. Triple-click the generated Public Key on the Master server, copy it to the clipboard, paste it into the field "Public Key from Host (Master)" on the Backup server and click Update.
6. Test the connection on the Master server by clicking "Test". You should get a popup that says "Info: SSH connection working properly!" (if you get an error make sure that you have SSH enabled on both the Master and Backup server).
7. On the Master server select whether you want to replicate Docker and/or LXC and click Update (if you want you can also change the Logging type from Syslog to File; the log file will then be written to /var/log/unraid-replication).

Note about the Temporary Path: If you leave the Temporary Path empty the default path '/tmp/unraid-replication' will be used. In most cases the default path is sufficient. If you are low on RAM you should set a different path, since the Docker replication (if enabled) copies the container layers from the Master server to this path (one by one, so at most one container backup is ever in the temporary path at a time). Most containers should not consume more than 1GB of RAM, but please always check for exceptions; especially for LXC this can be a different story and a container image can consume a few GB of RAM if you leave the default path. However I would recommend for now to leave it empty until more testing is done; if you are brave enough feel free to test it and let me know if you encounter any bugs/issues.
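Regarding the custom Docker networks prerequisite above: a network with the same name only has to exist on the Backup server before the first replication run. A minimal example, assuming a custom bridge network; the name "proxynet" is just a placeholder, use your actual custom network name(s) and driver:

```bash
# Run on the Backup server before the first replication.
# "proxynet" is a placeholder; the name must match the custom network on the Master server.
docker network create proxynet
```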
Go to the Docker tab within the plugin on the Master server, select the containers that you want to replicate along with their paths, enable Autostart on the client if you wish, and click Update (Autostart only applies if you use keepalived; containers on the Backup server will not start automatically by default to avoid conflicts in your network, but Autostart will work if you use keepalived):

- Replication: Enable if you want to replicate the container.
- Stop Container: Will stop the container while replicating (recommended! If you don't choose to stop the container a snapshot will be taken, which means the container will be paused for a few seconds so the snapshot can be taken).
- Host Autostart: Shows you whether Autostart on the Master server is enabled.
- Client Autostart: Lets the Backup server know whether the container should be auto-started when the Master server is not available. You can also enable Autostart for a container that does not Autostart on the Master server (Client Autostart only applies if you are using keepalived).
- Container paths: Choose the container paths that you want to sync (rsync is used to sync the data). Please note that the base path, in my case /mnt/cache (for /mnt/user/appdata) or /mnt/user (for /mnt/user/Serien), needs to exist on the Backup server before starting the sync, otherwise the path will be discarded while syncing (see the illustrative rsync command below).

Note: If you choose to stop the container, it will be started again after the replication of the Docker image layers and the selected container paths. I strongly recommend stopping the container, since you can destroy a database when it is replicated in the middle of writing data.
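To make the container-path sync a bit more concrete: the plugin drives rsync over the SSH connection for each selected path. The hand-run command below is only an illustration of what that amounts to; the plugin builds its own invocation, and the exact options, key path and container name here are assumptions. It also shows why the destination base path (here /mnt/cache) has to exist on the Backup server first:

```bash
# Illustration only; the plugin runs rsync itself and its real options may differ.
# 10.0.0.139 is the Backup server from the example above, the key path is a placeholder.
rsync -aH --delete \
  -e "ssh -i /path/to/replication-key" \
  /mnt/cache/appdata/binhex-jellyfin/ \
  root@10.0.0.139:/mnt/cache/appdata/binhex-jellyfin/
```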
Select your LXC containers (if enabled) and click Update. !!! PLEASE NOTE THAT REPLICATION CURRENTLY ONLY WORKS IF YOU ARE USING BTRFS AS THE BACKING STORAGE TYPE !!! (ZFS will be implemented in one of the next releases.)

- Replication: Enable if you want to replicate the container.
- Stop Container: Will stop the container while replicating (recommended! If you don't choose to stop the container a snapshot will be taken in the running state, which could lead to data corruption).
- Host Autostart: Shows you whether Autostart on the Master server is enabled.
- Client Autostart: Lets the Backup server know whether the container should be auto-started when the Master server is not available. You can also enable Autostart for a container that does not Autostart on the Master server (Client Autostart only applies if you are using keepalived).

On the Master server go back to the Unraid Replication tab and click Start at "Start Replication". This initiates the replication and, if you didn't change the Logging Type above, you can watch the progress in the syslog (icon in the top right corner of the unRAID WebGUI). After everything has finished on the Master server, a task is started on the Backup server, and once that task has finished you get a message that the replication is complete.

After the replication task has finished you can go to the Docker page of the Backup server and you should see your synced containers. Please don't worry about the version saying "not available"; this is caused by the Docker layers being replicated from the Master server instead of being pulled directly. Autostart for all replicated containers is also disabled; this is done on purpose and will be handled by keepalived if used, otherwise the containers would start and possibly cause conflicts on your network. (The container Gotiry-On-Start was not replicated, that's why you see Autostart enabled for it.) The same applies to your LXC page on the Backup server.

Running on a schedule:
1. On the Master server, go to the CA App, search for "User Scripts" from @Squid and install it.
2. Go to Settings -> User Scripts, click "Add new Script" and give it a good name.
3. Hover over the Gear icon and click "Edit Script".
4. Add the line "unraid-replication" (without double quotes) right below the line "#!/bin/bash" and click "Save Changes" (the finished two-line script is shown at the end of this post).
5. Select your preferred schedule or create a custom one. I wouldn't recommend running it too often because replicating obviously puts some strain on your disks/network; personally I run it once a day, but you can of course run it every hour or however you prefer.
6. Click Apply. The script will now run on your set schedule.

This plugin was mainly designed to bridge short downtimes of the Master server so that your relevant services stay online on your Backup server via keepalived. Personally I wouldn't recommend replicating databases without stopping the containers because that can have fatal consequences. However, the plugin is also able to handle a controlled Shutdown or Reboot (the latter may be removed) of the Master server and then automatically replicate the applications/data back from the Backup server to the Master server when it is online again. Please be even more careful when syncing data back from the Backup server to the Master server! Again, always make sure to have a backup of your applications/data in case something goes wrong, I can't guarantee that no data loss will happen. If you have any further questions please feel free to ask, and don't forget to report bugs/issues if you experience any.
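For completeness, the finished User Script from the scheduling section above is just the shebang plus the plugin's command:

```bash
#!/bin/bash
unraid-replication
```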
ich777 Posted June 4 (Author)

keepalived (configuration)

Prerequisites: Fully configured unRAID-Replication plugin with replication tested and working.

Tutorial:
1. On the Master server go to the plugin (Settings -> Unraid Replication), enable keepalived and click Update.
2. Go to the keepalived tab within the plugin.
3. Click on "Show Host (Master) keepalived.conf example" to open the spoiler at the bottom of the page.
4. Copy the whole example configuration and paste it into the text box above. Please make sure that you change the example configuration so that it fits your needs (there are descriptions on each line, so I would recommend that you read through all of them). Especially check the interface (configured by default to use br0), you have to define a virtual IP address (a free IP address on your network that is not used by anything else, in my case 10.0.0.18) and set an auth_pass password (or remove the authentication section entirely; in my case testpassword).
5. After you have configured everything click Update. Shortly after you click Update you should get a notification from keepalived through the Unraid notification system that it registered as the Master server; this should also be reflected in your syslog.
6. Now move on to the Backup server, go to the Unraid Replication plugin, enable keepalived and click Update as described above for the Master server.
7. Enable Autostart for the services that you want to autostart and click Update.
8. Go to the keepalived tab on the Backup server, click "Show Client (Backup) keepalived.conf example" and copy and paste the whole example into the text box above. Again, make sure that you change the example configuration so that it fits your needs (there are descriptions on each line, so I would recommend that you read through all of them). Especially check the interface (configured by default to use br0), you have to define a virtual IP address (this MUST be the same IP address as you specified on the Master server, in my case 10.0.0.18) and set an auth_pass password (this MUST be the same password as you specified on the Master server, in my case testpassword).
9. After you have configured everything click Update. Shortly after you click Update you should get a notification from keepalived through the Unraid notification system that it registered as the Backup server. If you don't get a message, check the log on both the Master and Backup server.

With that you have now configured keepalived and you should be able to reach your services (if they are running on the default or a custom bridge from Docker without a dedicated IP on the physical interface) through your virtual IP address, in my case 10.0.0.18 (for example binhex-jellyfin).
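For orientation, the part of the example configuration that you have to adapt looks roughly like the sketch below. This is only a minimal illustration of standard keepalived syntax using the values from this tutorial (br0, 10.0.0.18, testpassword); the example shipped with the plugin may contain additional, plugin-specific settings, so always start from the plugin's own example and only treat this as a reference for the fields mentioned above:

```conf
vrrp_instance VI_1 {
    state MASTER             # use BACKUP in the Client (Backup) configuration
    interface br0            # the interface keepalived should announce on
    virtual_router_id 51     # must be identical on Master and Backup
    priority 200             # must be higher on the Master than on the Backup (e.g. 100 there)
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass testpassword   # must match on both servers (or remove this whole block on both)
    }
    virtual_ipaddress {
        10.0.0.18            # the free virtual IP, identical on both servers
    }
}
```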
When the Master server goes down for whatever reason, you will get another notification from the Backup server shortly afterwards saying that the Master server is not reachable. If you've enabled Autostart on the Backup server (step 7 above), keepalived will automatically start the containers that have Autostart enabled there, and you will then still be able to connect to the container through the virtual IP address, in my case 10.0.0.18.

Note: It might be necessary to log in again on web applications because the connection drops for a few seconds and the container on the Backup server initially has no idea that something is or was connected.

Note: In the case of Jellyfin I had to set a static hostname for the container so that the WebGUI doesn't complain about a different hostname. Simply do this in the Docker template with Advanced View enabled and add --hostname=jellyfin to the Extra Parameters.

After the Master server is back online you will get a notification on the Backup server that it is available again; the containers on the Backup server will be stopped and they will start as usual on the Master server, and on the Master server you'll get a corresponding notification. Your applications are then still available at 10.0.0.18 (if Autostart on the Master server is enabled).

With this you will be able to bridge short, or even longer, periods of time when the Master server is offline and still be able to connect to your services through the virtual IP address, with only a few seconds of downtime.
ich777 Posted June 4 (Author)

Tutorial for replicating back to the Master server

TBD since this can cause issues. (Hint if you want to try it: keyboard combination CTRL+ALT+e on the main Unraid Replication plugin page on both the Master and Backup server.)
alturismo Posted June 23

Very nice plugin, the Backup server is now running as a full fallback server "just in case".

One cosmetic point as a note: when a server is in sleep mode and woken up by keepalived (3rd instance) it will produce an error message, FAULT STATE (assuming networking is ... when returning from sleep). But this is only cosmetic, all functions are working as expected.

My use case as a note:
1/ MASTER: Unraid server, 24/7
2/ BACKUP1: Unraid server (weekly backups from MASTER, now incl. replication)
3/ BACKUP2: server (RPi with a keepalived setup)

Once MASTER is offline, BACKUP2 watches it and waits 3 minutes, as MASTER may just be rebooting. If it comes back online nothing happens; if it is still offline, BACKUP2 wakes up BACKUP1, and once it is awake your plugin starts the Dockers and all systems are up and running. Once MASTER is back online, your plugin(s) take care of stopping the Dockers on BACKUP1, and when all is done, BACKUP1 goes back to sleep. Very nice.

I had to move Home Assistant to the end of the startup order on MASTER, as I'm using custom networks for all Dockers; it looks like HA came up too fast while BACKUP1 was not done stopping its containers, and HA didn't like that. All the other Dockers on custom networks (32 of them) had no issues with the switch, probably something with Fritz. Thanks again, great plugin, saved me some time writing something together myself.
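For anyone who wants to copy this setup: the third keepalived instance can do the "wait, then wake" step with a notify script hooked in via notify_master in its vrrp_instance. The sketch below is not part of the plugin and only one way to do it; the IP, MAC address and the 3-minute grace period are assumptions based on the description above, and it assumes etherwake (or wakeonlan) is installed on the Pi:

```bash
#!/bin/bash
# Hypothetical notify script for the third keepalived instance (BACKUP2 / RPi),
# run when it takes over because the main server stopped answering.
MASTER_IP="10.0.0.10"             # assumption: IP of the MASTER Unraid server
BACKUP1_MAC="aa:bb:cc:dd:ee:ff"   # assumption: MAC address of BACKUP1 for Wake-on-LAN

sleep 180                          # grace period: MASTER may just be rebooting
if ! ping -c 3 -W 2 "$MASTER_IP" > /dev/null 2>&1; then
    etherwake "$BACKUP1_MAC"       # or: wakeonlan "$BACKUP1_MAC"
fi
```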
antihero412 Posted June 28

Fantastic plugin. Just got it up and running on the new 7.0 beta. Can't wait for more development on this plugin. Replication was one of the only things I missed from Proxmox, and now we are getting so close to it. Thanks again for all your hard work.
ffhelllskjdje Posted June 30

Working great! However, I installed LXC after installing this plugin and can't get the replication plugin to recognize that I now have LXCs for replication. I tried uninstalling this plugin and reinstalling, but it still only detects Dockers.
ich777 Posted June 30 (Author)

3 minutes ago, ffhelllskjdje said:
Working great! However, I installed LXC after installing this plugin and can't get the replication plugin to recognize that I now have LXCs for replication. I tried uninstalling this plugin and reinstalling, but it still only detects Dockers.

Can you please post your Diagnostics? What backing storage type do you have configured in LXC (please note that currently only BTRFS is supported)?
ffhelllskjdje Posted June 30 (edited)

2 hours ago, ich777 said:
Can you please post your Diagnostics? What backing storage type do you have configured in LXC (please note that currently only BTRFS is supported)?

Ah, yeah, the host (master) FS is ZFS. On the backup it's btrfs. Docker replication works great.

mars-diagnostics-20240630-1354.zip

Edited June 30 by ffhelllskjdje
ich777 Posted June 30 (Author)

52 minutes ago, ffhelllskjdje said:
Ah, yeah, the host (master) FS is ZFS. On the backup it's btrfs. Docker replication works great.

Just as a side note, your default backing storage type is directory and not ZFS; you can change that in the LXC settings. Keep in mind I don't plan to make it possible to copy from a ZFS to a BTRFS backing storage type, since that wouldn't be replication anymore. However, replication for the ZFS and Directory backing storage types is coming, but it will still take a bit.
ich777 Posted July 1 (Author)

1 hour ago, Revan335 said:
Working with this?

Can you explain in a bit more detail what you mean exactly?

1 hour ago, Revan335 said:
For the SSH Service.

You have to enable SSH on Unraid itself; this is an Unraid to Unraid replication service, please see:

On 6/4/2024 at 3:24 PM, ich777 said:
This plugin allows you to replicate your main applications (for now limited to Docker/LXC containers and chosen directories) to a second, unRAID based, Backup machine.
ALERT Posted July 15

Hello, could you please tell me how I can get the version shown for a replicated Docker container? I have split my Unraid setup into two Unraid servers, moving the essentials to a power-efficient node, and luckily I found out about this plugin, which saved me at least 4 hours of replicating the containers from the main server to the mini server by hand. But now that the replication has succeeded, I wish to abandon my essentials on my main server and fully use them on my mini server, but the container versions are "not available". Thank you.
ich777 Posted July 15 (Author)

6 minutes ago, ALERT said:
But now that the replication has succeeded, I wish to abandon my essentials on my main server and fully use them on my mini server, but the container versions are "not available".

This is caused by the containers not being pulled from Docker Hub directly. I think if you click "Force Update" with Advanced View enabled on the Docker page it should hopefully update the status of the containers. Otherwise you have to go into each container template, make a dummy change (to make the Apply button clickable), change the dummy change back and click Apply; that should also do the job. What you can also try is to wait a day or two and see if the status changes <- if you enabled the automatic update check. (Please keep in mind that you are already on the latest release of the containers since you've replicated them.)
ALERT Posted July 15 (edited)

4 minutes ago, ich777 said:
This is caused by the containers not being pulled from Docker Hub directly. I think if you click "Force Update" with Advanced View enabled on the Docker page it should hopefully update the status of the containers. Otherwise you have to go into each container template, make a dummy change (to make the Apply button clickable), change the dummy change back and click Apply; that should also do the job. What you can also try is to wait a day or two and see if the status changes <- if you enabled the automatic update check. (Please keep in mind that you are already on the latest release of the containers since you've replicated them.)

Silly me, I didn't think of clicking Force Update myself. Thank you! It worked! The dummy change didn't work, btw.

Edited July 15 by ALERT
Revan335 Posted July 15

On 7/1/2024 at 1:41 PM, ich777 said:
Can you explain in a bit more detail what you mean exactly?

You used the Unraid default SSH port. I don't use that. I use mgutt's Rsync Server for the SSH functionality. Does your plugin only work with the default SSH port that Unraid uses?
ich777 Posted July 15 (Author)

4 minutes ago, Revan335 said:
I use mgutt's Rsync Server for the SSH functionality.

This won't work, since the plugin needs to be on the Unraid host for that. What is the benefit of using a dedicated Docker container? An added layer of security?

4 minutes ago, Revan335 said:
Does your plugin only work with the default SSH port that Unraid uses?

You could also use a different port, but as said above this will only work with the default Unraid SSH service (which can use a different port) that is running directly on Unraid, because it obviously needs exclusive access to Unraid, to Docker and so on. If you are more comfortable using German, you can make a post in the German subforums and quote me, and I will respond when I get the notification.
Revan335 Posted July 15

16 minutes ago, ich777 said:
What is the benefit of using a dedicated Docker container? An added layer of security?

Yes, @mgutt has this description of it:

On 6/10/2022 at 12:36 PM, mgutt said:
The benefits of this container are:
- you can define a non-default SSH port for rsync only (default of this container is 5533)
- you can define specific paths instead of allowing access to the complete server (default is /mnt/user)
- file access is read-only (protection against ransomware)
ich777 Posted July 15 (Author)

3 minutes ago, Revan335 said:
Yes, @mgutt has this description of it:

The first one is a bit odd, since you can do that on Unraid too (in Management Access).

7 minutes ago, Revan335 said:
you can define specific paths instead of allowing access to the complete server (default is /mnt/user)
file access is read-only (protection against ransomware)

Sure, that's partially true, since I assume you use SSH only in your local network and don't expose it to the Internet. The bit about ransomware is also only partially true, because you need to save the login data somewhere for that to even be possible. Again, this plugin won't work with a third-party SSH container, since it needs exclusive access to the second Unraid instance and vice versa, otherwise it won't be possible to replicate the containers, get the necessary configuration files and so on. Please also keep in mind that this plugin doesn't use a password to authenticate; it uses a key pair, which should be much safer than a password. However, you can still use your other container for SSH access to your Unraid machine.
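If you want to double-check what the plugin relies on, you can reproduce the key-based login by hand from the Master server. The key path below is a placeholder (the plugin manages its own generated key pair), and the port only needs changing if you altered it in Management Access:

```bash
# Manual sanity check of key-based SSH from the Master to the Backup server.
# /path/to/replication-key is a placeholder for the key pair generated by the plugin.
ssh -i /path/to/replication-key -p 22 root@10.0.0.139 'echo "SSH connection working properly"'
```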
eagle470 Posted October 18

Hello, I am trying to decide what the best thing to do here is. I have two Unraid servers and I would like to set up failover for Immich and Nextcloud using this plugin. For Plex and Jellyfin I just run two instances, since my DBs are so large. I'm currently using resilio-sync to copy backups from server to server, with failover being a manual process. I have upgraded to 7-beta3 (looks great and works fine on my older hardware) and have been reading up on the plugin and looking at options. Rather than setting up database replication, I'm wondering if reconfiguring Nextcloud and Immich in a VM and replicating that isn't a better option? Or do I still run the risk of DB corruption? What do you recommend?
ich777 Posted October 18 (Author)

37 minutes ago, eagle470 said:
Rather than setting up database replication, I'm wondering if reconfiguring Nextcloud and Immich in a VM and replicating that isn't a better option?

This plugin currently isn't able to replicate VMs because I get too little feedback on whether that would even be useful.

38 minutes ago, eagle470 said:
Or do I still run the risk of DB corruption?

I don't think that you will run into corruption, since the plugin takes a snapshot of the container, replicates this snapshot of the Docker layers to the second system and then uses rsync to copy over the selected data directories. Since the first system keeps running anyway, you should not run into any corruption, because this is a one-way sync; at least that's how I intended it to be, to just bridge short gaps of downtime of your main server (it is also possible to sync the data back from the backup server to your main server, but I've hidden this feature for now because it could maybe cause corruption).

You could configure database replication in the containers themselves, or rather in your databases, use this plugin to replicate the containers (without the databases), and the plugin will then start the containers (Immich, ...). TBH, I would simply use the plugin to replicate everything and rely on the backup server just for short gaps of downtime. However, if you plan to use this as a real failover then I would recommend that you set up database replication in the databases themselves and sync the containers with the plugin. The main issue is that if you upload something to the server while the backup is active, you would have to manually sync that data (pictures, files, whatever) back to the main server, because I find it really risky to do that automatically. From my testing it works correctly, but I find it risky no matter what.

So my recommendation would be to use the plugin as-is for replication and use the automatic start function on the backup server to bridge gaps of downtime; then, even if someone uploads something to the backup server, it would still be on there and you could manually sync it back. In my opinion that is the best way of avoiding database corruption. Of course it's not a perfect solution for failover, but I designed the plugin to be a replication service with the benefit of starting the containers automatically if the Main server goes down. @alturismo uses it like that, to bridge short gaps of downtime for his most important containers, and I think it is working...
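As a rough illustration of the "replication in the databases themselves" idea mentioned above (this is not part of the plugin): with MariaDB you would create a replication user on the primary and point a replica on the Backup server at it. The container name, credentials, IP and log file/position below are all placeholders, and the primary needs log_bin and a unique server-id configured beforehand:

```bash
# On the Master server: create a replication user on the primary MariaDB
# (container name "mariadb" and the passwords are placeholders).
docker exec mariadb mysql -uroot -p"$ROOT_PW" -e "
  CREATE USER IF NOT EXISTS 'repl'@'%' IDENTIFIED BY 'replpass';
  GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%';
  SHOW MASTER STATUS;"

# On the Backup server: point the replica at the primary, using the file/position
# reported by SHOW MASTER STATUS above (the values below are placeholders).
docker exec mariadb mysql -uroot -p"$ROOT_PW" -e "
  CHANGE MASTER TO MASTER_HOST='10.0.0.10', MASTER_USER='repl',
    MASTER_PASSWORD='replpass', MASTER_LOG_FILE='mysql-bin.000001',
    MASTER_LOG_POS=4;
  START SLAVE;"
```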
eagle470 Posted October 18

I thought about doing database replication inside the application, but for just two users (my wife and I) that feels like overkill. I'm using resilio-sync to copy all pictures and everything both ways between the servers (R/W on both ends), so they are effectively clones of each other. I'm debating encrypting the Nextcloud data set entirely; it's much simpler to do that and put anything I want kept "safe" there than to enable volume encryption. This is more in case a server gets borked or I have to take it down for long periods due to hardware maintenance; both systems sit in the same rack and use the same power whip. So what I'm after here is more server redundancy. I did this with two AdGuard servers and I like the results, since I'm not scrambling to get things back online if there is an outage. I would like to see the ability to live-migrate between systems; it seems like that would be a great feature to have. I love the flexibility of this product and the fact that it's easy to present the same data sets to multiple systems natively, but I keep missing the more enterprise features that I got used to as an admin. I have a 10Gb network and NVMe drives for all this to run over; I just want a simple way to do failover and failback in the event of an outage. One thing to look at adding to the Replication plugin would be copying custom-defined networks to the slave server. Though a token-based solution that you can move between systems if desired would make more sense than a static definition: the admin could move the flag manually to initiate failback, so basically one server is primary and one is secondary, but the token has to be re-assigned to the master for failback to occur.
alturismo Posted October 19

7 hours ago, ich777 said:
@alturismo uses it like that, to bridge short gaps of downtime for his most important containers, and I think it is working...

Actually I do a complete sync, and yes, it is working; once it even worked so well that I didn't notice my Main was down. But my use case is a little different, as the Backup server is in "sleep" most of the time; it is my backup, Desktop VM and Gaming VM server. So there is a 3rd keepalived instance running here (KVM RPi), which wakes the "Backup" in case the "Main" goes down (usually maintenance). After wakeup, keepalived/replication comes into the game on the "Backup" and does its job ... until the "Main" is back online. But the state is 0-7 days behind on the backup, as I'm replicating once a week, so it depends when ... And I'm not using VMs for services, NC, etc.; everything is in Dockers here, as I don't have any use for running those services in a VM. VM usage here is usually a full Desktop VM (passed-through dGPU and bare-metal monitor usage), a Gaming VM (passed-through dGPU and bare-metal TV usage), a home-office remote VM (vGPU usage with Parsec), ... I also thought about "something" for Maria databases ... but in the end I can live with data that is at most 7 days old there, as it's only for bridging the downtime gap here.
eagle470 Posted October 19

50 minutes ago, alturismo said:
Actually I do a complete sync, and yes, it is working; once it even worked so well that I didn't notice my Main was down. But my use case is a little different, as the Backup server is in "sleep" most of the time; it is my backup, Desktop VM and Gaming VM server. So there is a 3rd keepalived instance running here (KVM RPi), which wakes the "Backup" in case the "Main" goes down (usually maintenance). After wakeup, keepalived/replication comes into the game on the "Backup" and does its job ... until the "Main" is back online. But the state is 0-7 days behind on the backup, as I'm replicating once a week, so it depends when ... And I'm not using VMs for services, NC, etc.; everything is in Dockers here, as I don't have any use for running those services in a VM. VM usage here is usually a full Desktop VM (passed-through dGPU and bare-metal monitor usage), a Gaming VM (passed-through dGPU and bare-metal TV usage), a home-office remote VM (vGPU usage with Parsec), ... I also thought about "something" for Maria databases ... but in the end I can live with data that is at most 7 days old there, as it's only for bridging the downtime gap here.

I just got my hands on the HPE Gen10 servers, so I don't think I will run into these issues anytime soon. That being said, I want to use these services to start replacing iCloud functions, and if one system is down, failover and failback would be much more useful than a manual restore and backup, but not critically so.
ich777 Posted October 19 (Author)

1 hour ago, eagle470 said:
I just got my hands on the HPE Gen10 servers, so I don't think I will run into these issues anytime soon. That being said, I want to use these services to start replacing iCloud functions, and if one system is down, failover and failback would be much more useful than a manual restore and backup, but not critically so.

I mean, in general you can replicate back to the Master server, just look at this post: there is a key combination mentioned that shows you more options for replicating back to the Master server. However, as said above, this is more of a replication plugin, and you can replicate from the Master to the Backup and vice versa. It is working pretty well and you should only have minimal downtime, even when replicating back and forth, since it is done container by container. Have you tried it yet with a "test" container? Just set up a small container like my Firefox one and try to replicate it. You could also look into LXC if you care about system resources, like something that I do with PiHole here; this is basically a failover with keepalived (granted that Gravity Sync is now deprecated, the data between the PiHole instances is not synced). I also have a similar container for AdGuard. I have a second instance of PiHole running on my KVM, and whenever my Main server is offline the PiHole on the KVM jumps in, so that at least the Internet at home keeps working as usual.
eagle470 Posted October 19

12 hours ago, ich777 said:
You could also look into LXC if you care about system resources, like something that I do with PiHole here; this is basically a failover with keepalived (granted that Gravity Sync is now deprecated, the data between the PiHole instances is not synced). I also have a similar container for AdGuard. I have a second instance of PiHole running on my KVM, and whenever my Main server is offline the PiHole on the KVM jumps in, so that at least the Internet at home keeps working as usual.

I run two Unraid servers (though I have three licenses) and run two AdGuard servers with AdGuard HomeSync between them; it seems to work fine. The more we talk about this, the more unnecessary this legwork feels... the data is replicated and I can recover manually.