Synchronize servers using rsync over SSH, next door or across the world



37 minutes ago, Hoopster said:

 

Thanks for this.  These are very clear instructions.  I followed them on both servers and I am able to rsync in either direction between servers without a password prompt.  Of course, when repeating the instructions on server two "server1" info is replaced with "server2" info and vice versa.  I think most people will figure that out if I did :)

 

These instructions work perfectly!

 

I have not yet rebooted either server as I have an additional question related to your follow-up post below.

 

 

"Server1" in my case is named "medianas" and "Server2" is called "backupnas."  When I generated the ssh keys on each server I named them "medianas_key" and "backupnas_key." 

 

Even though these keys are being copied to the file id_rsa by the go file (as per your instructions), and "id_rsa" is the key file referenced in my rsync commands, are you saying that since the keys generated by ssh-keygen are not named "id_rsa" I need to take the additional steps you mentioned?  Or is everything OK, since "id_rsa" is the file being created in /root/.ssh by the go file commands?

 

I am assuming that since the generated keys (even though they are not called id_rsa) are being copied to "id_rsa" in /root/.ssh that this will work without creating /root/.ssh/config and specifying the identity files.  Is this correct?

 

Yes. ssh assumes the id_rsa file name by default, hence the copy instruction above.

but in case you have more than two servers, you now have an idea how to go about it.
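For instance, a third server's go file might look like this (a sketch only; the "server3" names are hypothetical, following the naming convention used above):

```shell
# Hypothetical go-file lines for a third server, "server3":
mkdir -p /root/.ssh
cp /boot/config/ssh/server3_key /root/.ssh/id_rsa
# Authorize both of the other servers by concatenating their public keys:
cat /boot/config/ssh/server1_key.pub /boot/config/ssh/server2_key.pub > /root/.ssh/authorized_keys
chmod g-rwx,o-rwx -R /root/.ssh
```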

 

3 minutes ago, Hoopster said:

 

@ken-ji The above was added to the original post based on a comment you made regarding permissions.  I see that after following your instructions, there are three files in .ssh.  authorized_keys has 600 permissions, yet you indicated it should have 644 permissions; id_rsa has 700 permissions, yet you indicated the key file should have 600 permissions; known_hosts is also in .ssh with 600 permissions.

 

This is what I see in .ssh:

-rw------- authorized_keys

-rwx------ id_rsa

-rw------- known_hosts

 

The inclusion of


chmod g-rwx,o-rwx -R /root/.ssh

in the go file would seem to indicate that all files in .ssh should have the rwx permissions for group and other users removed, and that appears to be exactly what is set: only owner permissions remain.

 

If specific file permissions are important, I just want to make sure the proper permissions are being set.

 

I apologize for all the replies/questions; however, in case others want to go down the rsync backup path, I want to make sure that the correct information is in this thread.  I thank you and @tr0910 for all your great work to get this properly documented.

 

The chmod command is the simplest way to be compliant with ssh requirements; older ssh versions seem to allow/need authorized_keys to be 644, but a lot of the newer ones are more strict now.

As for id_rsa, it doesn't really matter if the execute bit is set, but if you were going to run the commands individually, I'd remove the execute bit as well.
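If you do set them individually, the conventional values look like this (demonstrated on a scratch directory rather than the real /root/.ssh):

```shell
# Demo in /tmp so nothing touches the real /root/.ssh:
mkdir -p /tmp/demo_ssh
touch /tmp/demo_ssh/id_rsa /tmp/demo_ssh/authorized_keys /tmp/demo_ssh/known_hosts
chmod 700 /tmp/demo_ssh                  # directory: owner only
chmod 600 /tmp/demo_ssh/id_rsa           # private key: no group/other access, no execute bit
chmod 600 /tmp/demo_ssh/authorized_keys  # strict value accepted by old and new ssh alike
chmod 600 /tmp/demo_ssh/known_hosts
ls -l /tmp/demo_ssh
```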

 

Link to comment
On 1/16/2018 at 4:27 PM, ken-ji said:

Sorry about that; let me be a bit more precise.

On Server1:


# ssh-keygen -t rsa -b 2048 -f /boot/config/ssh/server1_key
Generating public/private rsa key pair. 
Enter passphrase (empty for no passphrase): [press enter here] 
Enter same passphrase again: [press enter here]
# scp /boot/config/ssh/server1_key.pub root@server2:/boot/config/ssh

Insert this into your /boot/config/go file


mkdir -p /root/.ssh
cp /boot/config/ssh/server1_key /root/.ssh/id_rsa
cat /boot/config/ssh/server2_key.pub > /root/.ssh/authorized_keys
chmod g-rwx,o-rwx -R /root/.ssh

Repeat for Server2

 

Then on each server, run the same lines you inserted into the go file


# mkdir -p /root/.ssh
# cp /boot/config/ssh/server1_key /root/.ssh/id_rsa
# cat /boot/config/ssh/server2_key.pub > /root/.ssh/authorized_keys
# chmod g-rwx,o-rwx -R /root/.ssh

At this point you can do password-less ssh between servers. If you have more servers, just add their keys to the cat command that dumps them all into the authorized_keys file.

I forgot to mention: you shouldn't use the wildcard *.pub here, as it will include the unwanted ssh_host keys too.

 

I followed these instructions to the letter on my main and backup servers.  After running the indicated commands, modifying the go file and executing these commands on each server, moving files between the two servers is now possible without a password prompt. 

 

After rebooting my backup server, all the appropriate SSH files persist and the backup from main to backup server runs without a password prompt.  Success!

 

Thanks to @ken-ji and @tr0910 for this information.  Combining Ken-Ji's instructions above and the intermediate tests in the original post and tr0910's sample script, automating rsync backup via ssh works great.  Now it's on to refining my backup script and automating it via the User Scripts plugin.

 

The unRAID community, as always, comes through again.

Edited by Hoopster
Link to comment
15 hours ago, Hoopster said:

If specific file permissions are important, I just want to make sure the proper permissions are being set

The .ssh directory must only be accessible by its owner.

 

16 hours ago, Hoopster said:

I am assuming that since the generated keys (even though they are not called id_rsa) are being copied to "id_rsa" in /root/.ssh that this will work without creating /root/.ssh/config and specifying the identity files.  Is this correct?

If id_rsa is registered with multiple servers (the corresponding *.pub file content added to the authorized_keys file of the servers) then you don't need to do anything extra to connect to these servers.

 

But if you want to use different keys when connecting to different servers, then you either need to supply the key name on the command line whenever you connect, or create a config file that describes which key to use for which server.

 

So a sample /root/.ssh/config file might look like:

Host nas
	HostName 1.2.3.4
	IdentityFile /root/.ssh/key_for_server_1
	Port 2222

Host monster
	HostName server2.long.domain
	IdentityFile /root/.ssh/key_for_server_2
	User sysadmin
	LocalForward 1080 80

 

With the above, you can do "ssh nas" and the connection goes to IP 1.2.3.4 using the key key_for_server_1.

And you can do "ssh monster" to connect as user sysadmin on the machine server2.long.domain using the second key.
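The aliases carry over to anything that rides on ssh, including the rsync commands in this thread, so (as a sketch, reusing the hypothetical "nas" host above) you could write:

```shell
# rsync resolves "nas" through /root/.ssh/config, picking up the
# HostName, Port, and IdentityFile defined there:
rsync -avP /mnt/user/xyz/ nas:/mnt/user/xyz/
```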

Link to comment

One more refinement was necessary.

 

Everything worked perfectly until I rebooted my main server.  It is the server from which I am copying new files to the backup server.  Generally it runs 24x7, but I upgraded it to 6.4.0 stable, which required a few reboots to upgrade and to resolve a couple of minor issues.  After rebooting, the rsync operations failed.  Although there was never a prompt for a password, my backup server was an unknown host. 

 

After confirming the host, a new known_hosts file was created in /root/.ssh, which I copied to /boot/config/ssh.  I then modified the go file, adding one line to those documented in ken-ji's post, which copies known_hosts from /boot/config/ssh to /root/.ssh on reboot.

 

This solved the problem and the rsync commands function properly after a main server reboot.
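An alternative worth noting (my suggestion, not from the posts above): ssh-keyscan can fetch the remote host key non-interactively, instead of answering the prompt once by hand. Sketched here against a demo path; "backupnas" is the hostname used earlier in the thread:

```shell
# Fetch the backup server's ECDSA host key without an interactive prompt.
# Writing to a demo path here; on a live server you would append to
# /root/.ssh/known_hosts instead.
mkdir -p /tmp/keyscan_demo
ssh-keyscan -t ecdsa backupnas >> /tmp/keyscan_demo/known_hosts 2>/dev/null || true
```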

Link to comment
  • 1 month later...

I am reading this and would like to try to implement it on my two servers. I have encrypted both server filesystems and have a feeling this will slow any progress. Is there a way to ssh into the remote server after a WOL command to allow the disk drives to be unlocked for backup? I know this is just adding another step to a somewhat complex setup. Also, if there is a way to SSH the passphrase to unlock the drives, is there any way to make sure only a specific machine can do the SSH unlock procedure?

Link to comment
On 3/6/2018 at 2:48 PM, sentein said:

I am reading this and would like to try to implement it on my two servers. I have encrypted both server filesystems and have a feeling this will slow any progress. Is there a way to ssh into the remote server after a WOL command to allow the disk drives to be unlocked for backup? I know this is just adding another step to a somewhat complex setup. Also, if there is a way to SSH the passphrase to unlock the drives, is there any way to make sure only a specific machine can do the SSH unlock procedure?

 

I am no expert on this, but, perhaps this will give you some clues:

 

http://blog.neutrino.es/2011/unlocking-a-luks-encrypted-root-partition-remotely-via-ssh/
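On the "only a specific machine" part of the question: OpenSSH's authorized_keys options can pin a key to a source address and force it to run a single command. A sketch only; the IP, script path, and truncated key below are all hypothetical placeholders:

```
# /root/.ssh/authorized_keys entry on the encrypted server:
# "from=" restricts this key to one source address, and "command=" forces
# a single command to run regardless of what the client asks for.
from="192.168.1.10",command="/root/unlock-drives.sh",no-port-forwarding,no-agent-forwarding ssh-rsa AAAA... root@medianas
```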

Link to comment
  • 2 weeks later...

I tried this twice now. It doesn't work for me. After a reboot, when starting the first rsync between both machines, I'm always greeted by this message:

 

root@Tower2:~# rsync -avPX --delete-during --protect-args -e ssh "[email protected]:/mnt/user/xyz/" /mnt/user/xyz/
The authenticity of host '192.168.178.35 (192.168.178.35)' can't be established.
ECDSA key fingerprint is SHA256:SSFXwWXedKMmxBao0vvheifFEfIoiiQl5rtfPuZ8x3w.
Are you sure you want to continue connecting (yes/no)?

Here are the steps I did (twice):

 

On Tower:
----------
# ssh-keygen -t rsa -b 2048 -f /boot/config/ssh/tower_key
Generating public/private rsa key pair. 
Enter passphrase (empty for no passphrase): [press enter here] 
Enter same passphrase again: [press enter here]

On Tower2:
-----------
# ssh-keygen -t rsa -b 2048 -f /boot/config/ssh/tower2_key
Generating public/private rsa key pair. 
Enter passphrase (empty for no passphrase): [press enter here] 
Enter same passphrase again: [press enter here]

On Tower:
----------
# scp /boot/config/ssh/tower_key.pub root@tower2:/boot/config/ssh

On Tower2:
-----------
# scp /boot/config/ssh/tower2_key.pub root@tower:/boot/config/ssh

On Tower:
----------
mkdir -p /root/.ssh
cp /boot/config/ssh/tower_key /root/.ssh/id_rsa
cat /boot/config/ssh/tower2_key.pub > /root/.ssh/authorized_keys
chmod g-rwx,o-rwx -R /root/.ssh

On Tower2:
-----------
mkdir -p /root/.ssh
cp /boot/config/ssh/tower2_key /root/.ssh/id_rsa
cat /boot/config/ssh/tower_key.pub > /root/.ssh/authorized_keys
chmod g-rwx,o-rwx -R /root/.ssh

Add to /boot/config/go on Tower:
-----------------------------------
mkdir -p /root/.ssh
cp /boot/config/ssh/tower_key /root/.ssh/id_rsa
cat /boot/config/ssh/tower2_key.pub > /root/.ssh/authorized_keys
chmod g-rwx,o-rwx -R /root/.ssh

Add to /boot/config/go on Tower2:
------------------------------------
mkdir -p /root/.ssh
cp /boot/config/ssh/tower2_key /root/.ssh/id_rsa
cat /boot/config/ssh/tower_key.pub > /root/.ssh/authorized_keys
chmod g-rwx,o-rwx -R /root/.ssh

 

I really would like to get this running.

 

Many thanks in advance.

 

 

Link to comment

I am no expert (just a novice at this) but here is what I added to my go files

 

Tower1

 

# Tower1

# umask for root .ssh setup with known hosts and authorized keys 
umask 077

# Copy the files and set the permission
cp /boot/config/ssh/Tower2-rsync-key.pub /root/.ssh/
cp /boot/config/ssh/authorized_keys /root/.ssh/
cp /boot/config/ssh/known_hosts /root/.ssh/
chmod 700 /root/.ssh/

#!/bin/bash
# Start the Management Utility
/usr/local/sbin/emhttp &

Tower2

# Tower2

# umask for root .ssh setup with known hosts and authorized keys 
umask 077

# Copy the files and set the permission
cp /boot/config/ssh/Tower1-rsync-key.pub /root/.ssh/
cp /boot/config/ssh/authorized_keys /root/.ssh/
cp /boot/config/ssh/known_hosts /root/.ssh/
chmod 700 /root/.ssh/

#!/bin/bash
# Start the Management Utility
/usr/local/sbin/emhttp &

I have rebooted many times and things work as expected.

 

Make sure your files are in the correct directories.

 

I hope this helps!

Link to comment
4 hours ago, hawihoney said:

I tried this twice now. It doesn't work for me. After a reboot, when starting the first rsync between both machines, I'm always greeted by this message:

 


root@Tower2:~# rsync -avPX --delete-during --protect-args -e ssh "[email protected]:/mnt/user/xyz/" /mnt/user/xyz/
The authenticity of host '192.168.178.35 (192.168.178.35)' can't be established.
ECDSA key fingerprint is SHA256:SSFXwWXedKMmxBao0vvheifFEfIoiiQl5rtfPuZ8x3w.
Are you sure you want to continue connecting (yes/no)?

As I mentioned a few posts above yours, I had this same issue after rebooting.  You also need to copy known_hosts to /root/.ssh/

 

Here is what is in my go file on each host (it has been working unattended for months):

 

Server MediaNAS (source for files to be copied to BackupNAS):

# Copy SSH files back to /root/.ssh folder and set permissions for files
mkdir -p /root/.ssh
cp /boot/config/ssh/medianas_key /root/.ssh/id_rsa
cp /boot/config/ssh/known_hosts /root/.ssh/known_hosts
cat /boot/config/ssh/backupnas_key.pub > /root/.ssh/authorized_keys
chmod g-rwx,o-rwx -R /root/.ssh

Server BackupNAS (destination server for files to be copied from source):

# Copy SSH files back to /root/.ssh folder and set permissions for files
mkdir -p /root/.ssh
cp /boot/config/ssh/backupnas_key /root/.ssh/id_rsa
cat /boot/config/ssh/medianas_key.pub > /root/.ssh/authorized_keys
chmod g-rwx,o-rwx -R /root/.ssh

Note that since the file copy only goes one way (from MediaNAS to BackupNAS), I am not copying known_hosts on BackupNAS.  Not that it would be a problem if you did so; it is just not necessary, since my script only needs to log in to BackupNAS, not MediaNAS.

Edited by Hoopster
Link to comment

Just want to give an update on how this works from behind the GFW (Great Firewall of China).  I have a server there that copies some website backup zips from a server in the USA.  The USA side is on Google Fiber, and in China we are on China Telecom with 200mbit down and 3mbit up.  The GFW is known to slow down SSH transfers, and it sure does.  In the evening, transfer speeds drop way down to dial-up speeds, but the example below shows that they can go 1000 times as fast in the early morning.  Anybody else seeing significant fluctuations in transfer speeds?

 

(this was from a 7am transfer and it is about as good as it gets)
receiving incremental file list
backup_2018-03-26-2045_8b52ec48f571-db.gz
     19,965,146 100%    4.05MB/s    0:00:04 (xfr#1, to-chk=5/541)
backup_2018-03-27-0045_b289b6cf94e3-db.gz
     20,072,759 100%    3.84MB/s    0:00:04 (xfr#2, to-chk=4/541)
backup_2018-03-27-0445_a68f8c15d41b-db.gz
     19,913,288 100%    2.97MB/s    0:00:06 (xfr#3, to-chk=3/541)
backup_2018-03-27-0845_4e34663424ee-db.gz
     19,932,127 100%  912.26kB/s    0:00:21 (xfr#4, to-chk=2/541)
backup_2018-03-27-1245_1fb788df7e07-db.gz
     20,027,387 100%  725.39kB/s    0:00:26 (xfr#5, to-chk=1/541)
backup_2018-03-27-1646_03c8d71e4bed-db.gz
     19,927,016 100%  833.73kB/s    0:00:23 (xfr#6, to-chk=0/541)

Number of files: 541 (reg: 541)
Number of created files: 6 (reg: 6)
Number of deleted files: 0
Number of regular files transferred: 6
Total file size: 8,469,511,456 bytes
Total transferred file size: 119,837,723 bytes
Literal data: 119,837,723 bytes
Matched data: 0 bytes
File list size: 89,580
File list generation time: 8.661 seconds
File list transfer time: 0.000 seconds
Total bytes sent: 152
Total bytes received: 119,957,216

sent 152 bytes  received 119,957,216 bytes  1,181,845.99 bytes/sec
total size is 8,469,511,456  speedup is 70.60

 

Link to comment
  • 6 months later...
3 hours ago, xman111 said:

hey guys, i got to the  if [ ! -d .ssh ]; then mkdir .ssh ; chmod 700 .ssh ; fi  step and it says "if: Expression Syntax."  "then: Command not found."
"fi: Command not found."  Any ideas?

 

The command looks like this:

if [ ! -d .ssh ]; then mkdir .ssh ; chmod 700 .ssh ; fi

 

There is a {space} between the opening bracket [ and the ! symbol, and between .ssh and the closing ], and in several other places.  Make sure you have the spaces where they belong.
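The spaces matter because [ is itself a command (a synonym for test) whose arguments must be separated by whitespace. The same line, run in a scratch directory so it leaves the real /root/.ssh alone:

```shell
# Correctly spaced: creates .ssh only if it does not already exist.
cd /tmp && mkdir -p spacing_demo && cd spacing_demo
if [ ! -d .ssh ]; then mkdir .ssh ; chmod 700 .ssh ; fi
ls -ld .ssh
```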

Link to comment
1 hour ago, xman111 said:

hey, i was trying this on my FreeNAS box since i will be backing up unraid to it.  I copied and pasted the exact command on unraid and it seems to work.  Wonder why the exact same syntax doesn't work on FreeNAS.

 

I rsync between two unRAID servers, so I don't know if the shell or script interpreter, etc., is different on FreeNAS.  There must be something different.  I wish I were more of a scripting guru.  Fortunately, I just followed instructions and asked for help from the experts (that's not me in this case) when I ran into problems.  I eventually got it all working beautifully.

 

Hopefully, someone can help you out with this.
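One guess at the cause, for anyone else who lands here: FreeNAS's default root shell is csh/tcsh, and "if: Expression Syntax." is the classic csh error for Bourne-shell syntax. Forcing the line through a POSIX shell sidesteps it (sketched here in a scratch directory):

```shell
# csh chokes on "if [ ... ]; then"; handing the line to sh avoids that.
mkdir -p /tmp/freenas_demo
sh -c 'cd /tmp/freenas_demo && if [ ! -d .ssh ]; then mkdir .ssh ; chmod 700 .ssh ; fi'
```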

Link to comment

@xman111 You only need to do the mkdir .ssh; chmod 700 .ssh part once. Since FreeNAS lives on a drive partition, the changes will persist. You only need that part of the script on Unraid, as the .ssh directory is lost on every shutdown/reboot.

 

In your instance, just run this on the FreeNAS CLI:

cd ~
mkdir -p .ssh
chmod 700 .ssh 
mv Tower-rsync-key.pub .ssh/ 
cd .ssh/ 
cat Tower-rsync-key.pub >> authorized_keys
chmod 600 authorized_keys

 

Link to comment
1 hour ago, ken-ji said:

@xman111 You only need to do the mkdir .ssh; chmod .ssh part once. since freeNAS lives on a drive partition, the changes will persist. you only need that part of script for Unraid, as the .ssh directory is lost every shutdown/reboot.

 

in you instance, just run this on the freeNAS CLI


cd ~
mkdir -p .ssh
chmod 700 .ssh 
mv Tower-rsync-key.pub .ssh/ 
cd .ssh/ 
cat Tower-rsync-key.pub >> authorized_keys
chmod 600 authorized_keys

Got an error after the cat Tower line: no such file or directory.

 

 

Edited by xman111
Link to comment
