About tr0910

  • Rank
    Advanced Member

  • Community Reputation
    23 Good

  1. In my case, the source data is on an XFS disk and the destination is an XFS-encrypted disk connected via Unassigned Devices. rsync only compares date, time, and file size in my example, so no issue there. However, once the XFS-encrypted drive is mounted with the correct keyfile, its contents are readable by anybody at the destination. If you want your files to be unreadable at the destination, you need to make them unreadable before sending.

     In your example, are you sending files to, say, Google Cloud, and you don't trust Google Cloud not to snoop in your data? Me, I don't care. I own the servers on each end, they are located in my offices at both ends, and I trust myself.... You could zip up the files with a strong password at the source before sending; rsync would then be sending zipped, password-protected files. Your issue will be maintaining the zip process so that if a file changes at the source, your zip process knows, re-zips it, and makes it ready for resend. For example, you back up the spreadsheet "myBankAccounts.xls" and it gets password protected in a zip file and sent to the destination. Later you edit this file and change your bank balances, and you need it backed up with the newly modified data.

     In my examples, rsync takes care of this via the "rsync -avuX" switches. The important one is "u", which tells rsync to check which files have changed and resend them if they are newer at the source. Crashplan had encryption during backup built into it, and I ran it for a while for server-to-server backups, but it didn't scale well, and many people ended their use when Crashplan closed down the Linux solution.
  2. We are all waiting for somebody to take our hacked-together solution, pare it down to the bare essentials, and create a better version. I agree with Hoopster, it just works. All the consumer-friendly solutions have their issues. I've added a bit more scripting to mine so that it creates nicely formatted emails summarizing the results of the backup. This is where you can spend a lot of time making the output look pretty. Attached are the email output and the code that generates a backup between the USA and China. It is actually amazing how fast this can transfer data. And no VPN is being used here at all; it's not required for the actual transfer.

     Subject: China Web 6 Back 0 Doc 2
     ===============================
     ##### USA WebBackups ##### Sat Feb 16 04:40:02 CST 2019
     ===============================
     receiving incremental file list
     ===============================
     ##### China Backups ##### Sat Feb 16 04:40:15 CST 2019
     ===============================
     sending incremental file list
     Number of files: 72,026 (reg: 66,554, dir: 5,472)
     Number of created files: 0
     Number of deleted files: 0
     Number of regular files transferred: 0
     Total file size: 85,547,872,771 bytes
     Total transferred file size: 0 bytes
     Literal data: 0 bytes
     Matched data: 0 bytes
     File list size: 2,490,176
     File list generation time: 9.385 seconds
     File list transfer time: 0.000 seconds
     Total bytes sent: 6,848,560
     Total bytes received: 6,619
     sent 6,848,560 bytes received 6,619 bytes 95,876.63 bytes/sec
     total size is 85,547,872,771 speedup is 12,479.31
     ===============================
     ##### China Documents ##### Sat Feb 16 04:41:26 CST 2019
     ===============================
     sending incremental file list
     List of files transferred shows up here. Removed for security reasons.
     Number of files: 81,837 (reg: 74,447, dir: 7,390)
     Number of created files: 2 (reg: 2)
     Number of deleted files: 0
     Number of regular files transferred: 2
     Total file size: 707,160,237,755 bytes
     Total transferred file size: 12,445 bytes
     Literal data: 12,445 bytes
     Matched data: 0 bytes
     File list size: 851,903
     File list generation time: 0.004 seconds
     File list transfer time: 0.000 seconds
     Total bytes sent: 7,629,047
     Total bytes received: 8,864
     sent 7,629,047 bytes received 8,864 bytes 116,609.33 bytes/sec
     total size is 707,160,237,755 speedup is 92,585.56
     ===============================
     ##### Finished ##### Sat Feb 16 04:42:31 CST 2019
     ===============================

     #!/bin/bash
     # This creates a backup of documents from China to USA, and sends web backup files from USA to China.
     # A summary email is sent with a header listing backup summary activity (number of files sent each way).
     #
     # Note: IP addresses are hard coded as duckdns was having problems from China.
     echo "Starting Sync between servers USA and China"

     # Set up email header
     echo To: some_email@hotmail.com > /tmp/ChinaS1summary.log
     echo From: different_email@yahoo.com >> /tmp/ChinaS1summary.log
     echo Subject: Backup rsync summary >> /tmp/ChinaS1summary.log
     echo >> /tmp/ChinaS1summary.log

     # Backup Disk 1, getting files from yesterday
     echo "##### USA WebBackups ##### `date`"
     echo "===============================" >> /tmp/ChinaS1summary.log
     echo "##### USA WebBackups ##### `date`" >> /tmp/ChinaS1summary.log
     echo "===============================" >> /tmp/ChinaS1summary.log
     dte=$(date -d "yesterday 13:00 " '+%Y-%m-%d')
     src="root@xxx.yyy.253.39:/mnt/disk1/downloads/ftp_dump/usa_newtheme/backup_"$dte"*.gz"
     dtetoday=$(date -d "today 13:00 " '+%Y-%m-%d')
     srctoday="root@xxx.yyy.253.39:/mnt/disk1/downloads/ftp_dump/usa_newtheme/backup_"$dtetoday"*.gz"
     dest="/mnt/disks/SS_1695/downloads/ftp_dump/usa_newtheme/"
     #dest="/mnt/disks/ST4_30A6/downloads/ftp_dump/usa_newtheme/"
     rsync -avuX --stats -e "ssh -i /root/.ssh/China-rsync-key -T -o Compression=no -x -p39456" $src $dest > /tmp/Chinayesterday.log

     echo "##### China Backups ##### `date`"
     echo "===============================" >> /tmp/Chinayesterday.log
     echo "##### China Backups ##### `date`" >> /tmp/Chinayesterday.log
     echo "===============================" >> /tmp/Chinayesterday.log
     # Rsync to USA from China
     # Backups, excluding some stuff like Software
     rsync -avuX --stats --exclude=Software --exclude=FirefoxProfiles --exclude=hp7135US -e "ssh -i /root/.ssh/China-rsync-key -T -o Compression=no -x -p39456" /mnt/user/D2/Backups/ root@xxx.yyy.253.39:/mnt/user/Backups/ > /tmp/Chinabackups.log

     echo "##### China Documents ##### `date`"
     echo "===============================" >> /tmp/Chinabackups.log
     echo "##### China Documents ##### `date`" >> /tmp/Chinabackups.log
     echo "===============================" >> /tmp/Chinabackups.log
     # China\Documents
     rsync -avuX --stats --exclude=.* -e "ssh -i /root/.ssh/China-rsync-key -T -o Compression=no -x -p39456" /mnt/user/Documents/ root@xxx.yyy.253.39:/mnt/user/Documents/ > /tmp/Chinadocuments.log

     echo "##### Finished generating summaries ##### `date`"
     echo "===============================" >> /tmp/Chinadocuments.log
     echo "##### Finished ##### `date`" >> /tmp/Chinadocuments.log
     echo "===============================" >> /tmp/Chinadocuments.log

     # Create the summaries, stripping out the detailed list of files being transferred
     cd /tmp/
     tac Chinayesterday.log | sed '/^Number of files: /q' | tac > yesterday.log
     tac Chinadocuments.log | sed '/^Number of files: /q' | tac > documents.log
     tac Chinabackups.log | sed '/^Number of files: /q' | tac > backups.log
     backups=$(sed -n '/Number of created files: /p' /tmp/backups.log | cut -d' ' -f7 | rev | cut -c 2- | rev)
     docs=$(sed -n '/Number of created files: /p' /tmp/documents.log | cut -d' ' -f7 | rev | cut -c 2- | rev)
     web=$(sed -n '/Number of created files: /p' /tmp/yesterday.log | cut -d' ' -f7 | rev | cut -c 2- | rev)
     echo "China Web "$web" Back "$backups" Doc "$docs

     # Now add all the other logs to the end of this email summary
     cat ChinaS1summary.log yesterday.log backups.log documents.log > diskallsum.log
     cat ChinaS1summary.log Chinayesterday.log Chinabackups.log Chinadocuments.log > diskall.log

     # Adjust email subject to reflect results of backup run
     subject=`echo "China Web "${web}" Back "${backups}" Doc "${docs}`
     sed 's@Backup rsync summary@'"$subject"'@' /tmp/diskall.log > /tmp/diskallfinal.log
     zip China diskallfinal.log

     # Send email summary of results
     ssmtp tr0910@hotmail.com < /tmp/diskallfinal.log
     cd /tmp
     mv China.zip /boot/logs/cronlogs/"`date +%Y%m%d_%H%M`_China.zip"
  3. tr0910

    Backup one server to another

    I just downloaded and tested it. It seems to behave somewhat like rsnapshot. So yes, the 2nd and 3rd backups and beyond are really only hard links to unchanged files. This means that as long as you are only modifying a few small files between backups, you get the benefit of a complete versioned backup with only a tiny bit of additional disk space used for the versioning. I would not use this for backing up a complete server, but it fits backing up documents or source code that has significant potential for revisions, where you would like to keep a copy of all the revisions just in case. A straight rsync copy is more relevant for backing up media files that never change. For simple backups without versioning, I use the following... rsync -avu /from/some/source /to/some/dest I haven't tested your linked rsync_time_backup extensively yet, so I can't really comment further on it. Perhaps your problems are related to scale. Did you test with a small 1-3 GB folder first to see how it behaves?
  4. tr0910

    (SOLVED)-Old Flash Drive with unRAID 4.7

    Check your backup unRaid flash drive for the pro.key file. This is all you need from it; save that file in a safe place, then install a fresh unRaid on that 2nd flash. Depending on the age of your backup of the current unRaid flash drive, you can use parts of it to recover your server. Do you have a printout of the unRaid array screen listing your drives and which ones are parity, cache, data, etc.? Sent from my chisel, carved into granite
  5. tr0910

    Installing 3 workplaces at one PC

    What you are trying to accomplish pushes the borders of the normal expertise you will find here, so not all of your questions will be answered. I have one server with dual Intel Xeon 2670s and 96 GB RAM. This server works best for flexible VM work such as you are proposing. I have 5 Win 10 VMs on this server and create and destroy many more on an ad hoc basis. But I don't pass through any video; I find passthrough really isn't flexible enough for my needs. Requiring one video card for each VM just kills it for me. I find Microsoft RDP best for connecting to these VMs, in spite of what grid runner suggests. It is OK for normal browsing and office work; try it and see if it works for you. The beauty is that dedicated video cabling is not required, and dedicated GPUs are not required. The downside is that video is limited for gaming, streaming, and other GPU-intensive stuff. Basically, just follow grid runner's first video about Windows 10 VMs and ignore video 2. Then connect to the VM with Windows RDP from a laptop and test it out. This is flexible VM at its best. Sent from my chisel, carved into granite
  6. I started using Resilio but after a time just got rsync working. Resilio would sometimes work and sometimes not. No experience with the others. Sent from my chisel, carved into granite
  7. Yes, get ready for a steep learning curve. The unRaid experts are quite willing to help in their area of expertise, which is quite deep in the area of local data hoarding. However, when venturing beyond local operations, expect to do some pioneering on your own. I too would love to have snapshots working over large distances, but I already have something working: I presently use rsync over ssh to back up servers from the USA to other countries, including behind the Great Firewall of China. This testifies to the resilience of rsync over ssh. It just works. See
  8. Interesting, have you used it this way? International data connections are hit and miss, so I wonder how resilient this might be. What if I only want to snapshot /mnt/disk1/sometopfolder/somesubfolder? Is that OK? Not all data deserves this level of protection; I would only snapshot a very small fraction of my data.
  9. @johnnie.black That link's Method 2 suggests both source and destination need to be on the same volume for snapshots. That won't work across the world, will it? Method 1 uses cp. Is there any way this can be accomplished with rsync? It seems more resilient to bad internet connections. I want to snapshot between the USA and Asia.
  10. @johnnie.black I would love to try this across the world sometime, between totally different continents. I have rsync backups over ssh working fine, but some things would be better snapshotted. I used to run rsnapshot on old RFS drives using code from https://rsnapshot.org, but that was back in the old v5 days. How is btrfs different? Some of my disks are still XFS and some are BTRFS. Do both ends of the snapshot need to be converted to BTRFS in order to do this?
  11. tr0910

    OpenVPN is not working 6.7.0rc2

    Peter's Open-VPN plugin has a known problem with 6.7. Nothing to do but wait for a 6.7-compatible update, as some legacy crypto code is no longer present in 6.7.
  12. We await your fix at your convenience. The docker is OK, but Peter's plugin is better; his works even if the array is stopped. If Peter's Open-vpn plugin is important to you, hold off on installing 6.7 for a bit. It will get fixed.
  13. I was just about to reboot my server after upgrading to 6.7rc2 when I saw this. OpenVPN is required for me, so I will postpone this reboot now, possibly for a long time...
  14. tr0910

    Encryption passphrase per disk?

    Right now encryption uses only one key for the entire array, as well as for encrypted unassigned devices. This is painful, as you cannot plug in an encrypted unassigned device with a different key. The limiting factor is that the unRaid GUI only supports one key for the entire array, including all plugged-in unassigned devices. The underlying technology is well able to support multiple keys, but the GUI needs to add support for this. Sent from my chisel, carved into granite
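At the command line, the underlying LUKS layer already supports multiple key slots per device. A sketch of adding a second keyfile outside the GUI, assuming a hypothetical /dev/sdX1 partition and keyfile paths (this bypasses the unRaid GUI, so use with care on real disks):

```shell
# Add a second keyfile to an existing LUKS device; cryptsetup will
# prompt for a current passphrase or keyfile to authorize the change.
cryptsetup luksAddKey /dev/sdX1 /root/second.keyfile
# Either key can now open the same device:
cryptsetup luksOpen --key-file /root/second.keyfile /dev/sdX1 mydisk
```

Drives opened this way would have to be mounted manually, since the GUI still only knows about its single array key.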
  15. tr0910

    unraid keyfile how to create?

    As long as it is a horrible image of your cat, you are good. Just don't use some cute image that has been shared a million times. Unraid will use the first 8 MB of the file and ignore the rest. Sent from my chisel, carved into granite