Need a simple off-site unraid-unraid incremental backup solution


14 hours ago, bnevets27 said:


 

 

 


Yeah, I have asked before and suggested that if Unraid could build in (or offer a plugin for) unRAID-to-unRAID backups, I'm sure many more people would have proper backups this way. I'm sure it would increase sales. It's definitely not simple to set up. After learning rsync, VPNs, etc. it's much easier, but it's a big initial hurdle that some never get over.

 

 

Yeah, I think this would be really valuable. Maybe a little tricky to set up and maintain, though.

On 1/29/2019 at 11:18 AM, Ascii227 said:

I would just add to this: be very careful if you are moving to btrfs snapshots for the first time. I got caught out by a user who moved 3 TB of data from one folder to another without letting me know they were going to do a bulk move. Obviously the backup server on the other side isn't aware of anything moving, just data being deleted and added. Therefore, all of this data ended up being duplicated via the snapshots. For a day or two I was wondering where all my disk space had disappeared to (as btrfs snapshot size reporting is not quite up to scratch).

I was going to reply to this but forgot. Yes, that's the disadvantage of the second method, snapshot-then-rsync. With the first method, btrfs send/receive, you can move/rename folders on the source and only the metadata changes will be sent.

 

Unraid's independent array filesystem has many advantages, but in this case it makes send/receive impractical unless the backup server's disk configuration mirrors the first server's. I still use send/receive for some of my smaller servers, which use only the cache pool in RAID 5/6.
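For anyone curious what the send/receive method looks like in practice, here is a minimal sketch. The paths, hostname, and snapshot layout are assumptions for illustration, and with DRY=1 (the default) it only prints the commands rather than running them:

```shell
#!/bin/bash
# Sketch of incremental btrfs send/receive to a remote server.
# SRC, DEST and SNAPDIR are hypothetical placeholders; DRY=1 (the
# default) only prints each command, DRY=0 executes it via eval.
SRC="${SRC:-/mnt/cache/data}"            # source subvolume
DEST="${DEST:-root@backup-server}"       # remote unraid, reachable over SSH
SNAPDIR="${SNAPDIR:-/mnt/cache/.snapshots}"
DRY="${DRY:-1}"

run() { if [ "$DRY" = 1 ]; then echo "+ $*"; else eval "$*"; fi; }

today=$(date +%Y-%m-%d)
prev=$(ls "$SNAPDIR" 2>/dev/null | sort | tail -n 1)

# take a read-only snapshot; -r is required for btrfs send
run "btrfs subvolume snapshot -r $SRC $SNAPDIR/$today"

if [ -n "$prev" ] && [ "$prev" != "$today" ]; then
    # incremental: with -p only the differences since $prev cross the
    # wire, so renames/moves on the source stay cheap
    run "btrfs send -p $SNAPDIR/$prev $SNAPDIR/$today | ssh $DEST btrfs receive /mnt/disk1/snapshots"
else
    # first run: full send
    run "btrfs send $SNAPDIR/$today | ssh $DEST btrfs receive /mnt/disk1/snapshots"
fi
```

The caveat discussed above still applies: this only works if the destination is also btrfs, which is why it suits a cache pool better than the unRAID array.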


So I still haven't even gotten to this, just not that much time these days, and therefore no backup yet :(

I'm not sure I'll have the time to learn all this stuff to back up with versioning: SSH, btrfs snapshots, VPN, rsync, etc...

 

Isn't there an easier way like cloudberry? I was reading up on the docker in the forum. I wouldn't mind paying for cloudberry although it's about $100, if it could make this simpler for me. Any thoughts?

 

*EDIT* Would rsync even be able to do what I need, which is to encrypt the files? The goal is to have the offsite unRAID use an encrypted filesystem, and I would also like the files themselves encrypted.

Edited by maxse


I think a lot of people would like an easy way, or at least a step-by-step tutorial on how to achieve this.

 

Perhaps the people at Unraid could make a simple way of doing this; it would be another plus for choosing Unraid over the other options.

 

I’m actually paying 140€/year for CrashPlan, and I have the option of hosting a second offsite server at my work.

 

Gus


CrashPlan doesn’t allow backing up to an offsite server other than their cloud anymore, from what I understand...

 

Guys,

How about if I use an old Windows laptop that I have lying around? It’s an i3 that I don’t even use anymore, running Windows 10... I know this wouldn’t be a sleek solution, but is there easy software I could get for Windows that will back up my unRAID shares to the remote unRAID server? Something that will also do versioning to protect me against a ransomware attack and also encrypt the data?

I guess I could manually kick it off once a week. Any thoughts on this? 

17 hours ago, maxse said:

Isn't there an easier way like cloudberry? I was reading up on the docker in the forum. I wouldn't mind paying for cloudberry although it's about $100, if it could make this simpler for me. Any thoughts?

This is what I use for local and offsite (in combination with Syncthing for other stuff too). It works and is pretty simple. But for offsite, you need to make a connection between the servers, either via a VPN dialing in, or possibly ZeroTier (which I've used slightly, but never for offsite backups; in theory it looks like it could work).


oh nice! Thanks @1812 

 

How would I set this up between two unRAIDs? Is it simply an OpenVPN server Docker on the remote, and an OpenVPN client Docker on the main server? Does it connect at a time I specify, stay up until the backup is complete, and then turn the VPN off until the next evening, or does the VPN connection always stay connected? How do I set the target over the VPN? Would it be through the Unassigned Devices plugin, like an SMB share? If so, and the main computer gets a cryptovirus, wouldn't it pretty much infect the remote backup server too once the target gets mounted with Unassigned Devices through the VPN?

 

I'm a newbie, so I'm learning all of this. Sorry about all the questions, but I want to make sure I set this up correctly once and hopefully never worry about it again.

14 hours ago, maxse said:

oh nice! Thanks @1812 

 

How would I set this up between two unRAIDs? Is it simply an OpenVPN server Docker on the remote, and an OpenVPN client Docker on the main server? Does it connect at a time I specify, stay up until the backup is complete, and then turn the VPN off until the next evening, or does the VPN connection always stay connected? How do I set the target over the VPN? Would it be through the Unassigned Devices plugin, like an SMB share? If so, and the main computer gets a cryptovirus, wouldn't it pretty much infect the remote backup server too once the target gets mounted with Unassigned Devices through the VPN?

 

I'm a newbie, so I'm learning all of this. Sorry about all the questions, but I want to make sure I set this up correctly once and hopefully never worry about it again.

For offsite, previously I had the offsite server remote in via VPN, then mounted its address via Unassigned Devices on the local server and backed up to it as a “local” device.

 

I played around a little with ZeroTier because it creates a network without setting up a VPN. So I believe you could then mount the ZeroTier IP locally to connect to the remote server, but I didn’t have enough time to finish playing with it.

 

The problem with viruses and backups is that yeah, the backup will also get infected. So if you back up only every few days, you should know whether your current local set is valid or not, and can stop the backup. In theory you could use whatever program you wanted, including Syncthing, which I believe has the ability to delay backing up a file by saying “only back up if x number of days old.” Maybe it’s Duplicati that has that, I don’t really remember.


Thanks! Okay, so the VPN connection should be initiated on the remote unRAID to connect to the main unRAID? Then the remote unRAID is the one that runs Unassigned Devices and mounts the shares that need to be backed up? After which CloudBerry, running on the remote server, starts the backup process, backing up from the mounted devices to its own shares, and then disconnects when completed? Is that correct? Does it run through this whole process every time (mounting with Unassigned Devices, etc.), or is it just the VPN that connects and disconnects each time?

 

So now I'm confused about these crypto viruses, which are one of my main reasons for having a backup... I thought the entire point of having incremental backups was that if a cryptovirus runs, I should still be able to access my backed-up files and restore from a point that's, let's say, one week back, assuming those were clean files that hadn't gotten the virus yet. I would lose at most the data for that week, correct?

 

I want to run backups every night. So if the virus activates at night and the remote computer kicks off the backup process, connects via VPN, etc., the virus would then infect ALL of the files on the remote backup server as well? Including all of the incremental files? How would I then be able to restore from a backup that was made 7 days ago if all the files are unreadable? Maybe I'm not understanding something? Because wouldn't this also affect any kind of cloud backup as well? I remember reading a while back that a true "protection" from ransomware is incremental backups, so that yes, you lose a few days' worth of data but can still get almost everything back, no?

52 minutes ago, maxse said:

Thanks! Okay, so the VPN connection should be initiated on the remote unRAID to connect to the main unRAID?

You need VPN access on your router so the remote server can access your network. ZeroTier is supposed to create a mesh network that flows through your router, so no VPN or firewall holes need to be opened. As I mentioned before, I looked into this and didn't have time to finish setting it up, but in my initial tests I was able to see offsite devices on my local network without any firewall modifications.

 

53 minutes ago, maxse said:

After which CloudBerry, running on the remote server, starts the backup process, backing up from the mounted devices to its own shares, and then disconnects when completed? Is that correct?

There is no auto-disconnect. You either need to remote into the offsite server and start the VPN, or just leave it on. You could run a script on the server, but it wouldn't have any way to know when the backup was done (unless it monitored network traffic and disconnected from the VPN when it dropped below a threshold... but I'm not the person to write that for you).
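For what it's worth, that traffic-monitoring idea could be roughly sketched like this. The interface name, threshold, and disconnect action are all assumptions, not a tested solution:

```shell
#!/bin/bash
# Rough sketch of the idea above: poll the VPN interface's receive
# counter and report once traffic goes quiet. tun0, the threshold, and
# the suggested pkill action are guesses; adjust for your setup.
IFACE="${IFACE:-tun0}"
THRESHOLD="${THRESHOLD:-1048576}"   # bytes per interval considered "idle"
INTERVAL="${INTERVAL:-60}"          # seconds between samples

rx_bytes() { cat "/sys/class/net/$IFACE/statistics/rx_bytes" 2>/dev/null || echo 0; }

watch_vpn() {
    local last now
    last=$(rx_bytes)
    while sleep "$INTERVAL"; do
        now=$(rx_bytes)
        if [ $(( now - last )) -lt "$THRESHOLD" ]; then
            echo "link idle - stopping vpn"
            # real action would go here, e.g. pkill openvpn
            return 0
        fi
        last=$now
    done
}
```

You could launch `watch_vpn` from a User Scripts job right after the backup starts, so the tunnel drops once the transfer finishes.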

 

54 minutes ago, maxse said:

... I thought the entire point of having incremental backups was that if a cryptovirus runs, I should still be able to access my backed-up files and restore from a point that's, let's say, one week back, assuming those were clean files that hadn't gotten the virus yet. I would lose at most the data for that week, correct?

 

 

55 minutes ago, maxse said:

I want to run backups every night. So if the virus activates at night and the remote computer kicks off the backup process, connects via VPN, etc., the virus would then infect ALL of the files on the remote backup server as well? Including all of the incremental files? How would I then be able to restore from a backup that was made 7 days ago if all the files are unreadable? Maybe I'm not understanding something? Because wouldn't this also affect any kind of cloud backup as well? I remember reading a while back that a true "protection" from ransomware is incremental backups, so that yes, you lose a few days' worth of data but can still get almost everything back, no?

 

The big issue is that there are different types of viruses that do different things. If the virus only locks your local files, and the locked version of a local file is then saved to a versioned server offsite (where the virus does not run), then yes, you can use an older backup to restore the file. If your remote server can be reached by the virus because it is on your network, and your backups are accessed and encrypted by the virus, you can't do anything about it. The best way to combat this is to use a strong password on the remote server that is different from your local one, so the virus can't use a dictionary attack to log in (technically it becomes a local machine when it connects over the VPN). If there is an SMB exploit and it gets around your strong password, there is nothing you can do, except make weekly supervised backups that you know are in good condition and then disconnect them from your network so they can't be accessed by anything: USB drives, or another server that is shut down.

 

Having solid, 100% data retention can be costly, given our current climate of nasty viruses. 


@1812 you are THE man, thank you so much for explaining it. I have spent so much time reading and researching this, trying to understand the most optimal backup solution. I think you answered my big issue. Because I don't really understand how all these things work, including VPNs and their encryption, I wasn't sure whether a crypto-type virus (one that looks for SMB shares, network shares, etc.) would see the remote server basically as a local server when it's connected via the VPN. It seems that it would, since the purpose of the VPN is to make a remote computer behave like it's on the local network... which would then also encrypt all of the previously healthy backups on the remote server, and I wouldn't even be able to access them at that point! Geeez, what a nightmare!

 

I guess I figured that these viruses can't travel and continue to infect across the internet to a remote site, that they can only infect local network shares. But it seems that with a VPN, and then mounting with Unassigned Devices, you're basically connecting the two computers to appear as if they're on a local network, possibly making the whole point useless. I'll look into ZeroTier, I haven't come across it, but it seems the issue would be the same? But how do the big corporations, even AMAZON, handle this? Surely they have clones of servers in different geographic locations and don't have someone physically disconnecting and reconnecting the power to their servers all the time?

 

So, wow... how does one manage to back up a large server? Not mine, but say 100 TB worth of data? Any kind of cloning and even versioning would still render the backup useless, especially if a backup runs nightly and you don't realize there's an issue (I'm not sure if these things stay dormant and then get activated later; I'm sure it's possible)...

 

So now it seems like the most fool-proof way possible is external drives? I could get a bunch of 10 TB Easystores, and when one fills up, just move it off-site and start the next one, correct? Then on, like, a weekly basis manually connect them via USB, giving me a one-week buffer I guess, in case a virus stays dormant that long?

 

Also, if going the external drive route and manually connecting and disconnecting the drives every week...

What would be the best way to manage this, since the server is larger than a single 10 TB drive? Is there a program/docker/plugin that will fill a drive up to a threshold, prompt that it's full, and then continue the backup from where it left off once a new empty drive is physically connected? Can such a program (assuming there's a simple way to do this, there's gotta be one, right?) detect new changes to files and back up only those new files to the drives? Say last week I backed up share XYZ to the external drive, which got full, so I had to use a new empty drive. Now this week I add another file to the share. Is there a way for the program to know to back up only this new file on the XYZ share to the new external drive, even though the now-filled drive containing last week's contents of the share is no longer connected to the server and is safely stored off-site?

 

 

Yikes, sorry about the long post, but I think I see what direction I need to go in, even though it's totally not what I expected and basically means there is no reason to build a 2nd remote unraid server. Perhaps I should start a new thread focused on best practices for the external hard drive backup method and the workflow involved?

 

*EDIT*

Damn it, I must be confused about something. Seriously, in this case, how do companies like AMAZON, or even GOOGLE, protect their data servers from this? I get that they have them in different locations, cloned, but how do they protect against a cryptolocker-type virus? Manually disconnecting and then reconnecting the servers periodically can't be what they do at that scale, right?

 

P.S. My original plan was DuckDNS to get a stable hostname, then I was hoping for an app to do this, or an OpenVPN server and then a VPN client on the main server, and that's how far I got... What a nightmare.

 

*EDIT 2* SORRY! one more thing

What about those btrfs snapshots that were mentioned earlier? Someone said those snapshots are protected or something, so a cryptovirus can't get at the healthy snapshots? In that case I would just mirror shares with rclone and have btrfs take snapshots going back one week? So if crypto infects the main server and then infects the remote unraid server, will I still be able to log in to the remote unraid GUI and restore a healthy snapshot, even though the virus has spread to the remote server as well? Or is that incorrect?

Edited by maxse

9 hours ago, maxse said:

But how do the big corporations, even AMAZON, handle this? Surely they have clones of servers in different geographic locations and don't have someone physically disconnecting and reconnecting the power to their servers all the time?

 

 

Bigger infrastructure, realtime file monitoring, custom software, and a highly trained IT staff to monitor everything at all times. 

 

9 hours ago, maxse said:

basically means there is no reason to build a 2nd remote unraid server.

 

Remote file backup is good protection against local theft or destruction by fire or another cause. I have a second local server on my network for daily backups, and another offsite for remote backups, with super important data also replicated and encrypted in a cloud service.

 

9 hours ago, maxse said:

What about those btrfs snapshots that were mentioned earlier? Someone said those snapshots are protected or something, so a cryptovirus can't get at the healthy snapshots?

I don't use them, though the functionality is there via CLI.

 

9 hours ago, maxse said:

So if crypto infects the main server and then infects the remote unraid server, will I still be able to log in to the remote unraid GUI and restore a healthy snapshot, even though the virus has spread to the remote server as well? Or is that incorrect?

 

It all depends on whether it only encrypts shares or also encrypts the OS files, preventing boot. The goal of cryptolockers is to let you see that your data files are locked, but a poorly written one may inadvertently lock the OS files on unRAID. There was a plugin that used to monitor for cryptolocking behavior on unRAID, but I haven't seen much about it in a year or so. Check the "App Store" for ransomware protection. It essentially placed bait files in the shares and monitored them. If they were modified, it would stop all activity on the server.
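The bait-file approach described above is simple enough to sketch. Everything here (paths, filenames, the "stop" action) is an assumption for illustration, not the actual plugin's code:

```shell
#!/bin/bash
# Sketch of the bait-file idea: drop a decoy file into each share,
# record its checksum, and raise the alarm if any decoy ever changes.
# BAIT_ROOT/STATE and the "stopping" action are placeholders.
BAIT_ROOT="${BAIT_ROOT:-/mnt/user}"       # directory containing the shares
STATE="${STATE:-/tmp/bait.sha256}"        # recorded checksums

# run once: create one hidden bait file per share and record checksums
seed_bait() {
    local share
    for share in "$BAIT_ROOT"/*/; do
        echo "do not touch" > "${share}.00-bait.txt"
    done
    find "$BAIT_ROOT" -maxdepth 2 -name '.00-bait.txt' -exec sha256sum {} + > "$STATE"
}

# run from cron: verify the checksums; on any mismatch, sound the alarm
check_bait() {
    if ! sha256sum --quiet -c "$STATE" >/dev/null 2>&1; then
        echo "BAIT MODIFIED - stopping array"   # real action: e.g. stop the array
        return 1
    fi
    echo "bait intact"
}
```

The point of the design is that ransomware encrypting a share will touch the decoy along with everything else, so the checksum mismatch becomes a tripwire before the backup run propagates the damage.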

10 hours ago, maxse said:

Someone said those snapshots are protected or something, so a cryptovirus can't get at the healthy snapshots?

Like I already mentioned, btrfs snapshots are read-only; they can't be infected/modified/deleted.


Got it! Okay, thanks guys. Looks like the solution will be to do the snapshots. Thanks @johnnie.black, I had to go back and re-read the entire thread, I've been reading so much lately. And thanks @1812 and others as well.

 

Any suggestions on how to set up the VPN to use with CloudBerry? How would that even work, with the VPN connecting while the backup is taking place and then turning off? How would OpenVPN even know when to do these things? How do you guys do this?

 

7 hours ago, maxse said:

Got it! Okay, thanks guys. Looks like the solution will be to do the snapshots. Thanks @johnnie.black, I had to go back and re-read the entire thread, I've been reading so much lately. And thanks @1812 and others as well.

 

Any suggestions on how to set up the VPN to use with CloudBerry? How would that even work, with the VPN connecting while the backup is taking place and then turning off? How would OpenVPN even know when to do these things? How do you guys do this?

 

 

You need to be able to run OpenVPN on your firewall, then run the OpenVPN plugin on the remote server. The remote server stays logged in once it's on; mine just always stays logged in to my network. Then you map the "local" drive in CloudBerry for the backup to the assigned IP (part of the other reason I leave mine on, so it retains the same IP). I'm sure there are user scripts that can turn it on and off, but leaving it on all the time doesn't hurt anything for me.


Thanks everyone for helping me with this! It looks like I am pretty much settled on the following workflow. Time to invest some time into it, but I'll just follow along with the SSH tutorial, shouldn't be too bad.

 

Here's the workflow that I'll go with:

 

rsync via SSH to the remote unRAID server. The remote server will be running the DuckDNS docker and will always be on.

The remote unRAID server will use btrfs with daily snapshots.
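On the backup side, the daily-snapshot half of that workflow could look something like this. The paths and the 7-day retention are assumptions, and with DRY=1 (the default) it only prints the commands:

```shell
#!/bin/bash
# Sketch of nightly snapshot rotation on the backup server: after the
# rsync lands, take a read-only snapshot of the backup subvolume and
# keep the last 7. SUBVOL/SNAPDIR are hypothetical placeholders.
SUBVOL="${SUBVOL:-/mnt/disk1/backups}"
SNAPDIR="${SNAPDIR:-/mnt/disk1/snaps}"
KEEP="${KEEP:-7}"
DRY="${DRY:-1}"

run() { if [ "$DRY" = 1 ]; then echo "+ $*"; else "$@"; fi; }

# -r makes the snapshot read-only, which is what protects it from a
# cryptovirus that reaches the server over the network
run btrfs subvolume snapshot -r "$SUBVOL" "$SNAPDIR/$(date +%Y-%m-%d)"

# prune everything but the newest $KEEP snapshots (names sort by date)
for old in $(ls "$SNAPDIR" 2>/dev/null | sort | head -n -"$KEEP"); do
    run btrfs subvolume delete "$SNAPDIR/$old"
done
```

Run from cron after the nightly rsync, this gives a week of restore points that ransomware on the source can overwrite going forward but cannot rewrite retroactively.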

 

That seems to be pretty much the most straightforward way to do what I need. Of course, I'm sure I'll have some questions about setting up SSH and rsync, snapshots, etc., but now I have a plan.

 

Oh, my only unknown is: can I encrypt the files with rsync before sending them over? Not only encrypted, but so that it also obfuscates the file names? That's the big thing for me, encryption at the file level. Can it do that?

 

Any thoughts?


Yikes! So I just read through the SSH thread that was linked here. Damn, that's pretty complex; I definitely don't have the time to learn all that, and worst case, if something breaks, I would never be able to figure it out again.

 

So now I'm back to square one :(

 

Can't believe how complicated this is, really! I figured that as drives get cheaper and people build new systems, most people would just want to back up their unRAID servers to another one at another location, and do it easily...

 

So, say I want to go with the solution of just using external USB drives: plug them in once a week, run an automated backup, then unplug them when it's done. Is such a thing possible? For about 30 TB I would need 3-4 drives, and I'd need the software to remember where it left off as each drive fills up, etc. It would also need to encrypt the files and obfuscate the names before writing to the external drive.

 

Does such a thing even exist?

19 minutes ago, maxse said:

Yikes! So I just read through the SSH thread that was linked here. Damn, that's pretty complex; I definitely don't have the time to learn all that, and worst case, if something breaks, I would never be able to figure it out again.

It took me about 4-5 days (not all day, of course) of trying things, figuring out what worked and what didn't for my use case, asking questions which the experts answered, and fine-tuning the script to my needs.

 

It has been a year now that I have had the rsync-over-SSH setup working flawlessly. It just works and has never "broken." The only way it could break is if the SSH keys somehow got deleted without a backup and needed to be regenerated.

 

It looks daunting, I will admit, but a lot of it is just reading for understanding. You need three things to make this work: SSH keys and their associated files, go file modifications so the setup survives a reboot, and the script that runs it all.
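Those three pieces could be sketched as follows. The paths follow common unRAID conventions (keys kept on the flash drive, restored by the go file at boot), but treat every path and hostname here as an assumption rather than the exact setup from that thread:

```shell
#!/bin/bash
# Sketch of the three pieces: (1) SSH keys stored on flash, (2) go-file
# lines that restore them at boot, (3) the rsync call that uses them.
# KEYDIR/GOFILE/REMOTE are placeholders for illustration.
KEYDIR="${KEYDIR:-/boot/config/ssh}"
GOFILE="${GOFILE:-/boot/config/go}"
REMOTE="${REMOTE:-root@backup-server}"

setup_keys() {
    mkdir -p "$KEYDIR"
    # 1. generate a passphrase-less key pair once, kept on the flash
    #    drive; install the public half on the backup server with:
    #      ssh-copy-id -i "$KEYDIR/backup-rsync-key.pub" "$REMOTE"
    ssh-keygen -q -t ed25519 -N "" -f "$KEYDIR/backup-rsync-key"

    # 2. go file: copy the key back into /root/.ssh on every boot,
    #    since unRAID's root filesystem lives in RAM
    cat >> "$GOFILE" <<EOF
mkdir -p /root/.ssh
cp $KEYDIR/backup-rsync-key* /root/.ssh/
chmod 600 /root/.ssh/backup-rsync-key
EOF
}

# 3. the nightly script then authenticates with the key, e.g.:
#    rsync -avu -e "ssh -i /root/.ssh/backup-rsync-key" \
#        /mnt/user/Documents/ "$REMOTE:/mnt/user/Documents/"
```

Once the key is in place and surviving reboots, the script itself is just rsync invocations like the ones further down this thread.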

 

I went into this a complete newbie with respect to these things and managed to get it working over a few days of trial-and-error and asking questions.


We are all waiting for somebody to take our hacked-together solution, pare it down to the bare essentials, and create a better version.

 

I agree with Hoopster: it just works. All the consumer-friendly solutions have their issues. I've added a bit more scripting to mine so that it creates nicely formatted emails summarizing the results of the backup. This is where you can spend a lot of time making the output look pretty. Attached is the email output and the code that runs a backup between the USA and China. It is actually amazing how fast this can transfer data. And no VPN is being used here at all; it's not required for the actual transfer.

 

Subject: China Web 6 Back 0 Doc 2

===============================
##### USA WebBackups ##### Sat Feb 16 04:40:02 CST 2019
===============================
receiving incremental file list
===============================
##### China Backups #####  Sat Feb 16 04:40:15 CST 2019
===============================
sending incremental file list

Number of files: 72,026 (reg: 66,554, dir: 5,472)
Number of created files: 0
Number of deleted files: 0
Number of regular files transferred: 0
Total file size: 85,547,872,771 bytes
Total transferred file size: 0 bytes
Literal data: 0 bytes
Matched data: 0 bytes
File list size: 2,490,176
File list generation time: 9.385 seconds
File list transfer time: 0.000 seconds
Total bytes sent: 6,848,560
Total bytes received: 6,619

sent 6,848,560 bytes  received 6,619 bytes  95,876.63 bytes/sec
total size is 85,547,872,771  speedup is 12,479.31
===============================
##### China Documents #####  Sat Feb 16 04:41:26 CST 2019
===============================
sending incremental file list

List of files transferred shows up here.  
Removed for security reasons.

Number of files: 81,837 (reg: 74,447, dir: 7,390)
Number of created files: 2 (reg: 2)
Number of deleted files: 0
Number of regular files transferred: 2
Total file size: 707,160,237,755 bytes
Total transferred file size: 12,445 bytes
Literal data: 12,445 bytes
Matched data: 0 bytes
File list size: 851,903
File list generation time: 0.004 seconds
File list transfer time: 0.000 seconds
Total bytes sent: 7,629,047
Total bytes received: 8,864

sent 7,629,047 bytes  received 8,864 bytes  116,609.33 bytes/sec
total size is 707,160,237,755  speedup is 92,585.56
===============================
##### Finished #####  Sat Feb 16 04:42:31 CST 2019
===============================

 

#!/bin/bash
# This creates a backup of documents from China to USA, and sends web backup files from USA to China
# A summary email is sent with a header listing backup summary activity.  (number of files sent each way)
#
# Note IP addresses are hard coded as the duckdns was having problems from China.

echo "Starting Sync between servers USA and China"


		# Set up email header

		echo To: some_email@hotmail.com > /tmp/ChinaS1summary.log
		echo From: different_email@yahoo.com >> /tmp/ChinaS1summary.log
		echo Subject: Backup rsync summary >> /tmp/ChinaS1summary.log
		echo   >> /tmp/ChinaS1summary.log

		# Backup Disk 1 getting files from yesterday
		echo "##### USA WebBackups #####  `date`"
		echo "===============================" >> /tmp/ChinaS1summary.log
		echo "##### USA WebBackups ##### `date`" >> /tmp/ChinaS1summary.log
		echo "===============================" >> /tmp/ChinaS1summary.log

		dte=$(date -d "yesterday 13:00 " '+%Y-%m-%d')
		src="root@xxx.yyy.253.39:/mnt/disk1/downloads/ftp_dump/usa_newtheme/backup_"$dte"*.gz"
		dtetoday=$(date -d "today 13:00 " '+%Y-%m-%d')
		srctoday="root@xxx.yyy.253.39:/mnt/disk1/downloads/ftp_dump/usa_newtheme/backup_"$dtetoday"*.gz"
		dest="/mnt/disks/SS_1695/downloads/ftp_dump/usa_newtheme/"
		#dest="/mnt/disks/ST4_30A6/downloads/ftp_dump/usa_newtheme/"
		
		rsync -avuX --stats   -e "ssh -i /root/.ssh/China-rsync-key  -T  -o Compression=no -x -p39456"  $src $dest  > /tmp/Chinayesterday.log

		echo "##### China Backups #####  `date`"
		echo "===============================" >> /tmp/Chinayesterday.log
		echo "##### China Backups #####  `date`" >> /tmp/Chinayesterday.log
		echo "===============================" >> /tmp/Chinayesterday.log
		
		
		#Rsync to USA from China
		#Backups excluding some stuff like Software
		rsync -avuX  --stats --exclude=Software --exclude=FirefoxProfiles --exclude=hp7135US -e "ssh -i /root/.ssh/China-rsync-key -T -o Compression=no -x -p39456"  /mnt/user/D2/Backups/  root@xxx.yyy.253.39:/mnt/user/Backups/  > /tmp/Chinabackups.log

		echo "##### China Documents #####  `date`"
		echo "===============================" >> /tmp/Chinabackups.log
		echo "##### China Documents #####  `date`" >> /tmp/Chinabackups.log
		echo "===============================" >> /tmp/Chinabackups.log
		
		
		#China\Documents
		#Documents
		rsync -avuX  --stats --exclude=.*  -e "ssh -i /root/.ssh/China-rsync-key -T -o Compression=no -x -p39456"  /mnt/user/Documents/  root@xxx.yyy.253.39:/mnt/user/Documents/  > /tmp/Chinadocuments.log


		echo "##### Finished generating summaries #####  `date`"
		echo "===============================" >> /tmp/Chinadocuments.log
		echo "##### Finished #####  `date`" >> /tmp/Chinadocuments.log
		echo "===============================" >> /tmp/Chinadocuments.log
	
	
		# Create the summaries stripping out the detailed files being transferred
		cd /tmp/
		tac Chinayesterday.log | sed '/^Number of files: /q' | tac > yesterday.log
		tac Chinadocuments.log | sed '/^Number of files: /q' | tac > documents.log
		tac Chinabackups.log | sed '/^Number of files: /q' | tac > backups.log

		backups=$(sed -n '/Number of created files: /p' /tmp/backups.log | cut -d' ' -f7 | rev | cut -c 2- | rev)
		docs=$(sed -n '/Number of created files: /p' /tmp/documents.log | cut -d' ' -f7 | rev | cut -c 2- | rev)
		web=$(sed -n '/Number of created files: /p' /tmp/yesterday.log | cut -d' ' -f7 | rev | cut -c 2- | rev)
		echo "China Web "$web" Back "$backups" Doc "$docs

		# now add all the other logs to the end of this email summary
		cat ChinaS1summary.log yesterday.log backups.log documents.log > diskallsum.log
		cat ChinaS1summary.log Chinayesterday.log Chinabackups.log Chinadocuments.log > diskall.log
		
		# Adjust email subject to reflect results of backup run
		subject=`echo "China Web "${web}" Back "${backups}" Doc "${docs}`
		sed 's@Backup rsync summary@'"$subject"'@' /tmp/diskall.log > /tmp/diskallfinal.log
		zip China diskallfinal.log 
		
		# Send email of summary of results
		ssmtp tr0910@hotmail.com < /tmp/diskallfinal.log
		cd /tmp  
		mv China.zip /boot/logs/cronlogs/"`date +%Y%m%d_%H%M`_China.zip"

 

Edited by tr0910


Wow, interesting, that's impressive!

 

Now let me ask you this: how would I be able to encrypt the data, though? From what I understand, encryption modifies the files, so rsync may think everything has changed when looking for modifications to upload to the remote server. Or am I wrong? Is it possible to have it obfuscate and encrypt, then send? Because that's a critical part of what I need to do. I'm not sure I'd want the people where the remote server is being able to browse my backups...

On 2/10/2019 at 5:36 PM, maxse said:

So I still haven't even gotten to this, just not that much time these days, and therefore no backup yet :(

I'm not sure I'll have the time to learn all this stuff to back up with versioning: SSH, btrfs snapshots, VPN, rsync, etc...

 

Isn't there an easier way like cloudberry? I was reading up on the docker in the forum. I wouldn't mind paying for cloudberry although it's about $100, if it could make this simpler for me. Any thoughts?

 

*EDIT* Would rsync even be able to do what I need, which is to encrypt the files? The goal is to have the offsite unRAID use an encrypted filesystem, and I would also like the files themselves encrypted.

This is a bit of a longshot and I'm not sure if it will work. So, if I understand correctly:

You need a reliable offsite backup of all your data on an unRAID box.

You need it to be incremental.

You want to host it yourself.

You would rather work in the GUI.

 

Let me know if I've missed anything.

 

Regarding CloudBerry, it's $149 for the version you're after, i.e. Linux Ultimate. There's no restriction on how much data it can handle.

CloudBerry can backup to a number of destinations:

 

[screenshot: CloudBerry's list of supported backup destinations]

 

Just so I can understand your use case:

 

1. How will you be connecting your source unRAID to your destination unRAID?

 

I've seen mention of OpenVPN however there are some options depending on where your destination is hosted.

Are you willing to pay for an MPLS-style VPN? These are dedicated lines that a provider will set up for you, essentially giving you your own VPN. Businesses use them for their reliability; they are expensive, but they require little to no maintenance on your side.

 

Are you willing to install routers at each location? If you can, this is the better route. Ubiquiti's USG has a dead-simple site-to-site VPN option that is really easy to configure. I'm sure pfSense has something similar as well. The trade-off here is that you will need to maintain this VPN and check that it's working correctly.

 

2. Once you've figured that out, you should be able to address your destination unRAID box via your VPN. How will you back up your data?

Seeing as CloudBerry has File System integration:

You could in theory:

  1. Create a share on your destination unRAID box
  2. Mount the remote share using Unassigned Devices plugin as it will be addressable via its DynamicDNS / IP address.
  3. Pass the mounted share path to the CloudBerry container
  4. Backup using CloudBerry File System.

[image: CloudBerry backup wizard showing the File System option]
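For what it's worth, step 2 above boils down to an SMB mount under the hood. This is a minimal sketch of what Unassigned Devices effectively does; the hostname, share name, and credentials are all made-up examples:

```shell
#!/bin/bash
# Sketch of mounting the destination unRAID share locally so CloudBerry
# can back up to it as a plain path. All names below are examples.
REMOTE_HOST="backup.example.duckdns.org"   # DynamicDNS / VPN address
REMOTE_SHARE="offsite-backup"              # share created in step 1
MOUNT_POINT="/mnt/disks/offsite-backup"    # Unassigned Devices mounts under /mnt/disks

mount_remote_share() {
    mkdir -p "$MOUNT_POINT"
    # SMB v3 mount; real credentials belong in a root-only credentials file
    mount -t cifs "//$REMOTE_HOST/$REMOTE_SHARE" "$MOUNT_POINT" \
        -o username=backupuser,password=changeme,vers=3.0
}
```

The mount point is then what you'd pass into the CloudBerry container in step 3.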

 

Or

 

CloudBerry has Minio integration so you could theoretically:

  1. Install the Minio container on your destination unRAID box
  2. use Cloudberry to connect directly to that and do your backups.

I haven't used Minio previously, but I could try to set up a backup there and write a guide if you're willing to overcome the networking hurdle above :)
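Step 1 of the Minio route would look roughly like this — an untested sketch, where the keys, port, and data path are placeholders, not recommendations:

```shell
# Sketch: run Minio on the destination unRAID box so CloudBerry can
# back up to its S3-compatible endpoint. Keys and paths are examples.
start_minio() {
    docker run -d \
        --name minio \
        -p 9000:9000 \
        -e MINIO_ACCESS_KEY="backup-access-key" \
        -e MINIO_SECRET_KEY="backup-secret-key" \
        -v /mnt/user/minio-data:/data \
        minio/minio server /data
}
```

CloudBerry would then be pointed at the destination box's address on port 9000 with those keys.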

 

I'm not sure of the reliability of the above either, which is why I just pay for BackBlaze. However, if someone could shed some light on whether this is a bad idea, I'm all ears!

 

On 2/17/2019 at 11:55 AM, maxse said:

Now let me ask you this. How would I be able to encrypt the data, though? From what I understand, encryption modifies the files, so rsync may think everything is modified when looking for changes and uploading them to the remote server. Or am I wrong? Is it possible to have it obfuscate and encrypt, then send? Because that's a critical part of what I need to do. I'm not sure I would want the people at the remote location to be able to browse my backups...

In my case, the source data is on an XFS disk, and the destination disk is an XFS-encrypted disk connected via Unassigned Devices. rsync is only looking at the date, time, and file size in my example, so there's no issue. However, once the XFS-encrypted drive is mounted with the correct keyfile, the contents are readable by anybody at the destination.

 

But if you want your files to be unreadable at the destination, you need to make them unreadable before sending. In your example, are you sending files to, say, Google Cloud, and you don't trust Google Cloud not to snoop in your data? Me, I don't care. I own the servers on each end, and they are located in my offices at both ends. I trust myself...

 

You could zip up the files with a strong password at the source before sending. Then rsync will be sending zipped, password-protected files. Your issue will be maintaining the zip process so that if a file changes at the source, your zip process knows, re-zips it, and makes it ready for resend. Example: you back up the spreadsheet "myBankAccounts.xls" and it gets password-protected in a zip file and sent to the destination. But later you edit this file and change your bank balances, and you need it backed up with the newly modified data. In my examples, rsync takes care of this via the "rsync -avuX" switches; the important one is "u", which tells rsync to check files that have changed and resend them if they are newer at the source. Crashplan had encryption during backup built in, and I ran it for a while for server-to-server backups; it didn't scale well, and many people stopped using it when they dropped the Linux solution.
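To make that concrete, here is a rough sketch of the zip-then-rsync idea. The paths and host are made up, and the password should really come from a protected file rather than being hard-coded like this:

```shell
#!/bin/bash
# Sketch: re-zip changed files with a password, then let rsync push only
# the archives that are newer at the source. All names are examples.
SRC="/mnt/user/documents"
STAGING="/mnt/user/backup-staging"
DEST="backupuser@backup-server:/mnt/user/offsite"

stage_and_send() {
    mkdir -p "$STAGING"
    # zip -u only re-adds files newer than the existing archive entry,
    # so an edited myBankAccounts.xls gets picked up on the next run
    zip -r -u -P "use-a-keyfile-not-this" "$STAGING/documents.zip" "$SRC"
    # rsync's -u: skip files that are already newer at the destination
    rsync -avuX -e ssh "$STAGING/" "$DEST/"
}
```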

 


Thanks guys, yeah, it seems not so simple. @yusuflimz I'm not really sure how to connect my remote server to the main server and how to network them. That's one of the things that I figured would be simple initially, but it turns out not to be the case. I don't have a huge budget, so a dedicated VPN provider is out.

 

I am open to getting 2 routers; I can place one at the remote location, no problem, if it would make things smoother. It seems that the solution suggested here with SSH and rsync is probably best: no VPN, and SSH just connects when the backup needs to run. Then I could use btrfs on the remote unRAID to do snapshots to protect against a crypto virus...
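A rough sketch of that flow as I understand it — the hostnames, paths, and snapshot layout are made up, and this assumes key-based SSH login is already set up:

```shell
#!/bin/bash
# Sketch: rsync over SSH from the main server, then take a read-only
# btrfs snapshot on the remote. All names below are examples.
REMOTE="backupuser@remote-unraid"
SRC="/mnt/user/data/"
DEST_PATH="/mnt/disks/backup/data"

push_backup() {
    # the SSH connection only exists while the backup runs -- no VPN
    rsync -avuX -e ssh "$SRC" "$REMOTE:$DEST_PATH/"
    # read-only snapshot: a crypto virus that later trashes the live copy
    # cannot alter snapshots taken before the infection
    ssh "$REMOTE" "btrfs subvolume snapshot -r $DEST_PATH \
        ${DEST_PATH}-snap-$(date +%Y%m%d)"
}
```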

 

I understand that when the drive is mounted and unlocked, anyone can read the contents. I currently have XFS encrypted on the main server. I don't want the contents to be readable at the remote site. I'll have the server always on... Zipping files individually is not an option, because I would be constantly adding content; it would defeat the purpose of having it automated.

 

I'm very surprised there's no Docker-type app that can run on the remote server and just listen for when the main server needs to connect for backups... What about a solution like Nextcloud? I plan to eventually install it on the main server as I learn more and more. Can it somehow be used on the remote server, with a Nextcloud folder linked back to the main server, and have CloudBerry point to that Nextcloud folder, which is actually on the remote server?

 

Yeah, I don't trust Google or other companies. I have personal stuff on there, media, etc.; once you upload your data to a different company, it's no longer yours... :(

Especially with hard drives getting cheaper and cheaper, I figured I could run my own "cloud".

Edited by maxse

However, once the XFS Encrypted drive is mounted with a correct keyfile, the contents are readable by anybody at destination. 


Just out of interest, where is the key file stored? How easy is it to get at? Worst-case scenario: someone steals the destination server. Are they able to retrieve the key file, mount the encrypted disks, and view the data?

If that's the case, what's the point in encrypting the data anyway? If they can't get to the key file then it's all good, right?


Sorry if I’m missing the point!
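To make the concern concrete: if someone has both the disks and the key file, reading the data is roughly just two commands. This is a sketch; the device name and paths are examples, not the actual unRAID layout:

```shell
# Sketch: with the keyfile in hand, this is all it takes to expose the
# data -- exactly the stolen-server worst case above. Names are examples.
unlock_disk() {
    cryptsetup luksOpen /dev/sdb1 backup_crypt --key-file /root/keyfile
    mount /dev/mapper/backup_crypt /mnt/backup
}
```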

8 hours ago, maxse said:

What about a solution like Nextcloud? I plan to eventually install it on the main server as I learn more and more. Can it somehow be used on the remote server, with a Nextcloud folder linked back to the main server, and have CloudBerry point to that Nextcloud folder, which is actually on the remote server?

That's what I use, not exactly like you outline, but pretty close.

The title of the thread is about using Minio, and since you only want that one function, it might be what you want. Personally, I already had Nextcloud running, so connecting Duplicati to it was a no-brainer, for many reasons.
