Need a simple off-site unraid-to-unraid incremental backup solution


maxse

Recommended Posts

Wow! You guys are the best! I don't know anything about scripting, how scripts work, or where to enter them, but I know there is a plugin for it and I will figure it out. Most importantly, after going through all these options over the month or so since I started this thread, I have settled on this method: remote power-on via IPMI over SSH, rsync over SSH, then shut down. No one at the remote site will have any passwords, so after shutdown the drives will be encrypted again. Perfect! This seems to be the most reliable and stable method, and it's also free!

 

I just got my case in, a Node 804, and ordered a Supermicro X8SIL-F, 8GB of RAM, and a Xeon X3470. Can't wait to get this going, guys, thank you so much! I'm sure I'll have other questions about SSH, how to power on, etc., but I'll ask those in the other threads.

 

Awesome tip about how to make the files read-only after transferring them, to protect against crypto. So I don't have to use BTRFS with snapshots; I was still concerned about its stability, and about me screwing something up with the snapshots... My only question is: what happens when I rename or delete a file locally and then the rsync script runs and can't delete or rename it on the remote side? Does the entire backup just fail, or will it, worst case, write another duplicate file and leave the original intact on the remote server?

Link to comment
46 minutes ago, maxse said:

My only question is: what happens when I rename or delete a file locally and then the rsync script runs and can't delete or rename it on the remote side? Does the entire backup just fail, or will it, worst case, write another duplicate file and leave the original intact on the remote server?

I'm not sure I'm following... So you're looking to do a two-way sync? As in, everything you do on server A gets mirrored to server B and vice versa? If so, that's a good question. I don't think the entire backup will fail; I think rsync just skips the file, but I don't know for sure. I only do a one-way sync, so any files I delete locally after they're synced have to be deleted manually on the remote server if I want them gone. I'm sure there are other ways, but I want it this way, and I don't do much deleting anyway. I think most people around here only do a one-way sync, so if you're looking to do two-way you might have some googling to do. You could maybe use the --delete flag in some way, but I also think you'd need to run rsync twice to do a two-way sync. I'm not sure though.
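 

For reference, the basic one-way sync over SSH is just something like this (share name and hostname are placeholders):

# -a preserves permissions/timestamps, -v is verbose; without --delete,
# files removed locally are left untouched on the backup server
rsync -av -e ssh /mnt/user/share/ user@backupserver:/mnt/user/share/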

Link to comment

Having rsync replicate deletes can be dangerous.

 

If I delete a file on the source server, my rsync does not replicate the delete. This allows me to restore from backup if I made a human error. If I rename a file at the source, I end up with two copies of the file at the destination.

 

This forces you to get file organization right the first time, because the backup will have already replicated your initial structure.

 

If you are a file-organization fanatic this may be frustrating, as changes in organization must be replicated manually in my setup.

 

You could manually run a cleanup that replicates deletes using rsync on a user-controlled basis rather than automatically.

 

Sent from my chisel, carved into granite

 

Link to comment

Thanks guys. Fair enough. I'm doing a one-way sync. I was originally looking to have incremental backups to protect me from accidentally deleting something... but this method will work. Just no replicating of deletes, write-protect the backup, and shit down the server when finished to keep everything encrypted. Great!

 

I watched a video by spaceinvaderone that showed how to have unraid grab the keyfile (to decrypt the drives and start the array) from a hosted FTP site. Not sure how I would do that from my main server. Any thoughts on this? Wouldn’t this mean I would have to open up my server to the internet just for this specific file?

 

thanks!

Link to comment
28 minutes ago, tr0910 said:

Having rsync replicate deletes can be dangerous.

 

29 minutes ago, tr0910 said:

If I delete a file on the source server, my rsync does not replicate the delete. This allows me to restore from backup if I made a human error.

I totally agree with this!

 

30 minutes ago, tr0910 said:

because the backup will have already replicated your initial structure.

Talking about folder structure, there is a huge "design flaw" when using rsync with user shares on the first sync. Rsync will create the entire folder structure of a share on the first disk it chooses before it starts transferring the files. This results in all files going to that one disk, causing it to fill up completely and ultimately fail with out-of-space errors.

 

This is only a problem when rsyncing user shares, I think, and only on the first sync; or I should say, only when syncing to an empty server. I prefer using user shares, but I knew about this issue before I started transferring files to the backup server, so I didn't use rsync for the first transfer. I just mounted the server via UD and transferred all the files locally with Midnight Commander. I have been using rsync with user shares ever since and it works great, just not for the first big transfer to an empty server.

Link to comment
4 minutes ago, maxse said:

and shit down the server when finished to keep everything encrypted.

lol, funny spelling mistake! I must be dead tired :P I hope you bring enough of that encrypting toilet paper! Time to go to bed.. 🤣

 

9 minutes ago, maxse said:

Wouldn’t this mean I would have to open up my server to the internet just for this specific file?

Yes. Since you already have to open up a port for SSH, I would fetch the file via SCP instead, which uses the SSH port.

Link to comment



Talking about folder structure, there is a huge "design flaw" when using rsync with user shares on the first sync. Rsync will create the entire folder structure of a share on the first disk it chooses before it starts transferring the files. This results in all files going to that one disk, causing it to fill up completely and ultimately fail with out-of-space errors.
 
This is only a problem when rsyncing user shares, I think, and only on the first sync; or I should say, only when syncing to an empty server. I prefer using user shares, but I knew about this issue before I started transferring files to the backup server, so I didn't use rsync for the first transfer. I just mounted the server via UD and transferred all the files locally with Midnight Commander. I have been using rsync with user shares ever since and it works great, just not for the first big transfer to an empty server.


For this reason I do my rsync by disk, and I have my server organized so that certain user shares go to certain disks. Initially unRaid does not require this, but it is best in the long run.

Doing the rsync by disk also requires same-size disks in the source and destination server. If you can't do this, just be aware of the issue above.
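
As a rough sketch, assuming matching disk layouts on both servers, the per-disk version looks like this:

# Sync disk to disk so files land on the same disk at the destination,
# which avoids the user-share "first disk fills up" problem entirely
for d in disk1 disk2 disk3; do
    rsync -av -e ssh "/mnt/$d/" "user@backupserver:/mnt/$d/"
done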

Sent from my chisel, carved into granite

Link to comment
On 3/8/2019 at 3:04 PM, maxse said:

...

2) The Borg program came up. It does pretty much everything I need it to, and it also obfuscates and encrypts the file names (unlike cloudberry). The only downside of Borg seems to be that I would have to back up on the local network first and THEN transfer that backup to the remote server. This could also be done with Minio it seems, but I wonder if for this part I could use rsync with SSH; would it be easier without having to install minio, reverse proxies, etc.?

-- The issue with this is, I may have 40TB to back up... I see Borg does compression, but does that mean I would in effect have to have double the space available? So with 40TB to back up, I would need roughly an additional 40TB free on the machine just for this backup, and then use another method to transfer that new backup to the remote server...

...

You don't have to back up locally first with Borg; it can back up directly over SSH, so there's no need for any additional storage space. You can additionally create append-only SSH keys for unRAID so Borg only has append-only access to the repository, which locks your backups down further.
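
A minimal sketch of that setup (repo path, hostname, and key are assumptions, not a tested recipe):

# Back up straight to a remote repo over SSH; no local staging copy needed
borg create --stats "ssh://root@backupserver/mnt/user/borg-repo::backup-{now}" /mnt/user/share

# On the backup server, pin the client's key in ~/.ssh/authorized_keys so it
# only ever gets append-only access to that one repository:
command="borg serve --append-only --restrict-to-path /mnt/user/borg-repo",restrict ssh-ed25519 AAAA... backup-key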

Edited by ptirmal
Link to comment

@strike thanks for telling me about the issue! I'm going to use user shares, so it would have been a nightmare trying to figure out what I did wrong if I didn't know about it!

 

So an SSH port has to be open on both the main and target servers from what I see, correct? And what is SCP? Any tips on how to have the backup server look for the keyfile on the main server? I didn't see any tutorials or guides for doing that, and I don't really know what it means. I already have DuckDNS running on my main server, so the address will always resolve; that's good.

 

Also, just to make sure I understand: the IPMI power-on command can be sent via SSH to the remote server, right? The remote site won't need PiVPN running? I saw in the linked thread that they switched to SSH completely, but I don't know if they meant that you still need to SSH into the remote rpi to send the IPMI command to turn it on?!

 

I'm pumped about this; my buddy said this would be perfect, and I am happy too because the files will always end up encrypted, since the server will be set to shut down and no one will have access. I'll also figure out how to use the write-protect command suggested above to prevent files being changed in the event of a crypto attack, boom! You guys are the best! I hope you can all help me once I am ready to implement this! I am still waiting on the parts to put everything together: the X8SIL-F board, X3470 processor, and 8GB of RAM. This will be my first time working with used server parts, but it should be similar to consumer components.

Edited by maxse
Link to comment

Yes, if you do it my way you have to have the SSH port open on both servers. You need it open on the remote server to run rsync over SSH, and you need it open on your local server so the remote server can download the keyfile from it. SCP stands for Secure Copy and uses the same port as SSH does. There is an example in my fetch_key script, which I posted earlier, of how to fetch the key via SCP.
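
The gist of it is something like this (hostname and key path are just placeholders; see the actual script for details):

# Fetch the LUKS keyfile from the main server over the SSH port...
scp -P 22 root@mainserver:/boot/keyfile /root/keyfile
# ...use it to unlock the drives and start the array, then destroy the copy:
shred -u /root/keyfile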

 

My backup server doesn't have IPMI, so I can't really answer that one. But you can't do it via SSH on a server which is powered down, since the SSH service isn't running yet. So you need to SSH into another remote machine (a pi?) and send the IPMI command from there. Or maybe there's a way to open a port and communicate with the IPMI service on the remote server directly? I don't know; I haven't researched it and I can't try it.

 

On my setup, I have a small NUC at the remote site, also running unraid, which is always on. It has the Wake On Lan plugin installed among other things, which includes etherwake, and I use the etherwake command to send a magic packet to wake up the backup server. So I SSH into the NUC, which wakes up the backup server, then I SSH into the backup server and run the rest of the backup script. You can do the same thing with a pi or any other machine on the same network, as long as it has SSH and etherwake and is always on.
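
The wake-up part is basically just this (interface and MAC address are made up here):

ssh root@nuc "etherwake -i br0 00:11:22:33:44:55"   # send the magic packet
sleep 120   # give the backup server time to boot before SSHing into it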

 

In any case, I would recommend having a small always-on machine at the remote site running a VPN (if you don't set up a VPN on the remote router, that is). You need to be able to log in to the unraid webui securely if you want to.

 

Edit: A quick google search reveals that IPMI uses UDP port 623, so maybe you can open that to run the IPMI command directly?

And of course, if you can do that then you don't need a pi, as you can power up the server directly and run a VPN server as a docker container on unraid. But as I said, I don't use IPMI so I don't know if it's possible.
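
For what it's worth, if you can reach the BMC somehow (over a VPN, say, rather than a raw forwarded port), the command would be something like this, assuming ipmitool is installed; IP, user, and password are placeholders:

# Power on the backup server via its IPMI/BMC interface
ipmitool -I lanplus -H 192.168.1.50 -U admin -P password chassis power on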

 

Edited by strike
Link to comment

Thanks, amazing! Yes, now that I think about it, they talked about SSHing into the rpi, which is what I'd rather do than sending the IPMI power-on command directly.

 

Quick question that I just realized: this solution doesn't really provide any versioning? But is that even necessary, since deletes will not be synced and the remote server files will be read-only, so no worries about crypto.

But what if one of the people on my home network running Windows gets a virus that then gets to the main unraid server through the mapped drive, and if I don't detect it right away it will get copied to the remote server when rsync runs? Or do things not work that way, and a Windows virus (of any sort) can't make its way to the unraid file system? I was just thinking about crypto, but is that the only "threat" I need to worry about in this scenario? What about the more typical Windows viruses, trojans, etc.? How would I be able to go back in time and restore to before the virus or malware hit? Or does this solution really eliminate the need for versioning in backups?

 

Thank you soooo much again guys! I can't wait to start setting this up! Very exciting

Edited by maxse
Link to comment
6 hours ago, maxse said:

Thanks, amazing! Yes, now that I think about it, they talked about SSHing into the rpi, which is what I'd rather do than sending the IPMI power-on command directly.

 

Quick question that I just realized: this solution doesn't really provide any versioning? But is that even necessary, since deletes will not be synced and the remote server files will be read-only, so no worries about crypto.

But what if one of the people on my home network running Windows gets a virus that then gets to the main unraid server through the mapped drive, and if I don't detect it right away it will get copied to the remote server when rsync runs? Or do things not work that way, and a Windows virus (of any sort) can't make its way to the unraid file system? I was just thinking about crypto, but is that the only "threat" I need to worry about in this scenario? What about the more typical Windows viruses, trojans, etc.? How would I be able to go back in time and restore to before the virus or malware hit? Or does this solution really eliminate the need for versioning in backups?

 

Thank you soooo much again guys! I can't wait to start setting this up! Very exciting

If the files you're backing up get changed, the changes will propagate to your backup; rsync doesn't version. You could look at rclone or Borg like you mentioned before. You can also add some lines so that a changed file keeps the old copy by renaming it with the date. This kind of works, but it's a real pain when dealing with a high quantity of files.
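
Roughly, using rsync's built-in backup flags (paths are placeholders):

# Overwritten/changed files are moved into a dated directory on the backup
# server instead of being replaced in place; crude versioning, nothing more
rsync -av --backup --backup-dir=/mnt/user/versions/$(date +%F) \
    /mnt/user/share/ user@backupserver:/mnt/user/share/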

Link to comment

Yeah, if you need versioning you need to look into other things, maybe combined with rsync, like snapshotting all the disks using the BTRFS file system as already mentioned. I don't really know what a virus can do, though, if it gets to your backup server. I mean, it only turns on for backup, and it can't do anything with your files since they are read-only; even the infected file(s) get put in a read-only state once backed up. I guess it depends on what the virus does. Of course, if you make your files writable again the virus can make any changes it wants.
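
For the BTRFS route, a dated read-only snapshot per disk is a one-liner, something like this (assuming the disk is formatted as BTRFS; paths are placeholders):

mkdir -p /mnt/disk1/snapshots
# -r makes the snapshot read-only, so nothing can modify it after the fact
btrfs subvolume snapshot -r /mnt/disk1 /mnt/disk1/snapshots/$(date +%F)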

Link to comment
22 hours ago, strike said:

Yes, if you do it my way you have to have the SSH port open on both servers. You need it open on the remote server to run rsync over SSH, and you need it open on your local server so the remote server can download the keyfile from it. SCP stands for Secure Copy and uses the same port as SSH does. There is an example in my fetch_key script, which I posted earlier, of how to fetch the key via SCP.

 

In any case, I would recommend having a small always-on machine at the remote site running a VPN (if you don't set up a VPN on the remote router, that is). You need to be able to log in to the unraid webui securely if you want to.

 

Edit: A quick google search reveals that IPMI uses UDP port 623, so maybe you can open that to run the IPMI command directly?

And of course, if you can do that then you don't need a pi, as you can power up the server directly and run a VPN server as a docker container on unraid. But as I said, I don't use IPMI so I don't know if it's possible.

The problem with connecting to IPMI remotely is that IPMI has so many security holes (google "ipmi security hole") that having it port-forwarded to the wider internet is totally not recommended. The rpi provides a secure, always-on connection that can also host a VPN: perfect for many things you might need to do remotely and securely.

 

Here are some recent IPMI vulnerabilities: https://kb.iweb.com/hc/en-us/articles/230268508-Protecting-Intelligent-Platform-Management-Interface-IPMI-devices

Link to comment
On 3/18/2019 at 5:29 PM, tr0910 said:

You could manually run a cleanup that replicates deletes using rsync on a user-controlled basis rather than automatically.

This is what I do. Every 90 days or so, if I have not noticed anything accidentally deleted, I manually run an rsync with delete, which removes from the backup server anything not on the source server. I do this by share (not by disk), which is easier because my backups are by share anyway. This way I can choose which share(s) to clean up regardless of how many disks they span. However, even if the regularly scheduled backups are by disk, the delete rsync could still be by share; it would just be a different command.
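
The cleanup run is just the normal backup command plus --delete, per share, something like this (share name and hostname are placeholders):

# Preview with -n (dry run) first to see what would be deleted:
rsync -avn --delete /mnt/user/Documents/ user@backupserver:/mnt/user/Documents/
# Then run it for real:
rsync -av --delete /mnt/user/Documents/ user@backupserver:/mnt/user/Documents/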

Edited by Hoopster
Link to comment

Thanks guys. I'll stick with rsync and the method I decided on. I've already looked into all the software and this seems like the most reliable and "simple" method. I do not need versioning... well, I thought I did, but I guess making the files read-only covers that.

 

My question is more this: if the virus files got backed up and I now need to wipe the main computer and restore from the remote, I would in effect be restoring some of the virus files too, no? How will I know which files are the malware or virus files?

 

For IPMI, yeah, I figured it's not optimal to forward it directly. I'll do the rpi thing.

Edited by maxse
Link to comment
11 hours ago, maxse said:

Thanks guys. I'll stick with rsync and the method I decided on. I've already looked into all the software and this seems like the most reliable and "simple" method. I do not need versioning... well, I thought I did, but I guess making the files read-only covers that.

 

My question is more this: if the virus files got backed up and I now need to wipe the main computer and restore from the remote, I would in effect be restoring some of the virus files too, no? How will I know which files are the malware or virus files?

 

For IPMI, yeah, I figured it's not optimal to forward it directly. I'll do the rpi thing.

Manually, or maybe with a script, depending on how the ransomware propagated the encryption. That's where versioning would come in.

 

I'm not 100% sure how rsync behaves with ransomware: would it simply see the change in the file and overwrite the old one, or would it see it as a new file and keep the old one? I also don't know that read-only access really protects you, since that's only read-only on the backup machine; your main server that you're backing up clearly has write access, no?

 

I did the rsync-over-SSH thing, but didn't think it was a powerful enough solution for the long term.

Edited by ptirmal
Link to comment
19 hours ago, maxse said:

How will I know which files are the malware or virus files?

Having an updated OS, running virus scans regularly, keeping frequent backup intervals (at least once a week), and being cautious about clicking links in e-mails from untrusted senders go a long way toward staying virus-free. But things do happen, especially if you have kids clicking around on everything.

 

So say you run the backup once a week; then you have a window of 7 days in which you can potentially get infected. If in those 7 days you get a virus, and it then gets to the backup server on the next sync, the infected copy is locked by setting it read-only so it can't do any harm to your backup. So by this point you have run a virus scan on your main machine and confirmed that you have a virus. Now what?

 

To know whether it's safe to restore the backup and make the files writable again, or whether they're infected, you need to implement checksumming.

I don't know how tech-savvy you are, but every file gets a checksum, which is a string of numbers and letters unique to that file. If the file changes in any way it gets a new checksum, so an infected file would have a different checksum than the original. That's how you check whether a file is indeed the original or whether it has been changed or corrupted in any way.

 

Now how do you implement this? You could use the BTRFS file system, which has automatic checksumming. If you're not going to use BTRFS, you could try the File Integrity plugin, which you can set up to run on a schedule so you'll get notified if a file has been corrupted. Or you could use some other checksumming tool; I know there is one for Windows which many around here use, but I can't remember the name, you'll find it if you search. I used it myself some time ago.
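
If you want to do it by hand instead, even plain sha256sum gets you there (a minimal sketch; the share path is a placeholder):

# Build a checksum manifest of every file in the share...
find /mnt/user/share -type f -exec sha256sum {} + > /boot/manifest.sha256
# ...and verify it later; only files whose checksums changed are reported
sha256sum -c --quiet /boot/manifest.sha256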

 

But what if you've confirmed that both files are infected and you can't restore the original file? Well, then you're screwed if you do not use versioning, it's as simple as that. I'm really a nazi about my files and how I use the internet, so I haven't needed versioning, yet... I do plan to implement it in the near future though.

 

Link to comment

Ransomware is nasty. Rsync backups to a remote server provide no protection: once the files are hit by ransomware, they get backed up remotely, overwriting the good files. Here is how I protect against it.

1. Read-only access to media files where write access isn't necessary.
2. Local snapshot-versioning backups of frequently changed files such as documents. Make sure the snapshots are read-only.

Rsync off-site backup protects against flood, fire, theft, and other destruction, but not ransomware.


Sent from my chisel, carved into granite

Link to comment
16 hours ago, strike said:

I haven't needed versioning, yet... I do plan to implement it in the near future though. 

On second thought, I'm not really sure I need it. All my media files (which are kinda what I care about anyway) on the main server I set to read-only once a week, and all other SUPER important files I have backed up in various clouds (in addition to my backup server) which use versioning, so I can restore. And all the files that get transferred to the backup server get put in a read-only state at the end of the backup. Worst case scenario, if a virus of any kind makes it to one or both of my servers and tries to wreak havoc on my files, I lose maybe one week of media files. All other files I can restore from the cloud.

 

So I think I'm pretty safe. I may look at versioning/snapshotting anyway, just because I want to learn, and it could be convenient to have snapshots if the worst case happens. But do I NEED it? No, I don't think so.
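
The read-only step at the end of my backup script is nothing fancy, by the way; it boils down to something like this (share path is a placeholder):

# Strip write permission from everything that was just synced; note this
# stops stray writes and ransomware running as a normal user, but not root
chmod -R a-w /mnt/user/share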

Edited by strike
Link to comment
  • 3 weeks later...

Thank you guys!! All my parts finally came in for my backup server and I put it together today, and everything is working great! I used the Node 804 case and the Supermicro X8SIL-F with a Xeon X3470 processor.

 

I'm transferring files in Krusader now from main --> backup server. Then the software part starts, implementing what you all suggested in this thread. Just to restate what I want to do:

Power on the remote server with a raspberry pi, somehow unlock the encrypted drives when unraid powers up (haven't researched the optimal way to do this yet), run the rsync-over-SSH backup of changed files, then make the files read-only before shutting down the server. Rinse and repeat every night :)

I have no clue how I'm going to actually set all of the above up, but one step at a time. I've just put the parts together; now I'll re-read all the threads linked in this post.

 

BTW, any suggestions on which raspberry pi model I need to get to implement all of this so that it runs smoothly?

I found this kit, but it's not exactly cheap at $75. Not sure if I could get a cheaper, lower-power version? There are so many versions of this thing lol.

 

https://www.amazon.com/gp/product/B01C6Q2GSY/ref=oh_aui_search_asin_title?ie=UTF8&psc=1

Link to comment
19 minutes ago, tr0910 said:

Most of those kits include things you do not need. You only need an 8GB card. Your old cell phone charger with Micro USB will power the pi, and the pi itself is only $35. A case for the pi should only cost a couple of dollars.

That is assuming you need that powerful a Raspberry Pi for the intended use? It sounded to me like even a Pi Zero W might be enough, and that comes in at only $10.

Link to comment

The lack of an RJ45 ethernet port is what makes the Zero a non-starter for me. To make this bulletproof, wired ethernet connections are preferred.

 

But yeah, it has enough power for the job if all it needs to do is wake up your backup server via IPMI. If you also want it to do dual duty as an OpenVPN server, it's a bit slow.

Link to comment
