
Time for another (backup) server


ConnerVT

Recommended Posts

I've just about completed the upgrades to my Unraid server.  Was taking an inventory of all of the spare hardware I now have, and it got me thinking.  Perhaps it is time to build a backup server.  I already have a solid backup strategy for all of my data, except for my media.  This backup server would be just the place to plug that hole.

 

My hope for this thread is for the folks here who have an Unraid backup server to share their solutions for backing up one server to another.  For me, ideally it would back up my TV and (1080p) Movie folders, which are part of my Media share, perhaps once a month.  The backup server would be powered down until it is time to sync the two.  That saves power and helps isolate the backup from any bad actors.

 

If you have a setup like this, please tell how you implement it.  Do you run a script or utilize a docker?  Run both servers on your LAN or direct connect the two?  Does your backup server run 24/7 or do you power it up/down as needed, and if the latter, how do you accomplish this?  I am really interested in hearing how you approached this, as I'm just at the beginning steps of implementing something here.

Link to comment

I'm using Vorta (Borg).  You can use it on your regular server and point the Vorta container to your backup server as a target.

 

However, I do something a bit weird.  I syncthing my data from my regular server to the backup server and then back up that data using Vorta.  So the backup server has two copies of everything: one that is synchronized and the other that is the backup.  The drawback is that you need double the space.  The good thing is that it's easy to restore (because syncthing repropagates everything back to all the other servers) and all the work (the Vorta backup process) happens on my most powerful server (the backup server).

 

In your case I would just use the backup server as a target.  Bring it up, run Vorta on your main file server so that the backup server gets a new copy, and then shut down the backup server until next time.
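
Since Vorta is a front end for Borg, one run of that kind of backup boils down to something like the following on the main server.  This is only a rough sketch; the repo path, host name, and archive naming here are placeholders, not the actual setup described above:

# Back up the media share into a borg repo on the backup server, over ssh
borg create --stats --compression lz4 \
    ssh://root@backupserver/mnt/user/backups/borg-repo::'media-{now:%Y-%m-%d}' \
    /mnt/user/Media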

  • Thanks 1
Link to comment

Thanks for the input so far - keep it coming!

 

I definitely am looking at the backup server to be a part-time operation.  It will be built from retired parts: a Silverstone SG11 (Sugo) case and 3 non-He HGST 6TB drives I replaced in my current server.  The SG11 was the case for my daily driver PC, retired as it has a poor cooling design (so not great for a system sitting next to my desk).  The non-He drives run about 3 to 4C warmer than the He-filled drives that replaced them.  Running the server, even for a day a week with the fans all wound up, is OK by me.  Running 24/7/365 just isn't desired, especially as the data it will hold is replaceable, just not backed up anywhere at this time.  Having almost all of the data is better than none in this situation.

 

I know there are folks here on the forum who have automated their BU solutions (be it by WOL or ???).  Would love to have them chime in with how they sync things.  I imagine there are a number of issues, such as determining the remote server is ready, knowing when the backup is completed, and powering down.

Link to comment
6 hours ago, ConnerVT said:

If you have a setup like this, please tell how you implement it.

A little over four years ago, I did what you are thinking of doing.  I upgraded my main server and had enough parts laying around that I decided to turn them into a backup server.

 

My setup has been going unattended for a little over four years now.

  1.  The backup server is powered down until the backup script runs (automated through User Scripts once a week)
  2.  The backup server is powered on via IPMI. Before I had a motherboard with IPMI in the backup server, the script just woke the server from sleep and put it back to sleep when finished
  3.  The backup performs a disk-by-disk rsync copy of all files that are new or changed since the last backup.  Before I had the same number of same-size disks in both servers, I had the script configured to back up based on shares, not disks.  The backup server originally had smaller disks than the main server, but over time I have upgraded them to the same size as the main server.
  4.  The script sends me a summary via email of what was backed up to each disk (used to be summary by share).
  5.  The backup server is powered down via IPMI to await the next backup.

 

The preparation for this was to get login to the backup server automated via SSH keys.
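
For reference, that usually comes down to key-based authentication.  A sketch of the typical steps (the host and user here are examples; also note that on Unraid the keys may need to be persisted to the flash drive to survive a reboot, so verify that on your version):

# On the main server: generate a key pair (no passphrase, for unattended use)
ssh-keygen -t rsa -f /root/.ssh/id_rsa -N ""
# Install the public key on the backup server so rsync-over-ssh needs no password
ssh-copy-id -i /root/.ssh/id_rsa.pub [email protected]
# Verify that passwordless login works
ssh -i /root/.ssh/id_rsa [email protected] "echo connected"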

 

Here is what my script looks like currently:

#!/bin/bash
#description=This script backs up shares on MediaNAS to BackupNAS
#arrayStarted=true

echo "Starting Sync to BackupNAS"
echo "Starting Sync $(date)" >> /boot/logs/cronlogs/BackupNAS_Summary.log

# Power On BackupNAS
ipmitool -I lan -H 192.168.1.17 -U admin -P {password} chassis power on

# Wait for 3 minutes
# echo "Waiting for BackupNAS to power up..."
sleep 3m

echo "Host is up"
sleep 10s

	# Set up email header
	echo To: {my email address} >> /boot/logs/cronlogs/BackupNAS_Summary.log
	echo From: {sending email address} >> /boot/logs/cronlogs/BackupNAS_Summary.log
	echo Subject: MediaNAS to BackupNAS rsync summary >> /boot/logs/cronlogs/BackupNAS_Summary.log
	echo   >> /boot/logs/cronlogs/BackupNAS_Summary.log
				
	# Backup Disk 1
	echo "Copying new files to Disk 1 =====  $(date)"
	echo "Copying new files to Disk 1 =====  $(date)" >> /boot/logs/cronlogs/BackupNAS_Summary.log
	echo "Copying new files to Disk 1 =====  $(date)" >> /boot/logs/cronlogs/BackupNAS_Disk1.log

	rsync -avu --stats --numeric-ids --progress --exclude 'Backups' -e "ssh -i /root/.ssh/id_rsa -T -o Compression=no -x"  /mnt/disk1/ [email protected]:/mnt/disk1/  >> /boot/logs/cronlogs/BackupNAS_Disk1.log
	
	# Backup Disk 2
	echo "Copying new files to Disk 2 =====  $(date)"
	echo "Copying new files to Disk 2 =====  $(date)" >> /boot/logs/cronlogs/BackupNAS_Summary.log
	echo "Copying new files to Disk 2 =====  $(date)" >> /boot/logs/cronlogs/BackupNAS_Disk2.log

	rsync -avu --stats --numeric-ids --progress --exclude 'Backups' -e "ssh -i /root/.ssh/id_rsa -T -o Compression=no -x"  /mnt/disk2/ [email protected]:/mnt/disk2/  >> /boot/logs/cronlogs/BackupNAS_Disk2.log

	# Backup Disk 3
	echo "Copying new files to Disk 3 =====  $(date)"
	echo "Copying new files to Disk 3 =====  $(date)" >> /boot/logs/cronlogs/BackupNAS_Summary.log
	echo "Copying new files to Disk 3 =====  $(date)" >> /boot/logs/cronlogs/BackupNAS_Disk3.log

	rsync -avu --stats --numeric-ids --progress --exclude 'Backups' -e "ssh -i /root/.ssh/id_rsa -T -o Compression=no -x"  /mnt/disk3/ [email protected]:/mnt/disk3/  >> /boot/logs/cronlogs/BackupNAS_Disk3.log
	
	# Backup Disk 4
	echo "Copying new files to Disk 4 =====  $(date)"
	echo "Copying new files to Disk 4 =====  $(date)" >> /boot/logs/cronlogs/BackupNAS_Summary.log
	echo "Copying new files to Disk 4 =====  $(date)" >> /boot/logs/cronlogs/BackupNAS_Disk4.log

	rsync -avu --stats --numeric-ids --progress --exclude 'Backups' -e "ssh -i /root/.ssh/id_rsa -T -o Compression=no -x"  /mnt/disk4/ [email protected]:/mnt/disk4/  >> /boot/logs/cronlogs/BackupNAS_Disk4.log
	
	# Backup Disk 5
	echo "Copying new files to Disk 5 =====  $(date)"
	echo "Copying new files to Disk 5 =====  $(date)" >> /boot/logs/cronlogs/BackupNAS_Summary.log
	echo "Copying new files to Disk 5 =====  $(date)" >> /boot/logs/cronlogs/BackupNAS_Disk5.log

	rsync -avu --stats --numeric-ids --progress --exclude 'Backups' -e "ssh -i /root/.ssh/id_rsa -T -o Compression=no -x"  /mnt/disk5/ [email protected]:/mnt/disk5/  >> /boot/logs/cronlogs/BackupNAS_Disk5.log
	
	echo "moving to end =====  $(date)"
	echo "moving to end =====  $(date)" >> /boot/logs/cronlogs/BackupNAS_Summary.log

	
	# Add in the summaries
	cd /boot/logs/cronlogs/
	echo ===== > Disk1.log
	echo ===== > Disk2.log
	echo ===== > Disk3.log
	echo ===== > Disk4.log
	echo ===== > Disk5.log
	echo Disk1 >> Disk1.log
	echo Disk2 >> Disk2.log
	echo Disk3 >> Disk3.log
	echo Disk4 >> Disk4.log
	echo Disk5 >> Disk5.log
	tac BackupNAS_Disk1.log | sed '/^Number of files: /q' | tac >> Disk1.log
	tac BackupNAS_Disk2.log | sed '/^Number of files: /q' | tac >> Disk2.log
	tac BackupNAS_Disk3.log | sed '/^Number of files: /q' | tac >> Disk3.log
	tac BackupNAS_Disk4.log | sed '/^Number of files: /q' | tac >> Disk4.log
	tac BackupNAS_Disk5.log | sed '/^Number of files: /q' | tac >> Disk5.log


	# now add all the other logs to the end of this email summary
	cat BackupNAS_Summary.log Disk1.log Disk2.log Disk3.log Disk4.log Disk5.log > allshares.log
	zip BackupNAS BackupNAS_*.log 
	
	# Send email of summary of results
	ssmtp {my email address} < /boot/logs/cronlogs/allshares.log
	cd /boot/logs/cronlogs  
	mv BackupNAS.zip "$(date +%Y%m%d_%H%M)_BackupNAS.zip"
	rm *.log
	
	#Power off BackupNAS gracefully
	sleep 30s
	ipmitool -I lan -H 192.168.1.17 -U admin -P {password} chassis power soft
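
One judgment call in there is the fixed three-minute wait after power-on.  A hedged alternative is to poll until the host actually answers, in case boot time varies; a sketch, not part of the script above (same IP as in the script):

# Poll for up to ~5 minutes until BackupNAS responds to ping
for i in $(seq 1 30); do
    if ping -c 1 -W 2 192.168.1.17 > /dev/null 2>&1; then
        echo "Host is up"
        break
    fi
    sleep 10
done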

 

Here is a copy of the summary email I get when each backup completes:

Copying new files to Disk 1 =====  Mon Apr 17 01:03:13 MDT 2023
Copying new files to Disk 2 =====  Mon Apr 17 01:03:28 MDT 2023
Copying new files to Disk 3 =====  Mon Apr 17 01:03:32 MDT 2023
Copying new files to Disk 4 =====  Mon Apr 17 01:03:39 MDT 2023
Copying new files to Disk 5 =====  Mon Apr 17 01:05:12 MDT 2023
moving to end =====  Mon Apr 17 01:05:13 MDT 2023
=====
Disk1
Number of files: 146,607 (reg: 145,793, dir: 814)
Number of created files: 0
Number of deleted files: 0
Number of regular files transferred: 0
Total file size: 4,980,993,327,789 bytes
Total transferred file size: 0 bytes
Literal data: 0 bytes
Matched data: 0 bytes
File list size: 131,071
File list generation time: 0.001 seconds
File list transfer time: 0.000 seconds
Total bytes sent: 2,442,573
Total bytes received: 1,279

sent 2,442,573 bytes  received 1,279 bytes  168,541.52 bytes/sec
total size is 4,980,993,327,789  speedup is 2,038,173.07
=====
Disk2
Number of files: 32,838 (reg: 32,110, dir: 728)
Number of created files: 0
Number of deleted files: 0
Number of regular files transferred: 0
Total file size: 3,861,096,255,504 bytes
Total transferred file size: 0 bytes
Literal data: 0 bytes
Matched data: 0 bytes
File list size: 0
File list generation time: 0.001 seconds
File list transfer time: 0.000 seconds
Total bytes sent: 576,189
Total bytes received: 870

sent 576,189 bytes  received 870 bytes  128,235.33 bytes/sec
total size is 3,861,096,255,504  speedup is 6,690,990.45
=====
Disk3
Number of files: 62,940 (reg: 61,611, dir: 1,329)
Number of created files: 0
Number of deleted files: 0
Number of regular files transferred: 0
Total file size: 3,328,714,163,265 bytes
Total transferred file size: 0 bytes
Literal data: 0 bytes
Matched data: 0 bytes
File list size: 0
File list generation time: 0.001 seconds
File list transfer time: 0.000 seconds
Total bytes sent: 1,137,723
Total bytes received: 1,580

sent 1,137,723 bytes  received 1,580 bytes  151,907.07 bytes/sec
total size is 3,328,714,163,265  speedup is 2,921,711.05
=====
Disk4
Number of files: 35,038 (reg: 34,478, dir: 560)
Number of created files: 524 (reg: 511, dir: 13)
Number of deleted files: 0
Number of regular files transferred: 511
Total file size: 2,747,653,356,727 bytes
Total transferred file size: 6,706,090,028 bytes
Literal data: 6,706,090,028 bytes
Matched data: 0 bytes
File list size: 0
File list generation time: 0.001 seconds
File list transfer time: 0.000 seconds
Total bytes sent: 6,708,322,877
Total bytes received: 10,516

sent 6,708,322,877 bytes  received 10,516 bytes  72,522,523.17 bytes/sec
total size is 2,747,653,356,727  speedup is 409.59
=====
Disk5
Number of files: 35,038 (reg: 34,478, dir: 560)
Number of created files: 524 (reg: 511, dir: 13)
Number of deleted files: 0
Number of regular files transferred: 511
Total file size: 2,747,653,356,727 bytes
Total transferred file size: 6,706,090,028 bytes
Literal data: 6,706,090,028 bytes
Matched data: 0 bytes
File list size: 0
File list generation time: 0.001 seconds
File list transfer time: 0.000 seconds
Total bytes sent: 6,708,322,877
Total bytes received: 10,516

sent 6,708,322,877 bytes  received 10,516 bytes  72,522,523.17 bytes/sec
total size is 2,747,653,356,727  speedup is 409.59

 

Edited by Hoopster
  • Like 3
Link to comment

Good topic. It's always good to learn how other people do things their way.

I think the way you back up your data depends on how much you have and how precious it is to you. We have a whopping 1TB of pictures/videos and documents to back up.

So I just sync my Windows folders to Unraid using Resilio Sync. Overnight, my Unraid server copies the new data to another Unraid server using rsync -avh in a cron job. WOL is used to wake the backup server.
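
The moving parts of that overnight job are just a WOL packet plus rsync on a schedule.  A minimal sketch of what such a crontab entry could look like (the MAC address, paths, host, and time are placeholders, not the actual setup):

# Wake the backup server at 02:00, give it two minutes to boot, then sync
0 2 * * * etherwake -b AA:BB:CC:DD:EE:FF && sleep 120 && rsync -avh /mnt/user/data/ root@backupserver:/mnt/user/data/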

Besides that, I also use the default Windows backup solution to back up to my Unraid server in case my Windows install goes kaput.

I like to minimize my docker apps to keep it simple, but I am experimenting with Tailscale to sync my backup to a remote server. Right now my data is stored on 3 separate machines, but I feel I need to have an off-site backup too.

If you need to back up 10TB+ then you probably need a more robust solution.

  • Thanks 1
Link to comment

The script I posted above does not delete anything from the backup server, so the backup server is my fail-safe against anything accidentally deleted on the main server.  Once, accidentally of course, my wife deleted an entire year's worth of photos.  No problem, back they came from the backup server.

 

Every few months, I run a cleanup script.  It is identical to the one above except that it adds the "--delete-during" parameter, which deletes files on the destination that are no longer on the source.
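
In practice that means each rsync line gains one flag.  Based on the Disk 1 line from the script above, the cleanup version would look like this:

rsync -avu --delete-during --stats --numeric-ids --progress --exclude 'Backups' -e "ssh -i /root/.ssh/id_rsa -T -o Compression=no -x"  /mnt/disk1/ [email protected]:/mnt/disk1/  >> /boot/logs/cronlogs/BackupNAS_Disk1.log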

 

I also back up key shares (10-11TB of data) to external USB drives via Unassigned Devices.

  • Thanks 1
Link to comment
2 hours ago, Hoopster said:

I also back up key shares (10-11TB of data) to external USB drives via Unassigned Devices.

 

I currently do the same for what are effectively backups of my other computers' backups.  PCs/devices back up onto my (main) Unraid server, then those backups go onto an external USB drive weekly.  Every month or so, I swap the USB drive with another, which I keep offsite.  Belt, suspenders, and some duct tape.  I use a script for this, similar to Hoopster's but much more rudimentary.  I plan to borrow some from what he posted, thanks for that.

 

The backup server I plan to put together will hold much more static (and more replaceable) data, so a once-a-month sync with the main server will do.

 

Curious how folks handle things without IPMI to do wake/sleep/login on the destination server.

 

Link to comment
13 minutes ago, ConnerVT said:

Curious how folks handle things without IPMI to do wake/sleep/login on the destination server.

IIRC, when I was just sleeping/waking the server before I had IPMI, I was doing the following:

  • "echo 3 >/proc/acpi/sleep" to put the server to sleep at the end of the script.  This can be "iffy" depending on the motherboard/NIC
  • I found some command line options involving the IP address and MAC address I could run in the script to wake the server.  Cannot remember exactly what that was

Maybe there are some useful hints here.  

 

IPMI is just so useful, I can't remember fully what it was like before I had it on my backup server motherboard.

  • Like 1
Link to comment
37 minutes ago, ConnerVT said:

Curious how folks handle things without IPMI to do wake/sleep/login on the destination server.

I use the 'Dynamix S3 Sleep' plugin on my backup server and wake it with a user script on my main server with 'etherwake -b macaddressofpctowakeup'.

The backup server goes back to sleep when there is no drive activity for 20 min.

 

I never had a motherboard with IPMI and I don't know how it works, so I'll stick with this basic and simple solution. :)

  • Like 1
  • Thanks 1
Link to comment
55 minutes ago, Aran said:

etherwake -b macaddressofpctowakeup

That's it!  That's what I used to wake from sleep before I had IPMI.

 

By the way, I have also seen unattended backup solutions that used a smart power switch to turn the backup computer on and off when needed.

Edited by Hoopster
  • Like 2
Link to comment

Excellent!  Exactly the kind of conversation I was hoping this thread would produce.

 

IPMI would make for a neater solution, but I'm not buying any more hardware for a while.  Just dropped $$$ on four 16TB drives (that's why I have the drives for this build).  I did pick up a PiKVM from someone on this forum (*wink*), but it is working wonderfully fulfilling its planned purpose on the main server.

 

Let's see who else may chime in with some backup server solutions.

Link to comment
On 4/18/2023 at 2:04 PM, Aran said:

The backup server goes back to sleep when there is no drive activity for 20 min.

 

That's cool.  This is probably a stupid question but how do you do that?

 

On 4/18/2023 at 4:36 AM, Aran said:

If you need to back up 10TB+ then you probably need a more robust solution.

 

Not to take anything away from @Hoopster and rsync, because I like that solution (especially the logging/email part), but I really like the backup software I see out there.  Since Vorta/Borg (and others) use dedupe, it's pretty efficient, and there's a history of file locations if files are moved around.  I used to have files in six different places, couldn't remember why I had them there, and they were taking up lots of space because of that.

 

My backup runs each night and I have a detailed prune cycle, so I keep daily info for a couple weeks, then weekly info for a few months, then monthly info for a couple years, etc.  My largest archive is 25TB and it runs pretty well at that size (although if you add a TB of data it can take a day to process it all).
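
That retention cycle maps directly onto borg's prune options.  A sketch of a comparable policy (the exact counts and repo path here are illustrative, not the actual settings):

# Keep 14 daily, 12 weekly, and 24 monthly archives; prune everything older
borg prune --list --keep-daily 14 --keep-weekly 12 --keep-monthly 24 /mnt/user/backups/borg-repo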

 

Here's an example of a couple movie folders that I renamed.  In the end they don't take up any extra space and I know what happened if I wonder about it later.

 

[Screenshots: Vorta archive listings showing the renamed movie folders.]

 

Also, thanks to @Aran and @Hoopster.  As @ConnerVT points out, your suggestions are gold.  I love the remote management.

  • Like 1
Link to comment
13 hours ago, TimTheSettler said:

 

That's cool.  This is probably a stupid question but how do you do that?

 

It's an option in the 'Dynamix S3 Sleep' plugin 😋

 

When in the S3 sleep state the RAM is still powered, so all data stored in RAM will/should be preserved while sleeping. I think S4 writes the data to the hard drive so it is preserved even through a sudden power loss. However, I don't know much about the different sleep states or which state is best for an Unraid server TBH, so correct me if I'm wrong.
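
For what it's worth, Linux exposes those states through sysfs, so you can check what the board supports and trigger them by hand.  This is the standard kernel interface; whether S3/S4 resume cleanly still depends on the hardware:

cat /sys/power/state          # lists supported states, e.g. "freeze mem disk"
echo mem > /sys/power/state   # suspend to RAM (S3)
echo disk > /sys/power/state  # hibernate to disk (S4), if supported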

Anyway, this is a little off-topic.

 

I have never used Vorta, urBackup or any other solution, but maybe the time is now. Thanks to this thread 🙂

 

BTW,

 

What do you folks actually back up regarding your unraid server?

 

I have:

- appdata folder (CA Backup/restore appdata plugin)

- libvirt volume (CA Backup/restore appdata plugin)     --> is this even needed?

- flash drive (My Servers plugin)

- user share (rsync)

- media share (rsync)

 

I want:

- VM backup solution     --> should I just 'stop vm - copy vdisk.img - start vm'?
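
That stop/copy/start idea can be scripted.  A minimal sketch, assuming the virsh CLI is available and a VM named 'Windows10' (the name and paths are hypothetical):

#!/bin/bash
VM="Windows10"
virsh shutdown "$VM"                               # ask the guest for a clean shutdown
while [ "$(virsh domstate "$VM")" != "shut off" ]; do
    sleep 10                                       # wait until the VM is actually off
done
# Copy the vdisk; --sparse keeps the copy from ballooning to full size
cp --sparse=always "/mnt/user/domains/$VM/vdisk1.img" "/mnt/user/backups/$VM-vdisk1.img"
virsh start "$VM"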

 

P.S.: I feel like I'm hijacking your thread; please tell me if I am.

 

Edited by Aran
Link to comment

No worries.  It is a broad subject, so it isn't hijacking as long as you are bringing something to the table.  You offered me food for thought with your S3/S4 sleep comment.  Something to look into more deeply once I reach that point.

 

As for "what" I back up, I first need to say what's on my server.  It has backups of my other PCs and devices, about 20TB of media, and an aggregate compilation of my family's photos.  I also have a NextCloud instance, but I am not very impressed with its performance; the data it holds is duplicated elsewhere, so I will likely just blow it all away anyway.

 

Appdata, libvirt and flash are backed up weekly (old Appdata backup plugin, I'm still on 6.10.3)

Photos and music media are on an external USB drive, kept off site.  Very static, I tend to bring the drive home and update it every few months.  (rsync)

Backup files, as well as Appdata, libvirt and flash, are backed up weekly to an external USB drive (rsync).  Each month, I swap that drive with another kept offsite.

 

The gap I have is in the backup of my media (movies and TV).  It is all replaceable, but it would be a heartbreaking loss.  I have 18TB of drives now sitting in a drawer, all too small to be of use in my current server.  In the backup server I'm planning, they will hold my TV and (1080p) movies with plenty of room to spare.

Link to comment
7 hours ago, Aran said:

What do you folks actually back up regarding your unraid server?

  • Appdata once a month (automated).
  • Flash drive every three months or so (manually).
  • All data (documents, pictures, media, etc.) (~28TB) synchronized real-time (syncthing) between four servers in three different locations with daily backup (Vorta/Borg).

 

Link to comment

The build has begun.  This may be the least expensive system I've ever assembled (ignoring the fact that I already spent $$$ when I first acquired the parts).  Nearly everything is from upgrades done to other systems around the house.  I bought a mid-grade power supply, an 80mm fan and a header-to-USB cable.  Having 16TB of drives and no use for them is not a problem I've usually had in the past.

 

Building in a Silverstone SG11 Sugo case.  It was originally my daily driver's case, but the cooling isn't that great without really spinning up the fans, which goes against the main reason to build a SFF/MFF system.  It sits next to the desk in the family room, so it needs to be quiet.  Fan noise is not an issue on the server bench in the basement.

 

The build went smoothly, except for one SATA cable I forgot to attach to a drive, and the usual "Why can't I get this *#@% flash drive to boot??!?".  All now resolved.  Done messing with it for today and I will pick back up on it tomorrow.  The operation will move downstairs, to put the system on the network, get basic Unraid functions working and start configuring it for its backup functions.

Edited by ConnerVT
Link to comment

Finishing up the last of several 12-hour work days, so I haven't had time to "play with my toys" (as my wife would say).

 

The plan is to keep this machine as simple and basic as possible.  No parity, no cache, VM and Docker services turned off.  Just a plain ol' NAS.  I could have done the same thing with a 20TB drive, but what fun is that?

 

Plan for the upcoming days:

  • Get a persistent SSH connection from the main server to the backup server.
  • Test that rsync can write/update to the backup server.
  • Install the S3 Sleep plugin on the backup server and test that WOL works reliably.
  • Write a script that will wake the backup server, rsync the desired folders (a subset of my Media share), put the backup server back to sleep, and send a notification of a job well done.

Once all is working and happy, I'll then see if I can transition from apcupsd to NUT, as both servers, as well as my OPNSense firewall mini PC, all run from the same UPS.
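
In case it helps with the NUT part of that plan: the usual split is one box talking to the UPS over USB and the others monitoring it over the network.  A sketch of the two config pieces (section name, host, and credentials are placeholders):

# /etc/nut/ups.conf on the server physically connected to the UPS
[apc]
    driver = usbhid-ups
    port = auto

# /etc/nut/upsmon.conf on each machine sharing that UPS
MONITOR [email protected] 1 upsmon secretpass slave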

Link to comment



ConnerVT said:

Write a script that will wake the backup server, rsync the desired folders (a subset of my Media share), put the backup server back to sleep, and send a notification of a job well done.


You can use the UD plugin to mount the SMB shares of the main server and set it to 'auto mount'. I like to keep my scripts to a minimum because I'm not a CLI guy.

Off topic: what hardware do you use as your OPNsense router? I also use OPNsense.

Link to comment

I could go the SMB route.  But it would add another layer of complexity that isn't needed in this case.  The backup server will be sleeping nearly 99% of the time (once initially loaded) and, in a perfect world, never needs to be accessed other than for the backup task once a month.

 

I don't mind diving into the cli world.  As I said earlier, it would be much simpler for me to just toss a 20TB drive in my main server and back up to that.  I use these projects to keep me up to date on things.  Hate to say it, but I've been messing with computer tech, both professionally and as a hobby, for nearly 40 years now.  The toys I took apart as a kid had motors and gears.  Now young teens take apart microprocessor-based systems and write code.  I've stayed relevant and employed by keeping up to date with the computing world, taking on different projects to learn stuff.

 

As for OPNsense, I started a new thread, as it might completely derail this one if I posted here.

 

 

Link to comment

It is done.  The backup server is running the way I had hoped, and the initial backup of my media is now contained on it.  I figured I would write up a post-build report, both to help anyone looking to do something similar, and to give myself a place to come back to when I question "Why did I do that?".

 

Many thanks go to @Hoopster and @Aran, whose input to this thread got me thinking of how to put this all together.  Also a h/t to RealLukeManning (don't know if he is a forum member) whose Telegram bash script on GitHub was a great starting point for the notifications in my backup script.

 

The Hardware:

As I've said before, I could have just thrown a 20TB drive in the server, mounted with UD.  But I still had 3 of the 5 6TB drives I had replaced in my initial server build (as well as a CPU/MB/DRAM set and a case from another system).  The backup server was an opportunity to put it all to use, as well as learn some new stuff.  Isn't that the goal of a home lab server?

 

The backup server is as stripped down as you can get: 3x 6TB drives, no parity or cache drives, and a GT710 GPU that was sitting in a drawer, which I only use for basic troubleshooting (the lowest-powered card that works with Win10).  I built the server, created a 6.10.3 Unraid flash (the same version I currently run on my main server), and installed some basic plugins.  Both Docker and VM services are turned off, as I will not be using them.

 

SSH from Main to Backup Server:

With the backup server functional, the next thing was to set up an SSH connection from the main server to the backup and transfer a file.  I first started with a video from SPX Labs, but was having issues getting that to work.  As always, coming in for the win was SpaceInvaderOne, with his SSH Keys on Unraid video.  A fast and simple solution, and I could then copy a file from one server to the other via SSH.

 

Sleep and WOL:

Much of what I've read about Unraid systems sleeping/WOL has been mixed, ranging from "it's simple" to "you may have issues."  On the surface, it is very simple, more than 95% so.  Getting the little details ironed out, well, that took a bit more effort.

 

Installing the Dynamix S3 Sleep plugin got me 95% there.  Configured to trigger on disk activity alone, my system would go to sleep and then wake on a keyboard touch (I'm still configured with a GUI boot).  I then tried to wake the backup server from my main server using etherwake, and got back an unknown command.  Etherwake isn't in Nerd Tools, but it is installed with the Wake On Lan support plugin.  I could now wake up the backup server remotely.  I allow the server to go back to sleep automatically.

 

One thing I did notice: after waking up, I had one core stuck at 100%.  I found it was the Xorg process.  As I had booted into the GUI interface, it looked as if my Nvidia GPU wasn't playing happily (though it did display the login screen as expected).  I installed the Nvidia Driver plugin (and set nvidia-persistenced to reduce power usage) and this issue has not been seen since.

 

Backup Script:

I don't have a lot of experience with cli or bash scripts.  But I do have the internet, and can build on the work of others before me.  I have some previous experience with rsync (I use a script to back up other files from my server).  So I set out with a simple task: wake up the backup server, back up three folders from my main server while sending me notifications (via Telegram, which is how my servers already send notifications), and once complete, send/save some stats on how things went.

 

I didn't need to get all -verbose with rsync (no need for 58K lines of file listings).  This will likely be a once-a-month operation, so I'm just interested to see how much was added.  I also didn't feel the need to keep permanent log files, so the script's logfile is more than sufficient (plus it is basically duplicated in my Telegram bot stream).  My bash code might be more brute force than elegantly written, but I like the output formatting and, most importantly, it works.

 

#!/bin/bash

#description=This script backs up shares from Malta-Tower Media share to NASty
# Backs up Media share folders Movies, TV and Music
# WOL sent to NASty then wait for destination to wake
# Sync folder, saving stats
# Send Telegram sync complete message for folder, send stats to log
# Once all folders sync, send Telegram message all done + stats for each folder

#arrayStarted=true

# Telegram variables - From Github - h/t to RealLukeManning
TOKEN=XXXXXXXXX
CHAT_ID=XXXXXXXXX
URL="https://api.telegram.org/bot$TOKEN/sendMessage"

# Send message backup is starting
MESSAGE="Starting sync to NASty - $(date)"
curl -s -X POST $URL -d chat_id=$CHAT_ID -d text="$MESSAGE" > /dev/null

# Wake NASty from sleep
etherwake -b XXXXXXXXXXXXXXXXXXXX

# Wait for 2 minutes
sleep 2m

# Start Music backup
MUSICSTAT=$(rsync -ah -p --delete-during --stats "/mnt/user/Media/Music/" "[email protected]:/mnt/user/Media/Music")
MESSAGE="Music is synced to NASty - $(date)"
curl -s -X POST $URL -d chat_id=$CHAT_ID -d text="$MESSAGE" > /dev/null
echo "$MESSAGE"
echo "$MUSICSTAT"    # quoted so the multi-line rsync stats keep their line breaks

# Start Movies backup
MOVIESSTAT=$(rsync -ah -p --delete-during --stats "/mnt/user/Media/Movies/" "[email protected]:/mnt/user/Media/Movies")
MESSAGE="Movies is synced to NASty - $(date)"
curl -s -X POST $URL -d chat_id=$CHAT_ID -d text="$MESSAGE" > /dev/null
echo "$MESSAGE"
echo "$MOVIESSTAT"

# Start TV backup
TVSTAT=$(rsync -ah -p --delete-during --stats "/mnt/user/Media/TV/" "[email protected]:/mnt/user/Media/TV")
MESSAGE="TV is synced to NASty - $(date)"
curl -s -X POST $URL -d chat_id=$CHAT_ID -d text="$MESSAGE" > /dev/null
echo "$MESSAGE"
echo "$TVSTAT"

# Send message that all backup is complete
MESSAGE="All Media backup activities are completed  - $(date)"
curl -s -X POST $URL -d chat_id=$CHAT_ID -d text="$MESSAGE%0A%0AMovies - $MOVIESSTAT%0A%0ATV - $TVSTAT%0A%0AMusic - $MUSICSTAT" > /dev/null
echo "$MESSAGE"
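
Scheduling-wise, the User Scripts "Custom" schedule takes a cron expression; something like this would fire the script monthly (the time is illustrative):

# Run at 03:00 on the first day of each month
0 3 1 * *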

 

Conclusion:

I'm glad I chose to do things the "hard way" and build a second server vs. just tossing a USB drive on my main server.  I learned a bit about bash, as well as digging deeper into several Linux operations.  I find it is vastly easier to learn something when you have a project.  It then isn't just more information for you to soon forget, but an enjoyable journey (even with the cussing that always happens) as you work your way to a satisfying conclusion.  I need to get out of my comfort zone more often.

 

Thanks for reading.

Edited by ConnerVT
formatting
  • Like 1
  • Thanks 1
Link to comment

A few reasons for the no-parity decision.

 

First, the case is limited in how many 3.5" drives it will hold; it has only 3 bays for 3.5" drives.  I could mount a fourth drive in the 5.25" bay, but the case is airflow challenged.  The 3.5" bays have a quality 120mm fan right on them (the case's only intake fan), while the 5.25" bay sits in a cooling-air null (it is really meant for an optical drive).

 

Second, I built this almost completely from parts on hand, and had the three 6TB drives sitting idle in a box.  If I used one of the three for parity, there wouldn't be much room left to back up any future data.  I'm not spending more money on bigger drives at this time.

 

Lastly, the data it stores isn't irreplaceable.  It is more for convenience if something really funky happens to the media server array (or the server itself).  I can restore the data back to the main server, or even just install Plex on the backup if the main server is out of service for a while.  The backup will sleep nearly all of the time, and with it holding just media files it isn't the end of the world if there are issues with it.  Even 90% of something is much better than 100% of nothing.

Link to comment
