About xhaloz

  1. Yeah, I can provide my borg script here. If you need help with it, let me know. Borg makes a local backup and rclone clones it off site, which gives you 3 copies of your data, 2 of them local. The script also will not re-run if rclone hasn't finished its last operation (slow internet) or if a parity sync is running. The key factor in stopping Borg from constantly re-checking everything is --files-cache=mtime,size: I was noticing that every time I ran Borg it would re-index files that hadn't changed, which has to do with unRAID's constantly changing inode values, and this option fixed it. The borg docs are very good (https://borgbackup.readthedocs.io/en/stable/). Let me know if you get stuck. Obviously this script won't work until you set up your repository.

#!/bin/sh
LOGFILE="/boot/logs/TDS-Log.txt"
LOGFILE2="/boot/logs/Borg-RClone-Log.txt"

# Exit if borg or rclone is already running
if pgrep "borg" > /dev/null || pgrep "rclone" > /dev/null
then
    echo "$(date "+%m-%d-%Y %T") : Backup already running, exiting" 2>&1 | tee -a $LOGFILE
    exit
fi

# Exit if a parity sync is running
#PARITYCHK=$(/root/mdcmd status | egrep STARTED)
#if [[ $PARITYCHK == *"STARTED"* ]]; then
#    echo "Parity check running, exiting"
#    exit
#fi

# This is the location your Borg program will store the backup data to
export BORG_REPO='/mnt/disks/Backups/Borg/'

# This is the location you want Rclone to send the BORG_REPO to
export CLOUDDEST='GDrive:/Backups/borg/TDS-Repo-V2/'

# Setting this so you won't be asked for your repository passphrase:
export BORG_PASSPHRASE='<MYENCRYPTIONKEYPASSWORD>'
# ...or this to ask an external program to supply the passphrase (I leave this blank):
#export BORG_PASSCOMMAND=''

# I store the cache on the cache drive instead of /tmp so Borg keeps persistent records after a reboot.
export BORG_CACHE_DIR='/mnt/user/appdata/borg/cache/'
export BORG_BASE_DIR='/mnt/user/appdata/borg/'

# Back up the most important directories into an archive
# (I keep a list of excluded directories in the Excluded.txt file)
SECONDS=0
echo "$(date "+%m-%d-%Y %T") : Borg backup has started" 2>&1 | tee -a $LOGFILE

borg create \
    --verbose \
    --info \
    --list \
    --filter AMEx \
    --files-cache=mtime,size \
    --stats \
    --show-rc \
    --compression lz4 \
    --exclude-caches \
    --exclude-from /mnt/disks/Backups/Borg/Excluded.txt \
    \
    $BORG_REPO::'{hostname}-{now}' \
    \
    /mnt/user/Archive \
    /mnt/disks/Backups/unRAID-Auto-Backup \
    /mnt/user/Backups \
    /mnt/user/Nextcloud \
    /mnt/user/system/ \
    >> $LOGFILE2 2>&1

backup_exit=$?

# Use the `prune` subcommand to maintain 7 daily, 4 weekly and 6 monthly
# archives of THIS machine. The '{hostname}-' prefix is very important to
# limit prune's operation to this machine's archives and not apply to
# other machines' archives also:
#echo "$(date "+%m-%d-%Y %T") : Borg pruning has started" 2>&1 | tee -a $LOGFILE

borg prune \
    --list \
    --prefix '{hostname}-' \
    --show-rc \
    --keep-daily 7 \
    --keep-weekly 4 \
    --keep-monthly 6 \
    >> $LOGFILE2 2>&1

prune_exit=$?
#echo "$(date "+%m-%d-%Y %T") : Borg pruning has completed" 2>&1 | tee -a $LOGFILE

# Use the highest exit code as the global exit code
global_exit=$(( backup_exit > prune_exit ? backup_exit : prune_exit ))

# Execute if no errors
if [ ${global_exit} -eq 0 ]; then
    borgstart=$SECONDS
    echo "$(date "+%m-%d-%Y %T") : Borg backup completed in $(($borgstart / 3600))h:$(($borgstart % 3600 / 60))m:$(($borgstart % 60))s" 2>&1 | tee -a $LOGFILE

    # Reset timer
    SECONDS=0
    echo "$(date "+%m-%d-%Y %T") : Rclone Borg sync has started" >> $LOGFILE
    rclone sync $BORG_REPO $CLOUDDEST -P --stats 1s -v 2>&1 | tee -a $LOGFILE2
    rclonestart=$SECONDS
    echo "$(date "+%m-%d-%Y %T") : Rclone Borg sync completed in $(($rclonestart / 3600))h:$(($rclonestart % 3600 / 60))m:$(($rclonestart % 60))s" 2>&1 | tee -a $LOGFILE
# All other errors
else
    echo "$(date "+%m-%d-%Y %T") : Borg has errors, code: $global_exit" 2>&1 | tee -a $LOGFILE
fi

exit ${global_exit}
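The h:m:s figures in the completion log lines come from plain shell arithmetic on the `$SECONDS` timer. A minimal sketch of the same conversion, using an arbitrary example value rather than a real timing:

```shell
# Convert an elapsed-seconds count to h:m:s, as the script does with $SECONDS.
elapsed=3725  # example value: 1 hour, 2 minutes, 5 seconds
formatted="$(($elapsed / 3600))h:$(($elapsed % 3600 / 60))m:$(($elapsed % 60))s"
echo "$formatted"  # prints 1h:2m:5s
```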
  2. Super late reply, but yes, you can get Discord notifications via Slack-format webhooks. You would need to check the Discord docs for the details, but it's super easy. The command line to fire off the notification:

############
WEBH_URL="https://discordapp.com/api/webhooks/<MYDISCORDWEBHOOKNUMBER>/<MYOTHERDISCORDWEBHOOKNUMBER>/slack"
APP_NAME="unRAID Server"
TITLE="$1"
MESSAGE="$2"
############

TITLE=$(echo -e "$TITLE")
MESSAGE=$(echo -e "$MESSAGE")

curl -X POST --header 'Content-Type: application/json' \
    -d "{\"username\": \"$APP_NAME\", \"text\": \"*$TITLE* \n $MESSAGE\"}" $WEBH_URL 2>&1
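For context, the trailing /slack on the webhook URL is what tells Discord to accept a Slack-format payload. A minimal sketch of the JSON body that curl call builds and sends (the title and message values here are placeholders, not real notifications):

```shell
# Build the Slack-style JSON body sent to the Discord webhook.
# The literal \n inside the text field is rendered as a line break by Discord.
APP_NAME="unRAID Server"
TITLE="Backup finished"   # placeholder title
MESSAGE="No errors"       # placeholder message
payload="{\"username\": \"$APP_NAME\", \"text\": \"*$TITLE* \n $MESSAGE\"}"
echo "$payload"
```

Note that this simple string interpolation assumes the title and message contain no double quotes; anything fancier would need proper JSON escaping.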
  3. Just curious, did you all check WireGuard? That was the issue on my machine; as soon as I disabled it, all the errors stopped.
  4. So they fixed the password thing with the hotfix yesterday, however clients need to be updated too. So a lot of people are currently having login issues.
  5. Mine is also crashing, looks like the "reports" plugin was installed twice. I can see a lot of people having an issue after this update.
  6. Does anyone know how to get the pihole docker to resolve a local subdomain.domain.com to a local IP address? I am hosting a small webserver via unRAID, and pihole does not know to resolve to it locally if I am accessing the website from a LAN device. Every time I go to the site, I am seen as "external" traffic. If I can resolve this, I could also resolve my Plex issues. I just don't know how to tell pihole to point the domain to the internal unRAID IP. I've tried variables such as extra_hosts etc. I am exhausted.

Edit: I got resolution by adding 02-custom.conf in the /mnt/user/appdata/dnsmasq.d folder. The format for that .conf file is subdomain.domain.com 192.168.1.X, where X completes the unRAID server's IP. However, the website stops working when I do this. I can even do nslookup subdomain.domain.com and it shows the 192.168.1.X address. I am using nginx/letsencrypt to host the proxy portion of the site, which basically forwards the subdomain to an internal IP of another docker. Any help would be appreciated.
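An alternative to the hosts-style line is dnsmasq's own address directive, which answers every query for the given name (and its subdomains) with the given IP. A minimal sketch of writing such a file, assuming the same dnsmasq.d override folder idea from the post; the domain and the 192.168.1.10 address below are placeholders, not real values:

```shell
# Hypothetical sketch: write a dnsmasq override so LAN clients resolve the
# subdomain straight to the server's LAN IP (name and IP are placeholders).
CONF_DIR="./dnsmasq.d"   # in the post this folder is mapped into the pihole container
mkdir -p "$CONF_DIR"
cat > "$CONF_DIR/02-custom.conf" <<'EOF'
# Answer A queries for this name with the local server address
address=/subdomain.domain.com/192.168.1.10
EOF
cat "$CONF_DIR/02-custom.conf"
```

After changing a dnsmasq.d file, the container's DNS service has to be restarted for the override to take effect.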
  7. Hey jowi, can you ping your Netgear from pihole's docker? Meaning, from a terminal on unRAID, type "docker exec -it pihole ping 192.168.1.X", where X is the last octet of your Netgear device's IP. See if you get replies; they may not be communicating.
  8. Does anyone know how to point a local domain "subdomain.domain.com" to a local IP address? Whenever I reach my subdomain, pihole thinks I am coming from an external network. I want pihole to resolve this subdomain for any LAN device so the traffic stays internal. There are lots of answers online for this, but they all involve pihole running on a Raspberry Pi rather than in Docker. I have exhausted many of my resources trying to figure this out. Thanks in advance.
  9. So I am trying to set up a reverse proxy with this. I can access the docker on LAN via the reverse proxy address; however, if I try to access that same link on WAN, I get "Forbidden". To do some testing, I want to enable HTTPS via the default port, but when I manually add it to the docker it does not resolve via https://LOCALIP:8920. Any ideas? Also, why wasn't the HTTPS port included? Any particular reason? Thanks Binhex!
  10. Makes sense, yes borg is installed. I will take a look! Edit: Let me just say thank you, you always reply and always add/update packages. You're pure gold and I thank you so much.
  11. Python 3 won't uninstall. Every other package uninstalls when I set it to Off, but Python 3 does nothing when I hit Apply: the page refreshes and it shows On again even though I selected Off.
  12. Is there a reason the "fuzzy image searching" functionality is not included in the docker? I see EXIF data and content scans only.
  13. Ah, I see what happened; I re-read everything and now I understand. I think the only problem I have is the .duplicacy file being inside all the folders that are backed up. It makes more sense now to back up one massive directory with filters.
  14. Hey, sorry nobody has replied to you. I've become very interested in the topic of the "best backup" solution for unRAID. Duplicati on the surface seems pretty awesome; however, a lot of users, including myself, had nothing but errors, and it still needs a lot of heavy development. I wrote a guide on combining rclone + Borg. Borg does a really great job at creating your repos while utilizing compression, deduplication, encryption, pruning, etc., and you would then use rclone to push that Borg repo up to your offsite storage. The guide is here: https://www.reddit.com/r/unRAID/comments/9md2hh/tutorial_rclone_borg_for_your_awesome_backup_needs/ Lately, though, I stumbled across Duplicacy. This piece of software is pretty awesome, as it's faster than Borg in some benchmarks AND it uploads to the cloud or locally without the need for rclone. @walle wrote a pretty cool guide here: https://forums.unraid.net/topic/73796-solved-install-duplicacy-install-binary/?tab=comments#comment-687737&searchlight=1 Let me know if you have any specific questions.