Everything posted by sdub

  1. I installed the "neofetch" application using the Nerd Pack but was bummed to see that Unraid wasn't supported as a detectable OS. I couldn't find a config anywhere else, so I created my own. To replicate this, you simply need to do the following:
     • Install Nerd Pack from the CA Appstore
     • Install the "neofetch" package
     • Copy the attached "config.conf" and "unraid_ascii.txt" to ~/.config/neofetch
     • If you want this to display automatically at login, add the line "neofetch" to the bottom of ~/.bash_profile
     Attachments: unraid_ascii.txt, config.conf
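     If you'd rather script it, here's a minimal sketch of those steps (assuming you downloaded the two attachments to /boot/extras, a placeholder path):

     mkdir -p ~/.config/neofetch
     cp /boot/extras/config.conf /boot/extras/unraid_ascii.txt ~/.config/neofetch/
     echo "neofetch" >> ~/.bash_profile   # display at every login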
  2. It would be great to have an option to specify the number of visible dockers in the container/VM preview. It seems hard-coded at 5. For wide screens, you could easily fit more if you wanted to.
  3. Yeah, I just used XFS... it's not as universal as FAT, but it's still widely supported, and there are ways to mount an XFS partition under Windows if you had to.
  4. Funny you mention that, because I was just thinking about this "issue" earlier this week... of course an extra backup isn't really an "issue", just wasted space. I've slowly moved almost all of the content I care about off the computers on the network... they almost exclusively access data stored on the Unraid NAS or via Nextcloud, which makes this issue largely go away. The one exception is Adobe Lightroom... that catalog really wants to be on a local disk. I just set up Borgmatic in a Docker container under Windows to back up the Lightroom and User folders from my Windows PC to an Unraid share that is NOT backed up by Borg (pretty easy and works great). If I include that share in my Unraid backup, I'll have way more copies of the data than I really need. I don't need an incremental backup of an incremental backup. I think I'm moving away from the "funnel" mentality. Instead, I'll have Unraid/Borg create local and remote repos for its important data, and Windows/Borg create local and remote repos for its important data. The Windows "local" repo will be kept in a non-backed-up Unraid user share or on the unassigned drive I use for backup. Going forward, I'll probably do the same on any and all other PCs that have data I care about. The reality is that all of my other family members work almost exclusively out of the cloud (or my Unraid cloud), so there's very little risk of data loss.
  5. The point was independence... you don't want a problem with your array making the data and the backup both corrupt. As long as you're backing up to a third, offsite location, the risk is realistically pretty low. You could specifically exclude the disk from all of your user shares and disable cache writes to minimize exposure to an Unraid bug, but the parity writes will slow down the backups, which can take several hours for a multi-terabyte backup.
  6. Thanks for the heads up... I'll take a look. It's too bad that b3vis has fallen behind on versions. Is there a specific feature in 1.5.13+ that you're looking for, or do you just not want to fall too far behind the latest version? This modem7 fork is brand new, so I might not jump ship just yet... definitely something to keep an eye on.
  7. Probably a question for the borg forum like you said, but it could be several things when encryption is on. Some encryption modes are faster than others, and it could be Docker throttling CPU or memory usage that is the root cause of the bottleneck, even if the host isn't pegged at 100%. Also, the encryption is probably single-threaded, so your overall CPU utilization might not be representative if a single core is at 100%. Have you tried creating a test repo with no encryption for comparison?
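     For a quick A/B test, something like this from a shell inside the container (the repo and source paths are placeholders):

     borg init --encryption=none /mnt/borg-test                      # throwaway repo with no encryption
     time borg create --stats /mnt/borg-test::speed-test /mnt/source/testdata
     borg delete /mnt/borg-test                                      # remove the whole test repo when done

     Comparing the wall-clock time against the same create into your encrypted repo should tell you how much of the slowdown is encryption itself.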
  8. I've been having similar issues... for some reason, the docker image never updated from 19.0.3 automatically from the Unraid docker dashboard. I just went through the process of updating to 22.0.0... I'll explain below. I found that these issues were usually because I didn't give Nextcloud enough time to start. Watch the logfile for the following line before trying to execute occ commands:

     [cont-init.d] 01-envfile: exited 0.

     I didn't scour all 199 pages of this support thread, but here was my solution... Nextcloud doesn't want to upgrade more than one major version at a time, so do just that... one major version at a time. Basically, I followed the instructions in the stickied post, option 3, except for the version change indicated below:

     ##Turn on maintenance mode
     docker exec -it nextcloud occ maintenance:mode --on
     ##Backup current nextcloud install
     docker exec -it nextcloud mv /config/www/nextcloud /config/www/nextcloud-backup
     ##Update to latest v19
     docker exec -it nextcloud wget https://download.nextcloud.com/server/releases/latest-19.tar.bz2 -P /config
     docker exec -it nextcloud tar -xvf /config/latest-19.tar.bz2 -C /config/www
     ##Copy across old config.php from backup
     docker exec -it nextcloud cp /config/www/nextcloud-backup/config/config.php /config/www/nextcloud/config/config.php
     ##Now restart docker container
     docker restart nextcloud
     ##Perform upgrade
     docker exec -it nextcloud occ upgrade
     ##Turn off maintenance mode
     docker exec -it nextcloud occ maintenance:mode --off
     ##Now restart docker container
     docker restart nextcloud

     After updating, you should go into the Nextcloud /settings/admin/overview page and address any security & setup warnings that are found before proceeding. Then repeat the whole process, changing only this step:

     ##Update to latest v20
     docker exec -it nextcloud wget https://download.nextcloud.com/server/releases/latest-20.tar.bz2 -P /config
     docker exec -it nextcloud tar -xvf /config/latest-20.tar.bz2 -C /config/www

     Address any warnings as above, then repeat the whole process, changing only this step:

     ##Update to latest v21
     docker exec -it nextcloud wget https://download.nextcloud.com/server/releases/latest-21.tar.bz2 -P /config
     docker exec -it nextcloud tar -xvf /config/latest-21.tar.bz2 -C /config/www

     Address any warnings as above, then finally repeat with the steps as indicated in the sticky to get to v22:

     ##Grab newest nextcloud release and unpack it
     docker exec -it nextcloud wget https://download.nextcloud.com/server/releases/latest.tar.bz2 -P /config
     docker exec -it nextcloud tar -xvf /config/latest.tar.bz2 -C /config/www

     Once everything is working, you can remove the downloads and backup folders:

     ##Remove backup folder
     docker exec -it nextcloud rm -rf /config/www/nextcloud-backup
     ##Remove ALL Nextcloud tar files
     docker exec -it nextcloud rm /config/latest*.tar.bz2
  9. You’ll need to be able to SSH from the Docker container into the remote server with SSH keys (no password). Then you can init the remote repo with:

     borg init user@hostname:/path/to/repo

     where /path/to/repo is on the remote machine. Note that the remote server needs to have borg installed, since the commands get executed remotely. In the case where you have no local repo, just this remote one, you can leave that path variable blank or delete it from the Docker template. I’d encourage you to have both a local and a remote backup though… as they say, one is none! Sent from my iPhone using Tapatalk
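     A minimal sketch of the key setup, run from a shell inside the borgmatic container (user@hostname is a placeholder; the key path matches the /root/.ssh mapping used elsewhere in this thread):

     ssh-keygen -t rsa -f /root/.ssh/id_rsa -N ""        # generate a passphrase-less keypair
     ssh-copy-id -i /root/.ssh/id_rsa.pub user@hostname  # or append the .pub to the remote ~/.ssh/authorized_keys by hand
     ssh user@hostname borg --version                    # verify password-less login and that borg exists remotely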
  10. That’s the local path in Unraid where you want the borg repo to live. It can be inside your array or on a dedicated volume. It gets mapped to the /mnt/borg-repository folder INSIDE the Docker container. Once you open an SSH shell inside the Docker container, you’ll initialize with a command like:

     borg init --encryption=repokey-blake2 /mnt/borg-repository

     regardless of the location you chose for the repo outside of Docker. Once you initialize with this command, you should see the repo files in the location you put in the "Borg Repo (Backup Destination):" variable. Sent from my iPhone using Tapatalk
  11. Another reason I chose not to use the CA backup is that its large zip files don’t de-duplicate well; it made my Borg archive grow and grow. Sent from my iPhone using Tapatalk
  12. As others have said, the issue is with file-based databases like SQLite that many dockers employ. There’s a small chance that the backup will not be valid if it happens mid-write on a database. I don’t like stopping Docker to do backups, however, and prefer to have it be part of my continuous Borg backup, so my approach is:
     1. Wherever possible, opt for Postgres or some other database that can be properly dumped and backed up in Borg (see the sketch after this list).
     2. The backup is probably going to be OK unless it happens to occur mid-write on the database.
     3. If this does happen and I need to restore, the odds of successive backups both being corrupt are infinitesimal. That’s the advantage of rolling differential backups.
     4. Most Docker applications that use SQLite databases have built-in scheduled database backup functionality that is going to be valid. I make sure to turn this on wherever I can (Plex, etc.) and make sure that backup path is included in the Borg backup.
     I think with these multiple lines of defense, my databases are sufficiently protected using Borg backup, not using CA backup, and not stopping the Docker containers. If anyone sees a flaw in this logic, I’d really like to know! Sent from my iPhone using Tapatalk
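     As a sketch of point 1: borgmatic has a postgresql_databases hook that dumps a database into each backup, analogous to the mysql_databases example later in this thread. The name, hostname, username, and password below are placeholders:

     hooks:
         postgresql_databases:
             - name: nextcloud
               hostname: 192.168.200.37
               username: nextcloud
               password: DBPASSWORD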
  13. The specific reason for your error appears to be that "/mnt/borg-repository" doesn't exist in the environment where you're running the borgmatic command. Everything should be done from within the borgmatic container... as a matter of fact you don't even need to install the nerdpack borg tools at all in Unraid. This docker container is the only thing you need. The whole point of this CA package was to allow you to keep a "stock" Unraid installation, which does not include borg or borgmatic.
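     To make "within the container" concrete, a minimal example (the container name "borgmatic" follows the CA template default):

     docker exec -it borgmatic sh          # open a shell inside the running container
     borgmatic list                        # inside that shell, /mnt/borg-repository actually exists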
  14. There are hooks in the Borgmatic config file for “before_backup” and “after_backup” that could be used to invoke an SSH command to tell the Unraid parent to mount/unmount volumes. For example (where Unraid's IP is 192.168.1.6):

     hooks:
         before_backup:
             - ssh 192.168.1.6 mount /dev/sdj1 /mnt/disks/borg_backup
         after_backup:
             - ssh 192.168.1.6 umount /mnt/disks/borg_backup
  15. @T0a It occurred to me that you could also accomplish this without using HA Dockermon or curl by just executing an "ssh root@host docker stop CONTAINER" command directly.
  16. I'm far from an rclone expert, but I'm not sure how you're proposing to use FUSE... in borg, it's used to expose the contents of a Borg repo to extract files. If you indeed want to do this for whatever reason, you can follow the borg mount documentation. Mount the repo to /mnt/fuse and you should be able to see it from the Unraid host via the "Fuse mount point" path in the borgmatic container config. What I assume you really want to do is sync the borg repo to a cloud storage provider. I'm afraid rclone is not part of this container, so you will need to do it separately. I can think of 3 options:
     1. There's a good SpaceinvaderOne tutorial out there if you want to use the rclone plugin available in the CA store (waseh), but that's basically no different than installing a user script (not my preference). If you go this route, you could invoke rclone on the Unraid host from the docker container via an ssh command. Something like "ssh root@hostname rclone sync [opts]"
     2. (Preferred) If you really want to avoid installing stuff to your base image altogether, I'd recommend installing either a dedicated rclone docker like "pfidr34/docker-rclone" or the one available in the CA store (thomast_88). You could then perform the rclone sync asynchronously on its own cron schedule, but you need to be careful that they don't run at the same time. If you want to automatically run rclone using the "after_backup" hook, you could execute a command that invokes the rclone command in another container from within the borgmatic container. Something like "ssh root@hostname docker exec rclone rclone sync [opts]" (see the sketch after this list).
     3. A final option is to install a single docker container with both borgmatic and rclone installed. There isn't one in the CA store, so you'll need to install from docker hub with a custom template, but it's totally doable. Here's one that looks like it would work: https://hub.docker.com/r/knthmn/borgmatic-rclone/
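     A hedged sketch of option 2's hook, assuming the rclone container is named "rclone" and a remote called "remote:" is already configured (the [opts] and paths are placeholders for your own):

     hooks:
         after_backup:
             - ssh root@hostname docker exec rclone rclone sync /mnt/borg-repository remote:borg-repo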
  17. OK, curl is now part of the docker image. You will need to go to the Docker tab and do a "force update" to get it.
  18. I submitted a feature request to the docker maintainer for this... seems pretty straightforward. I could fork the docker, but I'd rather stay tied to the base image.
  19. Sorry for not answering sooner... my post notifications were accidentally disabled. That's a great point that I could start/stop docker using the hooks, but I don't want my system down for 4 hours a day (I run an hour-long local and an hour-long remote backup twice daily). Not having docker downtime was a significant reason I didn't want to use the CA backup solution. In theory I could minimize this by having a separate borg archive for JUST appdata so the backup would be quicker, but with Plex I have a huge number of small files, so it's still longer than I prefer. My rationalization for my approach is twofold:
     1. I'm not sure what the odds are that a file in the backup could get corrupted, but it's somewhere between "unlikely" and "possible". The only files that I'd be really worried about are the filesystem-based databases like SQLite. Since I'm doing twice-daily backups, the odds of having consecutive corrupted files backed up seem very, very small.
     2. Most of the programs that use SQLite (Plex, 'arrs, etc.) have an in-app option for periodic database backups. Those backups get ingested into the archives, and I don't have to worry about them being corrupted. About the only one that doesn't have this is my Grafana/InfluxDB docker, but I'm not particularly concerned about losing that data. If I were concerned, I'm sure I could find a way to have it dump periodic DB images.
     If this seems dubious to you, please let me know why... it's just my thought process. I'm not really sure either... I understand that flash backups will be coming in Unraid 6.9 when it releases, so I'll probably just take my chances until then.
  20. Hopefully you got your SSH issues sorted... it sounds like laur had the right advice. For anyone else who finds this, I'd recommend opening a shell into the borgmatic container and trying to SSH from there. If password-less login doesn't work from there, Borgmatic isn't going to work either.
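     A quick one-liner to test exactly that, assuming the container is named "borgmatic" and user@hostname is your remote (both placeholders):

     docker exec -it borgmatic ssh user@hostname echo ok   # should print "ok" with no password prompt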
  21. Sorry for the slow replies... for some reason I stopped getting post notifications; I just turned them back on. Yes, I realize that... I just listed that as an option for those using a remote cloud service without borg installed. The only real option for the borg-recommended approach is to back up to something like rsync.net/BorgBase or to a family/friend's server running borg. For everyone else, the only option is to use rsync/rclone and hope you don't propagate errors. I personally back up to a server I set up at a family member's house for my remote backups. I based that on the original tutorial from ds-unraid. I suppose the rationale is that you care more about when the file's contents have been modified vs. when the file properties have changed. ctime is a superset of mtime, so I suppose you could use that and it should also work, though I'm not sure why you'd want to re-backup a file whose contents haven't been modified. I'm sure there is a scenario where that makes more sense, though.
  22. [Tutorial] Borgmatic now available in CA Appstore (aka the NEW best method to back up your data) : unRAID (reddit.com)
  23. Here are example crontab and config files with some descriptions. Both files should be placed in the appdata/borgmatic/config folder.

     Example crontab:
     • Twice-daily backups @ 1a, 1p
     • Repo & archives checked weekly, Wed @ 6a
     • My repo is rather large (~5TB, 1M files), so it was sensible to separate the prune/create and check tasks onto separate schedules
     • The prune/create tasks take about 1 hr per repo to complete with minimal changes (for reference)
     • The repo/archive check tasks take about 9 hr per repo to complete (for reference)

     crontab.txt:
     0 1,13 * * * borgmatic prune create -v 1 --stats 2>&1
     0 6 * * 3 borgmatic check -v 1 2>&1

     Example Borgmatic config:
     • Several source directories are included (read-only):
       • Flash drive and appdata are incrementally backed up (alternative to CA backup utility)
       • Backup share acts like a funnel for other data to be backed up: other machines on my network back themselves up to an unRAID "backup" share (Windows 10 backup, Time Machine, etc.), and Docker images that use SQLite are configured to place their DB backups in the "backup" share
       • Other irreplaceable user shares
     • Two repos are updated in succession:
       • /mnt/borg-repository - a Docker mapped volume NOT part of my array
       • remote.mydomain.net:/mnt/disks/borg_remote/repo - a repo that resides on a family member's Linux box with borg installed
     • Files cache set to use "mtime,size" - very important, as unRAID does not have persistent inode values
     • Folders with a ".nobackup" file are ignored; "cache" and "trash" folders are ignored
     • There are many options for how to maintain your repo passphrase/keys. I opted for a simple passphrase that I specify in the config file
     • Compression options are available, but I don't bother since 95% of my data is already-compressed binary data (MP4, JPG, etc.)
     • If you're backing up to a remote repo, you'll need to make sure that your SSH keypairs are working for password-less login. Don't forget to set the SSH folder permissions properly, or your keyfiles won't work
     • I have a MariaDB that runs as a database for Nextcloud and Bookstack. A full database dump is included in every backup
     • Healthchecks.io monitors the whole thing and notifies me if a backup doesn't complete
     • My retention policy is 2 hourly, 7 daily, 4 weekly, 12 monthly, 10 yearly

     I deleted the comments for brevity in the example below, but I recommend you start with the official reference template and make your edits from there.

     config.yaml:
     location:
         source_directories:
             - /boot
             - /mnt/user/appdata
             - /mnt/user/backup
             - /mnt/user/nextcloud
             - /mnt/user/music
             - /mnt/user/pictures
         repositories:
             - /mnt/borg-repository
             - remote.mydomain.net:/mnt/disks/borg_remote/repo
         one_file_system: true
         files_cache: mtime,size
         patterns:
             - '- [Tt]rash'
             - '- [Cc]ache'
         exclude_if_present:
             - .nobackup
             - .NOBACKUP

     storage:
         encryption_passphrase: "MYREPOPASSWORD"
         compression: none
         ssh_command: ssh -i /root/.ssh/id_rsa
         archive_name_format: 'backup-{now}'

     retention:
         keep_hourly: 2
         keep_daily: 7
         keep_weekly: 4
         keep_monthly: 12
         keep_yearly: 10
         prefix: 'backup-'

     consistency:
         checks:
             - repository
             - archives
         prefix: 'backup-'

     hooks:
         before_backup:
             - echo "Starting a backup."
         after_backup:
             - echo "Finished a backup."
         on_error:
             - echo "Error during prune/create/check."
         mysql_databases:
             - name: all
               hostname: 192.168.200.37
               password: MYSQLPASSWD
         healthchecks: https://hc-ping.com/MYUUID
  24. Application: borgmatic
     Docker Hub: https://hub.docker.com/r/b3vis/borgmatic
     Github: https://github.com/b3vis/docker-borgmatic
     Template's repo: https://github.com/Sdub76/unraid_docker_templates

     An Alpine Linux Docker container for witten's borgmatic by b3vis. Protect your files with client-side encryption. Backup your databases too. Monitor it all with integrated third-party services.

     Getting Started:
     • It is recommended that your Borg repo and cache be located on a drive outside of your array (via the Unassigned Devices plugin)
     • Before you back up to a new repo, you need to initialize it first. Examples at https://borgbackup.readthedocs.io/en/stable/usage/init.html (see the sketch at the end of this post)
     • Place your crontab.txt and config.yaml in the "Borgmatic config" folder specified in the docker config. See examples below.
     • A mounted repo can be accessed within Unraid using the "Fuse mount point" folder specified in the docker config. Example of how to mount a Borg archive at https://borgbackup.readthedocs.io/en/stable/usage/mount.html

     Support:
     Your best bet for Borg/Borgmatic support is to refer to the following links, as the template author does not maintain the application.
     Borgmatic Source: https://github.com/witten/borgmatic
     Borgmatic Reference: https://torsion.org/borgmatic
     Borgmatic Issues: https://projects.torsion.org/witten/borgmatic/issues
     BorgBackup Reference: https://borgbackup.readthedocs.io

     Why use this image?
     Borgmatic is a simple, configuration-driven front-end to the excellent BorgBackup. BorgBackup (short: Borg) is a deduplicating backup program. Optionally, it supports compression and authenticated encryption. The main goal of Borg is to provide an efficient and secure way to back up data. The deduplication technique used makes Borg suitable for daily backups, since only changes are stored. The authenticated encryption makes it suitable for backups to not fully trusted targets. Other Unraid/Borg solutions require installation of software to the base Unraid image; running these tools along with their dependencies is what Docker was built for. This particular image does not support rclone, but it does support remote repositories via SSH. This docker can be used with the Unraid rclone plugin if you wish to mirror your repo to a supported cloud service.
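     For the initialization and mount steps above, a minimal sketch from a shell inside the container (the repo path matches the template's default mapping; the archive name is a placeholder following the 'backup-{now}' format used elsewhere in this thread):

     borg init --encryption=repokey-blake2 /mnt/borg-repository              # one-time repo initialization
     borg mount /mnt/borg-repository::backup-2021-07-01T01:00:00 /mnt/fuse  # browse an archive via the fuse mount point
     borg umount /mnt/fuse                                                  # unmount when done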