sdub

Members
  • Posts: 34
  • Joined
  • Last visited


Reputation: 11

  1. Yeah, I just used XFS… it's not as universal as FAT, but it's still widely supported, and there are ways to mount an XFS partition under Windows if you had to.
  2. Funny you mention that, because I was just thinking about this "issue" earlier this week... of course an extra backup isn't really an "issue", just wasted space. I've slowly moved almost all of the content I care about off the computers on the network... they almost exclusively access data stored on the Unraid NAS or via Nextcloud, which makes this issue largely go away. The one exception is Adobe Lightroom... that catalog really wants to be on a local disk. I just set up Borgmatic in a Docker container under Windows to back up the Lightroom and User folders from my Windows PC to an Unraid share that is NOT backed up by Borg (pretty easy, and it works great). If I include that share in my Unraid backup, I'll have way more copies of the data than I really need... there's no point in an incremental backup of an incremental backup. I think I'm moving away from the "funnel" mentality. Instead, I'll have Unraid/Borg create local and remote repos for its important data, and Windows/Borg create local and remote repos for its important data. The Windows "local" repo will be kept in an un-backed-up Unraid user share or on the unassigned drive I use for backups. Going forward, I'll probably do the same on any other PCs that have data I care about. The reality is that all of my other family members work almost exclusively out of the cloud (or my Unraid cloud), so there's very little risk of data loss.
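     For anyone wanting to copy this, a rough sketch of the Windows-side container launch, assuming the b3vis/borgmatic image; every path below (Lightroom/User folders, the mapped Unraid share on Z:, the local config folder) is a placeholder, not my actual layout:
     # Rough sketch only - all paths are placeholders; adjust to your own machine
     docker run -d --name borgmatic \
       -v "C:/Users/me:/mnt/source/users:ro" \
       -v "C:/Lightroom:/mnt/source/lightroom:ro" \
       -v "Z:/pc-backups:/mnt/borg-repository" \
       -v "C:/borgmatic/config:/etc/borgmatic.d" \
       b3vis/borgmatic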
  3. The point was independence... you don't want a problem with your array corrupting both the data and the backup. As long as you're backing up to a 3rd offsite location, the risk is realistically pretty low. You could specifically exclude the disk from all of your user shares and disable cache writes to minimize exposure to an Unraid bug, but the parity writes will slow down the backups, which can take several hours for a multi-terabyte backup.
  4. Thanks for the heads up... I'll take a look. It's too bad that b3vis has fallen behind on versions. Is there a specific feature in 1.5.13+ that you're looking for, or do you just not want to fall too far behind the latest version? This modem7 fork is brand new, so I might not jump ship just yet... definitely something to keep an eye on.
  5. Probably a question for the Borg forum like you said, but it could be several things when encryption is on. Some encryption modes are faster than others, and it could be Docker throttling CPU or memory usage that is the root cause of the bottleneck, even if the host isn't pegged at 100%. The encryption is also probably single-threaded, so overall CPU utilization might not be representative if a single core is at 100%. Have you tried creating a test repo with no encryption for comparison?
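     If you want to test that, something like the following (run inside the borgmatic container; /mnt/test-repo and /mnt/source are placeholder paths) gives a quick apples-to-apples number:
     # Quick benchmark sketch - paths are placeholders
     borg init --encryption=none /mnt/test-repo
     time borg create --stats /mnt/test-repo::speed-test /mnt/source
     borg delete /mnt/test-repo    # remove the throwaway repo when done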
  6. I've been having similar issues... for some reason, the docker image never updated from 19.0.3 automatically from the Unraid docker dashboard. I just went through a process to update to 22.0.0... I'll explain below. I found that these issues were usually because I didn't give Nextcloud enough time to start. Watch the logfile for the following line before trying to execute occ commands:
     [cont-init.d] 01-envfile: exited 0.
     I didn't scour through all 199 pages of this support thread, but here was my solution... it doesn't want to upgrade more than one major version at a time, so do just that... one major version at a time. Basically, I followed the instructions in the stickied post, option 3, except for the version change indicated below:
     ##Turn on maintenance mode
     docker exec -it nextcloud occ maintenance:mode --on
     ##Backup current nextcloud install
     docker exec -it nextcloud mv /config/www/nextcloud /config/www/nextcloud-backup
     ##Update to latest v19
     docker exec -it nextcloud wget https://download.nextcloud.com/server/releases/latest-19.tar.bz2 -P /config
     docker exec -it nextcloud tar -xvf /config/latest-19.tar.bz2 -C /config/www
     ##Copy across old config.php from backup
     docker exec -it nextcloud cp /config/www/nextcloud-backup/config/config.php /config/www/nextcloud/config/config.php
     ##Now restart docker container
     docker restart nextcloud
     ##Perform upgrade
     docker exec -it nextcloud occ upgrade
     ##Turn off maintenance mode
     docker exec -it nextcloud occ maintenance:mode --off
     ##Now restart docker container
     docker restart nextcloud
     After updating, you should go into the Nextcloud /settings/admin/overview page and address any security & setup warnings that are found before proceeding. Then repeat the whole process, except changing this step:
     ##Update to latest v20
     docker exec -it nextcloud wget https://download.nextcloud.com/server/releases/latest-20.tar.bz2 -P /config
     docker exec -it nextcloud tar -xvf /config/latest-20.tar.bz2 -C /config/www
     Address any warnings as above, then repeat the whole process, except changing this step:
     ##Update to latest v21
     docker exec -it nextcloud wget https://download.nextcloud.com/server/releases/latest-21.tar.bz2 -P /config
     docker exec -it nextcloud tar -xvf /config/latest-21.tar.bz2 -C /config/www
     Address any warnings as above, then finally repeat with the steps as indicated in the sticky to get to v22:
     ##Grab newest nextcloud release and unpack it
     docker exec -it nextcloud wget https://download.nextcloud.com/server/releases/latest.tar.bz2 -P /config
     docker exec -it nextcloud tar -xvf /config/latest.tar.bz2 -C /config/www
     Once everything is working, you can remove the downloads and backup folders:
     ##Remove backup folder
     docker exec -it nextcloud rm -rf /config/www/nextcloud-backup
     ##Remove ALL Nextcloud tar files
     docker exec -it nextcloud rm /config/latest*.tar.bz2
  7. You’ll need to be able to SSH from the Docker container into the remote server with SSH keys (no password). Then you can init the remote repo with:
     borg init user@hostname:/path/to/repo
     where /path/to/repo is on the remote machine. Note that the remote server needs to have borg installed, since the commands get executed remotely. In the case where you have no local repo, just this remote one, you can leave that path variable blank or delete it from the Docker template. I’d encourage you to have both a local and remote backup though… as they say, one is none!
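     A rough outline of the key setup, assuming the container is named borgmatic and ssh-copy-id is available inside it (if it isn't, append the public key to the server's authorized_keys by hand); user, hostname and repo path are placeholders:
     # Sketch only - container name, user@hostname and repo path are placeholders
     docker exec -it borgmatic sh                              # shell into the container
     ssh-keygen -t ed25519 -f /root/.ssh/id_ed25519 -N ""      # create a passwordless key
     ssh-copy-id -i /root/.ssh/id_ed25519.pub user@hostname    # install the key on the remote server
     ssh user@hostname borg --version                          # confirm borg exists remotely
     borg init --encryption=repokey-blake2 user@hostname:/path/to/repo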
  8. That’s the local path in Unraid where you want the borg repo to live. It can be inside your array or on a dedicated volume. It gets mapped to the /mnt/borg-repository folder INSIDE the Docker container. Once you open an SSH shell inside the Docker container, you’ll initialize it with a command like:
     borg init --encryption=repokey-blake2 /mnt/borg-repository
     regardless of the location you chose for the repo outside of Docker. Once you initialize with this command, you should see the repo files in the location you put in the "Borg Repo (Backup Destination):" variable.
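     For example, assuming the container is named borgmatic (whatever your template uses), a quick sanity check after the init looks like:
     # Sanity check sketch - container name is a placeholder
     docker exec -it borgmatic borg info /mnt/borg-repository   # repo details
     docker exec -it borgmatic borg list /mnt/borg-repository   # archives (empty for a brand-new repo)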
  9. Another reason I chose not to use the CA backup is that its large zip files didn’t allow for de-duplication... it made my Borg archive grow and grow.
  10. As others have said, the issue is with file-based databases like SQLite that many dockers employ. There’s a small chance that the backup will not be valid if it happens mid-write on a database. I don’t like stopping Docker to do backups, though, and I prefer to have it be part of my continuous Borg backup, so my approach is:
      1. Wherever possible, opt for Postgres or some other database that can be properly backed up in Borg.
      2. The backup is probably going to be OK unless it happens to occur mid-write on the database.
      3. If this does happen and I need to restore, the odds of successive backups both being corrupt are infinitesimal. That’s the advantage of rolling differential backups.
      4. Most Docker applications that use SQLite databases have built-in scheduled database backup functionality that produces a valid copy. I make sure to turn this on wherever I can (Plex, etc.) and make sure that backup path is included in the Borg backup.
      I think with these multiple lines of defense, my databases are sufficiently protected using Borg backup, not using CA backup, and not stopping the Docker containers. If anyone sees a flaw in this logic I’d really like to know!
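      One extra safeguard, if the sqlite3 CLI happens to be available where the hook runs, would be to snapshot a SQLite file with its own online-backup command from a before_backup hook; the database path here is made up:
      # Hypothetical before_backup hook command - /config/app.db is a placeholder path
      sqlite3 /config/app.db ".backup '/config/app.db.borg-snapshot'"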
  11. The specific reason for your error appears to be that "/mnt/borg-repository" doesn't exist in the environment where you're running the borgmatic command. Everything should be done from within the borgmatic container... as a matter of fact, you don't even need to install the nerdpack borg tools in Unraid at all. This docker container is the only thing you need. The whole point of this CA package was to allow you to keep a "stock" Unraid installation, which does not include borg or borgmatic.
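      In practice that means invoking everything through the container, e.g. (assuming the container is named borgmatic):
      # Run borgmatic via the container - container name is a placeholder
      docker exec -it borgmatic borgmatic --verbosity 1   # run a backup with the mapped config
      docker exec -it borgmatic borgmatic list            # list archives in the configured repo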
  12. There are hooks in the Borgmatic config file for “before_backup” and “after_backup” that could be used to invoke an SSH command to tell the Unraid parent to mount/unmount volumes. For example (where Unraid's IP is 192.168.1.6):
      before_backup: ssh 192.168.1.6 mount /dev/sdj1 /mnt/disks/borg_backup
      after_backup: ssh 192.168.1.6 umount /mnt/disks/borg_backup
  13. @T0a It occurred to me that you could also accomplish this without using HA Dockermon or curl by just executing an "ssh root@host docker stop CONTAINER" command directly.
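      A rough sketch of that as hooks, shown flat in the same style as the mount/umount example above (in the real config these live as lists under the hooks: section); the IP is a placeholder and CONTAINER is whatever container you need stopped:
      before_backup: ssh root@192.168.1.6 docker stop CONTAINER
      after_backup: ssh root@192.168.1.6 docker start CONTAINER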
  14. I'm far from an rclone expert, but I'm not sure how you're proposing to use FUSE... in borg, it's used to expose the contents of a Borg repo so you can extract files. If you indeed want to do this for whatever reason, you can follow the borg mount documentation. Mount the repo to /mnt/fuse and you should be able to see it from the Unraid host via the "Fuse mount point" path in the borgmatic container config. What I assume you really want to do is sync the borg repo to a cloud storage provider. I'm afraid rclone is not part of this container, so you will need to do it separately. I can think of 3 options:
      1. There's a good SpaceinvaderOne tutorial out there if you want to use the rclone plugin available in the CA store (waseh), but that's basically no different than installing a user script (not my preference). If you go this route, you could invoke rclone on the Unraid host from the docker container via an ssh command, something like "ssh root@hostname rclone sync [opts]".
      2. (Preferred) If you really want to avoid installing stuff to your base image altogether, I'd recommend either installing a dedicated rclone docker like "pfidr34/docker-rclone" or the one available in the CA store (thomast_88). You could then run the rclone sync asynchronously on its own cron schedule, but you need to be careful that the two jobs don't run at the same time. If you want to automatically run rclone using the "after_backup" hook, you could execute a command from within the borgmatic container that invokes rclone in the other container, something like "ssh root@hostname docker exec rclone rclone sync [opts]" (where the first rclone is the container name).
      3. A final option is to install a single docker container with both borgmatic and rclone installed. There isn't one in the CA store, so you'll need to install it from docker hub with a custom template, but it's totally doable. Here's one that looks like it would work: https://hub.docker.com/r/knthmn/borgmatic-rclone/
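      For what it's worth, a minimal sketch of the sync step in option 2, assuming an rclone remote has already been configured; the remote name "b2", the bucket, and the repo path are all placeholders:
      # Placeholder remote, bucket and repo path - adjust to your setup
      rclone sync /mnt/user/backups/borg-repo b2:my-bucket/borg-repo \
        --transfers 4 --checkers 8 --log-level INFO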