sdub

Everything posted by sdub

  1. I am not at my server right now, but I can say that my backups are still working and I haven't touched the config in months. The Unicode error makes me wonder… did you possibly edit your config file in Windows, such that it has Windows line endings instead of UNIX ones?
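     If you want to check from a shell inside the container, something like this works (the config path below is just an example; adjust it to wherever your config actually lives):
        # Quick check for Windows (CRLF) line endings; a non-zero count means CRLF is present
        grep -c "$(printf '\r')" /etc/borgmatic.d/config.yaml
        # Strip the carriage returns (tr is available in the container's shell)
        tr -d '\r' < /etc/borgmatic.d/config.yaml > /tmp/config.yaml && mv /tmp/config.yaml /etc/borgmatic.d/config.yaml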
  2. No I don't. I use the database backup functions for things like MariaDB and Postgres, and the built-in database backup function for file-based databases like Plex. For everything else the risk is realistically minimal, especially with scheduled backups and a multitude of retained versions. On the off chance the daily backup happens to be bad, there are other daily, weekly, and monthly archives I can draw from. Losing a day or two of data isn't a big concern of mine.
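     For reference, the database hooks look roughly like this in the borgmatic config (this is the older layout with a top-level hooks: section; newer releases move these options to the top level, so check the docs for your version — hostnames and credentials here are placeholders):
        hooks:
            postgresql_databases:
                - name: all
                  hostname: postgres
                  username: postgres
                  password: example-password
            mysql_databases:
                - name: all
                  hostname: mariadb
                  username: root
                  password: example-password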
  3. Looks like the root issue is in that '/root/.cache/borg' folder. If you open a shell into the container and browse to that folder, does everything appear normal? Could be that the folder mapping is screwed up in your docker config.
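     Something like this from the Unraid terminal (assuming your container is named "borgmatic"):
        # Inspect the borg cache directory inside the container
        docker exec -it borgmatic ls -la /root/.cache/borg
        # Then compare against the container's volume mappings to confirm that path is
        # actually mapped to persistent storage and is writable.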
  4. You can also specify the config file with the borgmatic -c /path/to/config syntax. I personally have a local.yaml and a remote.yaml that both include a common.yaml for common options. I call the local and remote backups through separate lines in my crontab.
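     The crontab side looks roughly like this (times and paths are just examples; the shared options come into each yaml via borgmatic's include support):
        # Run the local and remote backups on separate schedules
        0 1 * * * borgmatic -c /etc/borgmatic.d/local.yaml
        0 3 * * * borgmatic -c /etc/borgmatic.d/remote.yaml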
  5. Your repo should be “/mnt/borg-repository” not “/mnt/user/disks/easystore264D”. You have mounted “/mnt/user” as a read-only path, so you can’t write to the repo via that path.
  6. It looks to me like maybe you are mounting the share to Borgmatic as read-only… not that it's read-only in the host filesystem. What does your Borgmatic docker config look like?
  7. To reduce data transmission, Borg requires the borg binary to be present on both the local machine and the remote machine when performing remote backups. That way the remote machine can do the work of comparing what's changed from the cache. If you don't want to (or can't) install borg on the host OS, you can run nold360's borgserver container on the remote machine. That image has just the plain "borg" binary and an SSH server, which lets it act as a borg "target". No borgmatic, but it allows you to connect and perform a backup "to" that remotely running docker container.
     If you were running a backup scheme where computer A backs up to computer B and vice versa, the ideal scenario would be having the two "borgmatic" docker containers talk to each other. Unfortunately, the official borgmatic container lacks an SSH server, so you can't. SSH server support has been requested, but they've declined in order to keep the image simple. This CA application uses the "official" borgmatic container published by the borgmatic team.
     The alternative to using the "borgserver" container in Unraid is to use the Nerdpack to install borg directly on the host Unraid OS, since it's already running the required SSH server. If you were just using vanilla borg and not borgmatic, you certainly could run the "borgserver" container on both the local and remote systems and need nothing else. Hope that clarifies.
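     A rough sketch of running borgserver on the remote box is below. The published port and volume targets are from memory of the nold360/borgserver README, so treat them as placeholders and double-check there:
        docker run -d --name borgserver \
          -p 2222:22 \
          -v /path/to/client_ssh_keys:/sshkeys \
          -v /path/to/borg/repos:/backup \
          nold360/borgserver
        # The local borgmatic config would then point at something like
        #   ssh://borg@remote-host:2222/backup/myrepo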
  8. OK, @Greygoose and @stridemat, I updated the CA template to point to a static location for the borgmatic icon. Future pulls should get this, but I'm not sure if it will automatically update for you. To manually fix it, go to advanced view in the Borgmatic docker container config and change "Icon URL" to https://raw.githubusercontent.com/Sdub76/unraid_docker_templates/main/images/borgmatic.png This is a static copy that I have in the CA repo, so it shouldn't change unless github changes its static URLs.
  9. I'm afraid I've never gotten this to work in the docker container. I've tried passing the fuse device in, but I can't access the share externally. I may have the permissions wrong, or maybe it's something else. What I've always done is use this image to create the backup, then use the borg binary on the host system (either Unraid or some other host OS that can access the borg repo on the LAN) to browse and extract the files I need. Borg and llfuse can be installed from the Nerdpack in Unraid. If someone else has been able to expose a mounted repo from outside the docker container, I'd love to hear how.
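     On the host side it looks roughly like this (repo path and archive name are examples):
        # borg + llfuse installed via Nerdpack on the Unraid host
        mkdir -p /tmp/borgmount
        borg list /mnt/user/backups/borg-repo                        # find the archive name
        borg mount /mnt/user/backups/borg-repo::myhost-2024-01-01 /tmp/borgmount
        # ...browse/copy what you need, then unmount:
        borg umount /tmp/borgmount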
  10. Sorry I missed this issue. I'll take a look and get back to you… for most issues it's more effective to ask on the Borgmatic GitHub support page. Last year they moved the Borgmatic repo from b3vis's repo to the main "borgmatic-collective" repo. I updated the template, but maybe something else changed. That specific error at first glance looks like the link to the icon just changed. Is it causing Borgmatic to not update/work for you?
  11. I haven’t updated the CA container in some time just because it hasn’t been necessary. The docker container it pulls from is still very actively developed and widely used, so yes it’s still active. If needed I’ll update the CA template.
  12. Borg creates a lock to make sure it has exclusive access to the backup archive and cache, to avoid corruption. When a backup gets interrupted, that lock is sometimes left in place, blocking subsequent backups. Whenever you see an error about not being able to acquire a lock, the first thing to do is break the lock, provided you are sure that no borg process is running against that repo. From inside the borg container (docker exec -it borgmatic /bin/sh), run the following command and try again: borg break-lock /mnt/borg-repository
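     Step by step, assuming the container is named "borgmatic" as in the template:
        # From the Unraid terminal, open a shell inside the container
        docker exec -it borgmatic /bin/sh
        # Inside the container: confirm nothing borg-related is still running, then break the lock
        ps aux | grep -i borg
        borg break-lock /mnt/borg-repository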
  13. It's going to be hard to help debug this without some specifics from the error logs. Does it simply not run, or does it occasionally fail? If it fails, how does it fail? If the folks on this thread can't help, maybe a support ticket over at the Borgmatic docker GitHub could help. Remember I only maintain the docker template in the CA Appstore; @witten is the actual developer of the docker image. The automatic docker updates in Unraid are handled by the CA Application Auto Update plugin. As long as your repository is set to "ghcr.io/borgmatic-collective/borgmatic" it should work. Note that this location changed somewhat recently, but that shouldn't explain why "auto-update" vs "forced update" behaves differently. Maybe consult that plugin's support thread for help.
  14. Thanks... it looks like they're still pushing the updates to dockerhub (at least for now), but I updated the template as suggested. (I think you meant to say they moved from dockerhub to github.) I'm not sure if template updates affect existing containers. To move existing containers to the new address, open the borgmatic container settings, go to advanced and change the following lines:
     Repository: ghcr.io/borgmatic-collective/borgmatic
     Docker Hub URL: https://ghcr.io/borgmatic-collective/borgmatic
     Icon URL: https://github.com/borgmatic-collective/borgmatic/raw/master/docs/static/borgmatic.png
  15. That won't work since that container doesn't have the borg binary. You could try something like the nold360/borgserver container, which is basically just borg and SSH. Just feed in the path to your borg repo.
  16. The Borgmatic image does not have SSH built in, so you'd need to install borg on the host using the NerdTools plug-in; then you could use "ssh://user@host/path/to/repo". There's no built-in GUI because the config files are pretty straightforward to configure, and it's something intended to be set and forget.
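     In the borgmatic config the remote repo just goes in the repositories list, something like this (older config layout shown; newer versions drop the location: section):
        location:
            repositories:
                - ssh://user@host/path/to/repo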
  17. Did you read the last few posts before posting?
  18. Yes I’m seeing it now too. Best thing to do in this case is to specify a specific older version until the container is fixed. In the docker config change “Repository:” to read “b3vis/borgmatic:1.7.5”. This will pin it to that specific version. Once it’s fixed change it back to :latest or just leave that part off. In this specific instance the workaround described at the link seems easy enough but isn’t persistent if the docker container updates itself. Assuming the fix is in the next docker image, it doesn’t really matter.
  19. I think the options are:
     1. Install a docker container with sshd and borgbackup, like borgserver (here's one that appears to be more frequently updated)
     2. Install borgbackup on just about any Linux or FreeBSD VM under Unraid
     3. Install the borgbackup binary somewhere on your host system, either manually in /boot/extra or using NerdTools in 6.11+.
     Either way, you'll need the associated SSH daemon exposed on a port that's accessible via LAN or VPN if you don't want it open to the internet, which I would never recommend. I'm not sure any particular method is preferred, though I suppose options 1 or 2 could be marginally more secure and keep your Unraid install closer to "stock" if you're concerned about that. I personally have borgbackup installed through NerdTools.
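     Whichever option you pick, a quick way to verify it from the client side (host, port, and user below are placeholders):
        # Can we reach sshd on the target and run borg there?
        ssh -p 22 backupuser@backup-host "borg --version"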
  20. No, it's not necessary. The idea was to have a mount point for mounting archives somewhere you could get to them outside of the container, and /mnt seems like the proper place. Either ignore the error, delete the mount point, or pick another mount point that Fix Common Problems doesn't complain about.
  21. Thanks for this… I updated the first post to remove the path statement from the suggested crontab config. It underlines the importance of monitoring your backups with something like Healthchecks (which can be self-hosted) so you know ASAP if something breaks!
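     The Healthchecks integration is just one line in the borgmatic config (shown in the older hooks: layout; newer versions use a ping_url key instead — the UUID below is a placeholder):
        hooks:
            healthchecks: https://hc-ping.com/your-check-uuid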
  22. Is there a workflow to request packages be added? I'd love to get "bat" added, unless there's a better alternative already in there for colorizing logs. https://github.com/sharkdp/bat Thanks
  23. 3-2-1 backup means 3 copies, 2 types of media, at least one offsite. When I wrote that, my reasoning was just to get as much diversity as possible between your data and the backup in lieu of 2 types of media. Preferably it would be on a different system with a different OS, filesystem, etc. There's no "right" or "wrong", just better or worse.
  24. The first one is very slow. Not sure there's a lot you can do about it. I did my initial remote backup locally, then copied it to the remote server in person, a.k.a. sneaker-net. If you felt like borg network performance was really the issue, you could create the repo locally like I did, then use a lightweight protocol like rsync to copy the initial repo across. When you change how you access the repo, it will warn you once but allow it.
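     Roughly like this (all paths and hostnames are examples):
        # Seed the repo locally first
        borgmatic -c /etc/borgmatic.d/local-seed.yaml
        # Copy the whole repo directory to the remote host
        rsync -avP /mnt/user/backups/borg-repo/ user@remote-host:/backups/borg-repo/
        # Then point the repository in your config at ssh://user@remote-host/backups/borg-repo
        # Borg will warn once that the repository location changed; accept it and carry on.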