sdub
Members · 64 posts



  1. The first one is very slow; not sure there's a lot you can do about it. I did my initial remote backup locally, then copied it to the remote server in person, a.k.a. sneaker-net. If you feel like borg network performance is really the issue, you could create the repo locally like I did, then use a lightweight protocol like rsync to copy the initial repo across. When you change how you access the repo it will warn you once, but allow it.
  2. I back up my server to another server; my connection is ~15Mbps up and the remote server is 200Mbps down. I performed a backup last night that consisted of 5.8GB changed against a 7.1TB archive. It took 82 minutes; the equivalent local backup took 55 minutes. So if it all scales linearly, I'd say my backups take about 5 minutes per TB (total archive) plus copy time. If it's local that's about 1 minute per GB changed, and remote (with my line speed) about 7 minutes per GB changed. Remember that a lot of the time during the backup is spent scanning files on the remote machine using borg in server mode, not transmitting data.
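The rule of thumb in the post above (about 5 minutes per TB of total archive to scan, plus about 1 minute per GB changed locally, or about 7 minutes per GB changed over that line speed) can be turned into a rough estimator. The rates are the figures quoted in the post, not borg constants, and the model only lands in the right ballpark of the observed 55 and 82 minutes:

```python
# Back-of-envelope model of borg backup time, using the rates quoted above:
# a scan cost proportional to total archive size, plus a transfer cost
# proportional to the amount of changed data.
def estimate_minutes(archive_tb, changed_gb,
                     scan_min_per_tb=5, transfer_min_per_gb=1):
    return archive_tb * scan_min_per_tb + changed_gb * transfer_min_per_gb

# 7.1TB archive, 5.8GB changed (the numbers from the post above)
local = estimate_minutes(7.1, 5.8, transfer_min_per_gb=1)   # observed: 55 min
remote = estimate_minutes(7.1, 5.8, transfer_min_per_gb=7)  # observed: 82 min
print(round(local), round(remote))  # 41 76
```

The model underestimates the local case, which suggests some fixed overhead (cache sync, archive finalization) on top of the linear scan-plus-transfer terms.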
  3. The filename makes no difference at all; what matters is the binary content of the files. Borg breaks files into roughly 2MB "chunks" (by default) that are deduplicated irrespective of metadata like filename or path. This way, even large files that have been partially changed can be partially deduplicated, and renamed files are fully deduplicated. In your example, different files with the same filename would both be broken into chunks and fully retained in the backup. When the last backup archive that uses a chunk is purged, the chunk is purged too. Reading up on the "chunker" might help you get an appreciation for it; I've found (as a borg user) it's very efficient. https://borgbackup.readthedocs.io/en/stable/internals/data-structures.html#chunker-details
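The dedup behavior described in the post above can be illustrated with a toy sketch. Real borg uses content-defined chunking (a buzhash rolling hash with ~2MB target chunks, per the linked docs); tiny fixed-size chunks are used here purely to keep the example small, so this is an illustration of the principle, not borg's implementation:

```python
import hashlib

store = {}  # chunk hash -> chunk bytes, shared across all files and archives

def chunks(data, size=4):
    # Fixed-size splitting for illustration; borg's chunker cuts at
    # content-defined boundaries so inserts don't shift every chunk.
    return [data[i:i + size] for i in range(0, len(data), size)]

def backup(data):
    """'Back up' a file: store only chunks not already present,
    and return the list of chunk references for this file."""
    refs = []
    for c in chunks(data):
        h = hashlib.sha256(c).hexdigest()
        store.setdefault(h, c)  # no-op if the chunk is already stored
        refs.append(h)
    return refs

refs1 = backup(b"AAAABBBBCCCC")  # first backup: 3 chunks stored
refs2 = backup(b"AAAABBBBCCCC")  # "renamed" file, identical content
print(len(store))  # 3 -- the rename added zero new chunks
```

Because only content is hashed, a renamed file resolves to the same chunk references, and purging works by dropping chunks no remaining archive references.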
  4. I struggled with something similar, and I think the issue is that once a folder is in the cache, borg won't stop backing it up; the exclusion only applies going forward, for new folders. I bet if you do a simple test on a new repo with that folder it will work as expected. You might want to ask the folks over at the borg GitHub repo, since it's borg behavior that's causing it.
  5. I think the issue is that you created the borg repo and then moved it, or are trying to access it using a different path. Run this borg command manually and confirm that you're OK with the move, then try running borgmatic again: borg list 10.10.1.184:/mnt/user/borg
  6. I've never successfully exposed a FUSE mount in one docker container to another; I usually perform browse and restore operations from the command line within the borgmatic docker. A couple of options that may work: 1) install the Vorta container, which gives you a borg GUI so you can at least visually browse the archive, even if it's not really a file manager. 2) install borg and llfuse into the base Unraid image using nerdpack. I would think this should work, but I've never gotten the FUSE mount to appear via Samba or a file-manager docker container; not sure why, because I think it should. 3) install an Ubuntu VM with borg and llfuse and try to mount the archive there, browsing it visually through VNC. I haven't actually tried this one yet. Let us know if you figure out a working solution!
  7. I wonder if there is some lower-level error happening with the NFS mount. I would start by running: borg prune --verbose --keep-hourly 2 --keep-daily 7 --keep-weekly 4 --keep-monthly 12 --keep-yearly 10 --prefix backup- /mnt/borg-repository/NAS2 to see if there are any more detailed error messages (note the --verbose) you can get from borg, bypassing the borgmatic script altogether. This might be a better question for the borg GitHub issues page, to see if they can help understand the specific borg error.
  8. From the command line: docker image remove df9b9388f04a. But I seriously doubt the image is corrupted; a little more context on the error would be helpful. It's probably an error in the config.yaml, the repo can't be contacted (especially if it's a remote repo), or there's a stale lock. Can you execute the following command from within the container? borg list /path/to/repo
  9. If you're mapping in /mnt/user, it should already be there... you just need to add the path to the config.yaml. Otherwise, yes, just add a new path mapping into the container and add that container path to borgmatic's config.yaml.
  10. Agree... no reason you can't back up /boot and /mnt/user/appdata with borg. I don't like CA appdata backup because the backup files are huge and not very friendly to deduplication. I've used my borg archive to restore Plex multiple times without issue. With daily backups, the risk of corruption carrying across successive backups is limited, especially if you're using the built-in DB backup capabilities of the various docker containers.
  11. In general, I like to keep Unraid as "stock" as possible and install any applications as containers; this is generally viewed as a best practice. Note, however, that the borgmatic container is intended to be a backup client... it does not have an SSH server built in. If you're running borgmatic or Vorta on your laptop (via docker or otherwise), your easiest path will be to install the borg binary from nerdpack so your Unraid box can act as a borg server (note that borg doesn't run as a server daemon; it's just a binary that gets invoked over SSH in "server mode" when necessary). If you go that path, you'll need to set up an SSH keypair between your laptop and Unraid box, which has nothing to do with borg. Once you can SSH into the Unraid box without a password, you'll simply add "1.2.3.4:/mnt/disks/backup_disk/laptop_repo" as a destination in your borgmatic config file, where 1.2.3.4 is your Unraid IP and /mnt/disks/backup_disk/laptop_repo is the path on the Unraid box where you want your backup to reside. You can try running a borgserver docker as an alternative, but you'll have to manage SSH keys within it, pick an alternative SSH port, and expose that port to your LAN. Doable, but a bit more complicated.
  12. The package maintainer declined to add an SSH server to the borgmatic docker container... see the discussion in the link. You'll either need to install this simple borgserver docker, or you'll need the borg binary installed on the machine you're trying to back up to remotely. If the target is also running Unraid, this is easy enough with nerdpack.
  13. Just wrapped this up with the vorta-docker container owner. The next version of the docker container should include support for mounting archives within Vorta. Once the update is pushed out, you'll need to add device "/dev/fuse" and "--cap-add SYS_ADMIN" under "extra parameters" in the advanced view. After mounting the archive in the GUI, you'll need to open a command line in the Vorta docker as the "app" user, since that's the user the Vorta application runs as; the default "console" command from the GUI won't do it. Run the following from the Unraid terminal: docker exec -it --user app vorta sh Alternatively, you could skip mounting the archive from within the GUI, open a standard Vorta console, and manually mount the repo with: borg mount /destination/repo /destination/mountpoint Probably easier in the long run.
  14. Mine's broken too. If you set the image to modem7/wordle:legacy, it works again.
  15. I submitted a feature request today to include the llfuse module in the docker container. Is there a way to specify where logfiles are kept? Other than what's sent to the syslog (viewable in the Docker logfile), I'd like to be able to see the more detailed logfiles referenced in the GUI; I can't seem to find them anywhere.