Posts posted by sdub

  1. I installed the "neofetch" application using the Nerd pack but was bummed to see that Unraid wasn't supported as a detectable OS. 

     

    I couldn't find one anywhere else, so I created my own:

     

    [Screenshot: neofetch output showing the custom Unraid ASCII logo]

     

    To replicate this, you simply need to do the following:

    1. Install Nerd Pack from the CA App Store
    2. Install the "neofetch" package
    3. Copy the attached "config.conf" and "unraid_ascii.txt" to ~/.config/neofetch
    4. If you want this to display automatically at login, add the line "neofetch" to the bottom of ~/.bash_profile (see the sketch below)
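
    A minimal shell sketch of steps 3-4, assuming you've downloaded the two attachments to /boot (the download location is hypothetical):

    mkdir -p ~/.config/neofetch
    cp /boot/config.conf /boot/unraid_ascii.txt ~/.config/neofetch/
    echo "neofetch" >> ~/.bash_profile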

     

    unraid_ascii.txt

    config.conf

  2. 34 minutes ago, manHands said:

     

    Genius! Sounds like I have my Windows backup solution now.

     

    One other quick question on your unRAID borg backup. How do you have the unassigned drive formatted? Did you just stick with XFS, or use something else for greater compatibility just in case you had to put the disk in another machine as part of a restore?

    Yeah, I just used XFS. It's not as widely supported as FAT, but it's close, and there are ways to mount an XFS partition under Windows if you had to.

  3. 39 minutes ago, manHands said:

    Does that make sense? Anything I might be missing with this approach or any other thoughts you may have? I appreciate the help.

     

     

    Funny you mention that, because I was just thinking about this "issue" earlier this week... of course an extra backup isn't really an "issue", just wasted space.

     

    I've slowly moved almost all of the content I care about off the computers on the network... they almost exclusively access data stored on the Unraid NAS or via Nextcloud, which makes this issue largely go away. The one exception is Adobe Lightroom... that catalog really wants to be on a local disk.

     

    I just set up Borgmatic in a Docker container under Windows to back up the Lightroom and User folders from my Windows PC to an Unraid share that is NOT backed up by Borg (pretty easy and works great). If I include that share in my Unraid backup, I'll have way more copies of the data than I really need. I don't need an incremental backup of an incremental backup.

     

    I think I'm moving away from the "funnel" mentality. Instead, I'll have Unraid/Borg create local and remote repos for its important data, and Windows/Borg create local and remote repos for its important data. The Windows "local" repo will be kept in an Unraid user share that isn't backed up, or on the unassigned drive I use for backup.

     

    Going forward, I'll probably do the same on any other PCs that have data I care about. The reality is that all of my other family members work almost exclusively out of the cloud (or my Unraid cloud), so there's very little risk of data loss.

     

  4. 2 hours ago, manHands said:

     

    Why is the repo recommended to be on an unassigned drive? What are the downsides to using a disk in the array that is only used for backups? You then also get the parity protection. Genuinely curious on this one. Thanks.

     

    The point was independence... you don't want a problem with your array corrupting both the data and the backup. As long as you're also backing up to a third, offsite location, the risk is realistically pretty low.

     

    You could specifically exclude the disk from all of your user shares and disable cache writes to minimize exposure to an Unraid bug, but parity writes will slow down the backups, which can already take several hours for a multi-terabyte job.

  5. 3 hours ago, hrv231 said:

    Just letting you all know that there is a fork of b3vis's image that seems more up-to-date.

    https://hub.docker.com/r/modem7/borgmatic-docker
    https://www.modem7.com/books/docker-backup/page/backup-docker-using-borgmatic

     

     

    Thanks for the heads up... I'll take a look. It's too bad that b3vis has fallen behind on versions. Is there a specific feature in 1.5.13+ that you're looking for, or do you just not want to fall too far behind the latest version?

     

    This modem7 fork is brand new, so I might not jump ship just yet... definitely something to keep an eye on.

  6. 1 hour ago, touz said:

     

    Could it be that SSH transfers slowly, or that borgmatic encryption slows down the process significantly? Both machines indicate low CPU consumption.

     

     


    Probably a question for the borg forum like you said, but it could be several things when encryption is on. Some encryption modes are faster than others, and Docker throttling CPU or memory usage could be the root cause of the bottleneck, even if the host isn't pegged at 100%. Also, the encryption is probably single-threaded, so overall CPU utilization might not be representative if a single core is at 100%.

     

    Have you tried creating a test repo with no encryption for comparison?
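
    A minimal sketch of that comparison, keeping SSH in the loop so only the encryption changes (the repo and source paths are hypothetical):

    ##Initialize an unencrypted test repo on the remote machine
    borg init --encryption=none user@hostname:/path/to/test-repo

    ##Back up a sample directory and compare throughput against the encrypted repo
    borg create --stats user@hostname:/path/to/test-repo::speedtest /mnt/user/somedata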

  7. I've been having similar issues... for some reason, the docker image never updated from 19.0.3 automatically from the Unraid docker dashboard.  I just went through a process to update to 22.0.0... I'll explain below.

     

    On 7/4/2021 at 2:16 PM, Richamc01 said:

    Then I found your post and I tried the first command you mention and I get:

    
    
    sudo: unknown user: abc
    sudo: unable to initialize policy plugin

     

     

    I found that these issues were usually because I didn't give Nextcloud enough time to start.  Watch the logfile for the following line before trying to execute occ commands.

    [cont-init.d] 01-envfile: exited 0.
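
    One way to wait for it (a sketch; assumes the container is named "nextcloud"):

    docker logs -f nextcloud 2>&1 | grep -m1 '01-envfile: exited 0'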

     

     

    On 7/4/2021 at 2:16 PM, Richamc01 said:

    Exception: Updates between multiple major versions and downgrades are unsupported. Update failed Maintenance mode is kept active Resetting log level

     

    Didn't scour through all 199 pages of this support thread, but here was my solution...  it doesn't want to upgrade more than one major version at a time, so do just that... one major version at a time.  Basically, I followed the instructions in the stickied post, option 3, except for the version change indicated below:

     

    ##Turn on maintenance mode
    docker exec -it nextcloud occ maintenance:mode --on
    
    ##Backup current nextcloud install
    docker exec -it nextcloud mv /config/www/nextcloud /config/www/nextcloud-backup
    
    ##Update to latest v19 
    docker exec -it nextcloud wget https://download.nextcloud.com/server/releases/latest-19.tar.bz2 -P /config
    docker exec -it nextcloud tar -xvf /config/latest-19.tar.bz2 -C /config/www
    
    ##Copy across old config.php from backup
    docker exec -it nextcloud cp /config/www/nextcloud-backup/config/config.php /config/www/nextcloud/config/config.php
    
    ##Now Restart docker container
    docker restart nextcloud
    
    ##Perform upgrade
    docker exec -it nextcloud occ upgrade
    
    ##Turn off maintenance mode
    docker exec -it nextcloud occ maintenance:mode --off
    
    ## Now Restart docker container
    docker restart nextcloud

     

    After updating, you should go to the Nextcloud /settings/admin/overview page and address any security & setup warnings that are found before proceeding.
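
    You can also confirm the installed version from the CLI before moving on:

    docker exec -it nextcloud occ status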

     

    Then repeat the whole process except changing this step:

    ##Update to latest v20
    docker exec -it nextcloud wget https://download.nextcloud.com/server/releases/latest-20.tar.bz2 -P /config
    docker exec -it nextcloud tar -xvf /config/latest-20.tar.bz2 -C /config/www

     

    Address any warnings as above, then repeat the whole process except changing this step:

    ##Update to latest v21 
    docker exec -it nextcloud wget https://download.nextcloud.com/server/releases/latest-21.tar.bz2 -P /config
    docker exec -it nextcloud tar -xvf /config/latest-21.tar.bz2 -C /config/www

     

    Address any warnings as above, then finally repeat with the steps as indicated in the sticky to get to v22:

    ##Grab newest nextcloud release and unpack it
    docker exec -it nextcloud wget https://download.nextcloud.com/server/releases/latest.tar.bz2 -P /config
    docker exec -it nextcloud tar -xvf /config/latest.tar.bz2 -C /config/www

     

    Once everything is working, you can remove the downloads and backup folders:

    ##Remove backup folder
    docker exec -it nextcloud rm -rf /config/www/nextcloud-backup
    
    ##Remove ALL Nextcloud tar files
    docker exec -it nextcloud rm /config/latest*.tar.bz2

     

  8. Thank you for your answer. I understand that's how it works if I were backing up to an unassigned disk or inside the array; however, I'm trying to back up to a remote machine over SSH. My understanding is that there's no local mount to be set in Unraid for that, so I don't really understand which value to put for that path (correct me if I'm wrong).

     Thanks again!

    You’ll need to be able to SSH from the Docker container into the remote server with SSH keys (no password). Then you can init the remote repo with:

    borg init --encryption=repokey-blake2 user@hostname:/path/to/repo

    Where /path/to/repo is on the remote machine. (Recent borg versions require choosing an --encryption mode at init.)

    Note that the remote server needs to have borg installed since the commands get executed remotely.
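
    A sketch of the key setup from a shell inside the borgmatic container (assumes ssh-keygen and ssh-copy-id are available there; user@hostname is your remote server):

    ssh-keygen -t ed25519 -f /root/.ssh/id_ed25519 -N ""
    ssh-copy-id -i /root/.ssh/id_ed25519.pub user@hostname
    ssh user@hostname borg --version   ##confirms password-less login and that borg is installed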

    In the case where you have no local repo, just this remote one, you can leave that path variable blank or delete it from the Docker template.

    I’d encourage you to have both a local and remote backup though… as they say, one is none!


  9. I have the same question. I got SSH working, but I don't know what to put in "Borg Repo (Backup Destination):" to initialize the repo. Did you finally get it working? If so, what was the solution you found?
     
    Thanks!

    That’s the local path in Unraid where you want the borg repo to live. It can be inside your array or on a dedicated volume. It gets mapped into the /mnt/borg-repository folder INSIDE the Docker container.

    Once you open a shell inside the Docker container, you'll initialize with a command like

    borg init --encryption=repokey-blake2 /mnt/borg-repository

    regardless of the location you chose for the repo outside of Docker.

    Once you initialize with this command, you should see the repo files in the location you put in the "Borg Repo (Backup Destination):" variable.


  10. At the moment I'm using CA Backup to back up my appdata folder, which is located on my cache drive. But for some other schedules it is really annoying that all containers have to be stopped during the backup. I already know that there is a way not to stop some containers, but I'm searching for an easier solution. I would therefore like to back up my appdata folder with borgmatic.

      But there are a few things I have not yet understood, like why Docker containers are not allowed to run during a backup of the appdata folder. What can happen in the worst case if they run, and why?

      I need this information to weigh the potential risk against the advantages of backing up the appdata folder with borgmatic without stopping the containers.



     As others have said, the issue is with file-based databases like SQLite that many dockers use. There's a small chance that the backup will not be valid if it happens mid-write on a database.

     I don't like stopping Docker to do backups, though, and I prefer to have appdata be part of my continuous Borg backup, so my approach is:

     1. Wherever possible, opt for Postgres or some other database that can be properly dumped into the Borg backup (see the sketch below).

    2. The backup is probably going to be ok unless it happens to occur mid-write on the database.

     3. If this does happen and I need to restore, the odds of successive backups both being corrupt are infinitesimal. That's the advantage of rolling differential backups.

    4. Most Docker applications that use SQLite databases have a built-in scheduled database backup functionality that’s going to be valid. I make sure to turn this on wherever I can (Plex, etc.) and make sure that backup path is included in the Borg backup.

     I think with these multiple lines of defense, my databases are sufficiently protected using Borg backup, without CA Backup and without stopping the Docker containers.
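
     For item 1, borgmatic has a built-in hook that dumps PostgreSQL databases into each archive. A minimal config.yaml sketch (the database name, host, and credentials are placeholders):

     hooks:
         postgresql_databases:
             - name: nextcloud
               hostname: 192.168.200.37
               username: nextcloud
               password: MYPGPASSWD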

    If anyone sees a flaw in this logic I’d really like to know!


    Sent from my iPhone using Tapatalk
  11. 5 hours ago, TrikkStar said:

    I'm slightly confused about how I'm supposed to init the borg repo. Do I do it from within the borgmatic container, or do I do it in UnRaid through the Nerdpack tools?

    I'm trying to do a manual backup from within the container after initing the repo in UnRaid itself, but I'm getting the following errors:

     

     

     

    The specific reason for your error appears to be that "/mnt/borg-repository" doesn't exist in the environment where you're running the borgmatic command.

     

    Everything should be done from within the borgmatic container... as a matter of fact you don't even need to install the nerdpack borg tools at all in Unraid.  This docker container is the only thing you need. 

     

    The whole point of this CA package was to allow you to keep a "stock" Unraid installation, which does not include borg or borgmatic.

     

     

  12. 1 hour ago, HagenS said:
    Hi,
    is it possible to mount an unassigned device just before borg starts and unmount the drive after everything is finished? I have a separate drive for backups and, for safety, would like it to be accessible only during backup/restore runs.
    I tried from inside the container, but it seems it doesn't have sufficient rights to mount the drive from there. Hope there is a workaround.
    Thanks...


    There are hooks in the borgmatic config file for "before_backup" and "after_backup" that could be used to invoke an SSH command telling the Unraid host to mount/unmount volumes.

    For example (where Unraid's IP is 192.168.1.6):

    before_backup:
        - ssh 192.168.1.6 mount /dev/sdj1 /mnt/disks/borg_backup

    after_backup:
        - ssh 192.168.1.6 umount /mnt/disks/borg_backup

     

  13. 1 hour ago, sdub said:

    OK, curl is now part of the Docker image. You will need to go to the Docker tab and do a "force update" to get it.

    @T0a It occurred to me that you could also accomplish this without using HA Dockermon or curl by just executing an "ssh root@host docker stop CONTAINER" command directly.
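
    A hooks sketch along those lines (the host and container name are placeholders):

    before_backup:
        - ssh root@host docker stop CONTAINER
    after_backup:
        - ssh root@host docker start CONTAINER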

  14. On 12/19/2020 at 9:31 AM, Greygoose said:

    I have this working using a local machine as a backup location. I now want to implement rclone to have borgmatic upload to a remote folder. How can I implement this using the built-in FUSE mount capabilities of borgmatic instead of using the User Scripts plugin to mount rclone?

     

    I'm far from an rclone expert, but I'm not sure how you're proposing to use FUSE... in borg, it's used to expose the contents of a borg repo so you can extract files. If you indeed want to do this, you can follow the borg mount documentation (see the sketch below). Mount the repo to /mnt/fuse and you should be able to see it from the Unraid host via the "Fuse mount point" path in the borgmatic container config. What I assume you really want to do is sync the borg repo to a cloud storage provider.
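
    If you do go the borg mount route, a minimal sketch from a shell inside the container:

    ##Browse the repo's archives through FUSE
    borg mount /mnt/borg-repository /mnt/fuse

    ##Unmount when finished
    borg umount /mnt/fuse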

     

    I'm afraid rclone is not part of this container, so you will need to handle it separately. I can think of 3 options:

     

    1. There's a good SpaceinvaderOne tutorial out there if you want to use the rclone plugin available in the CA store (waseh), but that's basically no different than installing a user script (not my preference). If you go this route, you could invoke rclone on the Unraid host from the docker container via an SSH command. Something like "ssh root@hostname rclone sync [opts]".

     

    2. (Preferred) If you really want to avoid installing anything to your base image, I'd recommend installing a dedicated rclone docker like "pfidr34/docker-rclone" or the one available in the CA store (thomast_88). You could then run rclone asynchronously on its own cron schedule, but you need to be careful that the two don't run at the same time. If you want to run rclone automatically, use the "after_backup" hook to execute a command from within the borgmatic container that invokes rclone in the other container. Something like "ssh root@hostname docker exec rclone rclone sync [opts]" (see the sketch after this list).

     

     

     

    3. A final option is to install a single docker container with both borgmatic and rclone installed. There isn't one in the CA store, so you'll need to install from Docker Hub with a custom template, but it's totally doable. Here's one that looks like it would work: https://hub.docker.com/r/knthmn/borgmatic-rclone/
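
    For option 2's hook approach, a sketch (the container name "rclone", the remote name, and the paths are all hypothetical):

    after_backup:
        - ssh root@hostname docker exec rclone rclone sync /data/borg-repo remote:borg-repo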

     

     

     

     

  15. On 12/16/2020 at 1:04 PM, T0a said:

    The last piece missing is stopping the docker container I mentioned above. The plan is to use "HA dockermon"  from within the borgmatic container. Would you mind adding curl to the docker container for me? 

     

    OK, curl is now part of the Docker image. You will need to go to the Docker tab and do a "force update" to get it.

  16. 1 hour ago, T0a said:

    The last piece missing is stopping the docker container I mentioned above. The plan is to use "HA dockermon"  from within the borgmatic container. Would you mind adding curl to the docker container for me? Then, I would be able to stop any container via:

    
    curl -v -X POST <ha_dockermon_ip>:8126/container/container_name --header 'content-type: application/octet-stream' --data '{"state": "stop"}'

     

    I submitted a feature request to the docker maintainer for this... seems pretty straightforward.  I could fork the docker, but I'd rather stay tied to the base image.  

  17.  

    Sorry for not answering sooner... my post notifications were accidentally disabled.  

     

    On 11/22/2020 at 3:51 PM, T0a said:

    How do you make sure that files are not getting written by your docker containers while the backup is running? The CA backup stops containers to prevent file corruption AFAIK. I cannot see such a mechanism in your solution. Technically, this would be possible with the before_backup and after_backup hooks.

     

    That's a great point that I could stop/start docker using the hooks, but I don't want my system down for 4 hours a day (I run an hour-long local and an hour-long remote backup twice daily). Not having docker downtime was a significant reason I didn't want to use the CA backup solution. In theory I could minimize the downtime by keeping a separate borg archive for JUST appdata so that backup would be quicker, but with Plex I have a huge number of small files, so it's still longer than I prefer.

     

    My rationalization for my approach is twofold... 

    1. I'm not sure what the odds are that a file in the backup could get corrupted, but it's somewhere between "unlikely" and "possible". The only files I'd really be worried about are file-based databases like SQLite. Since I'm doing twice-daily backups, the odds of backing up consecutive corrupted copies seem very, very small.
    2. Most of the programs that use SQLite (Plex, 'arrs, etc.) have an in-app option for periodic database backups. Those backups get ingested into the archives, and I don't have to worry about them being corrupted. About the only one that doesn't have this is my Grafana/InfluxDB docker, but I'm not particularly concerned about losing that data. If I were, I'm sure I could find a way to have it dump periodic DB images.

    If this seems dubious to you, please let me know why... it's just my thought process.

     

    Quote

    Not sure if any further/similar steps need to be taken into account for the flash drive. May be worth looking into the CA backup code to review the protection mechanisms.

    I'm not really sure either... I understand that flash backups will be coming in Unraid 6.9 when it releases, so I'll probably just take my chances until then.  

     

  18. On 11/25/2020 at 2:22 PM, cheesemarathon said:

    Still having issues:

     

    Hopefully you got your SSH issues sorted... it sounds like laur had the right advice. For anyone else who finds this, I'd recommend opening a shell into the borgmatic container and trying to SSH from there. If password-less login doesn't work there, borgmatic isn't going to work either.
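
    A quick test sketch (assumes the container is named "borgmatic" and your key lives at /root/.ssh/id_rsa, as in the example config below):

    docker exec -it borgmatic ssh -i /root/.ssh/id_rsa user@remotehost echo ok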

     

Sorry for the slow replies... for some reason I stopped getting post notifications. Just turned them back on.

     

    On 11/24/2020 at 6:50 AM, laur said:

    > This docker can be used with the Unraid rclone plugin if you wish to mirror your repo to a supported cloud service.

    Note this goes against borg recommendation.

     

    Yes, I realize that... I just listed it as an option for those using a remote cloud service without borg installed. The only real options for the borg-recommended approach are to back up to something like rsync.net/BorgBase or to a family member's or friend's server running borg. For everyone else, the only option is to use rsync/rclone and hope you don't propagate errors. I personally back up to a server I set up at a family member's house for my remote backups.

     

    Quote

    > Files cache set to use "mtime,size" - Very important as unRAID does not have persistent inode values
    That's a great point! Will amend my setup. Why did you change the default 'ctime' to 'mtime' though?

    I based that on the original tutorial from ds-unraid. I suppose the rationale is that you care more about when a file's contents were modified than when its metadata changed. ctime updates whenever mtime does (plus on metadata-only changes), so the default should also work, though I'm not sure why you'd want to re-backup a file whose contents haven't changed. I'm sure there's a scenario where that makes more sense, though.

     

  20. Here are example crontab and config files with some descriptions.  Both files should be placed in the appdata/borgmatic/config folder

     

    Example crontab:

    • Twice Daily backups @ 1a, 1p
    • Repo & archives checked weekly Wed @ 6a
    • My repo is rather large (~5TB, 1M files), so it made sense to put the prune/create and check tasks on separate schedules
      • The prune/create tasks take about 1 hr per repo to complete with minimal changes (for reference)
      • The repo/archive check tasks take about 9 hr per repo to complete (for reference)

    crontab.txt:

    0 1,13 * * * borgmatic prune create -v 1 --stats 2>&1
    0 6    * * 3 borgmatic check -v 1 2>&1

     

     

    Example Borgmatic config:

    • Several source directories are included (read-only):
      • Flash drive and appdata are incrementally backed up (alternative to CA backup utility)
      • Backup share acts like a funnel for other data to be backed up
        • Other machines on my network back themselves up to an unRAID "backup" share (Windows 10 backup, time machine, etc.)
        • Docker images that use SQLite are configured to place their DB backups in the "backup" share
      • Other irreplaceable user shares
    • Two repos are updated in succession:
      • /mnt/borg-repository - Docker mapped volume NOT part of my array
      • remote.mydomain.net:/mnt/disks/borg_remote/repo - A repo that resides on a family member's Linux box with borg installed
    • Files cache set to use "mtime,size" - Very important as unRAID does not have persistent inode values
    • Folders containing a ".nobackup" file are ignored, as are "cache" and "trash" folders
    • There are many options for how to maintain your repo passphrase/keys.  I opted for a simple passphrase that I specify in the config file
    • Compression options are available, but I don't bother since 95% of my data is binary compressed data (MP4, JPG, etc)
    • If you're backing up to a remote repo, you'll need to make sure that your SSH keypairs are working for password-less login.  Don't forget to set the SSH folder permissions properly, or your keyfiles won't work.  
    • I have a MariaDB that runs as a database for Nextcloud and Bookstack.  A full database dump is included in every backup
    • Healthchecks.io monitors the whole thing and notifies me if a backup doesn't complete
    • My retention policy is 2 hourly, 7 daily, 4 weekly, 12 monthly, 10 yearly

    I deleted the comments for brevity in the example below, but I recommend you start with the official reference template and make your edits from there.

     

    config.yaml:

    location:
        source_directories:
            - /boot
            - /mnt/user/appdata
            - /mnt/user/backup
            - /mnt/user/nextcloud
            - /mnt/user/music
            - /mnt/user/pictures
        repositories:
            - /mnt/borg-repository
            - remote.mydomain.net:/mnt/disks/borg_remote/repo
        one_file_system: true
        files_cache: mtime,size
        patterns:
            - '- [Tt]rash'
            - '- [Cc]ache'
        exclude_if_present:
            - .nobackup
            - .NOBACKUP
    
    storage:
        encryption_passphrase: "MYREPOPASSWORD"
        compression: none
        ssh_command: ssh -i /root/.ssh/id_rsa
        archive_name_format: 'backup-{now}'
    
    retention:
        keep_hourly: 2
        keep_daily: 7
        keep_weekly: 4
        keep_monthly: 12
        keep_yearly: 10
        prefix: 'backup-'
    
    consistency:
        checks:
            - repository
            - archives
        prefix: 'backup-'
    
    hooks:
        before_backup:
            - echo "Starting a backup."
        after_backup:
            - echo "Finished a backup."
        on_error:
            - echo "Error during prune/create/check."
        mysql_databases:
            - name: all
              hostname: 192.168.200.37
              password: MYSQLPASSWD
        healthchecks: https://hc-ping.com/MYUUID

     

  21. Application: borgmatic

    Docker Hub: https://hub.docker.com/r/b3vis/borgmatic

    Github: https://github.com/b3vis/docker-borgmatic

    Template's repo: https://github.com/Sdub76/unraid_docker_templates

     

    An Alpine Linux Docker container for witten's borgmatic by b3vis. Protect your files with client-side encryption. Backup your databases too. Monitor it all with integrated third-party services.

     

    Getting Started:

    • It is recommended that your Borg repo and cache be located on a drive outside of your array (via unassigned devices plugin)
    • Before you backup to a new repo, you need to initialize it first. Examples at https://borgbackup.readthedocs.io/en/stable/usage/init.html
    • Place your crontab.txt and config.yaml in the "Borgmatic config" folder specified in the docker config. See examples below.
    • A mounted repo can be accessed within Unraid using the "Fuse mount point" folder specified in the docker config. Example of how to mount a Borg archive at https://borgbackup.readthedocs.io/en/stable/usage/mount.html

     

    Support:

    Your best bet for Borg/Borgmatic support is to refer to the links above, as the template author does not maintain the application.

     

    Why use this image?

    Borgmatic is a simple, configuration-driven front-end to the excellent BorgBackup.  BorgBackup (short: Borg) is a deduplicating backup program. Optionally, it supports compression and authenticated encryption.  The main goal of Borg is to provide an efficient and secure way to backup data. The data deduplication technique used makes Borg suitable for daily backups since only changes are stored. The authenticated encryption technique makes it suitable for backups to not fully trusted targets.

     

    Other Unraid/Borg solutions require installation of software to the base Unraid image.  Running these tools along with their dependencies is what Docker was built for.  This particular image does not support rclone, but does support remote repositories via SSH.  This docker can be used with the Unraid rclone plugin if you wish to mirror your repo to a supported cloud service.
