sdub

Members
  • Content Count

    14
  • Joined

  • Last visited

Community Reputation

2 Neutral

About sdub

  • Rank
    Member

  1. [Tutorial] Borgmatic now available in CA Appstore (aka the NEW best method to back up your data) : unRAID (reddit.com)
  2. Here are example crontab and config files with some descriptions. Both files should be placed in the appdata/borgmatic/config folder.

     Example crontab:
       • Twice-daily backups at 1a and 1p; repo & archives checked weekly on Wednesday at 6a.
       • My repo is rather large (~5TB, 1M files), so it was sensible to put the prune/create and check jobs on separate schedules.
       • The prune/create tasks take about 1 hr per repo to complete with minimal changes (for reference).
       • The repo/archive check tasks take about 9 hr per repo to complete (for reference).

     crontab.txt:
       0 1,13 * * * PATH=$PATH:/usr/bin /usr/bin/borgmatic prune create -v 1 --stats 2>&1
       0 6 * * 3 PATH=$PATH:/usr/bin /usr/bin/borgmatic check -v 1 2>&1

     Example Borgmatic config:
       • Several source directories are included (read-only):
           • The flash drive and appdata are incrementally backed up (an alternative to the CA backup utility).
           • A "backup" share acts like a funnel for other data to be backed up:
               • Other machines on my network back themselves up to an unRAID "backup" share (Windows 10 backup, Time Machine, etc.)
               • Docker containers that use SQLite are configured to place their DB backups in the "backup" share
           • Other irreplaceable user shares.
       • Two repos are updated in succession:
           • /mnt/borg-repository - a Docker-mapped volume NOT part of my array
           • remote.mydomain.net:/mnt/disks/borg_remote/repo - a repo that resides on a family member's Linux box with borg installed
       • The files cache is set to use "mtime,size" - very important, as unRAID does not have persistent inode values.
       • Folders with a ".nobackup" file are ignored, as are "cache" and "trash" folders.
       • There are many options for how to maintain your repo passphrase/keys. I opted for a simple passphrase that I specify in the config file.
       • Compression options are available, but I don't bother since 95% of my data is already-compressed binary data (MP4, JPG, etc.).
       • If you're backing up to a remote repo, you'll need to make sure that your SSH keypairs are working for password-less login. Don't forget to set the SSH folder permissions properly, or your keyfiles won't work (see the SSH sketch at the end of this post).
       • I have a MariaDB instance that serves as the database for Nextcloud and Bookstack. A full database dump is included in every backup.
       • Healthchecks.io monitors the whole thing and notifies me if a backup doesn't complete.
       • My retention policy is 2 hourly, 7 daily, 4 weekly, 12 monthly, 10 yearly.

     I deleted the comments for brevity in the example below, but I recommend you start with the official reference template and make your edits from there.

     config.yaml:

       location:
           source_directories:
               - /boot
               - /mnt/user/appdata
               - /mnt/user/backup
               - /mnt/user/nextcloud
               - /mnt/user/music
               - /mnt/user/pictures
           repositories:
               - /mnt/borg-repository
               - remote.mydomain.net:/mnt/disks/borg_remote/repo
           one_file_system: true
           files_cache: mtime,size
           patterns:
               - '- [Tt]rash'
               - '- [Cc]ache'
           exclude_if_present:
               - .nobackup
               - .NOBACKUP

       storage:
           encryption_passphrase: "MYREPOPASSWORD"
           compression: none
           ssh_command: ssh -i /root/.ssh/id_rsa
           archive_name_format: 'backup-{now}'

       retention:
           keep_hourly: 2
           keep_daily: 7
           keep_weekly: 4
           keep_monthly: 12
           keep_yearly: 10
           prefix: 'backup-'

       consistency:
           checks:
               - repository
               - archives
           prefix: 'backup-'

       hooks:
           before_backup:
               - echo "Starting a backup."
           after_backup:
               - echo "Finished a backup."
           on_error:
               - echo "Error during prune/create/check."
           mysql_databases:
               - name: all
                 hostname: 192.168.200.37
                 password: MYSQLPASSWD
           healthchecks: https://hc-ping.com/MYUUID
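     For the remote repo above, a minimal sketch of the SSH setup might look like the following, run from the borgmatic container's console. The key path matches the ssh_command in config.yaml and the host matches my repositories list; since no user is specified in the repo path, I'm assuming the connection is made as the container's root user, so adjust the account and paths to your own setup.

       # Generate a keypair with no passphrase (so the scheduled cron jobs can use it unattended)
       ssh-keygen -t rsa -b 4096 -f /root/.ssh/id_rsa -N ""

       # Push the public key to the remote repo host
       ssh-copy-id -i /root/.ssh/id_rsa.pub remote.mydomain.net

       # SSH refuses keyfiles with loose permissions: the folder must be 700 and the private key 600
       chmod 700 /root/.ssh
       chmod 600 /root/.ssh/id_rsa

       # Confirm password-less login works before the first scheduled backup
       ssh -i /root/.ssh/id_rsa remote.mydomain.net echo ok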
  3. Application: borgmatic
     Docker Hub: https://hub.docker.com/r/b3vis/borgmatic
     Github: https://github.com/b3vis/docker-borgmatic
     Template's repo: https://github.com/Sdub76/unraid_docker_templates

     An Alpine Linux Docker container for witten's borgmatic by b3vis. Protect your files with client-side encryption. Back up your databases too. Monitor it all with integrated third-party services.

     Getting Started:
       • It is recommended that your Borg repo and cache be located on a drive outside of your array (via the Unassigned Devices plugin).
       • Before you back up to a new repo, you need to initialize it first. Examples at https://borgbackup.readthedocs.io/en/stable/usage/init.html (see the sketch at the end of this post).
       • Place your crontab.txt and config.yaml in the "Borgmatic config" folder specified in the Docker config. See examples below.
       • A mounted repo can be accessed within Unraid using the "Fuse mount point" folder specified in the Docker config. Example of how to mount a Borg archive at https://borgbackup.readthedocs.io/en/stable/usage/mount.html

     Support:
     Your best bet for Borg/Borgmatic support is to refer to the following links, as the template author does not maintain the application.
       • Borgmatic Source: https://github.com/witten/borgmatic
       • Borgmatic Reference: https://torsion.org/borgmatic
       • Borgmatic Issues: https://projects.torsion.org/witten/borgmatic/issues
       • BorgBackup Reference: https://borgbackup.readthedocs.io

     Why use this image?
     Borgmatic is a simple, configuration-driven front-end to the excellent BorgBackup. BorgBackup (short: Borg) is a deduplicating backup program. Optionally, it supports compression and authenticated encryption. The main goal of Borg is to provide an efficient and secure way to back up data. The deduplication technique makes Borg suitable for daily backups, since only changes are stored, and the authenticated encryption makes it suitable for backups to not fully trusted targets.

     Other Unraid/Borg solutions require installing software to the base Unraid image; running these tools along with their dependencies is what Docker was built for. This particular image does not support rclone, but it does support remote repositories via SSH. This Docker can be used with the Unraid rclone plugin if you wish to mirror your repo to a supported cloud service.
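     As a rough illustration of the initialize and mount steps mentioned above, run something like this from the borgmatic container's console. The repo path is the Docker-mapped /mnt/borg-repository used in my example config; the archive name and /mnt/fuse folder are made-up stand-ins for a real archive and for whatever your "Fuse mount point" maps to, and repokey is just one of the encryption modes covered in the init docs.

       # One-time setup: create the repo before the first backup run
       # (repokey stores the key inside the repo, protected by your passphrase)
       borg init --encryption=repokey /mnt/borg-repository

       # To browse an existing backup later: list the archives, then FUSE-mount one
       borg list /mnt/borg-repository
       borg mount /mnt/borg-repository::backup-2020-10-09T01:00:02 /mnt/fuse

       # Unmount when finished
       borg umount /mnt/fuse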
  4. On the Docker config page, click Advanced View, then change the WebUI line to http://[IP]:[PORT:8888]/
  5. Did as I mentioned previously... removed the drive from the array and used the parity emulation to move the data off. While the drive was out, I did SMART tests and ran DriveSpeed. No problems at all. Threw it back into the array and started the rebuild of that disk. The behavior was very similar... for the first hour it was full speed, 100+MBps. After that it started slowing, and by hour 4 it was down to 200kBps. I think this drive is just garbage. Never again will I buy a Seagate. Here’s a somewhat related article that gives me déjà vu: https://forums.tomshardware.com/threads/seagate-barracuda-2tb-slow-issue.3443725/ I’m running a preclear on it, just begging it to actually give an error, but I doubt it will. I may pull it out and throw it in my Windows machine to do more testing with CrystalDiskMark or something, just to be sure it’s not a bad controller channel or something. Sent from my iPhone using Tapatalk
  6. I suppose the source (cache drive) could have been the problem and not the Seagate SATA destination drive, though that seems unlikely since the cache is a brand-new Samsung 970 EVO Plus and it’s been otherwise fine. With the drive still offline, I’ll try to rerun the mover.
  7. I ran the diskspeed tool on the drive as an unassigned device and found no problems... >100MBps across the board. Maybe I’ll try a preclear to see if that turns anything up. Very strange.
  8. That makes more sense... new plan:
       • Restart array, marking Disk 7 as "missing", letting the parity emulate the drive
       • Move all of the parity-emulated Drive 7 data off to other drives
       • Restart array, marking Disk 7 as "available", allowing the parity to rebuild it (as empty)
       • Go into the share settings and prohibit any share from using "drive 7"
       • Run the diskspeed diagnostics you linked above
       • If I'm able to get it fixed, allow drive 7 to be re-included in the shares. If not, I guess either replace it with a new 8TB HDD or define a new config without it.
  9. To get back to a working system, what's the best way to get that data off "drive 7"? I was thinking of the following procedure:
       • Remove Drive 7
       • Let the parity create it virtually
       • Move all of the parity-emulated Drive 7 data off to other drives
       • Reformat the physical Drive 7 in Windows or elsewhere
       • Bring it back into the array, allowing Unraid to clear it and resynchronize it (no data anymore)
       • Go into the share settings and prohibit any share from using "drive 7"
       • Run the diskspeed diagnostics you linked above
       • If I'm able to get it fixed, allow drive 7 to be re-included in the shares. If not, I guess either replace it with a new 8TB HDD or define a new config without it.
     Does that make sense?
  10. No... Drive 7 is still slow as a glacier, and I'm not sure how to fix it or diagnose what's going on.
  11. Thanks... wasn't aware of that. I hope they add a stop button in the UI in 6.9... seems like it would be pretty straightforward.
  12. Yes, I stopped the parity check to move the data off once I saw it would take 300+ days to complete.
  13. I've been an Unraid user for about a month now... see attached diagnostics. My array consists of 8 WD SATA drives ranging from 4 to 12TB, one 8TB Seagate drive, and one 12TB WD parity drive. I'm using a single 1TB Samsung NVMe drive for cache. It's worth noting that the Seagate drive is a freshly shucked STEB8000100, purchased in May 2018; I never had issues with it when it was connected via USB to my Windows box. All SMART diagnostics on all drives show healthy, write caching to the drives IS enabled, and the Fix Common Problems plugin shows no warnings.

      The migration of all 40TB of my data into my array went slowly but very smoothly... write speeds of about 80MBps with parity enabled during the entire copy. When the data was copied into the array, "Drive 7" (the Seagate drive) didn't end up with any data, by chance. The system has been up and running fully for about a week.

      This week, as new data was being written to the cache drive and moved to the various disks overnight, data started getting written to "Drive 7". My first sign of trouble was that the "mover" was never finishing. Coincidentally, I'd turned on mover logging, so I was able to watch what was going on with "tail -f /var/log/syslog | grep -i move". Several gigabytes would get moved over at full speed, then it would slow to a crawl... upwards of 7hr for a 4GB file - about 150kBps. I also noticed that my CPU iowait was upwards of 10%.

      Since there's no way to gracefully kill the mover script, I did a "ps -aux | grep move" and a "kill -9" on the PIDs, in an attempt to then do a graceful shutdown and reboot of the server. This was not successful... the top-level /usr/local/sbin/move script died in the "D" state, requiring a hard reboot. Upon reboot, Unraid detected the unclean shutdown and initiated a parity check. The reboot did not magically fix my "Drive 7" IO problems, however... the parity check was claiming it would take 306 days, so I canceled it.

      Overnight, the mover script kicked off again, and is running into the same brick wall. I killed the mover sub-process successfully, but this time didn't try to kill the "move" script. I'm trying to move the files off the drive manually, but it's not going any faster. For what it's worth, I copied a file off another drive and got the expected 160+MBps on my 10GbE connection. Any suggestions on what to do?

      yellowstone-diagnostics-20201009-0857.zip
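      In case it helps anyone hitting the same wall, here are the commands from the post above pulled together in one place; the PID is only a placeholder for whatever "ps" shows on your system.

        # Watch the mover in real time (requires mover logging to be enabled)
        tail -f /var/log/syslog | grep -i move

        # Find the mover processes and the PID of the stuck transfer
        ps -aux | grep move

        # Force-kill a stuck sub-process by PID (12345 is a placeholder).
        # Killing the top-level /usr/local/sbin/move this way left it in the "D"
        # state for me and still required a hard reboot, so I'd avoid that.
        kill -9 12345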
  14. Port forwarding seems to be broken across the board with PIA right now. Your options are:
       1) Try another server (replace the .ovpn file in your OpenVPN folder with another one from PIA; a rough sketch follows this post).
       2) Disable the VPN in the container for now (insecure).
       3) Disable port forwarding in the container settings for now (makes BitTorrent very slow).
      I’ve tried all of these and none are working for me. Occasionally I’ll not get a port-forwarding error (maybe one out of 10 times), but it doesn’t really work. I personally disabled port forwarding, am living with slow speeds, and am just keeping my eye out for an eventual fix or workaround once PIA finishes moving to their new servers.
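      If you go with option 1, swapping the server file looks roughly like this; the wget URL is PIA's standard OpenVPN config bundle, but the appdata path and region file name are just placeholders for whichever VPN container and region you actually use.

        # Fetch PIA's OpenVPN config bundle (one .ovpn per region) and unpack it
        wget https://www.privateinternetaccess.com/openvpn/openvpn.zip
        unzip openvpn.zip -d pia-configs
        ls pia-configs/

        # Keep exactly one .ovpn in the container's openvpn folder, then restart the container.
        # Both paths below are placeholders for your own container's appdata folder and chosen region.
        rm /mnt/user/appdata/your-vpn-container/openvpn/*.ovpn
        cp pia-configs/ca_toronto.ovpn /mnt/user/appdata/your-vpn-container/openvpn/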