Meles Meles

Members · 50 posts


Rookie (2/14) · 6 reputation

Recent Posts

  1. Funny you should mention this; I actually *like* the fact that it tells me that it's Google Authenticator. Then I don't need to check my Microsoft one first! 😁
  2. Sorry, just noticed this! No idea really... I just did, and it worked! My theory was that the /identity page returned less data, so it was less "work" for the server to do (for when 16 cores/32 threads isn't enough?)
  3. I run a docker container called "autoheal" (willfarrell/autoheal:1.2.0 is the version you want to use, as the "latest" tag is regenerated daily, which is a PITA). Just give each container you want it to monitor a label of "autoheal=true". It also needs some sort of healthcheck command (if there isn't one included in the image itself). Here's my Plex one (goes in "extra parameters" on the advanced view):

     ```
     --health-cmd 'curl --connect-timeout 15 --silent --show-error --fail http://localhost:32400/identity'
     ```

     *Remember: the port in this URL is the one from INSIDE the container, not where it's mapped to on the server.* You can do similar for most containers, although a trap for young players is that some of the images don't have curl, so you need to use wget (and alter the parameters to suit). I've attached my template for autoheal (from /boot/config/plugins/dockerMan/templates-user): my-autoheal.xml
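     In case it helps anyone setting this up outside a template, here's a minimal sketch of running autoheal itself; the flags are from the willfarrell/autoheal README as I remember it, so treat them as assumptions and check before use:

     ```bash
     # watch only containers labelled autoheal=true (the image's default label)
     # and restart them whenever their healthcheck reports unhealthy
     docker run -d \
       --name autoheal \
       --restart always \
       -e AUTOHEAL_CONTAINER_LABEL=autoheal \
       -v /var/run/docker.sock:/var/run/docker.sock \
       willfarrell/autoheal:1.2.0
     ```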
  4. It appears that running your browser using 2 separate profiles is enough to fool it. I've got it working here with 2 instances of Firefox.
  5. I'm hoping it's just me, but the webUI doesn't seem to want to perform multiple tasks concurrently (in separate tabs/browsers). For instance, if I'm adding a docker container (doing the pull etc.) in one tab, another window with something else in it will just hang (and eventually give me a timeout) when I try to refresh/navigate. Is there some nginx setting I need to tweak, or is this by design? beast-diagnostics-20220128-0825.zip
  6. You'd put it just before consld8:

     ```bash
     find "/mnt/user/Media/Movies" -mindepth 1 -maxdepth 1 -type d -print0 | xargs -0 -n 1 bash consld8 -f
     ```
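     If you want to check which folders that will hit before letting it loose, a quick preview trick (my addition, not from the original post) is to stick echo in front, so xargs prints each command instead of running it:

     ```bash
     # prints one "bash consld8 -f <dir>" line per movie folder, runs nothing
     find "/mnt/user/Media/Movies" -mindepth 1 -maxdepth 1 -type d -print0 | xargs -0 -n 1 echo bash consld8 -f
     ```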
  7. Or you can just "bash" them into submission... So instead of just doing "diskmv blah blah2" (which will fail, as there's no execute permission on diskmv), do "bash diskmv blah blah2".
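     The longer-term alternative (my suggestion, not part of the original reply) is to copy the script somewhere that honours the execute bit and mark it executable. Note that chmod won't stick on the FAT-formatted flash drive, and /usr/local/bin lives in RAM on unRAID, so this doesn't survive a reboot:

     ```bash
     cp diskmv /usr/local/bin/diskmv     # a path that supports +x (unlike /boot)
     chmod +x /usr/local/bin/diskmv
     diskmv blah blah2                   # now runs without the "bash" prefix
     ```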
  8. For posterity, here's what I've done to fix it... At the bottom of /boot/config/go I added:

     ```bash
     rm -r /root/.ssh    # -r: .ssh is a directory
     cp -R /boot/config/ssh/root/ /root/.ssh
     ```

     and then I've done a User Script (scheduled hourly) to sync the data back to the flash drive:

     ```bash
     rsync -au /root/.ssh/ /boot/config/ssh/root/
     ```
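     For anyone copying this, a slightly more defensive variant (my embellishment, same idea) that copes with a first boot where the flash copy doesn't exist yet, and reapplies the permissions the FAT flash drive can't store:

     ```bash
     # /boot/config/go addition: restore persisted SSH data at boot
     if [ -d /boot/config/ssh/root ]; then
         rm -rf /root/.ssh
         cp -R /boot/config/ssh/root/ /root/.ssh
         chmod 700 /root/.ssh               # sshd refuses keys with loose permissions
         chmod 600 /root/.ssh/* 2>/dev/null
     fi
     ```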
  9. Because I am going unRAID -> unRAID via SSH, I'm getting the issue with the "hostfile_replace_entries" error (notwithstanding the workaround you suggested above). As such, the setting of "last_backup" is not working properly, as the errors are coming out in the rsync listing:

     ```bash
     # obtain last backup
     if last_backup=$(rsync --dry-run --recursive --itemize-changes --exclude="*/*/" --include="[0-9]*/" --exclude="*" "$dst_path/" "$empty_dir" 2>&1); then
         last_backup=$(echo "$last_backup" | grep -v Operation | grep -oP "[0-9_/]*" | sort -r | head -n1)
     ```

     Putting the "| grep -v Operation" in there cures this. Can you incorporate this into your next version please? Cheers!
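     A different way to skin it (my suggestion, with the caveat that the script may be capturing stderr on purpose to log failures): stop merging stderr into the listing at all, so SSH chatter can never pollute the folder names:

     ```bash
     # capture stdout only; ssh warnings on stderr no longer leak into the listing
     if last_backup=$(rsync --dry-run --recursive --itemize-changes \
             --exclude="*/*/" --include="[0-9]*/" --exclude="*" \
             "$dst_path/" "$empty_dir" 2>/dev/null); then
         last_backup=$(echo "$last_backup" | grep -oP "[0-9_/]*" | sort -r | head -n1)
     fi
     ```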
  10. Also... I'm backing up from one unRAID server to another (via SSH), and for some reason I get errors pop up every time it makes an SSH connection:

     ```
     hostfile_replace_entries: link /root/.ssh/known_hosts to /root/.ssh/known_hosts.old: Operation not permitted
     update_known_hosts: hostfile_replace_entries failed for /root/.ssh/known_hosts: Operation not permitted
     ```

     So that I can get rid of these in the (final) log files, I'm "sed"ding them out just before I move the log to the destination:

     ```bash
     sed -i '/hostfile_replace_entries/d' "$log_file"
     sed -i '/update_known_hosts/d' "$log_file"

     # move log file to destination
     rsync --remove-source-files "$log_file" "$dst_path/$new_backup/" || rsync --remove-source-files "$log_file" "$dst_path/.$new_backup/"
     ```

     Could you either a) pop this into your code, or b) give me some sort of clue as to why the blazes I'm getting this message! Ta.
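     My best guess at the cause (an assumption on my part, not something from the thread): when OpenSSH rewrites known_hosts it hardlinks the file to known_hosts.old first, and that link() call fails on filesystems that don't support hard links. If that's what's happening, pointing ssh at a known_hosts file on tmpfs sidesteps the rewrite rather than just hiding the message:

     ```bash
     # standard OpenSSH option, passed through rsync's -e; /tmp is tmpfs on unRAID
     rsync --archive -e "ssh -o UserKnownHostsFile=/tmp/known_hosts" \
         /mnt/user/scan/ root@10.1.2.2:/mnt/user/backup/beast/scan/
     ```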
  11. I got myself moderately tangled up changing my parameters when copying over your latest version (user stupidity on my part), so do you fancy making these changes so that it can be config based (on a YAML file)? If you call the script with no parameters, it uses whatever values are hardcoded in the script; otherwise it'll parse the YAML file you specify and use those values. It also allows you to more easily have multiple backup sets running in parallel with a single backup script:

     ```bash
     # #####################################
     # Settings
     # #####################################

     # rsync options which are used while creating the full and incremental backup
     rsync_options=(
         # --dry-run
         --archive           # same as --recursive --links --perms --times --group --owner --devices --specials
         --human-readable    # output numbers in a human-readable format
         --itemize-changes   # output a change-summary for all updates
         --exclude="[Tt][Ee][Mm][Pp]/"       # exclude dirs with the name "temp" or "Temp" or "TEMP"
         --exclude="[Tt][Mm][Pp]/"           # exclude dirs with the name "tmp" or "Tmp" or "TMP"
         --exclude="[Cc][Aa][Cc][Hh][Ee]/"   # exclude dirs with the name "Cache" or "cache"
     )

     # set empty dir
     empty_dir="/tmp/${0//\//_}"

     if [ "${1}" == "" ]; then

         backupBase="root@10.1.2.2:/mnt/user/backup/beast"

         # backup source to destination
         backup_jobs=(
             # source            # destination
             "/boot"             "$backupBase/boot"
             "/mnt/user/scan"    "$backupBase/scan"
         )

         # keep backups of the last X days
         keep_days=14
         # keep multiple backups of one day for X days
         keep_days_multiple=1
         # keep backups of the last X months
         keep_months=12
         # keep backups of the last X years
         keep_years=3
         # keep the most recent X failed backups
         keep_fails=3
         # notify if the backup was successful (1 = notify)
         notification_success=0
         # notify if last backup is older than X days
         notification_backup_older_days=30
         # create destination if it does not exist
         create_destination=1
         # backup does not fail if files vanished during transfer https://linux.die.net/man/1/rsync#:~:text=vanished
         skip_error_vanished_source_files=1
         # backup does not fail if source path returns "host is down".
         # This could happen if the source is a mounted SMB share, which is offline.
         skip_error_host_is_down=1
         # backup does not fail if file transfers return "host is down".
         # This could happen if the source is a mounted SMB share, which went offline during transfer.
         skip_error_host_went_down=1
         # backup does not fail if source path does not exist, which for example happens if the source is an unmounted SMB share
         skip_error_no_such_file_or_directory=1
         # a backup fails if it contains less than X files
         backup_must_contain_files=2
         # a backup fails if more than X % of the files couldn't be transferred because of "Permission denied" errors
         permission_error_treshold=20

     else

         if ! [ -f yaml.sh ]; then
             wget https://raw.githubusercontent.com/jasperes/bash-yaml/master/script/yaml.sh
         fi
         if ! [ -f "${1}" ]; then
             echo "File \"${1}\" not found"
             exit 1
         fi
         source yaml.sh
         create_variables "${1}"
         empty_dir+="."
         empty_dir+=$(basename "${1}")

     fi
     ```

     My YAML file:

     ```yaml
     backup_jobs:
       - /boot
       - root@10.1.2.2:/mnt/user/backup/beast/boot
       - /mnt/user/scan
       - root@10.1.2.2:/mnt/user/backup/beast/scan

     keep:
       days: 14
       days_multiple: 14
       months: 12
       years: 3
       fails: 3

     notification:
       # notify if the backup was successful (1 = notify)
       success: 0
       # notify if last backup is older than X days
       backup_older_days: 30

     # create destination if it does not exist
     create_destination: 1
     # a backup fails if it contains less than X files
     backup_must_contain_files: 2
     # a backup fails if more than X % of the files couldn't be transferred because of "Permission denied" errors
     permission_error_treshold: 20

     skip_error:
       # backup does not fail if files vanished during transfer https://linux.die.net/man/1/rsync#:~:text=vanished
       vanished_source_files: 1
       # backup does not fail if source path does not exist, which for example happens if the source is an unmounted SMB share
       no_such_file_or_directory: 1
       host:
         # backup does not fail if source path returns "host is down".
         # This could happen if the source is a mounted SMB share, which is offline.
         is_down: 1
         # backup does not fail if file transfers return "host is down".
         # This could happen if the source is a mounted SMB share, which went offline during transfer.
         went_down: 1
     ```

     Also:

     ```bash
     # make script race condition safe
     if [[ -d "${empty_dir}" ]] || ! mkdir "${empty_dir}"; then
         echo "Script is already running!" && exit 1
     fi
     trap 'rmdir "${empty_dir}"' EXIT
     ```

     and obviously remove the setting of "empty_dir" just above the loop.
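     For anyone wanting to try the same thing, usage would look like this (paths are my own and purely illustrative; as I understand bash-yaml, create_variables flattens nested keys into underscore-joined shell variables, so "keep: days:" ends up as keep_days, matching the hardcoded names):

     ```bash
     # no argument: the hardcoded settings block is used
     bash /boot/config/scripts/backup.sh

     # with a YAML file: those settings win, and the lock/empty dir gets the
     # file's name appended, so different configs can run side by side
     bash /boot/config/scripts/backup.sh /boot/config/scripts/backup-beast.yml &
     bash /boot/config/scripts/backup.sh /boot/config/scripts/backup-docs.yml &
     ```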
  12. Whilst I'm on a roll... rsync cracks the poops if you ask it to create more than one folder deep at once (at least when operating via SSH), so I've made it work by putting the following in (just above "# obtain most recent backup"):

     ```bash
     if [[ $dst_path == *"@"*":"* ]]; then
         # this is a remote destination
         mkdirDir=$(cut -d ":" -f2 <<< "$dst_path")
         sshDest=$(cut -d ":" -f1 <<< "$dst_path")
         ssh "$sshDest" "mkdir -p '$mkdirDir'"
     else
         mkdir -p "$dst_path"
     fi
     ```
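     Two alternatives worth knowing about (my notes, not from the thread): rsync 3.2.3 and newer has a --mkpath flag that creates the missing path components on the receiver itself, and on older versions you can smuggle the mkdir in via --rsync-path. Either way you save the extra SSH round-trip:

     ```bash
     # rsync >= 3.2.3: create missing path components on the receiver
     rsync --archive --mkpath "$src_path/" "$dst_path/"

     # older rsync: run mkdir -p on the remote side before its rsync starts
     # (${dst_path#*:} strips the "user@host:" prefix, leaving the remote path)
     rsync --archive --rsync-path="mkdir -p '${dst_path#*:}' && rsync" "$src_path/" "$dst_path/"
     ```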
  13. Also... surely the "shortening" of dst_path "if" statement needs an else, for when you're backing up from somewhere other than "/mnt/..../" (i.e. /boot)?

     ```bash
     else
         dst_path="$backup_path${source_path}"
     ```
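     I don't have the surrounding lines of the script to hand, so purely as a sketch of how I'd expect the complete conditional to read (assuming the original "if" strips the /mnt/<pool>/ prefix from array and pool sources):

     ```bash
     if [[ $source_path == /mnt/*/* ]]; then
         # e.g. /mnt/user/scan -> <backup_path>/scan (drop the /mnt/<pool> prefix)
         dst_path="$backup_path/${source_path#/mnt/*/}"
     else
         # anything else (e.g. /boot) keeps its full path under the backup root
         dst_path="$backup_path${source_path}"
     fi
     ```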