UnKwicks

Everything posted by UnKwicks

  1. I realized that my fix above is not a clean solution, because even with ZFS the script creates a snapshot that does not get deleted if I unset the variable. So my fix for now is to remove the else branch where the containers need to get started:

        # final container steps (unraid exclusive)
        if [[ ${#container_ids[@]} -gt 0 ]]; then
            # remove snapshot (containers are already running)
            if [[ $snapshot_path ]]; then
                rm -r "$snapshot_path"
                unset snapshot_path
                # start containers
                echo "Start containers (slow method):"
                docker container start "${container_ids[@]}"
            fi
        fi

     So if containers were stopped and a snapshot_path is set (meaning we are in the appdata loop right now), the snapshot is deleted and the containers get started. In the loop that backs up the other backup locations, containers are not stopped and no snapshot is created, so there is no need to start them. Not sure if this is how @mgutt meant the script to work, but maybe you can shed some light on this? It would also be good to know whether it is even possible/recommended to run the script when appdata is on a ZFS share. Thanks!
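     In case it helps the discussion, here is a minimal sketch (my assumption only, not how the script author intended it) of how the final step could avoid the problem without unsetting the variable: test whether the snapshot directory actually exists on disk instead of only testing whether the variable is set.

        # sketch, not the original script: only remove a snapshot that really exists
        if [[ ${#container_ids[@]} -gt 0 ]]; then
            if [[ -d $snapshot_path ]]; then
                # a snapshot was really created, containers are already running
                rm -r "$snapshot_path"
                unset snapshot_path
            else
                # slow method: no snapshot on disk, containers are still stopped
                echo "Start containers (slow method):"
                docker container start "${container_ids[@]}"
            fi
        fi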
  2. Is ZFS still not supported? The pool where my appdata folder lives is a ZFS pool. The script runs fine (I guess), but after the backup completes the containers do not start up automatically.

     EDIT: I think I found a bug in the script. When appdata is on a ZFS pool, the following error appears:

        rm: cannot remove '/mnt/cache/.appdata_snapshot': No such file or directory

     As far as I understand, this happens because $snapshot_path is always set:

        else
            # set snapshot path
            snapshot_dirname=$(dirname "$src_path")
            snapshot_basename=$(basename "$src_path")
            snapshot_path="$snapshot_dirname/.${snapshot_basename}_snapshot"

     and because of this the script always tries to delete the snapshot instead of starting the containers:

        # remove snapshot (containers are already running)
        if [[ $snapshot_path ]]; then
            rm -r "$snapshot_path"
        # start containers
        else
            echo "Start containers (slow method):"
            docker container start "${container_ids[@]}"
        fi

     Am I right?

     EDIT 2: It seems I am right. I added an unset snapshot_path here, and it works now:

        else
            notify "Backup causes long docker downtime!" "The source path $src_path is located on a filesystem which does not support reflink copies!"
            unset snapshot_path

     This should be fixed in the next version. OK, the script is running now, but only in "slow mode" because of ZFS. Would I be better off switching to btrfs? I was hoping I could use automatic snapshots for my pool and array.
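     For anyone running into the same question, a quick way to confirm which filesystem the appdata source sits on (and therefore why the script falls back to the slow method) is to ask the kernel directly. This is just a generic check, not part of the backup script:

        # show the filesystem type of the appdata source path
        df -T /mnt/cache/appdata
        # or
        stat -f -c %T /mnt/cache/appdata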
  3. EDIT: I figured it out again... documenting it here for anyone else having this problem. The issue is that when I ssh to my remote server I get the following error messages on the console:

        hostfile_replace_entries: link /root/.ssh/known_hosts to /root/.ssh/known_hosts.old: Operation not permitted
        update_known_hosts: hostfile_replace_entries failed for /root/.ssh/known_hosts: Operation not permitted

     Because of these errors the rsync command in the backup script fails. Maybe it is possible to catch this issue in a future script version? This issue and its solution are covered here:

     So running ssh-keyscan -H TARGET_HOST >> ~/.ssh/known_hosts solves the errors, and the script then runs fine since the last_backup date can be set with rsync.

     ----- FORMER problem // SOLVED -----

     Ok, I need help. For the script, please see the post above. I am able to successfully log in via ssh to my remote server using an ssh key. rsync also works from the console. But when I run the script I get the following error:

        # #####################################
        last_backup: '_'
        date: invalid date ‘_’
        /tmp/user.scripts/tmpScripts/incremental_remote_backup/script: line 232: (1704284573 - ) / 86400 : syntax error: operand expected (error token is ") / 86400 ")

     It seems like the script is not able to get the last backup date via the rsync command. Is there anything else I have to configure? I set the aliases and in the meantime also added an ssh config file, because I read that the alias ssh command set in the script is never used? I appreciate any advice on what else I have to set in the script to do a remote backup via ssh/rsync.
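     A rough idea of how a future script version could catch this (just a sketch; the variable name remote_host is hypothetical and not from the actual script): pre-seed known_hosts before the first rsync call, so ssh never has to rewrite the file and never hits the "Operation not permitted" errors above.

        # make sure the remote host key is already in known_hosts
        remote_host="TARGET_HOST"
        if ! ssh-keygen -F "$remote_host" > /dev/null; then
            ssh-keyscan -H "$remote_host" >> ~/.ssh/known_hosts
        fi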
  4. I fixed it for now: rsync was disabled on the remote host 🙄 I get another error now, but I'll look into that first.

     ---- FIXED ----

     Hi, thanks again for your awesome script. I am struggling to run the script for a backup to a remote server via ssh. I configured authentication with an ssh key file and I can successfully log in to my remote server via ssh without a password from a terminal session. Now I added my remote location and configured the user-defined ssh and rsync commands. But I get this error:

        Error: ()!

     I have no idea what I am doing wrong. I attached my script below. I changed nothing besides the remote destination and the user-defined aliases. Thanks for any advice!

     incremental_remote_Backup.sh
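     Since the culprit turned out to be rsync being disabled on the target, a quick pre-check like this can save some head-scratching (user and remote-host are placeholders for your own ssh login):

        # verify rsync is installed and usable on the remote side
        ssh user@remote-host "command -v rsync && rsync --version | head -n 1"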
  5. And here I thought my Supermicro was expensive at 350€ 🫣
  6. Since my USB drive failed again a few hours after the last reboot, I put the drive into one of the back USB 2 ports. Let's see if this helps.
  7. The board has an internal USB 3 port, which I am using; I can plug the drive directly into the board. But can the USB 3 port really be so unstable that it loses the connection? I could try a USB 2 port on the back of the board, but it feels a bit wasteful not being able to use the port right on the mainboard itself.
  8. Hello, I have a problem: after some time my USB boot device loses its connection to my Unraid server. I get the following errors:

        Feb 8 05:43:20 Towerbunt kernel: usb 2-4: USB disconnect, device number 2
        Feb 8 05:43:20 Towerbunt kernel: xhci_hcd 0000:00:14.0: WARN Set TR Deq Ptr cmd failed due to incorrect slot or ep state.
        Feb 8 05:43:20 Towerbunt kernel: usb 2-4: new SuperSpeed USB device number 3 using xhci_hcd
        Feb 8 05:43:20 Towerbunt kernel: usb-storage 2-4:1.0: USB Mass Storage device detected
        Feb 8 05:43:20 Towerbunt kernel: scsi host10: usb-storage 2-4:1.0
        Feb 8 05:43:21 Towerbunt emhttpd: Unregistered - flash device error (ENOFLASH3)

     Is this more likely a problem with the USB device or with the board itself? I have an X11SCH-LN4F mainboard with IPMI, and the KVM console is also unavailable after the server has been running for a while. After a reboot the USB device is connected again and the server runs fine.
  9. I have the same issue. I added "Homer" as a docker container from the CA (Community Applications). Using "bridge" as the network for the container does not start it and gives me:

        Feb 6 16:49:53 SERVER kernel: docker0: port 5(veth9adb836) entered blocking state
        Feb 6 16:49:53 SERVER kernel: docker0: port 5(veth9adb836) entered disabled state
        Feb 6 16:49:53 SERVER kernel: device veth9adb836 entered promiscuous mode
        Feb 6 16:49:53 SERVER kernel: docker0: port 5(veth9adb836) entered blocking state
        Feb 6 16:49:53 SERVER kernel: docker0: port 5(veth9adb836) entered forwarding state
        Feb 6 16:49:53 SERVER kernel: docker0: port 5(veth9adb836) entered disabled state
        Feb 6 16:49:53 SERVER kernel: docker0: port 5(veth9adb836) entered disabled state
        Feb 6 16:49:53 SERVER kernel: device veth9adb836 left promiscuous mode
        Feb 6 16:49:53 SERVER kernel: docker0: port 5(veth9adb836) entered disabled state

     When I give the container an IPv4 address from my custom network on interface br0, it starts up without errors. So I guess it is somehow related to the bridging.

     Edit: Maybe I should add: this is the only container that does not start with bridge as the interface. I have several other containers running fine with bridge.

     Edit 2: OK, I think I found it. For me it was port related. I had another docker container using port 8080. Unraid did not warn me that the port was already in use (I thought it did in the past, maybe a bug??). I switched to another port and now the container starts.
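     For anyone hitting the same symptom, a quick way to spot such a port clash before changing the template (just a generic check, nothing Unraid-specific):

        # list which containers already publish which host ports, e.g. 8080
        docker ps --format '{{.Names}}: {{.Ports}}' | grep 8080
        # or check every listener on the host
        ss -tlnp | grep :8080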
  10. Ah, yes. I had not scrolled up far enough. I thought it would be listed with the corresponding RAM module, but there it only says:

        Error Information Handle: Not Provided

     Further up, however, I also have:

        Physical Memory Array
            Location: System Board Or Motherboard
            Use: System Memory
            Error Correction Type: Single-bit ECC
            Maximum Capacity: 64 GB
            Error Information Handle: Not Provided
            Number Of Devices: 4

     So it looks good 🙂 I am very happy with the board.
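     As a shortcut, the relevant line can be pulled out directly instead of scrolling through the whole dmidecode output; checking the kernel's EDAC messages is an additional way to see whether ECC error reporting is actually active (assuming an EDAC driver for the chipset is loaded):

        # show only the error-correction capability of the memory array
        dmidecode -t memory | grep -i "error correction"
        # check whether an EDAC driver registered for ECC error reporting
        dmesg | grep -i edac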
  11. I flashed a new boot drive now with the old backup and reconfigured my changes. The server has been running fine so far, but so did it yesterday. I have not deleted the ._ files yet. If the drive keeps working now, maybe someone has advice on how to get rid of all these ._ files. But let's first make sure the server runs fine for 24 hours.
  12. Hello, I migrated to a new USB drive last night. I made a flash backup and wrote it to the new USB drive with the creator tool. All went fine and my server boots up well. When I checked the flash drive's file structure on my server, I noticed that every file has a copy with a ._ prefix, like "._go". So I went ahead and deleted all these files, because I had used macOS and assumed the Mac had created the copies. After deleting all the copies recursively with find, my server complained that the USB drive is corrupt. A reboot works fine, but after some time the message appears again. I guess it was a bad idea to just delete the files? Dumb me, I did not make another backup before deleting, but I had added a new cache pool to my config since the last one. Can I fix the drive somehow without flashing the old backup again? If not, what happens with the new cache pool I added after the last flash backup? Could I use the old flash backup and reconfigure the cache pool? And how do I get rid of the ._ files after that?
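     In case it is useful later: the ._ files are AppleDouble metadata files that macOS creates on non-Mac filesystems. A cautious way to clean them up is to list them first and only delete once the list looks right (assuming the flash drive is mounted at /boot, and ideally after a fresh flash backup):

        # dry run: show all AppleDouble files on the flash drive
        find /boot -name '._*' -type f
        # delete them once the list has been reviewed
        find /boot -name '._*' -type f -delete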
  13. I have now bought the board as well and, since no 9300 was available, also went with a 9100. I get the same 12 W with 1x M.2 SSD, 2x 32 GB RAM, an active CPU cooler, 1x USB stick for a Debian live system, and likewise 2 NICs disabled. What I am still wondering is how I can check whether ECC really works. dmidecode -t memory unfortunately gives me no output suggesting that ECC is detected and active. Did you manage to figure that out?
  14. Would it be possible to trigger the CA Auto Update plugin while the docker containers are stopped? That would save a second downtime.
  15. Is autostart enabled for all containers on the Docker tab in Unraid?
  16. I have a custom backup script running that stops all docker containers before backing up appdata. Would it be possible to trigger the auto update from this script right before I start docker again?
  17. I think that value is really good. Is IPMI already enabled? Which passive cooler are you using for the i3, and why did you choose the 9100 over the 9300?
  18. Have you been able to do further power tests yet? I am also seriously eyeing this board.
  19. Thanks, this helps me a lot. I am running a test now with a new script:

        backup_jobs=(
            # source                # destination
            "/mnt/cache/appdata"    "/mnt/user/Backups/Unraid/appdata"
            "/boot"                 "/mnt/user/Backups/Unraid/flash"
        )

     I set all the "keep backups" settings in the script to 1 and create symlinks at the end of the script:

        # Get last appdata backup
        lastAppdataBackup=$(ls -t /mnt/user/Backups/Unraid/appdata/ | head -n 1)
        echo "Last appdata backup: $lastAppdataBackup"

        # Get last flash backup
        lastFlashBackup=$(ls -t /mnt/user/Backups/Unraid/flash/ | head -n 1)
        echo "Last flash backup: $lastFlashBackup"

        # create symlink to last appdata backup
        ln -sfn /mnt/user/Backups/Unraid/appdata/$lastAppdataBackup /mnt/user/Backups/Unraid/appdata/last
        echo "Symlink to last appdata backup created"

        # create symlink to last flash backup
        ln -sfn /mnt/user/Backups/Unraid/flash/$lastFlashBackup /mnt/user/Backups/Unraid/flash/last
        echo "Symlink to last flash backup created"

     /mnt/user/Backups/Unraid/appdata/last and /mnt/user/Backups/Unraid/flash/last are then the sources I will use for my main backup script to the external drive as well as for the Duplicati backup. If I have thought this through correctly, it should work this way without any redundancy. Let's see.
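     One thing I might still harden (just an idea, not tested, and assuming the backup folders are named by date/time so a plain sort puts the newest last): exclude the "last" symlink itself from the lookup, so it can never be picked up as the "newest" entry.

        # newest real backup directory, ignoring the "last" symlink
        lastAppdataBackup=$(find /mnt/user/Backups/Unraid/appdata/ -mindepth 1 -maxdepth 1 -type d -printf '%f\n' | sort | tail -n 1)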
  20. I guess I have to think a bit about the best process. If I do a backup to a local share first and use that as the source, I might end up with doubled incremental backups, because both this script and the next one copying to the external disk do incrementals. Also, the folder name (date) changes every time, so Duplicati might do a full backup each time. Not that ideal... Not sure how to get around this. If it were possible to copy appdata to a second destination while the main script is running, that would probably be the cleanest solution.
  21. The target of the main script is an external drive which gets unmounted when the script is done. I also do a Duplicati backup to cloud storage, so in the past I used CA Appdata Backup to back up to an Unraid share and upload from there to the cloud. But that means a second (even longer) downtime as well. This is why it would be great to have a second copy on my share for the Duplicati upload.
  22. Thanks, it's working fine now. Donation incoming 🥳 Would it be possible to copy the appdata folder to a second target in the same run? That would avoid a second downtime from a second script.
  23. Thanks for your awesome script, @mgutt, which I am currently testing; my first backup is running right now. Since I added "/mnt/user/appdata" as the source path, I thought my docker downtime would be only a few seconds, but it lasted as long as it took to copy all appdata files. Do I have to use "/mnt/cache/appdata" instead of "/mnt/user/appdata" to make the snapshot feature work? And what happens if I run the script again on the same day, after the first run has finished? Thank you very much!!
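     For reference, this is how I would check whether a given source path supports the reflink copies the fast snapshot method relies on (a generic test, not part of the script; the file names are arbitrary):

        # create a throwaway file and try to clone it; reflink copies work on
        # e.g. btrfs (and XFS with reflink enabled), but not on the /mnt/user FUSE layer
        touch /mnt/cache/appdata/.reflink_test
        if cp --reflink=always /mnt/cache/appdata/.reflink_test /mnt/cache/appdata/.reflink_test_copy 2>/dev/null; then
            echo "reflink copies supported here"
        else
            echo "reflink copies NOT supported here"
        fi
        rm -f /mnt/cache/appdata/.reflink_test /mnt/cache/appdata/.reflink_test_copy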
  24. Not sure if it's because of the update, but it's partially solved now. When I detach the device, it spins down after about 30 minutes. I can live with that.