UnKwicks

Members
  • Posts: 71

UnKwicks's Achievements

Rookie (2/14)
Reputation: 0
Community Answers: 1

  1. Hi, I plan to use @SpaceInvaderOne's ZFS snapshot and replication script to replace my current backup setup. I have two pools:
     1. data (HDD pool for main data)
     2. cache (SSD pool for appdata etc.)
     For data I want to:
     - Snapshot my data pool
     - Replicate the data snapshots to a remote server (ZFS)
     This is straightforward with the script. For cache I want to:
     - Snapshot my cache pool
     - Replicate my cache pool to the data pool (locally)
     - AND replicate my cache pool to the remote server (ZFS)
     As far as I understand the script, I have to choose whether I want to replicate locally OR to a remote destination. Can it be configured to replicate to both destinations (roughly what the sketch below this post does)?
     Another scenario would be pretty handy as well: when I attach a local USB disk, I want to replicate only the latest snapshots from data AND cache to the USB disk without creating a new snapshot. But that may be a whole new script, I guess.
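     To illustrate what I mean, here is a minimal sketch (not taken from the script) of sending the same existing snapshot to a local pool and to a remote host. The dataset, snapshot and host names are placeholders:

        # replicate one already-existing snapshot to two destinations
        snap="cache/appdata@daily_2024-01-01"   # latest snapshot, assumed to exist

        # local replication: cache pool -> data pool
        zfs send "$snap" | zfs receive -F data/appdata_backup

        # remote replication: cache pool -> remote ZFS server
        zfs send "$snap" | ssh root@remote-server zfs receive -F tank/appdata_backup

        # (for incremental sends you would add -i <previous snapshot> to zfs send)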
  2. I agree with your statement that snapshot replication can replace a conventional backup. But the wording above is not entirely correct, is it? The script does create a snapshot locally (on the same pool), but you have to decide whether to replicate locally OR to a remote server. For a clean 3-2-1 backup strategy I would love to be able to replicate locally to another pool AND to a remote server with this script.
  3. I realized that my fix above is not a clean solution, because even with ZFS the script creates a snapshot that never gets deleted if I only unset the variable. So my fix for now is to remove the else branch where the containers need to get started:

        # final container steps (unraid exclusive)
        if [[ ${#container_ids[@]} -gt 0 ]]; then
          # remove snapshot (containers are already running)
          if [[ $snapshot_path ]]; then
            rm -r "$snapshot_path"
            unset snapshot_path
            # start containers
            echo "Start containers (slow method):"
            docker container start "${container_ids[@]}"
          fi
        fi

     So if containers were stopped and a snapshot_path is set (meaning we are in the loop for appdata right now), the snapshot is deleted and the containers get started. In the loop that backs up the other backup locations, containers are not stopped and no snapshot is created, so there is no need to start them. Not sure if this is how @mgutt meant the script to work, but maybe you can shed some light on it? And also answer the question whether it is even possible/recommended to run the script when appdata is on a ZFS share. Thanks!
  4. Is ZFS still not supported? The pool my appdata folder is on is a ZFS pool. The script runs fine (I guess), but after completing the backup the containers do not start up automatically.

     EDIT: I guess I found a bug in the script. When appdata is on a ZFS pool, there is the following error:

        rm: cannot remove '/mnt/cache/.appdata_snapshot': No such file or directory

     As far as I understand, this is because $snapshot_path is always set:

        else
          # set snapshot path
          snapshot_dirname=$(dirname "$src_path")
          snapshot_basename=$(basename "$src_path")
          snapshot_path="$snapshot_dirname/.${snapshot_basename}_snapshot"

     and because of this it always tries to delete the snapshot instead of starting the containers:

        # remove snapshot (containers are already running)
        if [[ $snapshot_path ]]; then
          rm -r "$snapshot_path"
        # start containers
        else
          echo "Start containers (slow method):"
          docker container start "${container_ids[@]}"
        fi

     Am I right?

     EDIT 2: It seems I am right. I added an unset snapshot_path here, and it works now:

        else
          notify "Backup causes long docker downtime!" "The source path $src_path is located on a filesystem which does not support reflink copies!"
          unset snapshot_path

     This should be fixed in the next version. OK, the script is running now, but only in "slow mode" because of ZFS. Would I be better off switching to btrfs? I was hoping to use automatic snapshots for my pool and array.
  5. EDIT: I figured it out again... documenting it here for anyone else having this problem. The issue is that when I ssh to my remote server I get the following error messages on the console:

        hostfile_replace_entries: link /root/.ssh/known_hosts to /root/.ssh/known_hosts.old: Operation not permitted
        update_known_hosts: hostfile_replace_entries failed for /root/.ssh/known_hosts: Operation not permitted

     Because of these errors the rsync command in the backup script fails. Maybe it is possible to catch this issue in a future script version (a rough pre-flight check is sketched below)? This issue and its solution are covered here: running

        ssh-keyscan -H TARGET_HOST >> ~/.ssh/known_hosts

     solves the errors, and the script then runs fine since the last_backup date can be set with rsync.

     ----- FORMER problem // SOLVED -----

     OK, I need help. For the script, please see the post above. I am able to successfully log in to my remote server via ssh using an ssh key. rsync also works from the console. But when I run the script I get the following error:

        # #####################################
        last_backup: '_'
        date: invalid date ‘_’
        /tmp/user.scripts/tmpScripts/incremental_remote_backup/script: line 232: (1704284573 - ) / 86400 : syntax error: operand expected (error token is ") / 86400 ")

     It seems the script is not able to get the last backup date via the rsync command. Is there anything else I have to configure? I set the aliases and also added an ssh config file in the meantime, because I read that the ssh alias command set in the script is never used. I appreciate any advice on what else I have to set in the script to do a remote backup via ssh/rsync.
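     The pre-flight check mentioned above could look roughly like this; a minimal sketch and only my idea, not part of mgutt's script. The remote host name is a placeholder:

        # abort early with a clear hint if ssh cannot update known_hosts
        remote="root@remote-server"
        ssh_err=$(ssh -o BatchMode=yes "$remote" exit 2>&1 >/dev/null)
        if echo "$ssh_err" | grep -q "hostfile_replace_entries"; then
          echo "known_hosts is not writable. Fix it with:"
          echo "  ssh-keyscan -H ${remote#*@} >> ~/.ssh/known_hosts"
          exit 1
        fi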
  6. I fixed it for now; rsync was disabled on the remote host 🙄 (a quick check for that is sketched below). I get another error now, but I will look into that first. ---- FIXED ----

     Hi, thanks again for your awesome script. I am struggling with running the script for a backup to a remote server via ssh. I configured authentication with an ssh key file, and I can successfully log in to my remote server via ssh, passwordless, from a terminal session. Now I added my remote location and configured the user-defined ssh and rsync commands. But I get this error:

        Error: ()!

     I have no idea what I am doing wrong. I added my script below. I changed nothing besides the remote destination and the user-defined aliases. Thanks for any advice!

     incremental_remote_Backup.sh
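     For anyone hitting the same thing, a quick check I could have run first, assuming key-based ssh already works (the host name is a placeholder):

        # verify that rsync exists on the remote side before debugging the script itself
        ssh root@remote-server 'command -v rsync || echo "rsync is not installed or not enabled"'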
  7. And here I thought my Supermicro was expensive at 350 € 🫣
  8. Since my USB drive failed again a few hours after the last reboot, I put it into one of the USB 2 ports on the back. Let's see if this helps.
  9. The board has an internal USB 3 port, which I am using, so I can plug the drive directly into the board. But can the USB 3 port really be so unstable that it loses the connection? I could try a USB 2 port on the back of the board, but it feels a bit wrong not being able to use the port right on the mainboard itself.
  10. Hello, I have a problem: after some time my USB boot device loses the connection to my Unraid server. I get the following errors:

        Feb 8 05:43:20 Towerbunt kernel: usb 2-4: USB disconnect, device number 2
        Feb 8 05:43:20 Towerbunt kernel: xhci_hcd 0000:00:14.0: WARN Set TR Deq Ptr cmd failed due to incorrect slot or ep state.
        Feb 8 05:43:20 Towerbunt kernel: usb 2-4: new SuperSpeed USB device number 3 using xhci_hcd
        Feb 8 05:43:20 Towerbunt kernel: usb-storage 2-4:1.0: USB Mass Storage device detected
        Feb 8 05:43:20 Towerbunt kernel: scsi host10: usb-storage 2-4:1.0
        Feb 8 05:43:21 Towerbunt emhttpd: Unregistered - flash device error (ENOFLASH3)

      Is this more likely a problem with the USB device or with the board itself? I have an X11SCH-LN4F mainboard with IPMI, and the KVM console is also not available after the server has been running for some time. After a reboot the USB device is connected again and the server runs fine.
  11. I have the same issue. I added "Homer" as a Docker container from the CA app store. Using "bridge" as the network for the container, it does not start and gives me:

        Feb 6 16:49:53 SERVER kernel: docker0: port 5(veth9adb836) entered blocking state
        Feb 6 16:49:53 SERVER kernel: docker0: port 5(veth9adb836) entered disabled state
        Feb 6 16:49:53 SERVER kernel: device veth9adb836 entered promiscuous mode
        Feb 6 16:49:53 SERVER kernel: docker0: port 5(veth9adb836) entered blocking state
        Feb 6 16:49:53 SERVER kernel: docker0: port 5(veth9adb836) entered forwarding state
        Feb 6 16:49:53 SERVER kernel: docker0: port 5(veth9adb836) entered disabled state
        Feb 6 16:49:53 SERVER kernel: docker0: port 5(veth9adb836) entered disabled state
        Feb 6 16:49:53 SERVER kernel: device veth9adb836 left promiscuous mode
        Feb 6 16:49:53 SERVER kernel: docker0: port 5(veth9adb836) entered disabled state

      When I give the container an IPv4 address from my custom network on interface br0, it starts up without errors. So I guess it is somehow related to the bridging.

      Edit: Maybe I should add: this is the only container that does not start using bridge as the interface. I have several other containers running fine with bridge.

      Edit 2: OK, I think I found it. For me it was port related. I had another Docker container using port 8080. Unraid did not warn me that the port is already in use (I thought it did in the past, maybe a bug?). I used another port and now the container starts (a quick way to check for the conflict is sketched below).
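      A minimal sketch of the check I did afterwards to see what is already using a host port before assigning it to a container (8080 is just the port from my case):

        # list the published host ports of all containers and look for 8080
        docker ps --format '{{.Names}}\t{{.Ports}}' | grep 8080

        # or check everything listening on the host
        ss -tlnp | grep :8080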
  12. Ah, yes. I had not scrolled up far enough. I thought it would be listed at the corresponding RAM module, but there it only says:

        Error Information Handle: Not Provided

      Further up, however, I also have:

        Physical Memory Array
            Location: System Board Or Motherboard
            Use: System Memory
            Error Correction Type: Single-bit ECC
            Maximum Capacity: 64 GB
            Error Information Handle: Not Provided
            Number Of Devices: 4

      So it looks good 🙂 I am very happy with the board.
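      For anyone else checking this, a short sketch of the two places I would look, assuming the kernel has loaded an EDAC driver for the memory controller (mc0 assumes at least one controller is registered):

        # ECC capability as reported by the firmware
        dmidecode -t memory | grep -i "Error Correction Type"

        # ECC actually active in the running kernel (EDAC driver loaded)
        ls /sys/devices/system/edac/mc/
        cat /sys/devices/system/edac/mc/mc0/ce_count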
  13. I have now flashed a new boot drive with the old backup and reconfigured my changes. The server is running fine so far, but it did yesterday as well. I have not deleted the ._ files yet. Once the drive is proven to be working, maybe someone has advice on how to get rid of all these ._ files. But first let's make sure the server runs fine for 24 hours.
  14. Hello, I migrated to a new USB drive last night. I made a flash backup and wrote it to the new USB drive with the USB Creator tool. All went fine and my server boots up well. When I checked the flash drive's file structure on my server, I noticed that every file has a copy with a "._" prefix, like "._go". So I went ahead and deleted all these files, because I had used macOS and thought the Mac had created the copies. After deleting all the copies recursively with find, my server complained that the USB drive is corrupt. A reboot works fine, but after some time the message appears again. I guess it was a bad idea to just delete the files? Dumb me, I did not make another backup before deleting, but I did add a new cache pool to my config after the last flash backup. Can I fix the drive somehow without flashing the old backup again? If not, what happens with the new cache pool I added after the last flash backup? Could I use the old flash backup and reconfigure the cache pool? And how do I get rid of the ._ files after that (one cautious approach is sketched below)?
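      In case it helps anyone later: the ._ files are macOS AppleDouble metadata. A cautious sketch of how one might clean them up, listing first and deleting only once the listing looks right; the paths are placeholders and I cannot say for sure this is safe while the array is started:

        # list the AppleDouble files on the flash drive first
        find /boot -name '._*' -type f

        # only if that list looks right, delete them
        find /boot -name '._*' -type f -delete

        # alternatively, on the Mac itself, dot_clean can remove them before/after copying
        dot_clean -m /Volumes/FLASH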
  15. I have now bought this board as well and, since no 9300 was available, also went with a 9100. I get the same 12 W with 1x M.2 SSD, 2x 32 GB RAM, an active CPU cooler, 1x USB stick with Debian Live, and likewise 2 NICs disabled. What I am still wondering is how I can check whether ECC is actually working. dmidecode -t memory unfortunately gives me no output that would indicate ECC is detected and active. Did you manage to figure that out?