UnKwicks


Posts posted by UnKwicks

  1. On 4/7/2024 at 10:09 PM, UnKwicks said:

    This should be fixed in the next version.

    I realized that my fix above is not a clean solution, because even with ZFS the script creates a snapshot, and that snapshot does not get deleted if I just unset the variable.

    So my fix for now is to just remove the else branch where the containers need to get started:

    # final container steps (unraid exclusive)
    if [[ ${#container_ids[@]} -gt 0 ]]; then

      # remove snapshot, then start the containers (slow method)
      if [[ $snapshot_path ]]; then
        rm -r "$snapshot_path"
        unset snapshot_path
        # start containers
        echo "Start containers (slow method):"
        docker container start "${container_ids[@]}"
      fi

    fi

     

    So if the containers were stopped and a snapshot_path is set (meaning we are currently in the loop for appdata), the snapshot is deleted and the containers get started. In the loop that backs up the other backup locations, the containers are not stopped and no snapshot is created, so there is no need to start them.

     

    Not sure if this is how @mgutt meant the script to work, but maybe you can shed some light on this?

    It would also be good to know whether it is even possible/recommended to run the script when appdata is on a ZFS share.

     

    Thanks!

  2. On 7/30/2023 at 1:00 PM, mgutt said:

    The target filesystem has to be BTRFS, XFS or ZFS. The source filesystem does not matter unless it contains the appdata directory; then it should be BTRFS or XFS (at the moment ZFS does not support reflink files, which is a feature used by my script).

    Is ZFS still not supported?

    The pool that holds my appdata folder is a ZFS pool.

    The script runs fine (I guess), but after the backup completes the containers do not start up automatically.

     

    EDIT:

    I think I found a bug in the script. When appdata is on a ZFS pool, the following error occurs:

    rm: cannot remove '/mnt/cache/.appdata_snapshot': No such file or directory
    

     

    As far as I understand this is because $snapshot_path is always set:

           else
              # set snapshot path
              snapshot_dirname=$(dirname "$src_path")
              snapshot_basename=$(basename "$src_path")
              snapshot_path="$snapshot_dirname/.${snapshot_basename}_snapshot"

    and because of this it always tries to delete the snapshot instead of starting the containers:

        # remove snapshot (containers are already running)
        if [[ $snapshot_path ]]; then
          rm -r "$snapshot_path"
        # start containers
        else
          echo "Start containers (slow method):"
          docker container start "${container_ids[@]}"
        fi

     

    Am I right?

     

    EDIT 2:

    It seems I am right. I added an unset snapshot_path here, and it works now:

    else
       notify "Backup causes long docker downtime!" "The source path $src_path is located on a filesystem which does not support reflink copies!"
       unset snapshot_path

    This should be fixed in the next version.

    OK, the script is running now, but only in "slow mode" because of ZFS. Would it be better to switch to btrfs? I was hoping I could use automatic snapshots for my pool and array.

  3. EDIT

     

    I figured it out again... documenting it here for anyone else having this problem.

    The issue is that when I SSH to my remote server, I get the following error messages on the console:

    hostfile_replace_entries: link /root/.ssh/known_hosts to /root/.ssh/known_hosts.old: Operation not permitted
    update_known_hosts: hostfile_replace_entries failed for /root/.ssh/known_hosts: Operation not permitted

    Because of these errors the rsync command in the backup script fails.

    Maybe it is possible to catch this issue in a future script version?

     

    This issue and its solution are covered here:

     

    So running

    ssh-keyscan -H TARGET_HOST >> ~/.ssh/known_hosts

    resolves the errors, and the script then runs fine since the last_backup date can be determined via rsync.
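
    In case it helps for a future script version: this is only a rough sketch of the kind of pre-check I have in mind (TARGET_HOST is just a placeholder, and the BatchMode test is my own idea, not something taken from the script):

    # make sure the target's host key is already in known_hosts, so ssh
    # never has to rewrite the file (which is what fails on my system)
    if ! ssh-keygen -F TARGET_HOST >/dev/null; then
      echo "TARGET_HOST missing in known_hosts, adding it via ssh-keyscan ..."
      ssh-keyscan -H TARGET_HOST >> /root/.ssh/known_hosts
    fi

    # make sure a non-interactive ssh login works at all before rsync runs
    if ! ssh -o BatchMode=yes TARGET_HOST true; then
      echo "Error: passwordless ssh login to TARGET_HOST failed!"
      exit 1
    fi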

     

    ----- FORMER Problem // SOLVED ----

     

    Ok, I need help.

    For the script, please see the post above.

    I am able to successfully log in via SSH to my remote server using an SSH key. rsync also works from the console.

    But when I run the script I get the following error:

    # #####################################
    last_backup: '_'
    date: invalid date ‘_’
    /tmp/user.scripts/tmpScripts/incremental_remote_backup/script: line 232: (1704284573 - ) / 86400 : syntax error: operand expected (error token is ") / 86400 ")

    It seems like the script is not able to get the last backup date via the rsync command.

    Is there anything else I have to configure? I set the aliases, and in the meantime I also added an SSH config file, because I read that the ssh alias set in the script is never actually used.

    I appreciate any advice on what else I have to set in the script to do a remote backup via SSH/rsync.
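
    For anyone debugging the same symptom: since the last_backup date is obtained via rsync, a quick manual test might be to list the remote backup directory with rsync alone (host, path and key below are placeholders, not the script's real variables). If this listing already fails, the script cannot determine the date either:

    # rsync with only a remote source and no destination just lists the
    # directory, which should show the dated backup folders
    rsync -e "ssh -i /root/.ssh/id_ed25519" SSHUSER@TARGET_HOST:/path/to/backups/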

  4. I fixed it for now. rsync was disabled on the remote host 🙄

    I am getting another error now, but I will look into that first.

     

    ---- FIXED

     

    Hi, thanks again for your awesome script. 

     

    I am struggling to run the script for a backup to a remote server via SSH.

    I configured authentication with an SSH key file, and I can successfully log in to my remote server via SSH without a password from a terminal session.

    Now I added my remote destination and configured the user-defined ssh and rsync commands. But I get this error:

    Error: ()!

     

    I have no idea what I am doing wrong. I attached my script below. I changed nothing besides the remote destination and the user-defined aliases.

    Thanks for any advice!

    incremental_remote_Backup.sh

  5. 8 minutes ago, itimpi said:

    Are you using one of the USB2 ports on your motherboard (possibly via an internal header) for the flash drive? They tend to be more reliable than USB3 ports. Alternatively, can you use a USB2 flash drive, as you get no noticeable performance advantage from using a USB3 one?

    The board has an internal USB3 port which I am using, so I can plug the drive directly into the board.
    But can the USB3 port really be so unstable that it loses the connection? I could try a USB2 port on the back of the board, but it feels a bit wrong not to be able to use the port right on the mainboard itself.

  6. Hello

     

    I have a problem: after some time, my USB boot device loses its connection to my Unraid server. I get the following errors:

    Feb  8 05:43:20 Towerbunt kernel: usb 2-4: USB disconnect, device number 2
    Feb  8 05:43:20 Towerbunt kernel: xhci_hcd 0000:00:14.0: WARN Set TR Deq Ptr cmd failed due to incorrect slot or ep state.
    Feb  8 05:43:20 Towerbunt kernel: usb 2-4: new SuperSpeed USB device number 3 using xhci_hcd
    Feb  8 05:43:20 Towerbunt kernel: usb-storage 2-4:1.0: USB Mass Storage device detected
    Feb  8 05:43:20 Towerbunt kernel: scsi host10: usb-storage 2-4:1.0
    Feb  8 05:43:21 Towerbunt  emhttpd: Unregistered - flash device error (ENOFLASH3)

     

    Is this more likely a problem with the USB device or with the board itself? I have an X11SCH-LN4F mainboard with IPMI, and the KVM console is also no longer available after the server has been running for some time.

     

    After a reboot the USB device is connected again and the server runs fine.

  7. I have the same issue.

    I added "Homer" as docker container from the CAS. Using "bridge" as network for the docker does not start the container and gives me:

    Feb  6 16:49:53 SERVER kernel: docker0: port 5(veth9adb836) entered blocking state
    Feb  6 16:49:53 SERVER kernel: docker0: port 5(veth9adb836) entered disabled state
    Feb  6 16:49:53 SERVER kernel: device veth9adb836 entered promiscuous mode
    Feb  6 16:49:53 SERVER kernel: docker0: port 5(veth9adb836) entered blocking state
    Feb  6 16:49:53 SERVER kernel: docker0: port 5(veth9adb836) entered forwarding state
    Feb  6 16:49:53 SERVER kernel: docker0: port 5(veth9adb836) entered disabled state
    Feb  6 16:49:53 SERVER kernel: docker0: port 5(veth9adb836) entered disabled state
    Feb  6 16:49:53 SERVER kernel: device veth9adb836 left promiscuous mode
    Feb  6 16:49:53 SERVER kernel: docker0: port 5(veth9adb836) entered disabled state

     

    When I give the container an IPv4 address from my custom network on interface br0 it starts up without errors.

     

    So I guess it's somehow related to the bridging.

     

    Edit:

    Maybe I should add: this is the only container that does not start with bridge as the network. I have several other containers running fine with bridge.

     

    Edit 2:

    OK, I guess I found it. For me it was port related. I had another Docker container already using port 8080. Unraid did not warn me that the port is already in use (I thought it did in the past, maybe a bug??). I used another port and now the container starts.
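
    In case someone else runs into this, a quick way to check up front whether a host port is already taken (8080 is just the port from my case) would be something like:

    # list the published ports of all running containers and look for the port
    docker ps --format '{{.Names}}\t{{.Ports}}' | grep 8080

    # or check what is already listening on the host itself
    netstat -tlnp | grep ':8080'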

  8. 8 hours ago, qwerty-no said:

     

    Honestly, so far I simply assumed that it works. :D

    I just checked with dmidecode -t memory as well, and as far as I can tell it looks good?

     

    Error Correction Type: Single-bit ECC


     


     

     

    Handle 0x0022, DMI type 16, 23 bytes
    Physical Memory Array
            Location: System Board Or Motherboard
            Use: System Memory
            Error Correction Type: Single-bit ECC
            Maximum Capacity: 64 GB
            Error Information Handle: Not Provided
            Number Of Devices: 4
    
    ...
    
    Handle 0x0034, DMI type 17, 84 bytes
    Memory Device
            Array Handle: 0x0022
            Error Information Handle: Not Provided
            Total Width: 72 bits
            Data Width: 64 bits
            Size: 32 GB
            Form Factor: DIMM
            Set: None
            Locator: DIMMB2
            Bank Locator: P0_Node0_Channel1_Dimm1
            Type: DDR4
            Type Detail: Synchronous
            Speed: 3200 MT/s
            Manufacturer: Kingston
            Serial Number: XXXXXXXX
            Asset Tag: 9876543210
            Part Number: 9965745-039.A00G    
            Rank: 2
            Configured Memory Speed: 2400 MT/s
            Minimum Voltage: 1.2 V
            Maximum Voltage: 1.2 V
            Configured Voltage: 1.2 V
            Memory Technology: DRAM
            Memory Operating Mode Capability: Volatile memory
            Firmware Version: Not Specified
            Module Manufacturer ID: Bank 2, Hex 0x98
            Module Product ID: Unknown
            Memory Subsystem Controller Manufacturer ID: Unknown
            Memory Subsystem Controller Product ID: Unknown
            Non-Volatile Size: None
            Volatile Size: 32767 MB
            Cache Size: None
            Logical Size: None

     

     

     

    Ah, yes. I had not scrolled up far enough. I thought it would be listed with the corresponding RAM module, but there it only says:

    Error Information Handle: Not Provided

     

    Further up, however, I also have:

    Physical Memory Array
            Location: System Board Or Motherboard
            Use: System Memory
            Error Correction Type: Single-bit ECC
            Maximum Capacity: 64 GB
            Error Information Handle: Not Provided
            Number Of Devices: 4

    So it looks good 🙂

    I am very happy with the board.
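
    For completeness: as far as I understand, dmidecode only shows what the firmware reports. To verify that ECC is actually active at runtime, one option should be to look at the kernel's EDAC counters in sysfs (assuming a matching EDAC driver is loaded for this platform):

    # is an EDAC memory controller registered at all?
    ls /sys/devices/system/edac/mc/

    # corrected / uncorrected error counters per memory controller (should stay at 0)
    cat /sys/devices/system/edac/mc/mc*/ce_count
    cat /sys/devices/system/edac/mc/mc*/ue_count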

  9. I flashed a new boot drive now with the old backup and reconfigured my changes.

    The server is running fine so far, but it did yesterday as well. I have not deleted the ._ files yet.

     

    If the drive keeps working now, maybe someone has advice on how to get rid of all these ._ files. But let's make sure the server runs fine for 24 hours first ;)
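
    Once the server has proven stable, this is roughly what I have in mind for cleaning them up (the /boot path and the dry-run step are my own assumption):

    # dry run: only list the AppleDouble files on the flash drive
    find /boot -type f -name '._*'

    # if the list looks right, remove them
    find /boot -type f -name '._*' -delete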

     

     

  10. Hello

     

    I migrated to a new USB drive last night: I made a flash backup and wrote it to my new USB drive with the creator tool.
     

    Everything went fine and my server boots up well.
     

    When I checked the flash drive's file structure on my server, I noticed that every file has a copy with a ._ prefix, like "._go".
    So I went ahead and deleted all these copies, because I had used macOS and assumed the Mac had created them.
    After deleting all the copies recursively with find, my server complained that the USB drive is corrupt.
    A reboot works fine, but after some time the message appears again.

    I guess it was a bad idea to just delete the files? Dumb me, I did not make another backup before deleting, even though I had added a new cache pool to my config since the last one.
     

    Can I fix the drive somehow without flashing the old backup again?

     

    If not, what happens to the new cache pool I added after the last flash backup? Could I use the old flash backup and just reconfigure the cache pool? And how do I get rid of the ._ files after that?

  11. On 1/14/2023 at 6:47 PM, qwerty-no said:

     

    I have not gotten much further yet. powertop --auto-tune unfortunately does not bring much, because high C-states are already reached without it.

    I just booted Unraid from a USB stick, headless, with only CPU, RAM, PSU, 1x SATA SSD and 1x SATA HDD, everything passively cooled, 2 of the 4 NICs disabled in the BIOS, and I get 12 W at idle.

    I have now bought the board as well and, since no 9300 was available, also went with a 9100.

    I get the same 12 W with 1x M.2 SSD, 2x 32 GB RAM, an active CPU cooler and 1x USB stick with a Debian live system, also with 2 NICs disabled.

     

    What I am still wondering is how I can check whether ECC really works.

    dmidecode -t memory unfortunately gives me no output that would indicate that ECC is detected and active.

     

    Did you manage to figure that out?

     

  12. 14 minutes ago, Marc Heyer said:

    Hello, I am using the script to back up my Docker appdata with great success. It is very nice to have everything in one place. But I am still having trouble: some of my containers do not get started when the snapshot is taken.

    What does ("echo "Start containers (fast method):") fast method mean? How can i try a other method of starting my containers? Maybe this could solve my problem. I looked up the docker command documentation but i only found "docker start container". And this method is used by the script.

    Greetings

    Marc

    Is autostart enabled for all containers on the Docker tab in Unraid?
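
    In the meantime, a manual workaround could be to start everything that is currently stopped (just a generic Docker command, not something specific to the script):

    # start all containers that are currently in the "exited" state
    # (careful: this also starts containers that were stopped on purpose)
    docker container start $(docker container ls -aq --filter status=exited)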

  13. On 8/16/2022 at 3:54 PM, je82 said:

    Hi,

    Supermicro has this feature where they have dedicated SATA ports on some of their motherboards that support a kind of device which acts like a hard drive but is a small flash module, faster than an SSD; they call this "SuperDOM" (https://www.supermicro.com/products/nfo/SATADOM.cfm)

     

    I am interested in hearing whether I can use a device like this for my Unraid install rather than a slow USB flash drive. What say you?

    Did you try that?

    This sounds pretty awesome.

  14. 1 hour ago, mgutt said:

    - create backup to local share like /mnt/disk3/Backup/appdata and at the end of the script:

    - obtain most recent backup path with last_backup=$(ls -t /mnt/disk3/Backup/appdata/ | head -n 1)  and create/update symlink ln -sfn /mnt/disk3/Backup/appdata/$last_backup /mnt/disk3/Duplicati/appdata

    Thanks, this helps me a lot.

    I am running a test now with a new script:

    backup_jobs=(
      # source                          # destination
      "/mnt/cache/appdata"              "/mnt/user/Backups/Unraid/appdata"
      "/boot"                           "/mnt/user/Backups/Unraid/flash"
    )

    The "keep backup"-Settings in the script I set all to 1

    and I create symlinks at the end of the script:

    # Get last appdata backup
    lastAppdataBackup=$(ls -t /mnt/user/Backups/Unraid/appdata/ | head -n 1)
    echo "Last appdata backup: $lastAppdataBackup"
    # Get last flash backup
    lastFlashBackup=$(ls -t /mnt/user/Backups/Unraid/flash/ | head -n 1)
    echo "Last flash backup: $lastFlashBackup"
    
    # create symlink to last appdata backup
    ln -sfn "/mnt/user/Backups/Unraid/appdata/$lastAppdataBackup" /mnt/user/Backups/Unraid/appdata/last
    echo "Symlink to last appdata backup created"
    
    # create symlink to last flash backup
    ln -sfn "/mnt/user/Backups/Unraid/flash/$lastFlashBackup" /mnt/user/Backups/Unraid/flash/last
    echo "Symlink to last flash backup created"

     

    /mnt/user/Backups/Unraid/appdata/last and /mnt/user/Backups/Unraid/flash/last

    are then the sources I will use for my main backup script to the external drive as well as for the Duplicati backup.

    If I have thought this through correctly, it should work this way without any redundancy.

    Let's see :)
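
    As a quick sanity check after a run (just my own check, not part of the script), the symlinks can be resolved like this:

    # show where the "last" symlinks currently point
    readlink -f /mnt/user/Backups/Unraid/appdata/last
    readlink -f /mnt/user/Backups/Unraid/flash/last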

  15. 17 hours ago, mgutt said:

    first backup to a local share and then use this as your source

    I guess I have to think a bit about the best process.

    If I first back up to a local share and use that as the source, I might end up with doubled incremental backups, because the script as well as the next one that copies to the external disk both work incrementally. Also, the folder name (date) changes every time, so Duplicati might do a full backup every time.

    So not that ideal....

    Not sure how to get around this. If it were possible to copy the appdata to a second destination while the main script is running, that might be the cleanest solution.

  16. 5 minutes ago, mgutt said:

    Isn't it possible to create a second script and use the first destination as the source for the second destination?

    The target of the main script is an external drive which gets unmounted when the script is done.

    I also do a Duplicati backup to cloud storage, so in the past I used CA Appdata Backup to back up to an Unraid share and upload from there to the cloud. But this means a second (even longer) downtime as well.

    That is why it would be great to have a second copy on my share for the upload via Duplicati.

  17. On 10/30/2022 at 11:53 PM, mgutt said:

    3.) If the source path is set to /mnt/cache/appdata or /mnt/diskX/appdata, the script will create a snapshot to /mnt/*/.appdata_snapshot before creating the backup. This reduces docker container downtime to several seconds (!).

    Thanks for your awesome script @mgutt, which I am currently testing; I am just running my first backup. Since I added "/mnt/user/appdata" as the source path, I thought my Docker downtime would be only a few seconds, but the containers were down for as long as it took to copy all the appdata files.

     

    Do I have to use "/mnt/cache/appdata" instead of "/mnt/user/appdata" to make the snapshot feature work?

     

    What happens when I run the script again on the same day, after it has finished the first run?

     

    Thank you very much!!