Posts posted by walle

  1. I looked into whether it was possible to run autossh as a daemon on my Unifi console; it seems not to be possible (without doing a bunch of hacky stuff). :(

     

    Putting a Raspberry Pi on the network is the workaround I will most likely go with if it can't be solved on Unraid.

  2. Hmm, good idea. My router is a Unifi console, and it may be possible to run autossh as a daemon on that. Otherwise, my plan B is to run a Raspberry Pi with Raspbian + autossh and use the configuration I mentioned in my first post. In case autossh fails on my Unraid server, I can still log in to the network via the Pi.

  3. My use case is that my Unraid server is behind a CG-NAT, i.e. the public IP address is shared, so it isn't possible to SSH to the server directly over the internet. The workaround I have for this is to let the server connect via a tunnel to a VPS. When I need to access my server remotely, I do a reverse tunnel connection via the VPS to my server. In other words, if the tunnel goes down, I can no longer access the server.
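
    For reference, accessing the server from outside then becomes a two-hop SSH connection through the VPS. A minimal sketch, assuming the tunnel command from my go file further down (the hostname, user, and port 1234 are placeholders):

    # Jump via the VPS, then into the reverse-tunnelled port, which leads
    # back to the Unraid server's own sshd (names and port are placeholders):
    ssh -J user@vps.example.com -p 1234 root@localhost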

     

    In order to keep the tunnel alive, I currently use autossh and trigger it in the go file. But this doesn't seem to be enough, since I have seen the autossh process die from time to time. So I need some kind of solution that can monitor autossh and restart it when needed. As far as I know, neither cron nor the User Scripts plugin can do that.

     

    In regards to Docker, it's normally my go-to solution for most of my problems, and it could maybe partly solve the issue with health checks. But I don't think it's a good fit in this case, for two reasons. First of all, I don't want to SSH into the container, and I think there is no good way to "break out" from it in order to access the host. Second, Docker will not run unless the array has started. I need remote access to the server even if the array goes down or can't start for some reason.

  4. Thank you for your reply Apandey.

     

    Currently I have this at the bottom of the go file:

    # Autossh relay
    /usr/bin/autossh -M 0 -o ServerAliveInterval=60 -o ServerAliveCountMax=3 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -NTfi /etc/ssh/id_ed25519 -R 1234:localhost:22 [email protected]

     

    The thing is that the autossh instance sometimes dies without any apparent reason. I have seen the same behavior on a few of my other servers where I initiated autossh with a cronjob. This is quite worrisome for me, since this may be the only way I can remotely connect to the server. The other servers I have, which run autossh with systemd, have worked flawlessly.

     

    This is why I want to run autossh as a service/daemon or something else similar to systemd, something that works even if the array hasn't started. So I don't think either User Scripts or Docker-based solutions will work for me.

  5. On my Debian-based servers, I use systemd to make sure my reverse SSH tunnel starts at boot and stays running. Now I want to do something similar on my Unraid server.

     

    What is the equivalent to this file

    [Unit]
    Description=My AutoSSH tunnel service
    After=network.target
    
    [Service]
    Environment="AUTOSSH_GATETIME=0"
    ExecStart=/usr/bin/autossh -M 0 -o ServerAliveInterval=60 -o ServerAliveCountMax=3 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -NTi /etc/ssh/id_ed25519 -R 1234:localhost:22 [email protected]
    
    [Install]
    WantedBy=multi-user.target

    ?
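
    For comparison, the kind of supervision that unit gives me could roughly be approximated with a plain shell loop started from the go file. A minimal, untested sketch reusing the same autossh command (note that -f is dropped so the loop can see when autossh exits):

    # Untested sketch: supervise autossh from /boot/config/go
    nohup bash -c 'while true; do
        AUTOSSH_GATETIME=0 /usr/bin/autossh -M 0 \
            -o ServerAliveInterval=60 -o ServerAliveCountMax=3 \
            -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
            -NTi /etc/ssh/id_ed25519 -R 1234:localhost:22 [email protected]
        sleep 10   # back off before restarting
    done' >/dev/null 2>&1 &

    But I would prefer something more proper than that, hence the question.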

  6. 1 hour ago, sonic6 said:

    @walle thanks for sharing your script, i will use it. i am not very into scripting.

     

    maybe theres a way to backup the Network settings from /mnt/user/system/docker/container/network/ ?

    Unfortunately not that I'm aware of; all the information is contained in the /mnt/user/system/docker/docker.img file, and the easiest way to get any meta information out of that file is by running Docker CLI commands. If you read the content of my script, it's basically the same docker commands that you would normally type in a terminal. I don't think there is any way around this unless someone either creates a plugin or makes it part of some backup plugin/tool. Anyhow, just install the User Scripts plugin, if you don't already have it, copy-paste the script, change the path, and you are done. :)

  7. On 1/1/2021 at 4:44 PM, trurl said:

    You can examine the template XML of each of your dockers to see the name of the custom network it was using. Your templates are on flash in config/plugins/dockerMan/templates-user 

    Yeah, but in my case I also needed to know which driver I used, and that's why I needed the network config. But thanks anyway.


    Anyhow, after another server crash (🤮), fixing the most likely cause of the error, recovering the data again, and lastly fixing major issues with some containers, I finally had time to reconfigure the networking. It was a bit of a pain, but if this happens again I now at least have a metadata backup of the config, thanks to this simple scheduled user script I run every night:

    #!/bin/bash
    # Dump Docker metadata to text files so network, image, and container
    # info survives a docker.img loss.
    BACKUP_PATH=/mnt/user/backup/system/metadata
    
    docker network ls > "${BACKUP_PATH}/docker_network.txt"       # custom networks and drivers
    docker images --digests > "${BACKUP_PATH}/docker_images.txt"  # image versions by digest
    docker ps > "${BACKUP_PATH}/docker_ps.txt"                    # currently running containers
    

    Besides the networking info, I also dump other useful information that can come in handy, such as the image digests, which make it possible to fetch a specific image version instead of trying to fix the latest image, as shown below.
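
    For example, pulling an image pinned to an exact digest looks like this (the repository name and digest are placeholders for values taken from docker_images.txt):

    # Pull by digest instead of a floating tag (placeholders shown):
    docker pull linuxserver/plex@sha256:<digest-from-docker_images.txt>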

  8. 25 minutes ago, trurl said:

    The custom docker networks need to be recreated the same way you created them originally. Once the custom networks exist your templates will use them. How did you know what to do when you created them before?

    That's basically the issue: I don't remember what networks I set up and what guides I followed. I will probably be able to figure that out as soon as I get some sleep (it's currently 4 am). I still have the corrupted docker.img, so if it's possible to recover the information from it, I will give it a try.

     

    Also, if there is a way to back up Docker metadata other than backing up the docker.img file, let me know. Or whether it's OK to back it up anyway, even if it's mostly unnecessary and space-inefficient to do so.

  9. For some reason my cache pool got corrupted today (thx 2020), and I ended up using btrfs restore, formatting, and restoring the pool. When I tried to start the Docker service again, I got the following message:

    Quote

    Your existing Docker image file needs to be recreated due to an issue from an earlier beta of Unraid 6. Failure to do so may result in your docker image suffering corruption at a later time. Please do this NOW!

    So I deleted the docker.img file, started the service, and I'm currently adding the containers back from my templates.

     

    However, it seems that my Docker network settings are gone, and therefore a few containers can't run properly because my custom network types are missing.

    Is there a way to restore these network settings, or at least a way to recreate them? Also, it would be nice if I could get my hands on the metadata of the Docker containers, such as autostart settings etc., but the main part is the networking settings.
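
    For what it's worth, recreating a user-defined network is a one-liner once the parameters are known; the hard part is remembering what they were. A sketch with placeholder name, driver, and subnet:

    # Recreate a custom bridge network (all values are placeholders):
    docker network create -d bridge --subnet 172.20.0.0/16 my-custom-net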

  10. The balance operation is done, and it looks to me like everything is in working order (correct me if I'm wrong):

    Label: none  uuid: fd9abfd5-7e13-487f-ba5d-419b90608d6b
            Total devices 2 FS bytes used 262.61GiB
            devid    1 size 465.76GiB used 293.03GiB path /dev/mapper/sdg1
            devid    2 size 465.75GiB used 293.03GiB path /dev/mapper/sdf1
    
    Label: none  uuid: bf870768-3cdb-4f9e-836b-4b1ed2c4c253
            Total devices 1 FS bytes used 384.00KiB
            devid    1 size 238.47GiB used 1.02GiB path /dev/sdk1
    
    Label: none  uuid: e15f3b51-09b3-4cab-bbee-13670824960d
            Total devices 1 FS bytes used 10.89GiB
            devid    1 size 30.00GiB used 20.02GiB path /dev/loop2

    Thank you for your help @johnnie.black!

  11. I think I found the problem.

     

    I use a script from this topic in order to have the encryption key stored on another server:

    I think the issue is that the encryption unlock key was removed too early in the process, and therefore the balance operation couldn't start. After I disabled the key deletion script and re-added the drive to the pool, the balance operation started together with the array. Now I just have to wait and see whether the balance operation completes successfully or not.

     

     

  12. I did that; it seems it's still the same issue. I have attached a fresh diagnostics zip.

     

    But something I noted was that when I stopped the array, unassigned the drive, and started the array again, the Docker service failed to start. I connected to the server via SSH and saw that /mnt/user didn't exist. This was fixed by starting and stopping the array without any changes. When I later re-assigned cache2 it happened again, and it was solved in the same way. Do you think this odd behavior has something to do with including cache2 in the cache pool?

    walleserver-diagnostics-20200318-1142.zip

  13. I have recently bought a new 500 GB SSD in order to create a cache pool with the existing encrypted btrfs 500 GB SSD cache device I installed a while back.

     

    What I basically did was shut down the server, install the drive, start the server, and lastly follow this guide:

    The thing is that I don't think it started to balance the drives when I started the server. I had to trigger a full balance manually, and after balancing I'm not sure whether the pool is working or not. See the attached screenshot. As you can see, it still has "new drive" status and basically no writes to it. I also SSHed to the machine and ran "btrfs filesystem show", with the following output:

    Label: none  uuid: fd9abfd5-7e13-487f-ba5d-419b90608d6b
            Total devices 1 FS bytes used 263.25GiB
            devid    1 size 465.76GiB used 264.03GiB path /dev/mapper/sdg1
    
    Label: none  uuid: bf870768-3cdb-4f9e-836b-4b1ed2c4c253
            Total devices 1 FS bytes used 384.00KiB
            devid    1 size 238.47GiB used 1.02GiB path /dev/sdk1
    
    Label: none  uuid: e15f3b51-09b3-4cab-bbee-13670824960d
            Total devices 1 FS bytes used 10.86GiB
            devid    1 size 30.00GiB used 20.02GiB path /dev/loop2

    How can I test or otherwise verify that the pool is working? If it isn't, how can I fix it?
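
    The only other checks I could think of running, assuming the default /mnt/cache mount point:

    btrfs balance status /mnt/cache    # shows whether a balance is still running
    btrfs filesystem df /mnt/cache     # Data/Metadata should report RAID1 once the pool is redundant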

     

    Screenshot 2020-03-17 at 23.52.25.png

  14. Sorry about the late reply. I didn't have time to get back to you.

    Quote

    Where does duplicacy store the files being uploaded to an offsite storage?   Is there a temporary directory or something? Or is that the cache folder you were speaking of?  My upload speed is slow so I am concerned about filling up my ram faster than it can upload to a cloud.  What does your preferences look like in those folders?

    You can read about the cache folder here: https://forum.duplicacy.com/t/cache-usage-details/1079

    It uploads during backup; only chunks of the files are uploaded. https://forum.duplicacy.com/t/chunk-size-details/1082

    What I do is back up locally on the server, then use the copy command to copy to the offsite storage. It's much more efficient than running the same backup for each offsite backup storage and uploading it each time.
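
    A minimal sketch of that copy step (the storage names "default" and "offsite" are placeholders for whatever was configured with duplicacy add):

    # Copy the local snapshots to the offsite storage (placeholder names):
    duplicacy copy -from default -to offsite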

     

    Quote

    How much do I have to pay you for a docker hehe. 

    I don't think you need to do that. Since my last post, Duplicacy has announced beta testing of their new Web UI client (https://forum.duplicacy.com/t/duplicacy-web-edition-0-2-10-beta-is-now-available/1606), and there are already Docker images that look promising (e.g. https://hub.docker.com/r/saspus/duplicacy-web). I think the Web UI approach makes more sense for Unraid than using the CLI version, but it needs to become more stable before I dare to use it for my real backups. From what I can tell looking at some of the Docker images, all that needs to be done to make it work with Unraid is to create a Docker template (takes minutes to do) and test it.

     

    One potential downside with the Web UI is that it will probably require a license to use. But looking at what the current GUI client costs ($20 the first year and $5 per year from year 2 onward, https://duplicacy.com/buy.html), and assuming the Web UI will have the same price, it will probably be worth it.

  15. 22 hours ago, nerdbot said:

    Hi xhaloz, thanks for the response.  I went back to working on my rsnapshot script to backup the devices in the house to the Unraid server, then got sidetracked with some other issues with Unraid as well as just busy in my regular day-to-day, so I haven't reached the off-site portion of my backup plan yet.  I'll definitely look into the links you provided.  Re: Duplicacy, I would just need the CLI license, which is $20/year for the license?

     

    Quote

    Free for personal use or commercial trial

    Source: https://github.com/gilbertchen/duplicacy#license

     

    Just download the binary and you are good to go. My post about my installation doesn't include how to work with Duplicacy, but there are guides like this one that give an idea of how to work with it.

  16. Please note that this is not a guide; this is just a short(-ish) explanation of how I'm currently using Duplicacy. I assume you are familiar with how Duplicacy works and are somewhat comfortable working in a terminal. I'm aware that this could be done much more simply, e.g. by making a Docker container, which would make it more accessible to others. But in my case, I needed a quick and dirty setup just to start doing backups again. I may turn this into a Docker container some day.

     

    In my case, I have created a /boot/custom/bin/ folder where I save additional binaries like Duplicacy:

    wget -O /boot/custom/bin/duplicacy https://github.com/gilbertchen/duplicacy/releases/download/v2.1.0/duplicacy_linux_x64_2.1.0

    This is what I add to my /boot/config/go file:

    ## Copy Duplicacy binary
    cp -f /boot/custom/bin/duplicacy /usr/local/bin/duplicacy
    chmod 0755 /usr/local/bin/duplicacy
    
    ## Duplicacy backup
    cp -rf /boot/custom/duplicacy /usr/local
    chmod 0755 /usr/local/duplicacy/

    /boot/custom/duplicacy is the folder I use to save the backup preferences for each main folder I back up. I copy this folder to RAM in order to minimize wear on the flash drive; Duplicacy uses this preferences folder to write temporary cache files.

     

    Folders I back up (plus private shares):

    • /boot
    • /mnt/user/appdata
    • /mnt/user/system/libvirt

    To add a folder to back up, I cd to that folder (e.g. `cd /boot`) and run the duplicacy init command:

    duplicacy init -pref-dir /boot/custom/duplicacy/boot my-snapshot-id /mnt/user/backup/duplicacy

    To break the command down a bit:

    • /boot/custom/duplicacy/boot - Path to the preferences folder. I have a separate folder for each main backup folder.
    • my-snapshot-id - The snapshot ID for this backup; I use a separate ID for each main folder (see "How to do this differently" below).
    • /mnt/user/backup/duplicacy - My local backup share. Can be replaced with a remote storage (read the Duplicacy documentation).

    If you want to add remote storage, add filters, or make other adjustments in the preferences folder, do so before editing the .duplicacy file (e.g. /boot/.duplicacy) to point it to the RAM location, as sketched after the example below.

    Example:

    • From: /boot/custom/duplicacy/boot
    • To: /usr/local/duplicacy/boot
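
    If I remember correctly, the .duplicacy file simply contains the path to the preferences directory, so the redirect amounts to something like this (paths from the example above):

    # Point the repository at the RAM copy of the preferences folder:
    echo "/usr/local/duplicacy/boot" > /boot/.duplicacy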

    Do the same for the rest of the backup folders; after that, either run the commands from the go file or restart the server.

    Test the backup by running the backup command (e.g. cd /boot; duplicacy backup -threads 1) and the copy command for remote storage.

     

    I use the User Scripts plugin to run the backup and copy commands nightly, roughly as sketched below.
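
    A minimal, untested sketch of such a nightly script (the folder list matches the ones above; the storage names are placeholders):

    #!/bin/bash
    # Back up each repository, then copy the snapshots to the offsite storage.
    for repo in /boot /mnt/user/appdata /mnt/user/system/libvirt; do
        cd "$repo" && duplicacy backup -threads 1
    done
    cd /boot && duplicacy copy -from default -to offsite   # placeholder storage names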

     

    If I add additional remote storage or make other changes that by mistake were saved to RAM instead of to the flash drive, I run this command to sync the changes back to flash:

    rsync -avh --exclude=logs/ --exclude=cache/ --exclude=.git/ /usr/local/duplicacy/ /boot/custom/duplicacy/

    How to do this differently

    Instead of adding each main folder to back up with the init command, it should be possible to just run it once at / and use filters to include and exclude the folders/files to back up. The reason I haven't tested, and don't want, this setup is that I need the flexibility of separate snapshot IDs for each folder, in order to control which remote backup locations should have a backup of what. For example, I may want to send the /boot backup to Amazon S3 and to a friend's server, but I don't want to send my family videos to S3 because it would be too expensive.

     

    Any questions @xhaloz?
