walle

Everything posted by walle

  1. I recently added a 1 TB SSD to my existing 2x 500 GB SSD cache pool. As I understand it, I should have 1 TB of usable space for file storage (source: the btrfs disk usage calculator). However, the GUI indicates I only have around 570 GB (432+139) of usable storage. Have I missed something? What's the reason I don't get 1 TB with RAID 1?
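     For what it's worth, the pool's effective free space can also be checked from the terminal (a sketch, assuming the pool is mounted at /mnt/cache):
     # Shows per-device allocation and the pool's "Free (estimated)" figure
     btrfs filesystem usage /mnt/cache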
  2. I looked into whether it was possible to run autossh as a daemon on my Unifi console, and it doesn't seem to be possible (without doing a bunch of hacky stuff). Putting a Raspberry Pi on the network is the workaround I will most likely go with if it can't be solved with Unraid.
  3. Hmm, good idea. My router is a Unifi console, and it may be possible to run autossh as a daemon on that. Otherwise my plan B is to run a Raspberry Pi with Raspbian + autossh and use the configuration I mentioned in my first post. In case autossh fails on my Unraid server, I can still log in to the network via the Pi.
  4. My use case is that my Unraid server is behind a CG-NAT, i.e. the public IP address is shared, so it isn't possible to SSH to the server directly over the internet. The workaround I have for this is to let the server connect via a tunnel to a VPS. When I need to access my server remotely, I do a reverse tunnel connection via the VPS to my server. In other words, if the tunnel goes down, I can no longer access the server. In order to keep the tunnel alive, I currently use autossh and trigger it in the go-file. But this doesn't seem to be enough, since I have seen the autossh process die from time to time. So I need some kind of solution that can monitor autossh and restart it when needed. As far as I know, neither cron nor the User Scripts plugin can do that. In regards to Docker, it's normally my go-to solution for most of my problems and could maybe partly solve the issue with health checks. But I don't think it's a good fit in this case for two reasons. First of all, I don't want to SSH into the container, and I think there is no good way to "break out" from it in order to access the host. Second, Docker will not run unless the array has started. I need to have remote access to the server even if the array goes down or the array can't start for some reason.
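     To illustrate the access path (a sketch; the VPS host is a placeholder, the port matches the examples in my other posts):
     # On the VPS: the server's reverse forward -R 1234:localhost:22 means
     # port 1234 on the VPS leads back to the Unraid server's SSH daemon.
     ssh user@my-vps.example.com
     ssh -p 1234 root@localhost   # now on the Unraid server behind the CG-NAT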
  5. Thank you for your reply, Apandey. Currently I have this at the bottom of the go-file:
     # Autossh relay
     /usr/bin/autossh -M 0 -o ServerAliveInterval=60 -o ServerAliveCountMax=3 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -NTfi /etc/ssh/id_ed25519 -R 1234:localhost:22 [email protected]
     The thing is that the autossh instance sometimes dies without any apparent reason. I have seen this behavior on a few of my other servers where I have initiated autossh with a cronjob. This is quite worrisome for me since this may be the only way I can remotely connect to the server. The other servers I have that run autossh with systemd have worked flawlessly. This is why I want to run autossh as a service/daemon or something else similar to systemd that works even if the array hasn't started. So I don't think either User Scripts or Docker based solutions will work for me.
  6. On my Debian based servers, I use systemd to make sure my reverse SSH tunnel starts at boot and stays running. Now I want to do something similar on my Unraid server. What is the equivalent of this file?
     [Unit]
     Description=My AutoSSH tunnel service
     After=network.target
     [Service]
     Environment="AUTOSSH_GATETIME=0"
     ExecStart=/usr/bin/autossh -M 0 -o ServerAliveInterval=60 -o ServerAliveCountMax=3 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -NTi /etc/ssh/id_ed25519 -R 1234:localhost:22 [email protected]
     [Install]
     WantedBy=multi-user.target
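     The closest thing I can think of so far is a plain watchdog loop launched from the go-file, so it runs regardless of the array. A rough, untested sketch (paths, port and host taken from the unit above):
     #!/bin/bash
     # Restart autossh if it is no longer running. autossh forks with -f,
     # so the loop watches for the process by name instead of keeping a
     # foreground child.
     while true; do
       if ! pgrep -x autossh > /dev/null; then
         /usr/bin/autossh -M 0 -o ServerAliveInterval=60 -o ServerAliveCountMax=3 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -NTfi /etc/ssh/id_ed25519 -R 1234:localhost:22 [email protected]
       fi
       sleep 60
     done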
  7. It would be nice if autossh could be added: https://slackware.pkgs.org/current/slackers/autossh-1.4g-x86_64-2cf.txz.html
  8. I'm really happy to see a Nerdpack replacement! Good work! The only package I'm missing now is mtr or mtr-tiny. For everyone who doesn't know, mtr is similar to traceroute but with additional information, and it's a really useful tool for troubleshooting network issues. You can read more about it here: https://www.linode.com/docs/guides/troubleshooting-basic-connection-issues/#mtr
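     For anyone curious, a typical run looks something like this (example host; -r gives a report, -w wide output, -c the number of probes per hop):
     mtr -rw -c 10 example.com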
  9. Unfortunately not that I'm aware of; all the information is contained in the /mnt/user/system/docker/docker.img file, and the easiest way to get any meta information out of it is by running docker CLI commands. If you read the content of my script, it's basically the same docker commands that you would normally type in a terminal. I don't think there is any other way than this unless someone either creates a plugin or it becomes part of some backup plugin/tool. Anyhow, just install the User Scripts plugin if you don't already have it, copy-paste the script, change the path and you are done.
  10. I would like to have the option to install autossh with Nerdpack if possible.
  11. Yeah, but in my case I also needed to know which driver I used, and that's why I needed the network config. But thanks anyway. Anyhow, after another server crash (🤮), fixing the most likely cause of the error, recovering the data again and lastly fixing major issues with some containers, I finally had time to reconfigure the networking. It was a bit of a pain, but if this happens again I now at least have a metadata backup of the config thanks to this simple scheduled user script I run every night:
      #!/bin/bash
      BACKUP_PATH=/mnt/user/backup/system/metadata
      docker network ls > "${BACKUP_PATH}/docker_network.txt"
      docker images --digests > "${BACKUP_PATH}/docker_images.txt"
      docker ps > "${BACKUP_PATH}/docker_ps.txt"
      Besides the networking info, I also added other useful info that can come in handy, such as being able to fetch a specific image version instead of trying to fix the latest image.
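      Since the driver and subnet details were what I was actually missing, one possible extension (an untested sketch) is to also dump the full network configuration as JSON:
      # Full config (driver, subnet, gateway, parent interface) for every network
      docker network inspect $(docker network ls -q) > "${BACKUP_PATH}/docker_network_inspect.json"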
  12. That's basically the issue, I don't remember which networks I set up and which guides I followed. I will probably be able to figure that out as soon as I get some sleep (it's currently 4 am). I still have the corrupted docker.img, so if it's possible to recover the information from it I will give it a try. Also, if there is a way to back up Docker metadata other than backing up the docker.img file, let me know. Or if it's OK to back it up anyway, even if it's mostly unnecessary and space inefficient to do so.
  13. For some reason my cache pool got corrupted today (thx 2020) and I ended up using btrfs restore, then formatting and restoring the pool. When I tried to start the Docker service again I got the following message: So I deleted the docker.img file, started the service and I'm currently adding the containers back from my templates. However, it seems that my Docker network settings are gone and therefore a few containers can't run properly because my custom network types are missing. Is there a way to restore these network settings, or at least a way to recreate them? It would also be nice if I could get my hands on the metadata of the Docker containers, such as autostart settings etc., but the main part is the networking settings.
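      As far as I understand, a custom network can also be recreated by hand once you know its settings, along these lines (the driver, subnet and parent interface below are just hypothetical examples, not my actual config):
      docker network create -d macvlan --subnet=192.168.1.0/24 --gateway=192.168.1.1 -o parent=br0 my-custom-net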
  14. The balance operation is done, and I think it looks like everything is in working order (correct me if I'm wrong):
      Label: none  uuid: fd9abfd5-7e13-487f-ba5d-419b90608d6b
          Total devices 2 FS bytes used 262.61GiB
          devid 1 size 465.76GiB used 293.03GiB path /dev/mapper/sdg1
          devid 2 size 465.75GiB used 293.03GiB path /dev/mapper/sdf1
      Label: none  uuid: bf870768-3cdb-4f9e-836b-4b1ed2c4c253
          Total devices 1 FS bytes used 384.00KiB
          devid 1 size 238.47GiB used 1.02GiB path /dev/sdk1
      Label: none  uuid: e15f3b51-09b3-4cab-bbee-13670824960d
          Total devices 1 FS bytes used 10.89GiB
          devid 1 size 30.00GiB used 20.02GiB path /dev/loop2
      Thank you for your help @johnnie.black!
  15. I think I found the problem. I use a script from this topic in order to have the encryption key stored on another server: I think the issue is that the unlock encryption key was removed too early in the process and therefore the balance operation couldn't start. After I disabled the key deletion script and re-added the drive to the pool, the balance operation started together with the start of the array. Now I just have to wait and see if the balance operation can be completed successfully or not.
  16. Ah ok, yes I did that also. It didn't help.
  17. I don't think that helped. It seems to be the same issue. See attachment for fresh diagnostics. I was not sure exactly how you wanted me to wipe the device, so what I did in this case was to remove the partition and format it as unencrypted BTRFS (if that matters at all?). Also, the device name has been consistent; it's still called sdf. walleserver-diagnostics-20200318-1317.zip
  18. I did that, and it seems that it's still the same issue. I have attached a fresh diagnostics zip. But something I noted was that when I stopped the array, unassigned the drive and started the array again, the Docker service failed to start. I connected to the server via SSH and saw that /mnt/user didn't exist. This was fixed by starting and stopping the array without any changes. When I later re-assigned cache2 it happened again, and was solved in the same way. Do you think this odd behavior has something to do with including cache2 in the cache pool? walleserver-diagnostics-20200318-1142.zip
  19. Ok, here you go. walleserver-diagnostics-20200318-0901.zip
  20. I have recently bought a new 500 GB SSD in order to create a cache pool with the existing encrypted btrfs 500 GB SSD cache device I installed a while back. What I basically did was shut down the server, install the drive, start the server and lastly follow this guide: The thing is that I don't think it started to balance the drives when I started the server. I had to trigger a full balance manually, and after balancing I'm not sure if the pool is working or not. See attachment for a screenshot. As you can see, it still has "new drive" status and has basically no writes to it. I also SSHed to the machine and ran "btrfs filesystem show" with the following output:
      Label: none  uuid: fd9abfd5-7e13-487f-ba5d-419b90608d6b
          Total devices 1 FS bytes used 263.25GiB
          devid 1 size 465.76GiB used 264.03GiB path /dev/mapper/sdg1
      Label: none  uuid: bf870768-3cdb-4f9e-836b-4b1ed2c4c253
          Total devices 1 FS bytes used 384.00KiB
          devid 1 size 238.47GiB used 1.02GiB path /dev/sdk1
      Label: none  uuid: e15f3b51-09b3-4cab-bbee-13670824960d
          Total devices 1 FS bytes used 10.86GiB
          devid 1 size 30.00GiB used 20.02GiB path /dev/loop2
      How can I test or otherwise verify that the pool is working? If it isn't, how can I fix it?
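      For reference, a couple of things that can be checked from the terminal (a sketch, assuming the pool is mounted at /mnt/cache):
      # Is a balance/conversion still running?
      btrfs balance status /mnt/cache
      # Which allocation profiles are in use? Data should be listed as RAID1
      # once both devices are really part of the pool.
      btrfs filesystem df /mnt/cache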
  21. I have a Docker image that needs the hostname parameter. For example:
      docker run --name backup-container --hostname backup-instance
      How do I set this parameter in a container template?
  22. Sorry about the late reply. I didn't have time to get back to you. You can read about the cache folder here: https://forum.duplicacy.com/t/cache-usage-details/1079 It uploads during backup, but it only uploads chunks of the files: https://forum.duplicacy.com/t/chunk-size-details/1082 What I do is back up locally on the server and then use the copy command to send the backup to offsite storage. That's much more efficient than running the same backup for each offsite backup storage and uploading it each time, so I don't think you need to do that. Since my last post, Duplicacy has announced beta testing of their new Web UI client (https://forum.duplicacy.com/t/duplicacy-web-edition-0-2-10-beta-is-now-available/1606), and there are Docker images right now that look promising (e.g. https://hub.docker.com/r/saspus/duplicacy-web). I think the Web UI approach makes more sense for Unraid than using the CLI version, but it needs to become more stable before I dare to use it for my real backups. The Docker images with the Web UI are progressing nicely, and from what I can tell, all that needs to be done to make them work with Unraid is to create a Docker template (takes minutes to do) and test it. One potential downside with the Web UI is that it will probably require a license to use. But looking at what the current GUI client costs ($20 the first year and $5 for year 2 and forward, https://duplicacy.com/buy.html) and assuming it will have the same price, it will probably be worth it.
  23. Source: https://github.com/gilbertchen/duplicacy#license Just download the binary and you are good to go. My post about my installation doesn't include how to work with Duplicacy, but there are guides like this one that give an idea of how to work with it.
  24. Please note that this is not a guide, just a short(-ish) explanation of how I currently use Duplicacy. I assume you are familiar with how Duplicacy works and are somewhat comfortable working in a terminal. I'm aware this could be done in a much simpler way, e.g. by making a Docker container out of it and thereby making it more accessible to others. But in my case I needed a quick and dirty setup just to start doing backups again. Maybe I will make a Docker container out of this some day.
      In my case, I have created a /boot/custom/bin/ folder where I save additional binaries like Duplicacy:
      wget -O /boot/custom/bin/duplicacy https://github.com/gilbertchen/duplicacy/releases/download/v2.1.0/duplicacy_linux_x64_2.1.0
      What I add to my /boot/config/go file:
      ## Copy Duplicacy binary
      cp -f /boot/custom/bin/duplicacy /usr/local/bin/duplicacy
      chmod 0755 /usr/local/bin/duplicacy
      ## Duplicacy backup
      cp -rf /boot/custom/duplicacy /usr/local
      chmod 0755 /usr/local/duplicacy/
      /boot/custom/duplicacy is the folder where I save the backup preferences for each main folder I back up. I copy this folder to RAM in order to minimize wear on the flash drive, since Duplicacy uses the preferences folder to temporarily write cache files.
      Folders I back up (plus private shares):
      /boot
      /mnt/user/appdata
      /mnt/user/system/libvirt
      To add a folder to the backup, I cd to that folder (e.g. cd /boot) and run the duplicacy init command:
      duplicacy init -pref-dir /boot/custom/duplicacy/boot my-snapshot-id /mnt/user/backup/duplicacy
      To break the command down a bit:
      /boot/custom/duplicacy/boot - Path to the preference folder. I have a separate folder for each main backup folder.
      /mnt/user/backup/duplicacy - My local backup share. Can be replaced with a remote storage (read the Duplicacy documentation).
      If you want to add remote storage, add filters or do other adjustments in the preferences folder, do so before editing the .duplicacy file (e.g. /boot/.duplicacy) and pointing it to the RAM location. Example:
      From: /boot/custom/duplicacy/boot
      To: /usr/local/duplicacy/boot
      Do the same for the rest of the backup folders, and after that either run the commands in the go-file or restart the server.
      Test the backup by running the backup command (e.g. cd /boot; duplicacy backup -threads 1) and the copy command for remote storage. I use the User Scripts plugin to run the backup and copy commands nightly, as shown below.
      If I need to add additional remote storage or make other changes that by mistake were saved to RAM instead of the flash memory, I run this command to sync the changes back to flash:
      rsync -avh --exclude=logs/ --exclude=cache/ --exclude=.git/ /usr/local/duplicacy/ /boot/custom/duplicacy/
      How to do this differently: instead of adding each main folder to the backup with the init command, it should be possible to just run it once at / and use filters to include and exclude folders/files. The reason I haven't tested and don't want this setup is that I need the flexibility of separate snapshot IDs for each folder in order to control which remote backup locations should have a backup of what. For example, I may want to send the /boot backup to Amazon S3 and to a friend's server, but I don't want to send my family videos to S3 because it would be too expensive. Any questions @xhaloz?
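      The nightly user script is essentially along these lines (a sketch, not my exact script; the remote storage name "offsite" is a placeholder and depends on how the storages were added):
      #!/bin/bash
      # Back up each init'ed folder locally, then copy the local storage to the remote one.
      for dir in /boot /mnt/user/appdata /mnt/user/system/libvirt; do
        cd "$dir" && duplicacy backup -threads 1
      done
      # The copy command must be run from a repository that has both storages configured.
      cd /boot && duplicacy copy -from default -to offsite -threads 1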