walle

Members
  • Posts: 39
  1. I recently added a 1 TB SSD to my existing 2x 500 GB SSD cache pool. As I understand it, I should have 1 TB of usable space for file storage (source: the btrfs disk usage calculator). However, the GUI indicates I only have around 570 GB (432 + 139) of usable storage. Have I missed something? What's the reason I don't get 1 TB with RAID 1? (See the btrfs usage sketch after this list.)
  2. I looked into whether it was possible to run autossh as a daemon on my UniFi console; it doesn't seem to be possible (without doing a bunch of hacky stuff). Putting a Raspberry Pi on the network is the workaround I will most likely go with if it can't be solved with Unraid.
  3. Hmm, good idea. My router is a UniFi console, so it may be possible to run autossh as a daemon on that. Otherwise, my plan B is to run a Raspberry Pi with Raspbian + autossh and use the configuration I mentioned in my first post. In case autossh fails on my Unraid server, I can still log in to the network via the Pi.
  4. My use case is that my Unraid server is behind a CG-NAT, i.e. the public IP address is shared, so it isn't possible to SSH to the server directly over the internet. The workaround I have is to let the server connect via a tunnel to a VPS. When I need to access my server remotely, I make a reverse tunnel connection via the VPS to my server. In other words, if the tunnel goes down, I can no longer access the server. To keep the tunnel alive, I currently use autossh and start it from the go file. But this doesn't seem to be enough, since I have seen the autossh process die from time to time. So I need some kind of solution that can monitor autossh and restart it when needed. As far as I know, neither cron nor the User Scripts plugin can do that. Regarding Docker, it's normally my go-to solution for most of my problems, and it could maybe partly solve the issue with health checks. But I don't think it's a good fit in this case, for two reasons. First, I don't want to SSH into the container, and I think there is no good way to "break out" of it in order to access the host. Second, Docker will not run unless the array has started. I need remote access to the server even if the array goes down or can't start for some reason.
  5. Thank you for your reply, Apandey. Currently I have this at the bottom of the go file:

         # Autossh relay
         /usr/bin/autossh -M 0 -o ServerAliveInterval=60 -o ServerAliveCountMax=3 \
             -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
             -NTfi /etc/ssh/id_ed25519 -R 1234:localhost:22 [email protected]

     The thing is, the autossh instance sometimes dies without any apparent reason. I have seen the same behavior on a few of my other servers where I have started autossh with a cron job. This is quite worrisome for me, since this may be the only way I can remotely connect to the server. The other servers I have that run autossh under systemd have worked flawlessly. This is why I want to run autossh as a service/daemon, or with anything else similar to systemd, that works even if the array hasn't started. So I don't think either User Scripts or Docker-based solutions will work for me.
  6. On my Debian-based servers, I use systemd to make sure my reverse SSH tunnel starts at boot and stays running. Now I want to do something similar on my Unraid server. What is the equivalent of this file? (See the keep-alive sketch after this list.)

         [Unit]
         Description=My AutoSSH tunnel service
         After=network.target

         [Service]
         Environment="AUTOSSH_GATETIME=0"
         ExecStart=/usr/bin/autossh -M 0 -o ServerAliveInterval=60 -o ServerAliveCountMax=3 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -NTi /etc/ssh/id_ed25519 -R 1234:localhost:22 [email protected]

         [Install]
         WantedBy=multi-user.target
  7. It would be nice if autossh could be added: https://slackware.pkgs.org/current/slackers/autossh-1.4g-x86_64-2cf.txz.html
  8. I'm really happy to see a Nerdpack replacement! Good work! The only package I'm missing now is mtr or mtr-tiny. For everyone who doesn't know, mtr is similar to traceroute but provides additional information, which makes it a really useful tool for troubleshooting network issues. (See the mtr example after this list.) You can read more about it here: https://www.linode.com/docs/guides/troubleshooting-basic-connection-issues/#mtr
  9. Unfortunately not that I'm aware of. All the information is contained in the /mnt/user/system/docker/docker.img file, and the easiest way to get any meta information out of it is to run docker CLI commands. If you read the content of my script, it's basically the same docker commands that you would normally type in a terminal. I don't think there is any other way to do this, unless someone creates a plugin or it becomes part of some backup plugin/tool. Anyhow, just install the User Scripts plugin if you don't already have it, copy-paste the script, change the path, and you are done.
  10. I would like to have the option to install autossh with Nerdpack, if possible.
  11. Yeah, but in my case I also need to know which driver I used, and that's why I needed the network config. But thanks anyway. Anyhow, after another server crash (🤮), fixing the most likely cause of the error, recovering the data again and lastly fixing major issues with some containers, I finally had time to reconfigure the networking. It was a bit of a pain, but if this happens again I now at least have a metadata backup of the config, thanks to this simple scheduled user script I run every night:

         #!/bin/bash
         BACKUP_PATH=/mnt/user/backup/system/metadata
         docker network ls > "${BACKUP_PATH}/docker_network.txt"
         docker images --digests > "${BACKUP_PATH}/docker_images.txt"
         docker ps > "${BACKUP_PATH}/docker_ps.txt"

     Besides the networking info, I also added other useful info that can come in handy, such as being able to fetch a specific image version instead of trying to fix the latest image. (See the network-inspect sketch after this list.)
  12. That's basically the issue: I don't remember what networks I set up and what guides I followed. I will probably be able to figure that out as soon as I get some sleep (currently 4 am). I still have the corrupted docker.img, so if it's possible to recover the information from it I will give it a try. Also, if there is a way to back up Docker metadata other than backing up the docker.img file, let me know. Or tell me if it's OK to back it up anyway, even if it's mostly unnecessary and space-inefficient to do so.
  13. For some reason my cache pool got corrupted today (thx 2020), and I ended up using btrfs restore, formatting, and restoring the pool. When I tried to start the Docker service again, I got the following message: So I deleted the docker.img file, started the service, and I'm currently adding the containers back from my templates. However, it seems that my Docker network settings are gone, and therefore a few containers can't run properly because my custom network types are missing. Is there a way to restore these network settings, or at least a way to recreate them? (See the network recreation sketch after this list.) Also, it would be nice if I could get my hands on the metadata of the Docker containers, such as autostart settings etc., but the main thing is the networking settings.
  14. The balance operation is done, and I think it looks like everything is in working order (correct me if I'm wrong):

         Label: none  uuid: fd9abfd5-7e13-487f-ba5d-419b90608d6b
             Total devices 2 FS bytes used 262.61GiB
             devid    1 size 465.76GiB used 293.03GiB path /dev/mapper/sdg1
             devid    2 size 465.75GiB used 293.03GiB path /dev/mapper/sdf1

         Label: none  uuid: bf870768-3cdb-4f9e-836b-4b1ed2c4c253
             Total devices 1 FS bytes used 384.00KiB
             devid    1 size 238.47GiB used 1.02GiB path /dev/sdk1

         Label: none  uuid: e15f3b51-09b3-4cab-bbee-13670824960d
             Total devices 1 FS bytes used 10.89GiB
             devid    1 size 30.00GiB used 20.02GiB path /dev/loop2

     Thank you for your help @johnnie.black!
  15. I think I found the problem. I use a script from this topic in order to have the encryption key stored on another server. I think the issue is that the unlock encryption key was removed too early in the process, and therefore the balance operation couldn't start. After I disabled the key deletion script and re-added the drive to the pool, the balance operation started together with the array. Now I just have to wait and see whether the balance operation can be completed successfully or not.
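
Btrfs usage sketch (post 1): a minimal sketch of how the pool's space accounting could be checked from the console, assuming the cache pool is mounted at /mnt/cache (adjust the mount point to your setup). With btrfs RAID 1 every chunk is stored on two devices, so a 1 TB + 2x 500 GB pool should give roughly 1 TB of usable space; the commands below show how much is actually allocated and how much is still unallocated.

    # Minimal sketch, assuming the pool is mounted at /mnt/cache.
    # Overall and per-device usage, including unallocated space:
    btrfs filesystem usage /mnt/cache

    # Chunk-level allocation (how much is allocated as RAID 1 data/metadata):
    btrfs filesystem df /mnt/cache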
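
Keep-alive sketch (posts 4-6): Unraid is Slackware-based and has no systemd, so one possible way to approximate the unit file is a simple respawn loop started from the go file. This is only a sketch under that assumption, not an established Unraid method; the script name, log path, and restart delay are made up, while the key path, port, and host placeholder are taken from the posts above.

    #!/bin/bash
    # Hypothetical keep-alive wrapper, e.g. saved as /boot/config/autossh-watchdog.sh
    # and started from /boot/config/go with:
    #   nohup /boot/config/autossh-watchdog.sh >/dev/null 2>&1 &
    # The script name and log path are examples only.

    LOG=/var/log/autossh-watchdog.log

    while true; do
        # No -f here: autossh stays in the foreground of the loop, and the loop
        # restarts it whenever it exits. "[email protected]" stands in for the real
        # user@VPS address from the original post.
        AUTOSSH_GATETIME=0 /usr/bin/autossh -M 0 \
            -o ServerAliveInterval=60 -o ServerAliveCountMax=3 \
            -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
            -NTi /etc/ssh/id_ed25519 \
            -R 1234:localhost:22 [email protected] >>"$LOG" 2>&1
        echo "$(date): autossh exited, restarting in 10 seconds" >>"$LOG"
        sleep 10
    done

Since the go file runs at boot rather than at array start, a wrapper like this keeps the tunnel independent of the array state, which was one of the requirements in post 4.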
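
mtr example (post 8): a typical non-interactive invocation might look like the following; the target host is just a placeholder.

    # Send 20 probes per hop and print a wide, non-interactive report.
    # example.com is a placeholder target.
    mtr -rwc 20 example.com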
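
Network-inspect sketch (post 11): "docker network ls" records the network names and drivers, but not the subnets and driver options needed to recreate a network. A possible extension of the nightly script, under the same BACKUP_PATH assumption, is to also dump the full network definitions as JSON:

    #!/bin/bash
    # Sketch: capture the full definition (driver, subnet, parent interface, options)
    # of every Docker network, so a network could be recreated by hand later.
    BACKUP_PATH=/mnt/user/backup/system/metadata
    mkdir -p "$BACKUP_PATH"
    docker network inspect $(docker network ls -q) > "${BACKUP_PATH}/docker_network_inspect.json"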
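
Network recreation sketch (post 13): if the old settings can't be recovered, a custom network can be recreated by hand once the driver and addressing are known. The example below is only an illustration; the network name, subnet, gateway, and parent interface are placeholders that would need to match the original setup.

    # Hypothetical example of recreating a user-defined macvlan network;
    # substitute the driver, subnet, gateway, parent interface, and name
    # that the containers previously used.
    docker network create -d macvlan \
        --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
        -o parent=br0 \
        my_custom_net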