Thx And Bye

  1. This isn't specifically an UnRAID problem, but I'm putting it here for visibility and awareness, as UnRAID v6.9.2 is affected by this bug. I already commented about the problem over here: UNRAID 6.9.2 - DOCKER CONTAINER NOT REACHABLE OVER THE INTERNET WITH IPV6. There is a problem in Docker's networking engine when using IPv6 with a container that has only an IPv4 address assigned in a bridged network. Prior to Docker version 20.10.2, IPv6 traffic was forwarded to the container regardless. This behavior changed with version 20.10.2. This is the pull request that changed this behavior:
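For anyone hitting this on Docker 20.10.x: one commonly suggested workaround (which I have not verified on UnRAID myself) is to let Docker manage IPv6 filtering itself via the still-experimental ip6tables option in /etc/docker/daemon.json. The ULA prefix below is purely illustrative:

```json
{
  "ipv6": true,
  "fixed-cidr-v6": "fd00:dead:beef::/64",
  "experimental": true,
  "ip6tables": true
}
```

Docker needs a restart after changing this file, and "experimental": true is required because ip6tables is not yet a stable option in that release line.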
  2. I too can't access my Nginx via IPv6 anymore after updating to UnRAID 6.9.2. Connecting via IPv4 still works fine. I had no problems with my config in 6.8.3, 6.9.0 and 6.9.1. Is there a way to fix it, or is this something that will get fixed in a future UnRAID update? EDIT: I did some digging and came across this pull request that was merged in Docker 20.10.2: As UnRAID 6.9.2 switched from a version older than 20.10.2 to 20.10.5, it also includes this change, and as such IPv6 isn't forwarded by default anymore.
  3. It works for me except for one drive (UnRAID v6.9.2). I suspect it has to do with the identifier in the WebUI having a dot that is substituted with an underscore in the smart-one.cfg. WebUI: Samsung_SSD_850_EVO_M.2_250GB_S33CNX0H506075N Conf: Samsung_SSD_850_EVO_M_2_250GB_S33CNX0H506075N It's the dot in "M.2" that seems to cause the problem, as changing the underscore back to a dot in the config makes the values appear in the WebUI too.
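The substitution is easy to reproduce in bash; the identifier below is the one from my WebUI:

```shell
# The WebUI identifier vs. what ends up in smart-one.cfg:
webui_id="Samsung_SSD_850_EVO_M.2_250GB_S33CNX0H506075N"
cfg_id="${webui_id//./_}"   # every dot replaced with an underscore
echo "$cfg_id"
# → Samsung_SSD_850_EVO_M_2_250GB_S33CNX0H506075N
```

The single dot in "M.2" is the only character that changes, which is exactly the mismatch between the two identifiers above.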
  4. Nice tips, I just wish it were easier to set up KeysFile authentication and disable password authentication for SSH. Just pasting your pubkey into the UI and setting a checkbox to disable password auth would be nice. I currently have it set up like ken-ji describes here. Then I edited PasswordAuthentication to "no". Also think about a secure-by-default approach with future updates. Why not force the user to set a secure password on first load? Why even make shares public by default? Why allow "guest" to access SMB shares by default? Why create a share for the fl
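The manual change boils down to a few sshd_config directives; this is a sketch of the relevant lines (on UnRAID, remember that changes to /etc need to be persisted via the flash drive or they are lost on reboot):

```
# Key-based auth only; password logins refused.
PubkeyAuthentication yes
PasswordAuthentication no
ChallengeResponseAuthentication no
```

After editing, restart sshd and test the key login from a second terminal before closing your current session, so you don't lock yourself out.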
  5. Restarting the API solved the issue for me, and it connected properly. The "Support Thread" link for the plugin points just to the general support forum, so I assumed that I had to post problems in here.
  6. Hi, so I have a couple of issues with the new "My Servers" integration. My setup is:
     - Unraid 6.9.1 on dedicated hardware, default ports for the WebUI changed to 8080 and 4433 (SSL), as 80 and 443 are used by a container.
     - pfSense 2.5.0 on dedicated hardware; the DNS resolver has an exclusion for DNS rebind protection:
       server:
       private-domain: ""
     The problems I had: The first error is when logging in; I get the error "Communication with [UnraidName] has timed out". Problems I solved: 1. In the menu it shows
  7. While I understand that this isn't the intended way to use containers on UnRAID, and I am transitioning my stacks over from docker-compose at some point, I still think that something as basic as checking whether a running container has an update available should work. Ideally the UnRAID UI would support compose files directly somewhere in an advanced view and then deal with those separately from the rest.
  8. If you want to call it from shell scripts, use the complete command, not the alias. Like this:
     docker run --rm -v /var/run/docker.sock:/var/run/docker.sock -v "$path:$path" -w="$path" docker/compose
     $path has to be the directory where your compose file resides.
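Wrapped in a function, that command might look like the sketch below. The DRY_RUN switch and the function name dc are my additions (not part of the original command) so the call can be inspected without Docker installed:

```shell
# Sketch: wrap the containerized docker-compose call in a function.
# Usage: dc /path/to/project <compose args...>
# Set DRY_RUN=1 to print the command instead of executing it.
dc() {
    local path="$1"; shift
    local cmd=(docker run --rm \
        -v /var/run/docker.sock:/var/run/docker.sock \
        -v "$path:$path" -w="$path" \
        docker/compose "$@")
    if [ -n "$DRY_RUN" ]; then
        printf '%s ' "${cmd[@]}"; echo
    else
        "${cmd[@]}"
    fi
}
```

For example, `DRY_RUN=1 dc /mnt/user/appdata/myproject up -d` prints the full docker run invocation that would be executed.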
  9. As CA Backupv2 doesn't use the --dereference (-h) option for tar, it would only back up the symbolic link itself, not the files it points to. Even with the -h option I'd advise against it, though. Just put all the config folders from your docker containers into the same folder for a clean backup and easy restore.
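The difference is quick to see for yourself; this throwaway example (in a temp directory, with made-up file names) archives the same symlink with and without -h:

```shell
# Without -h (--dereference), tar stores the link itself;
# with -h, it stores the contents of the file the link points to.
TMP=$(mktemp -d); cd "$TMP"
echo "config data" > real.cfg
ln -s real.cfg link.cfg
tar -cf plain.tar link.cfg    # archives only the symlink
tar -chf deref.tar link.cfg   # archives the target's contents
tar -tvf plain.tar            # entry type 'l' = symlink
tar -tvf deref.tar            # entry type '-' = regular file
```

Restoring plain.tar on a system where real.cfg is gone leaves you with a dangling link and no data, which is exactly the problem with backing up symlinked config folders.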
  10. Okay. Fair decision, your call after all. Yet it still causes significant downtime for me when the backup runs if compression only uses one thread. I found a way to use pigz without needing to modify the plugin, though. I installed pigz via NerdPack and replaced the gzip binary with a symlink to pigz, so it's used by tar --auto-compress with the .tar.gz ending, which is how CA Backupv2 invokes it. It yields substantial performance improvements with my 8C/16T CPU: currently a reduction from 37 min down to 6 min (with verification of the backup file). If anyone stumb
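The swap itself is just a symlink. Sketched here in a scratch directory with a placeholder pigz, so nothing system-wide is touched; on the actual server the paths would be /usr/bin/gzip and the pigz binary installed by NerdPack:

```shell
# Demonstrate the gzip -> pigz swap with stand-in paths in a temp dir.
# Real system equivalent: keep the original gzip somewhere safe,
# then symlink /usr/bin/gzip to the pigz binary.
BIN=$(mktemp -d)
printf '#!/bin/sh\nexec cat\n' > "$BIN/pigz"   # stand-in for real pigz
chmod +x "$BIN/pigz"
ln -s "$BIN/pigz" "$BIN/gzip"                  # "gzip" now resolves to pigz
readlink "$BIN/gzip"
```

Note that this gets undone whenever the gzip binary is replaced, and on UnRAID anything outside /boot is reset on reboot anyway, so the symlink step has to be repeated (e.g. from the go file).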
  11. Compatibility problems specifically with CA Backup, or with the tar on Unraid? Shouldn't it be as easy as replacing the -z flag of tar with -I pigz when pigz is installed? And if pigz doesn't work, wouldn't another compressor that supports multiple cores (like pbzip2) be possible?
  12. The plugin works really well for the task, but is it intended behavior that only one thread is used for compression? I think on a multicore CPU there could be substantial performance improvements if more cores were used for this task. If there is no better method available on Unraid by default, maybe make it optional when a better compression program is detected (e.g. pigz installed via NerdPack)? I think anyone can benefit from this, as multi-core CPUs are the norm nowadays.
  13. You can also use /dev/shm to map to RAM. /tmp seems to be mounted on rootfs on unRAID and not a tmpfs like /dev/shm. For unRAID this shouldn't make much of a difference (since it runs in RAM anyway), but for other Linux distros /tmp might not be in RAM, while /dev/shm (if available) should always map to RAM.
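You can check what actually backs each path on your own system; the output differs per distro:

```shell
# Show which filesystem backs each path (GNU coreutils stat).
stat -f -c 'Backing filesystem of /dev/shm: %T' /dev/shm
stat -f -c 'Backing filesystem of /tmp: %T' /tmp
```

On a typical Linux install /dev/shm reports tmpfs (RAM-backed), while /tmp may report a disk filesystem such as ext4, which is the distinction described above.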
  14. Eh, sure. Effectively you just have to execute the command from my other post. If you don't want to do that manually every time you open an SSH connection, then you have to add it to this file: /root/.bash_profile
      To make it persistent across reboots (that's how I did it, not saying it's the most ideal way): Edit /boot/config/go and add:
      # Customise bash
      cat /boot/config/bash_extra.cfg >> /root/.bash_profile
      Create /boot/config/bash_extra.cfg (e.g. with nano) and add:
      #docker-compose as container
      alias docker-compose='docker run --rm \
      -v /var/run/do
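For a complete picture, here's a sketch that writes both files into a scratch directory, with stand-in GO/CFG paths so it can be tried safely outside /boot. The alias body is reconstructed from the full command in my docker-compose post above; using $PWD there is my assumption, since the original alias is cut off:

```shell
# Sketch of the go-file customization with stand-in paths.
# On the real flash drive: GO=/boot/config/go, CFG=/boot/config/bash_extra.cfg
TMP=$(mktemp -d)
GO="$TMP/go"; CFG="$TMP/bash_extra.cfg"

# Append the snippet that replays the extra config at every boot.
cat >> "$GO" <<'EOF'
# Customise bash
cat /boot/config/bash_extra.cfg >> /root/.bash_profile
EOF

# Write the alias file itself ($PWD as project dir is an assumption).
cat > "$CFG" <<'EOF'
#docker-compose as container
alias docker-compose='docker run --rm \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v "$PWD:$PWD" -w="$PWD" \
  docker/compose'
EOF
```

With the real paths in place, the alias shows up in every new login shell after the next reboot, or immediately after manually re-sourcing /root/.bash_profile.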