Thx And Bye

About Thx And Bye

  • Birthday January 9



  1. I wish there had been a section about ECC memory, but overall it's a really informative episode. Good job!
  2. After removing the old files and creating a new export, it shows "related export file is not present", but the files are present on the flash:
  3. This isn't specifically an UnRAID problem, but I'm putting it here for visibility and awareness, as UnRAID v6.9.2 is affected by this bug. I already commented about the problem over here: UNRAID 6.9.2 - DOCKER CONTAINER NOT REACHABLE OVER THE INTERNET WITH IPV6

     There is a problem in Docker's networking engine when using IPv6 with a container that has only an IPv4 address assigned in a bridged network. Prior to Docker version 20.10.2, IPv6 traffic was forwarded to the container regardless. This behavior changed with version 20.10.2; this is the pull request that changed it: Fix IPv6 Port Forwarding for the Bridge Driver. A fix for this regression was issued 4 days later (Fix regression in docker-proxy), but it wasn't shipped in Docker until version 20.10.6.

     For me this is just a minor issue, as I have a full dual-stack connection and switched to IPv4 only for now. But for people on a DS-Lite connection, this could mean that their Docker containers operating in bridged mode aren't accessible from outside their home network anymore (like PLEX or Nextcloud).
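To check whether a given engine version falls inside the affected window, a small helper can encode the range 20.10.2–20.10.5 (the function name is made up; on a live host you would feed it `docker version --format '{{.Server.Version}}'`):

```shell
# Hypothetical helper: returns success (0) when the given Docker version
# ships the IPv6 port-forwarding regression without the later fix.
affected_by_ipv6_regression() {
  case "$1" in
    20.10.[2-5]) return 0 ;;  # regression present, fix not yet shipped
    *)           return 1 ;;  # older engine, or 20.10.6+ with the fix
  esac
}

affected_by_ipv6_regression 20.10.5 && echo "20.10.5: affected"
affected_by_ipv6_regression 20.10.6 || echo "20.10.6: fixed"
```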
  4. I too can't access my Nginx via IPv6 anymore after updating to UnRAID 6.9.2. Connecting via IPv4 still works fine. I had no problem with my config in 6.8.3, 6.9.0 and 6.9.1. Is there a way to fix it, or is this something that will get fixed in a future UnRAID update?

     EDIT: I did some digging and came across this pull request that was merged in Docker 20.10.2: As UnRAID 6.9.2 switched from a version older than 20.10.2 to 20.10.5, it also includes this change, and as such IPv6 isn't forwarded by default anymore. There is a fix merged, but it's not included in the 20.10.5 release. It's fixed with 20.10.6 though, see the release notes:
  5. It works for me except for one drive (UnRAID v6.9.2). I suspect it has to do with the identifier in the webUI containing a dot that is substituted with an underscore in smart-one.cfg:

     WebUI: Samsung_SSD_850_EVO_M.2_250GB_S33CNX0H506075N
     Conf:  Samsung_SSD_850_EVO_M_2_250GB_S33CNX0H506075N

     It's the dot in "M.2" that seems to cause the problem, as changing the underscore back to a dot in the config makes the values appear in the webUI too.
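A minimal sketch of that workaround; the config line and the use of a temp file are illustrative (the real file lives wherever your Unraid flash keeps smart-one.cfg), so adapt the `sed` expression to your actual identifier:

```shell
# Demonstrate restoring the dot that the webUI identifier contains but the
# config key lost ("M.2" -> "M_2"). Uses a throwaway temp file for safety.
cfg=$(mktemp)
echo 'Samsung_SSD_850_EVO_M_2_250GB_S33CNX0H506075N="1"' > "$cfg"
sed -i 's/M_2/M.2/' "$cfg"   # underscore back to a dot, matching the webUI
cat "$cfg"   # Samsung_SSD_850_EVO_M.2_250GB_S33CNX0H506075N="1"
```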
  6. Nice tips. I just wish it were easier to set up key-file authentication and disable password authentication for SSH. Just pasting your public key in the UI and setting a checkbox to disable password auth would be nice. I currently have it set up like ken-ji describes here, and then edited PasswordAuthentication to "no".

     Also, think about a secure-by-default approach with future updates. Why not force the user to set a secure password on first load? Why even make shares public by default? Why allow "guest" to access SMB shares by default? Why create a share for the flash in the first place? I get that some of those things make it more convenient, but IMO convenience should not compromise security.
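For reference, a sketch of the relevant sshd_config directives; the file location is an assumption based on Unraid keeping persistent SSH config on the flash, so verify the path on your system:

```
# Assumed location: /boot/config/ssh/sshd_config
PubkeyAuthentication yes
PasswordAuthentication no
```

Keep a local console session open while testing, so a broken key setup can't lock you out.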
  7. Restarting the API solved the issue for me and it connected properly. The "Support Thread" link for the plugin just points to the general support forum, so I assumed I had to post problems in here.
  8. Hi, I have a couple of issues with the new "My Servers" integration.

     My setup is:
     - Unraid 6.9.1 on dedicated hardware, default ports for the WebUI changed to 8080 and 4433 (SSL), as 80 and 443 are used by a container.
     - pfSense 2.5.0 on dedicated hardware; the DNS resolver has an exclusion for DNS rebind protection: server: private-domain: ""

     The problem I still have: when logging in, I get the error "Communication with [UnraidName] has timed out".

     Problems I solved:
     1. The menu shows "My Servers Error, guest doesn't have permission to access "servers"". Why would I want guests to access my server? Solved by entering "unraid-api restart" on the CLI, as per the solution from ljm42.
     2. When trying to provision a certificate, I get the error that the router has DNS rebind enabled, although it is explicitly allowed already and the URL does resolve properly. Solved by entering the domain with the port manually and trying to provision again.

     The only other thing that is working is the flash backup. Is this all due to using non-default ports, or is there some other problem I need to resolve? If anything more specific is required, I'll try to provide it.
  9. While I understand that this isn't the intended way to use containers on UnRAID and I am transitioning my stacks over from docker-compose at some point, I still think that something as basic as checking whether a running container has an update available should work. Ideally the UnRAID UI would support compose files directly somewhere in an advanced view and then handle those separately from the rest.
  10. If you want to call it from shell scripts, use the complete command, not the alias. Like this:

      docker run --rm -v /var/run/docker.sock:/var/run/docker.sock -v "$path:$path" -w="$path" docker/compose

      $path has to be the directory where your compose file resides.
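A hedged sketch of wrapping that full command in a script; the appdata path here is a made-up example, and the script only echoes the command so it can be inspected before running it on a real host:

```shell
#!/bin/sh
# Build the complete docker/compose invocation from a script instead of
# relying on a shell alias. $path must be the directory holding your
# docker-compose.yml (the path below is an assumption for illustration).
path=/mnt/user/appdata/compose/myproject
cmd="docker run --rm -v /var/run/docker.sock:/var/run/docker.sock -v $path:$path -w=$path docker/compose"
echo "$cmd up -d"   # on the Unraid host you would execute this, not echo it
```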
  11. As CA Backup v2 doesn't use the --dereference (-h) option for tar, it would only back up the symbolic link itself, not the files it points to. Even with the -h option I'd advise against it, though. Just put all the config folders from your Docker containers into the same folder for a clean backup and easy restore.
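A quick self-contained demonstration of the difference, runnable anywhere GNU tar is available (all paths are throwaway temp files):

```shell
# Without -h (--dereference) tar archives the symlink itself; with -h it
# archives the file the link points to.
tmp=$(mktemp -d)
echo "config data" > "$tmp/real.cfg"
ln -s "$tmp/real.cfg" "$tmp/link.cfg"
tar -C "$tmp" -cf  "$tmp/plain.tar" link.cfg   # stores only the symlink
tar -C "$tmp" -chf "$tmp/deref.tar" link.cfg   # stores the file contents
tar -tvf "$tmp/plain.tar"   # entry type 'l' (symlink, no data)
tar -tvf "$tmp/deref.tar"   # entry type '-' (regular file with data)
```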
  12. Okay, fair decision; it's your call after all. Yet it still causes significant downtime for me when the backup runs if compression only uses one thread.

      I found a way to use pigz without needing to modify the plugin, though. I installed pigz via NerdTools and replaced the gzip binary with a symlink to pigz, so it's used by tar --auto-compress with the .tar.gz ending, the way CA Backup v2 invokes it. It yields substantial performance improvements with my 8C/16T CPU: currently a reduction from 37 min down to 6 min (with verification of the backup file).

      If anyone stumbles upon this and thinks they can profit from parallel compression, it's fairly simple to do. Just don't expect @Squid to fix any problems that come up with this configuration, and if you should ever run into a problem with CA Backup v2, remember to revert this before filing any bug report. Move the original gzip binary aside (I renamed it to ogzip, so it's still there if you ever need it) and then create the link:

      mv /bin/gzip /bin/ogzip
      ln -s /usr/bin/pigz /bin/gzip

      'Official' support via a checkbox when pigz is detected would probably be better, but this works for me and can be applied via the /boot/config/go script just fine.
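To persist this across reboots, the two commands can be appended to the go script; a sketch assuming pigz lands in /usr/bin/pigz when installed via NerdTools (check the actual path on your system first):

```
# /boot/config/go fragment (paths assumed)
mv /bin/gzip /bin/ogzip        # keep the original gzip around as "ogzip"
ln -s /usr/bin/pigz /bin/gzip  # tar --auto-compress now invokes pigz
```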
  13. Compatibility problems specifically with CA Backup, or with the tar on Unraid? Shouldn't it be as easy as replacing tar's -z flag with -I pigz when pigz is installed? And if pigz doesn't work, wouldn't another compressor that supports multiple cores (like pbzip2) be possible?
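If the plugin did adopt this, the tar change would be minimal. A self-contained sketch (the temp paths are illustrative) that uses -I pigz when available and falls back to plain -z otherwise:

```shell
# Use pigz for parallel gzip compression when present, otherwise fall back
# to tar's built-in single-threaded -z. Output is a normal .tar.gz either way.
tmp=$(mktemp -d)
echo "payload" > "$tmp/data"
if command -v pigz >/dev/null 2>&1; then
  tar -C "$tmp" -I pigz -cf "$tmp/backup.tar.gz" data
else
  tar -C "$tmp" -czf "$tmp/backup.tar.gz" data
fi
tar -tzf "$tmp/backup.tar.gz"   # lists: data
```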
  14. The plugin works really well for the task, but is it intended behavior that only one thread is used for compression? I think on a multi-core CPU there could be substantial performance improvements if more cores were used for this task. If there is no better method available on Unraid by default, maybe make it optional when a better compression program is detected (e.g. pigz installed via NerdPack)? I think almost anyone could profit from this, as multi-core CPUs are the norm nowadays.