dehein2

Members · 67 posts

  1. Thanks. But is the workaround still required in the current version, or could I (after updating) switch all containers back to Bridge and still access them via IPv6?
  2. Ah ok, so the containers are still bridged afterwards (i.e. they still have their own IP)?
  3. @JorgeB Thanks. And by turning it off you mean in each individual container's settings, correct? The effect would be that the container afterwards no longer has its own IP address but can instead be reached via Unraid's IP plus a specific port. My proxy configs in swag look like this:

        server {
            listen 443 ssl;
            listen [::]:443 ssl;
            server_name nextcloud.*;
            include /config/nginx/ssl.conf;
            client_max_body_size 0;

            location / {
                include /config/nginx/proxy.conf;
                resolver 127.0.0.11 valid=30s;
                set $upstream_app nextcloud;
                set $upstream_port 443;
                set $upstream_proto https;
                proxy_pass $upstream_proto://$upstream_app:$upstream_port;
                proxy_max_temp_file_size 2048m;
            }
        }

     So I assume they point to the container name and not the IP anyway!? The only issue I see is the port, which is 443 for both swag and Nextcloud; that doesn't matter right now because they have different IPs. (See the name-resolution check sketched after this post list.)
  4. Ok thanks. But more generally: is the setup with br0 and the IPv6 workaround still necessary, or could I just switch them all to Bridge (that's the standard, right?)? The only thing I would have to do is change the swag settings, correct?
  5. Sorry, but does anyone have an idea of the best way to solve this?
  6. Hi all, I'm about to update from 6.10.3 to 6.12.6 and was told to change the "Docker custom network type:" to ipvlan before updating. I tried it and kind of broke a couple of things. I have a setup where a few containers plus swag are set to br0 and have the extra parameter "--sysctl net.ipv6.conf.all.disable_ipv6=0 --sysctl net.ipv6.conf.eth0.use_tempaddr=2", because I had issues accessing the containers via IPv6 on my version and that was the workaround. So every one of those containers has its own local IP. All containers not related to swag are set to Bridge (I guess that's the standard?).
     After switching to ipvlan I had issues with my containers on br0:
     - Nextcloud would not load
     - Photoview somehow loaded, but not fully
     - Shinobi was not working
     That was the case both when accessing them directly (via their local IP) and externally via the swag reverse proxy.
     So my question: how would you change the setup so I can upgrade? I don't need the br0 workaround if IPv6 access to bridged containers works just fine in the current version (see the IPv6 checks sketched after this post list). Thanks a lot
  7. Sorry again, now the update failed and I'm getting:

        [✘] Delete old files failed
        core/shipped.json is not available
        Update failed. To resume or retry just execute the updater again

     Any ideas what to do? Where would that file be placed on my Unraid server / in the Docker container? (See the file lookup sketched after this post list.)
  8. Hi all, I updated my Docker container and now have the message "This version of Nextcloud is not compatible with PHP>=8.2. You are currently running 8.2.6." I tried the upgrade command but get an error at the end:

        Continue update? [y/N] y
        Info: Pressing Ctrl-C will finish the currently running step and then stops the updater.
        [✔] Check for expected files
        [✔] Check for write permissions
        [✔] Create backup
        [✔] Downloading
        [✔] Verify integrity
        [✔] Extracting
        [✔] Enable maintenance mode
        [✔] Replace entry points
        [✔] Delete old files
        [✔] Move new files in place
        [✔] Done
        Update of code successful.
        Should the "occ upgrade" command be executed? [Y/n] Y
        This version of Nextcloud is not compatible with PHP>=8.2. You are currently running 8.2.6.
        Keep maintenance mode active? [y/N]

     ---> EDIT: I managed to get the container back.
     I MESSED UP:
     1. I had the wrong-PHP-version error and tried to upgrade Nextcloud to the latest version following this guide: https://github.com/linuxserver/docker-nextcloud/issues/288
     2. I changed the container repository version to an older one, but the update failed and the container was deleted.
     Now I don't have any Nextcloud container in my Docker overview. Is there any way to get the settings belonging to that container back? There were quite a few settings made (external folders, ...). Is there any way to restore the container with the original settings? (See the template check sketched after this post list.) Thanks a lot
  9. Hi all, since the last update (a week ago) I have the issue that swag stops working every night. So each morning I have to manually restart the container and then it works just fine again. I replaced the conf and ssl files as described above after having the same issue. Here's the latest error.log:

        2023/01/29 15:09:15 [crit] 417#417: *2403 SSL_do_handshake() failed (SSL: error:0A00006C:SSL routines::bad key share) while SSL handshaking, client: 1245:1234:21:123:402:5123:1234:4123, server: [::]:443
        2023/01/29 17:24:29 [crit] 417#417: *3074 SSL_do_handshake() failed (SSL: error:0A00006C:SSL routines::bad key share) while SSL handshaking, client: 1245:1234:21:123:402:5123:1234:4123, server: [::]:443
        2023/01/29 19:08:12 [crit] 417#417: *3552 SSL_do_handshake() failed (SSL: error:0A00006C:SSL routines::bad key share) while SSL handshaking, client: 1245:1234:21:123:402:5123:1234:4123, server: [::]:443
        2023/01/30 01:40:04 [crit] 419#419: *5074 SSL_do_handshake() failed (SSL: error:0A00006C:SSL routines::bad key share) while SSL handshaking, client: 1245:1234:21:123:402:5123:1234:4123, server: [::]:443
        2023/01/30 08:51:41 [error] 417#417: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 1245:1234:21:123:402:5123:1234:4123, server: nextcloud.*, request: "PROPFIND /remote.php/dav/files/dehein/ HTTP/1.1", upstream: "https://192.168.2.200:443/remote.php/dav/files/dehein/", host: "nextcloud.myserver.com"
        2023/01/30 08:51:41 [error] 417#417: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 1245:1234:21:123:402:5123:1234:4123, server: nextcloud.*, request: "PROPFIND /remote.php/dav/files/dehein/ HTTP/1.1", upstream: "https://[1234:1234:1234:1234::5]:443/remote.php/dav/files/dehein/", host: "nextcloud.myserver.com"
        2023/01/30 08:51:41 [error] 417#417: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 1245:1234:21:123:402:5123:1234:4123, server: nextcloud.*, request: "PROPFIND /remote.php/dav/files/dehein/ HTTP/1.1", upstream: "https://192.168.2.200:443/502.html", host: "nextcloud.myserver.com"
        2023/01/30 08:51:41 [error] 417#417: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 1245:1234:21:123:402:5123:1234:4123, server: nextcloud.*, request: "PROPFIND /remote.php/dav/files/dehein/ HTTP/1.1", upstream: "https://[1234:1234:1234:1234::5]:443/502.html", host: "nextcloud.myserver.com"
        2023/01/30 14:16:07 [crit] 419#419: *1532 SSL_do_handshake() failed (SSL: error:0A00006C:SSL routines::bad key share) while SSL handshaking, client: 1245:1234:21:123:402:5123:1234:4123, server: [::]:443
        2023/01/30 15:28:52 [crit] 420#420: *1932 SSL_do_handshake() failed (SSL: error:0A00006C:SSL routines::bad key share) while SSL handshaking, client: 1245:1234:21:123:402:5123:1234:4123, server: [::]:443
        2023/01/30 19:08:12 [error] 417#417: *2987 vault could not be resolved (3: Host not found), client: 1245:1234:21:123:402:5123:1234:4123, server: vault.*, request: "GET / HTTP/1.1", host: "vault.myserver.com"
        2023/01/30 19:08:12 [error] 417#417: *2987 vault could not be resolved (3: Host not found), client: 1245:1234:21:123:402:5123:1234:4123, server: vault.*, request: "GET / HTTP/1.1", host: "vault.myserver.com"
        2023/01/30 21:44:36 [crit] 417#417: *3459 SSL_do_handshake() failed (SSL: error:0A00006C:SSL routines::bad key share) while SSL handshaking, client: 1245:1234:21:123:402:5123:1234:4123, server: [::]:443
        2023/01/30 22:08:55 [crit] 420#420: *3561 SSL_do_handshake() failed (SSL: error:0A00006C:SSL routines::bad key share) while SSL handshaking, client: 1245:1234:21:123:402:5123:1234:4123, server: [::]:443
        2023/01/31 09:26:17 [crit] 419#419: *323 SSL_do_handshake() failed (SSL: error:0A00006C:SSL routines::bad key share) while SSL handshaking, client: 1245:1234:21:123:402:5123:1234:4123, server: [::]:443

     Thanks for helping. (See the DNS/restart checks sketched after this post list.)
  10. Maybe one more unrelated question that just came to my mind: the other pools above (data and cachepool) each have 2 drives with 2TB (8TB) each, yet each pool shows 2TB. My understanding was that the second drive is a mirror; is that correct? (See the btrfs check sketched after this post list.)
  11. Ok, one issue: the disk is not formatted. How can I prepare the disk the way I did when I created the pool for the first time?
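
Regarding the proxy config in post 3: because it sets "resolver 127.0.0.11" (Docker's embedded DNS) and uses the container name as $upstream_app, swag reaches the Nextcloud container by name rather than by a hard-coded IP. A minimal sketch of how to confirm that, assuming the two containers are named swag and nextcloud (adjust to the names shown in docker ps):

    # Ask Docker's embedded DNS (127.0.0.11) from inside the swag container;
    # if nslookup is not present in the image, "getent hosts nextcloud" also works.
    docker exec swag nslookup nextcloud 127.0.0.11

    # If the name does not resolve, the two containers are probably not attached
    # to the same custom Docker network; compare their network memberships.
    docker inspect --format '{{json .NetworkSettings.Networks}}' swag
    docker inspect --format '{{json .NetworkSettings.Networks}}' nextcloud

Name resolution like this only works on a user-defined/custom Docker network, not on the default bridge, which is worth keeping in mind for the Bridge-vs-br0 question in the later posts.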
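
Regarding post 6: whether containers on plain Bridge are reachable over IPv6 depends on the Docker/Unraid IPv6 settings, so it is worth checking before dropping the br0 workaround. A rough sketch, assuming one container is named nextcloud and publishes a port on the host; the port 443 and the IPv6 address in the last command are placeholders:

    # Does the bridged container get an IPv6 address at all?
    # (if "ip" is missing inside a slim image, this just fails harmlessly)
    docker exec nextcloud ip -6 addr show dev eth0

    # Is the published port listening on the Unraid host over IPv6?
    ss -6 -tlnp | grep 443

    # End-to-end test against the host's IPv6 address and the mapped port.
    curl -gkI "https://[2001:db8::1]:443/"

If the mapped port answers on the host's IPv6 address, the per-container sysctl parameters and the dedicated br0 IPs should no longer be needed just for IPv6 reachability.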
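
Regarding the failed updater run in post 7: core/shipped.json lives inside the Nextcloud web root in the container, not loose on the Unraid host. A sketch for locating it, assuming the container is named nextcloud; the two search paths are assumptions covering older (/config/www/nextcloud) and newer (/app/www/public) linuxserver.io image layouts:

    # Look for shipped.json in the usual web-root locations of the image
    docker exec nextcloud sh -c 'find /config /app -name shipped.json 2>/dev/null'

If the file is genuinely missing, it can be copied back from the release tarball of the exact Nextcloud version the installation is on, after which the updater can be re-run.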
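
Regarding the deleted container in post 8: removing a container does not delete its Unraid template (the XML with ports, paths and variables stored on the flash drive) or its appdata, so the old settings are usually recoverable via Docker -> Add Container -> select the old template, or Apps -> Previous Apps. A quick check, assuming the default locations and that the template name contains "nextcloud":

    # Per-container templates survive container deletion on the flash drive
    ls /boot/config/plugins/dockerMan/templates-user/ | grep -i nextcloud

    # The application data under appdata is untouched by deleting the container
    ls /mnt/user/appdata/ | grep -i nextcloud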
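
Regarding the log in post 9: the "vault could not be resolved (3: Host not found)" entries mean swag's resolver could no longer find an upstream container by name, which typically happens when an upstream container (or the Docker network) restarts underneath swag, e.g. during nightly maintenance. Some checks plus a stopgap, assuming the containers are named swag and vault:

    # Did any container restart overnight? A much younger uptime than swag's
    # points at the upstream that triggers the failure.
    docker ps --format 'table {{.Names}}\t{{.Status}}'

    # swag's own log around the failure window
    docker logs --since 24h swag 2>&1 | tail -n 50

    # Stopgap only (e.g. from a scheduled User Scripts job), not a fix:
    docker restart swag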
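
Regarding post 10: a two-device Unraid pool defaults to a btrfs raid1 profile, i.e. the second drive mirrors the first, which is why the usable size equals a single drive rather than the sum. This can be verified from a terminal; the mount point below is an assumption (use the actual pool name shown on the Main page):

    # "RAID1" in the Data and Metadata lines means the two drives mirror each other
    btrfs filesystem df /mnt/cachepool

    # Per-device allocation for the same pool
    btrfs filesystem usage /mnt/cachepool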