Ben deBoer

Members
  • Posts: 18
  1. I keep getting the following error in my log. Any idea where I should start to resolve it?

     [richdocuments] Error: Failed to fetch the Collabora capabilities endpoint: Client error: `GET https://**nextcloud_address**/apps/richdocumentscode/proxy.php?req=/hosting/capabilities` resulted in a `404 Not Found` response

     (**nextcloud_address** is the address of my server.)

     I mostly figured it out. The problem is the Nextcloud Office app: it tries to connect to a CODE server, which is another app. I installed that one and got different errors. Either way, I don't deal much with office files on this server (it is mostly for storing pictures), so I just disabled the Nextcloud Office app.
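A quick way to see what that endpoint actually returns is to query it directly. This is only a sketch: the URL path is taken from the error message above, and YOUR_NEXTCLOUD_ADDRESS is a placeholder for your own server's address.

```shell
# A working richdocumentscode install should answer with JSON
# capabilities here, not a 404. YOUR_NEXTCLOUD_ADDRESS is a placeholder.
curl -i "https://YOUR_NEXTCLOUD_ADDRESS/apps/richdocumentscode/proxy.php?req=/hosting/capabilities"
```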
  2. I managed to fix it; the issue is with the new version of SABnzbd. Basically, it has to do with the way the other apps identify themselves to SABnzbd. I just added a username and password to SABnzbd, added those credentials to all my configs, and it works now. Thanks.
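For reference, SABnzbd keeps its web UI credentials in `sabnzbd.ini` under the `[misc]` section. The values below are made-up examples; set your own, then mirror them in each downstream app's download-client settings:

```ini
[misc]
username = sabuser
password = changeme
```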
  3. Maybe I am doing something wrong. I have my whole *arr stack on one Docker network; however, when I link the apps together, I have to use the Docker network's IP addresses, which means the containers need to start up in a certain order. I set up Sonarr to reach the download clients, but as soon as I enter the container name binhex-sabnzbdvpn to point it at SABnzbd, it gives me the error: "Unable to connect. Test was aborted due to an error: HTTP request failed: Forbidden." When I type in the Docker IP address, everything works fine. Anything I can try?
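On a user-defined Docker network, containers can normally reach each other by container name, which removes the dependency on startup order and fixed IPs. A minimal compose sketch (the image tags and network name are assumptions, not your exact setup):

```yaml
# Both containers join the same user-defined bridge network,
# so Sonarr can reach SABnzbd at the hostname "sabnzbd".
services:
  sonarr:
    image: linuxserver/sonarr
    networks: [media]
  sabnzbd:
    image: linuxserver/sabnzbd
    networks: [media]
networks:
  media:
    driver: bridge
```

Note that a VPN wrapper container such as binhex-sabnzbdvpn typically also restricts which source networks may talk to the wrapped app, so a Forbidden error can persist even when name resolution works; checking its LAN_NETWORK-style allow-list setting may help.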
  4. I am having the same issue as BigMal. Did you ever get it resolved? I can access the app internally just fine; however, if I try to use my reverse proxy, it shows the login screen but doesn't go beyond that, even with the right password.

     Update: re-read the .env files from the GitHub repo and add a new variable TRUSTED_PROXIES = ** (not the IP of the proxy, as I was doing).
  5. Yes, I have confirmed that the web UI is basically freezing/locking up on me. Are there any other logs or details I should post?
  6. I was running a Pi-hole Docker container on this server, and the web UI was hanging every 5-10 minutes; that message was the only thing scrolling in the logs. I have since removed the Pi-hole container, but I am still getting the messages, just a lot less frequently. I haven't experienced loss of the web UI recently; however, I can't say for sure that it doesn't happen. Whenever the web UI used to hang and I got back in, there was always a new string like that. If it is normal for interfaces to be renamed like that all the time, that is fine. I am just wondering whether it fails under increased network traffic, which would be a big problem, as I am looking to add Jellyfin to the server.
  7. I recently upgraded to 6.11 from 6.8, and I have noticed that my ethernet has been cutting out sometimes. Has anyone else seen this?

     Nov 13 22:06:18 Tower kernel: eth0: renamed from vethe781b3d
     Nov 13 22:06:18 Tower kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth69db3b0: link becomes ready
     Nov 13 22:06:20 Tower avahi-daemon[23070]: Joining mDNS multicast group on interface veth69db3b0.IPv6 with address fe80::c00e:26ff:fe95:afb6.
     Nov 13 22:06:20 Tower avahi-daemon[23070]: New relevant interface veth69db3b0.IPv6 for mDNS.
     Nov 13 22:06:20 Tower avahi-daemon[23070]: Registering new address record for fe80::c00e:26ff:fe95:afb6 on veth69db3b0.*.
     Nov 13 22:06:25 Tower kernel: br-5180425399c9: port 2(veth69db3b0) entered disabled state
     Nov 13 22:06:25 Tower kernel: vethe781b3d: renamed from eth0
     Nov 13 22:06:25 Tower avahi-daemon[23070]: Interface veth69db3b0.IPv6 no longer relevant for mDNS.
     Nov 13 22:06:25 Tower avahi-daemon[23070]: Leaving mDNS multicast group on interface veth69db3b0.IPv6 with address fe80::c00e:26ff:fe95:afb6.
     Nov 13 22:06:25 Tower kernel: br-5180425399c9: port 2(veth69db3b0) entered disabled state
     Nov 13 22:06:25 Tower kernel: device veth69db3b0 left promiscuous mode
     Nov 13 22:06:25 Tower kernel: br-5180425399c9: port 2(veth69db3b0) entered disabled state
     Nov 13 22:06:25 Tower avahi-daemon[23070]: Withdrawing address record for fe80::c00e:26ff:fe95:afb6 on veth69db3b0.
     Nov 13 22:06:26 Tower kernel: br-5180425399c9: port 2(veth05ce39d) entered blocking state
     Nov 13 22:06:26 Tower kernel: br-5180425399c9: port 2(veth05ce39d) entered disabled state
     Nov 13 22:06:26 Tower kernel: device veth05ce39d entered promiscuous mode
     Nov 13 22:06:26 Tower kernel: br-5180425399c9: port 2(veth05ce39d) entered blocking state
     Nov 13 22:06:26 Tower kernel: br-5180425399c9: port 2(veth05ce39d) entered forwarding state
     Nov 13 22:06:26 Tower kernel: eth0: renamed from veth686bcb1
     Nov 13 22:06:26 Tower kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth05ce39d: link becomes ready
     Nov 13 22:06:28 Tower avahi-daemon[23070]: Joining mDNS multicast group on interface veth05ce39d.IPv6 with address fe80::208f:d4ff:fed1:8f54.
     Nov 13 22:06:28 Tower avahi-daemon[23070]: New relevant interface veth05ce39d.IPv6 for mDNS.
     Nov 13 22:06:28 Tower avahi-daemon[23070]: Registering new address record for fe80::208f:d4ff:fed1:8f54 on veth05ce39d.*.

     tower-diagnostics-20221114-2156.zip
  8. I recently updated to 6.11 from 6.8 and have been having two issues that seem to be related. The Pi-hole Docker container causes eth0 to constantly disconnect, so I have removed it from the system. However, I have noticed that sometimes the web UI will just freeze, and I have to repeatedly reload http://tower to get back in. There don't seem to be any issues recorded in the logs.
  9. I am having a problem with the latest Pi-hole setup. I use the br0 interface and give it a separate IP address from the server's IP address. This used to work in the 6.8 version of Unraid, but started to throw massive errors when I updated to 6.11.

     Oct 27 12:38:24 Tower kernel: vetha58e1c6: renamed from eth0
     Oct 27 12:38:24 Tower avahi-daemon[2174]: Interface vethd89dac8.IPv6 no longer relevant for mDNS.
     Oct 27 12:38:24 Tower avahi-daemon[2174]: Leaving mDNS multicast group on interface vethd89dac8.IPv6 with address fe80::28c0:30ff:fe29:6f35.
     Oct 27 12:38:24 Tower kernel: docker0: port 3(vethd89dac8) entered disabled state
     Oct 27 12:38:24 Tower kernel: device vethd89dac8 left promiscuous mode
     Oct 27 12:38:24 Tower kernel: docker0: port 3(vethd89dac8) entered disabled state
     Oct 27 12:38:24 Tower avahi-daemon[2174]: Withdrawing address record for fe80::28c0:30ff:fe29:6f35 on vethd89dac8.
     Oct 27 12:47:10 Tower kernel: br-5180425399c9: port 2(veth78638f9) entered disabled state

     This keeps spamming the logs whenever the Pi-hole container is active. Any solutions?
  10. The latest update broke my server as well. I had it set up with SWAG to provide a reverse proxy, but now that proxy just gives me a 502 error. I have Docker set up so containers can communicate with each other; did that feature get broken?
  11. When you have time, can you add the new DireWolf20 1.18 pack?
  12. I am having some problems connecting to this Docker container from outside the firewall; I can access it fine inside the firewall. I am using SWAG/nginx for my proxy, set up with Nextcloud on a proxynet network. I have added the openproject.subdomain.conf file to the swag/nginx proxy-confs folder. What do I do next? I am thinking I have to add the OpenProject container to the proxynet, but then it can't use the same IP, and I get stuck there.

     Update: the problem is resolved. I made a new subdomain at Cloudflare for my account; my router was already set up to route all incoming SSL and port-80 traffic to my SWAG container. I used the above conf file with one small change at the resolver, pointing it at the IP address of my OpenProject server. The last thing I had to do was enable host access to custom networks in the Docker settings, since my SWAG container is on a private subnetwork. Then everything just worked. Thanks for the great container.
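The change described above would look roughly like this in the subdomain conf. This is a trimmed sketch, not the full LinuxServer sample: the SSL include lines are omitted, and the upstream IP and port are placeholder assumptions.

```nginx
# Trimmed sketch of openproject.subdomain.conf: instead of resolving
# the container name through Docker DNS, the upstream is set to the
# container's IP directly (placeholder address below).
server {
    listen 443 ssl;
    server_name openproject.*;

    location / {
        include /config/nginx/proxy.conf;
        set $upstream_app 172.18.0.5;  # placeholder container IP
        set $upstream_port 80;         # placeholder port
        set $upstream_proto http;
        proxy_pass $upstream_proto://$upstream_app:$upstream_port;
    }
}
```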
  13. I have set up Nextcloud, MariaDB, and SWAG, following SpaceInvaderOne's YouTube video as closely as I can. My problem is that after I change the config files for SWAG and Nextcloud, I can no longer access the Nextcloud web UI; it just brings up the default "welcome to SWAG" page. Even when I shut down the SWAG container, the Nextcloud web UI stays the same. Has anyone had the same issue?

     Edit: solved. When you are using a cache drive, the Docker container puts its config files on the cache drive by default, so when he is adjusting the files in the video, type /mnt/cache instead of /mnt/user.
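Concretely, the fix above means opening the appdata copy on the cache drive. The paths below are typical Unraid appdata locations, given as illustrative examples rather than guaranteed paths for every setup:

```shell
# Edit the copy that lives directly on the cache drive:
nano /mnt/cache/appdata/swag/nginx/proxy-confs/nextcloud.subdomain.conf
# rather than the same file addressed through the user share:
#   /mnt/user/appdata/swag/nginx/proxy-confs/nextcloud.subdomain.conf
```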