Mathervius


Posts posted by Mathervius

  1. I tried to set up this container today following the new instructions (it failed to work yesterday with the original instructions). Unfortunately, it still has the same error for me as yesterday.

     

    Both containers are on my br0.10 network with static IPs.

     

    config.yml

    db:
      user: kemal
      password: kemal
      host: 192.168.20.231
      port: 5432
      dbname: invidious

     

    error from invidious log

    from lib/db/src/db/database.cr:57:16 in '->'
    from /usr/share/crystal/src/primitives.cr:266:3 in 'build_resource'
    from lib/db/src/db/pool.cr:47:34 in 'initialize'
    from lib/db/src/db/pool.cr:40:5 in 'new:initial_pool_size:max_pool_size:max_idle_pool_size:checkout_timeout:retry_attempts:retry_delay'
    from lib/db/src/db/database.cr:56:15 in 'initialize'
    from lib/db/src/db/database.cr:49:5 in 'new'
    from lib/db/src/db.cr:155:5 in 'build_database'
    from lib/db/src/db.cr:119:5 in 'open'
    from /usr/share/crystal/src/kernel.cr:386:3 in '???'
    from src/invidious.cr:38:1 in '__crystal_main'
    from /usr/share/crystal/src/crystal/main.cr:110:5 in 'main_user_code'
    from /usr/share/crystal/src/crystal/main.cr:96:7 in 'main'
    from /usr/share/crystal/src/crystal/main.cr:119:3 in 'main'
    from src/env/__libc_start_main.c:94:2 in 'libc_start_main_stage2'
    Caused by: Cannot establish connection (PQ::ConnectionError)
    from lib/pg/src/pq/connection.cr:34:9 in 'initialize'
    from lib/pg/src/pq/connection.cr:19:5 in 'new'
    from lib/pg/src/pg/connection.cr:13:23 in 'initialize'
    from lib/pg/src/pg/connection.cr:7:5 in 'new'
    from lib/pg/src/pg/driver.cr:3:5 in 'build_connection'
    from lib/db/src/db/database.cr:57:16 in '->'
    from /usr/share/crystal/src/primitives.cr:266:3 in 'build_resource'
    from lib/db/src/db/pool.cr:47:34 in 'initialize'
    from lib/db/src/db/pool.cr:40:5 in 'new:initial_pool_size:max_pool_size:max_idle_pool_size:checkout_timeout:retry_attempts:retry_delay'
    from lib/db/src/db/database.cr:56:15 in 'initialize'
    from lib/db/src/db/database.cr:49:5 in 'new'
    from lib/db/src/db.cr:155:5 in 'build_database'
    from lib/db/src/db.cr:119:5 in 'open'
    from /usr/share/crystal/src/kernel.cr:386:3 in '???'
    from src/invidious.cr:38:1 in '__crystal_main'
    from /usr/share/crystal/src/crystal/main.cr:110:5 in 'main_user_code'
    from /usr/share/crystal/src/crystal/main.cr:96:7 in 'main'
    from /usr/share/crystal/src/crystal/main.cr:119:3 in 'main'
    from src/env/__libc_start_main.c:94:2 in 'libc_start_main_stage2'
    Caused by: Hostname lookup for postgres failed: No address found (Socket::Addrinfo::Error)
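
    Worth noting: that last "Caused by" line shows a lookup for the hostname postgres even though my config points at an IP, so it looks like the container may not be reading my config.yml at all. As a basic connectivity check (assuming psql is available somewhere that can reach the database container):

    # should connect if postgres is actually reachable at that IP/port
    psql -h 192.168.20.231 -p 5432 -U kemal -d invidious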

     

  2. Just throwing my hat in with the same problem. My syslog server hasn't been working properly, so I can't attach logs.

     

    I rolled back to 6.8.3 for the time being.

     

    Before the update to 6.9.1 I had over 200 days of uptime, but after updating I was getting daily crashes, and my family uses the server too much for that to happen. I don't have any VMs and no GPU, just about 20 docker containers.

  3. I get the same issue every day after my Auto Backup finishes and restarts Emby. I've posted on the Emby forums and got nowhere with it.

     

    What's strange is that it was gone for a while, but the bug was reintroduced with an update a while back. I wish I had paid closer attention to when the error started up again :(

  4. Can you try to put in the container's IP rather than the container name and see if that makes a difference?

     

    Did you set/change the ombi base URL?

     

    What errors show when you try to load ombi (500, 401, etc.)?
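
    If you're not sure, a quick way to see exactly what status code the proxy returns (the domain and path here are just placeholders, adjust to your setup):

    # -I fetches only the response headers, so you can read the status code
    curl -I https://yourdomain.example/ombi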

     

    Did you remember to restart the letsencrypt container after making adjustments to conf files?

     

    Do any other containers work properly through the reverse proxy?

     

    Anything in firewall logs indicating an issue?

     

    Side note: it's a lot easier to lend a hand if you make your links actual links so people don't have to copy/paste everything to see what you're talking about. It might help you get more responses. Also make sure to read through the support thread for the actual container if it has one. Most have a dedicated thread.

     

  5. You should be able to assign a static IP to the container using an IP within your LAN subnet.

     

    I have a couple of networks set up for my docker containers. One uses a VLAN (br1.10) and I manually set static IPs for those containers; the containers on my LAN just get an IP assigned to them from the server, but I could just as easily assign static IPs for them as well.
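
    On Unraid this is just the Network Type and Fixed IP fields in the container template, but for reference the equivalent docker command looks something like this (the network name, image, and address are examples, match them to your own subnet):

    # attach the container to the br0 custom network with a fixed LAN address
    docker run -d --name=myapp --network=br0 --ip=192.168.1.50 myimage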

  6. 23 hours ago, Energen said:

    Why are you going through the hassle of playing with keys and drives and licenses?  Move all of your hard drives over to the new system, use the existing flash drive to boot.  Done.  Your entire system is intact.

    While this is mostly true, I recently did a complete new build and there was one pretty decent headache involving networking. I went from a quad port NIC (used as two bonded connections) to a dual port NIC.

     

    When I booted the server there was a mess involving my docker containers' networking (they're a mix of VLAN/custom IPs/VPN), and the MAC address was different, so the static mapping in my router wasn't picking up the server properly.

     

    I ended up deleting my network config file, then rebooting into GUI mode and making my network adjustments. Everything works great now, but it's definitely something to keep in mind.

  7. 8 minutes ago, bigbangus said:

    Could you post an anonymous sample.conf? I'm fairly new to the game.

    So in your conf file you have something like this:

    location / {
        include /config/nginx/proxy.conf;
        resolver 127.0.0.11 valid=30s;
        set $upstream_whatever <IP_ADDRESS>;
        proxy_pass http://$upstream_whatever:<PORT_NUMBER>;

        proxy_set_header Range $http_range;
        proxy_set_header If-Range $http_if_range;
    }

    You will use the container's IP address instead of its name. In your case you would use the VPN container's IP address and whatever port you use to access the service locally.

     

    Don't forget to enable some sort of security if you are exposing these services to the web. Most of the conf files have something like this that you can use to get basic HTTP auth set up at least:

    # enable the next two lines for http auth
    #auth_basic "Restricted";
    #auth_basic_user_file /config/nginx/.htpasswd;
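
    To create the password file those lines reference, something like this should work (assuming the htpasswd utility is available inside your letsencrypt container; the username is just an example):

    # -c creates the file; drop -c when adding additional users later
    htpasswd -c /config/nginx/.htpasswd myuser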

     

  8. This will come down to firewall rules on your pfSense box. I was having a bunch of issues getting communication between VLAN and LAN, and every time it came down to firewall rules.

     

    I would suggest posting or searching on the netgate forums, as that's where I had the best luck.

  9. 35 minutes ago, sittingmongoose said:

    I have had NZBGET running well for a long time now.  I have like 8 servers to feed it and normally don't have any problems.  I have noticed recently though many of my downloads are stuck with like 1MB left to download.  The only error I get is "[ERROR] Could not read from TLS-Socket: Connection closed by remote host".  My servers all are up and working.  Any help would be awesome!

    Are you using the latest version? I started getting a lot of failed downloads and server connection errors with the latest version. I realized it was caused by them removing python 2, which my scripts relied on to run properly.
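
    If you use post-processing scripts, a quick check is the interpreter line at the top of each one (the path here is just an example):

    # anything still pointing at python or python2 will break once python 2 is gone
    head -n 1 /config/scripts/myscript.py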

  10. I went ahead and assigned br2 a static IP address. My hope was to add the br2 static IP to an Alias in pfSense and then use that to route that traffic through a VPN. Kind of like if the docker containers were using Host as their network and Host was being routed through a VPN.

     

    Unfortunately, that plan does not do what I intended. The alias itself works, but the dockers using br2 do not get routed through the VPN as if they were on the Host network.

     

    I've managed to just manually give IPs to each container and then add those IPs to the pfSense alias and it works fine.

     

    I'm not sure how some of this works, but I thought that with br2 on its own NIC you could sort of replicate the Host network behavior for the docker containers.

  11. Shorter version:

     

    I currently have docker assigned to br2, which has its own NIC, and I manually assign IP addresses to containers from the DHCP pool range I set in the docker settings page.

     

    If I change the network setting for br2 from IPv4 address assignment: None to IPv4 address assignment: Static, will it mess up my current docker network, which utilizes br2?

  12. I have been trying to eliminate call trace issues on my server and have read through lots of info on the forum. Original post

     

    After a bunch of issues I was able to get things working better. Now I have br0 up and running on one NIC and br2 running on a second NIC.

     

    br0 is the UNRAID network and I have a static IP set for it in pfSense, which is working well.

     

    Following a forum thread I set br2 up with its IP set to none. I then set my docker network to use br2 and gave it a DHCP pool, and that's working perfectly. My VMs are also on br2.

     

    I would like to have a static IP set for br2 within pfSense so that I can add it to an Alias, but nothing on the br2 network shows up in the DHCP lease section of pfSense. All of the br2 IPs assigned are accessible over the LAN without issue but do not show up in my DHCP leases.

     

    Should I assign an IP address to br2 to do what I am wanting or something else? Any ideas? Thanks for reading!

  13. After reading through a bunch of forum posts I tried putting the docker network onto its own NIC. Any time I change the network settings I am no longer able to reach the machine over LAN.

     

    I then deleted the network.cfg and rebooted into GUI mode. I made the suggested adjustments to put docker on its own NIC and once again I lost all network connectivity. 

     

    I have now adjusted eth0 to: Bonding = no, Enable bridge = yes, Bridging members of br0 = eth0.
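
    For reference, I believe the relevant part of /boot/config/network.cfg ends up looking roughly like this (key names from memory, so treat it as a sketch that may differ between UNRAID versions):

    # sketch: eth0 with bonding off, bridged as br0
    IFNAME[0]="br0"
    BONDING[0]="no"
    BRNAME[0]="br0"
    BRNICS[0]="eth0"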

     

    If I try to set eth0 to a static IP I lose network connectivity again. I already have a static IP reserved for it in pfSense, but previously I had it set as static in UNRAID as well and it worked with no problem.

     

    eth1, eth2, and eth3 all show as not configured now. If I make any adjustments to them I lose network connectivity and have to delete the network.cfg file and reboot in order to get connected again.

     

    The dashboard still shows that I'm using bond0, which is what it always showed before. Shouldn't it match the network settings page, though?

     

    Sorry for the long post, but I am genuinely stuck here.

  14. 6 hours ago, johnnie.black said:

    Macvlan call traces are usually related to having dockers with a custom IP address:

     

     

    OK, I read through that post, but my dockers don't have an IP address assigned to them. Mine are Host, Bridge, Proxynet (letsencrypt), and a VPN container.

     

    Could one of those networks cause the macvlan issue? Maybe it's because I have docker set to be able to communicate with the host network (Host access to custom networks)?

  15. Well, the crashes are back....

     

    I was finally able to set up Graylog since my other syslog server (unraid) wasn't capturing the issue.

     

    I've attached the logs. The other issue is that it seems Graylog didn't export everything in the exact order that it came in. Sorry about that...

     

    This came in after rebooting the server and having it run for about an hour:

    May 26 19:17:08 Tower kernel: RIP: 0010:__nf_conntrack_confirm+0xa0/0x69e
    May 26 19:17:08 Tower kernel: Code: 04 e8 56 fb ff ff 44 89 f2 44 89 ff 89 c6 41 89 c4 e8 7f f9 ff ff 48 8b 4c 24 08 84 c0 75 af 48 8b 85 80 00 00 00 a8 08 74 26 <0f> 0b 44 89 e6 44 89 ff 45 31 f6 e8 95 f1 ff ff be 00 02 00 00 48
    May 26 19:17:08 Tower kernel: RSP: 0018:ffff8885a99c3d58 EFLAGS: 00010202
    May 26 19:17:08 Tower kernel: RAX: 0000000000000188 RBX: ffff888574348500 RCX: ffff888ad98fce18
    May 26 19:17:08 Tower kernel: RDX: 0000000000000001 RSI: 0000000000000001 RDI: ffffffff81e091c4
    May 26 19:17:08 Tower kernel: RBP: ffff888ad98fcdc0 R08: 00000000e48f2dcb R09: ffffffff81c8aa80
    May 26 19:17:08 Tower kernel: R10: 0000000000000158 R11: ffffffff81e91080 R12: 000000000000baf1
    May 26 19:17:08 Tower kernel: R13: ffffffff81e91080 R14: 0000000000000000 R15: 000000000000eaf0
    May 26 19:17:08 Tower kernel: FS: 0000000000000000(0000) GS:ffff8885a99c0000(0000) knlGS:0000000000000000
    May 26 19:17:08 Tower kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
    May 26 19:17:08 Tower kernel: CR2: 0000146fb1114000 CR3: 0000000001e0a000 CR4: 00000000000006e0
    May 26 19:17:08 Tower kernel: Call Trace:
    May 26 19:17:08 Tower kernel: <IRQ>
    May 26 19:17:08 Tower kernel: ipv4_confirm+0xaf/0xb9
    May 26 19:17:08 Tower kernel: nf_hook_slow+0x3a/0x90
    May 26 19:17:08 Tower kernel: ip_local_deliver+0xad/0xdc
    May 26 19:17:08 Tower kernel: ? ip_sublist_rcv_finish+0x54/0x54
    May 26 19:17:08 Tower kernel: ip_sabotage_in+0x38/0x3e
    May 26 19:17:08 Tower kernel: nf_hook_slow+0x3a/0x90
    May 26 19:17:08 Tower kernel: ip_rcv+0x8e/0xbe
    May 26 19:17:08 Tower kernel: ? ip_rcv_finish_core.isra.0+0x2e1/0x2e1
    May 26 19:17:08 Tower kernel: __netif_receive_skb_one_core+0x53/0x6f
    May 26 19:17:08 Tower kernel: process_backlog+0x77/0x10e
    May 26 19:17:08 Tower kernel: net_rx_action+0x107/0x26c
    May 26 19:17:08 Tower kernel: __do_softirq+0xc9/0x1d7
    May 26 19:17:08 Tower kernel: do_softirq_own_stack+0x2a/0x40
    May 26 19:17:08 Tower kernel: </IRQ>
    May 26 19:17:08 Tower kernel: do_softirq+0x4d/0x5a
    May 26 19:17:08 Tower kernel: netif_rx_ni+0x1c/0x22
    May 26 19:17:08 Tower kernel: macvlan_broadcast+0x111/0x156 [macvlan]
    May 26 19:17:08 Tower kernel: ? __switch_to_asm+0x41/0x70
    May 26 19:17:08 Tower kernel: macvlan_process_broadcast+0xea/0x128 [macvlan]
    May 26 19:17:08 Tower kernel: process_one_work+0x16e/0x24f
    May 26 19:17:08 Tower kernel: worker_thread+0x1e2/0x2b8
    May 26 19:17:08 Tower kernel: ? rescuer_thread+0x2a7/0x2a7
    May 26 19:17:08 Tower kernel: kthread+0x10c/0x114
    May 26 19:17:08 Tower kernel: ? kthread_park+0x89/0x89
    May 26 19:17:08 Tower kernel: ret_from_fork+0x35/0x40
    May 26 19:17:08 Tower kernel: ---[ end trace b58796bea918bc16 ]---

    It didn't crash the server this time, though. Sorry, I just don't have experience with this kind of issue...

     

    UNRAID_CRASH05262020.jpg

    graylog-search-result-relative-0.txt

  16. Have you tried testing with iperf to see the speed between your two devices on your LAN?
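
    Something like this, with iperf3 installed on both machines (the IP is an example):

    # on the UNRAID/server side
    iperf3 -s
    # on the client, pointing at the server's LAN IP
    iperf3 -c 192.168.1.100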

     

    What OS are you transferring from? macOS SMB over wifi has been slow for me since I was on High Sierra.

     

    The other thing you might run into is the speed of the disks you are reading from/writing to.

     

    I've had much better luck running docker containers on the UNRAID box itself rather than using a separate box.