bumpkin

Everything posted by bumpkin

  1. Hey, just a follow-up on this. Adding a variable for VPNPORT with the relevant port fixed this. Yes, port forwarding through openvpn-client is explained on your GitHub page. I just totally overlooked it. My apologies.
  2. I really appreciate your persistence here. Well, I can't get this container to see devices on my LAN (the ROUTE variable doesn't seem to work the same way), and it seems to be super laggy, with pings on the order of seconds (using the very same .ovpn file). Oh well. I can VPN into my LAN and use your container. That works well. It just doesn't work for things like an Apple TV, which needs to authenticate in the manner described. Thanks again for all the help.
  3. Well, the ChannelsDVR container needs to be able to see an HDHomeRun tuner on my LAN (sitting at 192.168.1.63). Adding ROUTE to 192.168.1.0/24 in openvpn-client fixed that. I was thinking the reason openvpn-client wasn't forwarding the port to my container is because it wasn't communicating through to 172.17.0.0/16. But it's probably just a port forwarding issue through the container. I'm stumped.
  4. I don't think it's blacklisted, because it works using the same .ovpn config file with binhex-delugevpn. Can I add a second subnet using ROUTE? Do I just separate the two with a comma?
  5. Thanks so much for helping with stuff like this. I set a variable for ROUTE to my local subnet (192.168.1.0/24), which allows my application container to see other devices on my LAN (thanks for that help). I'm using an application called ChannelsDVR (which is awesome). It has an external authentication mechanism through my.channelsdvr.net. When sitting behind the VPN using openvpn-client, my.channelsdvr.net can't authenticate. I have opened the relevant port with my VPN provider. It works using binhex-delugevpn if I set VPN_INPUT_PORTS and VPN_OUTPUT_PORTS to the Channels port (8089), but there are other problems using that container. I have everything working the way I want, but my.channelsdvr.net can't see through openvpn-client, and it has to be because it's not forwarding my port through to the container. So close! Is it also possible to add a route for my Docker subnet (172.17.0.0/16)? I tried comma-separating after the ROUTE variable, but it didn't work. Alternatively, do I need to affirmatively forward the port in openvpn-client? As mentioned, I tried VPNPORT, but it didn't seem to work.
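For reference, the setup being asked about might be sketched like this. Whether ROUTE accepts a comma-separated list, and whether VPNPORT is the right knob for forwarding, are exactly the open questions here, so treat every variable and the image name as assumptions and check the container's README:

```shell
# Sketch (assumptions throughout): run the openvpn-client container with
# routes back to both the LAN and the default Docker bridge subnet, and
# publish the ChannelsDVR port on the VPN container itself.
docker run -d \
  --name=OpenVPN-Client \
  --cap-add=NET_ADMIN \
  -e ROUTE='192.168.1.0/24,172.17.0.0/16' \
  -e VPNPORT=8089 \
  -p 8089:8089 \
  -v /mnt/user/appdata/openvpn-client:/config \
  openvpn-client-image   # placeholder image name
```

Because the joined container shares this network namespace, any port it listens on has to be published here, not on the application container.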
  6. Crazy. I definitely get faster speeds with that download link. It does seem speedtest-cli is crap. Unfortunately, when routing another container through binhex-delugevpn, the ping and bandwidth are still a mess. Something is wrong. Probably on my end. Thanks again for the help.
  7. I have another (hopefully) simple one. I'm routing a container through OpenVPN-Client with Network Type: None and Extra Parameters: --net=container:OpenVPN-Client. I've added a port to OpenVPN-Client for my container, and I've opened the relevant port in my firewall, directed at my Unraid server. It's working nicely. However, I can't seem to port forward through the tunnel to my container. If I hit my WAN IP and the relevant port, then I can make it in, but it's not forwarding connections from the VPN tunnel. I tried just a variable for VPNPORT with the same port number, but then I can't access the container at all. Any thoughts? I tried setting my router to forward to the container subnet IP (instead of my Unraid server IP), but it wouldn't let me add an IP from that subnet.
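The --net=container pattern described above can be sketched as follows. The application image name and volume path are assumptions; the key point is that the joined container gets no port mappings of its own:

```shell
# Sketch: Channels shares OpenVPN-Client's network namespace, so all of
# its traffic (and all published ports) go through the VPN container.
# Image name and paths are assumptions; adjust to your template.
docker run -d \
  --name=Channels \
  --net=container:OpenVPN-Client \
  -v /mnt/user/appdata/channels:/channels-dvr \
  fancybits/channels-dvr

# Note: -p flags must go on OpenVPN-Client (e.g. -p 8089:8089 there);
# Docker rejects port publishing on a container using --net=container.
```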
  8. I do wish it were one of those. In both containers (with identical .ovpn files) I'm using speedtest-cli to test performance. In binhex-delugevpn, I'm getting a ping of like 3700. With openvpn-client, I'm getting a ping of ~30.
  9. Any chance it's some sysctl command in the OpenVPN-Client container?
  10. I'm routing a container's (Channels) traffic through the OpenVPN-Client container connected to a VPN. It works very well. However, I need Channels to be able to see a device on my LAN (HDHomeRun). When connected through OpenVPN-Client (--net=container:OpenVPN-Client), the Channels container cannot see devices on my LAN (i.e., the HDHomeRun). Any idea how I fix that?
  11. I am using the exact same .ovpn configuration for binhex-delugevpn (PureVPN) as I am for openvpn-client. For binhex-delugevpn, I'm getting like 2 Mbit/s up and down. For openvpn-client, I'm getting like 500 down and 40 up (which is closer to what I would expect). Ping times for binhex-delugevpn are also ridiculously slow. Any thoughts on this? There's certain functionality that I like in binhex-delugevpn (e.g., auto-setting the forwarding port), but the speeds are super slow.
  12. I saw that guide. It seems to anticipate that I put my entire Docker stack on the second NIC. I'm not sure what happens then. Would all of those containers be on a new IP address? I'll have to make a lot of changes to my Swag configuration files if that's true (the container names never worked for me). I was kind of hoping that I could just put this one container on a new IP (192.168.1.301), and leave everything else where it is (various ports on 192.168.1.300).
  13. My firewall (Firewalla Purple) very elegantly lets me put devices on my network behind a VPN. I'd like to do that with a specific container on my Unraid server that requires port forwarding (Channels DVR). The Firewalla seems to get confused when two IP addresses share the same MAC address. I've purchased a second NIC, but my first run at enabling it screwed things up. How would I assign that NIC/interface to a specific container with its own IP and MAC addresses? Should I be doing this with macvlan (something I've never set up)? Yes, I accomplished something similar with binhex-delugevpn, including port forwarding through the VPN provider, but for some odd reason it was very slow and flaky.
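At the Docker CLI level, giving one container its own LAN IP and MAC on the second NIC is roughly a macvlan network plus static addressing. A hedged sketch; the parent interface name, subnet, and addresses are assumptions for your LAN (Unraid's GUI exposes the same idea through custom networks):

```shell
# Sketch: macvlan network bound to the second NIC (assumed eth1),
# then one container pinned to its own IP and MAC on that network.
docker network create -d macvlan \
  --subnet=192.168.1.0/24 \
  --gateway=192.168.1.1 \
  -o parent=eth1 \
  lan_dvr

docker run -d \
  --name=ChannelsDVR \
  --network=lan_dvr \
  --ip=192.168.1.201 \
  --mac-address=02:42:c0:a8:01:c9 \
  fancybits/channels-dvr   # image name is an assumption
```

One known macvlan quirk: the host itself can't reach the container over the parent interface without extra routing, which is a separate trap worth reading up on.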
  14. Interesting that this was mentioned. For the first time in years of running Plex, I randomly had the mkv h264 codec take a dump. Couldn't figure out what the problem was. Just wouldn't stream some shows. Finally figured out it was mkv files, and all h264. After an hour of poking around, deleting the contents of the codecs folder in appdata fixed it for me.
  15. Started happening to me a few days ago. Sucks. Restarting the server sorts things out, but I cannot otherwise find the cause, and it keeps coming back. It's filling the log, repeating the following:
      Aug 1 00:10:53 BigChief nginx: 2023/08/01 00:10:53 [crit] 7226#7226: ngx_slab_alloc() failed: no memory
      Aug 1 00:10:53 BigChief nginx: 2023/08/01 00:10:53 [error] 7226#7226: shpool alloc failed
      Aug 1 00:10:53 BigChief nginx: 2023/08/01 00:10:53 [error] 7226#7226: nchan: Out of shared memory while allocating message of size 15979. Increase nchan_max_reserved_memory.
      Aug 1 00:10:53 BigChief nginx: 2023/08/01 00:10:53 [error] 7226#7226: *895289 nchan: error publishing message (HTTP status code 500), client: unix:, server: , request: "POST /pub/disks?buffer_length=1 HTTP/1.1", host: "localhost"
      Aug 1 00:10:53 BigChief nginx: 2023/08/01 00:10:53 [error] 7226#7226: MEMSTORE:00: can't create shared message for channel /disks
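The third log line names the knob itself. As a hedged sketch only: the directive comes straight from the nchan error message, but the appropriate size is a guess, and on Unraid the nginx config lives on the RAM-backed filesystem, so a hand edit would not survive a reboot:

```nginx
# Sketch (assumption): raise nchan's shared-memory reservation in the
# relevant nginx config so the /disks channel stops running out of memory.
nchan_max_reserved_memory 64M;
```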
  16. I have another EXTRA_FILE question. I'd happily delete the files below, but I can't seem to find them using the container's console. Any tips?
      Technical information
      =====================
      The following list covers which files have failed the integrity check. Please read the previous linked documentation to learn more about the errors and how to fix them.
      Results
      =======
      - core
        - EXTRA_FILE
          - core/js/tests/specs/appsSpec.js
          - dist/files_trashbin-files_trashbin.js
          - dist/files_trashbin-files_trashbin.js.LICENSE.txt
          - dist/files_trashbin-files_trashbin.js.map
          - lib/private/Files/ObjectStore/NoopScanner.php
          - lib/private/Updater/ChangesResult.php
          - lib/public/WorkflowEngine/IEntityCompat.php
          - lib/public/WorkflowEngine/IOperationCompat.php
      Raw output
      ==========
      Array ( [core] => Array ( [EXTRA_FILE] => Array (
        [core/js/tests/specs/appsSpec.js] => Array ( [expected] => [current] => cf1ff76b5129943a1ffd6ea068ce6e8bc277718b9d0c81dccce47d723e1f290be20f197b6543f17f3b2ac78d8d4986354db4103de2b5e31c76e00e248984605b )
        [dist/files_trashbin-files_trashbin.js] => Array ( [expected] => [current] => 24e537aff151f18ae18af31152bcfd7de9c96f0f6fdcca4c1ad975ece80bb35a2ab7d51c257af6a9762728d7688c9ba37a5359a950eb1e9f401d4b9d875d92b2 )
        [dist/files_trashbin-files_trashbin.js.LICENSE.txt] => Array ( [expected] => [current] => 2e40e4786aa1f3a96022164e12a5868e0c6a482e89b3642e1d5eea6502725061e2077786c6cd118905e499b56b2fc58e4efc34d6810ff96a56c54bf990790975 )
        [dist/files_trashbin-files_trashbin.js.map] => Array ( [expected] => [current] => 65c2a7ddc654364d8884aaee8af4e506da87e54ae82f011989d6b96a625b0dbfba14be6d6af545fa074a23ccf2cc29043a411dc3ac1f80a24955c8a9faa28754 )
        [lib/private/Files/ObjectStore/NoopScanner.php] => Array ( [expected] => [current] => 62d6c5360faf2c7fca90eaafa0e92f85502794a42a5993e2fe032c58b4089947773e588ad80250def78f268499e0e1d9b6b05bc8237cc946469cd6f1fb0b590c )
        [lib/private/Updater/ChangesResult.php] => Array ( [expected] => [current] => d2e964099dfd4c6d49ae8fc2c6cbc7d230d4f5c19f80804d5d28df5d5c65786a37ea6f554deacecad679d39dbba0d6bd6e4edadca292238f45f18f080277dad0 )
        [lib/public/WorkflowEngine/IEntityCompat.php] => Array ( [expected] => [current] => ea1856748e5fcf8a901597871f1708bdf28db69d6fa8771c70f38212f028b2ff744b04019230721c64b484675982d495a2c96d1175453130b4563c6a61942213 )
        [lib/public/WorkflowEngine/IOperationCompat.php] => Array ( [expected] => [current] => 6c09c15e9d855343cc33a682b95a827546fa56c20cc6a249774f7b11f75486159ebfe740ffcba2c9fa9342ab115e7bf99b8c25f71bb490c5d844da251b9751ed )
      ) ) )
  17. I have a couple of Windows machines with old archives of photos, hundreds of gigabytes. I'd like to copy these into Nextcloud. I have my usual Unraid user account with read/write access to the Nextcloud share, and I can SMB in. But when I go to copy a file into the [user]\files folder, I get a permissions error. Any help? I've created a new share and connected it via external storage, and that would work fine. But I'd rather my photos get folded in natively (and then run occ files:scan [user]). Thoughts?
      * edit -- I just changed the permissions for the user folder on the Nextcloud share, using Dynamix to change group and other to read/write. It works fine now. Hopefully somebody will pipe up if that was dumb (e.g., if I created some security vulnerability).
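An alternative to opening up group/other permissions is to copy first and then hand ownership back to the user Nextcloud runs as. A hedged sketch only: the container name, the in-container account (abc on linuxserver.io images), and the occ path are all assumptions; the occ files:scan step is the one the post itself mentions:

```shell
# Sketch (assumptions: linuxserver.io Nextcloud container named
# "nextcloud", data mounted at /data, occ at /config/www/nextcloud/occ).
# 1) Give the copied photos back to the app user:
docker exec nextcloud chown -R abc:abc "/data/[user]/files"
# 2) Make Nextcloud index the files it didn't upload itself:
docker exec -u abc nextcloud php /config/www/nextcloud/occ files:scan [user]
```

This keeps the share's permissions tight instead of widening them for everyone.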
  18. For the record, I just went ahead and did it. Had to go through the setup prompts again, but I just pointed it at the existing database and most of it came up fine. Had to re-register with the Filerun site. It maintained all my custom configurations.
  19. I got Filerun working using the instructions here. I now have everything set up, and after several hours of tinkering, everything is working brilliantly. First time rolling my own instead of using Community Applications. It occurred to me to update the favicon, and I went looking for the appdata folder. Unfortunately, I realize now that I failed to set one (the guide I used was, evidently, based on this one, but didn't include everything). Is there any way to add that path now and copy the contents over without breaking everything? If I just let it be, is my Docker image going to start growing? That strikes me as unlikely, but I'm wondering whether I should be concerned and whether there's a way to remedy the issue. I should have set a path for /mnt/cache/appdata/filerun/web : /var/www/html. Am I screwed now?
  20. Oh, man -- thanks for that feedback, but that kind of sucks! If it's in my array and the whole thing powers on after a power failure/UPS exhaustion, and a disk doesn't start, then things start breaking!
  21. Anyone ever use one of these successfully? Looking to buy a D800S myself. As noted, the Asmedia 1164 appears in the thread noting compatible controllers. But the comments above have me hesitating.
  22. Maybe I'll just manually move the movies I already have to the share and keep Sab and Radarr working on the current array as I have it presently. I'll need to be sure there's additional space on the remote server in case Radarr upgrades one of the old movies it is monitoring. I wonder if there's a way to mass-move movies in Radarr. Is this how the big guys do it? Am I thinking about this all wrong?
  23. I'm running out of room in my box for more hard drives. I'd like to spin up a second Unraid instance on an old machine I already have and throw in some old drives I already have. I will need to learn how to set up Sonarr/Radarr to work with multiple folders, but the bigger question for now is how I manage the time required to transfer files. Presently, I have /movies cached (Yes), such that as soon as Sab pulls the movie down, Radarr moves it from NVMe to NVMe, which is very fast. If I have to wait for Radarr to move a 10 GB file over the LAN, the family will be waiting a very long time before it's imported/ready on movie night. It would be better if I could somehow cache the remote movies folder and let mover move the file at 3 am, like it presently works. Is that possible? Alternatively, is there any way to just add the remote share to my current array, or at least add the storage space?
  24. LOL - so I finally get everything up and running on Unraid, including nzbget, and I see this at the linuxserver.io git change log: Surprised this isn't announced here. Should I make the switch to sab?