DBJordan

Everything posted by DBJordan

  1. Not sure if the straight binhex-privoxyvpn container is different from the binhex-qbittorrentvpn, but on the latter the logs will have a message that looks like this:

     [info] Successfully assigned and bound incoming port '####'
  2. Can you navigate using your browser address bar to http://yourserver:9595? If that works but you can't get there by clicking the web UI link from the unraid docker page, try updating the advanced docker settings for that container to explicitly point to http://yourserver:9595 (the default is something like http://[IP]:[PORT]). See the sketch below.
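     In template terms, the change amounts to swapping the placeholder WebUI value for a literal URL. A sketch only -- the port 9595 comes from the post above, and the placeholder form mirrors the other templates in this thread:

     <!-- before: placeholder form that wasn't resolving for me -->
     <WebUI>http://[IP]:[PORT:9595]/</WebUI>

     <!-- after: hard-coded -->
     <WebUI>http://yourserver:9595/</WebUI>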
  3. Is it possible qbittorrent's ability to handle dropped packets is more robust than some of the other applications we're using? qbittorrentvpn seems to run qbittorrent OK for me, but even after connecting to that Amsterdam endpoint I'm getting intermittent failures for other programs. From within the qbittorrentvpn container:

     sh-5.1# while [ true ]
     > do
     > curl ifconfig.me
     > echo ""
     > done
     181.x.x.x
     curl: (6) Could not resolve host: ifconfig.me
     181.x.x.x
     curl: (6) Could not resolve host: ifconfig.me
     181.x.x.x
     181.x.x.x
     181.x.x.x
     curl: (6) Could not resolve host: ifconfig.me
     curl: (6) Could not resolve host: ifconfig.me

     Edit: might also be that only PIA's DNS lookup is wonky right now, which might also explain some of what we're seeing.

     Edit #2: looks like changing the NAME_SERVERS to 1.0.0.1,8.8.8.8 per posts later in this thread did the trick. Thanks!
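     For anyone following along: the Edit #2 fix is just the container's NAME_SERVERS variable. Roughly this, using the same Config format as the template in post 5 below -- the attributes other than the value are my best guess:

     <Config Name="NAME_SERVERS" Target="NAME_SERVERS" Default="" Mode="" Description="" Type="Variable" Display="always" Required="false" Mask="false">1.0.0.1,8.8.8.8</Config>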
  4. Also using PIA. I think something is amiss at PIA. I've tried OpenVPN, Wireguard, and a couple different servers. Packets are getting dropped left and right, DNS resolutions are hit or miss, etc. Started a day or two ago.
  5. I'm not sure I 100% understand the FAQ, but here's what works for me in 6.11.0-rc5. For the VPN (in my case, binhex-qbittorrentvpn), I have:

     <?xml version="1.0"?>
     <Container version="2">
       <Name>binhex-qbittorrentvpn</Name>
       <Repository>binhex/arch-qbittorrentvpn:latest</Repository>
       <Registry>https://registry.hub.docker.com/u/binhex/arch-qbittorrentvpn/</Registry>
       <Network>bridge</Network>
       <MyIP/>
       <Shell>sh</Shell>
       <Privileged>true</Privileged>
       ...
       <WebUI>http://[IP]:[PORT:8080]/</WebUI>
       <TemplateURL>https://raw.githubusercontent.com/binhex/docker-templates/master/binhex/qbittorrentvpn.xml</TemplateURL>
       <Icon>https://raw.githubusercontent.com/binhex/docker-templates/master/binhex/images/qbittorrent-icon.png</Icon>
       <ExtraParams/>
       <PostArgs/>
       <CPUset/>
       <DateInstalled>1663291762</DateInstalled>
       <DonateText/>
       <DonateLink/>
       <Requires/>
       ...
       <Config Name="radarr port" Target="7878" Default="" Mode="tcp" Description="" Type="Port" Display="always" Required="false" Mask="false">7878</Config>
       ...
       <Config Name="VPN_INPUT_PORTS" Target="VPN_INPUT_PORTS" Default="" Mode="" Description="" Type="Variable" Display="always" Required="false" Mask="false">7878,9117,8989,8686,5299,9696,8787,8191,5800,5900</Config>
       <Config Name="VPN_OUTPUT_PORTS" Target="VPN_OUTPUT_PORTS" Default="" Mode="" Description="" Type="Variable" Display="always" Required="false" Mask="false">7878,9117,8989,8686,5299,9696,8787,8191,5800,5900</Config>
       <Config Name="AppData Config Path" Target="/config" Default="/mnt/user/appdata/binhex-qbittorrentvpn" Mode="rw" Description="" Type="Path" Display="advanced-hide" Required="true" Mask="false">/mnt/user/appdata/binhex-qbittorrentvpn</Config>
     </Container>

     For radarr, I do have the network set to use the VPN container, and I've deleted the 7878 port references from it (except in the WebUI):

     <?xml version="1.0"?>
     <Container version="2">
       <Name>radarr</Name>
       <Repository>linuxserver/radarr:nightly</Repository>
       <Registry>https://hub.docker.com/r/linuxserver/radarr/</Registry>
       <Network>container:binhex-qbittorrentvpn</Network>
       <MyIP/>
       <Shell>sh</Shell>
       <Privileged>false</Privileged>
       <Support>https://forums.unraid.net/topic/53758-support-linuxserverio-radarr/</Support>
       <Project>https://github.com/Radarr/Radarr</Project>
       <Overview>Radarr - A fork of Sonarr to work with movies &#xE0; la Couchpotato.</Overview>
       <Category>Downloaders: MediaApp:Video</Category>
       <WebUI>http://172.16.100.100:7878</WebUI>
       <TemplateURL>https://raw.githubusercontent.com/linuxserver/docker-templates/master/linuxserver.io/radarr.xml</TemplateURL>
       <Icon>https://raw.githubusercontent.com/linuxserver/docker-templates/master/linuxserver.io/img/radarr.png</Icon>
       <ExtraParams/>
       <PostArgs/>
       <CPUset/>
       <DateInstalled>1661877324</DateInstalled>
       <DonateText/>
       <DonateLink/>
       <Requires/>
       <Config Name="Host Path 2" Target="/downloads" Default="" Mode="rw" Description="" Type="Path" Display="always" Required="true" Mask="false">/mnt/user/Saidar/_Downloads/movie/</Config>
       <Config Name="Host Path 3" Target="/movies" Default="" Mode="rw" Description="" Type="Path" Display="always" Required="true" Mask="false">/mnt/user/Saidar/Videos/Movies/</Config>
       <Config Name="Key 1" Target="PUID" Default="99" Mode="" Description="" Type="Variable" Display="always" Required="false" Mask="false">99</Config>
       <Config Name="Key 2" Target="PGID" Default="100" Mode="" Description="" Type="Variable" Display="always" Required="false" Mask="false">100</Config>
       <Config Name="binhex connector" Target="/data" Default="" Mode="rw" Description="" Type="Path" Display="always" Required="false" Mask="false">/mnt/user/Saidar/</Config>
       <Config Name="AppData Config Path" Target="/config" Default="/mnt/user/appdata/radarr" Mode="rw" Description="" Type="Path" Display="advanced-hide" Required="true" Mask="false">/mnt/user/appdata/radarr</Config>
     </Container>

     I guess I'm abandoning the internal container network or something by doing this, but it works. (The key lines are distilled just below.)
  6. Try explicitly specifying (hard-coding) the WebUI. Maybe my screenshot wasn't clear enough -- what got it working for me was changing http://[HOST]:[PORT:9117] to http://172.16.100.100:9117
  7. Try explicitly specifying the WebUI. Change <WebUI>http://[IP]:[PORT:9000]</WebUI> to <WebUI>http://192.168.198.12:9000</WebUI>
  8. Well, I have it working now, although I'm not sure it's the most elegant solution. I just updated the WebUI field to hard-code the URL instead of relying on placeholders like [HOST], [IP], and [PORT]. See screen capture.
  9. Update: I posted the solution to my issue and marked the original thread over in General as solved. Original text follows: I posted this question over in general support, but this thread might be a better place for it. Screenshots are in the other post. Anyone know what I might change in my docker configurations to get the WebUIs to work for other dockers?
  10. Thanks for the reply. I tried changing HOST to IP without success. I'll check out the container support thread to see if anyone has any ideas.
  11. Hi Squid, just updated to 6.11-rc3 and seeing the same. I can get to the web UI by specifying my IP:port in the browser address bar, but not by clicking from the docker page.
  12. Version: 6.11.0-rc3. I've successfully set up a custom network to route traffic from some of my containers through a VPN. When I try to use Unraid to navigate to a docker's web interface by clicking the docker's icon and using the drop-down list, it doesn't work: it opens a new tab that Chrome tells me is called about:blank#blocked. I have to instead enter the URL:port in the address bar manually to get to the web UI, and am hoping there's a simpler way. Any ideas on how to make the WebUI link work from the Unraid page?
  13. I just got wireguard working with PIA, but I don't seem to be able to find the list of endpoints in the logs, so I'm stuck with the default. Am I looking in the wrong place? Log attached. Thanks for any help. supervisord.log
  14. No problems so far. Thanks for the help trying to get the old hardware working. Can you mark this one solved or whatever it is you do? I'll make a new topic if anything new pops up.
  15. Hmm I'm going to try to replace the motherboard and cpu. They're getting long in the tooth, anyway. Will let you know how it goes!
  16. Still getting intermittent reboots a few times a week. Had a crash and reboot just before 0500. Auto-starting the array after an unexpected shutdown is disabled, so the logs say nothing after the reboot until I logged in to the webpage at 1123. I've noticed the automated mover exits with a return code of 1. When I run it manually, it completes with a return code of 0. Any thoughts on whether this is an indicator of why the system is subsequently rebooting? Logs attached. SyslogCatchAll-2022-07-17.txt
  17. Ooo, sneaky. I'll give that a shot and see what happens. Thanks! Update: no errors found during scrub. Will keep capturing syslogs and see what happens next. Thanks!
  18. Oh, lemme set memtest running, will let you know if anything pops up.
  19. Thanks for the help! I was able to set RAM to 1866 but couldn't find an option to handle power supply idle control or C-states. I tried this: started here and picked "CPU Configuration"; once in there, I changed C6 mode from "enabled" to "disabled." Also, btrfs scrub detected some irreparable errors in the syslog:

     2022-06-25 17:42:53 Kernel.Info 172.16.100.100 Jun 25 17:42:53 Truesource kernel: BTRFS info (device nvme0n1p1): device stats zeroed by btrfs (25391)
     2022-06-25 17:42:53 Kernel.Info 172.16.100.100 Jun 25 17:42:53 Truesource kernel: BTRFS info (device nvme0n1p1): device stats zeroed by btrfs (25391)
     2022-06-25 17:42:56 Kernel.Info 172.16.100.100 Jun 25 17:42:56 Truesource kernel: BTRFS info (device nvme0n1p1): device stats zeroed by btrfs (25402)
     2022-06-25 17:42:56 Kernel.Info 172.16.100.100 Jun 25 17:42:56 Truesource kernel: BTRFS info (device nvme0n1p1): device stats zeroed by btrfs (25402)
     2022-06-25 17:43:10 Kernel.Info 172.16.100.100 Jun 25 17:43:09 Truesource kernel: BTRFS info (device nvme0n1p1): scrub: started on devid 1
     2022-06-25 17:43:10 Kernel.Info 172.16.100.100 Jun 25 17:43:09 Truesource kernel: BTRFS info (device nvme0n1p1): scrub: started on devid 2
     2022-06-25 17:44:00 Kernel.Warning 172.16.100.100 Jun 25 17:44:00 Truesource kernel: BTRFS warning (device nvme0n1p1): checksum error at logical 932456402944 on dev /dev/nvme0n1p1, physical 222713057280, root 5, inode 6909694, offset 2324074496, length 4096, links 1 (path: PRIVATE)
     2022-06-25 17:44:00 Kernel.Error 172.16.100.100 Jun 25 17:44:00 Truesource kernel: BTRFS error (device nvme0n1p1): bdev /dev/nvme0n1p1 errs: wr 0, rd 0, flush 0, corrupt 1, gen 0
     2022-06-25 17:44:00 Kernel.Error 172.16.100.100 Jun 25 17:44:00 Truesource kernel: BTRFS error (device nvme0n1p1): unable to fixup (regular) error at logical 932456402944 on dev /dev/nvme0n1p1
     2022-06-25 17:44:40 Kernel.Warning 172.16.100.100 Jun 25 17:44:40 Truesource kernel: BTRFS warning (device nvme0n1p1): checksum error at logical 1262362894336 on dev /dev/nvme0n1p1, physical 574094385152, root 5, inode 13617139, offset 347021312, length 4096, links 1 (path: PRIVATE)
     2022-06-25 17:44:40 Kernel.Error 172.16.100.100 Jun 25 17:44:40 Truesource kernel: BTRFS error (device nvme0n1p1): bdev /dev/nvme0n1p1 errs: wr 0, rd 0, flush 0, corrupt 2, gen 0
     2022-06-25 17:44:40 Kernel.Error 172.16.100.100 Jun 25 17:44:40 Truesource kernel: BTRFS error (device nvme0n1p1): unable to fixup (regular) error at logical 1262362894336 on dev /dev/nvme0n1p1
     2022-06-25 17:44:41 Kernel.Warning 172.16.100.100 Jun 25 17:44:40 Truesource kernel: BTRFS warning (device nvme0n1p1): checksum error at logical 1280042659840 on dev /dev/nvme0n1p1, physical 591774150656, root 5, inode 14185170, offset 340320256, length 4096, links 1 (path: PRIVATE)
     2022-06-25 17:44:41 Kernel.Error 172.16.100.100 Jun 25 17:44:40 Truesource kernel: BTRFS error (device nvme0n1p1): bdev /dev/nvme0n1p1 errs: wr 0, rd 0, flush 0, corrupt 3, gen 0
     2022-06-25 17:44:41 Kernel.Error 172.16.100.100 Jun 25 17:44:40 Truesource kernel: BTRFS error (device nvme0n1p1): unable to fixup (regular) error at logical 1280042659840 on dev /dev/nvme0n1p1
     2022-06-25 17:44:41 Kernel.Info 172.16.100.100 Jun 25 17:44:40 Truesource kernel: BTRFS info (device nvme0n1p1): scrub: finished on devid 1 with status: 0
     2022-06-25 17:46:02 Kernel.Warning 172.16.100.100 Jun 25 17:46:02 Truesource kernel: BTRFS warning (device nvme0n1p1): checksum error at logical 932456402944 on dev /dev/nvme1n1p1, physical 222692085760, root 5, inode 6909694, offset 2324074496, length 4096, links 1 (path: PRIVATE)
     2022-06-25 17:46:02 Kernel.Error 172.16.100.100 Jun 25 17:46:02 Truesource kernel: BTRFS error (device nvme0n1p1): bdev /dev/nvme1n1p1 errs: wr 0, rd 0, flush 0, corrupt 1, gen 0
     2022-06-25 17:46:02 Kernel.Error 172.16.100.100 Jun 25 17:46:02 Truesource kernel: BTRFS error (device nvme0n1p1): unable to fixup (regular) error at logical 932456402944 on dev /dev/nvme1n1p1
     2022-06-25 17:48:19 Kernel.Warning 172.16.100.100 Jun 25 17:48:19 Truesource kernel: BTRFS warning (device nvme0n1p1): checksum error at logical 1262362894336 on dev /dev/nvme1n1p1, physical 574073413632, root 5, inode 13617139, offset 347021312, length 4096, links 1 (path: PRIVATE)
     2022-06-25 17:48:19 Kernel.Error 172.16.100.100 Jun 25 17:48:19 Truesource kernel: BTRFS error (device nvme0n1p1): bdev /dev/nvme1n1p1 errs: wr 0, rd 0, flush 0, corrupt 2, gen 0
     2022-06-25 17:48:19 Kernel.Error 172.16.100.100 Jun 25 17:48:19 Truesource kernel: BTRFS error (device nvme0n1p1): unable to fixup (regular) error at logical 1262362894336 on dev /dev/nvme1n1p1
     2022-06-25 17:48:22 Kernel.Warning 172.16.100.100 Jun 25 17:48:21 Truesource kernel: BTRFS warning (device nvme0n1p1): checksum error at logical 1280042659840 on dev /dev/nvme1n1p1, physical 591753179136, root 5, inode 14185170, offset 340320256, length 4096, links 1 (path: PRIVATE)
     2022-06-25 17:48:22 Kernel.Error 172.16.100.100 Jun 25 17:48:21 Truesource kernel: BTRFS error (device nvme0n1p1): bdev /dev/nvme1n1p1 errs: wr 0, rd 0, flush 0, corrupt 3, gen 0
     2022-06-25 17:48:22 Kernel.Error 172.16.100.100 Jun 25 17:48:21 Truesource kernel: BTRFS error (device nvme0n1p1): unable to fixup (regular) error at logical 1280042659840 on dev /dev/nvme1n1p1
     2022-06-25 17:48:22 Kernel.Info 172.16.100.100 Jun 25 17:48:22 Truesource kernel: BTRFS info (device nvme0n1p1): scrub: finished on devid 2 with status: 0
  20. It didn't hard crash yet, but after the parity check completed it did take the array offline for some reason. I started the array back up. Here are the logs at this point. I'll post again once I see it go completely unresponsive. SyslogCatchAll-2022-06-21.txt SyslogCatchAll-2022-06-22.txt
  21. Good idea. I've set it up and started the array. It usually takes a few days, so I'll post once I see a freeze. Thanks!
  22. Unraid freezes multiple times a week and I have to cold reboot it. (This isn't just the http server going down -- it won't respond to key presses from a keyboard directly connected to the server.) This configuration used to work for months at a time, but it seems something has gone wrong. I've run memtest86 overnight with no findings. I'm not sure what else to try. Any ideas? truesource-diagnostics-20220620-1353.zip
  23. Thank you for your quick response. Cannot ssh in -- just like the other protocols, it times out. I'm not worried about external compromise -- I'm using OpenVPN to permit external access while blocking malicious actors. I'm thinking my only option at this point is to go to where the box is and issue a reboot from the local keyboard. I can run the diagnostics before doing so and post them here in case anyone finds them useful. Will be a few days -- I won't be home for a few days yet.
  24. All of my server's dockers are running fine. I can connect to them and send and receive data. However, when I try to load the Unraid home page in a browser, the request times out. I can't ping the server. I can't ssh into it. I am not physically co-located with the hardware, so I can't use a keyboard to type in the commands to restart Unraid's core components (not that I know how to off the top of my head!). Anyone have any ideas on how I can get Unraid's web interface back?
  25. For some odd reason, qdirstat isn't able to delete anything. I keep getting this error when I try (this is from the console, but it's the same error message the GUI shows):

     /storage/Saidar # rm syslog.log
     rm: remove 'syslog.log'? yes
     rm: can't remove 'syslog.log': Read-only file system

     This is the only Docker I've got that has a problem creating and removing files. Any thoughts?
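     My guess is the template has the storage path mapped read-only; if so, flipping that path's Mode from "ro" to "rw" should fix it. A sketch of the template line, in the same Config format as the templates above -- the Name and host path here are hypothetical, not copied from my qdirstat template:

     <Config Name="Host Path 1" Target="/storage" Default="" Mode="rw" Description="" Type="Path" Display="always" Required="true" Mask="false">/mnt/user/Saidar/</Config>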