Everything posted by BurntOC

  1. Thank you for taking a look. I hadn't noticed that. Of my two servers, this is the only one that has a UA SSD mounted for VMs and containers. On the Main screen and in the dash it always shows no size, 0 used, and 0 available. Is that uncommon? If it's normal, is there a way to raise visibility of this? Within the next 30-60 days I'm planning on replacing this with a 1TB+ drive, and I'm hoping it is easy to swap out. Until then, I found a 75GB "base" VM image I will move off into a backup folder on the array, which should help a lot if that's the issue.
  2. I've had this occur twice recently. Last time I deleted the Docker image and recreated it, and things began working again, but clearly something is wrong. My searches suggest it may be a cache drive issue or Docker image corruption, but I don't know how to verify either. This server and my other Unraid server both otherwise seem to be running fine, though I will note they're on the newest beta, and when I stop the array or stop services the machines occasionally become unresponsive. When that happens I have to press the hardware power button, and about 80% of the time they will cleanly power off. I appreciate any help getting to the bottom of this. unraid1-diagnostics-20201211-0637.zip
  3. Update: I saw an update in Unraid, applied it, and the container wouldn't start. Based on the log below, I realized it was probably failing because my password had a % character in it. I changed my password to remove that character, updated config.ini, and it works now.

     Initiating Locast2Plex v0.6.1
     Opening and Verifying Configuration File. /app/config/config.ini
     Loading Configuration File: /app/config/config.ini
     Traceback (most recent call last):
       File "/app/main.py", line 49, in <module>
         config = get_config(script_dir, opersystem, args)
       File "/app/lib/user_config.py", line 10, in get_config
         return UserConfig(script_dir, opersystem, args).data
       File "/app/lib/user_config.py", line 52, in __init__
         self.import_config()
       File "/app/lib/user_config.py", line 75, in import_config
         for (each_key, each_val) in self.config_handler.items(each_section):
       File "/usr/local/lib/python3.8/configparser.py", line 859, in items
         return [(option, value_getter(option)) for option in orig_keys]
       File "/usr/local/lib/python3.8/configparser.py", line 859, in <listcomp>
         return [(option, value_getter(option)) for option in orig_keys]
       File "/usr/local/lib/python3.8/configparser.py", line 855, in <lambda>
         value_getter = lambda option: self._interpolation.before_get(self,
       File "/usr/local/lib/python3.8/configparser.py", line 395, in before_get
         self._interpolate_some(parser, option, L, value, section, defaults, 1)
       File "/usr/local/lib/python3.8/configparser.py", line 442, in _interpolate_some
         raise InterpolationSyntaxError(
     configparser.InterpolationSyntaxError: '%' must be followed by '%' or '(', found: '%restofpassword
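     For anyone else who hits this: Python's configparser applies "interpolation" to values by default, so a bare % in any value (like a password) blows up when the value is read, not when the file is parsed. A minimal sketch of the failure and the two fixes I'm aware of (escaping the %, or disabling interpolation, which would be a change to the l2p code rather than the config):

     import configparser

     raw = "[main]\npassword = abc%def\n"

     parser = configparser.ConfigParser()
     parser.read_string(raw)                 # parsing itself succeeds...
     try:
         parser.get("main", "password")      # ...interpolation runs here and fails
     except configparser.InterpolationSyntaxError as e:
         print(e)  # '%' must be followed by '%' or '(', found: '%def'

     # Fix 1: escape the literal percent sign as %% in config.ini
     escaped = configparser.ConfigParser()
     escaped.read_string("[main]\npassword = abc%%def\n")
     print(escaped.get("main", "password"))  # abc%def

     # Fix 2 (a code change, not a config change): disable interpolation
     plain = configparser.ConfigParser(interpolation=None)
     plain.read_string(raw)
     print(plain.get("main", "password"))    # abc%def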
  4. Hey all, I'll start off by saying that if there is a better place to get an answer for this, please let me know and I'll post there. This is a Duplicacy container. There's no dedicated support thread, and the general template thread serves so many varying situations that it's challenging to identify real trends. I've had the Duplicacy containers running on my 2 Unraid servers for several weeks. Uptime for both is 20+ days, and that goes back to some manual reboots on my part. Both servers have been running great in the bridge I created for their VLAN. Today I was on the dashboard and noticed Unraid1's Duplicacy container was stopped. Weird, as I didn't stop it. All others are running fine, and Unraid2 is just great. I tried starting it and it stopped immediately. Here's what the log reports:

     Can't start the web server: listen tcp 192.168.60.200:443: bind: cannot assign requested address

     There's nothing else on that custom bridge and no conflict showing. Just to test, I tried changing the IP; no go. I tried changing the network to host, bridge, etc. All of those save fine, but the container still fails to start. I've not touched anything on my Unraid server.

     EDIT: I realized it might be worth noting that I have port 3875 on this as the container port. Same as the other one, and it's been that way from the beginning, so I'm not sure why it is referencing 443 off the top of my head, either.

     EDIT2: I compared the settings.json file on my other Duplicacy instance and noted that it did not have entries for https and domain as this one did. Even though the templates looked basically identical, there were differences here in settings.json. I edited those lines out and was able to recover my instance, so both are working now.
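     For posterity, that "cannot assign requested address" error is EADDRNOTAVAIL: the process is trying to bind a listener to an IP that isn't assigned to any interface in its network namespace, which is why switching the container off the custom bridge made 192.168.60.200 unbindable. A minimal Python sketch reproducing the same class of error (the IP is the one from my log; on a host that actually owns that address the bind would succeed, and binding port 443 may also need root):

     import errno
     import socket

     sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
     try:
         sock.bind(("192.168.60.200", 443))  # fails unless this IP is local
     except OSError as e:
         if e.errno == errno.EADDRNOTAVAIL:
             print("address not assigned to this host:", e)
     finally:
         sock.close()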
  5. Not so fast, says Plex..... So I set up a recording for this afternoon and I accessed Plex at dinner to watch it. There's no recording, just a triangle next to it in the "DVR Scheduler". Two shows for tomorrow show as scheduled, but I expect they'd fail. When I try to watch live TV, I get a dialog box: "Playback Error: Could not tune channel. Please check your tuner or antenna." Looks like it could be something with l2p. I see this in the logs for the l2p container:

     192.168.70.20 - - [12/Nov/2020 04:20:59] "GET /watch/1075?X-Plex-Token=redacted HTTP/1.1" 200 -
     ----------------------------------------
     Exception happened during processing of request from ('192.168.70.20', 56744)
     Traceback (most recent call last):
       File "/usr/lib/python3.8/socketserver.py", line 316, in _handle_request_noblock
         self.process_request(request, client_address)
       File "/usr/lib/python3.8/socketserver.py", line 347, in process_request
         self.finish_request(request, client_address)
       File "/usr/lib/python3.8/socketserver.py", line 360, in finish_request
         self.RequestHandlerClass(request, client_address, self)
       File "/usr/lib/python3.8/socketserver.py", line 720, in __init__
         self.handle()
       File "/usr/lib/python3.8/http/server.py", line 427, in handle
         self.handle_one_request()
       File "/usr/lib/python3.8/http/server.py", line 415, in handle_one_request
         method()
       File "/app/main.py", line 127, in do_GET
         ffmpeg_proc = subprocess.Popen(["ffmpeg", "-i", channelUri, "-codec", "copy", "-f", "mpegts", "pipe:1"], stdout=subprocess.PIPE)
       File "/usr/lib/python3.8/subprocess.py", line 854, in __init__
         self._execute_child(args, executable, preexec_fn, close_fds,
       File "/usr/lib/python3.8/subprocess.py", line 1637, in _execute_child
         self.pid = _posixsubprocess.fork_exec(
     TypeError: expected str, bytes or os.PathLike object, not bool
     ----------------------------------------
     192.168.70.20 - - [12/Nov/2020 04:21:00] "GET /discover.json HTTP/1.0" 200 -
     192.168.70.20 - - [12/Nov/2020 04:21:00] "GET /lineup_status.json HTTP/1.0" 200 -

     UPDATE: So I found others with similar issues, and one of the investigation steps was to go to http://plex_accessible_ip:plex_accessible_port in a browser. This showed the IP:port:port. I removed the port from my config.ini (which may have been key to getting the tuner into Plex to begin with), restarted the container, and now I can watch live TV - on about half my local HD channels. I'm not sure why this would be the case unless there is some sort of transcoding issue going on, as almost all my stuff is Direct Play, but at least it's progress. Baby steps....

     UPDATE2: Back in Plex again I noticed a message about a new tuner detected, a SiliconDust Locast device. I figured wth, I'll go through DVR setup again. It seems like more channels are playing nice now (and it still only shows the SiliconDust Locast device), so maybe it's just temperamental.
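     Side note on that TypeError: it means channelUri was a bool (presumably False from a failed stream URL lookup) by the time it reached subprocess.Popen, so ffmpeg never even launched. This isn't the container's actual code, just a hypothetical guard sketching the shape of the fix:

     import subprocess

     def start_stream(channel_uri):
         # l2p passed a bool here when the stream lookup failed; failing fast
         # with a clear message beats a TypeError deep inside subprocess.
         if not isinstance(channel_uri, str) or not channel_uri:
             raise ValueError(f"no stream URL resolved for channel: {channel_uri!r}")
         return subprocess.Popen(
             ["ffmpeg", "-i", channel_uri, "-codec", "copy", "-f", "mpegts", "pipe:1"],
             stdout=subprocess.PIPE,
         )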
  6. I think you're right. I did delete the uuid in config.ini, and I also added the port to the ip entry so that I had 192.168.100.15:6077 vs. just the IP, even though I'm using the default port. I'm not sure if that helped, but I went back into Plex, re-added the tuner, and for the LA Broadcast I just selected the 12 or so HD channels I cared about, and it finished successfully.
  7. Added it into config.ini, and I can see from the logs that it switches from ip-based location finding just fine. Unfortunately I still can't save the DVR in Plex. I've tried LA broadcast OTA, which is what it selects first, and also Locast LA OTA, and both error out.
  8. Looks like I'm really close: no errors on container start, and the tuner shows up in Plex and gets through the setup process, including scanning and returning channels. But when I try to save my channel list it gives me: "There was a problem saving your DVR. Please try again." Maybe I need the zip override. Can you tell me where to add this - config.ini or Docker variables?
  9. The port was correct, and though I'd tested http vs. https earlier with no effect (of course, because port isolation was probably blocking it in any case), I just tried switching it to https and it works. I have had Hass pulling a cert with the LetsEncrypt addon, and I had it set for access via HTTPS. I'm tempted to leave it this way for now. As I understand it, I'm doing SSL to Swag, but Swag does HTTP to the proxied hosts in most cases per the template default, right? Normally there would be some risk of something else on the same subnet sniffing the unencrypted traffic, but in this case I'm doing SSL to Swag and then also to the proxied server, so the full path is encrypted, right? If not, I will leave these other connections be, as I was going to look into using HTTPS with them as well.
  10. So I verified that I had port isolation enabled on both the Unifi switch port connected to that Unraid network and the port the Pi is connected to. Disabling it on the Pi's port allowed Swag to ping the Pi, but I am still getting the Nginx gateway error. The isolation observation and the lack of entries in the logs confirm this traffic is transiting switch port to switch port without the firewall seeing it, but it's even more puzzling as to why it still isn't working...
  11. Fair observation. I thought about including it originally, but if the connectivity is there, it seemed like this would be some well-known trick that I just didn't know about. To that point, your question is a great one, and I believed the answer was "Yes, I've tested it." If so, I'd have been wrong: checking right now, it is not getting a response. I'm up to 15 other devices that are working just fine across the other 2 situations I included in my initial post. Since it is working for other servers in that same domain, it would seem the traffic should have no problem getting from my Unraid server to the firewall headed for the Pi, but clearly I do. Here's my proxy conf, in any event (I use hassio.mydomain.me and the device is on 192.168.60.4 in this example):

     server {
         listen 443 ssl;
         listen [::]:443 ssl;

         server_name hassio.*;

         include /config/nginx/ssl.conf;

         client_max_body_size 0;

         # enable for ldap auth, fill in ldap details in ldap.conf
         #include /config/nginx/ldap.conf;

         # enable for Authelia
         #include /config/nginx/authelia-server.conf;

         location / {
             # enable the next two lines for http auth
             #auth_basic "Restricted";
             #auth_basic_user_file /config/nginx/.htpasswd;

             # enable the next two lines for ldap auth
             #auth_request /auth;
             #error_page 401 =200 /ldaplogin;

             # enable for Authelia
             #include /config/nginx/authelia-location.conf;

             include /config/nginx/proxy.conf;
             resolver 127.0.0.11 valid=30s;
             # set $upstream_app homeassistant;
             set $upstream_app 192.168.60.4;
             set $upstream_port 8123;
             set $upstream_proto http;
             proxy_pass $upstream_proto://$upstream_app:$upstream_port;
         }
     }
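     Since the conf itself looks sane, one sanity check I've been using is to test reachability from inside the Swag container itself (via docker exec), independent of nginx. Just a sketch, using the example IP/port above, and assuming Python is available in the container:

     import urllib.request

     # Run from inside the Swag container's network namespace; if this fails,
     # the 502 is a connectivity problem, not an nginx config problem.
     try:
         with urllib.request.urlopen("http://192.168.60.4:8123", timeout=5) as resp:
             print("upstream reachable, HTTP", resp.status)
     except OSError as e:
         print("upstream unreachable:", e)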
  12. So I've gotten it all operating fine - EXCEPT Home Assistant Supervised on my Pi. I still get the 502 Gateway error, and I don't see it even trying to proxy requests to the Pi. I know there are some pointers to ensure the Hass instance accepts the proxy, but why the heck would it not even be forwarding the proxy requests like it does for the other dozen servers and containers I'm running just fine?
  13. I just got this container set up yesterday morning and man, it is so great. I have a handful of my containers proxied already, but I've hit a snag that I can't figure out and I'm hoping someone can help. The 5-7 other servers I'm trying to proxy fall into 3 categories:

     1. On this Unraid server, but assigned to a different network and VLAN
     2. On my other Unraid server
     3. On a Raspberry Pi

     I'm hoping that some editing of the configuration files, plus my OPNsense firewall rules, will solve items #2 and #3. I'm wondering if #1 acts differently, though, and if so, how I'm supposed to make those proxyable by Swag. Here are a few more details; if you can help guide me, I would appreciate it.

     Swag and currently proxied containers - br1.60, network 192.168.60.0/24
     Non-working containers - br0.20, network 192.168.20.0/24

     Thanks for any help.

     UPDATE: Getting item #1 handled turned out to be easier than I feared: by using an IP for the upstream app, it worked great. I thought #3 would then be easy, but it's not working. Specifically, I'm trying to get to Hass (Home Assistant Supervised) on the Pi, which is available when I hit it directly via its 192.168.60.X:8123 address. If I try to hit it via the proxy, I get the Nginx default page, and I don't see any traffic trying to proxy from the Swag IP to the Hass server. It's like it isn't recognizing hass.mydomain.me, even though I edited the conf to reflect the subdomain name of hass and the app IP. Any ideas as to what could be up there?
  14. I have many of the same errors others are reporting after installing this plugin. While I appreciate the efforts of the original dev, and I recognize it is marked alpha/beta, when such potentially risky issues are identified it seems like it should also be flagged with a warning in some way. I'd have been fine using the userscript version, but I wasn't noticing these errors on my headless rig. Anyway, I've uninstalled it for now, and I hope maybe it will get fixed someday. EDIT: Feeling more strongly about this now that I see it marked beta....
  15. I appreciate anyone reading this, and I'll try to be concise. I have 2 Unraid servers on the latest beta, and both have been running pretty close to perfectly. I've had the Wireguard app installed for a long time, but I just redid my network a bit and changed some IP ranges, so I wanted to try it again, as I've never gotten it working beyond accessing a portion of my network remotely. I was deleting and re-adding a peer to test each of them this weekend, and after clicking Done it showed the "processing wave" image, and I was never able to get into the GUI on either. I've tried SSH, which gets rejected, and I've tried all 4 boot options (regular, GUI, safe no GUI, safe with GUI), and none allow me to access the systems. I see them boot, with no obvious errors, but I never get local console access. They do respond to ping, and pushing power starts the shutdown sequence okay. Any ideas? I'm guessing I might need to pull the USB on each and edit something, but I don't know what I would edit to minimize the risk of screwing up my array info. P.S. I can't imagine what this would have to do with adding WG peers, but I wanted to mention all the info and that it happened exactly the same on 2 different servers.
  16. Man, this is awesome. Any chance you or someone else here is willing to maintain this at least semi-officially over time? I'm using Duplicacy, which is great, but I have a GS license, and I feel more comfortable with the flexibility of GS (the Duplicacy web GUI I'm using is sparse) and with being able to access my backup files outside of the app (vs. snapshots) in a crisis. (I'm really using it for backup, not actual sync.)
  17. I didn't think it was a bug, but it reported one in the log and I was prepared to be wrong. Regarding the syslog - got it. I just wish that had been emphasized in addition to, or in lieu of, the diagnostics file from the beginning, and I would've made it happen. The impression I had was that providing this detail and the diagnostics file was the way to get help, with a syslog requested if more info was needed.
  18. Changed Status to Solved. Final update: 2 threads (this and the original support request), 6 days here with no response. I do really, really like Unraid and I don't regret buying a license. Kinda regretting buying the 2nd one now, though, but maybe I'll luck out, not run into any major issues, and the point will be moot. Hope so. In any case, I want to add this update for posterity. Current uptime is 3 days, 17+ hours, well beyond what it had been as of late. I believe the solution was that I pulled the 1 stick of RAM that didn't match the others out of the system. Even though the one run of memtest had passed, maybe subsequent ones would have failed; the stick has performed perfectly fine in other multifunction server installs, though. In the config I had here, it just seems it was enough of a mismatch to cause an error.
  19. Final update: 2 threads (this and a BUG report), 12 days posted between the 2 of them, both with diagnostics.zip files and the other info requested - 1 response pointing me to the FAQ. EDIT - snipped my comments about support and process frustration /EDIT Current uptime is 3 days, 17+ hours, well beyond what it had been as of late. I believe the solution was that I pulled the 1 stick of RAM that didn't match the others out of the system. Even though the one run of memtest had passed, maybe subsequent ones would have failed; the stick has performed perfectly fine in other multifunction server installs, though. In the config I had here, it just seems it was enough of a mismatch to cause an error. I'll only update this if it looks like it WASN'T the RAM, but for now I'm considering it case closed.
  20. First of all - thanks for your response. I posted this and a BUG report, as the diagnostics file references a BUG as well, and 3 days later this is the first response I've received from anyone else. Setting up syslog to capture this is definitely in my plans after I knock out a couple of other things, but I thought the diagnostics.zip was considered the key starting point. As best I can tell no one's even looked at that, so I don't know that sending a zipped log from my syslog server will help, as I'll still need assistance understanding what it indicates with respect to Unraid and the crashes.
  21. So it ran for about 40 hours with the GTX out of the server, and I thought it might be working without it, at least. Then I checked it this afternoon and it is completely non-responsive, and I had to hard power off. So basically I can't run any VMs on Unraid without it crashing at some point, which sucks royally. Here's a screenshot of the local console, if that adds any value to the analysis.
  22. So it ran for about 40 hours with the GTX out of the server, and I thought it might be working without it, at least. Then I checked it this afternoon and it is completely non-responsive, and I had to hard power off. This was showing on the screen, in case it is of any help.
  23. So, another update. It's been running for 29 hours since I completed the memtest, and all seems well in its current state with the GTX out of the chassis. If I pop it in, like I said, it runs passed through to the VM without issue, but I imagine I'll start seeing the crashes again. I'd really hoped someone here could interpret the diagnostics log well enough to figure out what's up. Maybe it's a bug; there is a line referencing a BUG in the log, so maybe it just doesn't handle this combo well. I've opened a bug report as well. Fingers crossed this thing stabilizes when I pop the GPU back in tomorrow.
  24. I'd created a post in the support forum, but after doing more searching, maybe I'm actually seeing a bug here. There is some background info there, but basically I'd been running rock solid for weeks with no problems. Then I started trying to use a Windows VM converted from a physical machine and pass my GTX 1050 Ti through. While that all appeared to work fine as well, my Unraid server now becomes non-responsive. Sometimes this happens late in the day, sometimes the next morning, but it is pretty consistent: I can't access the web GUI or SSH, and usually not even the local display. I've run at least one pass of memtest and had 0 errors in that run as well. I've attached a diagnostics zip here, but here's what caught my eye today: unraid-diagnostics-20200830-0953.zip
  25. UPDATE: Memtest completed the first pass with no errors. Maybe I could run it overnight to see if something else crops up, but I'd have to think that's unlikely. I was a bit off about the memory config I have in it, though. Turns out it looks like this:

     Slot 0: 4096 MB DDR3-1600 Micron 1G6E1
     Slot 1: 8192 MB DDR3-1600 Patriot Memory
     Slot 2: 4096 MB DDR3-1600 Micron 1G6E1
     Slot 3: 4096 MB DDR3-1600 Micron 1G6E1

     I'll get a nice set of matched speed and capacity RAM when I buy my new rig soon, but I was hoping to limp along for a bit. If there's another config that looks more stable, let me know - I can probably get away with less for now. I have 3-4 containers running, and I was allotting 8GB to the VM when it was running, though.