Chandler
Everything posted by Chandler
-
Thanks! Just wanted to make sure there wasn't anything else I missed.
-
Last night I noticed issues with my server: all my shares were missing, and my Docker containers appeared to be running even though they weren't. I checked the logs and found this:

Oct 13 19:09:01 Tower shfs: shfs: ../lib/fuse.c:1450: unlink_node: Assertion `node->nlookup > 1' failed.
Oct 13 19:09:01 Tower rsyslogd: file '/mnt/user/Logs/syslog-192.168.1.17.log'[2] write error - see https://www.rsyslog.com/solving-rsyslog-write-errors/ for help OS error: Transport endpoint is not connected [v8.2102.0 try https://www.rsyslog.com/e/2027 ]
Oct 13 19:09:01 Tower rsyslogd: file '/mnt/user/Logs/syslog-192.168.1.17.log': open error: Transport endpoint is not connected [v8.2102.0 try https://www.rsyslog.com/e/2433 ]

I tried stopping and restarting the array, but that didn't work. Next was a full server restart, and everything appears to be back to normal. Does anyone know what caused this, or is there anything else in my diagnostics that could help?
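For anyone searching later: a minimal sketch of how to grep a saved syslog for this failure. The shfs assertion is the moment the user-share FUSE mount died; everything after it fails with "Transport endpoint is not connected". The log path is just the one from my setup.

```shell
# find_fuse_crash: grep a syslog file for the shfs FUSE assertion and the
# follow-on "Transport endpoint is not connected" errors, with line numbers.
find_fuse_crash() {
    grep -n -e 'shfs:.*Assertion' -e 'Transport endpoint is not connected' "$1"
}

# Example (path from the log above; adjust to your own setup):
# find_fuse_crash /mnt/user/Logs/syslog-192.168.1.17.log
```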
-
Seems to be a solution, though not the ideal one 😅 Is there a better solution, or do these drives just suck lol
-
Thanks for that. Silly that Seagate has this issue. I went through and completed all these steps after I rebuilt the disabled drive: EPC and low current spinup are now off on all my ST8000VN004 drives. My drives are still getting read errors though. Any more ideas? Here is a new diagnostics file if that helps. tower-diagnostics-20220107-1445.zip
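Side note, in case anyone wants to compare numbers: this is the sketch I'm using to watch the SMART counters that matter here. It just filters `smartctl -A` output; the attribute names are the standard ATA ones, so a given drive may not report all of them.

```shell
# link_vs_drive: filter a SMART attribute table (read from stdin) down to
# the counters that separate a bad connection (UDMA CRC errors) from real
# media trouble (read errors / reported uncorrectables).
link_vs_drive() {
    grep -E 'Raw_Read_Error_Rate|Reported_Uncorrect|UDMA_CRC_Error_Count'
}

# Example: smartctl -A /dev/sdb | link_vs_drive
```

A climbing UDMA_CRC_Error_Count usually points at the cable/backplane side rather than the drive itself.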
-
I believe so, I have redundant 920W PSUs. Do you see something in the diagnostics that indicates a power issue?
-
Hello, recently I have been getting read errors on a few disks. I've been using the same setup for years, slowly adding drives over time, and never had this issue before. One drive already became disabled because of it and I rebuilt it successfully; now another has become disabled for the same reason. The drives it has been happening on are mostly newer ones, but the second one to become disabled is a drive I've been using for over a year.

I've read that this is more commonly caused by a bad connection than by the drive itself, so I'm wondering where to start, or if anyone has any tips. I have a 4U Supermicro chassis and my drives are plugged into a backplane (BPN-SAS-846A), which in turn connects to what I believe are three LSI 9210-8i cards. Since this is a recent issue, could it be caused by one of the LSI cards that I'm only now starting to use more because of the new drives?
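To rule the cards in or out, I've been using a small sketch that maps each disk to the PCI path of its controller; if the erroring disks all cluster under one PCI address, that points at a single HBA. The sysfs root is a parameter only so the function is easy to test elsewhere.

```shell
# disk_controllers: print each sd* block device alongside the resolved
# sysfs path of its controller (which embeds the PCI address of the HBA).
disk_controllers() {
    root="${1:-/sys/block}"
    for d in "$root"/sd*; do
        [ -e "$d" ] || continue
        printf '%s -> %s\n' "${d##*/}" "$(readlink -f "$d/device")"
    done
}

# Example: disk_controllers
```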
-
Got it, thanks! I had found a similar post where the solution was upping the timeout from 30 to 60 seconds. Mine was at 90, so I figured it was fine. But I added three drives to the array and one to parity earlier this month, and I think that is roughly when this started happening, so it makes sense!
-
Yes, I looked there first but I didn't see anything stand out. That's why I attached them to the original post.
-
Parity checks have started happening after every reboot, which usually means there was an unclean shutdown. I can't figure out what is causing the unclean shutdowns, or whether that is even the issue. I've attached diagnostics from the flash drive.
-
What details? Yes, I understand it fails when attempting to move a duplicate. The issue is that I cannot find the duplicate. For example, the mover says it is moving /mnt/cache/folder/fileA to the array but that it already exists there. I looked through /mnt/disk#/folder/ for fileA on each disk, but it was not on any of them. From what I can tell, it only exists on the cache.
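Here's the little helper I've been using to hunt for the duplicate across the cache and every data disk; it assumes the standard /mnt/cache and /mnt/disk# layout, and folder/fileA is just the made-up example from above. The /mnt root is a parameter only to make it easy to test.

```shell
# find_copies: list every location that holds a copy of a share-relative
# path, so a mover "file exists" complaint can be traced to the duplicate.
find_copies() {
    rel="$1"; root="${2:-/mnt}"
    for top in "$root"/cache "$root"/disk*; do
        if [ -e "$top/$rel" ]; then printf '%s\n' "$top/$rel"; fi
    done
}

# Example: find_copies 'folder/fileA'
```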
-
Ah, I will have to check what I have that set to then, thanks. Currently, watching the mover logs, it says some files can't be moved because they already exist. I look at the file's location on the array and its location on the cache, but it only exists in one of them. The folder it needs to go in exists on both, but the file itself exists only once. Any ideas on this one before I start deleting things?
-
Okay, I set domains and system to cache-prefer and started the mover after turning off Docker and VMs. My appdata was already set to cache-prefer and has been since I started using Unraid. I find it odd that some files exist on the array at all.
-
Alright, I've just updated and tested a few things. The main thing I am using to test is Plex. For some reason, only on iOS connections, everything I play buffers every 1-5 seconds. I thought it was Plex, so I troubleshot that first with verbose logging to check my transcode speed, but on wifi it is a direct play with no transcoding, yet it was still buffering, which was odd. My network can easily handle this. I switched to cellular and it was transcoding 1080p at 10,000 kbps down to SD and struggling: Plex's verbose logging said my transcode speed was around 0.3, when I want that number to be above 1. What is odd is that my CPU usage was only around 25%, and RAM around 25% as well. That's when I figured something must be acting up with the Unraid server, on top of the other problems I am having overall.

Anyway, after updating and rebooting, the problem was still present, so I tried booting in safe mode. I ran the same Plex test and it worked flawlessly on wifi, and when switching to cellular the verbose logging reported a transcode speed of 13.5, which sounds much better. So safe mode boots without plugins, correct? Would my next step be to figure out which plugin may have been causing this? How do I enable the plugins one at a time? I'm not even sure it actually booted in safe mode, because my syslinux config has already reset to boot back into normal Unraid rather than safe mode (unless it does that automatically), all my dockers started on their own, and all plugins appear to still be working.

Appdata is set to prefer cache, so I'm not sure why files are being put on the array when I have 2TB of cache. I'll set it to cache-only, along with the other shares you mentioned.
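In case it helps, this is the sketch I'm planning to use to bisect the plugins: park every .plg file, then restore one per reboot until the slowdown comes back. /boot/config/plugins is the standard Unraid plugin location, but the "plugins-disabled" holding directory is purely my own convention.

```shell
# park_plugins: move every .plg file into a holding directory so none load
# on the next boot.
park_plugins() {
    src="${1:-/boot/config/plugins}"; hold="${2:-/boot/config/plugins-disabled}"
    mkdir -p "$hold"
    for p in "$src"/*.plg; do
        if [ -e "$p" ]; then mv "$p" "$hold"/; fi
    done
}

# restore_one_plugin: move a single .plg back; run once per reboot until
# the problem reappears, and the last one restored is the culprit.
restore_one_plugin() {
    hold="${1:-/boot/config/plugins-disabled}"; dst="${2:-/boot/config/plugins}"
    for p in "$hold"/*.plg; do
        if [ -e "$p" ]; then
            mv "$p" "$dst"/ && printf 'restored %s\n' "${p##*/}"
            return
        fi
    done
}
```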
-
My server has been running slow for a while now, and I am tired of it acting this way. Dockers are slow to start/respond, I get upstream failures in the logs when trying to load pages in the GUI, the server occasionally hangs on reboots, and the list goes on. I'm just wondering if anything stands out in my diagnostics. tower-diagnostics-20200713-1436.zip
-
Sorry, I meant that I was trying to set these up for use with Organizr. I have not actually put them in Organizr yet, so we can take that out of the equation. All I have done is enable the confs, make sure they point to the right containers/ports, and enter mydomain.com/container, and I received all the errors in my post.

I fixed Tautulli: I had to add tautulli to the HTTP root in its config. For Jackett, I made no modifications to the subfolder conf other than renaming it to remove the sample portion, and I don't get the usual nginx 404 error... Fixed Jackett too: I needed to redefine the base URL in its GUI. I guess the grayed-out one didn't count.

That leaves Ombi, Radarr, and Sonarr. I am not sure what to do with Ombi yet, but for Radarr and Sonarr I think I need to modify the confs. It is definitely hitting them when I go to mydomain.com/radarr, but then Radarr redirects to mydomain.com/login?returnUrl=/radarr because I have forms authentication enabled. How do I get it to not redirect there? Basically, it needs to redirect to mydomain.com/radarr/login?returnUrl=/ instead. Sonarr and Radarr are also now working since I added base URLs to them too. Now I just have an issue with Ombi. Heading to mydomain.com/ombi greets me with this:
-
I am trying to set up various dockers with the default subfolder confs for use with OrganizrV2. Some of the default configs are working and some aren't:

ApacheGuacamole - works
Deluge - works
Jackett - "This page can't be found"
Ombi - appears to work but sits at a white page with the text "Loading..." (the subdomain conf for Ombi works, though)
Plex - works
Radarr & Sonarr - I have forms authentication enabled, and going to either of these turns the URL into login?returnUrl=/radarr instead of /radarr, so it doesn't work
Sabnzbd - works
Tautulli - "404 not found: the path '/tautulli' was not found"

I have made sure that all containers are on the same network and that the container names match what the conf is looking for. I do not see any errors for these in the error log file either. Any ideas for these issues?
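For reference, this is roughly how I've been spot-checking the subfolders: build the URLs, then pipe them through curl to see which confs answer at all. mydomain.com is the placeholder from above.

```shell
# subfolder_urls: print one URL per subfolder so the list can be piped
# through curl (or anything else) for a quick status sweep.
subfolder_urls() {
    base="$1"; shift
    for path in "$@"; do
        printf '%s/%s\n' "$base" "$path"
    done
}

# Example sweep (prints "<status> <url>" per line; -k in case the cert
# doesn't cover the exact name, -o /dev/null to keep only the code):
# subfolder_urls https://mydomain.com jackett ombi radarr sonarr tautulli |
#   while read -r u; do
#       printf '%s %s\n' "$(curl -ks -o /dev/null -w '%{http_code}' "$u")" "$u"
#   done
```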
-
Hello, I had updated to Unraid 6.8, which broke the --network extra parameter I was using, so after a bunch of looking around and digging I finally got it to work... with most dockers. The issue I have run into is getting it to work with dockers that share the same container ports. For example, right now I am trying to get it working with OrganizrV2, which uses port 80 and port 443. The solution I found for other dockers was, in the conf, to use the IP of the letsencrypt docker instead of the docker name. But if I point the Organizr conf at the letsencrypt IP, I get endless redirects and it fails, because letsencrypt itself is using ports 80 and 443. This does not produce any errors in the logs. So I tried putting the container name in the conf instead of the IP, and I get an operation timed out:

2020/02/04 18:23:20 [error] 397#397: *3 organizrv2 could not be resolved (110: Operation timed out), client: 192.168.1.1, server: organizr.*, request: "GET / HTTP/2.0", host:

This is my conf for OrganizrV2:

server {
    listen 443 ssl;
    listen [::]:443 ssl;

    server_name organizr.*;

    include /config/nginx/ssl.conf;

    client_max_body_size 0;

    location / {
        include /config/nginx/proxy.conf;
        resolver 127.0.0.11 valid=30s;
        set $upstream_organizr organizrv2;
        proxy_pass http://$upstream_organizr:80;
        proxy_buffering off;
    }
}
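One thing I'm going to try: putting both containers on a user-defined bridge, since container names only resolve via Docker's embedded DNS on one of those, which would let the conf keep saying "organizrv2" instead of an IP. This sketch just prints the docker commands rather than running them ("proxynet" is an arbitrary name I picked).

```shell
# net_join_cmds: emit the docker commands that create a user-defined
# bridge and attach each named container to it. Pipe to sh to execute.
net_join_cmds() {
    net="$1"; shift
    printf 'docker network create %s\n' "$net"
    for c in "$@"; do
        printf 'docker network connect %s %s\n' "$net" "$c"
    done
}

# Example: net_join_cmds proxynet letsencrypt organizrv2 | sh
```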
-
What version of Unraid are you on? 6.8? What are you trying to create the docker network for? I had my docker network working before upgrading to 6.8. You can sort of see me fumbling my way through it in 6.8 after the upgrade broke it here; there is also a short tutorial someone posted in my thread.
-
In Organizr's docker logs, it attempts to use the port but finds that it is already in use:

nginx: [emerg] bind() to 0.0.0.0:80 failed (98: Address in use)
[04-Feb-2020 18:55:13] ERROR: unable to bind listening socket for address '127.0.0.1:9000': Address in use (98)
[04-Feb-2020 18:55:13] ERROR: FPM initialization failed

It spams this over and over. If I attempt to define a port in the config, I only have the option to fill in the host port field; a container port field does not exist.
-
I am trying to get my reverse proxy working on Unraid 6.8. I had it working before, but it broke after updating because you can no longer specify a network in the extra params flag. I have it mostly working now; the problem I am running into is that it uses the docker's default ports. In my setup, Organizr has the same ports as letsencrypt, so when I point the site conf at port 80/443 it just ends in redirects. I don't know how to change those default ports; the Organizr config doesn't even have any ports defined. I'm really not sure where to go from here.
-
Ok, I think I have it working now. Removing the ports still left the port mappings, and I changed my apacheguacamole letsencrypt conf to point to the bridge IP address of letsencrypt, and it seems to be working. I have a new question though: apacheguacamole uses port 8080, and I have another docker that also uses port 8080 by default. How do I change the default mapping to allow both of them to work? As an example: Organizr uses the same internal ports as letsencrypt, so when I try to point toward Organizr it goes to letsencrypt. I don't have any ports set up in the container, so how can I change the default ones shown above?
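If I understand the docs right, only the host side of a -p mapping (left of the colon) has to be unique; both containers can keep listening on 8080 internally. Something like this, where "othertool" is a made-up name/image and this just prints the commands:

```shell
# Two containers can share an internal port as long as their host-side
# mappings differ: 8080:8080 for one, 8081:8080 for the other.
run_cmds='docker run -d --name apacheguacamole -p 8080:8080 jasonbean/guacamole
docker run -d --name othertool -p 8081:8080 othertool/image'
printf '%s\n' "$run_cmds"
```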
-
I notice the link above uses the --link parameter on the nginx docker to link deluge to the VPN and pass port 8112. I tried running a link between my letsencrypt and apacheguacamole dockers, but I get a failure saying linking is only for dockers on a custom user network. My guacamole docker is on a custom user-defined network, though.
-
Got it; I am sure someone else has it working as well. Thanks for the input. I just tried deleting and recreating the network, and tried it on a new container, but I still get the port-mapping issue. Anyone else have any ideas?
-
I did try creating the network via the CLI because of Unraid 6.8, but that is when the port issue occurred. It may be because of the network my letsencrypt is set to? In that guide, what network setting do you have on the VPN container? I still cannot create a docker with Custom: container:letsencrypt as its network while it has ports defined, whether the letsencrypt docker is set to Host or Bridge as its network.
-
root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='apacheguacamole' --net='container:letsencrypt' --privileged=true -e TZ="America/Los_Angeles" -e HOST_OS="Unraid" -e 'OPT_MYSQL'='Y' -e 'OPT_MYSQL_EXTENSION'='N' -e 'OPT_SQLSERVER'='N' -e 'OPT_LDAP'='N' -e 'OPT_DUO'='N' -e 'OPT_CAS'='N' -e 'OPT_TOTP'='N' -e 'OPT_QUICKCONNECT'='N' -p '9876:8080/tcp' -v '/mnt/user/appdata/ApacheGuacamole':'/config':'rw' --log-opt max-size=50m --log-opt max-file=1 'jasonbean/guacamole'
/usr/bin/docker: Error response from daemon: conflicting options: port publishing and the container type network mode.
See '/usr/bin/docker run --help'.
The command failed.

is what I get when trying to use the docker network.

root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='apacheguacamole' --net='container:letsencrypt' --privileged=true -e TZ="America/Los_Angeles" -e HOST_OS="Unraid" -e 'OPT_MYSQL'='Y' -e 'OPT_MYSQL_EXTENSION'='N' -e 'OPT_SQLSERVER'='N' -e 'OPT_LDAP'='N' -e 'OPT_DUO'='N' -e 'OPT_CAS'='N' -e 'OPT_TOTP'='N' -e 'OPT_QUICKCONNECT'='N' -v '/mnt/user/appdata/ApacheGuacamole':'/config':'rw' --log-opt max-size=50m --log-opt max-file=1 'jasonbean/guacamole'
6d183fe75c50fbd4f6b3186fbdeecb9e964ebc76958e25a2ea2f0356d8b832f5
The command finished successfully!

is what I get when I remove the ports from the template. Is this intended? How else am I supposed to access the docker without the ports being mapped?
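If the rule is that port publishing has to happen on the container whose network namespace is being joined, then I think the shape would be the following: the -p flags move onto the letsencrypt container, and the joined container drops them entirely. The image names and the 9876 host port are assumptions from my own template, and this only prints the commands.

```shell
# With --net container:letsencrypt, guacamole shares letsencrypt's network
# namespace, so its port 8080 must be published via letsencrypt's -p flags.
fix_cmds='docker run -d --name letsencrypt -p 80:80 -p 443:443 -p 9876:8080 linuxserver/letsencrypt
docker run -d --name apacheguacamole --net container:letsencrypt jasonbean/guacamole'
printf '%s\n' "$fix_cmds"
```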