
brianbrifri


Posts posted by brianbrifri

  1. 3 hours ago, kizer said:

    All of a sudden today I'm noticing none of my Dockers or Plugins know what version they are. *sigh* Two of my dockers also can't download their icons. Moving some files around; maybe a reboot will fix it, or eeeeeeeeekkkkkk, my SSD might be glitching.

     

    Oddly, I'm also seeing that the mothership API is having a problem updating... Hmmmm, even though I'm connected via it. Lol

     

    Gotta love troubleshooting stuff. 

     

    My work connection can see the missing icons. Gawwww, it has to be my system.

     

    ********************* update *********************

    Reboot fixed it right up. No idea what was stuck, but I didn't feel like tracking it down. lol

     

     

    Reboot worked for me too :(

  2. 4 hours ago, MisteR- said:

    I have EXACTLY the same problem as @brianbrifri. Since last time, the Apps section isn't working anymore; nothing changed in the network, and via the terminal I am able to download the files. Reinstalling doesn't help. Any ideas?

     

    EDIT:

     

    Ok, upgrading from 6.9.2 (350+ days uptime :'( ) to 6.10.3 fixed this problem... But something has changed on the 'other' side...

    NOOOO, I have about the same uptime :( I'll try upgrading to see if that helps. But yeah, it's definitely NOT a networking issue on my end, as I proved I can reach both AWS and GitHub from Unraid.

  3. On 8/12/2022 at 3:56 PM, Squid said:

    Are you guys running via a proxy?  Neither of your debugs shows that CA can download at all.

     

    Nope. I am running Pi-hole, but I changed my Unraid server's DNS to OpenDNS and Google in Unraid's network settings as a test and still got that error.
    Also, I'm not getting any blocks in the Pi-hole logs when launching CA.

    EDIT: Again, I was able to pull CA from the CLI of Unraid's terminal...
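
    For reference, this is roughly what I ran from the Unraid terminal to rule out DNS and connectivity (a rough sketch of the checks; if nslookup isn't available on a given box, the wget alone covers both resolution and download):

    # Which DNS servers is the server actually using after the settings change?
    cat /etc/resolv.conf

    # Do the hosts CA pulls from resolve?
    nslookup raw.githubusercontent.com
    nslookup s3.amazonaws.com

    # Does an actual file download work?
    wget -O /dev/null https://raw.githubusercontent.com/Squidly271/community.applications/master/source/community.applications/usr/local/emhttp/plugins/community.applications/Apps.page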

     

    Quote

    On any given file at any given time, the source files on master will never match what's in the package.  

     

    Primarily because the master branch for CA is the only branch I use in dev; it always reflects the current state of the files on my system, not the files in the release versions, which are contained within the applicable .txz.

    Gotcha. Wasn't sure how it all worked, was just trying to do some investigation and provide as much info as possible.

  4. 1 hour ago, brianbrifri said:

    I'm having the exact same issue for a while now as well. MD5 failed for ./Apps.page ONLY. Currently on Unraid 6.9.2 and CA version 2022.07.26

    Any thoughts on how to fix?
    EDIT: Reinstall of plugin did not help.

     

    Current status:

    The output of

    md5sum Apps.page

    did not match the Apps.page entry in the ca.md5 file, so I updated the ca.md5 file to contain that output.

    I'm still getting the "Download of appfeed failed" error. However, the hashes now all check out according to the logs (attached here).

    I was also able to download both the s3.amazonaws.com and raw.githubusercontent.com URLs via wget from my Unraid terminal shell, so networking isn't the issue.
    I also downloaded the latest Apps.page file from GitHub

    wget https://raw.githubusercontent.com/Squidly271/community.applications/master/source/community.applications/usr/local/emhttp/plugins/community.applications/Apps.page

     

    then ran

    diff Apps.page Apps.page.1

    and got an output of

    5c5
    < Code="e942"
    ---
    > Code="f0db"
    1184c1184
    <                       confirmButtonText: "<?tr("OK")?>",
    ---
    >                       confirmButtonText: "<?tr("Install")?>",

    So it doesn't look like there's any real difference between my version of Apps.page and what's on master.
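
    In case it helps anyone else, this is roughly how I compared the hashes (a minimal sketch; I'm assuming ca.md5 sits in the same plugin directory as Apps.page, i.e. /usr/local/emhttp/plugins/community.applications/):

    cd /usr/local/emhttp/plugins/community.applications

    # hash of the file actually installed
    md5sum ./Apps.page

    # hash the plugin expects for that file
    grep Apps.page ca.md5

    # if ca.md5 uses the standard md5sum "hash  ./path" layout,
    # this verifies every entry in one pass
    md5sum -c ca.md5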

    CA-Logging-20220812-1407.zip

  5. On 8/8/2022 at 1:16 PM, FoxxMD said:

    I've had an issue viewing anything in the CA tab for the past two weeks. The rest of my network is fine, as are all my dockers and Unraid in general. When I visit the CA tab I get this:

     

     

    Attached is log I got after visiting the page with Save CA debugging information enabled.

     

    The log looks fine for downloading most JSON files, but I think applicationFeed.json is giving it issues. It doesn't seem network related (judging by the JSON error reported). I can visit all the S3 links for JSON files reported in the log in my browser without an issue, as well as download them with wget from the Unraid CLI.

     

    I have uninstalled and re-installed the CA app with no change in behavior. This is on Unraid 6.9.0

    CA-Logging-20220808-1609.zip

    I'm having the exact same issue for a while now as well. MD5 failed for ./Apps.page ONLY. Currently on Unraid 6.9.2 and CA version 2022.07.26

    Any thoughts on how to fix?
    EDIT: Reinstall of plugin did not help.

  6. 19 hours ago, tmoran000 said:

    This last binhex release has completely broken my Plex. WTF

    Edit: Looks like a new update was released 9 minutes ago... Hope it corrects this.

    Edit: The update seems to have corrected the Settings issue.

    Yup. Looks like it was a Plex-specific issue and not a docker build issue. If you rolled back to the previous version, remove the tag for that specific version, apply, and then check for updates; if you haven't rolled back, just check for updates. The latest version fixes the issue.

     

     

    plexUpdateNotes.jpg

  7. Hello, 

    I just updated to the most recent docker version: Container ID 04de605cabe4 and am getting errors: 

    Plex Plug-in [com.plexapp.system]: error while loading shared libraries: libpython2.7.so.1.0: cannot open shared object file: No such file or directory
    
    
    2019-08-26 11:15:29,351 DEBG 'plexmediaserver' stderr output:
    Plex Plug-in [com.plexapp.system]: error while loading shared libraries: libpython2.7.so.1.0: cannot open shared object file: No such file or directory
    
    
    2019-08-26 11:15:29,357 DEBG 'plexmediaserver' stderr output:
    Plex Plug-in [com.plexapp.agents.htbackdrops]: error while loading shared libraries: libpython2.7.so.1.0: cannot open shared object file: No such file or directory

    I've researched the error online and the most common fix was to export LD_LIBRARY_PATH=/usr/lib/plexmediaserver. I added that to the docker config, but the errors still persist. Any thoughts on what might be the issue?
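
    For context, this is roughly the docker run equivalent of what I set up (a minimal sketch; the image name, paths, and network mode below are placeholders, and on Unraid the variable was actually added through the container template):

    docker run -d \
      --name=plex \
      --net=host \
      -e LD_LIBRARY_PATH=/usr/lib/plexmediaserver \
      -v /mnt/user/appdata/plex:/config \
      binhex/arch-plexpass   # placeholder image/tag, adjust to your container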

  8. 6 hours ago, binhex said:

    If the docker image hasn't changed (it hasn't), then it can only be external factors in play: your firewall, your ISP, your VPN provider, or your usenet provider. Any of these could potentially cause a block.

    Yup, I'm thinking it was an ISP or VPN issue.

  9. On 1/1/2019 at 12:17 PM, binhex said:

    I can't currently replicate the issue, guys; the latest image just works for me, with VPN enabled or not.

     

    @brianbrifri and @M0zza, please can you both follow the procedure linked below:

     

    https://forums.unraid.net/topic/44108-support-binhex-general/?do=findComment&comment=435831

     

    @binhex Here is my supervisord.log file (I removed sensitive info and cert stuff only)

    supervisord.log

  10. 4 hours ago, M0zza said:

    Hi Binhex,

     

    I am in the same boat as brianbrifri. After the latest update there is no UI and the other dockers cannot connect.

     

    Hope to hear good news soon, but seeing as it's New Year's, I am not expecting you to sort it straight away.

     

    Have a good new year.

     

    M0zz

    Sad that other people are having this issue too, but also good to know that it's not just me.

  11. Hello,

    After the most recent update, the binhex-sabnzbdvpn docker container is not available. The logs do not seem to indicate anything wrong, but I cannot pull up the UI and my services cannot connect to the application either. I have changed nothing other than updating the application. Here is the link to the pastebin for my logs on a fresh startup. Also attached is my docker config.
    Any help would be appreciated.

    screencapture-c7b9c8291ae6512ed6e0904e830117762178c8f6-unraid-net-Dashboard-UpdateContainer-2018-12-26-12_17_50.jpg

     

    EDIT: Tried recreating from scratch and am getting the same behavior.

    EDIT2: Tried turning off Privoxy and strict port forwarding. Also changed VPN servers. Still the same behavior.

    So I recently had to re-add Ombi, and now I can't get nginx to work, whereas it worked previously. My default file for site-confs is:

     

    upstream backend {
    	server 192.168.42.9:19999;
    	keepalive 64;
    }
    
    server {
    	listen 443 ssl default_server;
    	listen 80 default_server;
    	root /config/www;
    	index index.html index.htm index.php;
    
    	server_name _;
    
    	ssl_certificate /config/keys/letsencrypt/fullchain.pem;
    	ssl_certificate_key /config/keys/letsencrypt/privkey.pem;
    	ssl_dhparam /config/nginx/dhparams.pem;
    	ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';
    	ssl_prefer_server_ciphers on;
    
    	client_max_body_size 0;
    
    	location = / {
    		return 301 /;
    	}
    
    	location /tautulli {
    		include /config/nginx/proxy.conf;
    		proxy_pass http://192.168.42.9:8181/tautulli;
    	}
    
    	location /ombi {
    		include /config/nginx/proxy.conf;
    		proxy_pass http://192.168.42.9:3579/ombi;
    	}
    	
    	if ($http_referer ~* /ombi/) {
            rewrite ^/dist/(.*) $scheme://$host/ombi/dist/$1 permanent;
        }
    	
    }
    

    I have Ombi's base URL set to /ombi. I was told to add the if ($http_referer...) block when Ombi got updated a while back, since they had some issues. I have tried with and without this block of code. I know nginx is working because Tautulli is working. Any ideas if there is a step I am missing?

     

    EDIT: Error message is 

    This site can’t be reached
    The connection was reset.
    Try:
    
    Checking the connection
    Checking the proxy and the firewall
    Running Windows Network Diagnostics
    ERR_CONNECTION_RESET
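
    To separate the backend from the proxy, a quick check is to hit Ombi directly and then through nginx (a rough sketch; the direct address comes from the config above, while the proxied URL is a placeholder for however the letsencrypt container is published):

    # straight to Ombi -- confirms the app itself answers
    curl -I http://192.168.42.9:3579/ombi

    # through nginx -- a reset here but not above points at the proxy config
    curl -Ik https://your.domain.here/ombi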

     

  13. On 7/18/2018 at 5:59 PM, RockDawg said:

    I just recently set up Letsencrypt with nginx and things work when accessing the web GUI, but the Android app doesn't work now. Does something else need to be done to get that working again?

    You might need to set up your Android app again, as its settings may have changed.

    So, I've just restarted my server (and also just my array) and now Deluge is not downloading anything. There are also "Unhandled error in Deferred" messages now. I can add torrents manually, via a link, as well as through Radarr/Sonarr, but they always add in a paused state and won't download even if I do a force recheck or click resume. No settings have been changed. Here is a pastebin of my supervisord.log: https://pastebin.com/DNuzvbb4

     

    Any help would be appreciated

     

    EDIT: I ended up deleting all my Deluge and Sonarr configs/dockers and then re-adding everything. Docker logs are quiet. Now I can connect to Deluge from Sonarr and Radarr on first setup, but after a bit nothing seems to be able to connect to it, and nothing I do can get the connection back. I can now add torrents manually, however. Should I move this to a Sonarr/Radarr support thread or should it stay here? These are all binhex-* dockers btw.

     

    System.Net.WebException: The request timed out: 'http://192.168.0.136:8112/json' ---> System.Net.WebException: The request timed out
      at System.Net.HttpWebRequest.EndGetResponse (System.IAsyncResult asyncResult) [0x00049] in <2fef7234205a4a009fe5995569c314ee>:0 
      at System.Net.HttpWebRequest.GetResponse () [0x0000e] in <2fef7234205a4a009fe5995569c314ee>:0 
      at NzbDrone.Common.Http.Dispatchers.ManagedHttpDispatcher.GetResponse (NzbDrone.Common.Http.HttpRequest request, System.Net.CookieContainer cookies) [0x000f6] in <22414d89e85c45babce99539812a436f>:0 
       --- End of inner exception stack trace ---
      at NzbDrone.Common.Http.Dispatchers.ManagedHttpDispatcher.GetResponse (NzbDrone.Common.Http.HttpRequest request, System.Net.CookieContainer cookies) [0x001ca] in <22414d89e85c45babce99539812a436f>:0 
      at NzbDrone.Common.Http.Dispatchers.FallbackHttpDispatcher.GetResponse (NzbDrone.Common.Http.HttpRequest request, System.Net.CookieContainer cookies) [0x000b5] in <22414d89e85c45babce99539812a436f>:0 
      at NzbDrone.Common.Http.HttpClient.ExecuteRequest (NzbDrone.Common.Http.HttpRequest request) [0x0007e] in <22414d89e85c45babce99539812a436f>:0 
      at NzbDrone.Common.Http.HttpClient.Execute (NzbDrone.Common.Http.HttpRequest request) [0x00000] in <22414d89e85c45babce99539812a436f>:0 
      at NzbDrone.Core.Download.Clients.Deluge.DelugeProxy.AuthenticateClient (NzbDrone.Common.Http.JsonRpcRequestBuilder requestBuilder, NzbDrone.Core.Download.Clients.Deluge.DelugeSettings settings, System.Boolean reauthenticate) [0x0005b] in <65fd07448b304721a1cf8bbfea4394c9>:0 
      at NzbDrone.Core.Download.Clients.Deluge.DelugeProxy.BuildRequest (NzbDrone.Core.Download.Clients.Deluge.DelugeSettings settings) [0x0006d] in <65fd07448b304721a1cf8bbfea4394c9>:0 
      at NzbDrone.Core.Download.Clients.Deluge.DelugeProxy.ProcessRequest[TResult] (NzbDrone.Core.Download.Clients.Deluge.DelugeSettings settings, System.String method, System.Object[] arguments) [0x00000] in <65fd07448b304721a1cf8bbfea4394c9>:0 
      at NzbDrone.Core.Download.Clients.Deluge.DelugeProxy.GetTorrentsByLabel (System.String label, NzbDrone.Core.Download.Clients.Deluge.DelugeSettings settings) [0x00012] in <65fd07448b304721a1cf8bbfea4394c9>:0 
      at NzbDrone.Core.Download.Clients.Deluge.Deluge.GetItems () [0x00029] in <65fd07448b304721a1cf8bbfea4394c9>:0 
      at NzbDrone.Core.Download.TrackedDownloads.DownloadMonitoringService.ProcessClientDownloads (NzbDrone.Core.Download.IDownloadClient downloadClient) [0x0000c] in <65fd07448b304721a1cf8bbfea4394c9>:0 
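
    For what it's worth, a quick way to check whether the web API from that trace is reachable at all (a rough sketch, assuming curl is available on the Unraid shell and the web UI is still on the default "deluge" password):

    # POST an auth request to the same endpoint Sonarr/Radarr time out on;
    # any JSON reply (even "result": false) means the endpoint is up,
    # while a hang/timeout matches the error above
    curl -s -X POST http://192.168.0.136:8112/json \
      -H "Content-Type: application/json" \
      -d '{"method": "auth.login", "params": ["deluge"], "id": 1}'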

     

  15. On 11/3/2017 at 1:08 PM, Fredrick said:

    Well, in conclusion I fucked my previous Plex container good, so I made the switch without moving any config/database :P

     

    Was a bit of a pain; the worst thing is losing watched status, I guess. A rescan takes time, but it's not too bad.

     

     

    You could sign up for a Trakt account, then add the Trakt plugin (channel) to Plex and it will sync your watched statuses. Then, if you ever have to move your library without the database, you can just resync Trakt!
