kamhighway

Posts posted by kamhighway

  1. Is anyone having trouble with Sonarr communicating with Jackett? I am running Unraid 6.7.0 with the latest linuxserver.io Sonarr and Jackett dockers, which had been working flawlessly. This morning Sonarr reported that none of Jackett's indexers were working. I saw there was an update to Jackett and thought that might fix it. It did not.

     

    I noticed that the URL Jackett gives for a torznab feed is quite different from what I had in Sonarr when I originally set up Jackett. I updated one of my Sonarr indexers with the new torznab feed, but the indexer would not pass the connection test.

     

    The only other setting for the Sonarr indexer is the API path, which I left at /api. Does anyone know if that has changed?

     

    Thanks in advance.
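
    For anyone comparing against their own setup, the two feed shapes I mean look roughly like this. The host, port, and indexer name are placeholders, and I may be misremembering the older form:

```
# Feed URL as I originally had it in Sonarr (older Jackett):
http://192.168.1.100:9117/torznab/myindexer

# Feed URL as Jackett now displays it:
http://192.168.1.100:9117/api/v2.0/indexers/myindexer/results/torznab/
```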

  2. @jbartlett, Thank you for this tool. I very much appreciate what you are doing here. 

     

    Update:  Looks like this is the same error message Simon021 posted.

     

    I have downloaded the latest docker version and can't seem to get past the initial hardware scan. This is the error message I received:

     

    DiskSpeed - Disk Diagnostics & Reporting tool
    Version: Beta 3a
    
    Scanning Hardware
    12:01:51 Spinning up hard drives
    12:01:51 Scanning storage controllers
    12:01:53 Found Controller SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon] (8 ports)
    12:01:53 Found Controller C602 chipset 4-Port SATA Storage Control Unit (4 ports)
    12:01:53 Found Controller C600/X79 series chipset 6-Port SATA AHCI Controller (6 ports)
    12:01:53 Scanning controllers for hard drives
    Lucee 5.2.6.59 Error (expression)
    Message	Element at position [9] does not exist in list
    Stacktrace	The Error Occurred in
    /var/www/ScanControllers.cfm: line 646 
    644: <CFLOOP index="Key" list="#StructKeyList(HW)#">
    645: <CFLOOP index="PortNo" from="1" to="#ArrayLen(HW[Key].Ports)#">
    646: <CFIF HW[Key].Ports[PortNo].DevicePath NEQ "">
    647: <cfexecute name="/bin/ls" arguments="-l #HW[Key].Ports[PortNo].DevicePath#" timeout="300" variable="tmp" />
    648: <CFSET dir=ListToArray(tmp,Chr(10))>

    There was a stack trace after this section. I've attached a txt file with the full output. 

     

    I'm concerned because the message seems to indicate that there is a disk in position 9 that does not exist. If that is the case, it may explain some problems I've been having. If you could tell me whether this message means that my Unraid server thinks it has an HD that is not really there, it might be the key to figuring out what's going on with my server.

     

    Thank you!

    Kamhighway

    DiskSpeed.txt

  3. I am having trouble getting the plugin to remember changes made from the Wake On Lan tab. Here's what I do:

     

    1. Click on "new" button

    2. New line shows up with hostname "New", IP Address 192.168.X.XXX, Mac Address XX:XX:XX:XX:XX:XX.  It is always the same IP Address and Mac Address.

    3. I edit the hostname, IP Address, and Mac Address

    4. Status changes to green

    5. Click on "Done", whereupon I get sent back to Unraid's Settings tab

    6. If I click on Wake On Lan in the Settings tab, I go back to the Wake On Lan tab of this plugin. The Hostname, IP, and MAC address have all reverted to what they were in step 2

     

    I have examined wakeonlan.xml and found that the new host does not get saved to the file. If I edit the file to put the new host in manually, it does not show up as a host in the plugin's Wake On Lan tab.

     

    I am on Unraid 6.3.5. Does this plugin require 6.4?

     

    Update: After about an hour, the edits I made to wakeonlan.xml were reflected in the plugin as I had hoped. There must be a cache somewhere that takes a while to update. I had rebooted Unraid hoping that would make the cache read the edited wakeonlan.xml, but that did not work. Waiting an hour did work. 
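
    For reference, the hand-edited entry I describe above looked roughly like this. I'm reconstructing the element names from memory, so treat the structure as a guess and mirror an existing entry in your wakeonlan.xml rather than copying this verbatim:

```xml
<!-- Hypothetical shape only; check an existing entry for the real schema -->
<host>
  <hostname>backup-server</hostname>
  <ip>192.168.1.50</ip>
  <mac>AA:BB:CC:DD:EE:FF</mac>
</host>
```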

  4. In Step 1, is the IP address the external IP address or the local IP address?

    Thanks,

    Kamhighway

    1. Set required Environment Variables.

    After looking at the above table, set the variables as required.
    If running manually, be sure to add this extra parameter: --add-host mail.domain.com:xxx.xxx.xxx.xxx
    Replace 'mail.domain.com' with your MAIL_HOST, and the 'xxx.xxx.xxx.xxx' with your MAIL_HOST's IP Address.
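
    Whichever address turns out to be right, the --add-host parameter itself just writes a hostname-to-IP mapping into the container's /etc/hosts, so the pieces assemble like this. The hostname, IP, and image name below are placeholders I made up:

```shell
# Placeholder values -- substitute your MAIL_HOST and its IP address
MAIL_HOST="mail.example.com"
MAIL_IP="203.0.113.10"

# The extra parameter the instructions above call for
ADD_HOST_ARG="--add-host ${MAIL_HOST}:${MAIL_IP}"

# Show the assembled command (image name is a placeholder)
echo "docker run -d ${ADD_HOST_ARG} some/mail-image"
```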

  5. @alowishes, Thanks for your link. That is definitely part of the solution.

    @ccmpbll, You are on to something. Adding an unraid share changes the ip link and thus necessitates editing /etc/network/interfaces to restore network functionality.  However, the unraid share is still not mounted to the VM for me on Ubuntu 17.04. Does it work for you? 

     

    I can't get this to work by editing the VM XML. Instead, I edited /etc/fstab in the VM to mount the UR share to /media/unraid
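
    For anyone else taking the fstab route: with the share passed to the VM as a 9p filesystem, the line is of this general shape. The mount tag "unraid" is whatever source name the VM's XML defines, so it's a placeholder here:

```
# /etc/fstab inside the Ubuntu guest; "unraid" is the 9p mount tag (placeholder)
unraid  /media/unraid  9p  trans=virtio,version=9p2000.L,_netdev,rw  0  0
```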

  6. After getting the notification that an update was available, I updated, and now the docker will not start up. I have deleted the docker and reinstalled from scratch and still the docker will not start up. Is anyone else having the same problem?

  7. Thank you tjb_altf4 and wgstarks. The instructions that wgstarks linked to were a little confusing, so while it's fresh in my mind, I'd like to write down how to enable IPT.

     

    First, it was not clear that to see the cookies in Chrome you have to go to "More Tools" and turn on "Developer Tools". The cookie is found under the tab "Application".

     

    There are three parts: PHPSESSID, pass, and uid. Copy the three pieces into a text editor and put them together like this: PHPSESSID=[insert long number]; pass=[insert second number]; uid=[insert third number]
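
    Since the separator format is easy to get wrong, here is the same assembly as a quick shell sketch. All three values are placeholders:

```shell
# Placeholder values copied from Chrome's Application > Cookies view
PHPSESSID_VAL="abc123def456"
PASS_VAL="0123456789abcdef"
UID_VAL="123456"

# Jackett expects a single string with "; " between the three parts
COOKIE="PHPSESSID=${PHPSESSID_VAL}; pass=${PASS_VAL}; uid=${UID_VAL}"
echo "${COOKIE}"
```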

     

    Now the question is how to get this cookie into Jackett. In Jackett, if you hit "+ Add Indexer" you will find IPTorrents listed. Select it and it will come up with the default URL https://iptorrents.com. Change https to http and click "OK". Now you should see a box to paste the cookie. Click OK. It will fail, but now you can change the link from http back to https. Click OK again and it should succeed.

  8. Solved File Upload Size Limitation 

     

    I had been fiddling with LSIO's letsencrypt container to make it work as a reverse proxy for LSIO's Nextcloud. The reverse proxy works, but file uploads are limited to 10MB. The solution is to edit the file proxy.conf, which for me resides in /mnt/cache/appdata/letsencrypt_lsio/nginx. The first line in that file is:

     

    client_max_body_size 10m;

     

    Change to:

     

    client_max_body_size 0;  #This turns off checking and everything works.

    • Like 1
    • Upvote 1
  9. I can now set the max file upload size in the Nextcloud webui and it works just as CHBMB said as long as I access it via internal IP address 192.168.x.xxx:zzzz. 

     

    However, when I access nextcloud through lsio's letsencrypt set up as a reverse proxy, I again get "request entity too large."

     

    I've set client_max_body_size to 2000M in the reverse proxy as explained here:

     

    https://www.cyberciti.biz/faq/linux-unix-bsd-nginx-413-request-entity-too-large/

     

    However, this does not seem to fix the problem.

     

    Is anyone else using lsio's letsencrypt to reverse proxy lsio's nextcloud? If so, are you able to upload files larger than about 10.5 MB?

     

    Thanks in advance.

  10. Update: 

     

    It looks like at some point during the Nextcloud updates, something caused an incompatibility with Needo's mariadb docker. The clue was the post by Vizhouz that referred to new installation instructions on page 28. 

     

    I followed the instructions CHBMB wrote starting on page 28, installing lsio's mariadb and nextcloud dockers on a backup unraid server, and sure enough it works. The only thing I changed about his instructions is that I did a chmod (not chown) 644 on custom.cnf.

     

    Thanks for your help today, CHBMB.

  11. @CHBMB,

     

    I would think the php.ini values would get overridden by .user.ini values.  However, I'm not at all sure.  Since your config is working and mine isn't, I'm trying to figure out what could be wrong with my config since our .user.ini files look identical.

     

    I've been testing files of various sizes. It seems that files below 10.4 MB upload just fine; files of 10.5 MB and above throw the error "Request entity too large".

  12. @CHBMB

     

    Thanks for your reply:

     

    My .user.ini file looks to be updated:

     

    upload_max_filesize=2G
    post_max_size=2G
    memory_limit=512M
    mbstring.func_overload=0
    always_populate_raw_post_data=-1
    default_charset='UTF-8'
    output_buffering=0
    

     

    I see how I could be reading the documentation the wrong way regarding .htaccess. 

     

    Still, I am stumped.  I wish it were as simple as changing the max file size in Nextcloud's webui, but that did not work for me.

     

    Can you tell me what your php.ini has for these two variables?

     

    1. upload_max_filesize

    2. post_max_size
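
    Rather than hunting through php.ini by hand, PHP can report the values it actually loaded. This is a generic check, not anything specific to the LSIO image; it would be run inside the Nextcloud container:

```
# "php -i" dumps the effective configuration; grep filters the two values
php -i | grep -E 'upload_max_filesize|post_max_size'
```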

     

    Thank you!

  13. @CHBMB

     

    Do you use LSIO's letsencrypt as a proxy in front of Nextcloud?

     

    Update:  I just took the proxy out of the equation to eliminate one potential source of the problem.  The issue still persists, so it must be something within the settings for the Nextcloud docker.

     

    Reading the documentation it appears that changing the max filesize from the Nextcloud webui only works if using apache as the webserver. Since the docker is using nginx, I'm not sure how this works for CHBMB.

     

    From:  https://docs.nextcloud.com/server/9/admin_manual/configuration_files/big_file_upload_configuration.html
    
    To be able to use this input box you need to make sure that:
    
    your Web server is able to use the .htaccess file shipped by Nextcloud (Apache only)
    the user your Web server is running as has write permissions to the files .htaccess and .user.ini

  14. Has anyone been able to increase the maximum upload size? If so, could you please help me out here. The default seems to be 512 MB and I'd like to increase that to 2 GB.

     

    I have edited nginx.conf to change client_max_body_size to 2G.

     

    I have also edited php.ini in the docker to change post_max_size = 2G and upload_max_filesize = 2G

     

    I have made the same edits for LSIO's Letsencrypt, which I am using to proxy Nextcloud.

     

    What am I missing?
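
    To collect the edits in one place, here is the general shape of the three settings I touched. Paths vary by container, and 2G is just the size I happen to want:

```nginx
# nginx.conf (and proxy.conf in the letsencrypt container), inside the http block
client_max_body_size 2G;
```

```ini
; php.ini inside the Nextcloud container
upload_max_filesize = 2G
post_max_size = 2G
```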
