
aptalca

Community Developer

Posts posted by aptalca

  1. Assuming you can configure each container to be reachable on port 443 from the outside world. Might require some fiddling at the router level. Or switch each container in turn to be on 443 externally when it comes time to obtain or renew certificates. Don't know if you can tell the letsencrypt script to talk back on a different port.
     
    After it's running with good certificates I'm assuming you can put them on whatever external port you want.
     
    I may be completely wrong though.


    I have a few containers on my server most are test versions.

    Only the main one where 443 on the router is forwarded to will auto renew the certs.

    For others, when I get the email notice saying the cert is about to expire, I forward port 443 to the expiring one and restart the container. After renewal, I switch the port forward to the main one.
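
    Roughly, the renewal dance looks like this (container names are just examples, and the actual change of the 443 forward happens in the router UI, not from the shell):

        # point the router's 443 forward at the container whose cert is expiring, then:
        docker restart letsencrypt-test1
        docker logs letsencrypt-test1     # confirm in the log that the renewal succeeded
        # finally, point the router's 443 forward back at the main container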
  2. Sorry for bumping this, but does anyone have any idea what I could be missing?  I messed around with it again this weekend and couldn't get anywhere.  I feel like it's something simple, but it has to do with the dynamic DNS and what LetsEncrypt sees when it pings that domain.  If I use something like DuckDNS it works, and that's what I've been using.  However that isn't my desired end state.
     
    I appreciate any thoughts/help anyone has, thank you!


    It's certainly a DNS setting issue. I have not used Google Domains before so can't help you there. Perhaps you can search the letsencrypt forums for Google Domains help.
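
    A quick sanity check (the domain below is just a placeholder) is to look up what the hostname actually resolves to from a public resolver, since that is what LetsEncrypt sees:

        dig +short yourdomain.example.com A
        # or
        nslookup yourdomain.example.com 8.8.8.8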
  3. The template for radarr is missing a path option for "drone factory" folder.
     
    I don't need help adding it, just mentioning it was missing.


    Drone factory has been on its way out for over a year. It had been optional and just got deprecated in Sonarr.

    Users can easily add a volume map for it if they still need it
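
    For example (paths below are made up, adjust to your setup), the extra mapping is just another -v on the run command, or an extra Path entry in the unRAID template:

        docker run -d --name=radarr \
          -v /mnt/user/appdata/radarr:/config \
          -v /mnt/user/downloads/drone:/drone \
          -v /mnt/user/Movies:/movies \
          -p 7878:7878 \
          linuxserver/radarr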
  4. I just got done testing. I am using a single cache drive, btrfs, a 500GB Samsung EVO (3D nand and pretty fast)

     

    I tried a copy from cache to cache, a 24GB image file. Server didn't break a sweat, load average did not go up to more than 6, which is perfectly fine on my 8 core, 16 thread machine.

     

    Then I tried a sab download, a 7.5GB file. During unrar, load average was at about 1.6. Then, while radarr was renaming and moving (cache to cache) load average went up to about 4 and stayed there.

     

    I'll try a larger file download in the near future to simulate the scenario where I had the issue with a cache pool. But so far, it looks like my issues are gone with a single cache drive.

  5. On 7/1/2017 at 1:45 PM, cypres0099 said:

    I just switched all my plugins (SABnzbd, Sonarr, Plex) over to dockers and added Radarr as a docker and most things are working correctly, but I'm running into a couple of snags with Radarr and to some degree Sonarr.

     

    I've been using Sonarr for some time, using the Drone Factory for my TV download folder that SAB puts TV downloads into. While I was updating things, I saw that Sonarr and Radarr now have "Complete Download Handling". From what I understand, you can turn off the drone folder and just have SAB and Radarr/Sonarr communicating when downloads are finished. Then Radarr will process the movie, move it to the movie folder, and clean up the folders.

    PROBLEMS
     

    1. The only way my movies are getting processed is if I set up the drone folder.
    2. After they are processed I'm left with a folder containing leftover junk files. They don't get cleaned up.

    3. If I try to use complete download handling, the movies/tv shows get downloaded to the appropriate folders, but then nothing happens.

     

    See configuration in image attached. 

    I'd really appreciate any help!

     

    I don't think you had them set up right from the start. Sonarr and Radarr always had complete download handling. You were never supposed to use the drone factory for things that are fetched by Sonarr and Radarr, but only things that you added to sab yourself manually.

     

    The important thing to keep in mind is that Sonarr and Radarr communicate with sab through its API and retrieve the location of the files. They then retrieve the files from that location and process them. First you need to make sure that your volume mappings are consistent between the containers. From your screenshots, they seem to be right (/mnt/user inside and outside in all three).

     

    The second thing is, you may have a post processing script. If so, make sure that the script does not move the files to a different location that sab doesn't know about, and that it doesn't rename the files or the folders. Your second screenshot shows that sab thinks the downloaded files are at /mnt/user/Usenet/Complete/Autoprocess Movies/<movie name> but radarr cannot find them at that location. Either they are moved or they are renamed, which is the problem. I do notice that some of the folder names have an additional "Obfuscated" tag in their name; is that removed by a post processing script?

     

    I have sab download the files as is into a temp folder, and radarr retrieves them from there just fine. No post processing script in between. That is likely your issue.
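
    As a minimal sketch of what consistent mappings mean here (stripped down to just the relevant flags), both containers map the same host path to the same container path, so a location reported by sab's API resolves identically inside radarr:

        docker run -d --name=sabnzbd -v /mnt/user:/mnt/user -p 8080:8080 linuxserver/sabnzbd
        docker run -d --name=radarr  -v /mnt/user:/mnt/user -p 7878:7878 linuxserver/radarr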

  6. Has anyone tried to have two plex servers running locally that are accessed remotely? I'm having problems getting the second one to work remotely. Apparently I need to set the public port to something other than 32400 (which the first server is using). I've tried to set the public port on the 2nd plex server manually to 32401, but don't see anywhere in the docker plex config where I can set that. Is there a place to do that?
     
    Thanks in advance!
    jeff...


    I don't believe it's possible due to some ports being hardcoded in plex
  7. Is it possible to have some pages served unsecured with this server? I tried adding some locations to the listen 80 server, but I don't know if I truly understand how to set it up. No matter what I try, browsing to the matched URI still redirects to the secure server.

     

    server {
        listen 80;
        server_name _;
        root /config/publicwww;
        index index.html index.htm index.php;

        location ^~ /public {
            try_files $uri =404;
        }

        location / {
            return 301 https://$host$request_uri;
        }
    }

    Any suggestions on how to tackle this?

     

     

    You're on the right track. Just keep in mind that the "location" is relative to the root.

     

    The way you set it up, when a user goes to http://yoururl/public/index.html the webserver will try to serve the file located at /config/publicwww/public/index.html
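
    In other words (the file name here is just an example), the file has to exist at root + URI for that location to serve it:

        mkdir -p /config/publicwww/public
        echo "hello" > /config/publicwww/public/index.html
        # http://yoururl/public/index.html now maps to that file and is served over plain http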

     

    Other things to keep in mind are, restarting the container after changes to the config files, and clearing the browser cache

  8. Got an update today (PlexMediaServer 1.7.5 (PlexPass)) and saw this message:
    ###################################################################
    # NOTE: Your system does not have udev installed. Without udev
    # you won't be able to use DVBLogic's TVButler for DVR
    # or for LiveTV
    #
    # Please install udev and reinstall Plex Media Server
    # to enable TV Butler support in Plex Media Server.
    #
    # To install udev run: sudo apt-get install udev
    ###################################################################

    DVR still works fine, but I don't have LiveTV or the other new stuff...
    I use an HDHomeRun EXPAND...
    None of my clients have any ability to use LiveTV (Android smartphone, PMP, Plex Web, OpenPHT)



    If you don't have that specific tuner, it won't affect you
  9. Hi,

     

    I have a btrfs cache pool of 4 ssd drives, which hosts the docker image as well as downloaded data. I noticed that some of the docker apps were occasionally having issues like "database locked" and "write error, disk full?", etc.

     

    After thorough testing, I realized that whenever there is a large file transfer on the cache pool, where the file is read from and written to the cache drive, the server temporarily locks up. The unraid gui is unreachable, the docker apps stall and their guis are unreachable, and ssh access is slow and occasionally hangs in the middle of a basic operation like "ps -ef". This continues until the file transfer is over, and a few minutes later, everything is back to normal. This seems to happen with files larger than about 10GB.

     

    A typical scenario is, sabnzbd downloads a file over 10GB, during unrar of the file to the cache pool, everything else is locked up. Then, when the file is being moved from the sab temp folder to the Movies folder on the cache pool by radarr, again, everything else is locked up.

     

    Because of the lockup, it is difficult to troubleshoot. I don't know what else to try or test.

     

    I have sabnzbd only using certain cores and not all, so the issue should not be due to high cpu utilization during unrar. Therefore I believe it is due to disk io that is fully taken up by the file transfer.
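
    For reference, pinning a container to specific cores can be done with docker's cpuset option (core numbers below are only an example):

        docker run -d --name=sabnzbd --cpuset-cpus="0-3" -v /mnt/user:/mnt/user -p 8080:8080 linuxserver/sabnzbd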

     

    Attached is my diagnostics, which should include info from this morning's 20GB file transfer (unrar operation failed a couple of times due to disk full message although there was 100GB of free space, but then succeeded on the third try).

     

    I would appreciate any ideas or suggestions. Thanks

     

    PS. File transfers from the cache pool to the array are completely fine. Mover does not affect general server operations and neither does a regular copy to the array.

    tower-diagnostics-20170627-1153.zip

  10. That is what it said to do on the linuxserver page, and a few people on this thread said it had to be that way. No worries, I'll just use a virtual appliance instead of the Docker.
     
    https://github.com/linuxserver/docker-openvpnas


    What chbmb means is, if you don't tell us exactly what you did, we can't help you figure out why it's not working.

    Post your docker run command and the container log and we can help you troubleshoot.

    My guess is you are trying to port forward and the default port is taken up by something else, but it's just a wild guess since I have no idea what settings you used.
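
    To grab those, something like this works (the container name is whatever you called yours):

        docker logs openvpn-as > openvpn-as.log 2>&1
        docker ps --format '{{.Names}}  {{.Ports}}'   # shows which ports each container has claimed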
  11. Any thoughts on my post prior with the URLPREFIX setting not working?


    Sorry, I forgot to respond to that. But there were a couple of issues.

    Url prefix should have been just the prefix, no forward slash.

    And the prefix option only applied to the calibre webserver, not the calibre gui. Calibre gui already has a prefix that can be used in reverse proxy situations.

    With the latest update the url prefix option has been deprecated. You can set that in the calibre gui webserver settings.
  12. 33 minutes ago, poldim said:

    My setup:

    • 192.168.2.0/24 - VLAN 20 - HOMELAB

    • 192.168.3.0/24 - VLAN 30 - HOME AUTOMATION

    • 192.168.4.0/24 - VLAN 40 - WLAN

    • 192.168.5.0/24 - VLAN 50 - GUEST [Subnet routing only blocked for GUEST VLAN.]

    What happens:

    • 4G connection from cell -> nginx reverse proxy = works
    • VM on HOMELAB VLAN -> nginx reverse proxy = works
    • VM on HOME AUTOMATION VLAN -> nginx reverse proxy = does not work
    • phone/laptop on WLAN VLAN -> nginx reverse proxy = does not work
    • phone/laptop on WLAN VLAN -> IP of service on HOMELAB VLAN = works

     

     

    The unraid server has the letsencrypt / nginx docker. The server sits on 192.168.2.100 VLAN 20 but is bridged to 192.168.3.100 VLAN 30 and 192.168.4.100 VLAN 40.  I added the networking settings to the gdoc https://docs.google.com/document/d/1Cf8qLFcBVAen3yxqcOzA3kI_KbaRuS582b7yE7X7J1Q/edit?usp=sharing

     

    What does not make sense to me:

    • I can access the UniFi UI on 192.168.2.100 from my macbook or phone which are 192.168.4.1xx
    • I can access my security cameras that are 192.168.3.xxx from the same macbook/phone and VM hosted on 192.168.2.xxx.

    Doesn't this mean that subnet routing is, in fact, working correctly?

     

    Is this correct while on WAN: unifi.mydomain.com

    • Phone > DNS lookup and send me to my router > router forwards 80 + 443 to 80 and 443 on my unraid server > letsencrypt is bound to those ports, receives the request, and forwards to the appropriate internal address based on config.

    • If I have hairpin NAT enabled, is the process the same? Does the lookup for unifi.mydomain.com not get routed by the router back to the unraid server?

     

    Your network is super complicated. The fact that the reverse proxy works from wan/4g but has issues from inside suggests dns/firewall issues on the router. It may be a nat loopback issue (many routers by default block connections that go out to the wan and come right back in to the router). You may also have issues with your firewall settings for connections between vlans. All those issues are really beyond the scope of our support for this image.
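
    One quick check from a machine inside the network: see whether the hostname resolves to your WAN IP (which forces the traffic through the router's NAT loopback) or to the server's LAN IP:

        nslookup unifi.mydomain.com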

  13. 1 hour ago, miftis said:

    This was more of a bug report.  Restarting the container was not working for me initially because I had removed the default conf file to favor multiple smaller conf files with a dedicated purpose.  Every time I restarted the container the default conf would be recreated and break nginx.  I temporarily solved the problem by renaming the conf I did have back to "default".

     

    "nginx -s reload" not working is not a bug. It's not supposed to work. That's not how nginx is launched in this image. Nginx is run and managed by the s6 supervisor.

     

    The way this image is set up is that, if there is no default config in the config folder, it copies the default one from inside the image, just like when you first install it. Also when someone messes up their settings, they can remove the default config file and restart the container to go back to the original setup.

     

    You're not supposed to remove the default config, but you can modify it to your heart's content. You can also add as many additional site configs as you like. Just don't delete the default one (you can leave it blank)
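
    So if the settings get messed up, the reset mentioned above looks roughly like this (the host path is the typical unRAID appdata location for this container; adjust to your setup):

        rm /mnt/user/appdata/letsencrypt/nginx/site-confs/default
        docker restart letsencrypt
        # on start, the container copies the stock default config back into place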

  14. Hi everyone,
    I would like to set up a Plex server to give access to my iOS and Android devices on my network (PlexPass)... however, more importantly, I wouldn't want to start buying extra SSDs to save the metadata.

    I would like all heavy metadata files to either be saved in my array (shares), or can I configure the docker (unraid app) to use the existing metadata currently saved? Obviously, the docker will still run from my cache SSD.

    Is there someone that can suggest how to configure this, or tell me if it can't be done?
    Using unraid 6.3.2
    Thanks everyone
     


    If you have plex set up to create index files, they take up a lot of space. They are all in a folder called "Media" I believe, and when I had a smaller cache drive, I moved that folder to the array and put a symlink in its place.
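
    Roughly what that looked like (paths are examples; stop the plex container first so nothing writes to the folder during the move):

        docker stop plex
        mkdir -p /mnt/user/plexmeta
        mv "/mnt/cache/appdata/plex/Library/Application Support/Plex Media Server/Media" /mnt/user/plexmeta/
        ln -s /mnt/user/plexmeta/Media "/mnt/cache/appdata/plex/Library/Application Support/Plex Media Server/Media"
        docker start plex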