
aptalca

Community Developer
  • Posts: 3,064
  • Days Won: 3

Posts posted by aptalca

  1. Hello, I've recently replaced my write cache and, with Squid's help, was able to fix a corrupt docker image. My next challenge is my Apps tab: the view has changed and the Add button has disappeared (see attached picture). Any guidance, please?
     
    [attached screenshot: Unraid Apps tab with no Add button]


    Read two posts above yours
    • Upvote 1

  2. Give me a review of it in a week or so, after you get used to it.

    Sent from my LG-D852 using Tapatalk



    It wasn't a complaint or criticism, just an observation. It wasn't what I was looking for based on what I was used to from the earlier versions. Now that I know, it's perfectly fine (and it would have been fine if I were a new user).
  3. Also keep in mind that if you get back-to-back power cuts, where your server auto-starts while the UPS battery is already low and then you get another power cut, your server may not have enough time to shut down safely. I like to start the server manually (or through IPMI) after making sure the UPS battery has enough juice for another power cut.

  4. Hello

    Thanks for maintaining this wonderful docker.

    I have a Minecraft server running and use a tool called overview, which creates a "Google Maps"-like map. This tool runs on an Ubuntu VM and outputs all files into a folder. To share that, I'd like to use the nginx web server. My question is: what's the best way to mount that folder within the docker so it can be shared?


    Is the VM on unraid? You could probably set up a 9p share in the VM, save into that folder, and map that same folder into this container.
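    A rough sketch of the 9p route, assuming a libvirt-managed VM on unraid; the share name (overviewer_out) and all paths are hypothetical:

    ```shell
    # Host side (hypothetical): add a filesystem share to the VM's libvirt XML:
    #   <filesystem type='mount' accessmode='passthrough'>
    #     <source dir='/mnt/user/appdata/nginx/www/map'/>
    #     <target dir='overviewer_out'/>
    #   </filesystem>

    # Guest side: mount the 9p share so the map tool can write into it
    sudo mkdir -p /mnt/map_out
    sudo mount -t 9p -o trans=virtio,version=9p2000.L overviewer_out /mnt/map_out

    # Then map /mnt/user/appdata/nginx/www/map into the nginx container
    # as its web root (or a subfolder of it) in the container settings.
    ```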
  5. Does anyone know how to add additional top-level domains after this docker app has been configured? i.e. example.com AND example.org. If any commands need to be run or scripts modified, I can do that; I just need a little direction. Thanks!
     
    Edit: I'm referring to the Let's Encrypt portion of this (not nginx)


    This container only supports one domain.

    You could redirect the .org to the .com, though, if they point to the same web folder.
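    A sketch of that redirect, assuming both domains already resolve to this server; the server names are the poster's examples and the target scheme should match your site config:

    ```nginx
    # Send all example.org traffic to the example.com site
    server {
        listen 80;
        server_name example.org www.example.org;
        return 301 https://example.com$request_uri;
    }
    ```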
  6. I am getting this error when trying to connect from outside my home network: NET:ERR_CERT_AUTHORITY_INVALID, but it works fine within my network. I am using HSTS. Nothing has changed from a configuration standpoint to cause this. However, we did just have AT&T out here to install U-verse the other day and they were messing with the gateway. Does this behavior sound like an issue with LetsEncrypt or the gateway/router settings?
     
    EDIT:
    Think I found the issue. I hate AT&T. Apparently if you use wireless set-top boxes, they require port 443 and you cannot change that. Ridiculous.
    https://forums.att.com/t5/AT-T-Internet-Features/Forwarding-port-443-for-WHS-conflict-with-connectToCiscoAP/td-p/3365983


    Wow, that is pretty ridiculous. Can you bypass it and use your own router? That's what I did with Verizon: I turned my Verizon modem into a simple bridge, and my router behind it gets the DHCP lease directly from Verizon.
    • Upvote 1
  7. I'm trying to configure this to block access by country. I came across instructions on using the GeoIP module on Ubuntu, but not being very conversant with Linux, I'm having trouble getting this to work. Running nginx -V shows "--with-http_geoip_module=dynamic", so it's compiled with the right module, but it doesn't seem to have geoip-database and libgeoip1 installed.
     
    Any way to get this working, or do these modules need to be part of the letsencrypt container?

    GeoIP is an nginx module and is included in this image. You may have to enable it in the nginx config or the site config; I'm not sure which, as I haven't used it myself.
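    Enabling it would look roughly like the fragment below; the module .so path and GeoIP database path are assumptions and will differ by image, and the country codes are just examples:

    ```nginx
    # In the main nginx.conf, at the top level (outside http {}):
    load_module /usr/lib/nginx/modules/ngx_http_geoip_module.so;

    http {
        # Legacy GeoIP country database (from the geoip-database package)
        geoip_country /usr/share/GeoIP/GeoIP.dat;

        # Map blocked country codes to 1, everything else to 0
        map $geoip_country_code $blocked_country {
            default 0;
            CN 1;
            RU 1;
        }

        server {
            if ($blocked_country) {
                return 403;
            }
        }
    }
    ```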
  8. 2 minutes ago, Squid said:

    Switching it to user, it will still pick up everything and do a delta of the existing backup(s), and save the files according to the share's include / exclude / split levels

    Switching it to another disk in the plugin itself, and it won't know about the existing backup(s)

     

    That's what I thought. Thanks so much.

     

    I guess switching to another disk would require manually moving the existing files to the new disk first.

  9. Squid, quick question for confirmation. . .

     

    I have been using the backup plugin for a long time, and it was always set to back up to a share on a specific disk, disk 3. Now disk 3 is getting full, and I'm thinking about changing the target location to "user" instead. Anything I should worry about? The share in question does not use a cache disk, and the same location is available through the /mnt/user path. It should just pick up all the existing files and append, right?

     

    Thanks

  10. It's a Plex problem, rolled out with the new 1.5.3 update.

    https://forums.plex.tv/discussion/265492/transcoder-fails-when-transcode-is-on-a-network-share

     

    NOW WHY THE HELL DON'T THEY JUST ROLL BACK the update for docker? This screwed a lot of people! Kinda really frustrated with this.

     

    And I cannot delete my config folder. I have way too much custom metadata. It would take forever to get it back to how it is now.

     

    Anyone know the full release version number for 1.5.2? (e.g. 0.9.12.4.1192-9a47d21) I can't find it anywhere, so I can just put the tag on docker to get it back.

    This thread links to stable builds for ARM, but you can at least see the full version numbers in the links (I'm assuming the version numbers are the same for ARM and x86-64):

    https://forums.plex.tv/discussion/221444/unofficial-armhf-arm64-multiarch-debian-package-e-g-rpi-2-3-bpi-odroid#latest
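    Once you do find the full version string, pinning the container to it would look roughly like this; the tag below is a placeholder, and the linuxserver image name is an assumption:

    ```shell
    # Pull a specific Plex release instead of "latest"; substitute the
    # real full 1.5.2 version string for the placeholder tag.
    docker pull linuxserver/plex:SOME-FULL-VERSION-TAG

    # In unraid, the equivalent is editing the container's "Repository"
    # field, e.g.:
    #   linuxserver/plex:SOME-FULL-VERSION-TAG
    ```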

     

  11.  
    OK, the 2 GB limit has worked so far: when I run "docker stats JDownloader2" I can see the container uses "only" 2 GB of RAM. I use cache_dirs, and when all my files are cached, 3 GB of the 12 GB is occupied. But once I use JDownloader, the cached RAM grows by more than an additional 2 GB, at least according to the live stats. When a file is unpacked, the cached RAM increases even more. It is released again after unpacking, but no more than 5 GB should ever be used: 3 GB for my cached files and 2 GB for JDownloader. Can I still limit that somewhere?


    Unused memory is wasted memory. Why are you trying to limit it? These days OSes are smart enough to reallocate resources where necessary (except for VMs, which reserve their memory and don't share it).
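    To see that the growing "cached" figure is reclaimable rather than truly consumed, a generic Linux check (not specific to unraid or this container):

    ```shell
    # MemAvailable already discounts page cache the kernel can drop on
    # demand, so it is a better "free memory" figure than MemFree.
    grep -E 'MemTotal|MemFree|MemAvailable|^Cached' /proc/meminfo
    ```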
  12. Well I just fixed it... it really helps if you read the setup instructions completely. Ugh. I missed the line that said "set your library to config on first run." Once I did that, I was able to add the environment variable and map it to my existing library just fine. All my books are showing up in the docker and are being served out over 13579 just fine.


    Glad to hear it worked
  13. I just set up the RDP-Calibre docker, and was able to get a preexisting library mapped into it. When I open the WebUI, I see all my books and it's great. I'm not exactly sure how to make the library available outside the docker, though. I enabled the web server under preferences in Calibre and set the server port to 13579, added a username/password, and I've mapped docker port 8081 to 13579. When I go to saidin:13579, I get prompted for a username/password, and once I enter it, I end up at the normal web interface where you can view the library and download books. However, when I click on either "Newest" or "All Books", I get the error below. If I click on "Random Book" I get a 404 page saying this library has no books. How do I make my library available outside the docker?
     
    Error: No books found
    printStackTrace.implementation.prototype.createException@http://saidin:13579/static/stacktrace.js:81:13
    printStackTrace.implementation.prototype.run@http://saidin:13579/static/stacktrace.js:66:20
    printStackTrace@http://saidin:13579/static/stacktrace.js:57:60
    render_error@http://saidin:13579/static/browse/browse.js:134:18
    booklist@http://saidin:13579/static/browse/browse.js:271:29
    @http://saidin:13579/browse/category/allbooks:34:17
    .ready@http://saidin:13579/static/jquery.js:392:6
    DOMContentLoaded@http://saidin:13579/static/jquery.js:745:3

     



    Don't enable the server in the GUI. There is already a separate server instance running on the other port. You probably didn't set the library path variable in the container settings; it's described on the Docker Hub page.
  14. This docker seems to conflict with browser cookies that some other unraid dockers store for the same IP address.
    I get stuck on the login screen even though the username and password are correctly populated. I have to clear the cookies before I can get past the login screen.
     
    Is this a known issue that anyone is looking to resolve?

    If you have Radarr installed, that might be why. You can set their passwords to be the same.
  15. Thanks for the offers to help, guys, but I got it fixed. Looks like I didn't have to wait a week after all; when I updated my dockers this morning, it worked without issue. It would be nice if the docker were at least allowed to stay running, so the reverse proxy would still work even if there are issues with the certificates.

    The container should run even if the certs are not generated, but nginx won't start because the config requires those certs.

    If you remove the two lines defining the certs from your site config, nginx should start.
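    For reference, the two lines in a typical site config look something like this; the /config/keys path is an assumption based on the common linuxserver layout:

    ```nginx
    # Comment these out if the certs have not been generated yet:
    # ssl_certificate /config/keys/letsencrypt/fullchain.pem;
    # ssl_certificate_key /config/keys/letsencrypt/privkey.pem;
    ```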
  16. Pretty sure I royally screwed something up.

     

    I tried to add a new subdomain and regenerate the certs, but I kept receiving an "unknownhost" DNS error that was newly logged since I made the change. While troubleshooting the DNS, I restarted the docker a few times and ended up getting this error:

     

    There were too many requests of a given type :: Error creating new authz :: Too many invalid authorizations recently.

     

    After a bit of googling, it looks like I now have to wait a week, and the Let's Encrypt docker won't even start now. There should really be a warning in the description about the rate limits, and instructions on how to put this in test-cert mode, so others don't make the same mistake I did. Is there any way around this to at least get things up and running again?


    It's a Let's Encrypt thing and outside of our control. You can contact them about it.

    Or you can get a free subdomain from DuckDNS and create as many sub-subdomains as you need.

    As for the DNS error, I can't say anything without logs or more info on your setup.