aptalca

Community Developer

Everything posted by aptalca

  1. The key should be EXTRA_DOMAINS; don't forget the underscore in the middle. If you edit one of the other variables, like PUID, you can see the format.
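     Purely as an illustration (not from the post), here is roughly how it might look on the docker run command line, assuming the linuxserver/letsencrypt image; the domain names are placeholders and other required variables are omitted:

         docker run -d --name=letsencrypt \
           -e PUID=1000 -e PGID=1000 \
           -e URL=example.com \
           -e SUBDOMAINS=www \
           -e EXTRA_DOMAINS=example.org,sub.example.net \
           linuxserver/letsencrypt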
  2. For the Google Domains DNS setting, see the second paragraph at this link: https://haskovec.com/ssl-certificates-google-domains/
  3. FYI, the new version of this image supports multiple domains through a new environment variable. Details are on the Docker Hub page.
  4. Every DNS provider expects you to update your IP with them when it changes, including DuckDNS. Otherwise, how would they know what your new IP is? Most of them have an API and provide tools so you can automate that part if you want to.
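     As an example of that kind of automation (my own sketch, not from the post), DuckDNS exposes a simple HTTP update endpoint; a cron entry along these lines refreshes the record every five minutes, with the subdomain and token as placeholders:

         # leaving ip= empty makes DuckDNS use the address the request came from
         */5 * * * * curl -s "https://www.duckdns.org/update?domains=mysubdomain&token=my-duckdns-token&ip=" >/dev/null 2>&1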
  5. Ouch, that hurt. Jokes aside, great job on the VNC baseimage!
  6. I have a few containers on my server; most are test versions. Only the main one, which port 443 on the router is forwarded to, will auto-renew its certs. For the others, when I get the email notice saying the cert is about to expire, I forward port 443 to the expiring one and restart the container. After renewal, I switch the port forward back to the main one.
  7. It's certainly a DNS setting issue. I have not used Google Domains before, so I can't help you there. Perhaps you can search the Let's Encrypt forums for Google Domains help.
  8. The Drone Factory has been on its way out for over a year. It had been optional and just got deprecated in Sonarr. Users can easily add a volume map for it if they still need it.
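     As a rough illustration (the host path here is made up), the volume map would just be one more -v entry on the Sonarr container so the drone factory folder is visible inside it:

         docker run -d --name=sonarr \
           -v /mnt/user/appdata/sonarr:/config \
           -v /mnt/user/downloads/dronefactory:/dronefactory \
           linuxserver/sonarr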
  9. I just got done testing. I am using a single cache drive, btrfs, a 500GB Samsung EVO (3D NAND and pretty fast). I tried a copy from cache to cache with a 24GB image file. The server didn't break a sweat; load average did not go up to more than 6, which is perfectly fine on my 8-core, 16-thread machine. Then I tried a sab download of a 7.5GB file. During unrar, load average was at about 1.6. Then, while radarr was renaming and moving (cache to cache), load average went up to about 4 and stayed there. I'll try a larger file download in the near future to simulate the scenario where I had the issue with a cache pool. But so far, it looks like my issues are gone with a single cache drive.
  10. I don't think you had them set up right from the start. Sonarr and Radarr have always had completed download handling. You were never supposed to use the Drone Factory for things that are fetched by Sonarr and Radarr, only for things that you added to sab yourself manually.
     The important thing to keep in mind is that Sonarr and Radarr communicate with sab through its API and retrieve the location of the files. They then retrieve the files from that location and process them.
     First, you need to make sure that your volume mappings are consistent between the containers. From your screenshots, they seem to be right (/mnt/user inside and outside in all three).
     The second thing is, you may have a post-processing script. If so, make sure that the script does not move the files to a different location that sab doesn't know about, and that it doesn't rename the files or the folders. Your second screenshot shows that sab thinks the downloaded files are at /mnt/user/Usenet/Complete/Autoprocess Movies/<movie name> but Radarr cannot find them at that location. Either they are moved or they are renamed, and that is the problem. I do notice that some of the folder names have an additional "Obfuscated" tag in their name; is that removed by a post-processing script?
     I have sab download the files as-is into a temp folder, and Radarr retrieves them from there just fine, with no post-processing script in between. That is likely your issue.
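     To illustrate the volume-mapping point, a hypothetical pair of run commands where both containers map the same host path to the same container path, so the location Radarr gets from sab's API is valid inside its own container:

         docker run -d --name=sabnzbd -v /mnt/user:/mnt/user linuxserver/sabnzbd
         docker run -d --name=radarr  -v /mnt/user:/mnt/user linuxserver/radarr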
  11. Unfortunately I had to go out of town right after, so I didn't get a chance to do the tests yet, but I will let you know when I do.
  12. @johnnie.black thanks so much; with your instructions, I was able to switch to a single cache drive from a 4-SSD RAID1.
  13. @johnnie.black what is the easiest way to go back to a single cache drive? I just want to do some comparison tests. Stop all VM and docker services, copy all cache files to the array, stop the array, and remove all cache drives but one? Any other setting changes? Thanks
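     Not part of the original post, but as a sketch of the "copy all cache files to the array" step (assuming Docker and VM services are already stopped, and /mnt/disk1 is an arbitrary array disk chosen as the destination):

         rsync -avh --progress /mnt/cache/ /mnt/disk1/cache-backup/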
  14. I don't believe it's possible, due to some ports being hardcoded in Plex.
  15. You're on the right track. Just keep in mind that the "location" is relative to the root. The way you set it up, when a user goes to http://yoururl/public/index.html the webserver will try to serve the file located at /config/publicwww/public/index.html. Other things to keep in mind are restarting the container after changes to the config files, and clearing the browser cache.
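     For reference, a minimal sketch of what that looks like in the nginx site config, using the paths from the post; with a root directive, nginx appends the full request URI to the root, which is why /public/index.html ends up under /config/publicwww/public/:

         location /public {
             root /config/publicwww;
             # request: /public/index.html  ->  served file: /config/publicwww/public/index.html
         }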
  16. I started noticing this in the last few months. But then again, I also wasn't downloading such large files before, so I can't be sure whether the issue existed in the past.
  17. If you don't have that specific tuner, it won't affect you
  18. After my last post I realized that my 2 VMs went into a paused state (they are never supposed to sleep). I guess I didn't realize this before because most of the files I was downloading previously were in the 4-6GB range so the issue was minimal. Lately I started downloading larger files in the 10-24GB range and now the issues are a lot more pronounced.
  19. Load average goes up to 42 and stays there during transfer
  20. Hi, I have a btrfs cache pool of 4 SSDs, which hosts the docker image as well as downloaded data. I noticed that some of the docker apps were occasionally having issues like "database locked" and "write error, disk full?", etc.
     After thorough testing, I realized that whenever there is a large file transfer on the cache pool, where the file is read from and written to the cache drives, the server temporarily locks up. The unRAID GUI is unreachable, the docker apps stall and their GUIs are unreachable, and ssh access is slow and occasionally hangs in the middle of a basic operation like "ps -ef". This continues until the file transfer is over, and a few minutes later everything is back to normal.
     This seems to happen with files larger than about 10GB. A typical scenario is: sabnzbd downloads a file over 10GB, and during unrar of the file to the cache pool, everything else is locked up. Then, when the file is being moved by radarr from the sab temp folder to the Movies folder on the cache pool, again, everything else is locked up.
     Because of the lock-up, it is difficult to troubleshoot, and I don't know what else to try or test. I have sabnzbd only using certain cores and not all, so the issue should not be due to high cpu utilization during unrar. Therefore I believe it is due to disk io that is fully taken up by the file transfer.
     Attached is my diagnostics, which should include info from this morning's 20GB file transfer (the unrar operation failed a couple of times with a disk full message although there was 100GB of free space, but then succeeded on the third try). I would appreciate any ideas or suggestions. Thanks
     PS. File transfers from the cache pool to the array are completely fine. Mover does not affect general server operations, and neither does a regular copy to the array.
     tower-diagnostics-20170627-1153.zip
  21. It will always be community repos for me. . . my first love
  22. @gfjardim just fyi, I ran another preclear this time with the script 2017.06.23b and it completed successfully on the same 8TB that previously failed three times with earlier versions.
  23. What chbmb means is that if you don't tell us exactly what you did, we can't help you figure out why it's not working. Post your docker run command and the container log, and we can help you troubleshoot. My guess is that you are trying to port forward and the default port is taken up by something else, but it's just a wild guess since I have no idea what settings you used.
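     Purely to illustrate the port-clash guess (container name, image and port here are hypothetical), remapping the host side of the port and pulling the log the post asks for would look like this:

         docker run -d --name=mycontainer -p 8443:443 some/image
         docker logs mycontainer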
  24. Sorry, I forgot to respond to that. There were a couple of issues. The URL prefix should have been just the prefix, with no forward slash. And the prefix option only applied to the Calibre webserver, not the Calibre GUI; the Calibre GUI already has a prefix setting that can be used in reverse proxy situations. With the latest update, the URL prefix option has been deprecated; you can set the prefix in the Calibre GUI's webserver settings instead.
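     For reference, a rough nginx location block for that kind of prefixed reverse proxy; the prefix, IP and port are placeholders and would have to match whatever is configured in the Calibre webserver settings:

         location /calibre/ {
             proxy_pass http://192.168.1.100:8081/calibre/;
             proxy_set_header Host $host;
             proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
         }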