Everything posted by thestraycat

  1. That's interesting. I was wondering if that was the case, as nothing had changed with my config and the containers hadn't updated in the time window of the issue... Were you affected, ijuarez, and if so, is yours working now?
  2. Hi everyone, a few problems with Plex and LetsEncrypt today. I noticed this in my LetsEncrypt container log. Anyone experienced anything similar? Nothing's changed; it was working yesterday. It obviously references rate limits, so I was wondering whether my network could have got confused/stalled out a bit and the container kept attempting to renew the cert until it hit a rate limit or similar? Anyone had to fix anything like this?

        Processing /etc/letsencrypt/renewal/removedmyrealone.co.uk.conf
        - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
        Cert is due for renewal, auto-renewing...
        Plugins selected: Authenticator standalone, Installer None
        Running pre-hook command: if ps aux | grep [n]ginx: > /dev/null; then s6-svc -d /var/run/s6/services/nginx; fi
        Renewing an existing certificate
        Attempting to renew cert (removedmyrealone.co.uk) from /etc/letsencrypt/renewal/removedmyrealone.co.uk.conf produced an unexpected error: urn:ietf:params:acme:error:rateLimited :: There were too many requests of a given type :: Error creating new order :: too many failed authorizations recently: see https://letsencrypt.org/docs/rate-limits/. Skipping.
        All renewal attempts failed. The following certs could not be renewed:
        /etc/letsencrypt/live/removedmyrealone/fullchain.pem (failure)
        - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
        All renewal attempts failed. The following certs could not be renewed:
        /etc/letsencrypt/live/removedmyrealone/fullchain.pem (failure)
        - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
        Running post-hook command: if ps aux | grep 's6-supervise nginx' | grep -v grep > /dev/null; then s6-svc -u /var/run/s6/services/nginx; fi; cd /config/keys/letsencrypt && openssl pkcs12 -export -out privkey.pfx -inkey privkey.pem -in cert.pem -certfile chain.pem -passout pass: && cat {privkey,fullchain}.pem > priv-fullchain-bundle.pem
        Hook command "if ps aux | grep 's6-supervise nginx' | grep -v grep > /dev/null; then s6-svc -u /var/run/s6/services/nginx; fi; cd /config/keys/letsencrypt && openssl pkcs12 -export -out privkey.pfx -inkey privkey.pem -in cert.pem -certfile chain.pem -passout pass: && cat {privkey,fullchain}.pem > priv-fullchain-bundle.pem" returned error code 1
        Error output from if:
        cat: {privkey,fullchain}.pem: No such file or directory
        1 renew failure(s), 0 parse failure(s)
        [cont-init.d] 50-config: exited 0.
        [cont-init.d] done.
        [services.d] starting services
        [services.d] done.
        nginx: [warn] could not build optimal proxy_headers_hash, you should increase either proxy_headers_hash_max_size: 512 or proxy_headers_hash_bucket_size: 64; ignoring proxy_headers_hash_bucket_size
        nginx: [warn] could not build optimal proxy_headers_hash, you should increase either proxy_headers_hash_max_size: 512 or proxy_headers_hash_bucket_size: 64; ignoring proxy_headers_hash_bucket_size
        Server ready
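     For what it's worth, one low-risk way to poke at this is certbot's dry-run mode, which renews against the Let's Encrypt staging endpoint and doesn't count toward the production rate limits (a sketch only; the container name, and whether certbot is on the path inside the linuxserver image, are assumptions on my part):

        # Dry-run renewal against the staging servers; safe to repeat:
        docker exec -it letsencrypt certbot renew --dry-run
        # The "too many failed authorizations" limit resets after an hour,
        # so a real renewal can be retried once that window has passed:
        docker exec -it letsencrypt certbot renew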
  3. Hi guys - A quick question about bulk book import... I'm currently suffering from having my docker.img file totally fill up on large ebook imports. I run a 50GB docker.img, and when I point a large zip file of comics etc. at Calibre it fills up very quickly. Would I be right in assuming that /tmp for Calibre needs to be mapped outside the image? Calibre's ebook import seems to grind to a halt when the Docker image is totally full (not to mention it affects other running containers). I haven't been able to confirm which temp folder Calibre uses for its mass imports, but was assuming /tmp from within the container. I know Calibre also has variables that can define this path (https://manual.calibre-ebook.com/customize.html), but I'm not sure whether they are available when running Calibre as a container, or whether they need to be built/passed through the Unraid template to be used? Any help would be awesome...
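     For the record, the idea I'm describing would look something like this (a hedged sketch: the host paths are illustrative, and I haven't confirmed this particular image honours CALIBRE_TEMP_DIR, though the variable itself is documented in the calibre manual linked above):

        # Map the container's /tmp out to array/cache storage so large
        # imports spill outside docker.img (host paths illustrative):
        docker run -d --name=calibre \
          -v /mnt/user/appdata/calibre:/config \
          -v /mnt/user/appdata/calibre/tmp:/tmp \
          -e CALIBRE_TEMP_DIR=/tmp \
          linuxserver/calibre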
  4. Amazing! - Really looking forward to getting this up...
  5. I get the same error... Anyone got a fix for where to set the hostname?
  6. Firstly, sorry for the newb question... but I'm interested in installing the pack purely for iperf, and was a little worried about it modifying/removing any dependencies that Unraid may rely on, although I'm sure they install to a different place? Not quite sure how 'plugins' install in terms of isolation from the OS, but I assume they don't have the isolation of containers or VMs at all? Just looking for peace of mind, really, that they're safe to install and later remove without doing damage to the system in any way?
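     From what I've since read (worth verifying): Unraid's root filesystem lives in RAM and is rebuilt from the flash drive at boot, so packages a plugin installs disappear on reboot unless the plugin reinstalls them. A couple of places to look, as a sketch:

        # Plugin definitions that persist on the flash drive:
        ls /boot/config/plugins/
        # Slackware package database for what's currently installed in RAM:
        ls /var/log/packages/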
  7. I'm assuming that Radarr's drone factory won't process folders/files that haven't been created within Radarr. As in, Radarr won't auto-process and auto-import content from the drone factory and make an entry for the movie unless it was pre-staged in Radarr?
  8. Hi Squid - Totally agree, and that side is fine. Importing and processing of media works like a dream between Radarr, Sonarr, SABnzbd and Deluge; that's not the problem for me. It's using the drone factory feature as an auto-processing watch folder for downloads that happen outside of Radarr/Sonarr that is the issue.
  9. Pretty sure I've tried it, but I'll give it a go now! I take it you mean change it on Radarr, Sonarr and SAB?

     UPDATE: Just turned off Radarr and remapped /downloads -> /mnt/cache/appdata/downloads/. Haven't done SABnzbd or Sonarr yet, as they shouldn't have anything to do with this test. Still nothing. Clicking on the drone factory button under "Wanted" did take a minute or so to finish, but nothing was processed. I have around 160 folders (each a movie) in the drone factory at the moment, for what it's worth; I've also tried moving them all out, keeping one film in the folder, and restarting. Nothing. Can anyone post their container mappings, Radarr/Sonarr drone factory path, and potentially their remote mappings for me to try emulating?

     UPDATE: Renamed the old Radarr drone factory folder to radardrone2, created a new folder called radardrone, and copied one movie with a reasonably clean name into it for post-processing. Nothing. And no mention of it in the logs after forcing the drone factory to scan.
  10. Hi guys, I have a weird issue with Radarr/Sonarr and the drone factory not picking up folders for processing (not sure if it's even still meant to be working, as there was talk of deprecating it?). My paths are set as below in Unraid:

     Sonarr (Unraid config): /downloads --> /mnt/user/appdata/downloads/sonardrone/
     Radarr (Unraid config): /downloads --> /mnt/user/appdata/downloads/radardrone/

     And within the Radarr/Sonarr GUI, under drone factory, they're set as follows:

     Sonarr Drone Factory (Sonarr GUI): /downloads/sonardrone/
     Radarr Drone Factory (Radarr GUI): /downloads/radardrone/

     Everything's on the same server (192.168.1.125) and I'm fresh out of ideas! Nothing in the logs corresponding to anything usable! I threw some attachments in to show that I can browse to the folders from within Radarr and Sonarr and can see the folders of films waiting to be processed, but for some reason nothing happens when I manually trigger the drone factory. Does the config look right to you? I'm starting to think the feature has been quietly deprecated... Can you see anything strange in the screenshots below?
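     For reference, a sketch of how the two layers of paths compose (paths illustrative, and this is just how I understand Docker mappings, not a confirmed fix): the drone factory path entered in the GUI is resolved inside the container, so it stacks on top of the Unraid mapping.

        # Unraid template mapping (container path -> host path):
        #   /downloads  ->  /mnt/user/appdata/downloads/radardrone/
        # A GUI path of /downloads/radardrone/ therefore resolves on the host to:
        #   /mnt/user/appdata/downloads/radardrone/radardrone/
        # which only exists if there's a nested radardrone folder. To see
        # exactly what the app sees:
        docker exec -it radarr ls /downloads
        docker exec -it radarr ls /downloads/radardrone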
  11. Hi guys, I'm having some strange permission issues with Sonarr. When Sonarr creates files in the shares I have created, it creates them as:

     Owner Name: nobody
     Group Name: users

     Sonarr is set with defaults as far as I can tell:

     Set permissions = Yes | File chmod mask: 0644 | Folder chmod mask: 0755

     However, from a Windows machine I can't edit the files it creates: "You require permission from Unix user\nobody to make changes to this file." Any ideas? I've checked the file permissions from Unraid for files created by CouchPotato, and they show up the same as files created by Sonarr (Owner Name = nobody | Group Name = users), yet files created by CouchPotato edit fine from Windows. From Windows, the security tab on a CouchPotato file shows:

     Everyone - Read & Execute, Read, Write, Special permissions greyed
     nobody - Read & Execute, Read, Write, Special permissions greyed
     users - Read & Execute, Read, Write, Special permissions greyed

     However, for Sonarr files it shows:

     Everyone - Read
     nobody - Read, Write
     users - Read

     The shares look identical from within Unraid. Any ideas why the NTFS/ACLs aren't being set properly? I've tried setting Sonarr to:

     Set permissions = Yes | File chmod mask: 0777 | Folder chmod mask: 0777

     and

     Set permissions = No | File chmod mask: 0777 | Folder chmod mask: 0777

     Am I meant to be specifying "99" & "100" or anything in the following values within Sonarr?

     chown user
     chown group
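     In case it's relevant, a hedged sketch of the usual Unraid arrangement, assuming this is a linuxserver.io-style container (the variable names may differ otherwise): nobody:users is 99:100 on Unraid, normally passed in via the container environment, and files already created with tight modes can be corrected by hand.

        # Container environment (Unraid template fields):
        #   PUID=99    -> files created as user "nobody"
        #   PGID=100   -> files created as group "users"
        # One-off fix for existing files (share path illustrative):
        chown -R nobody:users /mnt/user/TV
        chmod -R u=rwX,g=rwX,o=rX /mnt/user/TV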
  12. Great stuff. It's just a matter of reassigning the new slot to the cache drive when it comes back up?
  13. Right - Thanks for the confirmation - What are the consequences of powering down -> swapping the SSD over to the mainboard SATA0 port -> turning back on? Will I need to play around with anything beforehand?
  14. Hi guys, just looking for confirmation. I'm running a 1TB EVO 850 as my cache/docker/VM drive, formatted with BTRFS, running off my Dell H200 HBA on recent IT firmware (as this gives me SATA3 connectivity), with the Dynamix TRIM plugin installed. I'm getting the following error via email notifications:

     fstrim: /mnt/cache: the discard operation is not supported

     Is this because I'm running BTRFS, or because the drive is connected via the HBA on IT firmware? Lastly, should I be running on a daily or weekly schedule? Is it a big deal if I can't issue the fstrim command to my SSD?
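     For anyone else chasing the same error, a hedged way to check from the Unraid shell whether the kernel sees discard support through the controller (the device name is a placeholder):

        # Non-zero DISC-GRAN / DISC-MAX means the device advertises TRIM
        # through its current controller path:
        lsblk --discard /dev/sdX
        # Manual test against the mounted cache filesystem:
        fstrim -v /mnt/cache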
  15. Hmmm, anyone else getting the problem where clicking on "Customization" breaks all of the formatting and the container can't recover, even after a reboot? Tried it twice now, blowing away the container, deleting the persistent storage and reinstalling... WEIRD.
  16. Anyone had any luck running it behind an Apache reverse proxy?
  17. Would love to have a convenient self-contained MediaWiki container if anyone fancies having a go... I run this particular container on my standalone Docker host without issue, but have no idea how to get it over to Unraid. I think it probably just needs redirecting to install in /appdata: https://hub.docker.com/r/appcontainers/mediawiki/~/dockerfile/
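     Roughly what I mean would be something like this (a sketch only: the container-side path is a guess, and the image's Dockerfile linked above would confirm where it actually keeps the wiki files):

        # Map the image's data out to Unraid's appdata (container path assumed):
        docker run -d --name=mediawiki \
          -p 8080:80 \
          -v /mnt/user/appdata/mediawiki:/var/www/html \
          appcontainers/mediawiki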
  18. Ah right, thanks for the confirmation - Anyone else get it? Or am I on my own with it...
  19. Just noticed this error in the logs when I turn off the container. Is it to be expected?

     Nov 6 17:10:13 MediaServer kernel: transmission-da[26536]: segfault at 48 ip 0000556061c5ee19 sp 00002ae9ef40b9e8 error 4 in transmission-daemon[556061c2e000+72000]
  20. Any chance I can get a screenshot of your container config?
  21. Anyone having a problem making changes to Transmission stick, for example adding a blocklist URL or changing values? Even when I edit the settings.json file with the container off and then turn the container on, it still overwrites the settings, which I believe is standard behaviour for the transmission daemon... But should I be turning off the transmission service with the container on, editing the settings.json file in the /defaults folder (in the container), and then re-enabling the service? I thought that, with a copy of settings.json kept in the appdata/transmission folder (outside the container), I should have been able to modify it there?
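     For anyone comparing notes, the sequence I understand to be required (a sketch; the container name and appdata path are illustrative): transmission-daemon rewrites settings.json on shutdown, so edits only stick if the daemon is not running when the file changes.

        # Stop the whole container so the daemon can't overwrite the file on exit:
        docker stop transmission
        # Edit the persistent copy outside the container:
        nano /mnt/user/appdata/transmission/settings.json
        docker start transmission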
  22. I did my tests on a completely different Docker host with no previous install of SABnzbd... Everything's fine until I tick "Download all par2 files" and restart; then I lose the greyed-out multicore tick. Even when I untick "Download all par2 files" and restart, the greyed-out multicore tick doesn't return. I have a sneaking suspicion that making amendments to any of the related PAR options would do the same...
  23. @SparklyBalls Just did the exact same test as yourself, following your vid letter for letter... and got the same result as you. However, when I turn on "Download all par2 files" under Settings -> Switches, I experience the issue I reported: I lose the greyed-out tick. Can you see if it's the same your end? EDIT: Strangely, when I remove the tick from "Download all par2 files", the tick under "Enable Multicore PAR2" doesn't come back either!
  24. Sorry, I mean the parameter in my screenshot. I have par2 and ionice parameters set in SABnzbd, but SABnzbd complains of problems with my par2 parameters (which should be valid if it is indeed using multicore, as these parameters force multicore). I was under the impression that if SAB found a compatible multicore par2 executable it would put a tick in the greyed-out multicore box as well? If anyone is running the latest pull of this container, can they check how theirs shows up under Settings -> Switches?
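     If anyone wants to compare, a hedged way to see which par2 binary the container actually ships (container name illustrative; the banner wording varies between par2cmdline builds):

        # Locate and identify the par2 build inside the container;
        # running it with no arguments prints a usage banner that usually
        # indicates whether it's a multicore (tbb) build:
        docker exec -it sabnzbd which par2
        docker exec -it sabnzbd par2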