Aerodb

Everything posted by Aerodb

  1. I'm having a similar issue and my log looks the same. I rolled back to a backup from a few days ago, before the issue appeared, and got the same result. Very interested to hear if anyone else has run into this.
  2. EDIT: if you have this issue on Unraid, check the SWAG appdata directory etc/letsencrypt/live to be sure you don't have a folder with a -0001 suffix. I renamed the original folder to something else and renamed the -0001 folder back to the original name, and it started working right away. It seems there was some sort of permission or access issue. (Example: with two folders named examplefolder-0001 and examplefolder, I changed examplefolder to examplefolder-01 and examplefolder-0001 to examplefolder. It worked right away and the SWAG log showed no errors.)
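     In command form, the fix was roughly this (the appdata path is a placeholder for wherever your SWAG config lives, and the folder names are from the example above):
     ```bash
     # Placeholder path and folder names - substitute your own domain's folders.
     cd /mnt/user/appdata/swag/etc/letsencrypt/live
     mv examplefolder examplefolder-01      # park the original folder under a new name
     mv examplefolder-0001 examplefolder    # give the -0001 folder the original name
     ```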
  3. Hello all, I have a new error. I think I have an idea what the issue is, but I'm unsure how to resolve it:
     nginx: [emerg] cannot load certificate "/config/keys/letsencrypt/fullchain.pem": BIO_new_file() failed (SSL: error:02001002:system library:fopen:No such file or directory:fopen('/config/keys/letsencrypt/fullchain.pem','r') error:2006D080:BIO routines:BIO_new_file:no such file)
     When I check that directory, there is no file named fullchain.pem, but I do see priv-fullchain-bundle.pem. I suspect this is a consolidated file, and my thought is to point SWAG at it to resolve this, but I haven't been able to find which config to edit. Any guidance is greatly appreciated.
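     Put differently, I'm looking for whichever config file references that missing path; a search along these lines should surface it (the appdata path is a placeholder for wherever your SWAG config lives):
     ```bash
     # Placeholder host path - adjust to your SWAG appdata location.
     grep -r "fullchain.pem" /mnt/user/appdata/swag/nginx/
     ```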
  4. Hey all, I was thinking that my UPS would run much longer if certain other Dockers were not running during a power outage, and I figured User Scripts would be my best bet for accomplishing this (rough sketch below). Has anyone attempted this or found any resources for it?
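     Something like this is what I have in mind, as a rough, untested sketch (container names are placeholders, and I'm assuming Unraid's built-in apcupsd, whose apcaccess tool reports ONBATT while on battery):
     ```bash
     #!/bin/bash
     # Untested sketch for the User Scripts plugin - container names are placeholders.
     NONESSENTIAL="unmanic youtubedl-material"

     # apcaccess ships with apcupsd; STATUS reads ONBATT while on battery power.
     if apcaccess status | grep -q "ONBATT"; then
       for c in $NONESSENTIAL; do
         docker stop "$c"
       done
     fi
     ```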
  5. I do have that port forwarded in the router, but I thought that was for inbound WAN traffic to the LAN IP. I'm talking about reaching the Plex server from another computer on the same LAN, same subnet, same VLAN. All requests fail to load in the browser when I attempt to load server-ip:32400, but I can reach all the other Dockers on the server via the server IP and that container's port number. Plex is the only one I can't reach directly across the LAN; I have to manage it via the plex.tv URL.
  6. Thank you for this update. It raises a question: does this remove the need to strip subs from the container/content before this plugin runs?
  7. That makes total sense. I can also confirm that I have used a container to move files, so that's VERY likely what happened. Thank you!
  8. Can confirm this did fix the issue, thank you so much. Two follow-up questions: 1) Do we know how this issue starts? I can assure you this share has never been allowed to use cache. 2) Will the "move all non-cache shares to array" script ever be adapted to handle this? I'm asking whether it's possible to do, rather than whether it's being worked on.
  9. So after changing the share associated with the files on the cache pool to Yes (use cache), I started the Mover. The Mover finished and did not move the files; they remained on the cache pool drives. I really think this is related to the issue I mentioned. I don't think I made it up; I must have read somewhere that there was an issue with having multiple cache pools and running the Mover on shares that have a space in the share name.
  10. 1) Yes, the share these files are associated with is set to Cache: No, yet they are currently on the cache drive. 2) I don't believe they are open; there are quite a few of them, and I have no idea what would keep them open for this long. Some have been on the cache pool for longer than the machine has been up (there have been restarts since they were written to the pool). 3) I'm not sure how to answer this. Squid posted some user script templates/examples that I used and have had success with so far. I can provide the user script code if you think it would be helpful.
  11. Thank you for the advice. I was thinking about running it in a VM on a small Unraid machine that will only run network apps, so I suspect needing to reboot it will be rare. Also, unfortunately, I have already acquired my network card, a two-port card with an Intel 82576 chip. BUT I'm wondering if I could use the motherboard NIC as the third sync port, should I choose to set up a secondary pfSense VM on my primary multi-use Unraid hardware. So maybe I could run pfSense on this standalone box and fail back to my main machine in the event of an outage or planned maintenance?
  12. Hey all, I have User Scripts set to run "Move Array Only Shares to Array" daily, and I also just ran it manually, but I still have share data on my (3rd) cache pool. The related share has cache set to No, and the Mover doesn't seem to resolve this either. I feel like I read somewhere that there was a bug with multiple cache pools and spaces in share names, but I'm curious if there is a solution anyone can offer. As a last resort, if you know how to move these files from the pool to the array without breaking anything, via another Docker or a command, I could attempt that (see the sketch below for the kind of thing I mean).
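     The manual fallback I have in mind is something like this untested sketch (pool name, share name, and target disk are all placeholders, and I'd stop anything writing to the share first):
     ```bash
     # Untested sketch - pool, share, and disk names are placeholders.
     # Copy from the pool to a specific array disk, preserving attributes...
     rsync -avX "/mnt/cache3/Example Share/" "/mnt/disk1/Example Share/"
     # ...then remove the source only after verifying the copy.
     rm -r "/mnt/cache3/Example Share"
     ```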
  13. Hello all. I'm looking for a way to specify a port for the Plex Docker to run on. I've had an issue where I can't reach my Plex Docker instance directly at LAN-IP:32400; I am only able to manage the server through the plex.tv URL. I believe this is due to the lack of a port assignment in the Docker config, but when I added that field it didn't change/fix the issue. I have the port configured in the Plex settings, but I believe the server is not responding to the requests, since the port mapping I added to the Docker is likely not the correct way to do this (the sketch below shows the kind of mapping I mean). Any guidance or troubleshooting is greatly appreciated.
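     For reference, the mapping I tried is the bridge-mode equivalent of this sketch (image and host paths are placeholders; many Plex templates use host networking instead, in which case no -p mapping applies):
     ```bash
     # Sketch only - image and host paths are placeholders.
     # In bridge mode, -p HOST:CONTAINER publishes Plex's default port on the LAN IP.
     docker run -d --name plex \
       --net=bridge \
       -p 32400:32400 \
       -v /mnt/user/appdata/plex:/config \
       linuxserver/plex
     ```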
  14. I see your point and it makes sense. I'm not really attached to one container or another, and the same goes for subs being within the container or as standalone files. I was mostly using Unmanic to have a uniform library in H.265 for added storage compression. Anything else is an added extra, but I'm excited to see what else I can use Unmanic for.
  15. To be sure I understand your meaning: would this be an issue if I just switched to MKV containers rather than MP4?
  16. I assumed this issue was the same for me. I implemented the strip-images plugin as you advised (first in the worker queue), and I had the same issue when I re-added the job that had previously failed. Log attached. Please let me know if there are any troubleshooting steps I should take. Failed Job 9.24.21.txt
  17. Sorry for the delay, I just got myself a newborn baby and I'm still learning to keep up, lol. Yes, this worked perfectly! I'm still learning a ton on this too, but thank you!
  18. I'm getting this same issue (while CPU encoding): far more failed encoder jobs, and most jobs (failed or successful) each show this error about 40 times:
     set_mempolicy: Operation not permitted
  19. Hello all, I have been toying with the youtubeDL-material Docker and have had some issues with the media naming. It seems the upload dates are not replicating properly from YouTube, or perhaps the date I see under the YouTube video isn't the field the Docker references (maybe it's the actual upload date and not the published date, or something). I was hoping to use the "custom file output" function, but after reading the documentation I'm still a bit lost. I'll link to it below; if anyone can advise me on what I should input, I would greatly appreciate it. https://github.com/ytdl-org/youtube-dl/blob/master/README.md#output-template (under the "format selection" segment about halfway down) Thank you in advance.
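     Based on the linked README, the kind of template I'm experimenting with is below (a sketch: upload_date is a documented youtube-dl template field, but I haven't confirmed how youtubeDL-material passes it through, and the URL is a placeholder):
     ```bash
     # upload_date is YYYYMMDD per the README's output template section.
     youtube-dl -o '%(upload_date)s - %(title)s.%(ext)s' 'https://www.youtube.com/watch?v=VIDEO_ID'
     ```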
  20. I mapped it to a share as you detailed in the setup guide at the beginning of this thread. However, a new issue has come up... Unmanic runs much faster now and seems to get more done in the same time, but once it fills half my RAM it sits idle until I restart the Docker. I don't think it is clearing out old files once the work on a job is completed.
  21. So if you're advising that the mapping be case-accurate, yes, I have confirmed that is done. It seems the Docker author has mentioned that I have done something wrong with my mapping to the cache; I'm still trying to determine what he means. I'm not encoding to RAM, since I only have 32 GB and use most of that for Plex transcoding.
  22. "Sounds like you have not set a mapped volume for your cache" Can you elaborate a bit? If you're talking about the Encoding Cache Directory, I left that blank since I don't have much RAM to spare. Should this be mapped to a folder? Would that limit how much space it uses in the docker.img?
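     For anyone landing here later, the mapping being discussed would look something like this in plain Docker terms (a sketch: the host path is a placeholder, and /tmp/unmanic as the container-side cache path is my assumption, so check the container's own docs):
     ```bash
     # Sketch - host path is a placeholder; /tmp/unmanic as the cache path is an assumption.
     # Mapping the cache to a host folder keeps temporary encodes out of docker.img.
     docker run -d --name unmanic \
       -v /mnt/cache/appdata/unmanic-cache:/tmp/unmanic \
       josh5/unmanic
     ```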
  23. Hey sir, I found that Unmanic was filling my Docker image up; it was taking up 50+ GB of space in the Docker image. I do not have debugging enabled, and reinstalling the image cleared the issue for now. I do have file history enabled, but I have noticed that the conversion history working/showing is hit or miss. Any idea what I can do to limit this issue?
  24. Legit never saw that button. Turns out it's my instance of Unmanic running; not sure what the cause is. I have logs limited and mapping done correctly. I'll post on that thread to see if it's a known issue. Thank you, sir/ma'am.
  25. So I have confirmed my Docker mapping, and I have log sizes limited and have confirmed they are not the issue. I'm pretty sure I'm in the 80% block, but I have a 100 GB Docker image file and I'm at 71% full now. Any advice on how I can root out what is causing this issue?
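     For anyone else digging into the same thing, these are the kinds of checks I mean (the container name is a placeholder; run from the Unraid console):
     ```bash
     # Show each container's writable-layer size - growth here usually means
     # something is writing inside docker.img instead of to a mapped volume.
     docker ps -a --size

     # List filesystem changes a suspect container has made versus its image.
     docker diff unmanic   # "unmanic" is a placeholder container name
     ```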