Aerodb

Everything posted by Aerodb

  1. Hello all, I recently started getting a new error when attempting to access my Nextcloud instance from within the LAN. The domain URL used to work but now will not resolve, yielding a "this page cannot be reached" error in Chrome. Alternatively, typing the IP:port returns a 400 error stating "the plain HTTP request was sent to HTTPS port". I had this working via a SpaceInvader One video, and I should also mention that I can still reach the instance from outside the LAN, e.g. from my cell -> the cell network/internet -> my server (using the URL and SWAG). Just not from within the LAN. Nextcloud and SWAG run in Docker on a Docker network. Nextcloud shows the following mapping (172.18.0.3:443/TCP <-> 192.168.86.49:444) and SWAG shows the following (172.18.0.2:80/TCP <-> 192.168.86.49:180 AND 172.18.0.2:443/TCP <-> 192.168.86.49:1443). Any ideas where I should be looking?
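     A few checks that might narrow it down, written as a hedged sketch: they assume the LAN client should be reaching SWAG at 192.168.86.49:1443 and that the domain (shown here as the hypothetical nextcloud.example.com) needs to resolve to that LAN IP for access from inside the network (split DNS or NAT loopback).
        # From a LAN client: what does the domain currently resolve to?
        nslookup nextcloud.example.com

        # Talk to SWAG's HTTPS port directly on the LAN, forcing the name to the LAN IP
        curl -vk --resolve nextcloud.example.com:1443:192.168.86.49 https://nextcloud.example.com:1443/

        # The 400 "plain HTTP request was sent to HTTPS port" is what you get when the
        # HTTPS port is hit over plain HTTP, e.g.:
        curl -v http://192.168.86.49:1443/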
  2. So my issue was somehow related to an expired certificate, something to do with Mono. However, it was resolved when I force-updated the Docker image.
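     For anyone not using the Unraid "force update" button, a rough command-line equivalent (the container/image name below is a placeholder, not necessarily what I run):
        docker pull lscr.io/linuxserver/sonarr:latest   # pull the current image (placeholder name)
        docker stop sonarr && docker rm sonarr          # remove the old container
        # then recreate the container from your template/compose file so it starts on the new image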
  3. This is a lot like my issue. Check your Sonarr log and see what errors there are. I will share any fix I find if your issue is like mine.
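     If it helps, a quick way to skim the log from the Unraid console, assuming the usual linuxserver.io appdata layout (the path and log file name are assumptions, adjust to your install):
        tail -n 100 /mnt/user/appdata/sonarr/logs/sonarr.txt
        grep -iE "error|warn" /mnt/user/appdata/sonarr/logs/sonarr.txt | tail -n 50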
  4. I am still seeing this issue. Were you able to find a solution?
  5. I'm having a similar issue and my log looks the same. I rolled back to a backup from a few days earlier, before there was any issue, and had the same result. Very interested if anyone else has had this issue.
  6. EDIT: if you have this issue on Unraid, check the SWAG appdata directory under etc/letsencrypt/live to be sure you don't have a folder ending in -0001. I renamed the original folder to something else and renamed the -0001 folder back to the original name, and it started working right away. Seems there was some sort of permission or access issue. (Example: with two folders named examplefolder-0001 and examplefolder, I renamed examplefolder to examplefolder-01 and examplefolder-0001 to examplefolder. It worked right away and the SWAG log had no errors.)
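     Roughly what that looks like from the Unraid console, assuming the default SWAG appdata path (the folder names are placeholders, use the folder for your own domain):
        cd /mnt/user/appdata/swag/etc/letsencrypt/live
        ls -l                                # look for a duplicate "<domain>-0001" folder
        mv examplefolder examplefolder-01    # park the original folder
        mv examplefolder-0001 examplefolder  # promote the -0001 folder to the original name
        # then restart the SWAG container and watch its log for certificate errors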
  7. Hello all, I have a new error and I think I have an idea what the issue is, but I'm unsure how to resolve it. nginx: [emerg] cannot load certificate "/config/keys/letsencrypt/fullchain.pem": BIO_new_file() failed (SSL: error:02001002:system library:fopen:No such file or directory:fopen('/config/keys/letsencrypt/fullchain.pem','r') error:2006D080:BIO routines:BIO_new_file:no such file) When I check that directory, there is no file with the name fullchain.pem. I do see priv-fullchain-bundle.pem. I suspect this is a consolidated file, and my thought is to point SWAG at it to resolve the error, but I haven't been able to find which config to edit. Any guidance is greatly appreciated.
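     A couple of checks from the console before editing any configs, assuming the default SWAG appdata path (/config inside the container maps to the appdata folder on the host; paths are assumptions): the goal is just to see whether fullchain.pem is missing entirely or is a symlink whose target went away.
        ls -l /mnt/user/appdata/swag/keys/letsencrypt/
        # if fullchain.pem / privkey.pem are symlinks, check whether their targets still exist
        ls -l /mnt/user/appdata/swag/etc/letsencrypt/live/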
  8. Hey all, I was thinking that my UPS would run much longer if some other Dockers were not running during a power outage. I also figured User Scripts would be my best bet for accomplishing this. Has anyone attempted this or found any resources for it?
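     A minimal User Scripts sketch of the idea, assuming Unraid's built-in apcupsd UPS support (so apcaccess is available) and hypothetical container names; it just stops a list of non-essential containers once the UPS reports it is on battery:
        #!/bin/bash
        # containers that are safe to stop during an outage (hypothetical names)
        NONESSENTIAL="plex unmanic sonarr"

        # apcaccess reports STATUS : ONBATT while running on battery
        if apcaccess status 2>/dev/null | grep -q "ONBATT"; then
            echo "UPS on battery, stopping non-essential containers"
            docker stop $NONESSENTIAL
        fi
     Scheduled every few minutes (or triggered however you prefer), that would shed the heavier containers early in an outage.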
  9. I do have that port forwarded in the router, but I thought that was for inbound WAN traffic to the LAN IP. I'm talking about reaching the Plex server from another computer on the same LAN, same subnet, same VLAN. All requests fail to load in the browser when I attempt to load server-ip:32400, but I can reach all the other Dockers on the server with the server IP and that container's port number. Plex is the only one that I can't reach directly across the LAN. I have to manage it via the plex.tv URL.
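     Two quick checks that separate "nothing is listening on the host" from "Plex is listening but the request isn't landing" (the server IP is a placeholder, use yours):
        # on the Unraid host: is anything bound to 32400?
        ss -tlnp | grep 32400

        # from another LAN machine: Plex's web UI lives under /web, so test that path
        curl -I http://192.168.86.49:32400/web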
  10. Thank you for this update. This raises a question: does this remove the need to strip subs from the container/content prior to this plugin?
  11. That makes total sense. I can also confirm that I have used a container to move files, so that's VERY likely what happened. Thank you!
  12. Can confirm this did fix the issue, thank you so much. Two follow-up questions: 1- Do we know how this issue starts? I can assure you this share has never been allowed to use cache. 2- Will the "move all non-cache shares to array" script ever be adapted to handle this? I'm asking whether it's possible to do, rather than whether it's being worked on.
  13. So after changing the share associated with the files on the cache pool to Yes (use cache), I started the mover. The mover finished but did not move the files; they remained on the cache pool drives. I really think this is related to the issue I mentioned. I don't think I made that up in my head; I must have read somewhere that there was an issue with having multiple cache pools and using the mover on shares that have a space in the share name.
  14. 1- Yes, the share these files are associated with is set to cache: No, yet they are currently on the cache drive. 2- I don't believe they are open; there are quite a few of them and I have no idea what would keep them open for this long. Some have been on the cache pool for longer than the machine has been up (there have been restarts since they were written to the pool). 3- I'm not sure how to answer this. Squid posted some user script templates/examples that I used and have had success with thus far. I can provide the user script code if you think it would be helpful.
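     For point 2, a quick way to confirm nothing actually holds those files open, assuming the pool is mounted somewhere like /mnt/cache (placeholder; use your pool's mount point and the share name):
        lsof +D "/mnt/cache/Share Name" 2>/dev/null
        # no output means no process currently has files under that path open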
  15. Thank you for the advice. I was thinking about running it within a VM on a small Unraid machine that will only run network apps, so I suspect having to reboot it will be very rare. Also, unfortunately I have already acquired my network card, a two-port with an Intel 82576 chip. BUT I'm wondering if I could use the motherboard NIC as the third sync port, should I choose to set up a secondary pfSense VM on my primary multi-use Unraid hardware. So maybe I could run pfSense on this standalone box and fail back to my main machine in the event of an outage or planned maintenance?
  16. Hey all, I have User Scripts set to run "Move Array Only Shares to Array" daily, and I also just ran it manually, but I still have share data on my (3rd) cache pool. The related share has cache set to No. The mover doesn't seem to resolve this either. I feel like I read somewhere that there was a bug with multiple cache pools and spaces in share names, but I'm curious if there is a solution anyone can offer. As a last resort, if you know how to move these files from the pool to the array without breaking anything, via another Docker or a command, I could attempt that.
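     On the last-resort option, a hedged sketch of doing the move by hand from the Unraid console; the pool and share names are placeholders, and /mnt/user0 is the array-only view of user shares, so the copy lands on the array instead of back on a pool. Stop anything writing to the share first, and only delete the source after verifying the copy.
        # dry run first: see what would be copied (replace "poolname" and "Share Name")
        rsync -avn "/mnt/poolname/Share Name/" "/mnt/user0/Share Name/"

        # real copy
        rsync -av "/mnt/poolname/Share Name/" "/mnt/user0/Share Name/"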
  17. Hello all. I'm looking for a way to specify a port for the Plex Docker to run on. I have an issue where I can't reach my Plex Docker instance directly at LAN-IP:32400; I'm only able to manage the server through the plex.tv URL. I believe this is due to the lack of a port assignment in the Docker config, but when I added that field it didn't change/fix the issue. I have the port configured in the Plex settings, but I believe the server is not responding to the requests, since the port mapping I added to the Docker is likely not the correct way to do this. Any guidance or troubleshooting is greatly appreciated.
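     For reference, this is what an explicit mapping looks like when the container runs in bridge mode; many Plex templates on Unraid default to host networking instead, in which case there is no -p mapping and the container listens on the host IP directly (the container/image names are placeholders):
        # bridge mode: publish Plex's default port on the host
        docker run -d --name plex -p 32400:32400 ... <plex-image>

        # see how the existing container is networked and what it actually publishes
        docker inspect plex --format '{{.HostConfig.NetworkMode}}'
        docker port plex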
  18. I see your point and it makes sense. I'm not really attached to one container or another, and the same goes for subs being within the container or as standalone files. I was mostly using Unmanic to have a uniform library in H.265 for the added storage compression. Anything else is an added extra, but I'm excited to see what else I can use Unmanic for.
  19. To be sure I understand your meaning... would this still be an issue if I just switched to MKV containers rather than MP4?
  20. I assumed this issue was the same for me. I implemented the strip-images plugin as you advised (first in the worker queue). I had the same issue when I re-added the job that had previously failed. Log attached. Please let me know if there are any troubleshooting steps I should take. Failed Job 9.24.21.txt
  21. Sorry for the delay, I just got myself a newborn baby and am still learning to keep up, lol. Yes, this worked perfectly! I'm still learning a ton on this too, but thank you!
  22. I'm getting this same issue (while CPU encoding): far more failed encode jobs, and most jobs (failed or successful) have this error about 40 times each: set_mempolicy: Operation not permitted
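     For what it's worth, that message usually comes from the container runtime rather than the encoder: Docker's default seccomp profile only allows the set_mempolicy syscall when the container has CAP_SYS_NICE, so the encoder's NUMA tuning call gets denied and logged. A hedged sketch of the common workaround (adapt it to your own container/template rather than treating it as the official fix):
        # extra capability commonly added to the container to allow set_mempolicy
        docker run ... --cap-add=SYS_NICE <unmanic-image>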
  23. Hello all, I have been toying with the youtubeDL-material Docker and I have had some issues with the media naming. It seems the upload dates are not replicating properly from YouTube, or perhaps the date I see under the YouTube video isn't the field the Docker references (maybe the actual upload date and not the published date or something). I was hoping to use the "custom file output" function, but after reading the documentation I'm still a bit lost. I'll link to it below; if anyone can advise me on what I should input, I would greatly appreciate it. https://github.com/ytdl-org/youtube-dl/blob/master/README.md#output-template (the "Output template" section, about halfway down) Thank you in advance.
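     A hedged example of the kind of template youtube-dl accepts; %(upload_date)s, %(title)s, %(id)s and %(ext)s are documented template fields from the README linked above, while exactly how youtubeDL-material exposes its custom file output box may differ slightly:
        # standalone youtube-dl equivalent: files come out like "20210924 - Video Title [abc123].mp4"
        youtube-dl -o '%(upload_date)s - %(title)s [%(id)s].%(ext)s' <video-url>

        # in youtubeDL-material's custom file output field, the template part would be the
        # same string without the -o flag:
        #   %(upload_date)s - %(title)s [%(id)s].%(ext)s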
  24. I mapped it to a share as you detailed in the setup guide at the beginning of this thread. However, a new issue has come up... Unmanic runs much faster now and seems to get more done in the same time, but once it fills half my RAM it sits idle until I restart the Docker. I don't think it is clearing out old files once the work on a job is completed.
  25. So if you're referring to the mapping being case-accurate, yes, I have confirmed that is done. It seems the Docker author has mentioned that I have done something wrong with my mapping to the cache; I'm still trying to determine what he means. I'm not encoding to RAM, since I only have 32GB and use most of that for Plex transcoding.