Aerodb

Everything posted by Aerodb

  1. My current build, to be upgraded:
     MB: ASRock X370 Taichi
     CPU: AMD Ryzen 7 1700
     RAM: 64GB DDR4
     Cooler: Noctua NH-D15 Chroma Black
     Case: Fractal Design Meshify 2 XL
     Cache: CORSAIR FORCE Series MP510 (960GB), Sabrent Rocket Q4 1TB, 2x Addlink SSD 512GB
     Parity: 1x WD Red 14TB & 1x WD Red 18TB
     Array: 8x WD Red 14TB & 2x WD Red 10TB
     PSU: Seasonic 80Plus Gold 1300W
     This build has an issue with not having enough PCIe lanes, and this planned upgrade is meant to resolve that. Much of the hardware is already owned; only the core parts (motherboard, CPU, and RAM, mostly) are being replaced. I prefer AMD CPUs but have had trouble finding an EPYC to fit into the budget. My hope is that you all can make suggestions to remove the lane concerns while keeping within the current budget, ~$700 USD. Also, I don't mind shopping eBay for used parts as long as they aren't parts that wear out. Current plan: https://pcpartpicker.com/list/mvkPHG
     PS, I really like what Alex did here, but I haven't had any luck finding this CPU or mobo within budget.
  2. Was there any solution to this issue? I am getting the same one. I would remove that Nextcloud app, but I have no idea how to do so without access to the Nextcloud GUI.
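One CLI route sometimes suggested is Nextcloud's occ admin tool, which can disable an app without the web GUI. This is only a sketch: the container name ("nextcloud") and app id ("calendar") below are made-up placeholders, and depending on the image the inner command may need to be `php occ` run as the web user instead.

```shell
# Hypothetical names: "nextcloud" is the container, "calendar" the app id.
# occ is Nextcloud's command-line admin tool; on the server you would run
# the echoed command directly.
APP="calendar"
CMD="docker exec -it nextcloud occ app:disable $APP"
echo "$CMD"
```

Substitute the actual container name from your Docker tab and the app id shown in the Nextcloud log.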
  3. These links don't resolve anymore. Does anyone have the script list?
  4. Docker run:
     " docker run -d --name='binhex-plex' --net='host' \
         --cpuset-cpus='1,2,3,4,5,6,7,8,9,10,11,12,13,14,15' \
         -e TZ="America/Los_Angeles" -e HOST_OS="Unraid" -e HOST_HOSTNAME="Tower" \
         -e HOST_CONTAINERNAME="binhex-plex" -e 'TRANS_DIR'='/config/transcode1' \
         -e 'UMASK'='000' -e 'PUID'='99' -e 'PGID'='100' \
         -e 'TCP_PORT_HTTP://192.168.86.49:32400/'='HTTP://192.168.86.49:32400/' \
         -l net.unraid.docker.managed=dockerman \
         -l net.unraid.docker.webui='https://app.plex.tv/desktop#' \
         -l net.unraid.docker.icon='https://raw.githubusercontent.com/binhex/docker-templates/master/binhex/images/plex-icon.png' \
         -v '/mnt/user/Plex Media/':'/media':'rw' \
         -v '/mnt/user/appdata/binhex-plex/':'/config':'rw' \
         --log-opt max-size=50m --log-opt max-file=1 --cpu-shares=971 'binhex/arch-plex'
       9a4ebb96316c3909d81b3a8f4a0d7159cb36bd7825cdb3c8f40f3615a643d2dd
       The command finished successfully! "
     Diag file attached: tower-diagnostics-20221018-2138.zip
  5. Thank you for providing this. But that post indicates the error can be ignored if I'm not using a USB tuner, which I am not. However, it's the only error I get before the container crashes. Any advice?
  6. Hello all, I am having the same error and wanted to see what the solution was for this situation. I do not have any orphan images to remove. "Critical: libusb_init failed" After this error shows in the log, RAM usage slowly creeps up to about 23GB and then the whole image freezes.
  7. Hello all, I recently started getting a new error when attempting to access my Nextcloud instance from within the LAN. The domain URL I had did work but now will not resolve, yielding a "this page can not be reached" from Chrome. Alternatively, typing the IP:port returns a 400 error stating "the plain HTTP request was sent to HTTPS port". I had this working via a SpaceInvaderOne video, and I should also mention that I can reach the instance from outside the LAN, such as from my cell -> the cell network/internet -> my server (using the URL and SWAG), just not from within the LAN. Nextcloud and SWAG run within Docker on a docker network. Nextcloud shows the following mapping (172.18.0.3:443/TCP <-> 192.168.86.49:444) and SWAG shows the following (172.18.0.2:80/TCP <-> 192.168.86.49:180 AND 172.18.0.2:443/TCP <-> 192.168.86.49:1443). Any ideas where I should be looking?
  8. So my issue was somehow related to an expired certificate, something to do with Mono. However, it was resolved when I force-updated the Docker image.
  9. This is a lot like my issue. Check your Sonarr log and see what errors are there. I will share any fix I find if your issue is like mine.
  10. I am still seeing this issue. Were you able to find a solution?
  11. I'm having a similar issue and my log looks the same. I rolled back to a backup from a few days earlier, from before there was any issue, and had the same result. Very interested to hear if anyone else has had this issue.
  12. EDIT: If you have this issue on Unraid, check the SWAG appdata etc/letsencrypt/live directory to be sure you don't have a folder with a -0001 ending. I renamed the original folder to anything else and the -0001 folder back to the original name, and it started working right away. It seems there was some sort of permission or access issue. (Ex: with two folders named examplefolder-0001 and examplefolder, I changed examplefolder to examplefolder-01 and examplefolder-0001 to examplefolder. It worked right away and the SWAG log had no errors.)
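A minimal sketch of that rename, simulated here in a scratch directory so it can run anywhere. On the server, LIVE would be the SWAG appdata path (the path below is an assumption about a default Unraid install) and example.com would be your actual domain folder:

```shell
# Simulate in a temp dir; on Unraid the real path would be something like
# /mnt/user/appdata/swag/etc/letsencrypt/live (path is an assumption).
LIVE=$(mktemp -d)
mkdir "$LIVE/example.com" "$LIVE/example.com-0001"

# Rename the stale folder out of the way, then give the -0001 folder
# the original name, as described above:
mv "$LIVE/example.com" "$LIVE/example.com-01"
mv "$LIVE/example.com-0001" "$LIVE/example.com"
ls "$LIVE"
```

Restart the SWAG container afterward so nginx picks up the renamed folder.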
  13. Hello all, I have a new error, and I think I have an idea what the issue is but I'm unsure how to resolve it. nginx: [emerg] cannot load certificate "/config/keys/letsencrypt/fullchain.pem": BIO_new_file() failed (SSL: error:02001002:system library:fopen:No such file or directory:fopen('/config/keys/letsencrypt/fullchain.pem','r') error:2006D080:BIO routines:BIO_new_file:no such file) When I check that directory, there is no file named fullchain.pem. I do see priv-fullchain-bundle.pem. I suspect this is a consolidated file, and my thought is to point SWAG at it to resolve this, but I haven't been able to find which config to edit. Any guidance is greatly appreciated.
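For reference, a sketch of what such an edit might look like. This assumes the directives live in SWAG's /config/nginx/ssl.conf and that priv-fullchain-bundle.pem contains both the private key and the full chain; both are assumptions, not confirmed from the SWAG docs (nginx does accept a combined PEM for both directives when that is the case):

```nginx
# Hypothetical edit to /config/nginx/ssl.conf -- the file name and the
# bundle's contents are assumptions, not confirmed behavior:
ssl_certificate /config/keys/letsencrypt/priv-fullchain-bundle.pem;
ssl_certificate_key /config/keys/letsencrypt/priv-fullchain-bundle.pem;
```

Note that /config/keys/letsencrypt is normally a symlink into etc/letsencrypt/live, so a missing fullchain.pem can also point back at the -0001 folder problem described in the post above.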
  14. Hey all, I was thinking that my UPS would run much longer during a power outage if some non-essential dockers were not running. I also figured User Scripts would be my best bet for accomplishing this. Has anyone attempted this or found any resources for it?
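A hedged sketch of such a User Script, assuming apcupsd is the UPS daemon (so `apcaccess status` reports `STATUS : ONBATT` while on battery) and using made-up container names. The demo feeds a sample status line instead of calling apcaccess, so it runs anywhere:

```shell
# Hypothetical container names to stop while on battery:
NONESSENTIAL="binhex-plex sonarr unmanic"

# Succeeds if the apcupsd status text on stdin reports on-battery.
on_battery() {
  grep -q "ONBATT"
}

# On the server, a scheduled User Script would poll the real UPS:
#   if apcaccess status | on_battery; then docker stop $NONESSENTIAL; fi
# Demo with a sample status line:
if echo "STATUS   : ONBATT" | on_battery; then
  echo "would run: docker stop $NONESSENTIAL"
fi
```

Scheduling it every few minutes via the User Scripts cron option would catch the outage shortly after it starts; restarting the containers when power returns would need a matching ONLINE branch.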
  15. I do have that port forwarded in the router, but I thought that was for inbound WAN traffic to the LAN IP. I'm talking about reaching the Plex server from another computer on the same LAN, same subnet, same VLAN. All requests fail to load in the browser when I attempt to load server-ip:32400, but I can reach all the other dockers on the server with the server IP and that container's port number. Plex is the only one that I can't reach directly across the LAN; I have to manage it via the plex.tv URL.
  16. Thank you for this update. This raises a question: does this remove the need to strip subs from the container/content prior to this plugin?
  17. That makes total sense. I can also confirm that I have used a container to move files, so that's VERY likely what happened. Thank you!
  18. Can confirm this did fix the issue, thank you so much. Two follow-up questions: 1. Do we know how this issue starts? I can assure you this share has never been allowed to use the cache. 2. Will this "move all non-cache shares to array" script ever be adapted to handle this? I'm asking whether it's possible to do, rather than whether it's being worked on.
  19. So after changing the share associated with the files on the cache pool/drives to Yes (use cache), I then started the mover. The mover finished and did not move the files; they remained on the cache pool drives. I really think this is related to the issue I mentioned. I don't think I made that up in my head; I must have read somewhere that there was an issue with having multiple cache pools and using the mover on shares that have a space in the share name.
  20. 1. Yes, the share these files are associated with is set to Cache: No, yet they are currently on the cache drive. 2. I don't believe they are open; there are quite a few of them, and I have no idea what would keep them open for this long. Some have been on the machine's cache pool for longer than the machine has been up (there have been restarts since they were written to the pool). 3. I'm not sure how to answer this. Squid posted some user script templates/examples that I used and have had success with thus far. I can provide the user script code if you think it would be helpful.
  21. Thank you for the advice. I was thinking about running it within a VM on a small Unraid machine that will only run network apps, so I suspect having to reboot it will be very limited. Also, unfortunately, I have already acquired my network card, a two-port card with an Intel 82576 chip. BUT I'm wondering if I could use the motherboard NIC as the third (sync) port should I choose to set up a secondary pfSense VM on my primary multi-use Unraid hardware. So maybe I could run pfSense on this standalone box and fail back to my main machine in the event of an outage or planned maintenance?
  22. Hey all, I have User Scripts set to run "Move Array Only Shares to Array" daily, and I also just ran it manually, but I still have share data on my (third) cache pool. The related share has cache set to No. The mover doesn't seem to resolve this either. I feel like I read somewhere that there was a bug with multiple cache pools and spaces in share names, but I'm curious if there is a solution anyone can offer. As a last resort, if you know how to move these files from the pool to the array without breaking anything, via another docker or a command, I could attempt that.
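As a sketch of the last-resort manual move, simulated here in temp directories so it runs anywhere: on the server the source might be a share folder on /mnt/cache3 and the destination a specific /mnt/diskX path, both assumptions about this setup, and disk paths should never be mixed with /mnt/user paths for the same files.

```shell
# Stand-ins for /mnt/cache3/Share and /mnt/disk1/Share (the real paths
# are assumptions; on the server you would use rsync and verify the copy
# before deleting anything):
SRC=$(mktemp -d)/Share; DST=$(mktemp -d)/Share
mkdir -p "$SRC" "$DST"
echo "data" > "$SRC/file.bin"

# Copy pool -> array, then remove the source only after verifying:
cp -a "$SRC/." "$DST/"
rm -r "$SRC"
ls "$DST"
```

On a live server the copy line would be something like `rsync -avh "/mnt/cache3/Share/" "/mnt/disk1/Share/"`, with the source removed only after checking the destination.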
  23. Hello all. I'm looking for a way to specify a port for the Plex docker to run on. I have an issue where I can't reach my Plex docker instance directly at LAN-IP:32400; I am only able to manage the server through the plex.tv URL. I believe this is due to the lack of a port assignment in the docker config, but when I added that field, it didn't change/fix the issue. I have the port configured in the Plex settings, but I believe the server is not responding to requests, since the port mapping I added to the docker is likely not the correct way to do this. Any guidance or troubleshooting is greatly appreciated.
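For reference, a sketch of how a port is normally published when a container is NOT using host networking; with --net='host' the container shares the host's network stack and -p mappings are ignored, which is why adding the field may appear to do nothing. The flags below are an illustration (echoed rather than executed), not the template's actual settings:

```shell
# Hypothetical bridge-mode run; the -p flag maps host port 32400 to the
# container's 32400. With --net='host' no -p flag is needed at all.
CMD="docker run -d --name=plex --net=bridge -p 32400:32400 binhex/arch-plex"
echo "$CMD"
```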
  24. I see your point, and it makes sense. I'm not really attached to one container or another, and the same goes for subs being within the container versus standalone files. I was mostly using Unmanic to have a uniform library in H.265 for added storage compression. Anything else is an added extra, but I'm excited to see what else I can use Unmanic for.
  25. To be sure I understand your meaning: would this still be an issue if I just switched to MKV containers rather than MP4?