boosting1bar's Achievements


  1. Thank you very much, I’ll start there!
  2. Here's the syslog file as well; it's too big to attach
  3. Change your repository to binhex/arch-plexpass. I think that's the only difference I see in my config vs. yours and mine is working. I think you're missing the plexpass version which would enable it.
  4. I've been running unRAID for almost two years with no issues at all, but about 4 months ago my server started crashing every 24-48h. I've swapped out most of the hardware (it had an old CPU and RAM in it initially, both new now). I don't get any warning, and it happens whether my docker containers are up and running or not. When it locks up I can't get into the GUI, so I tried mirroring the syslog to the USB to at least try to catch something. I've attached the diagnostics and have the syslog file I can add too if needed. Really appreciate any help! Version 6.10-rc2
  5. Yeah, that's what I'm asking though: for your library you have numerous movies folders that match up to the categories that come in the default config, right (documentary movies, anime movies, etc.)? Since a lot of us only have one folder for movies or TV shows, that's the only category that is going to populate, and there's no way to further delineate it, correct? (No way other than redoing the Plex library hierarchy into multiple folders of categorized movies, I think)
  6. It is, you need to install it and log into the webUI to pull your server's data. Then use the IP of the container as a JSON API data source in Grafana.
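For reference, the Grafana side of this can also be wired up through datasource provisioning instead of the UI. A minimal sketch, assuming the JSON API plugin (`marcusolsson-json-datasource`); the file path, display name, and the IP:port are placeholders, not values from the post:

```yaml
# /etc/grafana/provisioning/datasources/json-api.yaml  (hypothetical path)
apiVersion: 1
datasources:
  - name: Container JSON API            # any display name
    type: marcusolsson-json-datasource  # plugin id of the JSON API data source
    access: proxy
    url: http://172.17.0.5:8000         # placeholder container IP:port
```

Grafana loads this on startup, which saves re-entering the container IP if the stack is ever rebuilt.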
  7. I'm stuck here but I think I know the answer. When you say library section, you mean you have numerous different folders in your library, correct? Sections like Documentary Movies, Horror Movies, etc. as opposed to one folder for Movies, one folder for TV Shows, etc. If that's the case that would obviously mean recreating the library into numerous folders for hundreds to thousands of movies for a lot of users. Is there a way to pull the Plex Genre tag from the files instead of library folders which would allow using the data that's already there? Love the UUD, really amazing work on such an awesome project!
  8. Yep, just went through and uninstalled and reinstalled the exporter again, and I'm getting the same error and the "Not Found" /metrics page. I've rebooted again as well and still no change. I really appreciate your help, but I'm not going to keep chewing up your time trying to track this down; I'm just going to delete it and not junk up the thread!
  9. I've tried it on bridge and host and it kicks back the same error so far: localhost shows up, and ip:9100 shows down (404). If I try to reach unraidIP:9100/metrics I just get a blank white page with text that says "Not Found" at the top; if I take off /metrics it's just a blank white page, so it's getting some response and not just a can't-load-this-page error. And I've checked the IP in the yml and it's correct. I even tried adding http://, but that breaks Prometheus, it looks like. I don't have any errors in the Prometheus log for the container.
  10. I do, and it is enabled. I even tried toggling it on and off and rebooting my server, and I'm still getting that same 404 server-down error
  11. So I was going to try to switch over from UUD to this, and deleted all the old stuff including the Grafana appdata folders etc. When I configure the .yaml and then check Prometheus, I have the localhost entry as up but the IP:9100 one is down (404), and I can't figure out how I've broken it. Point me in the right direction?
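For reference, the relevant piece of prometheus.yml usually looks like the sketch below. Targets take host:port only, with no http:// scheme in front (which matches the "adding http:// breaks Prometheus" observation above); the IP here is an example, not the poster's:

```yaml
scrape_configs:
  - job_name: 'node'
    static_configs:
      # the host running node-exporter; replace with your unRAID server's IP
      - targets: ['192.168.1.10:9100']
```

If Prometheus shows the target down with a 404, the scrape path may also be worth checking; it defaults to /metrics, which is where a stock node-exporter serves its data.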
  12. One last dumb question. I initially ran your command to start plotting verbatim as above and it was working fine, but it looked like it was plotting onto a slow array drive instead of my SSD cache drive where I'd intended it to go. I stopped the running command and redid it as venv/bin/chia plots create -b 5000 -r 2 -n 1 -t /mnt/chiaminer/plotting -d /mnt/user/chia/plots, where /mnt/chiaminer is my SSD cache pool set up specifically for this and /mnt/user/chia is the spinny drives in the array where plots would settle after plotting. When I do that I get this error immediately after it starts computing: "Only wrote 384 of 1048572 bytes at offset 100662912 to "/mnt/chiaminer/plotting/plot-k32-2021-05-08-17-55-938e7086435dc79126efd7780afcf675878038a71141ee6eb2bb80c766687c3d.plot.p1.t1.sort_bucket_054.tmp" with length 100663296. Error 1. Retrying in five minutes." I also get a popup error of "Alert [servername] - Docker image disk utilization of 100% Docker utilization of image file /mnt/user/system/docker/docker.img". Do I need to just leave the plotting command verbatim? (I think the answer is yes, because the setup of the docker container specifies those directories and the console command doesn't need changing.) Answered my own question with more poking around: it looks like it's putting the plotting temp files on the array, but when I compute shares I see it's dropping them in the proper location.
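The "Only wrote 384 of 1048572 bytes" error above is a disk-full symptom on the temp path. A quick sanity check before plotting is to compare free space on the temp directory against what a k32 plot needs; this is a sketch, and both the /tmp fallback and the ~239 GiB figure are assumptions (239 GiB is the commonly cited k32 temp-space requirement, not a number from the post):

```shell
#!/bin/sh
# Sketch: check free space on the plot temp dir before starting a k32 plot.
# On the poster's box TMPDIR would be /mnt/chiaminer/plotting; /tmp is a fallback.
TMPDIR="${TMPDIR:-/tmp}"
NEED_GIB=239   # commonly cited temp space for one k32 plot (assumption)

# df -BG prints sizes in whole GiB; --output=avail keeps just the Avail column.
avail_gib=$(df -BG --output=avail "$TMPDIR" | tail -n 1 | tr -dc '0-9')

if [ "$avail_gib" -lt "$NEED_GIB" ]; then
    echo "Not enough temp space on $TMPDIR: ${avail_gib}G available, ${NEED_GIB}G needed"
else
    echo "Temp space OK: ${avail_gib}G available on $TMPDIR"
fi
```

Running this against the cache pool before each plot would catch the 100%-full condition before the plotter hits it mid-phase.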
  13. Thanks for setting this up! I've got my container set up and running, and it seems to be plotting properly. I've run my plotting command and it's currently computing table 2. Just to be clear, I can close that terminal window now and just let it run, right? I've been running a chia node in a Windows VM with the GUI, but I'm hoping to free up those resources and just let this run so I can more effectively use my array drives. I'm still learning my way through terminal stuff, so I don't want to break this lol
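On the close-the-terminal question: if the plotter was started in a foreground console, one way to keep it alive after the window closes is to detach it with nohup. This is a sketch using the plot command from the post above; the plot.log filename is made up, and this hasn't been verified inside that particular container:

```shell
#!/bin/sh
# Sketch: run the plotter detached so closing the console doesn't HUP it.
# Output is redirected to plot.log (arbitrary filename) instead of the terminal.
nohup venv/bin/chia plots create -b 5000 -r 2 -n 1 \
    -t /mnt/chiaminer/plotting -d /mnt/user/chia/plots \
    > plot.log 2>&1 &
echo "plotter started with PID $!"
```

Progress can then be followed with tail -f plot.log, and the terminal window can be closed without stopping the plot.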
  14. I'm connecting to my unRAID server remotely through my UDM Pro without any issues. I use NextDNS as my DNS provider and run their CLI client on the UDMP. Who is your DNS provider? That may well be the issue and not the UDMP. If it is indeed the UDMP, just set up a NextDNS account and install the CLI client on your UDMP with this command: sh -c 'sh -c "$(curl -sL"'