Magic815

Members

  • Posts: 37
  • Joined
  • Last visited


Magic815's Achievements

Noob (1/14)

Reputation: 2

Community Answers (1)

  1. It looks like the 'Extra Parameters' above fixed the issue. I also realized there is a general setting under the Docker service in Unraid where one can enable log rotation. I'm considering this issue solved via one of those two methods. (A minimal sketch of the extra-parameters approach is included after this list.)
  2. Just to confirm my suspicion, my Docker log check today shows that it is indeed growing by 28.3MB every time my container runs (which is daily). Any thoughts from anyone? Is there a better spot where I should be asking this sort of question? Edit: I'm going to give this suggestion a try, and then will report back:
  3. Hi all - So I'm running UnRAID v6.11.5 and am noticing an odd issue with one of my Docker containers. Over the past half a year my Docker memory utilization has been slowly but steadily creeping up from ~50% to 87%, with no new containers being added to my server. When I hit "Container Size" in the GUI on the Docker tab at the point it reached 87%, I saw that Plex-Meta-Manager was showing ~16GB used in the 'Log' column. After some quick Google searching (and searching this forum), it sounded like it could be something misconfigured with that container, or some corruption that was causing the docker.img to get slowly flooded with logs from that container. So a few days ago I completely removed that container/image and started with a fresh one.

     With the new container up and running, the log size initially went back to 0 KB. However, the way I use this container is that at 6:00am every day it runs to update the collections and overlays of my Plex libraries. I'm already recognizing a pattern where each day (i.e. each 6:00am run) the log size grows by ~28.3MB. I'm on day 3, so the log size is currently up to 82.8MB (see screenshot below), but I suspect it is just going to keep growing steadily until it eventually caps my Docker utilization if I don't find a proper fix.

     I've attached my diagnostics file. Is there anything that looks off in my configuration of that container that may explain why the space consumed by logs keeps growing after each daily run? (I've checked with other UnRAID users in the Plex-Meta-Manager Discord server, and none of them report seeing slowly inflating log sizes.) A command-line check of per-container log sizes is sketched after this list.

     (As an aside, InfluxDB seems high too at 2.80GB, but I don't think I've noticed a pattern yet of that one growing steadily every day. Would be curious to hear if something looks off with that one as well.) magicserver-diagnostics-20240129-2043.zip
  4. Not sure if my issue is tied into the (hopefully soon) forthcoming fix, but I realized my mover was not running in v6.11.5 recently. My leading theory was that there might be an issue with me having two cache drives (one for appdata, which I call 'cache', and one for downloads, which I call 'download_drive'). For the setting "Only move at this threshold of used cache space": if I have that set to 50%, is it waiting until both drives reach 50%? Is there a way to trigger it based on just the 'download_drive' specifically, in my case? (A rough user-script workaround is sketched after this list.)
  5. So twice now in 7 days, I've had my unraid server suddenly lock up (the GUI takes 1-2 minutes to load a page, and I can see that CPU and RAM usage are at 100%). Both times it has happened around 6am or 7am, and when I started turning off docker containers one at a time, it was stopping the Plex Media Server docker (repo: plexinc/pms-docker) that let my CPU drop back to ~20%. That makes me think the Plex container is the culprit behind my lockups. Does anything stand out to anyone in my attached diagnostics file? I do have my Plex transcodes sent to /dev/shm - not sure if that is related? Another theory is that some recurring task in Plex (maybe end credit detection?) is consuming all of my CPU in that 6am-7am window. Regardless, I'm curious to see if those better than I at log-reading might see anything of note. Appreciate any help and insight you can give! magicserver-diagnostics-20230805-0838.zip
  6. Hi all - So twice now in 7 days, I've had my unraid server suddenly lock up. By lock up, I mean that it takes ~1-2 minutes to load the GUI dashboard, and I can see that CPU is at 100% and RAM is at 100%. Both times it has happened around 6am or 7am, which may be a sign of a pattern. And both times, turning off my Plex Media Server docker (repo: plexinc/pms-docker) allows my CPU to drop back to ~20%, but a full server restart still seems required to release my RAM usage back to normal.

     That makes me think the PMS docker might be related to and/or causing the issue, but in case there is some other problem, I figured it prudent to post my diagnostics file here. Does anything stand out to anyone? I do have my Plex transcodes sent to /dev/shm - not sure if that is related? Another theory is that some recurring task in Plex (maybe end credit detection?) is consuming all of my CPU. Regardless, I'm curious to see if those better than I at log-reading might see anything of note. (A stopgap for containing a runaway container is sketched after this list.) Appreciate any help and insight you can give! magicserver-diagnostics-20230805-0838.zip
  7. I'm curious - do you both still have this issue? I'm on UnRAID 6.10.3, and would like to upgrade to 6.11.x at some point soon, but I'm worried it's going to mess up my Syncthing docker container. Also, question for everyone: What 'network type' did you select for your Syncthing docker container?
  8. So I changed it to "3.0 (qemu XHCI)" - hopefully that is the recommended one out of the two 3.0 options that were listed? After changing, I restarted the VM, and it seems like Z-Wave JS and Zigbee2MQTT connections are holding strong in Home Assistant, and I can still see all the end devices. Hopefully that means the switch didn't cause any issues with the existing ports? One interesting thing: my list only shows 3 line items now when I run the command you gave me again: Is there any command I can run to verify I have 11 open USB ports now? (A couple of candidate commands are sketched after this list.) Or I guess it's just something I'll find out for sure when I plug in the Coral TPU via USB next week?
  9. Ah, I see. This is the result I get: I do indeed have USB Controller set to 2.0 (EHCI). I think for now I'll keep the two USBs linked as devices. However, if I run into issues in the future (like the ones Conbee II users are seeing) and want to switch to serial-only for both USBs, what port numbers do you recommend I assign to each of those USBs (based on my screenshot above)? As an aside (and a dumb question): I am expecting to finally get a Coral TPU next week to use with Frigate in my Home Assistant VM. It will be USB connected, and I was planning to run it in 'device' mode via USB Manager. Am I "out of" USB ports that I can pass through to my HA VM? Is that an issue whether I run it via device mode or serial-only?
  10. Can you expand on "needs to be available on the guest"? On one of my attempts, I thought I tried changing the 04 to 05 on one of the USBs, but it still didn't seem to work. Maybe there was an additional step I needed to take on the guest-availability side? Knowing that I'm not using a Conbee II, are there other benefits to switching to serial-only? I'm starting to wonder: if device mode seems to work with my USBs in unRAID 6.10.3, should I just leave it as-is?
  11. Hmm, so I tried a couple of detach/attach cycles, and it didn't seem to help. However, I toggled off 'Connect as Serial Only' for both USBs - to put them back into device mode - and it seems to be working OK now? unRAID shows "Connected (Device)" again for both USBs - see below. And as far as I can tell, Z-Wave JS and Zigbee2MQTT inside HA don't show any errors, and my Z-Wave and Zigbee devices all seem to be in working order and connected. What behavior/error should I be on the lookout for to determine if switching to 'Serial Only' is even required for me? Was that only needed for the Conbee II, and doesn't apply to the types of USB devices I have? In case it matters: in my specific case, my two USBs report the exact same controller/ID: "Silicon Labs CP210x UART Bridge". Does that preclude me from being able to run both as serial, or does it mean I don't even need to bother with serial-only in the first place? (A way to tell two identical adapters apart is sketched after this list.)
  12. So I think I'm still having an issue with one of my USBs after following the instructions for switching them to serial mode. This is what I see in USB Manager after upgrading to unRAID 6.10.3: Any thoughts on what else I could try?
  13. Are you potentially having the same issue I did here, where 100MB wiredtiger log files just continue to accumulate and fill everything up? I'd recommend you dig through the appdata folder and see if you find them (a find command for this is sketched after this list). https://forums.unraid.net/topic/78060-support-linuxserverio-unifi-controller/?do=findComment&comment=1154741 I had to blow away my entire container, delete the appdata folder, reinstall it, and then restore from an old backup to get it working again. If you are indeed having the same issue, I'd be really curious to hear from others on why this seems to suddenly happen to people. It's literally a drive and server killer.
  14. Does just the torrent container need to be stopped, or do I need to stop the entire Docker service and VM service? What's interesting, though, is that my 'Data' share has always been filled with seeded torrents, and my cache drive utilization used to always drop in the morning after the mover ran. One change I did make a few weeks back was switching from the 'delugevpn' docker to 'qbittorrent-vpn' (I had been running the Deluge one for years). Do people set up some sort of script to stop the torrent container while the mover is running - and maybe that was something I set up with Deluge years ago and forgot about? (A rough example of such a script is sketched after this list.) For completeness, I've attached my latest diagnostics file with mover logging enabled. magic-diagnostics-20220811-0930.zip
  15. So I could be wrong, but I'm trying to pinpoint the exact root cause here. I've been using Unraid for a couple of years with no cache drive issues previously (I'm currently on version 6.9.2), but I've noticed in the last week or so that my cache drive (a 500GB SSD) is staying "fuller" than usual. My theory is that the Data folder is not consistently being moved from the cache to the array.

      Looking at the shares I have set up, the cache ends up holding appdata (65GB), domains (11GB), and system (44GB). Data will then temporarily store itself on the cache (in theory) before moving to the data drives once a day via the scheduler. However, I currently see 384GB of the drive filled: By doing some digging, I found that the Data folder on the cache drive is currently holding ~246GB of files (it's all >Downloads>completed from my various 'arr applications). That Data, plus all of my shares marked "Prefer: Cache", seems to add up to roughly the 384GB of drive space used.

      However, when I total up all of the new files that have downloaded in the last day (i.e. what should, at most, be temporarily sitting in the cache drive's Data folder), it only adds up to ~5GB. And when I look at the cache drive's Data folder, I'm seeing torrent files that were downloaded ~9 days ago still in there. I have the mover scheduled to run at 4:30am daily: (A couple of read-only commands for checking what is actually sitting on the cache are sketched after this list.)

      I've attached my diagnostics file. Would it be helpful for me to enable mover logging, and then post another diagnostics file in the morning after the 4:30am trigger occurs? Any help is appreciated! magic-diagnostics-20220810-1639.zip
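
For the log-rotation fix in item 1, a minimal sketch of the kind of "Extra Parameters" involved, assuming the container uses Docker's default json-file logging driver; the 50m/1 values are illustrative, not necessarily what was actually used:

```
# Unraid Docker template > Advanced View > Extra Parameters
# Caps the container's json-file log at roughly 50 MB, keeping a single file.
--log-opt max-size=50m --log-opt max-file=1
```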
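For the growing log in item 3, a sketch of how the per-container growth could be confirmed from the Unraid console; it relies only on standard `docker ps`/`docker inspect` output and prints the size of each container's JSON log:

```bash
#!/bin/bash
# Print the size of every container's json-file log, largest offenders stand out.
for c in $(docker ps -aq); do
  name=$(docker inspect --format '{{.Name}}' "$c")   # note: includes a leading slash
  log=$(docker inspect --format '{{.LogPath}}' "$c")
  [ -f "$log" ] && printf '%s\t%s\n' "$(du -h "$log" | cut -f1)" "${name#/}"
done
```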
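For the two-pool mover question in item 4, one possible workaround is a small User Scripts cron entry that only checks the download pool. The mount point /mnt/download_drive, the 50% threshold, and the mover invocation are all assumptions; the mover path and arguments can differ between Unraid versions:

```bash
#!/bin/bash
# Hypothetical cron script: start the mover only when the download pool
# (assumed to be mounted at /mnt/download_drive) is more than 50% full.
THRESHOLD=50
USED=$(df --output=pcent /mnt/download_drive | tail -1 | tr -dc '0-9')
if [ "$USED" -gt "$THRESHOLD" ]; then
  /usr/local/sbin/mover start   # path/invocation may differ by Unraid version
fi
```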
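For the Plex lockups in items 5 and 6, the diagnostics are the real answer, but as a stopgap a runaway container can be fenced in with standard Docker resource flags; the values below are placeholders, not a recommendation:

```
# Unraid Docker template > Extra Parameters (values are illustrative)
--cpus=4 --memory=8g
```

One related detail: transcoding to /dev/shm writes into RAM-backed tmpfs, so a large transcode buffer will show up as host memory pressure rather than disk usage.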
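For the port-count question in item 8, a way to check from inside the VM, assuming a guest shell with the usbutils package available:

```bash
# Inside the guest (requires usbutils)
lsusb -t   # the root-hub line ends in e.g. "xhci_hcd/15p"; the Np suffix is the port count
lsusb      # lists the devices currently attached (e.g. the CP210x bridges)
```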
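For the identical CP210x adapters in item 11, the usual way to tell two same-ID sticks apart (on the host or in the guest) is by their serial-qualified udev paths; this is generic Linux behaviour rather than anything specific to USB Manager:

```bash
# Stable per-device identifiers even when both adapters use the same chip
ls -l /dev/serial/by-id/
```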
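For the UniFi issue in item 13, a sketch of how to confirm the symptom before deleting anything; the appdata path is an assumption and should be adjusted to match the actual container mapping:

```bash
# Look for runaway WiredTiger journal files under the UniFi appdata share
find /mnt/user/appdata/unifi-controller -name 'WiredTigerLog.*' -size +50M -exec ls -lh {} +
```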
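For item 14, a rough sketch of the "stop the torrent client around the mover" idea as a User Scripts entry; the container name binhex-qbittorrentvpn and the mover path/arguments are assumptions:

```bash
#!/bin/bash
# Hypothetical wrapper: stop the torrent client so open file handles don't keep
# the mover from migrating completed downloads, then start it again afterwards.
docker stop binhex-qbittorrentvpn
/usr/local/sbin/mover start      # invocation may differ by Unraid version
docker start binhex-qbittorrentvpn
```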
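For the stuck cache in item 15, a couple of read-only checks of what is actually sitting on the pool; /mnt/cache/Data matches the share described in the post, and the 7-day cutoff is just an illustrative value:

```bash
# How much of each top-level folder in the Data share sits on the cache pool
du -sh /mnt/cache/Data/*

# Files on the cache older than a week, which should normally have been moved already
find /mnt/cache/Data -type f -mtime +7 -printf '%TY-%Tm-%Td %s %p\n' | sort | head
```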