Corvus Posted January 17

Hey guys,

This is the second time this has happened. The first time, I shrugged it off as an anomaly: maybe a rogue process or a Docker container I forgot to turn off. This time, though, I have nothing out of the ordinary running. I woke up this morning to find these notifications on my Unraid web UI:

17-01-2024 07:13 Alert [NAS] - Cache disk is low on space (91%) SanDisk_SDSSDHP256G_132803400467 (sdf)
17-01-2024 07:15 Alert [NAS] - Cache disk is low on space (92%) SanDisk_SDSSDHP256G_132803400467 (sdf)
17-01-2024 07:17 Alert [NAS] - Cache disk is low on space (94%) SanDisk_SDSSDHP256G_132803400467 (sdf)
17-01-2024 07:18 Alert [NAS] - Cache disk is low on space (95%) SanDisk_SDSSDHP256G_132803400467 (sdf)
17-01-2024 07:19 Alert [NAS] - Cache disk is low on space (96%) SanDisk_SDSSDHP256G_132803400467 (sdf)
17-01-2024 07:20 Alert [NAS] - Cache disk is low on space (98%) SanDisk_SDSSDHP256G_132803400467 (sdf)
17-01-2024 07:21 Alert [NAS] - Cache disk is low on space (99%) SanDisk_SDSSDHP256G_132803400467 (sdf)
17-01-2024 07:23 Alert [NAS] - Cache disk is low on space (100%) SanDisk_SDSSDHP256G_132803400467 (sdf)

This was followed at 08:25 by another notice: 'Notice [NAS] - Cache disk returned to normal utilization level SanDisk_SDSSDHP256G_132803400467 (sdf)'. I'm guessing this was Mover doing its thing.

My cache is 2x 256GB SSDs in RAID 0, 512GB total. Right now the cache is sitting at 223GB free. Every time I've noticed the cache hit 100%, my Docker containers fail to work until I completely reboot the server.
I've only just learned about the Disk Activity plugin, so unfortunately I can't rely on that (unless it happens again soon). The only Docker containers I had running were Plex, transmission (with no torrents pending or downloading), Radarr, Sonarr, nzbget, bazarr and tautulli. I've attached my diagnostics (I'm yet to reboot the server, but will do so after I post this). How can I see what was responsible for temporarily eating all my cache, and what was (presumably) being written to my array? Please be clear and use step-by-step explanations, as I'm a noob and don't know my way around Linux. Thanks for any help.

nas-diagnostics-20240117-1634.zip
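[Editor's note: a general way to see what is consuming the cache, sketched as a hedged example rather than taken from the diagnostics. It assumes Unraid's default /mnt/cache mount point; adjust if your cache pool is named differently. Run it from the web UI terminal, ideally while the cache is filling up.]

```shell
# Show the 20 largest directories on the cache pool, two levels deep.
# /mnt/cache is Unraid's usual cache mount point (an assumption here;
# change it if your pool has a different name).
du -h --max-depth=2 /mnt/cache 2>/dev/null | sort -hr | head -20

# List individual files over 1 GB, with sizes:
find /mnt/cache -type f -size +1G -exec ls -lh {} + 2>/dev/null
```

Running the `du` command again a few minutes later and comparing which directory grew is usually enough to point at the culprit.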
itimpi Posted January 17

Not sure why you were getting the excessive usage, but at the very least you should set the Minimum Free Space setting on the cache to stop it filling up completely, as btrfs file systems tend to misbehave if they get too full.
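[Editor's note: a hedged sketch of how to check how full the btrfs cache pool really is. Plain `df` can misreport free space on btrfs, so btrfs's own accounting is more trustworthy here. It assumes the default /mnt/cache mount point and that the `btrfs` tool is available, both of which are normally true on Unraid.]

```shell
# btrfs's own accounting: shows data vs. metadata allocation,
# which matters because a btrfs pool can "run out of space"
# on metadata before df shows 100% used.
btrfs filesystem usage /mnt/cache

# Plain df, for a quick sanity check alongside the above:
df -h /mnt/cache
```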
Corvus Posted January 17 (Author)

52 minutes ago, itimpi said: "Not sure why you were getting the excessive usage, but at the very least you should set the Minimum Free Space setting on the cache to avoid it filling up completely as btrfs file systems tend to misbehave if they get too full."

Thanks for the tip. However, it's currently greyed out for me, so I'm not able to enter any value. Could it be because the system is currently performing a parity check (after the reboot)?
itimpi Posted January 17

1 hour ago, Corvus said: "However it's currently greyed out for me so I'm not able to enter any value"

If I remember correctly, you need to have the array stopped to change it.
Corvus Posted January 17 (Author)

2 hours ago, itimpi said: "If I remember correctly you need to have the array stopped to change it."

I'll do it after the parity check finishes tomorrow. In the meantime, can someone please tell me how I can find out what caused this?
itimpi Posted January 17

I would think that Plex is the most likely culprit! Do you have something like the Plex transcode folder set so that it is internal to the docker.img file? What about the Plex tmp folder?
Corvus Posted January 17 (Author)

4 minutes ago, itimpi said: "I would think that Plex is the most likely culprit! Do you have something like the Plex transcode folder set so that it is internal to the docker.img file? What about the Plex tmp folder?"

I don't think so. How do I check? If that were the case, wouldn't this be happening all the time, since I usually have multiple users streaming off my server remotely every day?
itimpi Posted January 17

3 minutes ago, Corvus said: "How do I check?"

I do not use Plex, so I am guessing. You might be able to correlate the timing with specific Plex activity. I think the paths inside the container are set at the Plex level; you then need to match those against the volume mappings you have for the Plex container. Note that I also mentioned the Plex temp folder - I believe that can get large while Plex is scanning.
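[Editor's note: the mapping check described above can be sketched from the Unraid terminal. The container name `plex` is an assumption; run `docker ps` to find the actual name on your system.]

```shell
# Print each host path -> container path volume mapping for the
# Plex container ('plex' is assumed; substitute your container name):
docker inspect -f '{{range .Mounts}}{{.Source}} -> {{.Destination}}{{"\n"}}{{end}}' plex

# Any directory Plex writes to (e.g. its transcode directory, configured
# inside Plex under Settings -> Transcoder) whose container path does NOT
# appear as a Destination above lives inside docker.img and will grow it.
```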
Corvus Posted January 17 (Author)

8 minutes ago, itimpi said: "I do not use Plex so I am guessing. You might be able to correlate the timing with specific Plex activity. I think that the paths inside the container are set at the Plex level. You then need to match those to the volume mappings you have for the Plex container. Note that I also mentioned the Plex temp folder - I believe that can get large while Plex is scanning."

I see. How do I find the Plex temp folder?
itimpi Posted January 17

6 minutes ago, Corvus said: "How do I find the Plex temp folder?"

No idea. You should ask in the support thread for the container you are using.