Huge issues with shares and cache settings


So I tried to come up with a more descriptive title but not without writing a small novel.

 

I've just upgraded to 6.9.1 (from 6.8.3) and I've started having huge issues that seem to be related to caching somehow. The first sign was that the server basically crashes: the web UI stops displaying the array, Dockerized services stop working, and although SMB seems to honor existing connections I'm not so sure about new ones. After a reboot everything works fine again, but only for a little while (maybe 15 minutes?). The second or third time this happened I noticed a warning from the Common Issues plugin stating that rootfs was full, so I started troubleshooting the cause of that. Eventually I discovered that all the space was being used by /mnt/user/cache, and this is where it gets really frustrating.

 

One of my Docker containers tries to keep a rather large amount of data in sync with a remote source. For some reason it has started to re-download data that should already be present, and instead of dumping it into my /mnt/user/data/<targetfolder> share it downloads it to /mnt/user/cache/<targetfolder>. Since I don't have a share called cache, it fills up rootfs.
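For anyone hitting the same symptom, a quick way to confirm that rootfs is what's filling up, and which directory under /mnt is responsible, is a couple of standard commands (a sketch; on Unraid rootfs is RAM-backed, so filling it can take the whole server down):

```shell
# Overall rootfs usage -- on Unraid this filesystem lives in RAM.
df -h /

# Per-directory usage under /mnt. Anything large here that is NOT a
# mounted disk, pool, or user share is sitting on rootfs (i.e. in RAM).
# "|| true" just keeps the snippet going if /mnt has no entries.
du -shx /mnt/* 2>/dev/null || true
```

Watching `du` report gigabytes under a /mnt path that has no pool mounted is the giveaway described in this thread.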

 

My first thought was that something had gone wrong with the config for the Docker container, but no: the mount point is specified as /mnt/user/data/<targetfolder>. So my next step was to check my shares, and that's when I noticed that both my appdata and data shares have their cache pool setting set to "prefer". But since I don't actually have any cache pools, the pool list is empty. At that point my idea was to change the cache pool setting from "prefer" to "no". But I can't; that setting is disabled. I've tried stopping both Docker and VMs, but still no change. I've tried booting into safe mode, but there's still no way to change this setting.

 

What makes this even weirder is that if I try to create a new share, the cache pool setting is still disabled, but this time it is forced to "no".

 

It seems that there have been two changes: one to how the system handles the "prefer" setting, and one related to changing this setting. Am I doing something wrong? Is there a way to fix this?

You don't have any cache pools defined at all.

2 hours ago, DeepThoughts said:

/mnt/user/data/<targetfolder> share it downloads it to /mnt/user/cache/<targetfolder> and since I don't have a share called cache it fills up rootfs.

I think you mean here /mnt/cache/<targetfolder>

 

Check the app you're talking about and ensure that there are no references anywhere (hit "Show more settings") to /mnt/cache instead of /mnt/user/
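One way to do that sweep in bulk, rather than clicking through each template: Unraid keeps the user's Docker templates as XML files on the flash drive, so you can grep them all at once (the template path below is the usual 6.x location and is an assumption here; adjust it if your flash layout differs):

```shell
# Search every user Docker template for a stray /mnt/cache mapping.
# /boot/config/plugins/dockerMan/templates-user/ is Unraid's standard
# location for user-added templates (assumed path -- verify on your box).
grep -rn '/mnt/cache' /boot/config/plugins/dockerMan/templates-user/ 2>/dev/null \
  || echo "no /mnt/cache references found"
```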

31 minutes ago, Squid said:

You don't have any cache pools defined at all.

I think you mean here /mnt/cache/<targetfolder>

 

Check the app you're talking about and ensure that there are no references anywhere (hit show more settings) that reference /mnt/cache instead of /mnt/user/

 

Already did that. No references at all to /mnt/cache. (You are correct that I meant /mnt/cache.)

 

29 minutes ago, Squid said:

Did you ever have a cache pool?

 

Nope, I've never had a cache pool. It's in my plans, but no, I've never had one.

 

---

 

After disabling Docker, booting in safe mode, and whatnot, it seems I needed to reinstall all my Docker apps. So far this appears to have solved the most pressing issue: it's no longer creating a /mnt/cache/data folder and populating it with the huge files. But the root cause somehow remains, because it still creates /mnt/cache/appdata and stores my container settings there. This wouldn't be much of an issue, since we're talking about a couple of hundred megabytes, not terabytes. However... if I go to the scheduler settings, it's not possible to configure the Mover because "No Cache device present!". My assumption is that these files won't survive a reboot, but perhaps I'm mistaken?

 

EDIT: Spoke too soon. The issue is not resolved for the container. :(

Edited by DeepThoughts

Now, I'll be the first to admit that I don't know much about how Unraid does its special sauce on top of Linux, so I might be totally off base. But let me list the things that stick out to me and present a theory that could explain why you are unable to recreate it.

 

My key points:

 

1. In 6.8.3, /mnt/cache was never created

2. It isn't only my container that handles large data that's affected; the system also creates the /mnt/cache/appdata folder, and my appdata share is only for Docker configs

3. During my upgrade to 6.9.1 I didn't change anything regarding the configuration of the containers

4. The affected shares are locked to cache pool preferred

5. I'm unable to create a new share with cache pool "prefer"; only cache pool "no" is possible

6. There seem to have been some changes to the underlying tech, if I read the section about multiple pools correctly: https://unraid.net/blog/unraid-6-9-stable
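Points 4 and 5 can also be checked outside the web UI: Unraid stores each share's settings in a small .cfg file on the flash drive, where the cache setting appears as shareUseCache (a sketch; the path and key name are the standard 6.x ones, assumed here rather than taken from this thread):

```shell
# Show the "Use cache pool" value for every share. Expected values are
# yes / no / only / prefer. /boot/config/shares/ is Unraid's standard
# location for per-share config files (assumed path -- verify on your box).
grep -H 'shareUseCache' /boot/config/shares/*.cfg 2>/dev/null \
  || echo "no share config files found"
```

A share stuck on `shareUseCache="prefer"` with no pool defined would match the behavior described in the list above.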

 

My theory...

 

Points 1, 2, and 3 indicate to me that the issue isn't with the Docker templates. If the templates were wrong, they would also have been malfunctioning on 6.8.3. To me, point 2 says that either I've somehow added the same typo to every Docker template I've ever added to the system, or the issue isn't with the templates but rather with the share. Sorry, but to me the typo theory is far-fetched; I should have noticed it before today in that case. Which leads me to believe that the issue is related to the share itself.

 

Now add to this that the cache pool setting is disabled. So here is my guess... In 6.9 the cache pool feature was changed, perhaps mostly for the no-cache-pool case. Given this change, the cache pool setting gets disabled, at least when no cache pool is present (the presumption being that it's always "no" in that case). This would explain why I can't create a new share with anything other than cache pool "no". However, it wasn't taken into account that there might be cases where cache pool is set to "prefer" despite no cache pool being present. So what happens is that, given the tech changes made (point 6), the system now starts to utilize /mnt/cache despite no pool being present. And since no pool is mounted at /mnt/cache, it starts filling up rootfs.
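The last step of that theory is easy to demonstrate: on Linux, writing to a directory that is not a real mount point goes to whatever filesystem backs the parent path, which for an unmounted /mnt/cache on Unraid is the RAM-backed rootfs. A minimal check (assuming util-linux's `mountpoint` is available, as it is on Unraid):

```shell
# mountpoint exits 0 only if the path is an actual mount. If /mnt/cache
# is not a mount, anything written there lands on rootfs (i.e. in RAM).
if mountpoint -q /mnt/cache; then
  echo "pool mounted at /mnt/cache"
else
  echo "no pool mounted: writes to /mnt/cache would fill rootfs"
fi
```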

 

While writing this down, it struck me that if my theory is correct, the control might only be disabled on the client side. So I fired up the trusty browser devtools and located the cache pool setting. It was indeed only disabled client-side, so I re-enabled it, changed both my affected shares to cache pool "no", rebooted, and started up the problematic container. So far so good; it actually seems to have fixed the issue.

 

This does seem like a bug though, so I should probably figure out a good, short way of explaining this issue and reporting it. Regardless, thank you for taking the time. Sometimes you need to describe the issue to someone else to actually figure out where stuff goes wrong.

