Cache Mess up?


Jaster

It seems my cache/mover isn't really working, or at least not as I expect it to.

1. I attached a screenshot of the displayed settings. It seems the data is RAID5, while System and Metadata are RAID1. Is this correct?

2. I do have some files on the cache. Since I want to change the settings, I set all shares not to use the cache and tried running the mover. Nothing happens; the data remains on the cache drives.

3. The cache size is displayed as 1.9TB; the drives I use are 2x500GB, 2x250GB and 2x120GB, which is about 1.7TB without any RAID level.

4. Shares that include the cache drives display very weird and inconsistent values for free space. While I have around 11TB free on the array, shares including the cache display one of the following values:

- 11TB (should be correct)

- 2.3TB (a little more than the cache has)

- 13.3TB (array free space + cache free space)

None of the shares has any disks excluded and all

5. It seems that cache dirs forces a spin-up of the array anyway. I do have the Cache Folders plugin installed, but do not fully understand how to fine-tune the caching there.

 

Any advice is highly appreciated.

Cache Settings.png

knowlage-diagnostics-20181216-0546.zip

32 minutes ago, Jaster said:

I set all shares not to use the cache and tried running the mover. Nothing happens; the data remains on the cache drives.

Mover only moves cache-yes shares from cache to array, and moves cache-prefer shares from array to cache. It won't move cache-no or cache-only.

 

Setting a share to cache-no makes Unraid stop writing any more of that share's data to the cache, but in general, all user share settings apply only to new data; nothing is done with existing data. So setting those shares to cache-no won't get their existing files moved.

 

Set them to cache-yes, run mover, then when they have finished moving you can set them to cache-no if that is what you want. I will have more to say about your diagnostics after I have a look.
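The mover rules described above can be sketched as a tiny decision table. This is a toy model for illustration only, not Unraid's actual mover code:

```python
def mover_action(cache_setting, file_location):
    """Toy model of the mover rules from this thread: cache-yes shares
    move cache -> array, cache-prefer shares move array -> cache, and
    cache-no / cache-only shares are never touched by the mover."""
    if cache_setting == "yes" and file_location == "cache":
        return "move to array"
    if cache_setting == "prefer" and file_location == "array":
        return "move to cache"
    return "leave in place"

print(mover_action("prefer", "array"))  # move to cache
print(mover_action("no", "cache"))      # leave in place
```

This is why setting a share straight to cache-no strands its files: neither rule matches, so the mover leaves everything where it is.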

46 minutes ago, Jaster said:

it seems the data is RAID5,

Though improved in the latest kernels, btrfs RAID5/6 is still not as reliable as RAID1/10. I use it myself, but make sure you have backups of any important data stored there.

 

47 minutes ago, Jaster said:

3. The cache size is displayed as 1.9TB; the drives I use are 2x500GB, 2x250GB and 2x120GB, which is about 1.7TB without any RAID level.

That is because you are using RAID5, which is still not fully supported by `btrfs fi usage`: used space will show correctly, but free space is reported without accounting for parity.
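For reference, here is a rough way to estimate how much of a mixed-size pool RAID5 parity actually leaves usable. This is a simplified greedy sketch for illustration, not btrfs's real chunk allocator; the drive sizes are the ones from the original post:

```python
def btrfs_raid5_usable(sizes_gb):
    """Greedy estimate of usable RAID5 capacity on mixed-size devices:
    repeatedly stripe across every device that still has free space,
    limited by the smallest of them; one strip per stripe is parity."""
    remaining = list(sizes_gb)
    usable = 0
    while True:
        remaining = [r for r in remaining if r > 0]
        if len(remaining) < 2:
            break  # need at least 2 devices to keep allocating stripes
        chunk = min(remaining)
        usable += chunk * (len(remaining) - 1)  # n-1 data strips, 1 parity
        remaining = [r - chunk for r in remaining]
    return usable

drives = [500, 500, 250, 250, 120, 120]  # the 2x500 + 2x250 + 2x120 pool
print(sum(drives))                 # raw total: 1740
print(btrfs_raid5_usable(drives))  # roughly 1240 once parity is subtracted
```

So the gap between the ~1.9TB shown and the real usable space is expected: the free-space figure simply ignores the parity overhead.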


You have set your docker image size to the ridiculous value of 150G. That may be the largest I have ever seen! And of course that leaves you with less space for other things on the cache.

 

When someone does this, it is usually a sign that they were having trouble with their docker image filling up, so they increased its size. This isn't the solution; it just makes the image take longer to fill up. If your dockers are configured properly, it is very unlikely you would ever need more than 20G.

 

If you have been having this problem, I suggest you take a look at the Docker FAQ; there is a whole section dedicated to it. Currently you are only using 2.8G of that 150G, but that may be because you started over and haven't filled it up yet. I suggest you reduce it to 20G, and if you fill that up, work with us to figure out what is wrong with your docker configuration.


Sorry for my delayed response; I forgot to subscribe and was busy with streaming issues :(

I would go for RAID10, as RAID5 isn't doing the job. EDIT: from what I read, I need disks of the same size, so I'd rather not go for RAID10.

Shall I just go for RAID1 and hope the mover does the job? I suppose I have about 1TB of cache data (mostly VM images).
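Worth checking before switching profiles: on this particular mix of drives, RAID1 would leave less usable space than the ~1TB of cache data mentioned above. A quick estimate, using the standard closed-form for two-copy allocation across mixed sizes (an illustrative sketch, not btrfs's allocator):

```python
def btrfs_raid1_usable(sizes_gb):
    """Estimate usable btrfs RAID1 capacity on mixed-size devices.
    Every chunk is stored on two devices, and no single device can hold
    more than half of all mirrored data, so usable space is
    min(total / 2, total - largest)."""
    total = sum(sizes_gb)
    return min(total / 2, total - max(sizes_gb))

drives = [500, 500, 250, 250, 120, 120]
print(btrfs_raid1_usable(drives))  # 870.0 usable out of 1740 raw
```

So about 870GB usable in RAID1 here, which would not hold ~1TB of cache data without first trimming it down or moving some of it to the array.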

 

What about the spin-up (no spin-down) issue, any advice?

8 minutes ago, Jaster said:

After setting shares to Use cache:yes and prefer, I ran the Mover twice.

The result is very strange -> some files remained on the array, others were moved to the cache, and some are on the cache only!

What is happening?

cache folder.png

Your description and that screenshot don't really provide enough details to comment.

 

Any share you set to prefer should wind up on the cache if there is room. Any share you set to yes should wind up on the array. Also, the mover can't move open files, so you may have to stop the docker/VM services to get some things moved.

 

If this is still puzzling you, take some time to study this:

 

https://forums.unraid.net/topic/46802-faq-for-unraid-v6/?page=2#comment-537383

 


I do understand how the mover and the settings should work, and I understand that some files cannot be moved while they are open.

So here is the full /apps share view.

I set it to cache:yes and ran the mover.

Then I set it to cache:prefer and ran the mover once again.

The screenshot shows the result.

While the mover was running, all dockers except Krusader were down.

If I inspect the deeper levels, I see some files on the cache and others on the array, but barely anything is on both.

 

I attached diagnostics and tried to keep the mover log running, but ended up with this:

Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 8388616 bytes) in /usr/local/emhttp/plugins/dynamix/include/DefaultPageLayout.php(418) : eval()'d code on line 73

when trying to access the system log.

full settings.png

knowlage-diagnostics-20181216-1344.zip

5 minutes ago, Jaster said:

While the mover was running, all dockers except Krusader were down.

That means the docker service was still running. The docker service must be stopped, which removes the Docker tab from the GUI. Then all the docker files can be consolidated onto either the parity-protected array (cache-yes) or the cache pool (cache-prefer).

9 minutes ago, Jaster said:

Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 8388616 bytes) in /usr/local/emhttp/plugins/dynamix/include/DefaultPageLayout.php(418) : eval()'d code on line 73

when trying to access the system log.

Since the log is already 100% full, that error (which IMHO should not appear) is pretty much expected. A reboot is the easiest way to fix the full log.
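For anyone wanting to confirm the log filesystem really is full before rebooting, a minimal sketch (assumes a Linux host; falls back to `/` if `/var/log` doesn't exist):

```python
import os
import shutil

# Report how full the log filesystem is. On Unraid, /var/log is a small
# RAM-backed tmpfs, so a runaway syslog can fill it completely and cause
# odd errors in the web UI like the one quoted above.
path = "/var/log" if os.path.exists("/var/log") else "/"
usage = shutil.disk_usage(path)
print(f"{path}: {usage.used / usage.total:.0%} used "
      f"of {usage.total // 2**20} MiB")
```

A reading near 100% confirms the diagnosis; after a reboot the tmpfs starts empty again.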


You are right: the dashboard didn't show the mover as running, so I got the diagnostics.

Now it has completed, but ALL the data is on the cache. I would expect it to be on both the cache and the array, but it is not.

 

Also, I would assume I don't have to shut down the docker engine for the nightly job to sync all the files. Anything in use won't be synced, but that should sort itself out over a couple of runs spread across a few days. Or am I getting something wrong? Maybe the docker.img could be missed, but everything else should be fine...

1 minute ago, Jaster said:

You are right: the dashboard didn't show the mover as running, so I got the diagnostics.

Now it has completed, but ALL the data is on the cache. I would expect it to be on both the cache and the array, but it is not.

Why would you expect that? The 'apps' user share is set to cache-prefer, so everything should be moved to cache.

23 minutes ago, trurl said:

Why would you expect that? The 'apps' user share is set to cache-prefer, so everything should be moved to cache.

cuz I'm too tired! sry!

 

Since RAID5 is "dangerous", is there a comfortable way to back up appdata and the VMs? I'm looking for a cron job or something similar.
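Since the thread recommends backups for btrfs RAID5, one possible shape for a scheduled backup is a plain cron + rsync job. This is a sketch only, with hypothetical paths: `/mnt/user/backups/...` is an example destination share, and `domains` is assumed to be the VM share. The CA Backup/Restore Appdata plugin is the more common Unraid answer for appdata.

```
# Hypothetical crontab entries: nightly, mirror appdata and VM images from
# the cache pool to a backup share on the parity-protected array.
0 3 * * * rsync -a --delete /mnt/cache/appdata/ /mnt/user/backups/appdata/
30 3 * * * rsync -a --delete /mnt/cache/domains/ /mnt/user/backups/domains/
```

Note that a VM image copied while the VM is running may be inconsistent; stop the VMs first (or snapshot) for a reliable backup.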

1 hour ago, Jaster said:

Also, I would assume I don't have to shut down the docker engine for the nightly job to sync all the files.

What do you mean by sync all the files?

 

You only need to stop the docker service if the core docker files themselves need to be moved, which would only happen if you changed the relevant share settings from yes to prefer or the other way around.


Archived

This topic is now archived and is closed to further replies.
