vfs_cache_pressure



I have recently started to play again with vfs_cache_pressure and the other tunables in combination with the cache directory addon.

 

Initial impressions are that, as before, dropping it down to 1 makes a big difference to browsing responsiveness, although that is a bit too subjective for my liking.

 

I have ordered another 16GB of RAM just to test this, and was wondering if anyone else is interested in discussing this age-old topic again?

Link to comment

I'd be very interested in seeing test results at various values.

 

In the Dynamix Plugins section of the Upgrading to UnRAID v6 guide, I added some comments and recommendations for CacheDirs, including the following:


CacheDirs modifies vm.vfs_cache_pressure, a system parameter governing how aggressively the file and folder dir entries are kept in a cache. The Linux system default is '100', which is considered a "fair" value. Lower values like '1' or '10' or '50' will keep more dir entries cached, '200' would allow them to be more easily dropped, resulting in more drive spinups. The most aggressive would be '0', but unfortunately it may introduce a small risk of out-of-memory conditions, when other memory needs cannot be satisfied because dir entries are hogging it!

By default, CacheDirs sets it to '10', which is a good value for most users. If you set it to '100', then it will remain the same as the Linux default value.

If you wish to change it, add a -p # (that's a lowercase p and a number) to the User defined options field on the Folder Caching settings page. For example, to set it to more aggressively protect your cached dir entries, enter -p 1 in the options field. To avoid any possible side effects, add the parameter -p 100, which will restore it to the system default.
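
For anyone who would rather check or change the value by hand rather than through the plugin's -p option, the equivalent sysctl commands (run as root) are:

# show the current value (the Linux default is 100; CacheDirs' default of -p 10 sets it to 10)
sysctl vm.vfs_cache_pressure

# set it manually, e.g. to 1; this lasts until the next reboot
sysctl -w vm.vfs_cache_pressure=1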


Link to comment

It never even occurred to me that CacheDirs would be changing the default. I assumed the value of 10 was an unRAID-changed default.

 

So previously I set it manually using `sysctl`, which I will now revert in favour of applying `-p 1`.

 

What I am unclear on is how to quantify the results. The various comparisons I can find online involve using dd to flood the RAM and then timing ls. I am sure that is a reasonable enough test on a vanilla system, but I want to test on a busy, non-artificial live platform.

 

Given that, I think I need to track cached inode stats and get a reasonable idea of what number represents a complete in-memory cache of everything. I also want to work out how much RAM this actually needs, with a view to being able to advise the pro user who wants to use a cache pressure of 0.
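
The kernel already keeps running counts of cached inodes and dentries, which should make before/after sampling straightforward:

# total and free in-memory inodes
cat /proc/sys/fs/inode-nr

# dentry cache counters (nr_dentry, nr_unused, ...)
cat /proc/sys/fs/dentry-state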

Link to comment

Thanks for that link. It is both useful in itself and probably the bug that inspired me to revisit cache pressure.

 

Based on the ~4kB-per-inode approximation in that thread, working backwards allows us to state a first rule of thumb:

 

With a cache pressure of 0, so that inodes are not expelled during normal use, you need roughly 1GB of dedicated RAM for every 250,000 files and folders.
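
As a rough sanity check of that figure, you can count the entries on a share and scale by ~4kB each (the share path here is only an example):

# count files + directories under a share
COUNT=$(find /mnt/user/Media -xdev | wc -l)

# ~4kB per cached entry gives an approximate RAM requirement
echo "$COUNT entries, roughly $(( COUNT * 4 / 1024 )) MB of RAM"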

 

Now, before I move on, I find this interesting. If I run `df -i` to show inodes per disk, only the XFS disks give information on the md device:

 

e.g.

reiserfs

Filesystem     Inodes IUsed IFree IUse% Mounted on

/dev/md9            0     0     0     - /mnt/disk9

xfs

Filesystem     Inodes IUsed IFree IUse% Mounted on
/dev/md10         56M   67K   56M    1% /mnt/disk10

 

and the totals of user0/user do not tally with the sum of the drives:


 

Filesystem     Inodes IUsed IFree IUse% Mounted on

shfs             301M  972K  300M    1% /mnt/user0
shfs             534M  1.4M  532M    1% /mnt/user

 

I would like to know why this is.
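
For a direct comparison, the per-disk IUsed figures can be summed and set against the user shares; note that the ReiserFS disks report zero here, so the sum only covers the XFS disks:

# sum IUsed across the array disks and compare with the fuse mounts
df -i /mnt/disk* | awk 'NR > 1 { used += $3 } END { print used " inodes in use across the disks" }'
df -i /mnt/user0 /mnt/user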

Link to comment

Rather than wait for a reply on the above, let's move on.

 

We can see a lot of inode stats using:

 

cat /proc/slabinfo | grep inode

 

Using my cache share with a pressure of 0:

 

mqueue_inode_cache     72     72    896   18    4 : tunables    0    0    0 : slabdata      4      4      0
v9fs_inode_cache       0      0    648   25    4 : tunables    0    0    0 : slabdata      0      0      0
xfs_inode         1198999 1199898    960   17    4 : tunables    0    0    0 : slabdata  70817  70817      0
udf_inode_cache        0      0    728   22    4 : tunables    0    0    0 : slabdata      0      0      0
fuse_inode        138619 138690    768   21    4 : tunables    0    0    0 : slabdata   6698   6698      0
cifs_inode_cache       0      0    736   22    4 : tunables    0    0    0 : slabdata      0      0      0
nfs_inode_cache        0      0    936   17    4 : tunables    0    0    0 : slabdata      0      0      0
isofs_inode_cache      0      0    624   26    4 : tunables    0    0    0 : slabdata      0      0      0
fat_inode_cache      644    644    712   23    4 : tunables    0    0    0 : slabdata     28     28      0
ext4_inode_cache       0      0   1024   16    4 : tunables    0    0    0 : slabdata      0      0      0
reiser_inode_cache 111210 111210    736   22    4 : tunables    0    0    0 : slabdata   5055   5055      0
rpc_inode_cache        0      0    640   25    4 : tunables    0    0    0 : slabdata      0      0      0
inotify_inode_mark    960   1242     88   46    1 : tunables    0    0    0 : slabdata     27     27      0
sock_inode_cache    1911   1975    640   25    4 : tunables    0    0    0 : slabdata     79     79      0
proc_inode_cache   16498  17506    632   25    4 : tunables    0    0    0 : slabdata    701    701      0
shmem_inode_cache  16927  16968    680   24    4 : tunables    0    0    0 : slabdata    707    707      0
inode_cache        58436  58772    576   28    4 : tunables    0    0    0 : slabdata   2099   2099      0

 

Then we run `tree` and see:

 

6085 directories, 310076 files

 

Then we time an `ls -R`:

 

real    0m4.753s
user    0m1.146s
sys     0m0.795s
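
For reference, that timing is the sort of figure produced by something like the following (the share path is only an example):

# list everything recursively, discarding the output so only traversal time is measured
time ls -R /mnt/user/Media > /dev/null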

and finally we look at slabinfo again:

 

btrfs_inode        35483  37680   1072   30    8 : tunables    0    0    0 : slabdata   1256   1256      0
mqueue_inode_cache     72     72    896   18    4 : tunables    0    0    0 : slabdata      4      4      0
v9fs_inode_cache       0      0    648   25    4 : tunables    0    0    0 : slabdata      0      0      0
xfs_inode         1198994 1199898    960   17    4 : tunables    0    0    0 : slabdata  70817  70817      0
udf_inode_cache        0      0    728   22    4 : tunables    0    0    0 : slabdata      0      0      0
fuse_inode        138619 138690    768   21    4 : tunables    0    0    0 : slabdata   6698   6698      0
cifs_inode_cache       0      0    736   22    4 : tunables    0    0    0 : slabdata      0      0      0
nfs_inode_cache        0      0    936   17    4 : tunables    0    0    0 : slabdata      0      0      0
isofs_inode_cache      0      0    624   26    4 : tunables    0    0    0 : slabdata      0      0      0
fat_inode_cache      644    644    712   23    4 : tunables    0    0    0 : slabdata     28     28      0
ext4_inode_cache       0      0   1024   16    4 : tunables    0    0    0 : slabdata      0      0      0
reiser_inode_cache 111210 111210    736   22    4 : tunables    0    0    0 : slabdata   5055   5055      0
rpc_inode_cache        0      0    640   25    4 : tunables    0    0    0 : slabdata      0      0      0
inotify_inode_mark    960   1242     88   46    1 : tunables    0    0    0 : slabdata     27     27      0
sock_inode_cache    1911   1975    640   25    4 : tunables    0    0    0 : slabdata     79     79      0
proc_inode_cache   16510  17506    632   25    4 : tunables    0    0    0 : slabdata    701    701      0
shmem_inode_cache  16927  16968    680   24    4 : tunables    0    0    0 : slabdata    707    707      0
inode_cache        58436  58772    576   28    4 : tunables    0    0    0 : slabdata   2099   2099      0

 

Whilst I am not sure that these are the only numbers that matter, what I can see is that the ones that appear to be important (inode_cache, xfs_inode, etc.) all remain static with a pressure of 0.
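
To put an approximate RAM figure against those caches, the active object count can be multiplied by the object size for each inode slab (this only counts the slab objects themselves; dentries and other metadata come on top):

# approximate memory held by each inode-related slab cache: active_objs * objsize
awk '/inode/ { printf "%-22s %8.1f MB\n", $1, $2 * $4 / 1048576 }' /proc/slabinfo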

 

Now, if we copy a multi-GB file and look again:

 

btrfs_inode        35483  37680   1072   30    8 : tunables    0    0    0 : slabdata   1256   1256      0
mqueue_inode_cache     72     72    896   18    4 : tunables    0    0    0 : slabdata      4      4      0
v9fs_inode_cache       0      0    648   25    4 : tunables    0    0    0 : slabdata      0      0      0
xfs_inode         1199018 1199898    960   17    4 : tunables    0    0    0 : slabdata  70817  70817      0
udf_inode_cache        0      0    728   22    4 : tunables    0    0    0 : slabdata      0      0      0
fuse_inode        138619 138690    768   21    4 : tunables    0    0    0 : slabdata   6698   6698      0
cifs_inode_cache       0      0    736   22    4 : tunables    0    0    0 : slabdata      0      0      0
nfs_inode_cache        0      0    936   17    4 : tunables    0    0    0 : slabdata      0      0      0
isofs_inode_cache      0      0    624   26    4 : tunables    0    0    0 : slabdata      0      0      0
fat_inode_cache      644    644    712   23    4 : tunables    0    0    0 : slabdata     28     28      0
ext4_inode_cache       0      0   1024   16    4 : tunables    0    0    0 : slabdata      0      0      0
reiser_inode_cache 111210 111210    736   22    4 : tunables    0    0    0 : slabdata   5055   5055      0
rpc_inode_cache        0      0    640   25    4 : tunables    0    0    0 : slabdata      0      0      0
inotify_inode_mark    960   1242     88   46    1 : tunables    0    0    0 : slabdata     27     27      0
sock_inode_cache    1911   1975    640   25    4 : tunables    0    0    0 : slabdata     79     79      0
proc_inode_cache   16484  17506    632   25    4 : tunables    0    0    0 : slabdata    701    701      0
shmem_inode_cache  16927  16968    680   24    4 : tunables    0    0    0 : slabdata    707    707      0
inode_cache        58436  58772    576   28    4 : tunables    0    0    0 : slabdata   2099   2099      0

 

The numbers are all but identical.

 

This allows us to reasonably assume that our second rule of thumb is correct (as per the docs):

 

With a cache pressure of 0, assuming you have enough RAM, inodes will not be flushed by other file activity as they usually would be.
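
A quick, repeatable version of the check above is to snapshot the interesting slab lines, write a large file, and snapshot them again (the scratch path is only an example):

# before: note the active counts for the inode caches
grep -E '^(xfs_inode|reiser_inode_cache|inode_cache) ' /proc/slabinfo

# write ~8GB of throwaway data to put pressure on the page cache
dd if=/dev/zero of=/mnt/user/scratch/testfile bs=1M count=8192

# after: the counts should be essentially unchanged at pressure 0
grep -E '^(xfs_inode|reiser_inode_cache|inode_cache) ' /proc/slabinfo
rm /mnt/user/scratch/testfile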

Link to comment

At one point I had 4GB, and later 8GB, of RAM with cache_pressure=0 and I would run out of RAM.
In my particular case I was rsyncing data to a dated backup using --link-dest=, and I had to back off to cache_pressure=10.
There's another kernel parameter to expand the dentry queue; at that time, with that kernel, I could not find an advantage in it.
What I did find to provide an advantage was an older kernel parameter, sysctl vm.highmem_is_dirtyable=1.
That changed the caching/flushing behavior of the system; however, I'm not sure how that would aid cache_pressure vs cached inodes.
It's not just about the inodes themselves: from what I had read in the past, the dentry queue comes into play too.
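
If anyone wants to watch the dentry side alongside the inode numbers, the current counters are exposed here (vm.highmem_is_dirtyable only exists on 32-bit highmem kernels, so it may not be present on a 64-bit unRAID build):

# dentry cache counters (nr_dentry, nr_unused, ...)
cat /proc/sys/fs/dentry-state

# the older parameter mentioned above, if the kernel provides it
sysctl vm.highmem_is_dirtyable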

 

Link to comment

Interesting update.

 

With the exact same setup as before, except for moving to 6.3.3 and adding 16GB more RAM, some of the stats have increased:

 

btrfs_inode        18189  19950   1072   30    8 : tunables    0    0    0 : slabdata    665    665      0
mqueue_inode_cache     72     72    896   18    4 : tunables    0    0    0 : slabdata      4      4      0
v9fs_inode_cache       0      0    648   25    4 : tunables    0    0    0 : slabdata      0      0      0
xfs_inode         1481005 1481272    960   17    4 : tunables    0    0    0 : slabdata  88842  88842      0
udf_inode_cache        0      0    728   22    4 : tunables    0    0    0 : slabdata      0      0      0
fuse_inode        1769711 1769843    768   21    4 : tunables    0    0    0 : slabdata  85799  85799      0
cifs_inode_cache       0      0    736   22    4 : tunables    0    0    0 : slabdata      0      0      0
nfs_inode_cache        0      0    936   17    4 : tunables    0    0    0 : slabdata      0      0      0
isofs_inode_cache      0      0    624   26    4 : tunables    0    0    0 : slabdata      0      0      0
fat_inode_cache      644    644    712   23    4 : tunables    0    0    0 : slabdata     28     28      0
ext4_inode_cache       0      0   1024   16    4 : tunables    0    0    0 : slabdata      0      0      0
reiser_inode_cache 402622 402622    736   22    4 : tunables    0    0    0 : slabdata  18301  18301      0
rpc_inode_cache        0      0    640   25    4 : tunables    0    0    0 : slabdata      0      0      0
inotify_inode_mark    184    184     88   46    1 : tunables    0    0    0 : slabdata      4      4      0
sock_inode_cache    1675   1675    640   25    4 : tunables    0    0    0 : slabdata     67     67      0
proc_inode_cache   53632  53775    632   25    4 : tunables    0    0    0 : slabdata   2151   2151      0
shmem_inode_cache  16848  16848    680   24    4 : tunables    0    0    0 : slabdata    702    702      0
inode_cache         7840   7840    576   28    4 : tunables    0    0    0 : slabdata    280    280      0

 

I can only assume I have cached some inodes that are excluded from cache_dirs. I will see how the numbers change over time.
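
One way to follow this over time is to watch how much reclaimable slab memory the kernel is holding, since the inode and dentry caches live there:

# slab totals; SReclaimable includes the inode and dentry caches
grep -E 'Slab|SReclaimable|SUnreclaim' /proc/meminfo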

 

 

Link to comment
