cache_dirs - an attempt to keep directory entries in RAM to prevent disk spin-up


Recommended Posts

The issue is low memory and fragmentation of it.

Keep in mind each disk used by the md driver uses low memory buffers.

The more disks you have, the more low memory is tied up and unavailable for applications.

 

So you could have 4GB (or 8GB, in my case).

With many disks and many files, that is a recipe for trouble.

 

If you add the drop_caches scriptlets into the mover, it should help alleviate some of the low memory conditions that cache_dirs aggravates.
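For anyone who wants to try that, here is a minimal sketch of the idea as a wrapper around the mover; the mover path is an assumption (adjust it to your install), and echo 3 drops the page cache plus dentries/inodes, as shown further down this thread:

sync                                # flush dirty pages so more cache can be freed
echo 3 > /proc/sys/vm/drop_caches   # drop page cache, dentries and inodes
/usr/local/sbin/mover               # assumed mover path; adjust for your setup
sync
echo 3 > /proc/sys/vm/drop_caches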

 

Link to comment

And some more food for thought:


root@unRAID:~# free -l
             total       used       free     shared    buffers     cached
Mem:       4116784    3571484     545300          0      95188    3258136
Low:        869096     571428     297668
High:      3247688    3000056     247632
-/+ buffers/cache:     218160    3898624
Swap:            0          0          0

[1]+  Done                    find /mnt/disk1 -type f > /mnt/disk1/filelist.txt

root@unRAID:/mnt/disk1# free -l
             total       used       free     shared    buffers     cached
Mem:       4116784    3620960     495824          0      82564    3086296
Low:        869096     587548     281548
High:      3247688    3033412     214276
-/+ buffers/cache:     452100    3664684
Swap:            0          0          0

[1]+  Done                    find /mnt/disk2 -type f > /mnt/disk2/filelist.txt

root@unRAID:/# free -l
             total       used       free     shared    buffers     cached
Mem:       4116784    3620732     496052          0      88324    3107092
Low:        869096     587072     282024
High:      3247688    3033660     214028
-/+ buffers/cache:     425316    3691468
Swap:            0          0          0

root@unRAID:/# wait;date;free -l
[1]+  Done                    find /mnt/disk3 -type f > /mnt/disk3/filelist.txt
Tue Feb 11 10:50:39 EST 2014
             total       used       free     shared    buffers     cached
Mem:       4116784    3666124     450660          0     209284    2964328
Low:        869096     667556     201540
High:      3247688    2998568     249120
-/+ buffers/cache:     492512    3624272
Swap:            0          0          0

root@unRAID:/# wc -l /mnt/disk*/filelist.txt 
   654959 /mnt/disk1/filelist.txt
   162707 /mnt/disk2/filelist.txt
   273430 /mnt/disk3/filelist.txt
  1091096 total
  
root@unRAID:/# free -l
             total       used       free     shared    buffers     cached
Mem:       4116784    3688312     428472          0     197536    2997952
Low:        869096     656340     212756
High:      3247688    3031972     215716
-/+ buffers/cache:     492824    3623960
Swap:            0          0          0

root@unRAID:/# echo 1 > /proc/sys/vm/drop_caches;free -l
             total       used       free     shared    buffers     cached
Mem:       4116784     736708    3380076          0      15104     240596
Low:        869096     461776     407320
High:      3247688     274932    2972756
-/+ buffers/cache:     481008    3635776
Swap:            0          0          0

root@unRAID:/# echo 2 > /proc/sys/vm/drop_caches;free -l
             total       used       free     shared    buffers     cached
Mem:       4116784     523648    3593136          0      15264     282864
Low:        869096     207076     662020
High:      3247688     316572    2931116
-/+ buffers/cache:     225520    3891264
Swap:            0          0          0

root@unRAID:/# echo 3 > /proc/sys/vm/drop_caches;free -l
             total       used       free     shared    buffers     cached
Mem:       4116784     420928    3695856          0      15052     239084
Low:        869096     147608     721488
High:      3247688     273320    2974368
-/+ buffers/cache:     166792    3949992
Swap:            0          0          0

root@unRAID:/# free -l
             total       used       free     shared    buffers     cached
Mem:       4116784     470988    3645796          0      15296     289432
Low:        869096     147596     721500
High:      3247688     323392    2924296
-/+ buffers/cache:     166260    3950524
Swap:            0          0          0

root@unRAID:/# sync

root@unRAID:/# free -l
             total       used       free     shared    buffers     cached
Mem:       4116784     493464    3623320          0      15432     311960
Low:        869096     147752     721344
High:      3247688     345712    2901976
-/+ buffers/cache:     166072    3950712
Swap:            0          0          0
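For reference, the three values written to /proc/sys/vm/drop_caches above do the following (running sync first lets more dirty pages be freed):

sync                                # write out dirty pages first
echo 1 > /proc/sys/vm/drop_caches   # free page cache only
echo 2 > /proc/sys/vm/drop_caches   # free dentries and inodes
echo 3 > /proc/sys/vm/drop_caches   # free page cache, dentries and inodes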

Link to comment
  • 2 weeks later...

I just had cache_dirs go crazy.

 

I was preclearing a disk and cache_dirs was causing a crazy system load. I killed it via the GUI and it didn't work.

 

Eventually I had to ps -ef | grep cache_dirs and then kill -9 the process.

 

There are some corner cases where cache_dirs can cause serious problems.

 

I really wish there was a cleaner way to tell the kernel "don't drop inodes, here is 4GB just for the task".

Link to comment

I really wish there was a cleaner way to tell the kernel "don't drop inodes, here is 4GB just for the task".

 

There's a kernel tunable for it. With the right setting, the kernel will only reclaim that memory when there is pressure for memory.
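Assuming the tunable meant here is vm.vfs_cache_pressure (it is the one discussed later in this thread), a quick sketch of how to inspect and lower it; 10 is just the value referenced later, not a recommendation:

sysctl vm.vfs_cache_pressure               # show the current value
sysctl -w vm.vfs_cache_pressure=10         # prefer keeping dentries/inodes cached
# or equivalently:
echo 10 > /proc/sys/vm/vfs_cache_pressure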

Part of the issue may be the 32/64-bit architecture. Once we are 64 bit, low memory may no longer be an issue.

 

 

There will always be CPU / Memory contention with a pre-clear. It runs to the disk unthrottled.

I think with badblocks, it's more throttled. Plus there's an ability to put a sleep every so often in the cycle which could ease up contention.

Link to comment

I am sourcing some new disks to try and free up a server to test 64-bit and cache_dirs. It's a huge amount of work though, so I can't see it being done in under two months.

 

I assume the kernel tunable you are referring to is cache pressure. If so, I have never seen 0 do what it is documented to do, but as you say, that again could be down to 32-bit and PAE.

 

I just find it hard to believe that we can't fix it so that one video stream, one preclear and one cache_dirs on an i5 with 16GB of RAM produce stuff like this:

 

load average: 5.91, 5.53, 5.50

 

 

 

 

Link to comment

Part of the issue may be the 32/64-bit architecture. Once we are 64 bit, low memory may no longer be an issue.

 

Interesting, because whenever anyone states memory issues are why we just want a 64-bit version of unRAID that is LIKE the 32-bit v5 version with no other bells and whistles, a bunch of people jump all over them with "what memory issues?"

 

But since we can't get a 64-bit counterpart, it's really hard to show a before and after.

Link to comment

Absolutely agree, it's a new product with an evolution of code and internal changes.

 

I suppose a fix is a fix though; I fully expect most people who can to move to v6 (especially if it turns out to be a free upgrade).

 

 

The thing that is missing from the cache_dirs conundrum is direct quantitative results that people can post to show cache_dirs is a problem: here it is doing x, y and z.

 

Typically when there is an issue you get lame stats (like my uptime ones) or the kitchen sink (like a few posts back), neither of which easily shows that cache_dirs is the problem (although we know it is, because if you stop it the problems go away).

Link to comment

I am sourcing some new disks to try and free up a server to test 64-bit and cache_dirs. It's a huge amount of work though, so I can't see it being done in under two months.

 

I assume the kernel tunable you are referring to is cache pressure. If so, I have never seen 0 do what it is documented to do, but as you say, that again could be down to 32-bit and PAE.

 

I just find it hard to believe that we can't fix it so that one video stream, one preclear and one cache_dirs on an i5 with 16GB of RAM produce stuff like this:

 

load average: 5.91, 5.53, 5.50

 

 

Well we have the ionice command that can nice down a pre-clear.

http://linux.die.net/man/1/ionice
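A hypothetical example of using it; the preclear script name, device and PID below are placeholders, not anything from this thread:

ionice -c3 ./preclear_disk.sh /dev/sdX   # start the preclear in the idle I/O class
ionice -c3 -p 12345                      # or re-class a preclear that is already running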

 

However, you will always choke the array when running a preclear no matter how much memory is available.

cache_dirs can do it also, depending on how many directories, files, depth and whatever else is going on in the array.

 

An answer to running parallel pre-clears is to use badblocks to write the 0's with a timeout parameter that will cause a context switch.

My thought however is to use a laptop or another machine to do the preclear and keep the array available until you are actually ready to use it.

 

We're talking about a few things here though.

preclear will choke your machine and it's not fair to even discuss that here.

64 bit will not solve nor assist this issue.

 

cache_dirs does what it's designed to do: find down the filesystem tree and access every file's inode.

The memory issue with cache_dirs depends on the size of your array, its width and the number of files.

64 bit "MAY" alleviate the out of memory by not eating up low memory. This is yet to be seen.

Link to comment

Part of the issue may be the 32/64-bit architecture. Once we are 64 bit, low memory may no longer be an issue.

 

Interesting, because whenever anyone states memory issues are why we just want a 64-bit version of unRAID that is LIKE the 32-bit v5 version with no other bells and whistles, a bunch of people jump all over them with "what memory issues?"

 

But since we can't get a 64-bit counterpart, it's really hard to show a before and after.

 

I had problems with 32 bit unraid, cache_dirs and no plugins. 

In fact I turned off cache_dirs and still had issues unless I would drop the cache before and after massive file operations down a tree.

 

I keep saying it depends on

 

1. How wide your array is (how many disks).

2. How much buffering the md driver is set for.

3. How many files you have on the whole array (see the sketch below this list).

4. What the kernel tunings are set for regarding memory management, pressure and buffering.
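For point 3, a quick way to get the raw counts per disk, using the same idea as the filelist.txt runs earlier in the thread:

for d in /mnt/disk[0-9]*; do
    printf '%-12s %8d entries\n' "$d" "$(find "$d" | wc -l)"
done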

Link to comment

Obviously I agree with points 1-4; that goes without question.

 

What concerns me is that we can't quantify 1-4 in such a way that we can dynamically recommend defaults, or provide a set of commands that definitively shows when cache_dirs is going OTT.

 

We have always known cache_dirs was a kludge (albeit a very skilled one), but we should be able to evolve it so that it at least doesn't create system instability, or so it pre-warns when an array is getting too wide to be cached.

 

Thinking out loud...

 

Fundamentally the problem is a simple one. unRAID can and does create situations where one folder can be 24 disks wide. A process shouldn't have to spin up the whole or large parts of the array to get a dir listing; that is crazy. Equally, a user shouldn't have to control data placement on specific disks, as this goes against the ease of use of unRAID.

 

Even if we could come up with a way to keep the cache on a cache drive, that would give people options. I would happily pay $50 for an SSD for the purpose, as the real-world perceived improvement for me with cache_dirs working is nothing short of amazing.

 

I don't think the kernel allows this kind of split in the page file, does it, i.e. a page file dedicated to inodes/dentries?

Link to comment

Thinking out loud...

 

Fundamentally the problem is a simple one. unRAID can and does create situations where one folder can be 24 disks wide. A process shouldn't have to spin up the whole or large parts of the array to get a dir listing; that is crazy. Equally, a user shouldn't have to control data placement on specific disks, as this goes against the ease of use of unRAID.

 

Even if we could come up with a way to keep the cache on a cache drive, that would give people options. I would happily pay $50 for an SSD for the purpose, as the real-world perceived improvement for me with cache_dirs working is nothing short of amazing.

 

I don't think the kernel allows this kind of split in the page file, does it, i.e. a page file dedicated to inodes/dentries?

 

 

This all may be pointless when 64 bit is mainstream and we remove the dependency on low memory.

 

 

I doubt the kernel would page (swap) out inodes. It's counterproductive. The kernel can just read the disk.

I remember years ago there was the ability to put the superblock on a different drive with ext2 or 3.

But since directories are actually files and they succumb to being buried in the file tree, I'm not sure it's worth it.

 

Another choice would be to have the user share filesystem cache directory data on the cache drive or on tmpfs (which CAN be swapped out).

Instead of going to the filesystem, the data is available in RAM (tmpfs) or on a known spinning disk (cache).
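A minimal illustration of the tmpfs variant; the mount point and size here are arbitrary examples, not anything unRAID sets up for you:

mkdir -p /mnt/dircache
mount -t tmpfs -o size=512m tmpfs /mnt/dircache   # swappable RAM-backed space for the cache data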

 

I do something like this for another project which has nothing to do with caching a directory for this purpose.

For my purpose it's to monitor a directory for changes and run an event on them (remote or local).

 

The issue then becomes synchronization.

Perhaps the user share filesystem only caches the stat blocks once you visit them. I can say this though: a find down a whole tree accessing the disk mount point takes a long time as it is.

 

What does this buy you? Not spinning a drive up?

 

Now for space/size:

I have


   654959 /mnt/disk1/filelist.txt
   162707 /mnt/disk2/filelist.txt
   275314 /mnt/disk3/filelist.txt
  1092980 total

 

on 3 data disks.


root@unRAID:/etc/cron.d# ls -l --si /mnt/disk1/filelist.txt /mnt/disk2/filelist.txt /mnt/disk3/filelist.txt
-rw-rw-rw- 1 root root 54M 2014-02-11 08:50 /mnt/disk1/filelist.txt
-rw-rw-rw- 1 root root 21M 2014-02-11 09:42 /mnt/disk2/filelist.txt
-rw-rw-rw- 1 root root 34M 2014-02-19 21:01 /mnt/disk3/filelist.txt

with filenames of this size.

Consider the size of a stat structure is 144 bytes.

We have 157,389,120 bytes for the stat blocks (1,092,980 files x 144 bytes).

 

So we need 300MB of memory to store all that stat information.
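The same back-of-the-envelope estimate as a one-liner, using the filelist.txt counts from earlier (144 bytes per stat structure; filenames and other overhead not included):

FILES=$(cat /mnt/disk*/filelist.txt | wc -l)   # ~1,092,980 in the example above
echo "$(( FILES * 144 )) bytes of stat structures ($(( FILES * 144 / 1024 / 1024 )) MB, before filenames and overhead)"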

 

I'll have to write a program to catalog all these files into a .gdbm file, sqlite table and maybe a mysql flat file to see how much space it will take.

 

While it's feasible to do this, and have a file catalog simultaneously, you've got to think of all the hours involved in building another cache mechanism to prevent a spin-up.

At that point, is it worth it?

I'm not sure you can have your cake and eat it too until we have a total flat memory model.

Link to comment

Very interesting ideas indeed.

 

I think we should keep this as a theoretical exercise until we can get some solid v6 64-bit testing under our belt.

 

I do think some of the unRAID community has a different take on this to the Linux community as a whole. We treat inodes as expensive (to save disk spin-up) whereas most treat them as cheap.

 

Both views are correct although I suspect if someone had 24 disks in their PC they might quickly change their view.

 

I don't want to reinvent the wheel for what could be an edge case although if we did your approach sounds eminently sensible.

 

I have been playing with this:

 

http://smyl.es/generating-inode-report-for-linux-ubuntu-centos-etc-with-inodes-shellbash-script/

 

It errors on unRAID, but that's only due to the lack of tmux, and the results it produces are still spot on.

 

This brings me to an idea to help reduce the problem.

 

Regardless of what mechanism caches the inode entries, a user should be able to estimate the load each included directory brings to the process in terms of inode count and, ultimately, RAM.

 

I think we would have to script something, as so far none of the off-the-shelf counters understand the concept of counting inodes only to xx folders deep, and memory usage needs some thought too.
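Something along these lines might do as a starting point; a rough sketch, with the depth value being a stand-in for whatever a user would configure:

DEPTH=3   # example depth limit
for share in /mnt/disk[0-9]*/*/; do
    n=$(find "$share" -maxdepth "$DEPTH" | wc -l)
    printf '%10d entries   %s\n' "$n" "$share"
done | sort -rn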

Link to comment

Regardless of what mechanism caches the inode entries, a user should be able to estimate the load each included directory brings to the process in terms of inode count and, ultimately, RAM.

 

I think we would have to script something, as so far none of the off-the-shelf counters understand the concept of counting inodes only to xx folders deep, and memory usage needs some thought too.

 

Here's an interesting read.

http://www.makelinux.net/books/lkd2/ch12lev1sec7

http://web.cs.wpi.edu/~claypool/courses/4513-B02/samples/linux-fs.txt

http://www.ibm.com/developerworks/library/l-virtual-filesystem-switch/

 

Now we need to find out how many of these structures are kept in low memory, and the size of each structure, to estimate the maximum number of files whose structures can be cached.
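One way to get at those numbers on a running system is the slab allocator's own accounting; a sketch, assuming root access to /proc/slabinfo (column 2 is allocated objects, column 4 is the object size in bytes):

grep -E 'dentry|inode_cache' /proc/slabinfo |
    awk '{printf "%-28s %10d objects  %5d bytes each\n", $1, $2, $4}'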

 

At that point it gets weighed against free low memory, which can become fragmented at some point in time, thus causing issues.

 

It's not just a hard limit of how many files/directories to disks.

It's what other activity is causing memory pressure and/or low memory fragmentation.

 

Which is why dropping all caches before and after the mover generally alleviates a lot of issues.

Link to comment

As an academic test, I've scanned [find] (ftw64'ed) down my whole array: 3 data disks with tons of files.

I did various tests. The first so far is caching all of the stat() blocks from a find (ftw64) down the whole /mnt tree.

 

With my various tests I found that storing this ever-growing gdbm cache on a disk proved to slow it down immensely.

 

The first test is an ftw, storing the inode as the key and the filename as the data.

I only did this test because I had the code already. I use it in another scenario to find the seed inode file on a huge 4TB disk of data on an ftp server where files are hard linked. This is mostly for rsyncing data to a remote ftp server while preserving the links and thus space.

I'm only presenting the history to explain the variance from the final intended program. It was a quick way to test something and get a benchmark as to time and space.

 

This was for my /mnt/disk1 disk only.

fetched 159396 records, deleted 0 records, stored 499364 records

files processed: 543300, stores: 499321, duplicates: 0, errors: 0

real    19m23.062s
user    0m7.680s
sys     0m42.020s

root@unRAID:/mnt/disk1/home/rcotrone/src.slacky/ftwcache# ls -l --si /tmp/ftwinocache.gdbm 
-rw-rw-r-- 1 root root 89M 2014-02-23 09:03 /tmp/ftwinocache.gdbm

 

On an ARRAY disk it went from 19m to 75m.

There's a big benefit from doing this on the ram drive even if it takes up a couple hundred MB.

 

I did a few other tests.

A sweep following the first sweep usually was much faster, thus the caching of dentries helps a great deal.

I did not receive any OOM errors on a 4 disk wide array.

 

I re-wrote the structure that is stored.

This time the filename is the key, the stat() block is the data.

This is something I've been intending to do for a long time.

I.e. my own locate program using a gdbm or sqlite file to catalog the array, then insert the md5 in the record somewhere.

files processed: 2373800, stores: 2208671, duplicates: 0, errors: 0
files processed: 2373900, stores: 2208770, duplicates: 0, errors: 0
fetched 0 records, deleted 0 records, stored 2208833 records

real    118m42.174s
user    0m39.050s
sys     3m51.320s

root@unRAID:/mnt/disk1/home/rcotrone/src.slacky/ftwcache# ls -l --si /mnt/cache/.ftwstatcache.gdbm 
-rw-rw-r-- 1 root root 347M 2014-02-23 20:37 /mnt/cache/.ftwstatcache.gdbm

 

 

I'm sure it would take longer if the ftwstatcache.gdbm were on an array drive. I did not test that.

It would be faster on a cache drive and ultimately faster on an SSD cache drive.

 

What this proves is that given 2 million files and directories, the stat structures and filenames take up about 350MB.

A little more than my earlier math, but close. I did not account for the size of filenames in the prior math.

root@unRAID:/mnt/disk1/home/rcotrone/src.slacky/ftwcache# free -l
             total       used       free     shared    buffers     cached
Mem:       4116784    3716964     399820          0      54388    3136936
Low:        869096     597000     272096
High:      3247688    3119964     127724
-/+ buffers/cache:     525640    3591144

 

free -l shows that I still have plenty of low memory which leads me to believe array width and md driver buffering have a big part to play in this too.

 

I haven't explored how fast a lookup will be with gdbm.  For a locate type database it's not going to matter that much.

But as a feasible stat cache in the user share filesystem, it could be a problem wading through 2 million records.

 

There are other possibilities here, in that cache_dirs could be re-engineered to only cache very specific parts of the array, keeping those entries in RAM on purpose via some configurable method. I'm sure there's a way to do it now, but I don't know if you can give it an array of directories and/or decide the depth.

 

Using sqlite is going to take more time. It's more useful, but at the cost of CPU cycles and time. Writing to a disk with sqlite and journaling is slow. Using the ramdisk makes it much, much faster.

 

It's probably not much slower than the old updatedb I used to do for a locate database. While I did not use cache_dirs, I used locate, which functioned the same way: run 'once', then everything is cached for a quick lookup. This is exactly what I intend to duplicate one day and store the md5sums. At least with sqlite, pulling data out of the DB will be quicker and more useful.

 

There is a final choice of mysql. Earlier tests with my inode cache proved that mysql with 'flat files', i.e. no DB process, functions quite fast. Faster than sqlite and a little faster than gdbm. However, that now gets into a heavier load with library dependencies.

 

Ideally this is the best choice. If limetech were to compile the mysql flat file libraries as shared libraries and link the php application to them, we would have some pretty rich local flat file db access without the need of a DB process.

 

sqlite is simpler. You can statically compile binaries, a shell tool and even a loadable bash module to access the database with SQL.
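As a trivial example of that shell access against the cache file built above (the table and column names here are hypothetical, since the schema isn't shown):

sqlite3 /mnt/cache/ftwstatcache_sqlite.sqlite3 \
    "SELECT COUNT(*) FROM files WHERE name LIKE '/mnt/disk1/%';"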

With GDBM I have a bash loadable module that allows access to the gdbm file, however it's not SQL access.

 

I know I'm deviating off the core topic here, but I wanted to present my findings for the following reasons.

 

1.  Presents an idea of memory requirements to cache the stat data for an array.

2.  Possible ideas of how the data can be cached, with a few mechanisms.

3.  Provide food for thought.  Yes the usershare filesystem could possibly cache all of the stat blocks it visits.

    A. Should it?

    B. Think of the memory requirements.

    C. Think of the time it takes to look the data up.

    D. Think of the man hours it's going to take to program it and then test it.

    E. Is it really worth all that to save spinning up a disk?

 


Link to comment

The more I read about this, the more I feel like Alice down the rabbit hole.

 

Paraphrased interim reply:

 

"There are other possibilities here in that cache_dirs is re-engineered to only cache very specific parts of the array keeping those entries in ram on purpose via some configurable method.  I'm sure there's a way to do it now, but I don't know if you can give it an array of directories and/or decide the depth. "

 

I think this is fundamental. cache_dirs, or a supplemental tool, should, on initial setup, help the user understand what the best setup for them is. Blindly accepting ALL, or hoping that setting a max depth of 3 etc. will do, is far from ideal. A user should be able to refine it, and I suspect many problems come from just this. Obviously presenting good info needs a whole array scan, so it's not a 2-second job, but I think users would find it worth it.

 

 

The experiments you made are very interesting. Timings aside, the MB numbers of the results are comparatively small. Certainly users with large amounts of memory could accommodate a few hundred MB of reserved space; the days when 512MB was a lot of memory are gone.

 

I am considering risking moving one of my main arrays to the v6 beta just to keep the momentum up with this. I am not excited about it, but the only other option I have is VMware, which I am not sure would be a good test bed for this low-level stuff.

 

Link to comment

Well, the point of these tests is to show how much the raw stat structures and names take up.

The dentry structure is larger than a stat structure, and caching the name/stat another way is feasible, but at a cost.

 

A variant sqlite table of the name, inode, mtime, size and space for the md5 proves to take longer and is larger.

The price of SQL access.

sqlite table
files processed: 2208100, sql inserts: 2208088, sql duplicates: 0, sql errors: 12
files processed: 2208200, sql inserts: 2208188, sql duplicates: 0, sql errors: 12

selects 0, deletes 0, duplicates 0, inserts 2208248, errors 12 

real    138m42.533s
user    8m3.230s
sys     6m49.720s

root@unRAID:/mnt/disk1/home/rcotrone/src.slacky/ftwcache# ls -l --si /mnt/cache/ftwstatcache_sqlite.sqlite3 
-rw-r--r-- 1 root root 696M 2014-02-24 06:29 /mnt/cache/ftwstatcache_sqlite.sqlite3

 

I'm running a second pass. The second pass will not insert any new data, just find duplicates. It's more of a timing test.

I should probably install mlocate again and do that as a size / time test.

 

The issue isn't Xen or VMware; it's a 64-bit flat memory model with no low memory bounds.

Link to comment

700MB still sits well with me. Perhaps I am an extreme case, but 8GB of RAM would still sit well with me, as it represents a very small percentage investment for a very real-world gain.

 

....

The issue isn't Xen or VMware; it's a 64-bit flat memory model with no low memory bounds.

 

To be fair, testing for a memory thing on a system that has its own memory gotchas is not ideal; that is the point I was making. Virtualisation does a good job of making your machine not know it isn't on bare metal, but under the hood quite a bit of cleverness is happening.

 

You are probably right though it might not make a difference in this case.

Link to comment

So as I was doing another test last night, I set vm.vfs_cache_pressure=0 and bam, I crashed. That was no fun. I had to do a hard power cycle.

 

While I did have something monitoring low memory for a while, at some point it was exhausted.

I have 4GB and I was only scanning the /mnt/disk partitions on 3 disks.

 

At the default of cache_pressure=10 I had no issues.

So this may be the magic number to adjust per array usage to prevent crashes.

 

I had another thought: if cache_dirs monitored low memory with free -l | grep Low, it could make an emergency judgement call and either pause or drop the page cache (not the dentry/inode cache). This would free up RAM rapidly and possibly defragment low memory.

 

Another choice is a separate program that monitors low memory and does the emergency cache drop.  But at what level?
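A rough sketch of what such a watchdog could look like; the threshold and interval are arbitrary placeholders (that "what level?" question is still open), and it drops only the page cache (echo 1) so cached dentries/inodes are left alone:

THRESHOLD_KB=50000      # arbitrary example, tune per system
while sleep 30; do
    low_free=$(free -l | awk '/^Low:/ {print $4}')
    if [ "$low_free" -lt "$THRESHOLD_KB" ]; then
        logger "low memory: ${low_free}kB free, dropping page cache"
        sync
        echo 1 > /proc/sys/vm/drop_caches
    fi
done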

Link to comment

Much as I am loath to admit it, the combination of "vm.vfs_cache_pressure=0", a RAM-based OS and no page file is probably a recipe for disaster, just as you have seen.

 

unRAID in general would probably benefit from some sort of sanity-checking RAM monitor and some more thought about OOM handling, given all the addons that are about now and how cheap 4TB disks are.

 

I also wonder about buying an SSD solely for a page file. Conventional wisdom is that SSD and page file is not a good idea, but unRAID isn't a conventional setup. A 30GB+ SSD-based page file would cost a few tens of bucks. It is a bit of a brute-force approach, but it's a cheap one that will only get cheaper.

Link to comment

Much as I am loath to admit it, the combination of "vm.vfs_cache_pressure=0", a RAM-based OS and no page file is probably a recipe for disaster, just as you have seen.

 

unRAID in general would probably benefit from some sort of sanity-checking RAM monitor and some more thought about OOM handling, given all the addons that are about now and how cheap 4TB disks are.

 

I also wonder about buying an SSD solely for a page file. Conventional wisdom is that SSD and page file is not a good idea, but unRAID isn't a conventional setup. A 30GB+ SSD-based page file would cost a few tens of bucks. It is a bit of a brute-force approach, but it's a cheap one that will only get cheaper.

 

If root were on tmpfs rather than rootfs, it would be worth it.

It's not currently worth it unless you do some other migration off rootfs and use tmpfs more.

 

I.e. if you are going to move /var to a tmpfs by some fancy moving.

While the swap file will help with really memory-hungry apps, you'll see that for normal unRAID usage it will hardly come into play. It will not serve you as you think... yet.

 

I voted to have /var on tmpfs, Tom only wanted /var/log on tmpfs.

If I had my way, all of root would be on tmpfs so unused portions could be swapped out.

 

>> Conventional wisdom is SSD and page file is not a good idea,

and I've also read to the contrary that a page/swap file on SSD makes good sense since it's the fastest piece of static space.

Link to comment

What does "-u = also scan /mnt/user (scan user shares)" actually do differently than running without that option?

 

Also, my disks aren't spinning down even though I don't have "force them busy" set up; I suspect caching might not be set up correctly. Any other option which might cause this?

Link to comment

What does "-u = also scan /mnt/user (scan user shares)" actually do differently than running without that option?

Normally, /mnt/user/... is not scanned, since it exists only in memory.

If -u is used, it too is scanned, in addition to /mnt/diskX/...

Link to comment
