
[v4.7] How much to fill up each drive? Performance issues when >99%?


Recommended Posts

I’m looking to get a discussion going on how much free space should be left on each drive. Figure it may be relevant for a few of us now with the Thai floods and all.

 

I’ve been in the bad habit of sometimes filling my drives up to 99%+, and have noticed performance issues when writing to the array with drives that full – has anybody else noticed this effect as well, and is there a good reason for it? The problems I sometimes see are when writing to the user share from standard XP and W7 machines: very low write speeds (below 1MB/s reported in TeraCopy) and/or the share becoming unavailable for 10 to 30 seconds (this has happened even when writing small files like MP3s).

 

Only filling drives up to 95% or so seems to improve the situation – but as said, I’d like to hear whether others are seeing this as well, and I'd also be interested in the technical details of why it happens. If good info pops up, maybe I’ll make the effort to add it to the FAQ (yes, I checked, and there's no mention of it).
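For anyone wanting to keep an eye on this, here is a quick sketch of a fill-level check. The `/mnt/disk*` paths and the 95% threshold are my own assumptions for illustration, not anything prescribed in this thread:

```shell
# Warn about any data disk past a fill threshold (assumed 95% here).
THRESHOLD=95
df -P /mnt/disk* 2>/dev/null | awk -v limit="$THRESHOLD" '
    NR > 1 {
        use = $5; sub(/%/, "", use)       # strip the % from the Capacity column
        if (use + 0 > limit)
            printf "%s is %s%% full - stop adding files\n", $6, use
    }'
```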

 


I've always had issues with reiserfs and filesystems with lots of small files.

 

I generally stop adding files at around 98% unless I have to. But even the last 2% of a 2TB drive is about 40GB.

 

I believe the 10 to 30 second pause is the kernel searching the filesystem tree for free blocks, or perhaps the filesystem adding area to the superblock.

 

I get the pause all the time on nearly full filesystems with a lot of files.

What I've begun to experiment with is sweeping the filesystem with find before I add a file from one of the Windows machines.

 

I do this because one of the workstations is a torrent client, and the delay while waiting for drive spin-up and then sweeping down the superblock to add a file is so long that torrents time out. Actually, any operation on the SMB share times out, causing them to fail.

 

So before I do any write operation, I run a dir /s /p from Windows, or a find on the unRAID server.
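A minimal sketch of that sweep on the server side: walking the tree pulls the directory entries into the kernel's dentry cache, so a following write doesn't stall while the disk is searched. `/mnt/user/Media` is a hypothetical share name:

```shell
# Warm the dentry cache by walking the share's directory tree;
# the listing itself is discarded.
find /mnt/user/Media > /dev/null 2>&1

# Timing a cold sweep versus an immediate repeat shows how much
# of the tree the cache is holding:
time find /mnt/user/Media > /dev/null 2>&1
```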

 

I know of cache_dirs, but I have way too many files for that to be of use.

I think my file catalog is now at 12,000,000 files on an 18TB array.

 


Yes, all filesystems will have problems at high fill. I know it’s a bad habit – but I’m sure I’m not the first to try it, and I won’t be the last. No criticism of unRAID intended over the pauses/unavailability – it’s just that Windows with FAT32 and NTFS handles 99.999% fill and severe fragmentation a lot better. The real issue is that noobs like myself need to know about this behavior so we can avoid the problem, but this advice doesn't seem to be offered in the FAQ (and I don’t think it’s mentioned anywhere else in the wiki either).

 

Thanks for that insight – I’m relieved to learn these pauses are in fact related to drive fill above 99%. Wonderful when a problem is as easily solved as this one.

 

Now, on the nitty gritty of the proposed dir /s workaround: I’ll just make the observation that it might fail in some cases – either when the server is really low on RAM, or if one uses cache_dirs with a depth setting that takes most of the available memory. (Imagine a dir /s on a user share with millions of files; while that dir /s is working, cache_dirs keeps running in the background, refreshing the cache with sweeps of all the other user shares and overwriting the cached entries of the share one was scanning.) On the other hand, it sounds like it would be worth a try for some users, and if it doesn’t work, one could try adjusting cache_dirs to a depth setting low enough that it doesn’t need all the available RAM (or add more).

 

If anyone else is reading – feel free to chime in on what maximum drive fill percentage you can use without problems.

 

PS. Love the x-mas avatar, Weebo  :)

 


I think the delay when adding new files to a 99% full drive is also a factor of drive size (and speed).

 

Searching the filesystem for free blocks/inodes on a 99% full 500GB drive will take less time than on a 2TB 5400 RPM drive.

 

On my system cachedirs has minimal effect with 20 drives and over 12 million files.

 

Doing a dir /s reads the drive's (or share's) whole directory tree into memory, and after that has completed I know I can add a new file without a pause.

 

As far as cache_dirs using up all the available RAM goes – that isn't the case. There is a fixed-size set of dentry elements in the kernel.

Even with my 8GB of RAM, it cannot cache all of the filesystem stat info.

 

I did try changing some kernel parameters on boot.

 

What ended up happening is that the allocation of low memory for dentries was increased. This helped a little, but if I did anything else memory-intensive, such as a big rsync, it would cause an out-of-memory condition later on.

 

I'll add again: it failed with OOM not because RAM filled up, but because of kernel storage in low memory.
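For anyone wanting to repeat this kind of tuning, one real knob in this area is vm.vfs_cache_pressure – though that is my guess at the kind of parameter involved, not necessarily what was changed above:

```shell
# vm.vfs_cache_pressure controls how aggressively the kernel reclaims
# dentry/inode cache memory relative to the page cache. The default is
# 100; lower values keep directory entries cached longer (0 risks OOM).
# Requires root.
sysctl vm.vfs_cache_pressure          # show the current value
sysctl -w vm.vfs_cache_pressure=50    # reclaim dentries less aggressively
```

This only influences how long cached dentries survive memory pressure; it doesn't change the fixed low-memory limits discussed above.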


It is too late for me to catch whether it has been said already, but if memory serves me correctly, I remember reading somewhere from Joe L that with the filesystem unRAID uses, you need to leave at least as much free space as the biggest file on that drive... basically, every file being read needs the same amount of space available.

 

And if I'm totally off, then what can I say... it's late, lol.
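If that rule of thumb is right, it's easy to sanity-check: compare a disk's free space against its largest file. `/mnt/disk1` is a hypothetical mount point, and `-printf` assumes GNU find (which unRAID ships):

```shell
# Compare free space on a disk against its biggest file.
disk=/mnt/disk1
free_kb=$(df -Pk "$disk" 2>/dev/null | awk 'NR == 2 { print $4 }')
free_bytes=$(( ${free_kb:-0} * 1024 ))
big_bytes=$(find "$disk" -type f -printf '%s\n' 2>/dev/null | sort -n | tail -1)
echo "free: $free_bytes bytes, biggest file: ${big_bytes:-0} bytes"
if [ "$free_bytes" -ge "${big_bytes:-0}" ]; then
    echo "enough headroom"
else
    echo "less free space than the biggest file - expect trouble"
fi
```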


Archived

This topic is now archived and is closed to further replies.
