piotrasd Posted August 31, 2014
Add rules to run the mover depending on free space (not time, like now). Example: run the mover if free space on the cache disk is lower than 15% (with the possibility to configure the percentage).
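Purely as a sketch of the requested rule (the mount point, threshold, and mover command here are assumptions for illustration, not Unraid's actual internals), the check is just a periodic free-space comparison:

```python
import shutil
import subprocess

CACHE_PATH = "/mnt/cache"   # assumed cache mount point
THRESHOLD_PCT = 15          # the user-configurable percentage from the request

def free_percent(path):
    """Free space on the filesystem containing `path`, as a percentage."""
    usage = shutil.disk_usage(path)
    return usage.free / usage.total * 100

def should_run_mover(free_pct, threshold_pct=THRESHOLD_PCT):
    """True once free space has dropped below the configured threshold."""
    return free_pct < threshold_pct

def check_and_run(path=CACHE_PATH, mover_cmd=("/usr/local/sbin/mover",)):
    """Cron-friendly entry point; the mover path is a hypothetical example."""
    if should_run_mover(free_percent(path)):
        subprocess.run(list(mover_cmd), check=False)
```

Run from cron every few minutes, this would trigger the mover on space pressure instead of on a schedule.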
limetech Posted August 31, 2014
That's an interesting idea now that it's possible to have redundancy via the cache pool. Previously we wanted to get data off the cache as soon as practical, because if the cache device fails you lose all the data on it. But with a multi-device cache pool it's not so critical to get data off. I like it.
bnevets27 Posted August 31, 2014
Also solves the problem of the cache drive getting full and therefore not allowing any writing to the share.
itimpi Posted August 31, 2014
bnevets27 said: Also solves the problem of the cache drive getting full and therefore not allowing any writing to the share.
As long as you have Min Free Space set to be more than the largest file you want to copy to the share, then unRAID already handles this. Once the free space falls below the Min Free Space value, unRAID starts writing directly to the array data drives.
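The overflow behaviour described here can be sketched as a simple decision made at file-creation time (illustrative only, not Unraid's actual code):

```python
def choose_write_target(cache_free_bytes, min_free_bytes):
    """Where a new file on a cached share lands, per the Min Free Space rule.

    Note the check happens when the file is created, before its final size
    is known -- which is why Min Free Space should be set larger than the
    biggest file you expect to copy.
    """
    if cache_free_bytes < min_free_bytes:
        return "array"   # bypass the cache, write directly to the array
    return "cache"
```
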
NAS Posted August 31, 2014
There is at least one outstanding bug where this does not actually happen in at least some cases.
itimpi Posted August 31, 2014
NAS said: There is at least one outstanding bug where this does not actually happen in at least some cases.
I believe that there is a bug in the current beta 7 GUI code to do with recognizing suffixes when trying to set the Min Free Space value. Not sure if that is relevant if the value is already set (or you set it manually via a text editor).
NAS Posted August 31, 2014
I think the unit bug is fixed in b7. However, there is a bug where if 0 is not set as a minimum, a non-infinite value is used. Regardless of this, I often see the cache dir run out of space, being filled up with small files, i.e. another bug. I only point it out because currently what is supposed to happen (and has happened in the past) doesn't always work. This ticket is still a good idea, although to be fair it has been raised at least twice before. I say this just to point out how well this new process is working, and it shouldn't be lost again.
limetech Posted August 31, 2014
NAS said: Regardless of this i often see the cache dir run out of space being filled up with small files. i.e. another bug.
After reading this post and having a moment of lucidity, I think it's dawned on me how this can happen. If new files are being written to a cached share, and those files are beyond the 'split level' in the directory hierarchy, the code that prevents splitting kicks in and forces new files to stay on the device where the parent directory exists, in this case the cache. The free space is not taken into consideration in this case, but with the cache disk it should be. That's the bug. Sound like this is your scenario? And I was just about to release -beta8....
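The bug limetech describes can be sketched as an allocation decision where the split-level pinning used to win unconditionally (a hypothetical reconstruction for illustration, not Unraid's real allocator):

```python
def allocate_disk(parent_disk, past_split_level, cache_free, min_free):
    """Pick the device for a new file on a cached share (illustrative).

    The bug: the original logic returned `parent_disk` whenever the path
    was past the split level, skipping the free-space check below, so
    small files kept piling onto a nearly full cache.
    """
    if past_split_level and parent_disk == "cache":
        if cache_free >= min_free:
            return "cache"   # keep the files together on the cache
        return "array"       # the fix: honour free space even when pinned
    return "array"
```
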
BRiT Posted August 31, 2014
Release -beta8 anyways.
NAS Posted August 31, 2014
limetech said: Sound like this is your scenario? And I was just about to release -beta8....
Sounds like that could be it. Get b8 out and we can look at this for b9. Nice one.
itimpi Posted September 1, 2014
NAS said: Sounds like that could be it. Get b8 out and we can look at this for b9. Nice one.
Looks as though the fix was easy once the cause was identified, as it appears to have made beta 8.
jumperalex Posted September 1, 2014
Keep in mind the "min free space" (when working correctly) only stops you from writing to the cache, and then either (I can't remember) fails the copy operation or defaults to writing directly to the array. That is "OK", but it would be a much more elegant and seamless option (yes option, not forced choice) if the mover script could kick off after you've hit that min-free-space (or %) limit, so the user doesn't have to deal with it. I'd also say this should have been, and should be, the functionality all along, regardless of the cache's new redundancy ability. Consider that "moving stuff off the cache as quickly as possible" makes sense if you are of the mindset that the cache might take days to fill. But what about after large writes (and/or if you're running with a smaller SSD as a cache) that can fill it well before the standard daily mover run? Of course nothing stops us setting the mover to run more often, but that seems a bit kludgy after all. In fact, if you want to talk about getting the data off the cache as quickly as possible, then doing it automatically after a huge amount of data has been copied to it seems like a no-brainer.
ptr727 Posted May 3, 2019
Yes please. The cache needs to be transparent and never interfere with file operations. Set cache high and low watermarks, as a % of space or absolute space. High watermark: start moving files, e.g. at less than 10% free, start moving. Low watermark: stop moving files, e.g. at more than 50% free, stop moving. Pick files to move by age and access count, e.g. move the least-accessed files first. Many well-behaved apps and bulk copy apps, e.g. robocopy, will reserve space before copying file contents. This is an ideal opportunity for "thin provisioning" systems to allocate the storage in a physical location with enough space. E.g. with min free space set to 2GB: an app creates a new file and sets the file size to 4GB; before the app starts writing the content, move the file creation to a drive with space, so that when the write starts happening there is enough space. In this scenario the only failure case would be create, write, write, out of space.
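The high/low watermark proposal is classic hysteresis; a minimal sketch, assuming ptr727's example thresholds of 10% and 50% free:

```python
def mover_active(currently_moving, free_pct, high_watermark=10, low_watermark=50):
    """High/low watermark hysteresis for the mover (sketch of the proposal).

    Start moving when free space drops below `high_watermark`; once moving,
    keep going until free space has recovered above `low_watermark`.
    The gap between the two thresholds prevents the mover from rapidly
    starting and stopping around a single cutoff.
    """
    if not currently_moving:
        return free_pct < high_watermark
    return free_pct < low_watermark
```
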
remotevisitor Posted May 4, 2019
8 hours ago, ptr727 said: Many well behaved apps and bulk copy apps, e.g. robocopy, will reserve space before copying file contents.
And there lies the problem. You see this as a single operation, but in reality it is 2 operations at the file system level: (1) create/open the file; (2) seek to the file position required to allocate a file of the given size. In order to create the file, Unraid has to choose on which drive it is going to create it, so by the time the file seek operation comes along the disk has already been selected; hence the need for the minimum free space setting.
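The two-step sequence can be seen with plain OS calls: by the time the size is known (step 2), the file, and therefore the disk, already exists. A sketch (the 4 GiB extension is sparse, so it doesn't consume real disk space):

```python
import os
import tempfile

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "big.bin")
    # Step 1: create/open -- the filesystem (i.e. the disk) is chosen here,
    # with no size information available yet.
    fd = os.open(path, os.O_CREAT | os.O_WRONLY)
    # Step 2: only now is the intended size communicated.
    os.ftruncate(fd, 4 * 2**30)
    size = os.fstat(fd).st_size
    os.close(fd)
```

This is why the placement decision can't simply wait for the reservation: the filesystem API delivers them as separate calls.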
ptr727 Posted May 5, 2019
On 5/3/2019 at 11:53 PM, remotevisitor said: And there lies the problem. You see this as a single operation, but in reality it is 2 operations at the file system level ...
I didn't say it is easy, but it can be done (time, money, resources); e.g. an SMB handle is different from a FS handle and can be remapped as needed. A cache that needs reserved space in anticipation of a large file is wasted space; e.g. a thin-provisioned VM image grows beyond its size and fails permanently, or a copy of a file that is too big fails permanently. An alternative is obviously to support SSD drives as data storage; then there is no longer a need to use a cache of SSDs when the main array is made of SSDs.
unr41dus3r Posted April 19, 2020
Sorry for reanimating this old thread, but was this feature implemented? I mean that the mover starts moving files the moment a certain % threshold is reached. As I understand it, the minimum free space setting helps so the cache can't run completely full, but it wouldn't trigger the mover job, or am I wrong?
Squid Posted April 19, 2020
You want the mover tuning plugin.
itimpi Posted April 19, 2020
4 minutes ago, ryperx said: Sorry for reanimate this old thread, but was this feature implemented? ...
Have you looked at the Mover Tuning plugin? If I understand what you want, the plugin should do the job.
unr41dus3r Posted April 20, 2020
Thanks, you two. This is the feature I was looking for.