
[Plugin] Mover Tuning


Recommended Posts

Bug spotted: the new "Automatic Age" function does not take the size of "Cache<-Array / Use cache Prefer" shares into account when calculating the size of files to move from "Cache->Array / Use cache Yes" shares. (I personally don't use cache prefer, yet)

 

It will be corrected soon ;)

Edited by Reynald
Link to comment

@Reynald thanks for your work! How does "Automatic Age" work now?

Let's say in the scheduler I set: 

  • Move files off cache based on age? - Yes
  • Only move at this threshold of used cache space: 70

  • Move files that are greater than this many days old: Auto

In my understanding, the mover will move files only if cache usage is >= 71%.

It will then sort all files by age and move them until cache usage decreases to 69% or 68%? Am I missing something?

Link to comment

Thank you @Freender

 

Your understanding is correct.

 

With these settings it can achieve your figures, depending on file size. What the script does in your example:

For each share with a pool usage > 70%, it moves oldest files until the pool usage is < 70%.

 

So depending on how fast you fill the cache between two scheduled runs, it can start moving anywhere from 71% to 100%.

Depending on the size of the files around the 70% mark, it can stop well below 70%. For example, if a 20 GB Blu-ray rip is the oldest file on a 200 GB cache that is 71% full, the mover will stop at 61%. 
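
To make the arithmetic concrete, here is a toy shell sketch of that example (the 200 GB pool, 70% threshold and 20 GB file are the figures above; nothing else is taken from the actual script):

POOLSIZE=$((200 * 1024**3))          # 200 GB pool, from the example above
THRESHOLD=$((POOLSIZE * 70 / 100))   # mover keeps moving while usage is above this
USED=$((POOLSIZE * 71 / 100))        # pool is currently 71% full
FILESIZE=$((20 * 1024**3))           # the oldest candidate is a single 20 GB Blu-ray rip

USED=$((USED - FILESIZE))            # one iteration moves the whole file
echo "$((USED * 100 / POOLSIZE))%"   # prints 61%, well below the 70% target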

 

I have not tested it that much because I don't fill the cache quickly. If you try it, please report how it works :)

Link to comment

I believe I found the root cause of my issue.

My cache pool is ZFS RAID-Z1 with 3 NVMe drives (2 TB each), meaning the raw total size is 6 TB but the usable capacity is <4 TB (2 TB is lost to redundancy).

       

age_mover:712

           #Get the size threshold in bytes
            if is_zfs_mountpoint_fstype "/mnt/$CACHEPOOLNAME"; then
                CACHESIZETHRESH=$(( $(zpool list -p -o size $CACHEPOOLNAME | tail -n 1) * $DEFAULTTHRESHOLD /100 ))
            else
                CACHESIZETHRESH=$(( $(df --output=size --block-size=1 /mnt/$CACHEPOOLNAME | tail -n 1) * $DEFAULTTHRESHOLD /100 ))
            fi

 

In the script we use zpool list to determine total capacity. The problem is that for RAID-Z1 it shows 6 TB (raw size), which is not accurate.

 

zpool list -p -o size cache
         SIZE
5995774345216

 

I believe that instead of zpool list we should use something like this (it shows the real formatted size):

zfs list -Hp -o available,used cache | awk '{print $1 + $2}'
3868846981120

 

  • Thanks 1
Link to comment

Last test results:

1) Updated age_mover based on comment above and restarted mover.

 

 #Get the size threshold in bytes
            if is_zfs_mountpoint_fstype "/mnt/$CACHEPOOLNAME"; then
                CACHESIZETHRESH=$(( $(zfs list -Hp -o available,used $CACHEPOOLNAME | awk '{print $1 + $2}') * $DEFAULTTHRESHOLD /100))
                #CACHESIZETHRESH=$(( $(zpool list -p -o size $CACHEPOOLNAME | tail -n 1) * $DEFAULTTHRESHOLD /100 ))

 

2) I have a mover override on the /mnt/media share; apart from that, very standard parameters:

  • age - Auto
  • Mover threshold: 68%
  • Pool Usage was 69% at the moment I ran mover.

 

3) From the log I can see CACHESIZETHRESH: 2630815947161 which is accurate.
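
As a quick cross-check, that figure is exactly 68% (integer-truncated) of the usable size reported by zfs list earlier:

echo $(( 3868846981120 * 68 / 100 ))
2630815947161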

 

4) The script has been running for a while and the pool is at 65% now, however mover is still running... In my understanding it should stop moving at <= 68%.

NAME            SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
cache          5.45T  3.55T  1.91T        -         -    10%    65%  1.00x    ONLINE  -
  raidz1-0     5.45T  3.55T  1.91T        -         -    10%  65.0%      -    ONLINE
    nvme1n1p1  1.82T      -      -        -         -      -      -      -    ONLINE
    nvme2n1p1  1.82T      -      -        -         -      -      -      -    ONLINE
    nvme0n1p1  1.82T      -      -        -         -      -      -      -    ONLINE

 

Log:

mvlogger: *********************************MOVER START*******************************
mvlogger: Age supplied: -1
mvlogger: Size supplied: 0
mvlogger: Sparness supplied: 0
mvlogger: SKIP FILES PATH: /mnt/user/backup/mover-ignore/mover_ignore.txt
mvlogger: SKIP FOLDERS: /mnt/cache/media/downloads/*
mvlogger: No Skip File Types Argument Supplied
mvlogger: No Before Script Argument Supplied
mvlogger: No After Script Argument Supplied
mvlogger: CTIME Argument: no
mvlogger: No Original Mover Threshold Percent Supplied
mvlogger: No Test Mode Argument Supplied
mvlogger: No Ignore Hidden Files Argument Supplied
mvlogger: Threshold Percent: 68
mvlogger: No Script to Run.
mvlogger: CACHETHRESH is blank
mvlogger: Share Name Only: backup
mvlogger: Cache Pool Name: cache
mvlogger: cache Threshold Pct:
mvlogger: OVERALL Threshold: 68
mvlogger: Share Path: /mnt/cache/backup
mvlogger: cache is zfs.
mvlogger: Pool Pct Used: 69 %
mvlogger: DFTPCT LIMIT USED FOR SETTING: 68
mvlogger: Threshold Used: 68
mvlogger: Adding Skip Folder List
mvlogger: Skip Folder List Path: /mnt/user/backup/mover-ignore/mover_ignore.txt
mvlogger: Adding Skip File List
mvlogger: Skip File List Path: /mnt/user/backup/mover-ignore/mover_ignore.txt
mvlogger: Skipfiletypes string: find "/mnt/cache/backup" -depth -not -path "/mnt/cache/media/downloads/*" | grep -vFx -f <(sed 's/\/*$//' '/mnt/user/backup/mover-ignore/mover_ignore.txt')
mvlogger: Share Name Only: isos
mvlogger: Cache Pool Name: cache
mvlogger: cache Threshold Pct:
mvlogger: OVERALL Threshold: 68
mvlogger: Share Path: /mnt/cache/isos
mvlogger: cache is zfs.
mvlogger: Pool Pct Used: 69 %
mvlogger: DFTPCT LIMIT USED FOR SETTING: 68
mvlogger: Threshold Used: 68
mvlogger: Adding Skip Folder List
mvlogger: Skip Folder List Path: /mnt/user/backup/mover-ignore/mover_ignore.txt
mvlogger: Adding Skip File List
mvlogger: Skip File List Path: /mnt/user/backup/mover-ignore/mover_ignore.txt
mvlogger: Skipfiletypes string: find "/mnt/cache/isos" -depth -not -path "/mnt/cache/media/downloads/*" | grep -vFx -f <(sed 's/\/*$//' '/mnt/user/backup/mover-ignore/mover_ignore.txt')
mvlogger: Share Name Only: media
mvlogger: -----Updating-Mover-Based-On-"media"-Settings-----
mvlogger: Age supplied: -1
mvlogger: Size supplied: 0
mvlogger: Sparness supplied: 0
mvlogger: SKIP FILES PATH: /mnt/user/backup/mover-ignore/mover_ignore.txt
mvlogger: SKIP FOLDERS: /mnt/cache/media/downloads/*
mvlogger: No Skip File Types Argument Supplied
mvlogger: No Before Script Argument Supplied
mvlogger: No After Script Argument Supplied
mvlogger: CTIME Argument: no
mvlogger: No Original Mover Threshold Percent Supplied
mvlogger: No Test Mode Argument Supplied
mvlogger: No Ignore Hidden Files Argument Supplied
mvlogger: Threshold Percent: 0
mvlogger: Cache Pool Name: cache
mvlogger: cache Threshold Pct:
mvlogger: OVERALL Threshold: 68
mvlogger: Share Path: /mnt/cache/media
mvlogger: cache is zfs.
mvlogger: Pool Pct Used: 69 %
mvlogger: DFTPCT LIMIT USED FOR SETTING: 68
mvlogger: Threshold Used: 68
mvlogger: Adding Skip Folder List
mvlogger: Skip Folder List Path: /mnt/user/backup/mover-ignore/mover_ignore.txt
mvlogger: Adding Skip File List
mvlogger: Skip File List Path: /mnt/user/backup/mover-ignore/mover_ignore.txt
mvlogger: Skipfiletypes string: find "/mnt/cache/media" -depth -not -path "/mnt/cache/media/downloads/*" | grep -vFx -f <(sed 's/\/*$//' '/mnt/user/backup/mover-ignore/mover_ignore.txt')
mvlogger: FINDSTR is: find "/mnt/cache/media" -depth -not -path "/mnt/cache/media/downloads/*" | grep -vFx -f <(sed 's/\/*$//' '/mnt/user/backup/mover-ignore/mover_ignore.txt')
mvlogger: -----Reverting-"media"-Mover-Settings-----
mvlogger: Age supplied: -1
mvlogger: Size supplied: 0
mvlogger: Sparness supplied: 0
mvlogger: SKIP FILES PATH: /mnt/user/backup/mover-ignore/mover_ignore.txt
mvlogger: SKIP FOLDERS: /mnt/cache/media/downloads/*
mvlogger: No Skip File Types Argument Supplied
mvlogger: No Before Script Argument Supplied
mvlogger: No After Script Argument Supplied
mvlogger: CTIME Argument: no
mvlogger: No Original Mover Threshold Percent Supplied
mvlogger: No Test Mode Argument Supplied
mvlogger: No Ignore Hidden Files Argument Supplied
mvlogger: Threshold Percent: 68
mvlogger: Share Name Only: pictures
mvlogger: Cache Pool Name: cache
mvlogger: cache Threshold Pct:
mvlogger: OVERALL Threshold: 68
mvlogger: Share Path: /mnt/cache/pictures
mvlogger: cache is zfs.
mvlogger: Pool Pct Used: 69 %
mvlogger: DFTPCT LIMIT USED FOR SETTING: 68
mvlogger: Threshold Used: 68
mvlogger: Adding Skip Folder List
mvlogger: Skip Folder List Path: /mnt/user/backup/mover-ignore/mover_ignore.txt
mvlogger: Adding Skip File List
mvlogger: Skip File List Path: /mnt/user/backup/mover-ignore/mover_ignore.txt
mvlogger: Skipfiletypes string: find "/mnt/cache/pictures" -depth -not -path "/mnt/cache/media/downloads/*" | grep -vFx -f <(sed 's/\/*$//' '/mnt/user/backup/mover-ignore/mover_ignore.txt')
mvlogger: Share Name Only: time_machine
mvlogger: Cache Pool Name: cache
mvlogger: cache Threshold Pct:
mvlogger: OVERALL Threshold: 68
mvlogger: Share Path: /mnt/cache/time_machine
mvlogger: cache is zfs.
mvlogger: Pool Pct Used: 69 %
mvlogger: DFTPCT LIMIT USED FOR SETTING: 68
mvlogger: Threshold Used: 68
mvlogger: Adding Skip Folder List
mvlogger: Skip Folder List Path: /mnt/user/backup/mover-ignore/mover_ignore.txt
mvlogger: Adding Skip File List
mvlogger: Skip File List Path: /mnt/user/backup/mover-ignore/mover_ignore.txt
mvlogger: Skipfiletypes string: find "/mnt/cache/time_machine" -depth -not -path "/mnt/cache/media/downloads/*" | grep -vFx -f <(sed 's/\/*$//' '/mnt/user/backup/mover-ignore/mover_ignore.txt')
mvlogger: FINDSTR is: find "/mnt/cache/time_machine" -depth -not -path "/mnt/cache/media/downloads/*" | grep -vFx -f <(sed 's/\/*$//' '/mnt/user/backup/mover-ignore/mover_ignore.txt')
mvlogger: Share Name Only: time_machine_pc
mvlogger: Cache Pool Name: cache
...
mvlogger: CACHESIZETHRESH: 2630815947161
mvlogger: CACHESIZETHRESH: 2630815947161
mvlogger: CACHESIZETHRESH: 2630815947161

 

 

Link to comment
12 hours ago, FlyingTexan said:

So I'm having the same issue of mover not working. I put the "Move now button follows plug-ins" to no and it's still not working.

If you set this to no, then the plugin is inactive when you press the "Move now" button. If it doesn't run, it's likely your schedule is not correctly set, or the shares are not configured to use the mover.

Please see the tips there and report back ;) 

 

Link to comment
14 hours ago, Freender said:

In the script we use zpool list to determine total capacity. The problem is that for RAID-Z1 it shows 6 TB (raw size), which is not accurate.

 

zpool list -p -o size cache
         SIZE
5995774345216

 

I believe that instead of zpool list we should use something like this (it shows the real formatted size):

zfs list -Hp -o available,used cache | awk '{print $1 + $2}'
3868846981120

 

 

Good catch. I will integrate your proposal. You may also fork and open a PR if you like.

Link to comment
4 hours ago, Freender said:

Last test results:

1) Updated age_mover based on comment above and restarted mover.

 

 #Get the size threshold in bytes
            if is_zfs_mountpoint_fstype "/mnt/$CACHEPOOLNAME"; then
                CACHESIZETHRESH=$(( $(zfs list -Hp -o available,used $CACHEPOOLNAME | awk '{print $1 + $2}') * $DEFAULTTHRESHOLD /100))
                #CACHESIZETHRESH=$(( $(zpool list -p -o size $CACHEPOOLNAME | tail -n 1) * $DEFAULTTHRESHOLD /100 ))

 

2) I have a mover override on the /mnt/media share; apart from that, very standard parameters:

  • age - Auto
  • Mover threshold: 68%
  • Pool Usage was 69% at the moment I ran mover.

 

3) From the log I can see CACHESIZETHRESH: 2630815947161 which is accurate.

 

4) The script has been running for a while and the pool is at 65% now, however mover is still running... In my understanding it should stop moving at <= 68%.

NAME            SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
cache          5.45T  3.55T  1.91T        -         -    10%    65%  1.00x    ONLINE  -
  raidz1-0     5.45T  3.55T  1.91T        -         -    10%  65.0%      -    ONLINE
    nvme1n1p1  1.82T      -      -        -         -      -      -      -    ONLINE
    nvme2n1p1  1.82T      -      -        -         -      -      -      -    ONLINE
    nvme0n1p1  1.82T      -      -        -         -      -      -      -    ONLINE

 

Log: [...]

 

 

1) 2) 3): thanks for the context. 

4) You are right, unless you have a very big file that represents some % of the disk space. I think I see the same behaviour, with too much space being freed.
The way I handle the Automatic Age threshold is to sum the sizes while looping over customFilelist in this function: https://github.com/R3yn4ld/ca.mover.tuning/blob/master/source/ca.mover.tuning/usr/local/emhttp/plugins/ca.mover.tuning/age_mover#L353

Then, once the total size exceeds the threshold, I delete the lines from the file list:

    while IFS= read -r CUSTOMLINE; do
        # Pull the size, date and threshold fields out of the semicolon-separated line
        ###echo "$CUSTOMLINE"
        CUSTOMLINESIZE=$(echo "$CUSTOMLINE" | sed -n 's/.*;\(.*\)/\1/p')
        CUSTOMLINEDATE=$(echo "$CUSTOMLINE" | sed -n 's/\([^;]*\).*/\1/p')
        CUSTOMLINETHRESH=$(echo "$CUSTOMLINE" | sed -n 's/^[^;]*;\([^;]*\);.*/\1/p')

        # Do not add the size of a hardlink (hardlinks all carry the same date).
        # Side effect: if 2 distinct files happen to have exactly the same date (should not happen), the second size is skipped too.
        if [ "$PREVIOUSLINEDATE" -ne "$CUSTOMLINEDATE" ] ; then
            # Append the file size to the running total
            ((TOTALCACHESIZE += $CUSTOMLINESIZE))
        fi

        # Check whether the running total exceeds the threshold and auto age is selected.
        # The date also has to differ from the previous file to avoid breaking hardlinks.
        if [ "$TOTALCACHESIZE" -ge "$CUSTOMLINETHRESH" ] && [ "$AGE" -eq -1 ] && [ "$PREVIOUSLINEDATE" -ne "$CUSTOMLINEDATE" ] ; then
            mvlogger "Threshold met, removing $CUSTOMLINE"
            # Delete the line from the file list (in-place), escaping special characters in the filename
            sed -i "/^$(sed 's/[\/&]/\\&/g' <<< "$CUSTOMLINE")$/d" "$CUSTOM_MOVER_FILELIST"
        fi
        PREVIOUSLINEDATE=$CUSTOMLINEDATE
    done <$CUSTOM_MOVER_FILELIST

 

Maybe the modification date is not a good indicator for hardlinks.

In the next version (I'm on it), the ctime timestamp will gain precision (more digits) and the inode will be checked (all from the first 'findstr' scan). For mtime it's not as easy, because I get it from 'stat -c'.
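
A rough sketch of how hardlinks could be spotted by inode in a single scan (the path and output format here are illustrative, not the plugin's actual findstr):

# Hardlinks share an inode number, which is more reliable than comparing timestamps
find /mnt/cache/media -type f -printf '%i;%T@;%s;%p\n' | sort -t';' -k1,1n
# lines with the same first field (inode) point to the same file content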

 

The next version will scan all files on "cache prefer / cache<-array" and "cache yes / cache->array" shares, sort them in reverse order, and remove the most recent ones for which the threshold is not met.

  • Upvote 1
Link to comment

First, @Reynald thanks :)

 

Question:

What are the settings for the following scenario:

I want the Mover to trigger automatically when the SSD(s) become almost full, i.e. 95%.

But, at the same time, I want it scheduled every day at 4:00 AM, to move if usage is above 30%.

 

The rationale is the following: when moving a large amount of data to the array, trigger automatically at 95%.

But as a nightly maintenance schedule, move the non-"prefer cache" files to the array.

 

 

Link to comment
2 minutes ago, mpiva said:

I want the Mover to trigger when SSD/s becomes almost full automatically. IE: 95%

There is never an automatic trigger, it's always schedule-based.

Use min free space on the share to have things go to the array when it is too full. 

  • Upvote 1
Link to comment
1 hour ago, Kilrah said:

There is never an automatic trigger, it's always schedule-based.

Use min free space on the share to have things go to the array when it is too full. 

 

IDK, every time the cache is almost full, it triggers automatically in my setup; it doesn't wait until the scheduled time. My question is about the %: does it apply to the schedule, or to the automatic trigger?

 

Sorry misunderstood, gotcha.

 

How does min free space on the share work for the cache? I thought that parameter was related to the storage in the raid.

 

Probably what I want can be solved with multiple schedules.

 

One at 4:00 AM with 50%

and another every hour with 95%

 

Again, the rationale is that if you put 95% in the night schedule and the cache is 80% full at night, it won't trigger; but if you fill the remaining 20% the next day, you will run out of space in the middle of the day, forcing you to run the mover during the day with the accompanying server degradation at that hour.

Edited by mpiva
Link to comment

The Mover Tuning plugin will not start by itself, yet.

 

1 hour ago, mpiva said:

 

IDK, every time the cache is almost full, it triggers automatically in my setup; it doesn't wait until the scheduled time. My question is about the %: does it apply to the schedule, or to the automatic trigger?

 


I'm interested to know how you achieved this. 

Assuming you won't put more than 5-10% per hour on the cache, you can set an hourly mover schedule and set the threshold to 90-95 in Mover Tuning.
 

Link to comment
19 minutes ago, Reynald said:

Mover tuning plugin will not start by itself, yet.

 


I'm interested to know how you achieved this. 

Assuming you won't put more than 5-10% per hour on the cache, you can set an hourly mover schedule and set the threshold to 90-95 in Mover Tuning.
 

I think that was the setting I had before, and that's why I rationalized it as auto-triggering.

I was just editing my post when you responded.  

 

Is the below possible?

 

Probably what I want can be solved with multiple schedules:

 

One at 4:00 AM with 50% (real schedule)

and another every hour with 95% (pseudo auto-trigger)

 

The rationale is that if you put 95% in the night schedule and the cache is 80% full at night, it won't trigger; but if you fill the remaining 20% the next day, you will run out of space in the middle of the day, forcing you to run the mover during the day with the accompanying server degradation at that hour.

 

And if you have 95% every hour, it will trigger whenever that threshold is reached, independent of the night window where server degradation is preferable.

 

With both schedules, you start fresh every day with 50% (45%) of the cache available to fill without worrying about the mover; the mover runs only at night, so there is no degradation during the day, and if you almost fill the cache, the hourly schedule acts as a fallback in case you added too much.

 

 

Edited by mpiva
Link to comment

My use case:

1) I want to avoid running mover during "business hours" as much as possible.

Mover always runs at night for all shares regardless of the threshold (I only care about Age for /mnt/user/media).

 

2) For /mnt/user/media share I have an override to move files older than XXX days.

10 minutes before the mover starts, I run a custom script that adjusts XXX days to a value which reduces utilization to, let's say, 70%.

Advantages: For all shares except /mnt/user/media mover moves everything at night.

Disadvantage: it sometimes moves too much stuff (sometimes I download 400 GB on a single day when only a few GB are needed to bring utilization down to 70%).

 

I like Reynald's approach for auto age, it's more granular; however, #2 can only be achieved if we implement a per-share threshold (it's currently global).

 

3) During the day I am fine with utilization up to 95%. At this point (95%) sabnzbd automatically pauses all downloads (I pointed sabnzbd directly to /mnt/cache/media/downloads to see real cache utilization) to prevent cache overflow. 

Once this happens I receive an alert and manually run mover. It would be great if it ran automatically (technically I can run a custom script from cron, but it would be better to implement this out of the box).
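
For the time being, a rough cron-able sketch could look like this (the pool name, the 95% limit and the mover entry point are assumptions; adapt to your system):

#!/bin/bash
# Failsafe sketch: kick off the mover when the cache pool passes a limit
POOL="cache"
LIMIT=95

USED=$(df --output=pcent "/mnt/$POOL" | tail -n 1 | tr -dc '0-9')

if [ "$USED" -ge "$LIMIT" ] && ! pgrep -f age_mover >/dev/null; then
    logger -t mover-failsafe "Cache at ${USED}%, invoking mover"
    /usr/local/sbin/mover start
fi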

Link to comment
21 hours ago, Freender said:

I like Reynald's approach for auto age, it's more granular; however, #2 can only be achieved if we implement a per-share threshold (it's currently global).

At first, before forking this plugin, I was doing the same with a script building a custom "Skip files list".

 

I think the code is ready for a per-share threshold, but the UI is not yet. Also, the UI isn't ready for overriding settings on a share which is not "cache->array". I need to finish the "cache<-array" age_mover rewrite, and then I can proceed with these feature requests ;)

Link to comment
22 hours ago, Freender said:

Once this happens I receive an alert and manually run mover. It would be great if it ran automatically (technically I can run a custom script from cron, but it would be better to implement this out of the box).


Yes, I plan to implement a "failsafe" setting, using inotify to check cache usage whenever a file is opened for writing or closed after writing, just like the File Integrity plugin works.
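
Roughly the idea, assuming inotify-tools is available (the pool name, limit and mover entry point are placeholders; the real thing will live inside the plugin, this is only a sketch):

# Re-check usage every time a file on the cache is closed after writing
inotifywait -m -r -e close_write --format '%w%f' /mnt/cache | while read -r _; do
    USED=$(df --output=pcent /mnt/cache | tail -n 1 | tr -dc '0-9')
    if [ "$USED" -ge 95 ]; then
        logger -t mover-failsafe "Cache at ${USED}%, triggering mover"
        /usr/local/sbin/mover start
    fi
done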

Link to comment

@Reynald I am on Unraid 7.0.0-beta.2 and want to say thank you for the continued work on CA Mover Tuning.  I have done the following:

  1. Removed the prior version
  2. Installed the forked version via your link posted on this thread, I think on page 60 or so
  3. Move Now button follows plug-in filters: NO
  4. Test Mode: YES

Now to confirm:

  1. Mover will not run on schedule
  2. I have manually invoked Mover on Main -> Move (which thanks to you + other folks, is working now!)

Is there anything else I need to do for now?

  • Like 1
Link to comment

Hi, thank you.

 



3. Move Now button follows plug-in filters: NO

4. Test Mode: YES

That means the mover plugin will not be invoked when you press the Move Now button (3), and that it will only create files in /tmp/Mover (4), so you can check what the mover would do when invoked by a schedule.

 



1. Mover will not run on schedule

2. I have manually invoked Mover on Main -> Move (which thanks to you + other folks, is working now!)

Is it really working? It should not execute if "Move Now button follows plug-in filters" is set to NO.

 

Link to comment
1 hour ago, Reynald said:

Hi, thank you.

 

 

 

That means the mover plugin will not be invoked when you press the Move Now button (3), and that it will only create files in /tmp/Mover (4), so you can check what the mover would do when invoked by a schedule.

 

 

 

Is it really working? It should not execute if "Move Now button follows plug-in filters" is set to NO.

 

I am running Mover from Main -> Move right now. I was unsuccessful invoking Mover with Plug-in Filters = Yes and Test Mode = No. I am watching the Cache pool's used space decrease and my Spinners pool's used space increase. This means Mover is working, correct?

 

However, I did have "Only move at this threshold of used cache space" set to 50%. My cache pool was less than 50% utilized, so this may have been the reason why Mover did not execute. I will lower the setting, fill up my cache pool, and rerun with your suggested settings. 

 

FYI - I am very new to Unraid and home lab/server stuff.  Just started this journey a few days ago 

Edited by ramjam824
clarity
Link to comment
15 minutes ago, ramjam824 said:

However, I did have "Only move at this threshold of used cache space" set to 50%. My cache pool was less than 50% utilized, so this may have been the reason why Mover did not execute. I will lower the setting, fill up my cache pool, and rerun with your suggested settings. 

 

That is probably the reason. Your current settings are not using the plugin but the original Unraid mover. 

Welcome to the Unraid journey :) 

Link to comment

Edit: I got it working by running the workaround commands above, but I thought they weren't needed anymore in the fork?

Using @Reynald's version, mover has stopped moving. 

 

If I "move now" I just see the below line in the log

 

Jul 19 08:32:37 Tower root: ionice -c 2 -n 0 nice -n 0 /usr/local/emhttp/plugins/ca.mover.tuning/age_mover start -1 0 0 '' '' '' '' no 95 '' '' 80

 

If I run mover manually from a console, I get the following error:

 

/usr/local/emhttp/plugins/ca.mover.tuning/age_mover: line 119: /usr/local/sbin/move: No such file or directory

 

Running 7.0.0-beta.2 and tuning version Reynald 2024.07.10. (Will this still show up as an update when it's not in CA? Or do I have to update manually somehow?)

 


 

Edited by Terebi
Link to comment
8 hours ago, Terebi said:

Edit: I got it working by running the workaround commands above, but I thought they weren't needed anymore in the fork?

Using @Reynald's version, mover has stopped moving. 

[...]

if I run mover manually from a console, I get the following error

 

/usr/local/emhttp/plugins/ca.mover.tuning/age_mover: line 119: /usr/local/sbin/move: No such file or directory


Yes, the commands are in the fork:

https://github.com/R3yn4ld/ca.mover.tuning/blob/794d8b0fad8ea1c2db8096dbd753ecf75c8a6e22/plugins/ca.mover.tuning.plg#L142

 

Can you post the output of the installation script, please?
 

Quote

Running 7.0.0-beta.2 and tuning version Reynald 2024.07.10   (Will this still show up as an update when its not in CA? Or do I have to manually update somehow?)

Yes, it will show as an update as long as you installed it with the plugin manager. It makes no difference whether it comes from the CA repository (I'm waiting for @Squid's validation/integration of my repo) or from a URL in the plugin manager.

 

I'm struggling with the next version, however (looping over the custom filelist, some lines are missed in my beta); it's nearly a complete rewrite of the filelist handling :)

Coming soon:

  • Automatic (by age) Array -> Cache handling:
    A file list is created with a few more data fields than previously, sorted by:
    • Modification time
    • Cache: only/prefer/yes/no
    • Inodes (for hardlinks)
  • Better hardlink handling

Coming after that:

  • Live monitoring of used space
  • Improved processing time (currently on my machine: about 5 minutes for 22.4 TB in 18,409 files). Edit: I switched to GNU awk for list processing; for the same size and number of files, it now takes 48 seconds :) (see the sketch below)
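
For the curious, a hedged sketch of the gawk approach (the date;threshold;...;size field layout is inferred from the sed patterns earlier in the thread; this is not the actual rewrite): a single gawk process filters the whole list in one pass, instead of forking several sed processes per line as the bash loop does, which is where the speedup comes from.

gawk -F';' -v age="$AGE" '
{
    date = $1; thresh = $2; size = $NF
    if (date != prevdate) total += size              # skip hardlink duplicates (same date)
    if (!(total >= thresh && age == -1 && date != prevdate))
        print                                        # keep lines still under the threshold
    prevdate = date
}' "$CUSTOM_MOVER_FILELIST" > "$CUSTOM_MOVER_FILELIST.new" && mv "$CUSTOM_MOVER_FILELIST.new" "$CUSTOM_MOVER_FILELIST"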

 

Edited by Reynald
Link to comment
