[Plugin] Mover Tuning



22 minutes ago, AngelAbaddon said:

Just done the update; it looks like CTIME is fixed. I think the folder issue will take some time to find a graceful solution to, if one is possible. In my case it's still deleting some of the empty usenet folders (/mnt/cache_data/data/usenet/incomplete, for example), but I'm happy to stick with the placeholder file for now. Thanks for working on it.

 

Bigger issue I've discovered is Share overrides aren't reverting to global settings:

[...]
My global threshold setting is 70%, but the backup share overrides that to 0% (I want this moved every run regardless). In the prior version, once the backup share was processed, the settings would revert to the global ones, so data was set back to 70% (I can actually see this in the log from the live run that occurred earlier this morning), but it looks like that's being missed now, as data is still treating the threshold as 0%.

 

Edit: Just noticed the backup folder does not exist - guessing it's due to the folder deletion, as I didn't have a placeholder in there. I'll get it set back up, drop one in, and see what happens.

I'm going to investigate the share overrides; I haven't touched this part in a while...

6 minutes ago, Reynald said:

I'm going to investigate the share overrides; I haven't touched this part in a while...

I've updated my initial post, but it looks like it's linked to the folder deletion: the override breaks down and doesn't revert if the main share folder is missing on the cache.


Cool, looks like you have it in hand and your approach sounds sensible.

 

Don't worry about adding it to mover_tuning if it's going to be a hassle; I'm sure you have better things to work on. It was only a nice-to-have, and I can still see the affected directories in the syslog, so it's super low priority.

39 minutes ago, Silver226 said:

Cool, looks like you have it in hand and your approach sounds sensible.

 

Don't worry about adding it to mover_tuning if it's going to be a hassle; I'm sure you have better things to work on. It was only a nice-to-have, and I can still see the affected directories in the syslog, so it's super low priority.

Done :D

Testing ignore file list and releasing


2024.08.15.2118

  • Fix ignore list reserved space double quoting (thanks silver226, see forum post) (R3yn4ld)
  • Better empty folder cleaner (R3yn4ld)
    • Rewritten to rmdir the parent directory of a moved file if empty (drawback: leaves directories that only contain other directories alive)
    • Added an option to enable/disable the empty folder cleaner in the Settings UI
    • UI improvements, settings sorted

2024.08.15.0025

  • Even better cache priming (hopefully) (R3yn4ld)
    • Rewritten the ignore-filelist-from-file and filetype filtering functions (major)
    • Improved the calculation of the size of filtered files and filetypes
    • Updated calculations from basic/bc to numfmt; removed the bc option
    • Added verification to avoid breaking hardlinks when a hardlinked file is filtered
  • Added a test mode to the empty folder cleaning function, and a minimum depth of 2
  • Fixed the ctime bug (R3yn4ld)
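The "rmdir the parent directory of a moved file if empty" item above can be sketched in a few lines of bash. This is a hypothetical illustration, not the plugin's actual age_mover code: the function name, arguments, and the share-root guard (standing in for the "minimum depth" idea) are all assumptions.

```shell
#!/bin/bash
# Sketch of the per-file empty-folder cleanup: after a file is moved off
# the cache, try to rmdir its (now possibly empty) parent directory.
clean_parent() {
    local moved_file="$1" share_root="$2" dir
    dir=$(dirname "$moved_file")
    # Guard: never remove the share root itself, only deeper directories.
    if [ "$dir" != "$share_root" ] && [ "$dir" != "/" ]; then
        # rmdir refuses to delete non-empty directories, so a directory
        # still holding files or subdirectories is left alive -- the
        # drawback noted in the changelog.
        rmdir "$dir" 2>/dev/null || true
    fi
}
```

Because only the immediate parent is tried, a directory tree that contained nothing but subdirectories survives, matching the stated drawback.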

@Silver226 you may have some inconsistency with your _byte thing. Now, files and folders rely on du -sh (a little less precise, but it gets the compressed size instead of the apparent size on ZFS filesystems). But overall you put me on the right track, thank you for that.


Is "mover" supposed to be moving everything from my /mnt/cache/system folder off the cache onto the /mnt/user0/? 

 

Aug 15 13:00:00 Tower move: Moving "/mnt/cache/./system/docker/zfs/graph/4dd15260460199225bd2c1cb40588f4dddaa75e32cf4bde7772e996091a87004/opt/venv/lib/python3.12/site-packages/future/moves/collections.py"  to  /mnt/user0/

 

 

I also have a lot of files under zfs and containers.

[screenshot]

 

I also don't understand why /mnt/cache/./system/docker/zfs/graph is so large.

 

Thank you.

Edited by Jaybau
13 hours ago, Jaybau said:

Is "mover" supposed to be moving everything from my /mnt/cache/system folder off the cache onto the /mnt/user0/? 

 

Aug 15 13:00:00 Tower move: Moving "/mnt/cache/./system/docker/zfs/graph/4dd15260460199225bd2c1cb40588f4dddaa75e32cf4bde7772e996091a87004/opt/venv/lib/python3.12/site-packages/future/moves/collections.py"  to  /mnt/user0/

If your system share is set to Cache->Array (cache:yes) or Array->Cache (cache:prefer), and your usage is above the occupation threshold, then yes, mover moves older files off the cache.

Good settings for the Appdata and System shares are to set just a Primary (Cache) and no Secondary (cache:only). You can then use the advanced "Rebalance share" setting of Mover Tuning to move the files back where they're supposed to be:

[screenshot]

 

I can't answer your question about the ZFS docker folder, as I use a vDisk for docker myself (in previous implementations of ZFS I found it laggy, and it even crashed my kernel).

Edited by Reynald
  • Thanks 1

I've set everything up as I would like it (I think). I saw the following log:


Aug 16 17:51:08 homelabserver move: ************************************************************ ANALYSING MOVING ACTIONS ***********************************************************

Aug 16 17:51:09 homelabserver move: Deciding the action (move/sync/keep) for each file. There are 8836937 files, it can take a while...

How long do you think this will take? A rough estimate? Will it do this every time mover runs (I have it set to hourly)?


Wow, it can take... a while.

What do you mean by "as I would like"?

If those 9 million files are on SSDs, it may not take that long. If they're on spinners, hmm, please tell us how long it took; that's interesting.

Yes, at the moment, and until the filelist is stored in a database, it will do this every time mover runs. (It won't overlap: it will skip if a mover instance is already running.)

I have an optimisation idea I can implement quickly for when something other than auto age is selected (processing like the 2023 version), but it depends on use cases.
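The "won't overlap, skips if already running" behaviour mentioned above is a classic lock-file pattern. A hypothetical sketch (the lock path and function name are illustrative, not taken from the plugin):

```shell
#!/bin/bash
# Sketch of skipping a run when another mover instance already holds the
# lock, using a non-blocking flock on a lock file.
LOCKFILE="${TMPDIR:-/tmp}/mover_tuning_sketch.lock"

acquire_lock() {
    exec 9>"$LOCKFILE"        # open the lock file on fd 9
    if ! flock -n 9; then     # non-blocking: fail instead of waiting
        echo "another mover instance is running, skipping"
        return 1
    fi
    echo "lock acquired"
}
```

The lock is released automatically when the process exits and fd 9 is closed, so a crashed run cannot leave a stale lock behind the way a plain "does the pid file exist" check can.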

12 hours ago, Reynald said:

Good settings for the Appdata and System shares are to set just a Primary (Cache) and no Secondary (cache:only). You can then use the advanced "Rebalance share" setting of Mover Tuning to move the files back where they're supposed to be:

[screenshot]

 

 

Thank you! I think that's the problem. My cache started to fill up (probably because I wasn't running the mover frequently enough, or a file was too large), so the mover decided to move my System share to the array to try to make more room. And my System share is huge! Now that the mover has moved all the files it should have, it is moving the System share from the array back to the cache.

 

Thank you for pointing me in the right direction.

4 hours ago, Reynald said:

Wow, it can take... a while.

What do you mean by "as I would like"?

If those 9 million files are on SSDs, it may not take that long. If they're on spinners, hmm, please tell us how long it took; that's interesting.

Yes, at the moment, and until the filelist is stored in a database, it will do this every time mover runs. (It won't overlap: it will skip if a mover instance is already running.)

I have an optimisation idea I can implement quickly for when something other than auto age is selected (processing like the 2023 version), but it depends on use cases.

It took about 75 minutes, so I changed it to every 4 hours. It would be great if it were a bit quicker 🙈


[screenshot]

 

Is this what you mean? The bigger issue I'm seeing is that my syslog is filling up really fast.


 


 

Aug 17 08:02:49 homelabserver root: drwxr-xr-x 2 2024/04/25 09:57:45 mnt/user0/home/nextcloud/data/appdata_ocvphk56jmym/preview/8/b/5/8/a

Aug 17 08:02:49 homelabserver root: drwxr-xr-x 2 2024/04/25 09:57:45 mnt/user0/home/nextcloud/data/appdata_ocvphk56jmym/preview/8/b/5/8/a/d

Aug 17 08:02:49 homelabserver root: drwxr-xr-x 12 2024/04/25 09:57:45 mnt/user0/home/nextcloud/data/appdata_ocvphk56jmym/preview/8/b/5/8/a/d/f

Aug 17 08:02:49 homelabserver root: drwxr-xr-x 0 2024/04/25 09:57:45 mnt/user0/home/nextcloud/data/appdata_ocvphk56jmym/preview/8/b/5/8/a/d/f/703192

 

 

 

Second, I'm seeing a bug (I think) in the plugin. These are my settings:

[screenshot]

 

I have one share that I enabled an override for:

[screenshot]

 

When the plugin runs, it works until it hits that share.

[screenshot]

 

Then it applies those custom override settings to all the following shares:

[screenshot]

 

 

 

Edited by Soulplayer
4 hours ago, Soulplayer said:

[screenshot]

 

Is this what you mean? The bigger issue I'm seeing is that my syslog is filling up really fast.


 

 

 

Second, I'm seeing a bug (I think) in the plugin. These are my settings:

[screenshot]

 

I have one share that I enabled an override for:

[screenshot]

 

When the plugin runs, it works until it hits that share.

[screenshot]

 

Then it applies those custom override settings to all the following shares:

[screenshot]

 

 

 

I had this problem with the override not reverting. Check that the folder removal hasn't deleted the overridden share's root from the cache; that's what caused it to skip the reverting stage for me.

10 minutes ago, AngelAbaddon said:

I had this problem with the override not reverting. Check that the folder removal hasn't deleted the overridden share's root from the cache; that's what caused it to skip the reverting stage for me.

Could you explain this a bit more? What do you mean by the folder removal?

34 minutes ago, Soulplayer said:

Could you explain this a bit more? What do you mean by the folder removal?

Within the main Mover Tuning settings there's an option for "clean empty folders". In my case, that was set to Yes. My "Backup" share was overridden from 70% to 0%, so everything had been moved to the array, leaving that share empty on the cache. This meant the "clean empty folders" process deleted the folders off the cache all the way to the root (/mnt/cache_data/backup/ in my case), so when the mover next ran, it didn't find the share on the cache at all and threw the following error in the logs:

 

/mnt/cache_data/backup does not exist. Is the share still used? Consider removing /boot/config/shares/backup.cfg if not.

 

If the share isn't empty after it runs, it may be something else causing the issue; it just looked very similar to my case.

 


I have a pair of 1TB drives in a pool called "Protective Cache". It's showing as 100% used; however, inspecting the files on the pool shows only about 660GB used.

There are three shares using this as Primary storage, with the generic "array" as Secondary. Each share has "Mover action" set to Protected-cache --> Array.

I have the Mover Tuning plugin installed, and the relevant configuration items are:

  • Only move if above this threshold of used Primary (cache) space: 80%
  • Free down/prime up to this level of used Primary (cache) space: 80%
  • Move files off Primary (cache) based on age? Yes
  • Move files that are greater than this many days old: Auto
  • Move All from Primary->Secondary (cache:yes) shares when disk is above a certain percentage: Yes
  • Move All from Primary->Secondary shares pool percentage: 95%

When I run Mover, I get the following in the system log:

camnomis-unraid emhttpd: shcmd (99): /usr/local/sbin/mover |& logger -t move &

camnomis-unraid root: Starting Mover

camnomis-unraid root: ionice -c 2 -n 0 nice -n 0 /usr/local/emhttp/plugins/ca.mover.tuning/age_mover start

camnomis-unraid move: *********************************************** Mover Tuning Plugin version version=2024.08.17.1225 *********************************************

camnomis-unraid move: Log Level: 1
camnomis-unraid move: ----------------------------------------------------------------- Global settings ---------------------------------------------------------------

camnomis-unraid move: Using global moving threshold: 80 %

camnomis-unraid move: Using global freeing threshold: 80 %

camnomis-unraid move: Primary threshold to Move all Primary->Secondary shares to secondary: 95 %

camnomis-unraid move: Age: Automatic (smart caching)
camnomis-unraid move: ***************************************************************** FILTERING FILES ***************************************************************

camnomis-unraid move: ------------------------------------------------------------ Processing appdata share -----------------------------------------------------------

camnomis-unraid move: Primary storage: protected-cache - size: 932GiB - used: 100 % (931GiB)

camnomis-unraid move: Secondary storage: user0

camnomis-unraid move: Share Information: Name: appdata - Path: /mnt/protected-cache/appdata

camnomis-unraid move: Moving threshold: 80% (746GiB) ; Freeing threshold: 80% (746GiB)

camnomis-unraid move: Mover action: protected-cache->user0 (cache:yes). Move All from Primary->Secondary shares option is selected and pool is above move all threshold percentage: 100% > 95%.

camnomis-unraid move: => Moving all files from protected-cache to user0

camnomis-unraid move: Updated Filtered filelist: /tmp/ca.mover.tuning/Filtered_files_2024-08-17T145348.list for appdata

camnomis-unraid move: ************************************************************ ANALYSING MOVING ACTIONS ***********************************************************

camnomis-unraid move: Deciding the action (move/sync/keep) for each file. There are 130007 files, it can take a while...

camnomis-unraid move: No new files will be moved/synced from primary to secondary
camnomis-unraid move: No new files will be moved/synced from secondary to primary

camnomis-unraid move: Cleaning lock and stop files
camnomis-unraid move: ****************************************************************** WE ARE DONE ! ****************************************************************

I have quickly exhausted my ability to work out what is going on. Can someone please help with where I should start looking for a resolution?


Hi,

 

Sorry for the late answer @Camnomis.

 

Your cache is in a state I haven't tested much:

Move All from Primary->Secondary shares option is selected and pool is above move all threshold percentage: 100% > 95%.

 

I will investigate and push a revised version.

 

In the meantime, you can change this setting to No:

Move All from Primary->Secondary (cache:yes) shares when disk is above a certain percentage: No

 

Your cache will then be freed down to 80%. If you want to mimic the Move All option, set the freeing threshold to 0% and it will move everything off the cache, just as Move All would.


Not sure what's going on. Using the current version, 2024.08.17.1225, when I run mover start, this is all I was getting:

 

*************************************************** Mover Tuning Plugin version 2024.08.17.1225 *************************************************
----------------------------------------------------------------- Global settings ---------------------------------------------------------------
Using global moving threshold: 75 %
Using global freeing threshold: 0 %
Age: no = 0 ; daysold: -1
Clean Folders: yes
Skip filetypes: !qB
***************************************************************** FILTERING FILES ***************************************************************
-------------------------------------------------------------- Processing Data share ------------------------------------------------------------
Settings override:
------------------
Using share moving threshold: 50 %
Using share freeing threshold: 0 %
Age: no = 0 ; daysold: 
Skip filetypes: !qB
------------------------------------------------------------------------
Primary storage: cache - size: 900GiB - used: 59 % (557GiB)
Secondary storage: user0
Share Information: Name: Data - Path: /mnt/cache/Data
Moving threshold: 50% (450GiB) ; Freeing threshold: 0% (0B)
Mover action: cache->user0 (cache:yes). Pool is above moving threshold percentage:  59% >= 50%.
=> Will smart move old files from cache to user0. Nothing will be moved from user0 to cache
Skipping Filetypes from List. File sizes are taken into account for the calculation of the threshold
    Ignored filetypes are using 148MiB
Updated Filtered filelist: /tmp/ca.mover.tuning/Filtered_files_2024-08-17T220632.list for Data
------------------------------------------------------------ Processing appdata share -----------------------------------------------------------
Restore global settings
-----------------------
Using global moving threshold: 75 %
Using global freeing threshold: 0 %
Age: no = 0 ; daysold: -1
Clean Folders: yes
Skip filetypes: !qB
------------------------------------------------------------------------
Primary storage: cache - size: 900GiB - used: 59 % (557GiB)
Secondary storage: none
Share Information: Name: appdata - Path: /mnt/cache/appdata
Mover action: no action, only cache used (cache:only).
=> Nothing will be moved. Share usage is taken into account in the calculation of the threshold for other shares.
cannot open 'cache/appdata': dataset does not exist


I left it sitting there for over an hour and nothing ever happened. I just happened to hit Enter in the terminal window, and then the script started.

 

cache/appdata used: B
--------------------------------------------------------- Processing appdata_backup share -------------------------------------------------------
Primary storage: user0 - size: 39TiB - used:  52 % (20TiB)
Secondary storage: none
Share Information: Name: appdata_backup - Path: /mnt/user0/appdata_backup
Mover action: no action, only user0 used (cache:no).
=> Skipping
------------------------------------------------------------- Processing system share -----------------------------------------------------------
Primary storage: cache - size: 900GiB - used: 59 % (557GiB)
Secondary storage: user0
Share Information: Name: system - Path: /mnt/cache/system
Moving threshold: 75% (675GiB) ; Freeing threshold: 0% (0B)
Mover action: cache->user0 (cache:yes). Pool is below moving threshold percentage: 59% < 75%.
=> Skipping
************************************************************ ANALYSING MOVING ACTIONS ***********************************************************
Deciding the action (move/sync/keep) for each file. There are 198 files, it can take a while...
A total of 195 files representing 426GiB will be moved/synced:
- 195 files representing 426GiB will be moved/sync from cache to secondary
*********************************************************** LET THE MOVING SHOW BEGIN ! *********************************************************
Moving "/mnt/cache/./Data/asdfasdf.mp4"  to  /mnt/user0/ 
Not deleting dir containing 99 files: /mnt/cache/Data/qewrtwert
194 files remaining from caches to array  423GiB


Mover finally finished moving things, and it errored at the end:

 

/usr/local/emhttp/plugins/ca.mover.tuning/age_mover: line 1050: [: : integer expression expected
/usr/local/emhttp/plugins/ca.mover.tuning/age_mover: line 1067: [: 1: unary operator expected
Warning: no action for 
/usr/local/emhttp/plugins/ca.mover.tuning/age_mover: line 1050: [: : integer expression expected
/usr/local/emhttp/plugins/ca.mover.tuning/age_mover: line 1067: [: 1: unary operator expected
Warning: no action for 
/usr/local/emhttp/plugins/ca.mover.tuning/age_mover: line 1050: [: : integer expression expected
/usr/local/emhttp/plugins/ca.mover.tuning/age_mover: line 1067: [: 1: unary operator expected
Warning: no action for 
/usr/local/emhttp/plugins/ca.mover.tuning/age_mover: line 1050: [: : integer expression expected
/usr/local/emhttp/plugins/ca.mover.tuning/age_mover: line 1067: [: 1: unary operator expected
Warning: no action for 
/usr/local/emhttp/plugins/ca.mover.tuning/age_mover: line 1050: [: : integer expression expected
/usr/local/emhttp/plugins/ca.mover.tuning/age_mover: line 1067: [: 1: unary operator expected
Warning: no action for 
/usr/local/emhttp/plugins/ca.mover.tuning/age_mover: line 1050: [: : integer expression expected
/usr/local/emhttp/plugins/ca.mover.tuning/age_mover: line 1067: [: 1: unary operator expected
Warning: no action for 
------------------------------------------------------------------- Cleaning up -----------------------------------------------------------------
Cleaning lock and stop files
****************************************************************** WE ARE DONE ! ****************************************************************

 

Edited by RonneBlaze

For whatever reason it's getting hung up on my appdata share. When starting from the console, it stops at
"cannot open 'cache/appdata': dataset does not exist"; if I hit Enter, the script continues. Full log below. I have two unraid servers running, and only one of them is having this issue.

 

*************************************************** Mover Tuning Plugin version 2024.08.17.1225 *************************************************
----------------------------------------------------------------- Global settings ---------------------------------------------------------------
Using global moving threshold: 50 %
Using global freeing threshold: 0 %
Age: Automatic (smart caching)
Clean Folders: yes
Skip filetypes: !qB
***************************************************************** FILTERING FILES ***************************************************************
-------------------------------------------------------------- Processing Data share ------------------------------------------------------------
Primary storage: cache - size: 900GiB - used: 14 % (131GiB)
Secondary storage: user0
Share Information: Name: Data - Path: /mnt/cache/Data
Moving threshold: 50% (450GiB) ; Freeing threshold: 0% (0B)
Mover action: cache->user0 (cache:yes). Pool is below moving threshold percentage: 14% < 50%.
=> Skipping
------------------------------------------------------------ Processing appdata share -----------------------------------------------------------
Primary storage: cache - size: 900GiB - used: 14 % (131GiB)
Secondary storage: none
Share Information: Name: appdata - Path: /mnt/cache/appdata
Mover action: no action, only cache used (cache:only).
=> Nothing will be moved. Share usage is taken into account in the calculation of the threshold for other shares.
cannot open 'cache/appdata': dataset does not exist

numfmt: invalid number: ‘’
cache/appdata used: B
--------------------------------------------------------- Processing appdata_backup share -------------------------------------------------------
Primary storage: user0 - size: 39TiB - used:  53 % (20TiB)
Secondary storage: none
Share Information: Name: appdata_backup - Path: /mnt/user0/appdata_backup
Mover action: no action, only user0 used (cache:no).
=> Skipping
------------------------------------------------------------- Processing system share -----------------------------------------------------------
Primary storage: cache - size: 900GiB - used: 14 % (131GiB)
Secondary storage: user0
Share Information: Name: system - Path: /mnt/cache/system
Moving threshold: 50% (450GiB) ; Freeing threshold: 0% (0B)
Mover action: cache->user0 (cache:yes). Pool is below moving threshold percentage: 14% < 50%.
=> Skipping
************************************************************ ANALYSING MOVING ACTIONS ***********************************************************
Deciding the action (move/sync/keep) for each file. There are 1 files, it can take a while...
No new files will be moved/synced from primary to secondary
No new files will be moved/synced from secondary to primary
Cleaning lock and stop files
****************************************************************** WE ARE DONE ! ****************************************************************

 


Hi,
We're going to sort this out.


First, I'm curious:

5 hours ago, RonneBlaze said:

not sure whats going on, using current version 2024.08.17.1225 when i run mover start this is all i was getting.

 

*************************************************** Mover Tuning Plugin version 2024.08.17.1225 *************************************************
----------------------------------------------------------------- Global settings ---------------------------------------------------------------
Using global moving threshold: 75 %
Using global freeing threshold: 0 %
Age: no = 0 ; daysold: -1
Clean Folders: yes
Skip filetypes: !qB

 

So you are emptying your Primary storage when the threshold is 75%? Why not leave the data on the cache?

 

5 hours ago, RonneBlaze said:

 

Primary storage: cache - size: 900GiB - used: 59 % (557GiB)
Secondary storage: none
Share Information: Name: appdata - Path: /mnt/cache/appdata
Mover action: no action, only cache used (cache:only).
=> Nothing will be moved. Share usage is taken into account in the calculation of the threshold for other shares.
cannot open 'cache/appdata': dataset does not exist


 

Can you try this in the terminal:

zfs list -Hpo used cache/appdata

This is the command used to get share usage on the cache. It seems there is a problem with the zfs dataset.
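A defensive way to read that usage is to validate the `zfs list` output and fall back to `du` when the path is a plain directory rather than a dataset. This is a sketch under assumptions (the function name and fallback are illustrative, not the plugin's exact code), but it shows why an empty result would otherwise reach numfmt and trigger errors like "numfmt: invalid number: ''":

```shell
#!/bin/bash
# Sketch: get a share's cache usage in bytes, tolerating a missing dataset.
share_used_bytes() {
    local dataset="$1" path="$2" bytes
    bytes=$(zfs list -Hpo used "$dataset" 2>/dev/null)
    if ! [[ "$bytes" =~ ^[0-9]+$ ]]; then
        # Dataset missing or unreadable: fall back to du (apparent size).
        bytes=$(du -sb "$path" 2>/dev/null | cut -f1)
    fi
    echo "${bytes:-0}"
}

# e.g. numfmt --to=iec "$(share_used_bytes cache/appdata /mnt/cache/appdata)"
```

With the numeric check in place, numfmt always receives a valid integer, even when the dataset does not exist.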

 

6 hours ago, RonneBlaze said:

 

/usr/local/emhttp/plugins/ca.mover.tuning/age_mover: line 1050: [: : integer expression expected
/usr/local/emhttp/plugins/ca.mover.tuning/age_mover: line 1067: [: 1: unary operator expected
Warning: no action for 
------------------------------------------------------------------- Cleaning up -----------------------------------------------------------------
Cleaning lock and stop files
****************************************************************** WE ARE DONE ! ****************************************************************

 

It seems that there are malformed lines in Mover_action_{date}.list.

Here the script complains that NBLINKS and FILEPATH are empty. I'd be interested to see those lines (you can PM me the Mover_action list concerned if the mp4 filenames in Data contain sensitive or personal information like kids' names, etc... 😇)
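Empty fields reaching a bash numeric test are exactly what produces errors like "[: : integer expression expected". A sketch of validating the parsed fields first (the field layout, NBLINKS and FILEPATH, is assumed from the error messages above, not taken from the actual age_mover source):

```shell
#!/bin/bash
# Sketch: validate fields from an action-list line before numeric tests.
decide_action() {
    local nblinks="$1" filepath="$2"
    # An empty nblinks in [ "$nblinks" -gt 1 ] would raise
    # "[: : integer expression expected", so check it is numeric first.
    if ! [[ "$nblinks" =~ ^[0-9]+$ ]] || [ -z "$filepath" ]; then
        echo "Warning: malformed line skipped" >&2
        return 1
    fi
    if [ "$nblinks" -gt 1 ]; then
        echo "hardlinked: $filepath"
    else
        echo "move: $filepath"
    fi
}
```

Skipping (and logging) malformed lines this way replaces the stream of "[: ... expected" / "Warning: no action for" pairs with a single explicit warning per bad line.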

17 minutes ago, Reynald said:

It seems that there are malformed lines in Mover_action_{date}.list.

Here the script complains that NBLINKS and FILEPATH are empty. I'd be interested to see those lines (you can PM me the Mover_action list concerned if the mp4 filenames in Data contain sensitive or personal information like kids' names, etc... 😇)

I have the same messages in my log.
A PM is on the way.
Maybe more examples will help you find the cause.

7 hours ago, Reynald said:

Hi,
We're going to sort this out


First I'm curious:

So you are emptying your Primary storage when the threshold is 75%? Why not leave the data on the cache?

 

Can you try this in the terminal:

zfs list -Hpo used cache/appdata

This is the command used to get share usage on cache. It seems there is a problem with the zfs dataset

 

It seems that there are malformed lines in Mover_action_{date}.list.

Here the script complains that NBLINKS and FILEPATH are empty. I'd be interested to see those lines (you can PM me the Mover_action list concerned if the mp4 filenames in Data contain sensitive or personal information like kids' names, etc... 😇)

Not sure where the 75% came from; I don't remember ever changing it.

It looks like my appdata folder is corrupt somehow. When I run zfs list -Hpo used cache/appdata I get:
cannot open 'cache/appdata': dataset does not exist


I fixed the zfs list -Hpo used cache/appdata issue; it now reports a size back.

 

Where is the Mover_action_concerned.list located?

Edited by RonneBlaze
