[Plugin] Mover Tuning


Recommended Posts

On 5/18/2023 at 9:16 PM, hugenbdd said:

### 2023.05.18
- Fixed an issue where sed was seeing [] and {} inside the filepath string, by double-quoting the echoed variable (see the quoting sketch below).
- Added softstop as a command to gracefully exit the mover from the command line. It checks for an (empty) file under /var/run/moversoft.stop before sending each file to the binary mover and exits the loop if the file exists, so the mover stops once the current file is done moving. This is more graceful than the original stop (which still exists), which just kills the PID. A rough sketch follows the example command below.
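To illustrate the quoting fix, here is a minimal sketch. The path and the sed expression are made up for illustration; only the quoting behaviour is the point.

FILE='/mnt/cache/Media/Some Show {tvdb-123}/Episode [1080p].mkv'   # hypothetical path

# Unquoted: the shell word-splits on the spaces and may treat [1080p] as a
# glob pattern, so sed never receives the original path intact.
echo $FILE | sed 's|^/mnt/cache|/mnt/user0|'

# Double-quoted: the literal path reaches sed unchanged.
echo "$FILE" | sed 's|^/mnt/cache|/mnt/user0|'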

 

Example: /usr/local/emhttp/plugins/ca.mover.tuning/age_mover softstop
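A rough sketch of how the soft-stop check might look inside the move loop. Only /var/run/moversoft.stop comes from the changelog above; the filelist path and the binary mover invocation are assumptions, not the plugin's actual code.

STOPFILE=/var/run/moversoft.stop

while read -r FILE; do
    if [ -e "$STOPFILE" ]; then
        echo "mover: softstop requested, stopping"
        rm -f "$STOPFILE"
        break   # previous file has already finished; leave the loop gracefully
    fi
    echo "$FILE" | /usr/local/bin/move -d 1   # assumed binary mover path and log flag
done < /tmp/Mover/filelist.txt               # assumed filelist location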

 

Thank you for the update and support! It is working again 😍

Link to comment
6 hours ago, vtmikel said:

Did something change with respect to how hard links are handled? After the upgrade to the plugin, the mover ran for a very long time, and my hard links seem to have been unlinked and the content duplicated.

Moved to a filelist-based system instead of sending the find output to the mover binary. The first release of this had an issue with some special characters in file names, but that was fixed Thursday.
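Conceptually the change looks something like this. This is a simplified sketch; the share variable, filelist path, and mover invocation are assumptions, not the script's actual code.

# Old approach (simplified): pipe find output straight into the binary mover.
find "/mnt/cache/$SHARE" -depth | /usr/local/bin/move -d 1      # assumed invocation

# New approach (simplified): write a filelist first, then feed it one file at
# a time, which leaves room to filter or reorder the list before anything moves.
find "/mnt/cache/$SHARE" -depth > /tmp/Mover/filelist.txt       # assumed filelist path
while read -r FILE; do
    echo "$FILE" | /usr/local/bin/move -d 1
done < /tmp/Mover/filelist.txt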

Link to comment
54 minutes ago, Fuggin said:

I just want the mover to automatically move the contents off my cache-yes shares at 70%, regardless of time of day

It won't do that. Mover Tuning never triggers a move; it just filters what gets moved when the standard schedule runs the mover.

Edited by Kilrah
Link to comment
4 hours ago, memymeme12 said:

I'm running into a near-identical issue.

Versions:

Unraid 6.11.5

<!ENTITY name "ca.mover.tuning">

<!ENTITY author "hugenbdd">

<!ENTITY version "2023.05.18">

 

I am booted in safe mode right now to test the mover and see if the hard links work. I pulled that Mover Tuning info from

/boot/config/plugins/ca.mover.tuning.plg

 

May 20 12:58:03 tower move: file: /mnt/cache/locationA/fileA.stuff [38,112]

May 20 12:58:03 cache used 50 GB

May 20 12:58:03 array used 0 GB

 

May 20 13:03:55 tower move: file: /locationA/fileA.stuff [38,112] has 2 dangling link(s)

May 20 13:03:55 cache used 50 GB

May 20 13:03:55 array used 50 GB

 

May 20 13:22:41 tower move: file: /mnt/cache/locationB/fileB.stuff

May 20 13:22:41 cache used 50 GB

May 20 13:22:41 array used 50 GB

 

May 20 13:54:41 tower move: error: move, 392: No such file or directory (2): lstat: /mnt/cache/locationA/fileA.stuff

May 20 13:54:41 tower move: error: move, 392: No such file or directory (2): lstat: /mnt/cache/locationB/fileB.stuff

May 20 13:54:41 cache used 0 GB

May 20 13:54:41 array space 100 GB

 

May 20 13:54:42 tower root: mover: finished

May 20 13:54:42 tower root: Restoring original turbo write mode

May 20 13:54:42 tower kernel: mdcmd (49): set md_write_method 1

May 20 13:54:42 cache used 0 GB

May 20 13:54:42 array space 100 GB

 

fileA.stuff is linked to fileB.stuff pre-move.

It looks like the mover fully writes both fileA.stuff and fileB.stuff from cache to the array; i.e., if fileA is 50 GB, 100 GB gets written to my array.

 

I am also getting the lstat error for files that are not hard-linked, and I am having issues where files are getting skipped when they should not be.

Can you DM me the logs from /tmp/Mover and also from syslog?

 

I don't know why hardlinks would be any different from a normal file. It's using the same find that the old mover does; it's just sending the output to a filelist file. Then I loop through the filelist and send each file to the binary mover.

 

(I wonder if it's moving on to the next file and starting a new one before the previous is done. I might need to check whether any "mover" is still running before moving forward...)
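If that turns out to be the cause, one possible guard would be to wait for any running mover process before dispatching the next file. A hedged sketch; the process name, sleep interval, and invocation are assumptions:

# Wait until no binary mover process is running before handing over the next file.
while pgrep -x move >/dev/null 2>&1; do   # "move" assumed to be the binary's process name
    sleep 2
done
echo "$FILE" | /usr/local/bin/move -d 1   # assumed invocation, as elsewhere in the thread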

Link to comment
29 minutes ago, memymeme12 said:

Logs sent. Tested it with a single small file this time; hard links were lost after the move.

Thanks

For those with hardlinks: I would suggest pausing the mover, or not running the script, for a few days while I work on this.

 

I have to recreate how the binary mover handles hardlinks (i.e., tracking inodes and deciding how to handle them in the filelist).
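For illustration, detecting hardlinks while building the filelist might look roughly like this. It is only a sketch under assumed paths and an example share, not the plugin's code.

# Files with a link count > 1 are recorded with their inode so all names
# pointing at the same data can be handled together instead of being copied
# to the array twice.
while read -r FILE; do
    LINKS=$(stat -c '%h' "$FILE")   # number of hard links to this inode
    INODE=$(stat -c '%i' "$FILE")   # inode number
    if [ "$LINKS" -gt 1 ]; then
        echo "$INODE $FILE" >> /tmp/Mover/hardlinked.txt   # assumed path
    else
        echo "$FILE" >> /tmp/Mover/filelist.txt            # assumed path
    fi
done < <(find /mnt/cache/Media -type f)                    # example share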

Link to comment

@hugenbdd Hey, sorry, but I can't find the /Mover folder inside my /tmp folder. Something even stranger I found is that when I invoke the mover myself by clicking the Move Now button, it doesn't throw those errors. So it seems like it only throws those errors when it runs automatically at the scheduled time.

Link to comment
11 hours ago, hugenbdd said:

Thanks

For those with hardlinks: I would suggest pausing the mover, or not running the script, for a few days while I work on this.

 

I have to recreate how the binary mover handles hardlinks (i.e., tracking inodes and deciding how to handle them in the filelist).

 

I can confirm that my hardlinks were lost for many (potentially all? Tough to tell) files moved since the update. I used jdupes to identify the duplicate files that were once hard links (it took 24 hours due to the size of the share). Good news is, jdupes is reporting exact copies, so no corruption.
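For anyone wanting to run the same check, the basic jdupes usage is along these lines. The share path is an example; review the output before acting on it.

# List sets of identical files under the share (read-only, recursive).
jdupes -r /mnt/user/Media

# jdupes can also replace identical copies with hard links (-L); test on a
# small directory first and keep a backup of anything important.
jdupes -r -L /mnt/user/Media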

 

I'm happy to provide my logs if it helps.

 

Question/request: with the new architecture, is it possible for the mover to only move files, based on criteria in the tuning plugin, up until the threshold is met, and then stop? Or does it already work like this?

  • Like 1
Link to comment
1 hour ago, vtmikel said:

 

I can confirm that my hardlinks were lost for many (potentially all? Tough to tell) files moved since the update. I used jdupes to identify the duplicate files that were once hard links (it took 24 hours due to the size of the share). Good news is, jdupes is reporting exact copies, so no corruption.

 

I'm happy to provide my logs if it helps.

 

Question/request: with the new architecture, is it possible for the mover to only move files, based on criteria in the tuning plugin, up until the threshold is met, and then stop? Or does it already work like this?

Not yet, but this will enable us to write the code for that. With the original find, it just sends the full output to the binary mover. With a filelist, we can manipulate it any way we want before we send the files to the mover.
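As a rough illustration of what the filelist makes possible, a threshold-aware loop could re-check cache usage on each pass and stop once it drops below the target. The threshold value, df parsing, paths, and mover invocation below are assumptions.

THRESHOLD=70   # example value: stop once cache usage drops below this percentage

while read -r FILE; do
    USED=$(df --output=pcent /mnt/cache | tail -1 | tr -dc '0-9')   # current cache usage %
    if [ "$USED" -lt "$THRESHOLD" ]; then
        break   # threshold reached; leave the remaining files on cache
    fi
    echo "$FILE" | /usr/local/bin/move -d 1   # assumed mover invocation
done < /tmp/Mover/filelist.txt                # assumed filelist path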

Link to comment
5 hours ago, KnifeFed said:

Am I understanding correctly that the latest version breaks the creation of hardlinks for all users and it's not being rolled back?

The attached file fixes the hardlinks issue but will not give status updates or soft stop.

 

Replace the file below with the attached file. Once someone is able to test it, I will package it up in a new release.

/usr/local/emhttp/plugins/ca.mover.tuning/age_mover

 

 

age_mover

Link to comment
1 hour ago, hugenbdd said:

The attached file fixes the hardlinks issue but will not give status updates or soft stop.

 

Replace the file below with the attached file. Once someone is able to test it, I will package it up in a new release.

/usr/local/emhttp/plugins/ca.mover.tuning/age_mover

 

 

age_mover 22.76 kB · 0 downloads

 

I have performed a run of the mover using the fixed age_mover. Hard-linked files were maintained.

  • Like 1
Link to comment

Same issue for me this morning.  Mover not working:

 

May 23 13:52:02 Tower kernel: 
May 23 13:52:02 Tower root: ionice -c 2 -n 0 nice -n 0 /usr/local/emhttp/plugins/ca.mover.tuning/age_mover start 30 1 0 /mnt/user/system/Mover_Exclude.txt ini '' '' no 95 '' ''
May 23 13:52:02 Tower root: Log Level: 1
May 23 13:52:02 Tower root: mover: started
May 23 13:52:02 Tower root: Hard Link Status: false
May 23 13:52:02 Tower root: mover: finished
May 23 13:52:02 Tower root: Restoring original turbo write mode
May 23 13:52:02 Tower kernel: mdcmd (55): set md_write_method auto
May 23 13:52:02 Tower kernel: 

 

My settings are set to move at 95% usage of cache and I am at 96%.  I installed a new plugin version right before running this latest attempt.  Let me know if I can provide anything to help here.

Link to comment
57 minutes ago, Andiroo2 said:

Same issue for me this morning.  Mover not working:

 

May 23 13:52:02 Tower kernel: 
May 23 13:52:02 Tower root: ionice -c 2 -n 0 nice -n 0 /usr/local/emhttp/plugins/ca.mover.tuning/age_mover start 30 1 0 /mnt/user/system/Mover_Exclude.txt ini '' '' no 95 '' ''
May 23 13:52:02 Tower root: Log Level: 1
May 23 13:52:02 Tower root: mover: started
May 23 13:52:02 Tower root: Hard Link Status: false
May 23 13:52:02 Tower root: mover: finished
May 23 13:52:02 Tower root: Restoring original turbo write mode
May 23 13:52:02 Tower kernel: mdcmd (55): set md_write_method auto
May 23 13:52:02 Tower kernel: 

 

My settings are set to move at 95% usage of cache and I am at 96%.  I installed a new plugin version right before running this latest attempt.  Let me know if I can provide anything to help here.

Can you DM the log file /tmp/Mover/Mover_Tuning_<DATE/TIME>.log? Most entries are no longer in the syslog.

 

I have not changed this part of the code and can't determine what's going on without more info.

Link to comment

I noticed a lot of updates in the last few weeks, and I recently began noticing that the mover is leaving random empty directories on the cache. It seems to be moving the files, but sometimes leaving the directories behind. As a result, my shares always show that something is in an unprotected state.

 

Is there anything done recently that could have caused this? I haven't changed anything...

 

EDIT: After the automatic nightly mover run, these directories remained for over a week, but manually invoking the mover cleared them out instantly. All better now...

Edited by House Of Cards
  • Like 1
Link to comment
7 hours ago, House Of Cards said:

I noticed a lot of updates in the last few weeks, and I recently began noticing that the mover is leaving random empty directories on the cache. It seems to be moving the files, but sometimes leaving the directories behind. As a result, my shares always show that something is in an unprotected state.

 

Is there anything done recently that could have caused this? I haven't changed anything...

 

EDIT: After the automatic nightly mover run, these directories remained for over a week, but manually invoking the mover cleared them out instantly. All better now...

Yes, I'm purposely skipping empty directories in this release. I will include them in the next release.

Link to comment
19 hours ago, Andiroo2 said:

Same issue for me this morning.  Mover not working:

 

May 23 13:52:02 Tower kernel: 
May 23 13:52:02 Tower root: ionice -c 2 -n 0 nice -n 0 /usr/local/emhttp/plugins/ca.mover.tuning/age_mover start 30 1 0 /mnt/user/system/Mover_Exclude.txt ini '' '' no 95 '' ''
May 23 13:52:02 Tower root: Log Level: 1
May 23 13:52:02 Tower root: mover: started
May 23 13:52:02 Tower root: Hard Link Status: false
May 23 13:52:02 Tower root: mover: finished
May 23 13:52:02 Tower root: Restoring original turbo write mode
May 23 13:52:02 Tower kernel: mdcmd (55): set md_write_method auto
May 23 13:52:02 Tower kernel: 

 

My settings are set to move at 95% usage of cache and I am at 96%.  I installed a new plugin version right before running this latest attempt.  Let me know if I can provide anything to help here.

 

Update: the mover was running this morning when I woke up. Looks like my issue was a rounding error. Mover Tuning was reporting 95% usage but Unraid was reporting 96%. I must have been right at 95% usage, not quite enough to trigger the mover.
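For what it's worth, an integer-truncated percentage check would behave exactly like that. A hypothetical illustration, not the plugin's actual calculation:

# 95.6% used. Integer division truncates to 95, so a strict "greater than 95"
# check never fires, while a GUI that rounds up shows 96%.
USED_KB=956000
SIZE_KB=1000000
PCT=$(( USED_KB * 100 / SIZE_KB ))   # 95, not 96
if [ "$PCT" -gt 95 ]; then
    echo "threshold exceeded, start mover"
fi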

  • Like 1
Link to comment
May 27 08:05:07 husky root: Hard File Path: /mnt/cache/TV/TV/Shark Tank/Season 14/<removed>.mkv
May 27 08:05:07 husky root: LINK Count: 1
May 27 08:05:07 husky root: Hard Link Status: false
...
May 27 08:35:18 husky  move: file: /mnt/cache/TV/TV/Shark Tank/Season 14/<removed>.mkv
May 27 08:35:30 husky root: mover: finished

When the mover fires off, there is a bit of log spam for the hard link stuff, then the actual mover log. Is there any way a user can turn off the hard link set of logs?

 

Link to comment
4 hours ago, zoggy said:
May 27 08:05:07 husky root: Hard File Path: /mnt/cache/TV/TV/Shark Tank/Season 14/<removed>.mkv
May 27 08:05:07 husky root: LINK Count: 1
May 27 08:05:07 husky root: Hard Link Status: false
...
May 27 08:35:18 husky  move: file: /mnt/cache/TV/TV/Shark Tank/Season 14/<removed>.mkv
May 27 08:35:30 husky root: mover: finished

When the mover fires off, there is a bit of log spam for the hard link stuff, then the actual mover log. Is there any way a user can turn off the hard link set of logs?

 

I left a few echoes in there. I'll move them to the mover logs under /tmp/Mover in the next release. If you need them removed before the release, you can just comment out (#) the echo lines in the age_mover file.
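In case it helps, the lines to comment out look roughly like this. The messages match the syslog entries quoted above, but the variable names are hypothetical and may differ in age_mover.

# Prefixing the debug echoes with "#" silences the syslog spam:
#echo "Hard File Path: $FILE"
#echo "LINK Count: $LINKCOUNT"
#echo "Hard Link Status: $HARDLINK"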

Link to comment
On 5/24/2023 at 9:33 AM, Andiroo2 said:

 

Update: the mover was running this morning when I woke up. Looks like my issue was a rounding error. Mover Tuning was reporting 95% usage but Unraid was reporting 96%. I must have been right at 95% usage, not quite enough to trigger the mover.

 

More on this one... it looks like the mover ignored the Tuning settings for file ages. It moved everything available, not just the files older than 30 days. I'm not over the "move all cache-yes files" threshold either. If you're interested, the logs I DM'd show the mover using a 30-day cut-off, but many more files were moved.

Link to comment

Hi all,

EDIT: I can see many others have these problems. Can I provide something?

I've been using this for some time and it was all working great.

 

I have followed the TRaSH guide:
https://trash-guides.info/Downloaders/qBittorrent/Tips/How-to-run-the-unRaid-mover-for-qBittorrent/

 

But today I saw that the mover has moved my files but not the folders.

Can I make a rollback?


My settings:

(settings screenshot attached)

Edited by DanielPT
Link to comment
