hugenbdd
Posts posted by hugenbdd
-
11 hours ago, Swarles said:
Oh okay during boot makes sense, I have found the issue.
line 113 of "ca.mover.tuning.plg"
if [[ ! -f /usr/local/bin/mover.old ]]; then mv /usr/local/bin/mover /usr/local/bin/mover.old; fi
I uninstalled the plugin and rebooted and it would appear that unraid does not initialise a "mover" file in "/usr/local/bin/mover" (on 6.12.8 at least). "move" is located in "bin" and "mover" is located in "sbin".
I don't know enough about unraid to know why this is the case or what the configuration is in older OS. It's probably important for people on older OS. I can confirm that this is not an issue to worry about though.
We could change it so it checks if "mover" exists rather than if "mover.old" doesn't exist, thoughts @hugenbdd?
If it is really bothering anyone, for the time being you can remove that line in "/boot/config/plugins/ca.mover.tuning.plg" or suppress its output. The mover.old file exists so that we can restore the original mover binary. The location changed several releases ago, though I'm not certain in which version; if we simply remove the line, the plugin will not work for older releases. I will try to work out a way to satisfy both older and newer releases this week. I think it will be a "longer" if statement: check for the file before mv'ing it, and add another check to create a copy for the sbin location.
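A version-tolerant guard might look something like this. This is only a sketch of the idea described above, not the shipped plugin code; the function name is hypothetical.

```shell
#!/bin/bash
# Hypothetical sketch: back up the stock mover binary from whichever
# location this Unraid release uses, and only do it once.
backup_mover() {
    local dir
    for dir in "$@"; do
        # Only back up if the binary exists here and no backup was made yet.
        if [ -f "$dir/mover" ] && [ ! -f "$dir/mover.old" ]; then
            mv "$dir/mover" "$dir/mover.old"
        fi
    done
}
# Usage on the real system (newer releases keep mover in /usr/local/sbin,
# older ones in /usr/local/bin):
#   backup_mover /usr/local/sbin /usr/local/bin
```

Checking for the binary's presence first (rather than only for the absence of mover.old) avoids the "cannot stat" error on releases that keep mover elsewhere.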
-
10 minutes ago, g.strange42 said:
Just to give more Info, I tried reinstalling it twice, and stopping all the dockers that use the cache.
Could you post a screen shot of your settings?
It could be due to "Priority for mover process" or "Priority for disk I/O"; something may have changed with those in this release, but I won't know until I get a chance to test more.
-
14 minutes ago, g.strange42 said:
I've just had to remove this, it's killing the mover transfer speed, from MB/s to Kb/s. It only seems to have started doing this since Unraid 6.12.8.
Are there any known issues around this, and updates on the horizon?
I just installed 6.12.8 this morning. I will keep an eye on my transfer speeds.
-
6 minutes ago, Terebi said:
Hrm, I'm not sure what the plugin is doing today, but if anything I came up with is useful to you, please do incorporate it.
This is just sorting all of the files in the relevant dir by date, then listing the files up until the quota to keep on cache is filled.
Step 4 first pass keeps track of hardlinks so they aren't double counted against the quota.
Then step 4a lists all the files, including all of the hardlinks to be ignored by the mover.
I believe the mover is breaking the hardlinks, moving all the files (twice for hardlinks?), then repairing the hardlinks. It doesn't seem like what I came up with helps with that, but I'm just dipping my toes in.
Okay, I'll look at it closer when I get a chance.
Currently, because I do not know how the unRAID binary moves hardlinks, I just pipe the whole file list into the mover binary and let it take care of it, as opposed to looping through the file list one entry at a time (which is what happens for non-hardlinked files).
-
5 minutes ago, Terebi said:
I believe I have solved the hardlinks problem. Their space will not be double counted, and all copies will be listed
Pinging people from this thread that expressed interest in this idea previously: @NGHTCRWLR @Andiroo2 @dopeytree. See two posts up for more details.
#!/bin/bash

# Define variables
TARGET_DIR="/mnt/cache/data"
OUTPUT_DIR="/mnt/user/appdata"
OUTPUT_FILE="$OUTPUT_DIR/moverignore.txt"
MAX_SIZE="500000000000" # 500 gigabytes in bytes
EXTENSIONS=("mkv" "srt")

# Ensure the output directory exists
mkdir -p "$OUTPUT_DIR"

# Cleanup previous temporary files
rm -f "$OUTPUT_DIR/temp_metadata.txt" "$OUTPUT_DIR/temp_filtered_metadata.txt"

# Step 1: Change directory to the target directory
cd "$TARGET_DIR" || exit

# Step 2: Find files with specified extensions and obtain metadata
find "$(pwd)" -type f \( -iname "*.${EXTENSIONS[0]}" -o -iname "*.${EXTENSIONS[1]}" \) -printf "%i %n %p\0" > "$OUTPUT_DIR/temp_metadata.txt"

# Step 3: Sort metadata by date in descending order
sort -z -rn -o "$OUTPUT_DIR/temp_metadata.txt" "$OUTPUT_DIR/temp_metadata.txt"

# Step 4: Get the newest files up to the specified size limit (part 1)
total_size=0
while IFS= read -r -d $'\0' line; do
    size=$(echo "$line" | cut -d ' ' -f2)
    if ((total_size + size <= MAX_SIZE)); then
        #echo "$line"
        total_size=$((total_size + size))
        # Step 4a: List hardlinks for the current file
        inode=$(echo "$line" | cut -d ' ' -f1)
        find "$(pwd)" -type f -inum "$inode" -not -path "$line" -printf "%i %n %p\0" | \
        while IFS= read -r -d $'\0' hardlink; do
            echo "$hardlink"
        done
    else
        break
    fi
done < "$OUTPUT_DIR/temp_metadata.txt" > "$OUTPUT_DIR/temp_filtered_metadata.txt"

# Step 5: Get the newest files up to the specified size limit (part 2)
cut -d ' ' -f3- "$OUTPUT_DIR/temp_filtered_metadata.txt" | tr '\0' '\n' > "$OUTPUT_FILE"

# Step 6: Cleanup temporary files
rm "$OUTPUT_DIR/temp_metadata.txt" "$OUTPUT_DIR/temp_filtered_metadata.txt"

echo "File list generated and saved to: $OUTPUT_FILE"
Is this something we could incorporate into the general plug-in to change the way we handle hardlinks today?
-
3 hours ago, Schaka said:
First of all, I'm sorry for hijacking this thread. I tried to find a solution in other threads, but everything seems outdated. I'm still on 6.11.5.
I was looking to manually trigger the mover via cli/script and expected it to be blocking, so that I may shut down docker containers before running it so files won't be in use.
There seem to be two problems: calling mover via ssh just seems to do nothing (no disk activity), and while it does run from the GUI, calling mover stop then does nothing again.
What is the correct way (maybe the way this plugin does it) to call the mover? Can I just call one of the scripts the plugin uses and it will be blocking? I'd love to utilize the cache again, but unless I can get the mover to run after my nightly backup, I don't see another way of doing this atm.
You should be able to call it from the command line.
"SoftStop" tries to stop the mover once the current file has been transferred (i.e. the code checks whether softstop was triggered before moving to the next file in the list). This only applies if you are not using hardlink support: with hardlinks, the plugin does not loop through a file list to move files, but rather sends the full file list to the mover binary.
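The per-file check described above can be sketched like this. The function name and the flag path are illustrative, not the plugin's real ones.

```shell
#!/bin/bash
# Hypothetical sketch of the soft-stop behavior: a flag file is checked
# between files, never in the middle of a transfer.
move_list() {
    local stopflag="$1" list="$2" f
    while IFS= read -r f; do
        if [ -f "$stopflag" ]; then
            echo "softstop requested, finishing after current file"
            break
        fi
        echo "would move: $f"   # the real plugin invokes the mover binary here
    done < "$list"
}
# Example: move_list /var/run/mover.softstop /tmp/filelist.txt
```

Because the flag is only consulted between iterations, the file currently being transferred always completes before the run stops.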
root@Tower:~# mover status
Log Level: 1
mover: not running
root@Tower:~# mover stop
Log Level: 1
mover: not running
root@Tower:~# mover softstop
Log Level: 1
Soft Stop Requested
mover: not running
root@Tower:~# mover start
Log Level: 1
mover: started
mover: finished
-
New Release thanks to @Swarles
###2023.12.19
- Change "while read" lines in age_mover to "while IFS= read -r" to fix trailing white spaces (Swarles)
- Fix where sometimes mover would not run the mover.old script (Swarles)
- Log if "share.cfg" doesn't exist to help troubleshooting (Swarles)
- Check for ca.mover.tuning.cfg file and additional logging (Swarles)
-
24 minutes ago, CS01-HS said:
I noticed folder exclusion using the File list path option stopped working.
Enabling test mode I see this in the find command (where my exclude file is exclude.txt)
grep -vFx -f <(sed 's/\/*$//' '/boot/extras/mover_tuner/exclude.txt')
I don't see how this can work with the -x option:
-x, --line-regexp Select only those matches that exactly match the whole line. For a regular expression pattern, this is like parenthesizing the pattern and then surrounding it with ^ and $.
Even files in exclude.txt won't satisfy the exact match unless they're prefaced with the cache path, e.g. /mnt/cache/sharename/file.txt vs simply sharename/file.txt, which used to work.
Am I missing something?
I believe it should have always been /mnt/cache and not the sharename. At least that is how I coded and tested it. If sharename/file.txt was working, I don't think it was intentional.
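For anyone checking their exclude file, the -x behavior is easy to verify in isolation. The paths below are made up for the demo; only the grep flags match the plugin's command.

```shell
#!/bin/bash
# With -F (fixed strings) and -x (whole-line match), an exclude entry only
# filters a path that matches it exactly, so entries must be full
# /mnt/cache/... paths rather than sharename-relative ones.
tmp=$(mktemp -d)

printf '%s\n' 'sharename/file.txt' > "$tmp/exclude.txt"   # partial path
printf '%s\n' '/mnt/cache/sharename/file.txt' '/mnt/cache/sharename/other.txt' \
    | grep -vFx -f "$tmp/exclude.txt"
# Both lines pass through: "sharename/file.txt" is not an exact whole-line match.

printf '%s\n' '/mnt/cache/sharename/file.txt' > "$tmp/exclude.txt"  # full path
printf '%s\n' '/mnt/cache/sharename/file.txt' '/mnt/cache/sharename/other.txt' \
    | grep -vFx -f "$tmp/exclude.txt"
# Now only /mnt/cache/sharename/other.txt survives.
```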
-
20 hours ago, sasbro97 said:
Hey guys. I guess somebody has probably answered already but the search function does not show me anything useful. I have the same error as this user:
mv: cannot stat '/usr/local/bin/mover': No such file or directory
I see it in the boot logs. It still boots but why is this error appearing? How to fix it?
Mover location was changed somewhere between 6.9 and 6.11, so older versions still need the older path, hence the "missing" error.
Once you run mover it creates a soft link. If you search back several pages I think I posted how to create the link. Will still go missing on bootup though. DM me if you can't find how to create it.
-
2 hours ago, xra said:
That's what got me here. I believe you can do this by setting "Move files off cache based on age:" What I don't know is if that age is based on last access time or modified time. Where there is a setting to use CTIME I presume the default is modification time. Not exactly what I am looking for.
So, I am thinking of simply adding a wild card in either the ignore file or ignore file type list to make it move nothing. Then I have a bash script that will do the moving the way I want:
Move everything that has a last access age of X days.
Keep moving stuff until the volume of the source is below Y% capacity.
Backing up a bit, I basically want tiered storage. Requirements:
- Create a single share that consists of an SSD and a hard drive (that's easy, that is what unraid does)
- Disable the default mover (only way I can see to do this is using mover tuning to ignore all files somehow)
- Move everything that has a last access age (not last modified) of X days from the SSD to the hard drive
- Keep moving stuff until the volume of the source is below Y% capacity.
Only a month or so into using unraid so there may be a way to do this I just haven't found yet.
UPDATE (later that same day)
The age is based on modified time, aka mtime, not last-accessed time. It seems easy enough to change, or to additionally support last-accessed time:
if [ "$CTIMEA" == "yes" ]; then
    FINDSTR+=" -ctime +$RAGE"
else
    FINDSTR+=" -mtime +$RAGE"
fi
Not sure if this has changed, but atime was not supported by the shares (back in 2020). If you can provide me some evidence that it does support it, I can add -atime to the find string.
Also, the way mover works now, it creates a file list. If you want to create a function that I could add to re-order or relist the files, I'm willing to add it as an option.
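If atime were supported, extending the selector above is a one-branch change. This is a sketch reusing the FINDSTR/RAGE names from the quoted snippet; AGEMODE and the hard-coded values are hypothetical.

```shell
#!/bin/bash
# Hypothetical: choose the find(1) time predicate from a single setting.
RAGE=30              # age threshold in days (illustrative)
AGEMODE="atime"      # "ctime", "mtime", or "atime" (illustrative setting)
FINDSTR="find /mnt/cache -type f"
case "$AGEMODE" in
    ctime) FINDSTR+=" -ctime +$RAGE" ;;
    atime) FINDSTR+=" -atime +$RAGE" ;;
    *)     FINDSTR+=" -mtime +$RAGE" ;;
esac
echo "$FINDSTR"
```

Note that -atime is only meaningful if the filesystem is mounted with access-time updates enabled, which is exactly the open question above.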
-
With limited testing... Thanks SimonF for updating the files.
Change the paths below to the location where you downloaded these files on your server (mine is /mnt/user0/Backups).
I also suggest running in test mode first, as there is then a 1-second delay between file moves.
Install instructions:
1.) Back up the original ArrayOperation.page file:
cp /usr/local/emhttp/plugins/dynamix/ArrayOperation.page /usr/local/emhttp/plugins/ca.mover.tuning/ArrayOperation.page-premovertuning6-12-4
2.) Back up the original parity_list file, if it exists:
cp /usr/local/emhttp/plugins/dynamix/nchan/parity_list /usr/local/emhttp/plugins/ca.mover.tuning/parity_list6-12-4
3.) Copy the new ArrayOperation.page file over the original:
cp /mnt/user0/Backups/ArrayOperation6124a.page /usr/local/emhttp/plugins/dynamix/ArrayOperation.page
4.) Set permissions on ArrayOperation.page:
chmod 644 /usr/local/emhttp/plugins/dynamix/ArrayOperation.page
chown root:root /usr/local/emhttp/plugins/dynamix/ArrayOperation.page
5.) Copy the new parity_list file to the nchan directory:
cp /mnt/user0/Backups/parity_list6124a /usr/local/emhttp/plugins/dynamix/nchan/parity_list
6.) Set permissions on parity_list:
chmod 755 /usr/local/emhttp/plugins/dynamix/nchan/parity_list
chown root:root /usr/local/emhttp/plugins/dynamix/nchan/parity_list
7.) Restart nchan:
Log out of the GUI, wait 30 seconds to a minute, then log back in.
You may see processes end:
ps -ef | grep nchan
8.) (Possibly needed) Restart nginx:
/etc/rc.d/rc.nginx stop
/etc/rc.d/rc.nginx start
-
-
10 hours ago, xD3CrypTionz said:
Any updates on this @hugenbdd?
Yes, I have some updated files but I have to test. I should have time early this week and will post them here.
-
Please try this file.
I have reverted a few lines back to the old mover. It would be helpful if you had logging on.
} else {
//exec("echo 'Running from button' >> /var/log/syslog");
//Default "move now" button has been hit.
$niceLevel = $cfg['moverNice'] ?: "0";
$ioLevel = $cfg['moverIO'] ?: "-c 2 -n 0";
logger("ionice $ioLevel nice -n $niceLevel /usr/local/sbin/mover.old $options");
passthru("ionice $ioLevel nice -n $niceLevel /usr/local/sbin/mover.old $options");
-
Swarles found a piece of code I changed that may be causing this. I need a bit of time to revert it, and I'll post it here for someone to try before a release.
-
3 hours ago, Robbie G78 said:
So, should I still just run that command and that fixes it?
No, you have the pointer needed.
Can you send me a few mover logs from /tmp/Mover in a DM?
-
52 minutes ago, Kloudz said:
DM sent.
(Can you send me one of the Mover_Tuning log files?)
-
I'm still on 6.11.4; I will upgrade tomorrow morning to test and debug.
Is there any info in the logs, or any new files created in the /tmp/Mover directory?
-
Can you check to see if the mover file has been created? (Its location was changed between 6.11 and 6.12.)
ls -ltr /usr/local/sbin/move
and
ls -ltr /usr/local/bin/move
If it is not in /usr/local/sbin, then run this command to create a link.
ln -s /usr/local/bin/move /usr/local/sbin/move
-
New release from some files Swarles and I made last month. Sorry for the delay; life got in the way...
2023.08.22
- Fixed cron job entry
- Modified ignore command to include folders (Swarles)
- Updated mover cmdline functions (Hugenbdd)
-
1 hour ago, loond said:
Running 6.12.3
So mover ran, and while it did move files from a ZFS pool to the array as expected, I'm left with a bunch of empty directories in the share's created dataset; I also have the CA Mover Tuning plugin. I've tried to re-run the mover in the hope it would clean up the empty directories, but no dice. My setup is essentially tiered storage, with files moved off of the ZFS pool based on age, so I don't want everything moved at once.
Is this a known issue? If yes, is there any simple solution to clean them up, like a reboot, etc.?
Thanks in advance.
It should delete the empty directories the second/next time it's run.
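If you don't want to wait for the next run, cleaning them up manually is a find one-liner. This is a generic sketch, not plugin code; the example pool path is hypothetical, so point it at your own dataset.

```shell
#!/bin/bash
# Sketch: delete empty directories left behind under a pool path.
prune_empty_dirs() {
    # -delete implies depth-first processing, so directories that become
    # empty after their empty children are removed get deleted too.
    find "$1" -mindepth 1 -type d -empty -delete
}
# Example (hypothetical pool path):
#   prune_empty_dirs /mnt/cache/data
```

-mindepth 1 keeps the pool's top-level directory itself from being deleted if it happens to be empty.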
-
Argg... you caused me to find an issue.
I will fix it when I get some time, but I'm away for a few days, so probably over the weekend.
In the meantime, you can use this to grab the status:
root@Tower:/usr/local/emhttp/plugins/ca.mover.tuning# ./age_mover status
or check for the existence of the PID file. If it exists, the Process # (pid) will be in the file.
cat /var/run/mover.pid
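A quick way to turn that PID file into a yes/no check. This is a sketch; the function name is made up, and /var/run/mover.pid is the path from the post above.

```shell
#!/bin/bash
# Sketch: report whether mover is running, based on its PID file.
mover_running() {
    local pidfile="${1:-/var/run/mover.pid}"
    # Running if the PID file exists and its process is still alive.
    # kill -0 sends no signal; it only tests whether the PID exists.
    [ -f "$pidfile" ] && kill -0 "$(cat "$pidfile")" 2>/dev/null
}
if mover_running; then
    echo "mover is running"
else
    echo "mover is not running"
fi
```

The kill -0 test also covers the stale-PID-file case, where the file survived but the process is gone.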
-
Yup, you can do age_mover status.
It will just cat out the PID file, /var/run/mover.pid.
I also write the status of the current mover (what file it's on), and there is a "GUI" update if you're willing to modify unRAID's stock files. (Manual install; I'm out the next few days so can't provide instructions right now, but they are posted here or on Reddit if you want to hunt them down.)
-
9 hours ago, cynikaly said:
Hi, still on 6.11.5
I keep randomly getting a mover error with the plugin every now and then:
cat: /boot/config/plugins/ca.mover.tuning/ca.mover.tuning.cfg: No such file or directory Jul 13 08:00:01 redacted root: /usr/local/emhttp/plugins/ca.mover.tuning/age_mover: line 675: [: 87: unary operator expected
I have not changed anything in the config in over a month and it worked before.
Does anyone have a clue what the cause could be?
Can you make a small change to the settings under the scheduler, apply it, then change it back?
It's possible the file was lost or not saved for some reason.
/boot/config/plugins/ca.mover.tuning/ca.mover.tuning.cfg should be there.
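That "unary operator expected" is the classic symptom of an unquoted, empty variable in a single-bracket test. A minimal reproduction and the usual fix (the variable names and the threshold value are illustrative, not the plugin's actual line 675):

```shell
#!/bin/bash
# When the cfg file can't be read, a setting variable ends up empty and the
# comparison loses its right-hand side:
#   [ $pct -ge $threshold ]  ->  [ 87 -ge ]  ->  "[: 87: unary operator expected"
pct=87
threshold=""    # what you get when ca.mover.tuning.cfg is missing
# The usual fix: quote the expansion and supply a safe default.
if [ "$pct" -ge "${threshold:-100}" ]; then
    echo "above threshold"
else
    echo "below threshold"
fi
```

With the default in place the comparison stays well-formed even when the config file is gone, instead of aborting the mover run.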
-
Looking into this..
[Plugin] Mover Tuning
in Plugin Support
Posted
I will look into it, but I have been very busy with family and work the past 4-5 months, and the next few months look busy also.
I also have a few other issues to look into.