Everything posted by hugenbdd

  1. Thank you! I have mine set to cache "yes"; since I have a very large cache drive, I didn't think of these.
  2. I have seen a lot of issues recently where cache drives are getting filled up and causing problems (not for me, but in posts). Would a warning of some kind before allowing this to be set be helpful? Also, what are some of the use cases for this setting?
  3. Agreed, remove the plug-in and try to move. Can you verify no shares are set to cache "prefer"?
  4. Hi. Can we get a check for this file if Mover Tuning is installed? /boot/config/plugins/ca.mover.tuning/ca.mover.tuning.cfg Unless that file is not supposed to persist. It seems that maybe updating to 6.2 is removing the file. I have not updated just yet (I will this weekend to verify). Thanks
  5. Nope. If anything I would rather it go to Squid. I think the donate button in the settings goes to his PayPal.
  6. Looks like the config file is somehow getting deleted, maybe from an update; this is the second time I have seen this recently. cat: /boot/config/plugins/ca.mover.tuning/ca.mover.tuning.cfg: No such file or directory Make a small change in the mover settings (i.e. change the percent used to a slightly lower or higher value), then check from the console that the file exists: ls -ltr /boot/config/plugins/ca.mover.tuning/ca.mover.tuning.cfg and cat /boot/config/plugins/ca.mover.tuning/ca.mover.tuning.cfg
  7. New Release 2021.04.20 Added a "CTIME" option under Age. This allows the "find" command to use ctime instead of mtime. But I'm thinking I might need to tweak it again to remove the +.... (Please confirm if you use this option.) Example find command: find "/mnt/cache/Backup" -depth -ctime +30
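For reference, GNU find rounds file ages to full 24-hour units, and the "+" in "+30" means strictly more than 30 such units, which is likely why the "+" may need tweaking. A small sketch of that semantics using mtime (ctime cannot be backdated by hand, so mtime stands in here; the paths are scratch files, not real shares):

```shell
#!/bin/sh
# Sketch of find's "+N" age semantics, using mtime since ctime
# cannot be set directly. GNU touch/find assumed.
tmp=$(mktemp -d)
touch -d "40 days ago" "$tmp/old.txt"   # well past the 30-day cutoff
touch "$tmp/new.txt"                    # modified just now
find "$tmp" -type f -mtime +30          # matches only old.txt
rm -rf "$tmp"
```

With "-mtime 30" (no "+") only files exactly 30 days old would match, and with "-30" only newer ones, so the sign prefix changes the comparison entirely.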
  8. I like big SSD cache drives; the bigger the better. Things tend to run smoother. I live on the wild side and do not have my cache in any RAID configuration, as I just have one 2TB NVMe drive. But it has a 3500TB write rating. I back up appdata and settings with CA Backup every week, and my media/content gets backed up to Google Drive every night. So I feel like I can recover if the cache drive dies.
  9. After reviewing the latest log, I'm not convinced the Mover Tuning plug-in is causing these issues, as I don't see my specific log entries. I think this may be a setup or config issue. Maybe something has a cache "prefer" setting somewhere, or there is another script running doing something. This is outside of my expertise for Unraid. Maybe remove the plug-in, reboot, try to move (with Unraid's mover), and review the logs. Maybe an Unraid employee can help at that point.
  10. Looks like mover was still running for some reason. Is Disk14 full? Apr 20 10:43:18 StarGate move: move: create_parent: /mnt/disk14/appdata/nodered/node_modules/concat-stream error: No space left on device Also, do you have any shares set to "prefer" for cache? The reason I ask is that it appears to be trying to move a file off the array (disk14) to the cache. (... is used in place of the whole file path.) (For "use cache: yes", it usually moves from /mnt/cache/<filepath here> to the array.) Apr 20 10:41:51 StarGate shfs: copy_file: /mnt/disk1
  11. That still looks like the old log, as the new "syslog.txt" in the diagnostics file is blank (0 KB). If you open the log from the GUI while it's moving (or shortly after), or open /var/log/syslog from the console, I'm looking for entries like this: (mvlogger:)
  12. You might also want to make some entry in the config settings, i.e. move on 10% usage, etc. The config file needs the "Apply" button hit to be created/updated.
  13. A few things. 1.) I can't find the install of the new Mover Tuning plug-in in your syslog. Are the logs from after the plug-in was installed? (Your screenshot looks correct.) I should see an entry like:
      Apr 18 17:57:08 StarGate root: ----------------------------------------------------
      Apr 18 17:57:08 StarGate root: ca.mover.tuning has been installed.
      Apr 18 17:57:08 StarGate root: Copyright 2018, Andrew Zawadzki
      Apr 18 17:57:08 StarGate root: Version: 2019.08.23
      Apr 18 17:57:08 StarGate root: ----------------------------------------------------
  14. Can Klainn rename his to lower case to match the display name? Will there be any issues from doing that?
  15. Can you do an ls -ltr of this directory? /boot/config/shares I think we should reach out to Unraid and see what they suggest, i.e. whether share config files should EXACTLY match their display names (caps or not). My guess is that they should match exactly.
  16. Can you tell me what you have in this directory? /boot/config/pools/ The original mover loops through pool config files like this:
      # Check for objects to move from pools to array
      for POOL in /boot/config/pools/*.cfg ; do
        for SHAREPATH in /mnt/$(basename "$POOL" .cfg)/*/ ; do
      Whereas my mover loops through shares, like this:
      for SHARECFG in /boot/config/shares/* ; do
        if grep -qs 'shareUseCache="yes"' "$SHARECFG" ; then
      My gut reaction is that the file Personal.cfg should be renamed to personal.cfg. however,
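The quoted share loop can be sketched end-to-end like this, with a temp directory standing in for /boot/config/shares (an illustration of the logic, not the plug-in's verbatim script; the share names are made up):

```shell
#!/bin/sh
# Simulate /boot/config/shares with a temp dir. Only shares set to
# "use cache: yes" are picked up; the cfg basename must match the
# on-disk share name exactly, case included.
cfgdir=$(mktemp -d)
printf 'shareUseCache="yes"\n'  > "$cfgdir/media.cfg"
printf 'shareUseCache="only"\n' > "$cfgdir/appdata.cfg"
for SHARECFG in "$cfgdir"/*.cfg ; do
    if grep -qs 'shareUseCache="yes"' "$SHARECFG" ; then
        echo "movable share: $(basename "$SHARECFG" .cfg)"
    fi
done
rm -rf "$cfgdir"
```

Because the share name is derived from the cfg filename with basename, a Personal.cfg would only ever map to /mnt/&lt;pool&gt;/Personal, never to a lowercase personal directory, which is why the case mismatch matters.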
  17. New Update 2021.04.16 Added a "Script to run before mover" text field: enter the path to a custom script to run before mover starts. Added a "Script to run after mover" text field: enter the path to a custom script to run after mover finishes. These will ALWAYS run, even if the filters remove all possible files (i.e. "Mover not needed"). I plan to change this in the future to only run if something is found, but that is months away at the earliest. You should probably stay away from spaces in the path/filename. I only tested with a simple "Hello World" s
  18. At the moment, it would run before doing any checks at all. In the future, if mover changes like I think it will, then I can incorporate running the scripts or not, depending on whether something moves.
  19. Saw a few posts on Reddit about caches with locked files, or downloaders getting slowed down during a move. Would it be helpful if I added two new sections to the tuner? - Script to run before mover starts - Script to run when mover ends I'm thinking this would allow people to shut down dockers or pause downloaders. This will probably run before checking any of the "bounds", i.e. whether a pool's cache % used is hit or not. Thoughts?
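A minimal pair of hook scripts for that use case might look like the sketch below. The container name "sabnzbd" and the script paths are assumptions for illustration; docker pause/unpause are standard Docker CLI commands that freeze and resume a running container:

```shell
#!/bin/sh
# Hypothetical bodies for a pre-mover and post-mover script
# (e.g. /boot/config/pre_mover.sh and post_mover.sh), written here
# as functions. "sabnzbd" is an assumed container name.
pre_mover()  { echo "pausing downloader";  docker pause sabnzbd   2>/dev/null || true; }
post_mover() { echo "resuming downloader"; docker unpause sabnzbd 2>/dev/null || true; }

pre_mover    # would run before mover starts
post_mover   # would run after mover ends
```

The "|| true" keeps the hook from failing the whole run when the container is already stopped or Docker is unavailable.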
  20. New Release 2021.04.15 - Changed the text of the field description "Force move of files on a schedule:" to "Force move of all files on a schedule:" - There is no log entry for this; only an email with the output will be sent, like most cron jobs. Basically I put it back to how it used to work. The cron entry in the configs now calls Unraid's original mover file. None of the plug-in settings will be read (other than the time/s specified in the cron text field).
  21. I think mtime is still the "time" type we should be searching on. From Google: Unix keeps 3 timestamps for each file: mtime, ctime, and atime. Most people seem to understand atime (access time); it is when the file was last read. There does seem to be some confusion between mtime and ctime though. ctime is the inode change time, while mtime is the file modification time. Maybe it could be an option in the settings....
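The mtime/ctime distinction is easy to see from the console: an inode-only change such as chmod bumps ctime without touching mtime. A quick check, assuming GNU touch and stat (as shipped with Unraid's Slackware userland):

```shell
#!/bin/sh
# Show ctime moving ahead of mtime after inode-only changes.
f=$(mktemp)
touch -d "2 days ago" "$f"   # backdate mtime; ctime is set to "now"
chmod 600 "$f"               # permission change bumps ctime only
mtime=$(stat -c %Y "$f")     # last content modification, epoch seconds
ctime=$(stat -c %Z "$f")     # last inode change, epoch seconds
[ "$ctime" -gt "$mtime" ] && echo "ctime is newer than mtime"
rm -f "$f"
```

This is why ctime-based age filtering can surprise people: moving or re-permissioning a file resets its ctime clock even though the content never changed.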
  22. New release to enable logging based on the standard Mover Settings. 2021.04.14 Logs are now based on the standard "Mover Settings" value for "Mover Logging:" (enabled or disabled). (Very detailed logs.) Changed the text of the field description "Force move of All files on a schedule:" to "Force move of files on a schedule:"
  23. Not sure what in the plug-in could be causing this. There is a "find" command, but that is based on the cache path. Also, it checks the percentage of disk used on the cache pool.
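For context, that percentage check can be done with a plain df against the pool's mount point. A sketch of the idea, not the plug-in's exact code (/tmp stands in for /mnt/cache, and the 70% threshold is an arbitrary example):

```shell
#!/bin/sh
# Read percent-used for a mount point (GNU df's --output assumed).
pool=/tmp    # stand-in for /mnt/cache
used=$(df --output=pcent "$pool" | tail -1 | tr -dc '0-9')
echo "pool $pool is ${used}% full"
if [ "$used" -ge 70 ]; then
    echo "threshold hit: mover would run"
fi
```

Since df only reports whole filesystems, the check is cheap and does not depend on walking the cache's file tree.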
  24. Actually had instructions in a past release.... - The age_mover bash script now checks for the global threshold percent or a manual entry in the ca.mover.tuning.cfg file (example: /boot/config/plugins/ca.mover.tuning/ca.mover.tuning.cfg with an entry of cachetv="65") - "cachetv" being the name of the pool. - If using single digits, a leading zero is required (i.e. cachetv="01" for 1 percent). This will override the "global" percentage selected in the GUI for the cache pool (or pools) entered in the config file.
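Reading such a per-pool override back out of the cfg might look like this. Only the file path and the cachetv="65" entry come from the post above; the parsing itself is my sketch, not the plug-in's verbatim code:

```shell
#!/bin/sh
# Sketch: pull a per-pool percentage override out of the cfg file.
# A temp file stands in for /boot/config/plugins/ca.mover.tuning/ca.mover.tuning.cfg.
cfg=$(mktemp)
echo 'cachetv="65"' > "$cfg"
pool=cachetv                                   # pool name == cfg key
override=$(grep "^${pool}=" "$cfg" | tr -dc '0-9')
echo "override for $pool: ${override}%"
rm -f "$cfg"
```

If the grep finds nothing, $override is empty and a script would fall back to the global GUI percentage, matching the behavior described in the post.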
  25. Hi. You can copy the file I posted a while back to enable the echo/log statements, or wait a week until I update to handle the logs better. Not much I can troubleshoot at the moment without the log statements. Once I have them, we can test from the console the "find" command that it creates, to see why it may not be working as expected.