Swarles

Everything posted by Swarles

  1. Yeah, so at the moment that's not possible with the raw plugin, and I totally understand the desire for it. In fact, there are currently no mover tuning settings for array->cache moves at all. I have been working on creating a new plugin from the ground up which will bring all the tuning settings to array->cache moves as well, even though it probably won't be used by 90% of people.
     Unfortunately, because the mover only really moves files one way depending on how the share is configured, what you want is not easily achievable by manipulating the built-in mover. So there may be circumstances where files on the array that your tuning settings would normally keep on the cache will remain on the array. This is likely to occur if you set up mover tuning after having already established your server file system, or after changing the mover tuning settings considerably.
     What I will think about, though, is adding a button that extracts files on the destination drive matching the source drive's mover tuning settings (but inverted), to ensure all files on that share follow the tuned settings. That way you would only need to run it once after changing the settings. I think this would address your feature request.
     That said, the plugin is still a fair way off and this feature will be lower down on my priority list.
  2. Unfortunately, your exact requirements are not currently supported. The primary issue is that in point #3 the threshold takes precedence: if the threshold is not exceeded, files older than X days will not be moved. They are only moved if the threshold has been exceeded AND they are older than X days.
     Furthermore, the threshold operates at the pool level, not the share level, since a share typically does not have a set maximum capacity. The threshold therefore takes into account the usage of all shares on that pool.
     You might be able to implement your desired logic by slightly modifying the before script Terebi posted above. Otherwise, I recommend setting the threshold to a value that is suitable for all your shares, and understanding that the mover will only run once that threshold is exceeded, at which point all your other settings apply.
     I'm remaking mover tuning from the ground up and it will support your desired functionality, but it is quite a ways off at the moment.
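The precedence described above can be sketched roughly like this (an illustrative sketch only; the variable names, values, and paths are placeholders, not the plugin's actual code):

```shell
# Placeholder values standing in for the real pool usage and settings:
USED_PCT=75      # current pool usage (%)
THRESHOLD=80     # mover tuning threshold (%)
AGE_DAYS=30      # "older than X days" setting

if [ "$USED_PCT" -ge "$THRESHOLD" ]; then
  # Threshold exceeded: the age filter now applies per file.
  # On the server this would be something like:
  #   find /mnt/cache -type f -mtime +"$AGE_DAYS"
  RESULT="threshold exceeded: move files older than $AGE_DAYS days"
else
  # Threshold not exceeded: nothing moves, regardless of file age.
  RESULT="threshold not exceeded: nothing moved"
fi
echo "$RESULT"
```

With the usage below the threshold, the age setting never comes into play, which is exactly the behaviour described above.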
  3. Interesting, so are you having issues running Mover Tuning entirely? What version of Unraid are you using?
  4. In that case you should be able to echo the list of file paths and pipe them into "/usr/local/sbin/move -d".
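A rough sketch of what that could look like (the file paths are placeholders, and I haven't verified the exact input format "move -d" expects, so treat this as a starting point):

```shell
# Build a list of files to hand to the mover (placeholder paths):
printf '%s\n' \
  "/mnt/cache/share1/movie1.mkv" \
  "/mnt/cache/share1/movie2.mkv" > /tmp/files_to_move.list

# On the Unraid box you would then pipe the list into the mover binary:
# cat /tmp/files_to_move.list | /usr/local/sbin/move -d
```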
  5. Yeah, unfortunately the way the individual share mover script is implemented, it doesn't use the mover tuning settings. It honestly wouldn't be the hardest thing to implement, so I might do it, but I'm also working on revamping the plug-in from the ground up, so that's where my limited time is being spent at the moment.
     For now, the easiest thing you can do if you want to use the tuning settings but move only one share (or a few) is to put the path for every other share in the skip files list, e.g.:
        /mnt/cache1/share1
        /mnt/cache1/share2
        /mnt/cache2/backups
     This will work exactly as expected, albeit quite inefficiently, as it will still loop through each share but just find no files.
     Honestly, I'm not really sure how the built-in mover handles hard links, but I can tell you that when the mover tuning plugin detects a hard link in the list of files to be moved, it sends the entire list of files as a single group to the mover (sbin/move). Otherwise it sends only one file path at a time. I didn't implement this, so I am not sure whether that grouping is a requirement or not.
  6. I think it would be possible, but it's not something I would want to implement in the current mover tuning code base. I am currently building the mover tuning plugin from the ground up, fully in PHP, using OOP practices, and I'll think about the best way to evaluate a threshold on a per-dataset basis. I'm not sure if I would ship it in the first release, but the idea of migrating to OOP is to make implementing things like this easier. For the time being it will remain a whole-pool basis unless someone else wants to implement it. In light of that, which one do you think would be better to use going forward?
        zpool get -o value capacity
     OR
        df --output=pcent
     That's weird. Can you confirm whether that file appears if you run the following:
        find '/mnt/cache/movies' -depth -not -path '*/\.*'
     If not, you might need to confirm what find string is being used in the logs.
     I couldn't find any code relating to destroying datasets, which leads me to believe this is a feature baked into the built-in mover. I could possibly add a check to not send datasets to the mover when that setting is enabled. This will definitely be an option when I finish building the new mover tuning plugin.
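For comparison, the two candidates read usage quite differently (a sketch; the pool name is a placeholder, "zpool" only applies to ZFS pools, and "/tmp" stands in for "/mnt/cache" so the example runs anywhere):

```shell
# Option A - ZFS only, asks the pool itself (run on the server):
#   zpool get -o value capacity cache
# Option B - filesystem-agnostic, works for any mounted pool path:
PCT=$(df --output=pcent /tmp | tail -n 1 | tr -dc '0-9')
echo "pool usage: ${PCT}%"
```

Option B has the advantage of behaving the same for btrfs, XFS, and ZFS pools, at the cost of reporting filesystem-level usage rather than what the pool itself believes.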
  7. See item #3 in the FAQ. Make sure the file is readable, and double check that the file is actually in that location; it's likely the mover moved it after the first run. If it still isn't working, you'll have to provide the latest ".logs" files for me to see what is going on. See item #1 in the FAQ for providing logs.
  8. The way it's done on the share page is an onclick event that calls the following:
        $.post("/plugins/ca.mover.tuning/moveShareNow.php?Share='<?echo $shareName?>'");
     The "moveShareNow.php" script then executes:
        exec("/usr/local/emhttp/plugins/ca.mover.tuning/share_mover {$shareName} >> /var/log/syslog &", $output, $retval);
     You can definitely call that script from the command line and just provide the share name as an argument:
        bash /usr/local/emhttp/plugins/ca.mover.tuning/share_mover 'shareName'
     NOTE: The share_mover script in the current plugin does not consider the mover tuning settings and will move everything off that share.
  9. FAQ - Frequently Asked Questions
     I'm putting these here for some of the common problems, and will try to update it as more questions/issues come along.
       • How do I enable logs / provide logs?
       • Mover won't run after installing Mover Tuning?
       • Mover isn't ignoring files or folders?
       • Issue Reporting Template
  10. Oh okay, during boot makes sense. I have found the issue: line 113 of "ca.mover.tuning.plg":
         if [[ ! -f /usr/local/bin/mover.old ]]; then mv /usr/local/bin/mover /usr/local/bin/mover.old; fi
      I uninstalled the plugin and rebooted, and it would appear that Unraid does not initialise a "mover" file at "/usr/local/bin/mover" (on 6.12.8 at least). "move" is located in "bin" and "mover" is located in "sbin". I don't know enough about Unraid to know why this is the case, or what the configuration is on older OS versions; it's probably important for people on older versions. I can confirm that this is not an issue to worry about, though.
      We could change it so it checks whether "mover" exists rather than whether "mover.old" doesn't exist. Thoughts @hugenbdd?
      If it is really bothering anyone, for the time being you can remove that line in "/boot/config/plugins/ca.mover.tuning.plg" or suppress its output.
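The proposed alternative check could look like this (demonstrated on a scratch directory so it's safe to run anywhere; the real line operates on /usr/local/bin/mover, and I've used POSIX [ ] in place of the plg file's [[ ]] purely for portability):

```shell
# A scratch directory stands in for /usr/local/bin:
DIR=$(mktemp -d)
touch "$DIR/mover"

# Proposed guard: rename only when the source file actually exists,
# instead of testing whether mover.old is absent:
if [ -f "$DIR/mover" ]; then mv "$DIR/mover" "$DIR/mover.old"; fi

# A second run is now a harmless no-op rather than a "No such file" error:
if [ -f "$DIR/mover" ]; then mv "$DIR/mover" "$DIR/mover.old"; fi
```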
  11. @2TC @modem7 @BazzT92 @wgstarks Hmm, I can't seem to recreate this issue after updating to 6.12.8. Would someone with the issue be able to clarify the following:
      1. Does this error message happen whenever Mover is trying to start, or during boot of the machine?
      2. Does this error message happen during the scheduled mover run, or when using the "Move" button on the main page? (With "Move now button uses mover tuning" enabled.)
      3. Can you verify that the file exists manually?
            ls -l /usr/local/bin/mover
         Output:
            -rwxr-xr-x 1 root root 161 Feb 25 11:05 /usr/local/bin/mover*
         Are the permissions the same as above?
      4. Check the cron entry and verify:
            cat /etc/cron.d/root
         Output:
            # Generated mover schedule:
            40 3 * * * /usr/local/sbin/mover |& logger
         Check if your output is the same.
      If you verified both steps 3 and 4 to be true, try uninstalling the plugin, rebooting, and reinstalling. You can back up your settings with:
            cp -r /boot/config/plugins/ca.mover.tuning /boot/config/ca.mover.tuning.backup
      Once you have reinstalled, restore your settings with:
            mv -f /boot/config/ca.mover.tuning.backup /boot/config/plugins/
      Let me know if the issue persists.
  12. Thanks! I'm in the process of implementing similar functionality in the plugin and also fixing some bugs. It might be some time though, so for now I think this is an excellent solution for those that want it. Great work, and I appreciate the effort!
  13. There could be a couple of reasons this is happening. It sounded very similar to an issue posted a week or so ago, but I have checked and it is not quite the same. It seems as though the script is encountering a non-numerical value and thus cannot perform the numerical operation. While we could simply add some code to handle and ignore these cases, it would be beneficial to understand where the value is coming from. If you are still occasionally receiving the error, could you please turn on mover logging and test mode and then provide the latest file located at:
         /tmp/Mover/Custom_Mover_Tuning_[TIMESTAMP].list
      If you don't know how to enable these settings, see the FAQ above.
  14. Oh yep, that'll do it, fixed now. Should be fixed in the next update. Thank you for spotting that
  15. Can you confirm whether the path that exists is /mnt/cache.nvme or /mnt/cache_nvme? It could be that the code is treating ".nvme.cfg" as the full extension and removing it, but I have checked this and it should not be the case. For a better understanding, you should provide some logs:
      1. Go to Settings > Scheduler > Mover Settings and set "Mover Logging" to Enabled.
      2. Go to the Mover Tuning settings and set "Test Mode" to "Yes".
      3. Set "Move Now button follows plug-in filters" to "Yes".
      4. On the Main tab, press the "Move" button to run the mover. Running in test mode will generate logs but not make any moves.
      5. Go to "/tmp/Mover" and grab the most recent file ending in ".log". You can revert test mode and logging now.
      You can supply the .log contents here in a spoiler or send them to me privately. This should help me understand why you are experiencing unexpected behaviour.
  16. The mover tuning plugin uses the cache pool configuration files located in "/boot/config/pools" to determine what pools exist and their settings. Check whether you have a file called "cache.cfg" in there in addition to your "cache_nvme.cfg" file. Let me know if you don't have one; or, if you only have "cache.cfg", check inside it to see what cache name it reports.
  17. Not 100% sure, but it seems like you might be getting an invalid file size for some of the files involved in an array->cache move. Could you check the contents of your "/tmp/Mover/Custom_Cache_Tuning_$NOW.list" file? You may have some entries that are missing a number at the end.
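One quick way to spot such entries is to print any line that doesn't end in a digit (the file below is a stand-in created just for the demonstration; on the server you would point grep at the real Custom_Cache_Tuning list, and I'm assuming list lines end in a numeric size):

```shell
# Create a stand-in list with one malformed entry (no size at the end):
cat > /tmp/demo_tuning.list <<'EOF'
/mnt/user/movies/a.mkv 1234567
/mnt/user/movies/b.mkv
/mnt/user/movies/c.mkv 89
EOF

# Print any line that does not end in a number:
grep -vE '[0-9]+$' /tmp/demo_tuning.list
```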
  18. Can you confirm whether you have a cache pool called "cache" and whether the location "/mnt/cache" exists? My first thought is that your cache pool is not called that, but you may have some leftover cache pool config files in "/boot/config/pools". Check that location and see if all your pools are correct.
  19. Just to clarify, the mover tuning plug-in settings won't affect "Array > Cache" shares, but the mover itself still affects them. They are treated as if by the normal mover: an attempt will be made to move all files in that share from the array to the cache, regardless of the settings in the mover tuning plug-in. The mover tuning plug-in settings only affect moves made from the cache to the array. You'll notice that if you click into a share, the option to override the global mover tuning settings is only available for "Cache > Array" shares. This is not to say it can't be supported in the future; it just hasn't been so far, and it's probably a pretty niche use case.
  20. Have you tried testing it, or are you just going off the code? That part of the code is only meant to exclude the top-level folder path, or individual files, if they have been specified in the exclude list; it doesn't exclude anything underneath a folder. The find string will then use -not -path "/mnt/cache/MyShare/KeepOnCache/*" to exclude the files underneath the folder.
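You can watch that exclusion behave this way on a scratch tree (all the paths and file names below are placeholders for illustration):

```shell
# Build a scratch share with a folder whose contents should stay put:
ROOT=$(mktemp -d)
mkdir -p "$ROOT/MyShare/KeepOnCache"
touch "$ROOT/MyShare/movie.mkv" "$ROOT/MyShare/KeepOnCache/keep.mkv"

# The -not -path pattern excludes everything underneath the folder,
# so only the file outside KeepOnCache is listed:
FOUND=$(find "$ROOT/MyShare" -type f -not -path "$ROOT/MyShare/KeepOnCache/*")
echo "$FOUND"
```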
  21. What issue are you noticing? Inside the "exclusion.txt" file you need to make sure that each exclusion path goes through the pool name (e.g. "/mnt/cache/...") and not the user share ("/mnt/user/..."). You also need to make sure that if you are using the override settings for a particular share, you include the exclude list inside those override settings too, because the share won't use the global exclude list.
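As a concrete example, a working exclusion list would look like this (the share names are made up for illustration, and I'm writing it to /tmp here just so the snippet is self-contained):

```shell
# Every entry goes through the pool path (/mnt/cache/...),
# never the user share path (/mnt/user/...):
cat > /tmp/exclusion.txt <<'EOF'
/mnt/cache/appdata/plex
/mnt/cache/downloads/incomplete
EOF
```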
  22. Yeah, it's not possible at the moment because the threshold is for the entire pool, and in its current state the logic is not there to easily implement it per share. I will be looking to implement this eventually, but it will likely come with a big rewrite of the mover tuning logic, so I wouldn't expect it anytime soon. Some considerations need to be made, like which thresholds take precedence. For the share that you want to move daily, are you using any other mover tuning settings? There would be a way to set up a cron job to always move that share, but it would ignore the mover tuning settings, similarly to pressing the "Mover share now" button under the share's settings.
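If you do want the cron route, an entry along these lines should work (the share name is a placeholder; as noted, share_mover bypasses the mover tuning settings and will move everything off that share):

```text
# Move the 'daily_share' share every night at 04:00:
0 4 * * * bash /usr/local/emhttp/plugins/ca.mover.tuning/share_mover 'daily_share' >> /var/log/syslog
```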
  23. Yeah, so it should only be the source of the issue if you had "Override Mover Tuning settings for this share:" set to "Yes" but didn't supply the skip list. It's a sort of feature flaw that results from how the per-share settings are implemented. If you had it set to "No", then it should be a different problem.
  24. Hmm, strange; we might need some logs to really get to the bottom of this. If you enable logging in the mover settings under Scheduler, you can also put it into test mode so that it doesn't move anything but still goes through the whole process. Then you can find logs in the syslog and at "/tmp/Mover". My initial thoughts:
      1. If you're using the mover tuning settings override for the "data" share, make sure that you enable the skip list and provide the list in those settings too.
      2. Double check that "Skip files listed in text file:" is set to "Yes" and that the correct path is supplied under "File list path:".
      3. Perhaps the list is written in CRLF instead of LF. Recreate the skip file using nano in a terminal:
            nano /mnt/cache/appdata/ignore.files
         Paste in your list, then Ctrl+O to save and Ctrl+X to exit. Run in test mode again to see if it ignores the list.
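If recreating the file in nano doesn't appeal, you can also strip the carriage returns directly (the path below is an example; the snippet first fabricates a CRLF file just to demonstrate the fix):

```shell
# Simulate a skip list saved by a Windows editor (CRLF line endings):
printf '/mnt/cache/appdata/a.txt\r\n/mnt/cache/appdata/b.txt\r\n' > /tmp/ignore.files

# Strip the carriage returns so every entry is plain LF-terminated:
tr -d '\r' < /tmp/ignore.files > /tmp/ignore.files.lf
mv /tmp/ignore.files.lf /tmp/ignore.files
```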