Andiroo2's Achievements
Community Answers

  1. Odd issue today... trying to move files off my ZFS cache and... nothing is happening. I've had Mover Tuning for years without issue. I have tried changing a lot of these settings to see if anything changes, and nothing has worked so far. I haven't tried disabling Mover Tuning yet because I don't want to clear the cache pool out at this point. My Mover Tuning settings and my share overview are attached as well. If I run the dedicated "move all from this share" option in a share's settings, Mover works. When I run Mover via Move Now (or via the schedule), the log shows this:

Nov 22 06:43:37 Tower emhttpd: shcmd (206793): /usr/local/sbin/mover |& logger -t move &
Nov 22 06:43:37 Tower root: Starting Mover
Nov 22 06:43:37 Tower root: Forcing turbo write on
Nov 22 06:43:37 Tower root: ionice -c 2 -n 0 nice -n 0 /usr/local/emhttp/plugins/ca.mover.tuning/age_mover start 45 5 0 "/mnt/user/system/Mover_Exclude.txt" "ini" '' '' no 100 '' '' 95
Nov 22 06:43:37 Tower kernel: mdcmd (46): set md_write_method 1

I don't get the ***starting mover*** line in the log. When I run Mover manually from a share's settings page, I get a much nicer-looking set of logs:

mvlogger: Log Level: 1
mvlogger: *********************************MOVER -SHARE- START*******************************
mvlogger: Wed Nov 22 05:52:42 EST 2023
mvlogger: Share supplied CommunityApplicationsAppdataBackup
mvlogger: Cache Pool Name: cache
mvlogger: Share Path: /mnt/cache/CommunityApplicationsAppdataBackup
mvlogger: Complete Mover Command: find "/mnt/cache/CommunityApplicationsAppdataBackup" -depth | /usr/local/sbin/move -d 1
file: /mnt/cache/CommunityApplicationsAppdataBackup/ab_20231120_020002-failed/my-telegraf.xml
file: /mnt/cache/CommunityApplicationsAppdataBackup/ab_20231120_020002-failed/my-homebridge.xml
...

Running a find command shows plenty of files older than the specified age that should be caught by the mover:

root@Tower:~# find /mnt/cache/Movies -mtime +45
/mnt/cache/Movies/Movies/BlackBerry (2023)/BlackBerry (2023).eng -
/mnt/cache/Movies/Movies/BlackBerry (2023)/BlackBerry (2023).eng
/mnt/cache/Movies/Movies/BlackBerry (2023)/BlackBerry (2023) -
/mnt/cache/Movies/Movies/BlackBerry (2023)/BlackBerry (2023).eng.HI
/mnt/cache/Movies/Movies/BlackBerry (2023)/BlackBerry (2023) -
/mnt/cache/Movies/Movies/BlackBerry (2023)/BlackBerry (2023) -
/mnt/cache/Movies/Movies/BlackBerry (2023)/BlackBerry (2023) - 1080p.mp4
/mnt/cache/Movies/Movies/80 for Brady (2023)
/mnt/cache/Movies/Movies/80 for Brady (2023)/80 for Brady (2023)
/mnt/cache/Movies/Movies/Toy Story 4 (2019)/Toy Story 4 (2019).mkv
/mnt/cache/Movies/Movies/Toy Story 4 (2019)/Toy Story 4 (2019)
/mnt/cache/Movies/Movies/Toy Story 4 (2019)/Toy Story 4 (2019)
/mnt/cache/Movies/Movies/Toy Story 4 (2019)/Toy Story 4 (2019)
/mnt/cache/Movies/Movies/Toy Story 4 (2019)/Toy Story 4 (2019)

Diagnostics attached in case it helps. Thanks!
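The age test in the post can be reproduced in isolation. This is a minimal sketch (the temp directory and file names are made up, not from the server above): it creates one file backdated past a 45-day threshold and one fresh file, then runs the same `find -mtime +45` filter the post uses to list move candidates.

```shell
#!/bin/sh
# Create a scratch directory with one old and one new file.
tmp=$(mktemp -d)
touch -d "60 days ago" "$tmp/old.mkv"   # mtime 60 days in the past
touch "$tmp/new.mkv"                    # mtime now

# Same age filter as in the post: files modified more than 45 days ago.
find "$tmp" -type f -mtime +45
```

Only `old.mkv` is printed; `-mtime +45` compares whole 24-hour periods since the last modification, so a file must be strictly older than 45 full days to match.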
  2. Is there a way to reduce the time scale for each disk that is listed in the plugin? When I go to the disk listing to see what's causing a disk to spin up, I have to scroll through disk activity from up to 24h ago. I only want to see activity in the last hour or so. Can I specify this? Thanks!
  3. Fascinating... I wonder where these values came from. I had several shares with huge values for the minimum free space (400-700GB). That actually caused overflow from the cache to the array for several shares, and it answers some other questions I had. Thanks for the fast turnaround!
  4. Diagnostics attached. Min free space on cache is set to 0, with warning and critical thresholds set to 95% and 98%, respectively.
  5. Seeing a ton of these messages in the logs:

Nov 3 04:14:17 Tower shfs: share cache full
Nov 3 04:14:17 Tower shfs: share cache full
Nov 3 04:14:20 Tower shfs: share cache full
Nov 3 04:14:20 Tower shfs: share cache full
Nov 3 04:14:21 Tower shfs: share cache full
Nov 3 04:14:21 Tower shfs: share cache full
Nov 3 04:14:21 Tower shfs: share cache full
Nov 3 04:14:22 Tower shfs: share cache full
Nov 3 04:14:22 Tower shfs: share cache full
Nov 3 04:14:33 Tower shfs: share cache full

Cache is ZFS, sitting around 95% full with > 400GB free. Curious to know what process thinks the cache pool is full. Mover is set to run at 95% usage via Mover Tuning, so I expect it will run soon, but I'd like to know if something is failing and throwing these messages.
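A "share cache full" situation like the one above comes down to simple arithmetic: a share stops accepting new files on the pool once free space drops below that share's minimum-free-space setting. This sketch illustrates the comparison with made-up numbers (an 8TB pool at ~95% used against a hypothetical 500GB per-share minimum); it is not the actual shfs code.

```shell
#!/bin/sh
# Illustrative values only -- substitute your own pool and share settings.
pool_size_gb=8000    # 8 TB cache pool
pool_used_gb=7600    # ~95% used, i.e. 400 GB free
min_free_gb=500      # hypothetical per-share "Minimum free space"

free_gb=$((pool_size_gb - pool_used_gb))
if [ "$free_gb" -lt "$min_free_gb" ]; then
    echo "share cache full (free ${free_gb}GB < minimum ${min_free_gb}GB)"
else
    echo "cache accepts new files (free ${free_gb}GB)"
fi
```

With these numbers, 400GB free is below the 500GB minimum, so the share reports full even though hundreds of gigabytes remain, which matches the symptom in the post.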
  6. Aha, I run mover on this share a couple times per week. Would this "destroy" the dataset once it empties it out?
  7. I have some interesting lines in my log every day at noon:

Sep 28 12:00:01 Tower shfs: /usr/sbin/zfs unmount 'cache/CommunityApplicationsAppdataBackup'
Sep 28 12:00:01 Tower shfs: /usr/sbin/zfs destroy 'cache/CommunityApplicationsAppdataBackup'
Sep 28 12:00:01 Tower root: cannot destroy 'cache/CommunityApplicationsAppdataBackup': dataset is busy
Sep 28 12:00:01 Tower shfs: retval: 1 attempting 'destroy'
Sep 28 12:00:01 Tower shfs: /usr/sbin/zfs mount 'cache/CommunityApplicationsAppdataBackup'

This folder is the first dataset/folder alphabetically in my cache pool, which is a RAIDZ1 setup. This dataset is not part of an auto-snapshot or auto-backup (I recently set up automated snapshots for my appdata and domains datasets via the SpaceinvaderOne tutorial). Here's the view from ZFS Master: The destroy is failing (thankfully?) but I'd like to know what's trying to kill it.
  8. Just installed the latest version of the plugin. Thank you thank you thank you!
  9. I will kiss you if this makes the list! Purely consensual, of course...
  10. This post should be stickied.
  11. No errors that I can see in the backup log (regular and debug). I created a debug log to share with you, ID 8eed7224-7ebd-4120-872c-6e3afb6c0459 in case you can see something I cannot.
  12. Forgot to add: I had to change the VERSION tag as well, otherwise the Docker container will update to the latest version available when it runs. I am back to proper 4K hardware transcoding with tone mapping on Quick Sync.
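For context, here is a minimal sketch of the kind of VERSION pin described above, assuming a linuxserver.io-style Plex image (the image name and the version string are illustrative examples, not recommendations): with `VERSION=latest` the container fetches the newest Plex build every time it starts, while a specific version string holds it at that release.

```shell
# Hypothetical example of pinning the Plex container's VERSION tag.
# VERSION=latest -> self-updates on every container start.
# VERSION=<specific build string> -> stays on that release.
docker run -d \
  --name plex \
  -e VERSION=1.32.5.7349-8f4248874 \
  lscr.io/linuxserver/plex
```

In the Unraid Docker template this corresponds to editing the container's VERSION environment variable rather than running `docker run` by hand.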
  13. Where are the logs for each backup stored?
  14. Not by Unraid. I have achieved something similar a couple of different ways: I use the Plexcache Python script to move all On Deck media to the cache for my Plex users, which is really handy. I have a huge cache (8TB NVMe), and I set Mover Tuning to keep it full (95%) before running Mover, and then only move files that are 3 months old.