Everything posted by Andiroo2

  1. OK, I increased my ZFS ARC limit to 1/4 of RAM instead of the default 1/8. Will see what happens.
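     A minimal sketch of that change, assuming the stock OpenZFS module parameter interface that Unraid's ZFS support exposes (1/4 of 48GB is 12 GiB; the echo takes effect immediately but does not survive a reboot):
        # raise the ARC ceiling to 12 GiB (1/4 of 48GB RAM)
        echo $(( 12 * 1024 * 1024 * 1024 )) > /sys/module/zfs/parameters/zfs_arc_max
        # confirm the new ceiling (c_max) and the current ARC size
        grep -E '^(c_max|size) ' /proc/spl/kstat/zfs/arcstats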
  2. Background: I have 5x 2TB NVMe SSDs in a RaidZ1 pool (non-encrypted; some datasets have compression and some do not), used as my cache. I'm sitting between 80% and 90% utilization at any given time (8TB usable, ~1TB free). Moving large files within the pool is very slow compared to when I was running the same drives in a BTRFS Raid1 pool (5TB usable). Moving a 5GB file from folder A to folder B within the cache gives me consistently less than 500MB/s transfer speed (usually much less), while I was seeing around 1.5GB/s on BTRFS. Not sure what I can check here. Could this be due to the higher pool utilization? I have a 10th gen i7 and 48GB of RAM, and the system isn't taxed. The PCIe bus is 3.0 and all the NVMe drives have DRAM. Thanks for your suggestions!
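     A few commands that could help narrow this down, assuming the pool is named cache as elsewhere in these posts (RaidZ write speed tends to drop off as a pool fills and fragments, so the 80-90% utilization is a plausible suspect):
        # pool capacity and fragmentation; FRAG and CAP climbing together is the usual sign of a nearly-full, churned pool
        zpool list -o name,size,alloc,free,frag,cap,health cache
        # per-device throughput while repeating the 5GB copy, to see whether a single drive is the bottleneck
        zpool iostat -v cache 5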
  3. Odd issue today...trying to move files off my ZFS cache and...nothing is happening. I've had Mover Tuning for years without issue. I have tried changing a LOT of these settings to see if anything changes and nothing has worked so far. I haven't tried disabling Mover Tuning yet because I don't want to clear the cache pool out at this point. My Mover Tuning settings and my share overview are attached as well. If I run the dedicated "move all from this share" option in a share's settings, Mover works. When I run mover via Move Now (or via the schedule), the log shows this:
        Nov 22 06:43:37 Tower emhttpd: shcmd (206793): /usr/local/sbin/mover |& logger -t move &
        Nov 22 06:43:37 Tower root: Starting Mover
        Nov 22 06:43:37 Tower root: Forcing turbo write on
        Nov 22 06:43:37 Tower root: ionice -c 2 -n 0 nice -n 0 /usr/local/emhttp/plugins/ca.mover.tuning/age_mover start 45 5 0 "/mnt/user/system/Mover_Exclude.txt" "ini" '' '' no 100 '' '' 95
        Nov 22 06:43:37 Tower kernel: mdcmd (46): set md_write_method 1
     I don't get the ***starting mover*** line in the log. When I run mover manually from a share's settings page, I get a much nicer looking set of logs:
        mvlogger: Log Level: 1
        mvlogger: *********************************MOVER -SHARE- START*******************************
        mvlogger: Wed Nov 22 05:52:42 EST 2023
        mvlogger: Share supplied CommunityApplicationsAppdataBackup
        mvlogger: Cache Pool Name: cache
        mvlogger: Share Path: /mnt/cache/CommunityApplicationsAppdataBackup
        mvlogger: Complete Mover Command: find "/mnt/cache/CommunityApplicationsAppdataBackup" -depth | /usr/local/sbin/move -d 1
        file: /mnt/cache/CommunityApplicationsAppdataBackup/ab_20231120_020002-failed/my-telegraf.xml
        file: /mnt/cache/CommunityApplicationsAppdataBackup/ab_20231120_020002-failed/my-homebridge.xml
        ...
     Running a find command shows plenty of files older than the specified age that should be caught by the mover:
        root@Tower:~# find /mnt/cache/Movies -mtime +45
        /mnt/cache/Movies/Movies/BlackBerry (2023)/BlackBerry (2023).eng - 1080p.srt
        /mnt/cache/Movies/Movies/BlackBerry (2023)/BlackBerry (2023).eng -.srt
        /mnt/cache/Movies/Movies/BlackBerry (2023)/BlackBerry (2023) - 1080p.fr.srt
        /mnt/cache/Movies/Movies/BlackBerry (2023)/BlackBerry (2023).eng.HI -.srt
        /mnt/cache/Movies/Movies/BlackBerry (2023)/BlackBerry (2023) - 1080p.es.srt
        /mnt/cache/Movies/Movies/BlackBerry (2023)/BlackBerry (2023) - 1080p.en.srt
        /mnt/cache/Movies/Movies/BlackBerry (2023)/BlackBerry (2023) - 1080p.mp4
        /mnt/cache/Movies/Movies/80 for Brady (2023)
        /mnt/cache/Movies/Movies/80 for Brady (2023)/80 for Brady (2023) WEBDL-2160p.es.srt
        /mnt/cache/Movies/Movies/Toy Story 4 (2019)/Toy Story 4 (2019).mkv
        /mnt/cache/Movies/Movies/Toy Story 4 (2019)/Toy Story 4 (2019).en.srt
        /mnt/cache/Movies/Movies/Toy Story 4 (2019)/Toy Story 4 (2019).en.forced.srt
        /mnt/cache/Movies/Movies/Toy Story 4 (2019)/Toy Story 4 (2019).es.srt
        /mnt/cache/Movies/Movies/Toy Story 4 (2019)/Toy Story 4 (2019).fr.srt
     Diagnostics attached in case it helps. Thanks! tower-diagnostics-20231122-0656.zip
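     One simple way to watch what the Mover Tuning script is (or isn't) doing while pressing Move Now, using only the standard Unraid syslog (nothing plugin-specific is assumed):
        # follow the syslog and keep only mover-related lines while triggering Move Now from the GUI
        tail -f /var/log/syslog | grep -iE 'mover|mvlogger'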
  4. Is there a way to reduce the time scale for each disk that is listed in the plugin? When I go to the disk listing to see what's causing a disk to spin up, I have to scroll through disk activity from up to 24h ago. I only want to see activity in the last hour or so. Can I specify this? Thanks!
  5. Fascinating...I wonder where these values came from. I had several shares with huge values for the minimum free space (400-700GB). It actually caused overflow from the cache to the array for several shares, which answers some other questions I had. Thanks for the fast turnaround!
  6. Diagnostics attached. Min free space on cache is set to 0, with warning and critical thresholds set to 95% and 98%, respectively. tower-diagnostics-20231103-0949.zip
  7. Seeing a ton of these messages in the logs:
        Nov 3 04:14:17 Tower shfs: share cache full
        Nov 3 04:14:17 Tower shfs: share cache full
        Nov 3 04:14:20 Tower shfs: share cache full
        Nov 3 04:14:20 Tower shfs: share cache full
        Nov 3 04:14:21 Tower shfs: share cache full
        Nov 3 04:14:21 Tower shfs: share cache full
        Nov 3 04:14:21 Tower shfs: share cache full
        Nov 3 04:14:22 Tower shfs: share cache full
        Nov 3 04:14:22 Tower shfs: share cache full
        Nov 3 04:14:33 Tower shfs: share cache full
     Cache is ZFS, sitting around 95% full with > 400GB free. Curious to know what process thinks the cache pool is full. Mover is set to run at 95% usage via Mover Tuning, so I expect it will run soon, but would like to know if something is failing and throwing these messages.
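     A couple of commands that could show where the "full" verdict comes from, assuming the pool is named cache. The shfs message is usually tied to a share's minimum-free-space setting rather than the pool being literally out of space, which would line up with the oversized per-share values mentioned elsewhere in these posts:
        # space as ZFS reports it, per dataset
        zfs list -r -o name,used,avail cache
        # space as the mounted filesystems report it
        df -h /mnt/cache /mnt/user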
  8. Aha, I run mover on this share a couple times per week. Would this "destroy" the dataset once it empties it out?
  9. I have some interesting lines in my log every day at noon:
        Sep 28 12:00:01 Tower shfs: /usr/sbin/zfs unmount 'cache/CommunityApplicationsAppdataBackup'
        Sep 28 12:00:01 Tower shfs: /usr/sbin/zfs destroy 'cache/CommunityApplicationsAppdataBackup'
        Sep 28 12:00:01 Tower root: cannot destroy 'cache/CommunityApplicationsAppdataBackup': dataset is busy
        Sep 28 12:00:01 Tower shfs: retval: 1 attempting 'destroy'
        Sep 28 12:00:01 Tower shfs: /usr/sbin/zfs mount 'cache/CommunityApplicationsAppdataBackup'
     This folder is the first dataset/folder alphabetically in my cache pool, which is a RaidZ1 setup. This dataset is not part of an auto-snapshot or auto-backup (I recently set up automated snapshots for my appdata and domains datasets via the SpaceinvaderOne tutorial). The view from ZFS Master is attached. The destroy is failing (thankfully?) but I'd like to know what's trying to kill it.
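     One way to catch whatever is holding the dataset open at that moment, assuming fuser and lsof are available in the Unraid shell (they normally are):
        # processes with open files on the filesystem mounted at the dataset's path
        fuser -vm /mnt/cache/CommunityApplicationsAppdataBackup
        # or list every open file under that path
        lsof +D /mnt/cache/CommunityApplicationsAppdataBackup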
  10. Just installed the latest version of the plugin. Thank you thank you thank you!
  11. I will kiss you if this makes the list! Purely consensual, of course...
  12. No errors that I can see in the backup log (regular and debug). I created a debug log to share with you, ID 8eed7224-7ebd-4120-872c-6e3afb6c0459 in case you can see something I cannot.
  13. Forgot to add: I had to change the VERSION tag as well, otherwise the Docker container will update itself to the latest available version when it runs. I am back to proper 4K hardware transcoding with tone mapping on Quick Sync.
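      For reference, a sketch of what pinning VERSION looks like. This assumes a Plex image that reads a VERSION variable (docker | latest | public | a specific build string) to decide whether to self-update, as the linuxserver.io and official Plex images do; on Unraid it is set in the container template rather than on the command line, and the docker run form below is only an illustration:
        # VERSION="docker" means "never self-update; only change when the image itself is updated";
        # a specific Plex build string pins that exact release instead.
        # (Network mode and other template settings omitted for brevity.)
        docker run -d --name plex \
          --device /dev/dri:/dev/dri \
          -e VERSION="docker" \
          -v /mnt/user/appdata/plex:/config \
          lscr.io/linuxserver/plex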
  14. Where are the logs for each backup stored?
  15. Not by Unraid. I have achieved something similar a couple of different ways: I use the Plexcache Python script to move all On Deck media to the cache for my Plex users. This is really handy. I have a huge cache (8TB NVMe), and I set Mover Tuning so Mover only runs once the cache hits 95% usage, and then only moves files that are more than 3 months old.
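      A rough illustration of that Mover Tuning behavior; this is not the plugin's actual code, and the pool name (cache) and share path (Movies) are just examples:
        # only start moving once the cache pool is at least 95% used
        USED=$(df --output=pcent /mnt/cache | tail -1 | tr -dc '0-9')
        if [ "$USED" -ge 95 ]; then
            # list files untouched for more than ~3 months; swap -print for a real move once happy with the selection
            find /mnt/cache/Movies -type f -mtime +90 -print
        fi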
  16. I rolled back to a stable version and it works again…see attached.
  17. Trying to restore for the first time today and I'm having issues. When I try to select the backup to restore, I get an empty dropdown. What am I missing here? I have plenty of backups to choose from. Thanks!
  18. Plex docker is no longer hardware transcoding on Intel Quick Sync as of v1.32.6.7468. No changes otherwise...docker updated after backing up last night and now I see crazy CPU usage due to 4K transcodes using CPU instead of GPU.
  19. Docker failed to start when I tried this, so I rebooted the server. When it came back up, I was able to delete the container and re-install it from the “Previous Apps” section of Community Applications. Everything is working now.
  20. Same thing happening to me. I had an orphaned image and I deleted it, but still can’t start or delete the main Docker container for the same app.