OK, I increased my ARC cap to 1/4 of RAM instead of the default 1/8. We'll see what happens.
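For anyone following along, the ARC ceiling is the zfs_arc_max module parameter. A minimal sketch of the arithmetic, assuming 48 GiB of RAM (the sysfs write is left commented out; it only persists until reboot):

```shell
# One quarter of 48 GiB of RAM (assumed size), in bytes
RAM_BYTES=$((48 * 1024 * 1024 * 1024))
ARC_MAX=$((RAM_BYTES / 4))
echo "$ARC_MAX"   # 12884901888 (12 GiB)

# To apply at runtime (uncomment on the server; resets on reboot unless persisted):
# echo "$ARC_MAX" > /sys/module/zfs/parameters/zfs_arc_max
```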
-
Background: I have 5x 2TB NVMe SSDs in a RaidZ1 pool (non-encrypted; some datasets have compression and some do not), used as my cache. I'm sitting between 80% and 90% utilization at any given time (8TB usable, ~1TB free). Moving large files within the pool is very slow compared to when I was running the same drives in a BTRFS Raid1 pool (5TB usable). Moving a 5GB file from folder A to folder B within the cache gives me consistently less than 500MB/s transfer speed (usually much less), while I was seeing around 1.5GB/s on BTRFS. Not sure what I can check here. Could this be due to the higher pool utilization? I have a 10th-gen i7 and 48GB of RAM, and the system isn't taxed. The PCIe bus is 3.0 and all the NVMe drives have DRAM. Thanks for your suggestions!
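ZFS write performance is known to degrade as a pool fills (block allocation slows down and fragmentation climbs), and my numbers put the pool well past the ~80% point where that typically starts. A quick sketch of the utilization math using the 8TB/1TB figures above, plus the (commented) zpool query I'd use to check fragmentation on the server:

```shell
# Utilization from the figures in the post: 8 TB usable, ~1 TB free
USED_PCT=$(awk 'BEGIN { printf "%.1f", (8 - 1) / 8 * 100 }')
echo "$USED_PCT"   # 87.5

# To see real allocation and fragmentation on the pool (run on the server):
# zpool list -o name,size,alloc,free,cap,frag cache
```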
-
Odd issue today... trying to move files off my ZFS cache and... nothing is happening. I've had Mover Tuning for years without issue. I have tried changing a LOT of these settings to see if anything changes, and nothing has worked so far. I haven't tried disabling Mover Tuning yet because I don't want to clear out the cache pool at this point. My Mover Tuning settings and share overview are attached. If I run the dedicated "move all from this share" option in a share's settings, Mover works. When I run Mover via Move Now (or via the schedule), the log shows this:

Nov 22 06:43:37 Tower emhttpd: shcmd (206793): /usr/local/sbin/mover |& logger -t move &
Nov 22 06:43:37 Tower root: Starting Mover
Nov 22 06:43:37 Tower root: Forcing turbo write on
Nov 22 06:43:37 Tower root: ionice -c 2 -n 0 nice -n 0 /usr/local/emhttp/plugins/ca.mover.tuning/age_mover start 45 5 0 "/mnt/user/system/Mover_Exclude.txt" "ini" '' '' no 100 '' '' 95
Nov 22 06:43:37 Tower kernel: mdcmd (46): set md_write_method 1

I don't get the "***starting mover***" line in the log. When I run Mover manually from a share's settings page, I get a much nicer-looking set of logs:

mvlogger: Log Level: 1
mvlogger: *********************************MOVER -SHARE- START*******************************
mvlogger: Wed Nov 22 05:52:42 EST 2023
mvlogger: Share supplied CommunityApplicationsAppdataBackup
mvlogger: Cache Pool Name: cache
mvlogger: Share Path: /mnt/cache/CommunityApplicationsAppdataBackup
mvlogger: Complete Mover Command: find "/mnt/cache/CommunityApplicationsAppdataBackup" -depth | /usr/local/sbin/move -d 1
file: /mnt/cache/CommunityApplicationsAppdataBackup/ab_20231120_020002-failed/my-telegraf.xml
file: /mnt/cache/CommunityApplicationsAppdataBackup/ab_20231120_020002-failed/my-homebridge.xml
...
Running a find command shows plenty of files older than the specified age that should be caught by the mover:

root@Tower:~# find /mnt/cache/Movies -mtime +45
/mnt/cache/Movies/Movies/BlackBerry (2023)/BlackBerry (2023).eng - 1080p.srt
/mnt/cache/Movies/Movies/BlackBerry (2023)/BlackBerry (2023).eng -.srt
/mnt/cache/Movies/Movies/BlackBerry (2023)/BlackBerry (2023) - 1080p.fr.srt
/mnt/cache/Movies/Movies/BlackBerry (2023)/BlackBerry (2023).eng.HI -.srt
/mnt/cache/Movies/Movies/BlackBerry (2023)/BlackBerry (2023) - 1080p.es.srt
/mnt/cache/Movies/Movies/BlackBerry (2023)/BlackBerry (2023) - 1080p.en.srt
/mnt/cache/Movies/Movies/BlackBerry (2023)/BlackBerry (2023) - 1080p.mp4
/mnt/cache/Movies/Movies/80 for Brady (2023)
/mnt/cache/Movies/Movies/80 for Brady (2023)/80 for Brady (2023) WEBDL-2160p.es.srt
/mnt/cache/Movies/Movies/Toy Story 4 (2019)/Toy Story 4 (2019).mkv
/mnt/cache/Movies/Movies/Toy Story 4 (2019)/Toy Story 4 (2019).en.srt
/mnt/cache/Movies/Movies/Toy Story 4 (2019)/Toy Story 4 (2019).en.forced.srt
/mnt/cache/Movies/Movies/Toy Story 4 (2019)/Toy Story 4 (2019).es.srt
/mnt/cache/Movies/Movies/Toy Story 4 (2019)/Toy Story 4 (2019).fr.srt

Diagnostics attached in case it helps. Thanks!
tower-diagnostics-20231122-0656.zip
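As a sanity check of the age test itself: find's -mtime +45 matches files whose modification time is more than 45 whole days old. A throwaway demo (the temp dir and filenames are made up):

```shell
# Create a scratch dir with one old file and one fresh file
d=$(mktemp -d)
touch -d "60 days ago" "$d/old.srt"   # GNU touch relative-date syntax
touch "$d/new.mkv"

# -mtime +45 selects files modified more than 45 days ago
find "$d" -type f -mtime +45          # lists only old.srt

rm -rf "$d"
```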
-
Fascinating...I wonder where these values came from. I had several shares with huge values for the minimum free space (400-700GB). It actually caused overflow from the cache to the array for several shares and answers some other questions I had. Thanks for the fast turnaround!
-
Diagnostics attached. Min free space on cache is set to 0, with warning and critical thresholds set to 95% and 98%, respectively. tower-diagnostics-20231103-0949.zip
-
Seeing a ton of these messages in the logs:

Nov 3 04:14:17 Tower shfs: share cache full
Nov 3 04:14:17 Tower shfs: share cache full
Nov 3 04:14:20 Tower shfs: share cache full
Nov 3 04:14:20 Tower shfs: share cache full
Nov 3 04:14:21 Tower shfs: share cache full
Nov 3 04:14:21 Tower shfs: share cache full
Nov 3 04:14:21 Tower shfs: share cache full
Nov 3 04:14:22 Tower shfs: share cache full
Nov 3 04:14:22 Tower shfs: share cache full
Nov 3 04:14:33 Tower shfs: share cache full

Cache is ZFS, sitting around 95% full with >400GB free. Curious to know which process thinks the cache pool is full. Mover is set to run at 95% usage via Mover Tuning, so I expect it will run soon, but I'd like to know if something is failing and throwing these messages.
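For what it's worth, the numbers line up with the Mover Tuning threshold: on an 8TB pool, the 95% mark leaves about 400GB, which matches the free space reported when the messages started. A quick sketch of that arithmetic (the 8TB pool size is assumed from my setup):

```shell
# 8 TB usable pool (decimal TB), 95% used => 5% free
POOL_BYTES=$((8 * 1000 * 1000 * 1000 * 1000))
FREE_BYTES=$((POOL_BYTES * 5 / 100))
echo "$FREE_BYTES"   # 400000000000, i.e. ~400 GB
```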
-
Aha, I run mover on this share a couple times per week. Would this "destroy" the dataset once it empties it out?
-
I have some interesting lines in my log every day at noon:

Sep 28 12:00:01 Tower shfs: /usr/sbin/zfs unmount 'cache/CommunityApplicationsAppdataBackup'
Sep 28 12:00:01 Tower shfs: /usr/sbin/zfs destroy 'cache/CommunityApplicationsAppdataBackup'
Sep 28 12:00:01 Tower root: cannot destroy 'cache/CommunityApplicationsAppdataBackup': dataset is busy
Sep 28 12:00:01 Tower shfs: retval: 1 attempting 'destroy'
Sep 28 12:00:01 Tower shfs: /usr/sbin/zfs mount 'cache/CommunityApplicationsAppdataBackup'

This folder is the first dataset/folder alphabetically in my cache pool, which is a RaidZ1 setup. This dataset is not part of an auto-snapshot or auto-backup (I recently set up automated snapshots for my appdata and domains datasets via the SpaceinvaderOne tutorial). Here's the view from ZFS Master: The destroy is failing (thankfully?), but I'd like to know what's trying to kill it.
-
Just installed the latest version of the plugin. Thank you thank you thank you!
-
I will kiss you if this makes the list! Purely consensual, of course...
-
This post should be stickied.
-
No errors that I can see in the backup log (regular and debug). I created a debug log to share with you, ID 8eed7224-7ebd-4120-872c-6e3afb6c0459 in case you can see something I cannot.