Shizlminizel Posted September 28, 2023
I can reproduce this on 3 of my 4 Unraid servers that are on the latest branch. The two servers with 4 HDDs and a fully ZFS array choose only the first disk; my main Unraid with 7 HDDs in the array chooses only disk 4 (the last disk before the last remaining XFS drive). I have already played around with the split level as mentioned in all the other threads on the forum, without success. Since those threads do not refer to the current branch or to ZFS, I believe that is not my issue here. The filesystem was set up following the TRaSH guide and it was working with XFS before. Has anyone else seen this issue?
JorgeB Posted September 28, 2023
Make sure you are using 6.12.4, since there were some related issues before. If you are, create a new test share, copy a couple of files to it, and then see if the mover acts as expected for that share.
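(A rough sketch of that test from the command line, for anyone following along; the file names and sizes are only placeholders, the mover path is the one that shows up in the logs below, and the share's storage settings need to be such that the mover moves it from the cache to the array:)
# create a new share directly on the cache pool and put a couple of test files on it
mkdir -p /mnt/cache/testshare
dd if=/dev/urandom of=/mnt/cache/testshare/test1.bin bs=1M count=1024
dd if=/dev/urandom of=/mnt/cache/testshare/test2.bin bs=1M count=1024

# enable mover logging under Settings -> Scheduler, then invoke the mover manually
/usr/local/sbin/mover |& logger -t move

# afterwards, check which array disk the files ended up on
ls -l /mnt/disk*/testshare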
Shizlminizel (Author) Posted September 28, 2023
I am on the latest branch, 6.12.4. I invoked the mover to clear the cache, created a test share, and enabled mover logging, then copied ~20 GB to the folder over SMB and invoked the mover again. Before and after (images cropped for visibility): the data again went to disk 4 instead of the most-free disk 3.
Mover logs:
Sep 28 11:16:02 unraid shfs: /usr/sbin/zfs create 'cache/testshare'
Sep 28 11:24:12 unraid emhttpd: shcmd (1409): /usr/local/sbin/mover |& logger -t move &
Sep 28 11:24:12 unraid move: mover: started
Sep 28 11:24:12 unraid shfs: /usr/sbin/zfs unmount 'cache/Backup-Repository'
Sep 28 11:24:12 unraid shfs: /usr/sbin/zfs destroy 'cache/Backup-Repository'
Sep 28 11:24:12 unraid root: cannot destroy 'cache/Backup-Repository': dataset is busy
Sep 28 11:24:12 unraid shfs: retval: 1 attempting 'destroy'
Sep 28 11:24:12 unraid shfs: /usr/sbin/zfs mount 'cache/Backup-Repository'
Sep 28 11:24:12 unraid shfs: /usr/sbin/zfs unmount 'cache/bootcds'
Sep 28 11:24:12 unraid shfs: /usr/sbin/zfs destroy 'cache/bootcds'
Sep 28 11:24:12 unraid root: cannot destroy 'cache/bootcds': dataset is busy
Sep 28 11:24:12 unraid shfs: retval: 1 attempting 'destroy'
Sep 28 11:24:12 unraid shfs: /usr/sbin/zfs mount 'cache/bootcds'
Sep 28 11:24:12 unraid move: error: move, 380: No such file or directory (2): lstat: /mnt/cache/data/media/down/mega/B/folder
Sep 28 11:24:12 unraid move: skip: /mnt/cache/data/down/folder/nextfolder/file1.mp4
Sep 28 11:24:12 unraid move: skip: /mnt/cache/data/down/folder/nextfolder/file2.mp4
Sep 28 11:24:12 unraid move: skip: /mnt/cache/data/down/folder/nextfolder/file3.mp4
Sep 28 11:24:12 unraid move: skip: /mnt/cache/data/down/folder/nextfolder/file3.mp4
...
Sep 28 11:24:13 unraid shfs: /usr/sbin/zfs unmount 'cache/downloads'
Sep 28 11:24:13 unraid shfs: /usr/sbin/zfs destroy 'cache/downloads'
Sep 28 11:24:13 unraid root: cannot destroy 'cache/downloads': dataset is busy
Sep 28 11:24:13 unraid shfs: retval: 1 attempting 'destroy'
Sep 28 11:24:13 unraid shfs: /usr/sbin/zfs mount 'cache/downloads'
Sep 28 11:24:14 unraid move: file: /mnt/cache/testshare/file.iso
Sep 28 11:24:14 unraid move: file: /mnt/cache/testshare/file2.iso
Sep 28 11:24:14 unraid move: file: /mnt/cache/testshare/file3.mp4
...
Sep 28 11:26:39 unraid shfs: /usr/sbin/zfs unmount 'cache/testshare'
Sep 28 11:26:39 unraid shfs: /usr/sbin/zfs destroy 'cache/testshare'
Sep 28 11:26:39 unraid move: mover: finished
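(A side note on the "dataset is busy" lines above: they usually mean something still has a file open, or a working directory, inside that dataset's mountpoint, so it cannot be unmounted and destroyed. One way to check with standard tools, using one of the datasets from the log as an example, might look like this:)
# show processes holding the mountpoint busy
fuser -vm /mnt/cache/Backup-Repository

# or list the open files under it
lsof +D /mnt/cache/Backup-Repository 2>/dev/null

# confirm whether the dataset is still mounted after the failed destroy
zfs get mounted cache/Backup-Repository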
JorgeB Posted September 28, 2023
Thanks, let me see if I can reproduce.
itimpi Posted September 28, 2023
Probably not relevant, but it is always recommended not to have both included disks and excluded disks set for a share - only use one of them, whichever is more convenient.
Shizlminizel (Author) Posted September 28, 2023
They are not both set on the other shares; only the include list is set on this system.
JorgeB Posted September 28, 2023
5 hours ago, Shizlminizel said: "The two servers with 4 HDDs and a fully ZFS array choose only the first disk"
Hmm, I cannot reproduce this. I started with 4 empty array devices, set the share to most-free allocation, copied a few ISOs to the cache, and ran the mover; the files were distributed across all devices (screenshot).
Shizlminizel Posted September 28, 2023 Author Share Posted September 28, 2023 do you have refering dataset on the disks? Maybe that is my problem as I filled those disks up before creating the root datasets with unbalance: root@yoda:/mnt/cache# zfs list NAME USED AVAIL REFER MOUNTPOINT cache 300G 1.46T 144K /mnt/cache cache/Backup-Repository 96K 1.46T 96K /mnt/cache/Backup-Repository cache/appdata 23.8G 1.46T 23.8G /mnt/cache/appdata cache/backup-pve 96K 1.46T 96K /mnt/cache/backup-pve cache/bootcds 96K 1.46T 96K /mnt/cache/bootcds cache/data 3.77G 1.46T 3.77G /mnt/cache/data cache/domains 96K 1.46T 96K /mnt/cache/domains cache/downloads 96K 1.46T 96K /mnt/cache/downloads cache/secure 223G 1.46T 223G /mnt/cache/secure cache/system 32.5G 1.46T 32.5G /mnt/cache/system cache/transcode 16.9G 1.46T 16.9G /mnt/cache/transcode disk1 4.87T 2.27T 4.87T /mnt/disk1 disk2 5.43T 1.71T 5.43T /mnt/disk2 disk3 4.44T 2.70T 4.44T /mnt/disk3 disk4 4.62T 2.52T 4.61T /mnt/disk4 disk4/testshare 19.2G 2.52T 19.2G /mnt/disk4/testshare disk6 7.68T 1.29T 96K /mnt/disk6 disk6/data 7.68T 1.29T 7.68T /mnt/disk6/data disk7 7.60T 1.36T 96K /mnt/disk7 disk7/data 7.60T 1.36T 7.60T /mnt/disk7/data Quote Link to comment
JorgeB Posted September 28, 2023
39 minutes ago, Shizlminizel said: "Maybe that is my problem, as I filled those disks up with unbalance before the root datasets were created"
Could be, let me see if I can test.
Shizlminizel (Author) Posted September 28, 2023
Well, it seems that way when I look at the other 2 systems.
Working:
root@Q-SHIZL:~# zfs list
NAME                       USED  AVAIL  REFER  MOUNTPOINT
cache                      286G  1.48T   120K  /mnt/cache
cache/Backup-Repository    283G  1.48T   283G  /mnt/cache/Backup-Repository
cache/appdata             52.3M  1.48T  52.3M  /mnt/cache/appdata
cache/domains               96K  1.48T    96K  /mnt/cache/domains
cache/isos                  96K  1.48T    96K  /mnt/cache/isos
cache/system              2.78G  1.48T  2.78G  /mnt/cache/system
disk1                      640G  2.89T   112K  /mnt/disk1
disk1/Backup-Repository    177G  2.89T   177G  /mnt/disk1/Backup-Repository
disk1/backup-pve           170G  2.89T   170G  /mnt/disk1/backup-pve
disk1/data                 294G  2.89T   294G  /mnt/disk1/data
disk1/documents           49.7M  2.89T  49.7M  /mnt/disk1/documents
disk2                      158G  3.36T   104K  /mnt/disk2
disk2/Backup-Repository     96K  3.36T    96K  /mnt/disk2/Backup-Repository
disk2/backup-pve          11.9G  3.36T  11.9G  /mnt/disk2/backup-pve
disk2/data                 146G  3.36T   146G  /mnt/disk2/data
disk3                      426G  3.10T   112K  /mnt/disk3
disk3/Backup-Repository    271G  3.10T   271G  /mnt/disk3/Backup-Repository
disk3/backup-pve            96K  3.10T    96K  /mnt/disk3/backup-pve
disk3/data                 155G  3.10T   155G  /mnt/disk3/data
disk3/documents            152K  3.10T    96K  /mnt/disk3/documents
And not working:
root@Q-ARTM:~# zfs list
NAME                     USED  AVAIL  REFER  MOUNTPOINT
cache                    849M   898G   104K  /mnt/cache
cache/appdata           3.81M   898G  3.81M  /mnt/cache/appdata
cache/backup-tailscale    96K   898G    96K  /mnt/cache/backup-tailscale
cache/system             814M   898G   814M  /mnt/cache/system
disk1                    522G  4.82T   521G  /mnt/disk1
disk1/backup-appdata     109M  4.82T   109M  /mnt/disk1/backup-appdata
disk2                   3.27M  1.76T    96K  /mnt/disk2
disk3                   3.25M  1.76T    96K  /mnt/disk3
So the mover somehow seems to struggle if folders are created manually on a disk before the mover creates the datasets...
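(For anyone trying to check the same thing on their own server: a quick way to see whether a share exists on a disk as a ZFS dataset or only as a plain folder is to compare the zfs view with the directory listing. The conversion at the end is only a rough sketch of one possible manual fix, not an official procedure; it needs enough free space for a second copy of the data and the share should be idle while it runs:)
# datasets ZFS knows about on disk4
zfs list -r -o name,mountpoint disk4

# top-level folders that exist on the disk regardless of type
ls -ld /mnt/disk4/*/

# rough manual conversion of a plain folder into a dataset (share idle, enough free space)
mv /mnt/disk4/data /mnt/disk4/data_tmp
zfs create disk4/data
rsync -a /mnt/disk4/data_tmp/ /mnt/disk4/data/
rm -r /mnt/disk4/data_tmp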
JorgeB Posted September 28, 2023
It doesn't look like that is the problem. I started with this, with all the current data inside the diskX ZFS datasets (screenshot). I copied some ISOs to a test share and ran the mover, and it started filling up disk3 (screenshot). Once disk3 had more used space than the other disks, it started writing to them as well (screenshot). So it all looks correct to me.
Shizlminizel (Author) Posted September 29, 2023
Well, I have no clue why it is not working on my system. I am currently trying to free up one disk to see if the mover will pick it once it is empty, but it takes some time to move 6 TB with unbalance.
JorgeB Posted September 29, 2023
54 minutes ago, Shizlminizel said: "Well, I have no clue why it is not working on my system."
Same here, and if you find something please let us know, but from the tests I did, and from one server I have with an all-ZFS array using most-free allocation, everything appears to be working correctly with the latest release.
Mainfrezzer Posted September 29, 2023
I'm pretty sure there is some bug in the ZFS and mover implementation. I wanted to test something earlier, and the mover was able to move array -> ZFS cache but absolutely unable to move ZFS cache -> array. No apparent reason for it; everything worked fine apart from the mover simply refusing to move to that single disk in the array.
JorgeB Posted September 29, 2023
6 minutes ago, Mainfrezzer said: "No apparent reason for it; everything worked fine apart from the mover simply refusing to move to that single disk in the array."
We would need diagnostics with mover logging enabled to try and see what happened.
Mainfrezzer Posted September 29, 2023
Just now, JorgeB said: "We would need diagnostics with mover logging enabled to try and see what happened."
There wasn't anything "out of the ordinary" logged; it just behaved as if there were no files to move. I'm going to check on it later to see whether it does the same again or has got its grip together.
Shizlminizel (Author) Posted September 30, 2023 (Solution)
So it seems to work now, and it was not the missing datasets, although I will fix those anyway. I changed the share's included disks from this (first screenshot) to this (second screenshot), and now the mover picked the empty disk. So it seems like the mover does not like it when a disk is missing from the include list.
JorgeB Posted September 30, 2023
1 hour ago, Shizlminizel said: "So it seems like the mover does not like it when a disk is missing from the include list."
Hmm, that should not be a problem, but I'll test later to confirm.
JorgeB Posted September 30, 2023
10 hours ago, Shizlminizel said: "So it seems like the mover does not like it when a disk is missing from the include list."
Not quite that, but thanks to you I found the problem: the bug occurs if 6 or more disks are included in a share. It's not the mover or the filesystem, it's a bug in the share allocation. For example, on an array with 6 disks, configure a share like this (screenshot), then transfer some data to that share; assuming all disks initially had the same free space, the data will only be written to disks 4, 5 and 6 (screenshot). So it's a strange one. I didn't test the other allocation methods, but they are possibly also affected if 6 or more disks are included for the share.
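(For context, a small illustration of what "most free" allocation is expected to do when it works: each new top-level object should land on the included disk with the largest free space, which is why a correct run spreads writes across the disks as they fill. This is just an illustration with standard tools, not the actual shfs allocation code:)
# free space of the array disks, most free first
df -B1 --output=target,avail /mnt/disk[0-9]* 2>/dev/null | tail -n +2 | sort -k2 -nr

# the first line is the disk a correct "most free" allocation should pick next
df -B1 --output=target,avail /mnt/disk[0-9]* 2>/dev/null | tail -n +2 | sort -k2 -nr | head -n1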
established-structure1327 Posted October 4, 2023
I have the same problem; some hard drives are running out of space.
JorgeB Posted October 4, 2023
2 minutes ago, established-structure1327 said: "I have the same problem"
If it's really the same problem, just remove the included disks; you can use excluded disks instead.
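(For anyone applying that workaround: the change is normally made on the share's settings page in the GUI, but for illustration, the share settings end up in a small config file on the flash drive. The file path, key names and disk names below are an assumption based on the usual Unraid share .cfg layout, so check your own file rather than copying this verbatim:)
# /boot/config/shares/data.cfg (hypothetical excerpt)
# before: an explicit include list with 6 or more disks, which triggers the bug
shareInclude="disk1,disk2,disk3,disk4,disk6,disk7"
shareExclude=""

# after: leave include empty and exclude only the disks the share must not use
shareInclude=""
shareExclude="disk5"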
Shizlminizel (Author) Posted October 12, 2023
I'll go ahead and mark this as solved, as the workaround of using exclude instead of a list of included disks resolved the issue.
Shizlminizel (Author) Posted October 12, 2023
And of course: thanks @JorgeB for your support.