JorgeB Posted July 13, 2023
48 minutes ago, MAM59 said: the ~30MB/s of the write. That is really unacceptable.
The bug needs to be fixed before array writes reach full speed; pool write speeds are unaffected.
MAM59 (Author) Posted July 14, 2023
Getting worse 😞 Even writing to the cache is blocked 😞 The peak in the curve is 1GB/s, the usual speed at which I can fill up the cache. But as you can see, soon after it drops to a ridiculous "speed" and stays there.
Can it have something to do with the fill level of the drive? This one is 97% full. Maybe I should learn (and try) to balance it somehow? (The next drive with this share is currently only 45% full.)
JorgeB Posted July 14, 2023
46 minutes ago, MAM59 said: Can it have something to do with the fill level of the drive? This one is 97% full.
Most likely.
MAM59 (Author) Posted July 14, 2023
3 minutes ago, JorgeB said: Most likely.
So I guess I need "unbalance" to move over some gigs to free up space? Would 75% usage sound good? (And how do I do that? I've never used unbalance before.) I've even got a totally free disk currently (leftover from the copy orgy).
MAM59 (Author) Posted July 15, 2023
Enough! Today was "backup day" for all clients, with lots of writes. Unraid became a snail (almost a dead one). A typical backup took 4-8 times longer than usual (same Unraid box, which previously ran XFS). Luckily I still had a spare drive, already formatted with ZFS. I killed it again, reformatted it with XFS, and am now on the "copy back" tour. And see: instead of 20MB/s it gives me (it will slow down later on, but ~170MB/s will be the lower limit, I think)
itimpi Posted July 15, 2023
There have been plenty of reports that writing to ZFS-formatted drives in the main array can be very slow. I have a suspicion that this may be inherent to ZFS and the way it chooses which sector to write to next, causing excessive head movement on the parity drive, but I am not sure there has been any structured testing to prove this. It would be nice to know for certain, as that would mean we would then advise against using ZFS in the main Unraid array, and instead recommend either XFS as the most efficient option or BTRFS for those who want bit-rot detection. ZFS would still have a place in pools where you want maximum performance.
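A structured test along these lines could be as simple as timing a synced sequential write against each disk's mount point and comparing the numbers across filesystems. A rough sketch (pass the mount you want to test as the argument, e.g. a hypothetical /mnt/disk1; with no argument it just demos the output against a temp dir):

```shell
#!/bin/sh
# Minimal sequential-write probe for comparing filesystems on array disks.
TARGET="${1:-$(mktemp -d)}"

# conv=fsync forces the data to disk before dd reports, so the MB/s
# figure reflects the filesystem/disk path, not the RAM write cache.
dd if=/dev/zero of="$TARGET/speedtest.bin" bs=1M count=256 conv=fsync 2>&1 | tail -n 1
rm -f "$TARGET/speedtest.bin"
```

Running the same probe against an XFS disk and a ZFS disk in the same array (ideally with turbo write in the same state for both runs) would isolate the filesystem as the variable.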
MAM59 (Author) Posted July 15, 2023
1 minute ago, itimpi said: ZFS would still have a place in pools where you want maximum performance.
I've tested ZFS as "cache" before; also a total flop. Can't remember the details, but it was also frustrating, so I went back to XFS.
4 minutes ago, itimpi said: next causing excessive head movement on the parity drive
Unlikely, I think; it would need the same movements for the data drive too, and that is within the scope of ZFS. BUT maybe the combination of several independent ZFS installations within a single array confuses the parity? Each of them may be planned on its own, but they are surely not coordinated (because ZFS has no idea of the parity drive) and therefore produce chaotic moves. I will check this assumption in a few days, after I have freed all but one ZFS disk from the array. If I am right, the last one will work "normally".
vayidm Posted July 15, 2023
29 minutes ago, MAM59 said: instead of 20MB/s it gives me (It will slowdown later on, but ~170 will be the lower limit I think)
The same thing happened in my testing. I've turned most of my disks back to XFS. Lesson learned: don't switch without testing thoroughly first. My 8TB copy finished in 12 hours and the data rate slowed to 150MB/s at the end, but like yours it stayed at 200+ MB/s for a long while.
When I do the ZFS copy I notice it writes at full speed for a short while and then tanks, which seems to indicate some buffer has filled up, or something else happens that slows it down significantly.
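The "buffer fills up" pattern is consistent with how ZFS stages asynchronous writes in RAM: incoming data accumulates as dirty data up to roughly the zfs_dirty_data_max module parameter (by default about 10% of physical RAM, capped at 4 GiB), after which ZFS throttles writers down to what the disks can actually sustain. A rough sketch estimating that default ceiling on a Linux box (the 10% and 4 GiB figures are stock OpenZFS defaults and may differ on a tuned system):

```shell
#!/bin/sh
# Estimate the default ZFS dirty-data ceiling (~10% of RAM, max 4 GiB).
# Writes run at full speed until this much data is buffered, then ZFS
# throttles them to disk speed -- matching the fast-then-tank pattern.
mem_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
dirty_mb=$(( mem_kb / 10 / 1024 ))          # 10% of RAM, in MiB
[ "$dirty_mb" -gt 4096 ] && dirty_mb=4096   # capped at 4 GiB
echo "estimated zfs_dirty_data_max: ${dirty_mb} MiB"
```

On a live system the actual value can be read from /sys/module/zfs/parameters/zfs_dirty_data_max; if the initial fast burst roughly matches this size, the throttle is the likely explanation.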
MAM59 (Author) Posted July 15, 2023
2 hours ago, vayidm said: but like yours it stays at 200+ MB/s for a long while
Out of luck 😞 It has already fallen to 83MB/s. 🤔
MAM59 (Author) Posted July 19, 2023
Additional info: even with only one ZFS drive in the array, copying is still at crawl speed. So there is no interference between drives; there is just a serious bug within ZFS/Unraid that causes the slowdowns. It will take me another 3 days until I finally get rid of ZFS completely...
JorgeB Posted July 19, 2023
3 hours ago, MAM59 said: even with only one zfs drive in the array, copying is still at crawl speed.
Every array drive is a single drive, so yes, that is expected, and you see the same issue when copying from a pool to a ZFS array drive.
MAM59 (Author) Posted July 19, 2023
18 minutes ago, JorgeB said: that is expected, and
I'm usually hopelessly optimistic :-))) I am really wondering why this fatal flaw was not discovered during the beta phase. It is such an essential thing...
BTW: I'm not quite sure how, but I am missing a lot of files that I added last week (new on ZFS drives) 😞 It seems that mover has moved them to Never-Never Land, although some of the targets did not reside on a ZFS drive at all. But then, it could also have been my own fault; I don't know what happened, so I cannot blame anybody (besides myself?). So far I have managed to find many of them and re-upped them (still there now). But I don't know how many there were and where they should have been...
Edited July 19, 2023 by MAM59
JorgeB Posted July 19, 2023
1 hour ago, MAM59 said: I am really wondering why this fatal flaw was not discovered during the beta phase
It was discovered, though it doesn't affect all hardware in the same way, and the fix might not be a simple one; I think it's related to the Unraid driver.
Tun2022 Posted August 7, 2023
On 7/19/2023 at 4:25 AM, JorgeB said: It was discovered, though it doesn't affect all hardware in the same way and the fix might not be a simple one, I think it's related to the Unraid driver.
Same boat - just rolled back to XFS and speed is normal now.
Michus Posted December 13, 2023
I copied 8 TB to an empty 20 TB ZFS SSD pool and ended up with 11 mb/s ;(
JonathanM Posted December 15, 2023
On 12/13/2023 at 12:46 PM, Michus said: I copied 8 TB to an empty 20 TB ZFS SSD Pool and ended up with 11 mb/s ;(
Check your network connection speeds: https://forums.unraid.net/topic/89575-testing-network-performance-with-iperf-how-to-install-use-test/
trurl Posted December 15, 2023
On 12/13/2023 at 12:46 PM, Michus said: I copied 8 TB to an empty 20 TB ZFS SSD Pool and ended up with 11 mb/s ;(
I suspect you mean 11MB/s. That speed sounds suspiciously like 1Gb ethernet degraded to 100Mb.
Attach diagnostics to your NEXT post in this thread.