sansei Posted March 28

I've noticed this behavior since I changed the cache pool filesystem to ZFS; the pool is two mirrored 2TB SSDs. Before that I was using BTRFS without issues. File copies from Windows to Unraid sometimes start normally, then gradually drop to 0 bytes/s and eventually fail.

To rule out network issues, I created a share that exposes a specific disk in the array. Copying files from Windows directly to that share is normal, at about 60MB/s. But once the copy is done and I go into the terminal to copy the files to the cache to test write speed, it is extremely slow: a 70GB file takes over an hour to complete. It often blocks other activity too, such as updating Docker containers; it behaves as if the copy task is blocking all other writes to the cache disks. In one severe case, the GUI became completely unresponsive and threw a 500 server error. Reading from the cache pool is always fine, with normal speed reaching 115MB/s.

This box has been running for about 11 years, with 32GB of non-ECC memory, of which 4GB is allocated to ZFS, as that ratio seemed fine. No VMs, just Docker containers. Fix Common Problems found no errors or warnings. Diagnostics file attached: tower-diagnostics-20240328-0607.zip
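For anyone reproducing the terminal write test above: a quick way to measure sustained write speed independent of SMB is dd with a forced flush. This is only a sketch; the file path is a placeholder (on Unraid the cache pool is typically mounted at /mnt/cache, so you would point TESTFILE there).

```shell
#!/bin/sh
# Sketch: sequential write test with dd. TESTFILE defaults to /tmp here
# for safety; to benchmark the cache pool, point it at the pool instead,
# e.g. TESTFILE=/mnt/cache/speedtest.bin (that path is an assumption).
TESTFILE="${TESTFILE:-/tmp/speedtest.bin}"

# conv=fsync flushes data to the device before dd reports throughput,
# so the number reflects the disks rather than RAM caching.
dd if=/dev/zero of="$TESTFILE" bs=1M count=256 conv=fsync

# Remove the test file when done.
rm -f "$TESTFILE"
```

One caveat: on a ZFS pool with compression enabled, zeros compress to almost nothing and will overstate the speed; swapping in if=/dev/urandom (slower to generate) gives a more honest figure.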
JorgeB Posted March 28 (Solution)

This looks more like a device problem. Do you have other devices you could try with? Even a couple of non-SMR HDDs would perform better than that.
sansei Posted March 28 (Author)

7 hours ago, JorgeB said: This looks more like a device problem. Do you have other devices you could try with? Even a couple of non-SMR HDDs would perform better than that.

These are Crucial BX500 SATA SSDs rated at 500MB/s write speed: https://www.crucial.com/ssd/bx500/ct2000bx500ssd1. If read speed is normal, why are writes so slow? I'll try some old SSDs to test.
JorgeB Posted March 28

They are rated *up to* 500MB/s; they cannot sustain that speed for writes. It should still be faster than what you are seeing, though, so one or both drives could be failing.
sansei Posted March 30 (Author)

On 3/28/2024 at 7:16 AM, JorgeB said: This looks more like a device problem. Do you have other devices you could try with? Even a couple of non-SMR HDDs would perform better than that.

It was indeed a problem with one of the cache drives. I moved the data from the cache pool to the array, ran copy tasks individually on each drive, and found that one of the disks is acting up. Will do the RMA. Thanks a lot!
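The per-drive isolation test described above can be sketched like this: write the same amount of data to each drive in turn and compare dd's reported throughput. The mount points below are placeholders; substitute wherever each suspect drive is actually mounted.

```shell
#!/bin/sh
# Sketch: benchmark each drive individually by writing the same file to
# each mount point in turn. The paths are placeholders, not from the
# thread; replace them with your actual per-drive mount points.
for MNT in /tmp/driveA /tmp/driveB; do
    mkdir -p "$MNT"
    printf 'Testing %s\n' "$MNT"
    # 64 MiB per drive, flushed to disk before dd reports throughput
    dd if=/dev/zero of="$MNT/speedtest.bin" bs=1M count=64 conv=fsync
    rm -f "$MNT/speedtest.bin"
done
```

A drive whose throughput is an order of magnitude below its sibling's is the likely culprit; checking its SMART report (e.g. with `smartctl -a`) before the RMA is also worthwhile.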