xhaloz Posted September 14, 2018 (edited) Hello all. I've noticed that when I do a network transfer to a share that lives only on the cache, it starts at about 110 MB/s, then drops to around 20 MB/s and stays there. During that transfer the server locks up for about 5 minutes, even after I cancel the transfer. This also takes down my internet, since my DNS server is hosted on the cache in question. If I transfer files to a share that is only on the array, it holds 110 MB/s for the entire transfer. The transfer itself is about 70 GB. Diagnostics are attached, and yes, I am using SSD trim. Thanks in advance for any help. Edited September 15, 2018 by xhaloz
JorgeB Posted September 14, 2018 That's not a very fast SSD, but 20 MB/s is still too slow if it's being trimmed regularly; I would expect around 80-100 MB/s sustained writes. You can use the script below to check write speeds without involving the network; if it's still slow, the SSD is the problem. Copy it to the root of the flash drive and then type on the console: /boot/write_speed_test.sh /mnt/cache/test.dat write_speed_test.sh
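The attached script itself isn't reproduced in the thread, but a minimal sketch of what such a write-speed test typically does looks like this. The file name, size, and dd flags here are assumptions for illustration; the actual write_speed_test.sh may differ:

```shell
#!/bin/bash
# Rough sketch of a cache write-speed test: write a large file with dd
# and report sustained throughput. File name, size, and flags are my
# assumptions -- the attached write_speed_test.sh may do more.

OUT="${1:-./speedtest.dat}"

# Write 256 MiB in 1 MiB blocks; conv=fdatasync forces the data to disk
# before dd reports, so the figure reflects the device, not the page cache.
RESULT=$(dd if=/dev/zero of="$OUT" bs=1M count=256 conv=fdatasync 2>&1 | tail -n 1)
echo "$RESULT"

# Remove the test file when done
rm -f "$OUT"
```

Running it against the cache mount (e.g. `./speedtest.sh /mnt/cache/test.dat`) takes the network out of the picture, so a slow result points at the SSD rather than the LAN.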
xhaloz Posted September 15, 2018 Author 8 hours ago, johnnie.black said: You can use the script below to check write speeds without involving the network... This is an awesome tool! It did 2 transfers at 284 MB/s and then stopped... so my cache drive is whacked, eh?
JorgeB Posted September 15, 2018 If by stopped you mean it didn't finish the test, then yes, there's probably a problem with the SSD.
xhaloz Posted September 15, 2018 Author 5 minutes ago, johnnie.black said: If by stopped you mean it didn't finish the test... Yeah, stopped; it didn't finish the test. It locked up and I lost my services, as mentioned before. Cool, my wife is out and about, so I'll have her snag me an SSD. I think I have it all backed up via the CA plugin. I appreciate you very much.
Andiroo2 Posted September 19, 2018 Is your cache a RAID1 pool on btrfs? I had to switch back to a JBOD cache because large file transfers to my RAID1 cache would cause the server to hang and all my Docker containers to stop. By large files I mean >2 GB, for example. I'm running 6.5.3.
xhaloz Posted September 19, 2018 Author Yeah, RAID1. It ended up being a bad cache drive. Swapped the old one for a new one and the problem is solved, thanks to the test above.
testdasi Posted September 19, 2018 1 hour ago, Andiroo2 said: Is your cache in a RAID1 pool on btrfs?... You might have a problem. I have never had the server hang (or dockers stop) while transferring large files to/from the cache, even back when my server was a simple NAS. And by large I mean up to 200 GB.