CowboyRedBeard Posted February 18, 2022 (Author)

But shouldn't the behavior below hold basically all the time? I mean, those SSDs should be able to sustain 300 MiB/s or more for extended periods, right? I'm pretty sure this has nothing to do with the network, since it exhibits the same behavior even disk to disk. The I/O wait has got to be a byproduct of whatever the problem is. This wasn't the case for a long time prior to 6.7 on this very same server, with all the same hardware. These current SSD drives were even an attempt to rule out the previous SSDs.
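[Editor's note: a quick way to check the "sustained 300 MiB/s" claim directly on the cache drive is to time writes chunk by chunk with fsync, so a throughput cliff (e.g. an SLC cache filling up) shows up as a falling rate. This is a minimal sketch, not Unraid-specific; the path is a placeholder, and the sizes here are tiny for illustration — a real test would write tens of GiB.]

```python
import os
import time

def sustained_write_mib_s(path, chunk_mib=4, total_mib=64):
    """Write total_mib MiB in chunk_mib chunks and return overall MiB/s.

    For a real SSD test, raise total_mib well past the drive's SLC
    cache size (tens of GiB) and watch whether the rate collapses.
    """
    chunk = os.urandom(chunk_mib * 1024 * 1024)
    start = time.monotonic()
    with open(path, "wb") as f:
        for _ in range(total_mib // chunk_mib):
            f.write(chunk)
            f.flush()
            os.fsync(f.fileno())  # force data to the device, not just the page cache
    elapsed = time.monotonic() - start
    return total_mib / elapsed

# Example (hypothetical path): sustained_write_mib_s("/mnt/cache/testfile", total_mib=65536)
```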
dlandon Posted February 18, 2022

20 minutes ago, CowboyRedBeard said:
"I mean, those SSDs should be able to sustain 300 MiB/s or more for extended periods, right? I'm pretty sure this has nothing to do with the network, since it exhibits the same behavior even disk to disk. The I/O wait has got to be a byproduct of whatever the problem is."

Sorry, I was under the impression that we were talking about an issue with Samba shares. How are the SSDs attached, SATA or NVMe?
CowboyRedBeard Posted February 18, 2022 (Author)

Both SSDs are connected via SATA3 ports on the motherboard (SuperMicro), and I've also tried connecting them to a SATA card plugged into a PCIe slot.
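[Editor's note: to confirm how each drive is actually attached, you can check where its sysfs path resolves. This is a best-effort sketch that infers the transport from substrings of the resolved device path on Linux; `lsblk -o NAME,TRAN` gives the same answer if available.]

```python
import os

def block_device_transports():
    """Map each block device to a transport hint inferred from sysfs.

    The resolved /sys/block/<dev> path contains 'nvme' for NVMe
    drives, an 'ataN' component for SATA disks behind libata, and
    'usb' for USB bridges. Returns {} on non-Linux systems.
    """
    result = {}
    base = "/sys/block"
    if not os.path.isdir(base):
        return result
    for dev in os.listdir(base):
        real = os.path.realpath(os.path.join(base, dev))
        if "/nvme/" in real:
            result[dev] = "nvme"
        elif "/ata" in real:
            result[dev] = "sata"
        elif "/usb" in real:
            result[dev] = "usb"
        else:
            result[dev] = "unknown"
    return result

# Example: print(block_device_transports())
```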
dlandon Posted February 18, 2022

Didn't you say that you had an NVMe disk installed that gave you the performance you were looking for?
CowboyRedBeard Posted February 18, 2022 (Author)

Yeah, I had an Intel Optane drive in there that I was using with VMs. I was able to send to it at a pretty crazy clip, but the SSDs won't achieve anything even approaching what the benchmarks say they can do for more than a few minutes.
dlandon Posted February 18, 2022

I'd say this is an issue with the SSD disks. I think we should call in the big gun. @JorgeB, can you weigh in on this user's issues? Looks like some SSD compatibility issues.
CowboyRedBeard Posted February 18, 2022 (Author)

I have another new Crucial MX500 (CT1000MX500SSD1) I can replace it with to try... but I had this problem with the 500 GB MX500 drives I had before. Those were two drives in a pool at first; I split the cache down to a single drive and the problem persisted. I tried both BTRFS and XFS. I'll do whatever tests you guys think make sense. Let me know. THANK YOU FOR THE HELP! Yes, that's caps 🙃
dlandon Posted February 18, 2022

Just hold where you are until JorgeB can lend a hand. It will take him a while to respond because of the time difference.
JorgeB Posted February 19, 2022

I already posted in this thread before; it's an elusive issue and one that I can't replicate, so I don't really have anything new to add.
CowboyRedBeard Posted February 21, 2022 (Author)

Do you think this is a product of the configuration? I can't see how it's a hardware issue, so maybe reinstalling unRAID would help? Also, I'm more than happy to post any logs / test data needed to solve this.

Edited February 21, 2022 by CowboyRedBeard
CowboyRedBeard Posted March 3, 2022 (Author)

Any chance this could have something to do with my memory configuration or something like that? I am using the SCU cable for most of the array drives, although the parity disk and cache drive are on the onboard SATA3 ports.
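[Editor's note: since high I/O wait keeps coming up in this thread, here is a minimal sketch of how to watch it while a transfer runs, by sampling /proc/stat twice. iowait is the fifth field of the aggregate "cpu" line; the percentage is the iowait delta over the total jiffy delta. Linux-only, not Unraid-specific.]

```python
import time

def iowait_percent(interval=1.0):
    """Sample /proc/stat twice and return CPU iowait as a percentage."""
    def sample():
        with open("/proc/stat") as f:
            vals = [int(x) for x in f.readline().split()[1:]]
        return vals[4], sum(vals)  # index 4 is the iowait field
    io1, total1 = sample()
    time.sleep(interval)
    io2, total2 = sample()
    dt = total2 - total1
    return 100.0 * (io2 - io1) / dt if dt else 0.0

# Example: run a copy in another terminal, then print(f"{iowait_percent():.1f}%")
```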