oleg-123 Posted March 25
This is a bit convoluted, but here goes... I'm migrating my Unraid server to a new box with all-NVMe storage (16 x Intel P4510). The drives are hooked up to several HBA cards: two HighPoint 1580s and one Broadcom 9600-24i. Twelve drives are connected directly to the two HighPoint cards, and the remaining four are inside an Icy Dock enclosure, which is connected to the Broadcom card via OCuLink cables. For some reason, the drives inside the Icy Dock read/write at around 3 GB/s, while the drives connected to the HighPoint cards max out at 750 MB/s. I'm using the DiskSpeed docker for benchmarks. Any ideas on where to begin figuring this out? Diagnostics file attached. roach-diagnostics-20240325-1522.zip
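For anyone wanting to sanity-check DiskSpeed's numbers outside the docker, a raw sequential read with dd is a quick cross-check. This is a sketch: the device path is a placeholder, and although the command only reads, verify the device name before running it.

```shell
# Read 4 GiB sequentially from the raw device, bypassing the page cache
# (iflag=direct), and print throughput as it goes.
# /dev/nvme0n1 is a placeholder - substitute the drive you want to test.
sudo dd if=/dev/nvme0n1 of=/dev/null bs=1M count=4096 iflag=direct status=progress
```

The reported rate should land near 3 GB/s for a healthy P4510 doing large sequential reads; a drive stuck around 750 MB/s will show the same ceiling here.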
JorgeB Posted March 26
Swap some drives; if the performance remains low on the HighPoint, it suggests a controller issue/bottleneck.
oleg-123 Posted March 26
Oh it's definitely the controller. I just need to figure out whether there are settings I can adjust to make it work properly. Whether it's in BIOS, in Unraid, or...
JorgeB Posted March 26
I'm afraid that won't be easy unless someone else has the same controller and found a way to make it perform better; usually the default settings are all you need. One thing you can do is run lspci -vv to confirm the link speed/width are not downgraded.
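A quick way to do that check is to loop over the NVMe-class PCI devices and pull out the link lines. A minimal sketch, assuming the drives enumerate as standard NVMe devices (class 0108):

```shell
# For each NVMe controller, print its PCIe link capability vs. negotiated status.
# LnkCap is what the device supports; LnkSta is what was actually negotiated.
# A P4510 is PCIe 3.1 x4, so expect "Speed 8GT/s, Width x4" in both lines.
for dev in $(lspci -d ::0108 | awk '{print $1}'); do
    echo "=== $dev ==="
    sudo lspci -vv -s "$dev" | grep -E 'LnkCap:|LnkSta:'
done
```

If LnkSta shows a lower speed or width than LnkCap (e.g. "Width x1"), the link trained down and that alone would explain a ~750 MB/s ceiling.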
oleg-123 Posted March 26
Thanks @JorgeB! I'll see what I can get with lspci -vv and will post an update.
oleg-123 Posted March 29
@JorgeB could you help me parse the output? lspci-vv-output-new.txt
oleg-123 Posted March 30
Out of curiosity, I booted into a live Mint distro and benchmarked the disks from its disk utility. All of the disks ran at full speed. Looks like it is something in Unraid itself...
JorgeB Posted March 30
Check the driver that Mint is using; if the driver is the same it should perform the same. Also post the complete diagnostics.
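Two generic ways to see which driver is bound, runnable on both distros for comparison (the controller name below is a placeholder):

```shell
# Show each NVMe controller with the kernel driver bound to it;
# compare the "Kernel driver in use" line between Unraid and Mint.
lspci -k -d ::0108

# For one controller, sysfs also reports the bound driver directly
# (nvme0 is a placeholder controller name - substitute your own).
basename "$(readlink /sys/class/nvme/nvme0/device/driver)"
```

On both systems this will normally print the in-kernel `nvme` driver; if it does, the difference lies elsewhere (kernel version, power management, or scheduler settings, for example).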
oleg-123 Posted March 30 (edited)
Thanks! The complete diagnostics are in the top post. I'll look for the driver. Am I looking for HBA card drivers or NVMe drivers specifically?
JorgeB Posted March 31
The HBA is just a PCIe switch, so no driver is loaded. All NVMe devices are reporting the correct link speed/width, so to me this looks like an HBA problem, but I'm not sure why it would be different with another distro.
oleg-123 Posted April 1
Weird. Tested with Fedora 39 and the same thing - all disks perform at full speed.
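To rule out tool differences when benchmarking across distros, the same fio job can be run on each. A minimal sequential-read sketch; the device path is a placeholder, and --readonly guards against accidental writes to the raw device:

```shell
# 30-second sequential read at 1 MiB blocks, queue depth 32, direct I/O.
# The summary line ("READ: bw=...") is the number to compare across distros.
sudo fio --name=seqread --filename=/dev/nvme0n1 --readonly \
    --rw=read --bs=1M --iodepth=32 --ioengine=libaio \
    --direct=1 --runtime=30 --time_based \
  | grep -E 'READ: bw='
```

Because the job parameters are identical everywhere, any remaining gap between Unraid and Mint/Fedora points at the OS (kernel version, ASPM/power management, IRQ affinity) rather than the benchmark tool.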