Tinlad
Posted November 1, 2019

Hello,

Earlier in the year I swapped out the mobo/CPU/RAM in my server:

- From: Core i5-4570S (Asus H87I-Plus), 16GB
- To: Xeon D-1520 (AsrockRack D1520D4I), 16GB ECC

The disks remain identical:

- 2TB WD Red (parity)
- 2TB + 1TB WD Reds (data)
- 2x 240GB OCZ SSDs (cache pool)

They are now connected via a mini-SAS to 4x SATA cable, plus one directly in a SATA port on the board. All report a SATA 3.0 (6 Gbps) link.

I noticed today that, since this change, parity checks have been taking about three times as long. Below is my history; you can clearly see when I changed the hardware in April.

```
2019-10-01, 16:16:11   16 hr, 16 min, 9 sec    34.2 MB/s   OK   0
2019-09-01, 17:44:43   17 hr, 44 min, 42 sec   31.3 MB/s   OK   0
2019-08-01, 16:28:53   16 hr, 28 min, 52 sec   33.7 MB/s   OK   0
2019-07-01, 16:24:50   16 hr, 24 min, 49 sec   33.9 MB/s   OK   0
2019-06-01, 16:13:10   16 hr, 13 min, 9 sec    34.3 MB/s   OK   0
2019-05-01, 16:40:44   16 hr, 40 min, 43 sec   33.3 MB/s   OK   0
2019-04-01, 05:30:55   5 hr, 30 min, 54 sec    100.8 MB/s  OK   0
2019-02-01, 05:30:07   5 hr, 30 min, 6 sec     101.0 MB/s  OK   0
2019-01-01, 05:30:06   5 hr, 30 min, 5 sec     101.0 MB/s  OK   0
2018-12-01, 05:30:06   5 hr, 30 min, 5 sec     101.0 MB/s  OK   0
2018-11-01, 05:30:33   5 hr, 30 min, 32 sec    100.9 MB/s  OK   0
```

I've tried to figure out whether it's just a parity check issue or a more general disk issue. I'm not a Linux guru, so I'm not sure this is the best way to test, but the output of the following dd commands seems to indicate I'm only getting 25-30 MB/s write speed on my data disks (and a much more reasonable 175 MB/s to my cache pool):
```
root@Enthalpy:~# dd if=/dev/zero of=/mnt/disk1/test bs=1G count=20 oflag=dsync
20+0 records in
20+0 records out
21474836480 bytes (21 GB, 20 GiB) copied, 858.472 s, 25.0 MB/s
root@Enthalpy:~# dd if=/dev/zero of=/mnt/disk2/test bs=1G count=20 oflag=dsync
20+0 records in
20+0 records out
21474836480 bytes (21 GB, 20 GiB) copied, 715.322 s, 30.0 MB/s
root@Enthalpy:~# dd if=/dev/zero of=/mnt/cache/test bs=1G count=10 oflag=dsync
10+0 records in
10+0 records out
10737418240 bytes (11 GB, 10 GiB) copied, 61.4057 s, 175 MB/s
```

Based on some Googling I've tried disabling hot plugging and SATA Aggressive Link Power Management in the BIOS, but it's made no difference. Diagnostics are attached. I'd be grateful for any ideas, and am happy to try further diagnostics. Thanks!

enthalpy-diagnostics-20191101-1035.zip
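One caveat about the dd runs above: `oflag=dsync` only measures writes. A read-side check on the raw devices would show whether reads are equally slow. This is a sketch, not a verified procedure for this box; `/dev/sdb` is a placeholder device name, so substitute your actual array members:

```shell
# Read-side counterpart to the write tests above (sketch).
DEV=/dev/sdb   # placeholder -- map serials to sdX names with: ls -l /dev/disk/by-id/
if [ -b "$DEV" ] && [ -r "$DEV" ]; then
  # sequential read straight off the disk, bypassing the page cache
  dd if="$DEV" of=/dev/null bs=1M count=2048 iflag=direct
  # cross-check with hdparm's buffered read timing, if installed
  command -v hdparm >/dev/null && hdparm -t "$DEV" || true
else
  echo "set DEV to one of your array disks first"
fi
```

Reading the raw device rather than a file avoids mixing filesystem overhead into the number.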
JorgeB
Posted November 1, 2019

> indicate I'm only getting 25-30 MB/s write speed on my data disks

Yeah, but that's about right for the default writing mode; it should be much faster with turbo write. There is, however, a problem with read speeds, and nothing is jumping out in the diags. Try running the DiskSpeed docker and see if all disks are performing normally.
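The arithmetic behind "about right for the default writing mode" can be sanity-checked. Unraid's default read/modify/write touches both the data and parity disk twice per stripe, so sustained writes land around a quarter of raw platter speed. A rough sketch, assuming ~100 MB/s sequential throughput for these Reds (a ballpark taken from the pre-upgrade parity check speeds, not a measured figure):

```shell
# Assumed raw sequential speed of a 2TB WD Red, in MB/s (assumption,
# inferred from the ~100 MB/s pre-upgrade parity checks).
platter=100
# Default mode: read old data + old parity, then write new data + new
# parity -- roughly four disk operations per written stripe.
echo "expected default-mode write: ~$((platter / 4)) MB/s"
# Turbo write (reconstruct write) instead reads the other data disks and
# writes data + parity in one pass, approaching full platter speed, at
# the cost of spinning up every disk in the array.
```

That lines up with the 25-30 MB/s dd results, which is why the write numbers alone aren't alarming.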
Tinlad
Posted November 1, 2019 (Author)

Thanks for your reply johnnie.black. I've run the DiskSpeed docker and got the results below (x2 repeats). I stopped all my other dockers, there were no VMs running, and nothing on the network should have been accessing the server and competing for throughput.

[DiskSpeed drive benchmark screenshots]

And here's the controller benchmark:

[DiskSpeed controller benchmark screenshot]

Disk 1 is the 1 TB Red. That seems low to me...
JorgeB
Posted November 2, 2019

The disks are all performing way slower than they should. Since it affects multiple disks, you'd ideally run the same test with a different board/controller to rule that out, or connect a new disk to the current board and test it.
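Short of swapping hardware, it can also be worth confirming the negotiated link speed and scanning the kernel log for link resets, since a marginal mini-SAS breakout cable can renegotiate downward. A sketch; the device name is a placeholder and the dmesg grep pattern is a guess at typical libata wording, not an exhaustive filter:

```shell
DEV=/dev/sdb   # placeholder -- repeat for each array member
# Supported vs currently negotiated link speed; smartctl prints a line like
# "SATA Version is: SATA 3.1, 6.0 Gb/s (current: 6.0 Gb/s)"
command -v smartctl >/dev/null && smartctl -i "$DEV" | grep -i 'sata version'
# Kernel messages about ATA link resets or speed downgrades, if any:
dmesg 2>/dev/null | grep -iE 'ata[0-9]+.*(reset|limit|downgrad)' || true
```

A link stuck at 1.5 or 3.0 Gb/s, or repeated resets, would point at cabling rather than the controller itself.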
Tinlad
Posted November 2, 2019 (Author)

OK, some progress. Two things I've done:

1) Updated to 6.8-rc5, on the basis that there were comments about 6.7 having disk performance issues in certain scenarios. This bumped my parity check speed up to about 55 MB/s (from about 30 MB/s).

2) NCQ was set to 'off' and nr_requests was set to 128. I've now set both of these to 'Auto'. Parity check speed is now up to about 65 MB/s.

I ran DiskSpeed after each change, and nothing has changed from my post above. I'm still only getting about 65 MB/s from my spinning disks and 150 MB/s from my SSDs.

johnnie.black, I agree that this seems to be a controller-related issue. It seems highly unlikely to be a drive issue, given it's affecting all of them. I don't have another controller (or any other drives) available to test, unfortunately.

Is there anything in the BIOS settings I should be looking for? Anything in the UnRAID settings or tunables that would affect every disk on the controller?

Edited November 2, 2019 by Tinlad (spelling)
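For anyone checking the NCQ/nr_requests change above: what 'Auto' actually resolved to can be read per disk from sysfs. A small sketch; the `report` helper is only for formatting, and the `sd*` glob assumes SATA-style device naming:

```shell
# Print the NCQ queue depth and block-layer queue size for each sdX disk.
# A queue_depth of 1 means NCQ is effectively disabled for that device.
report() { echo "$1: queue_depth=$2 nr_requests=$3"; }
for dev in /sys/block/sd*; do
  [ -e "$dev" ] || continue   # glob matched nothing: no sdX devices here
  d=$(basename "$dev")
  qd=$(cat "$dev/device/queue_depth" 2>/dev/null || echo "n/a")
  nr=$(cat "$dev/queue/nr_requests" 2>/dev/null || echo "n/a")
  report "$d" "$qd" "$nr"
done
```

Running it before and after flipping the tunables confirms the settings really took effect on every disk.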
JorgeB
Posted November 3, 2019

> 13 hours ago, Tinlad said:
> Is there anything in the BIOS settings I should be looking for?

Nothing I can think of that would affect read speeds. You can try resetting to defaults; defaults should work at normal speed.

> 13 hours ago, Tinlad said:
> Anything in the UnRAID settings or tunables that would affect every disk on the controller?

Not for the DiskSpeed test, at least I don't think so.