Parity check three times slower than sync or rebuild. Is it normal?



Running parity check with SAS2LP @ ~42MB/s

 

top - 23:29:10 up 3 min,  1 user,  load average: 1.43, 0.66, 0.26
Tasks: 201 total,   2 running, 198 sleeping,   0 stopped,   1 zombie
Cpu(s):  5.4%us, 18.4%sy,  0.0%ni, 62.9%id,  0.0%wa,  0.0%hi, 13.3%si,  0.0%st
Mem:   3728184k total,   815572k used,  2912612k free,     6520k buffers
Swap:        0k total,        0k used,        0k free,   445020k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
2419 root      20   0     0    0    0 S   32  0.0   0:20.36 unraidd
2266 root      20   0     0    0    0 D   16  0.0   0:10.12 mdrecoveryd
1094 root       0 -20     0    0    0 S    7  0.0   0:04.45 kworker/1:1H
1096 root       0 -20     0    0    0 S    5  0.0   0:03.18 kworker/0:1H
2250 root      20   0 88956 3288 2820 S    1  0.1   0:01.50 emhttp
    3 root      20   0     0    0    0 S    1  0.0   0:00.53 ksoftirqd/0
1520 root      20   0  9360 2356 2160 S    0  0.1   0:00.11 cpuload
4764 root      20   0  122m  15m  12m R    0  0.4   0:00.01 php
    1 root      20   0  4368 1592 1492 S    0  0.0   0:11.25 init
    2 root      20   0     0    0    0 S    0  0.0   0:00.00 kthreadd
    4 root      20   0     0    0    0 S    0  0.0   0:00.05 kworker/0:0
    5 root       0 -20     0    0    0 S    0  0.0   0:00.00 kworker/0:0H
    6 root      20   0     0    0    0 S    0  0.0   0:00.00 kworker/u4:0
    7 root      20   0     0    0    0 S    0  0.0   0:00.11 rcu_preempt
    8 root      20   0     0    0    0 S    0  0.0   0:00.00 rcu_sched
    9 root      20   0     0    0    0 S    0  0.0   0:00.00 rcu_bh
   10 root      RT   0     0    0    0 S    0  0.0   0:00.12 migration/0

 

With SASLP, parity check @ ~80MB/s (not the same server, but identical hardware except the CPU, a Celeron G540)

 

top - 07:39:57 up 1 day, 23:28,  1 user,  load average: 1.30, 1.78, 2.30
Tasks: 194 total,   3 running, 191 sleeping,   0 stopped,   0 zombie
Cpu(s):  3.9%us, 36.0%sy,  0.0%ni, 34.1%id,  0.0%wa,  0.0%hi, 26.0%si,  0.0%st
Mem:   3726652k total,  3485348k used,   241304k free,       20k buffers
Swap:        0k total,        0k used,        0k free,  3029200k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
2447 root      20   0     0    0    0 R   60  0.0 350:53.32 unraidd
2327 root      20   0     0    0    0 D   28  0.0 114:57.81 mdrecoveryd
17167 root       0 -20     0    0    0 S   16  0.0   0:06.73 kworker/1:2H
28601 root       0 -20     0    0    0 S   12  0.0   0:05.13 kworker/0:1H
    3 root      20   0     0    0    0 S    0  0.0   4:17.88 ksoftirqd/0
2310 root      20   0 90104 4484 2940 S    0  0.1   7:47.74 emhttp
2759 root      20   0  9868 2628 1924 S    0  0.1   7:59.08 cache_dirs
2799 avahi     20   0 34320 2808 2552 S    0  0.1   0:16.93 avahi-daemon
    1 root      20   0  4368 1648 1548 S    0  0.0   0:09.76 init
    2 root      20   0     0    0    0 S    0  0.0   0:00.02 kthreadd
    7 root      20   0     0    0    0 S    0  0.0   1:28.74 rcu_preempt
    8 root      20   0     0    0    0 S    0  0.0   0:00.00 rcu_sched
    9 root      20   0     0    0    0 S    0  0.0   0:00.00 rcu_bh
   10 root      RT   0     0    0    0 S    0  0.0   0:05.11 migration/0
   11 root      RT   0     0    0    0 S    0  0.0   0:05.89 migration/1
   12 root      20   0     0    0    0 S    0  0.0   0:10.38 ksoftirqd/1
   15 root       0 -20     0    0    0 S    0  0.0   0:00.00 khelper


Replaced the SAS2LP with the SASLP on the 1st server (Celeron G550); parity check now runs @ ~80MB/s

 

top - 13:46:43 up 12 min,  1 user,  load average: 1.86, 1.41, 0.85
Tasks: 196 total,   2 running, 194 sleeping,   0 stopped,   0 zombie
Cpu(s):  1.5%us, 36.2%sy,  0.0%ni, 33.2%id,  0.0%wa,  0.0%hi, 29.1%si,  0.0%st
Mem:   3728184k total,   818632k used,  2909552k free,     6528k buffers
Swap:        0k total,        0k used,        0k free,   445236k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
2449 root      20   0     0    0    0 R   61  0.0   3:42.91 unraidd
2296 root      20   0     0    0    0 D   28  0.0   1:44.36 mdrecoveryd
  953 root       0 -20     0    0    0 S   17  0.0   0:56.89 kworker/0:1H
1113 root       0 -20     0    0    0 S   17  0.0   1:02.53 kworker/1:1H
    3 root      20   0     0    0    0 S    1  0.0   0:02.67 ksoftirqd/0
15716 root      20   0  124m  16m  13m S    1  0.5   0:00.02 php
2747 root      20   0  9460 2224 1928 S    0  0.1   0:00.38 cache_dirs
15657 root      20   0 13284 2356 1984 R    0  0.1   0:00.01 top
    1 root      20   0  4368 1632 1536 S    0  0.0   0:11.25 init
    2 root      20   0     0    0    0 S    0  0.0   0:00.00 kthreadd
    4 root      20   0     0    0    0 S    0  0.0   0:00.20 kworker/0:0
    5 root       0 -20     0    0    0 S    0  0.0   0:00.00 kworker/0:0H
    7 root      20   0     0    0    0 S    0  0.0   0:00.32 rcu_preempt
    8 root      20   0     0    0    0 S    0  0.0   0:00.00 rcu_sched
    9 root      20   0     0    0    0 S    0  0.0   0:00.00 rcu_bh
   10 root      RT   0     0    0    0 S    0  0.0   0:00.14 migration/0
   11 root      RT   0     0    0    0 S    0  0.0   0:00.02 migration/1

 

 

It’s not a good time to replace the card in the other server because I’m running a batch script to create par2 recovery files, but since the speed is the same I expect similar results.

 

For the moment I’m going to keep using both SASLP cards, because I’m about to upgrade some disks on both servers and it would take more than 24 hours to rebuild a 4TB disk at 40MB/s.

 


Running parity check with SAS2LP @ ~42MB/s ... With SASLP, parity check @ ~80MB/s

Thanks... no, that does not look like "my" problem; it seems more SAS2LP performance related...


I have two SAS2LPs; both have the same issue, on different servers, although with similar hardware.

 

Also, user luvmich appears to have the same problem in post #18 of this thread.

 

EDIT: I'm hoping this is a compatibility issue with some hardware / BIOS. I think this is a pretty common card here, so if anyone gets a normal > 80MB/s parity check / disk rebuild speed (not parity sync) with a SAS2LP, please post your hardware.

 

Thanks.


I've two SAS2LPs; both have the same issue, on different servers, although with similar hardware.

 

Also, user luvmich appears to have the same problem in post #18 of this thread.

 

Ahh ... I missed that.  In fact, I noticed the comment in post #18 that "... Server 1 uses the onboard SATA and a AOC-SAS2LP-MV8.  System 2 uses only the 2 AOC-SASLP-MV8."  and missed that this was a different setup than yours.

 

One possibility that I didn't see discussed ==>  Your board's second PCIe x16 slot is actually just an x4 slot in an x16 connector.  If you're using this slot, then your SAS2LP only has 4 lanes connected, while it's designed to use all 8 ... so you're restricting it to half of its rated bandwidth.    The SASLP card is only an x4 card, so it's getting its full designed bandwidth.

 

Note that the motherboard luvmich uses has this same issue => one of his x8 slots is actually an x4 slot with an x8 connector.

 

So it could be that you're simply handicapping your cards by not providing the number of PCIe lanes they're designed for.
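
To put rough numbers on those lane counts, here's a back-of-the-envelope sketch (Python; the per-lane figures are the usual theoretical ones after 8b/10b encoding overhead, not anything measured on these boards — real-world throughput is lower):

# Rough one-way PCIe slot bandwidth, in MB/s.
# Assumed: ~250 MB/s per PCIe 1.0 lane, ~500 MB/s per PCIe 2.0 lane.
MB_PER_LANE = {1: 250, 2: 500}  # PCIe generation -> MB/s per lane

def slot_budget(gen, lanes):
    """Theoretical one-way slot bandwidth in MB/s."""
    return MB_PER_LANE[gen] * lanes

print(slot_budget(2, 8))  # SAS2LP in a true x8 slot:       4000 MB/s
print(slot_budget(2, 4))  # SAS2LP in an x4-wired x16 slot: 2000 MB/s
print(slot_budget(1, 4))  # SASLP, an x4 PCIe 1.0 card:     1000 MB/s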

 

 

 


So it could be that you're simply handicapping your cards by not providing the number of PCIe lanes they're designed for.

 

That’s what I first thought, but the card is in a PCIe x16 slot, and creating parity runs at normal speed, > 100MB/s; it’s only slow on parity check or disk rebuild.

 

I also tried both the SASLP and the SAS2LP on the same server with 4 disks each, and the parity check was the same 40MB/s; with all 8 disks on the SASLP it’s 80MB/s, so I don’t think it’s a bandwidth issue.

 


I’m still trying to get to the bottom of this issue, so I did some more tests on different hardware. Based on those, I believe there’s an issue with the Supermicro AOC-SAS2LP-MV8 card and Unraid during parity check / disk rebuild, apparently since version 5, according to reply #18 from luvmich in this thread. Strangely, the card works as expected on parity sync.

 

For anyone interested, here are the tests I made. I used 4 different boards and CPUs, 2 different SAS2LPs with the latest BIOS, and for reference one SASLP and the Intel onboard SATA.

 

Hardware used:

 

Asus P5KPL-AM / Intel G31 chipset, PCIe 1.0 / Core 2 Duo

Intel S3000AHXL server board / Intel 3000 chipset, PCIe 1.0 / Core 2 Duo

ASRock B75 Pro3-M / PCIe 2.0 / Celeron G550

Asus P8H77-M / PCIe 2.0 / i5-3470

 

New USB pen with a barebones Unraid 6.0.0 trial and only 3 disks:

Parity: WD30EZRX 3TB

Data: 2 x Intel SSD X25-V 40GB

 

A strange disk combination, I know, but it’s what I had available that suited my purpose  ;)

 

With only 3 disks I shouldn’t be anywhere near the SAS2LP bandwidth limit, and while that holds for parity sync, it appears I’m hitting some limit during parity checks.

SAS2LP link speed was confirmed by the card BIOS at boot and by the Unraid system log; the SASLP's max link is PCIe 1.0 x4.
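
For anyone who wants to double-check the negotiated link on a live system, lspci reports it as well; LnkCap shows what the card supports and LnkSta what was actually negotiated:

lspci -vv | grep -E 'LnkCap|LnkSta'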

 

Parity sync speed:

Intel onboard SATA: 138 MB/s

SASLP on PCIe 1.0 x4: 138 MB/s

SAS2LP on PCIe 2.0 x8: 139 MB/s

SAS2LP on PCIe 2.0 x4: 138 MB/s

SAS2LP on PCIe 1.0 x8: 135 MB/s

 

Parity sync speeds are all normal, limited by the 3TB WD Green. Now, the parity check speeds:

 

Intel onboard SATA: 138 MB/s

SASLP on PCIe 1.0 x4: 137 MB/s

SAS2LP on PCIe 2.0 x8: 95 MB/s

SAS2LP on PCIe 2.0 x4: 38 MB/s

SAS2LP on PCIe 1.0 x8: 77 MB/s

 

This is very strange. PCIe 2.0 x8 has a bandwidth of 4GB/s, and PCIe 2.0 x4 and PCIe 1.0 x8 should have roughly the same bandwidth as each other, yet they give very different results, and none of them should be near any limit, as proved by the SASLP results. And while 95MB/s seems an acceptable speed, remember it’s with only 3 disks; with 8, the max parity check speed is 40MB/s linked at PCIe 2.0 x8, which is painfully slow for 3TB and larger disks.
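
Putting numbers on that: a parity check only has to stream every disk through the card at the per-disk speed, so a quick sketch (assuming the ~140MB/s per-disk figure from the sync results above):

drives, per_drive = 3, 140             # disks behind the card, MB/s each
needed = drives * per_drive            # ~420 MB/s aggregate on the bus
budgets = {"PCIe 2.0 x8": 4000, "PCIe 2.0 x4": 2000, "PCIe 1.0 x8": 2000}
for slot, budget in budgets.items():
    print(f"{slot}: need {needed} of {budget} MB/s ({100*needed/budget:.0f}%)")
# None of these come close to saturating the link, so raw PCIe
# bandwidth can't explain the parity check numbers above.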

 

I’m open to any ideas.

 

While I’m still hoping another user with this card can prove me wrong, I wouldn’t recommend it for Unraid use. I believe this is more a Linux than an Unraid issue and hope it's fixed soon.

 

Below are screenshots from all tests.

 



Just for grins, what happens if you set up a 3-drive system with UnRAID v4.7 and these cards?

 

You'll have to find a different set of drives to test that with, since you can't use your extra 3TB drive (v4.7 doesn't support drives over 2.2TB). But that would at least confirm whether or not this is an issue with the card itself or with the drivers for it used in the later versions.

 


Just tried Unraid 4.7, but it does not support the card. You gave me the idea to try Unraid 5, though, and surprise: parity check works at normal speed.

 


 

 

Because this time I used 3 SSDs, here's the same test repeated on v6, done on the same computer, SAS2LP on PCIe 1.0 x8:

 


 

 

I think this definitely rules out a hardware problem. Now, is it a Linux or an Unraid problem?

 


There were some changes under the hood starting with beta 15 that *may* affect parity check times.  (I was one of the beta testers for an unreleased version of beta 14 and noticed similar slowdowns.)  These changes affect different hardware differently.

 

You can try running the unRAID Tunables Tester (http://lime-technology.com/forum/index.php?topic=29009.0) to try to get the speed back.  (But you may or may not be able to recoup all of your v5 speeds.)

 

Note that this is not a SAS2LP issue; rather, it is a system-wide issue. Your entire system as a whole is affected, rather than merely a single component.


... I think this definitely rules out a hardware problem. Now, is it a Linux or an Unraid problem?

 

Yes, I'd agree it exonerates the card as the problem.  You should send Tom a note (with a link to this thread) to confirm they're aware of this issue with the SAS2LPs => they are fairly common cards in UnRAID systems.

 

 


There were some changes under the hood starting with beta 15 that *may* affect parity check times.  (I was one of the beta testers for an unreleased version of beta 14 and noticed similar slowdowns.)  These changes affect different hardware differently.

You can try running the unRAID Tunables Tester (http://lime-technology.com/forum/index.php?topic=29009.0) to try to get the speed back.  (But you may or may not be able to recoup all of your v5 speeds.)

Note that this is not a SAS2LP issue; rather, it is a system-wide issue. Your entire system as a whole is affected, rather than merely a single component.

 

That explains it. Unfortunately the tunables tester doesn’t make any difference for me; I had tried it before posting in this thread with no significant difference, and tried again on the test server, where in fact the default setting is the fastest, although there’s little difference between them.

 

unRAID Tunables Tester v2.2 by Pauven

Test 1   - md_sync_window=384  - Completed in 120.625 seconds =  68.4 MB/s
Test 2   - md_sync_window=512  - Completed in 120.629 seconds =  65.9 MB/s
Test 3   - md_sync_window=640  - Completed in 120.648 seconds =  65.0 MB/s
Test 4   - md_sync_window=768  - Completed in 120.626 seconds =  66.1 MB/s
Test 5   - md_sync_window=896  - Completed in 120.638 seconds =  66.6 MB/s
Test 6   - md_sync_window=1024 - Completed in 120.636 seconds =  65.7 MB/s
Test 7   - md_sync_window=1152 - Completed in 120.629 seconds =  66.4 MB/s
Test 8   - md_sync_window=1280 - Completed in 120.634 seconds =  64.7 MB/s
Test 9   - md_sync_window=1408 - Completed in 120.624 seconds =  64.8 MB/s
Test 10  - md_sync_window=1536 - Completed in 120.633 seconds =  66.3 MB/s
Test 11  - md_sync_window=1664 - Completed in 120.632 seconds =  63.3 MB/s
Test 12  - md_sync_window=1792 - Completed in 120.651 seconds =  63.5 MB/s
Test 13  - md_sync_window=1920 - Completed in 120.622 seconds =  63.3 MB/s
Test 14  - md_sync_window=2048 - Completed in 120.651 seconds =  63.2 MB/s

Completed: 0 Hrs 28 Min 11 Sec.
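
For what it's worth, here's a small Python sketch (a hypothetical helper, not part of the tester) that picks the fastest window out of a report in the format above:

import re

# Line format copied from the tester's output above.
report = """
Test 1   - md_sync_window=384  - Completed in 120.625 seconds =  68.4 MB/s
Test 5   - md_sync_window=896  - Completed in 120.638 seconds =  66.6 MB/s
Test 14  - md_sync_window=2048 - Completed in 120.651 seconds =  63.2 MB/s
"""

results = re.findall(r"md_sync_window=(\d+).*?=\s*([\d.]+) MB/s", report)
speed, window = max((float(s), int(w)) for w, s in results)
print(f"fastest: md_sync_window={window} at {speed} MB/s")
# -> fastest: md_sync_window=384 at 68.4 MB/s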

 

I understand that the changes affect the system as a whole, but maybe there is something that can be done. In the same system I get twice the speed with a bus-limited SASLP; I would be happy with at least the same performance from the SAS2LP, about an 80MB/s parity check with a fully loaded card.

 

 

Yes, I'd agree it exonerates the card as the problem.  You should send Tom a note (with a link to this thread) to confirm they're aware of this issue with the SAS2LP's => they are fairly common cards in UnRAID systems.

 

 

I will, and many thanks for all your help.

 


I'm having a similar issue moving from 5.0 to 6.0.  On 5.0 with antiquated hardware I would get 100+ MB/sec parity syncs.  Now I've just bought some new hardware, and the parity sync appears to have been around 35 MB/sec almost all evening.  What is odd is that reads and writes from the array seem pretty good.  Just the parity sync is slow.


I think I have the same problem, using M1015 cards (which AFAIK use the same chipset). The same hardware was working fine with v5 and now shows performance degradation after the update to v6(.0.1).

@Tom: Is it possible that this is a problem with the (64-bit) drivers for this card, or with the init settings? Would it help to post the syslog parts of the card initialization? Here is mine:

Jul  4 11:01:04 XMS-GMI-01 kernel: mpt2sas version 20.100.00.00 loaded (System)
Jul  4 11:01:04 XMS-GMI-01 kernel: mpt2sas0: 64 BIT PCI BUS DMA ADDRESSING SUPPORTED, total mem (3920328 kB) (Drive related)
Jul  4 11:01:04 XMS-GMI-01 kernel: mpt2sas0: MSI-X vectors supported: 1, no of cores: 2, max_msix_vectors: 8 (Drive related)
Jul  4 11:01:04 XMS-GMI-01 kernel: mpt2sas0-msix0: PCI-MSI-X enabled: IRQ 29 (Drive related)
Jul  4 11:01:04 XMS-GMI-01 kernel: mpt2sas0: iomem(0x00000000f04c0000), mapped(0xffffc90010870000), size(16384) (Drive related)
Jul  4 11:01:04 XMS-GMI-01 kernel: mpt2sas0: ioport(0x000000000000ee00), size(256) (Drive related)
Jul  4 11:01:04 XMS-GMI-01 kernel: mpt2sas0: sending message unit reset !! (Drive related)
Jul  4 11:01:04 XMS-GMI-01 kernel: mpt2sas0: message unit reset: SUCCESS (Drive related)
Jul  4 11:01:04 XMS-GMI-01 kernel: mpt2sas0: Allocated physical memory: size(7445 kB) (Drive related)
Jul  4 11:01:04 XMS-GMI-01 kernel: mpt2sas0: Current Controller Queue Depth(3307), Max Controller Queue Depth(3432) (System)
Jul  4 11:01:04 XMS-GMI-01 kernel: mpt2sas0: Scatter Gather Elements per IO(128) (Drive related)
Jul  4 11:01:04 XMS-GMI-01 kernel: mpt2sas0: LSISAS2008: FWVersion(20.00.04.00), ChipRevision(0x03), BiosVersion(07.39.00.00) (Drive related)
Jul  4 11:01:04 XMS-GMI-01 kernel: mpt2sas0: Protocol=(Initiator,Target), Capabilities=(TLR,EEDP,Snapshot Buffer,Diag Trace Buffer,Task Set Full,NCQ) (Drive related)
Jul  4 11:01:04 XMS-GMI-01 kernel: scsi host2: Fusion MPT SAS Host (Drive related)
Jul  4 11:01:04 XMS-GMI-01 kernel: mpt2sas0: sending port enable !! (Drive related)
Jul  4 11:01:04 XMS-GMI-01 kernel: mpt2sas0: host_add: handle(0x0001), sas_addr(0x500605b0053a9ad0), phys( (Drive related)
Jul  4 11:01:04 XMS-GMI-01 kernel: mpt2sas0: port enable: SUCCESS (Drive related)
Jul  4 11:01:04 XMS-GMI-01 kernel: mpt2sas1: 64 BIT PCI BUS DMA ADDRESSING SUPPORTED, total mem (3920328 kB) (Drive related)
Jul  4 11:01:04 XMS-GMI-01 kernel: mpt2sas1: MSI-X vectors supported: 1, no of cores: 2, max_msix_vectors: 8 (Drive related)
Jul  4 11:01:04 XMS-GMI-01 kernel: mpt2sas1-msix0: PCI-MSI-X enabled: IRQ 30 (Drive related)
Jul  4 11:01:04 XMS-GMI-01 kernel: mpt2sas1: iomem(0x00000000f0ac0000), mapped(0xffffc90010940000), size(16384) (Drive related)
Jul  4 11:01:04 XMS-GMI-01 kernel: mpt2sas1: ioport(0x000000000000de00), size(256) (Drive related)
Jul  4 11:01:04 XMS-GMI-01 kernel: mpt2sas1: sending message unit reset !! (Drive related)
Jul  4 11:01:04 XMS-GMI-01 kernel: mpt2sas1: message unit reset: SUCCESS (Drive related)
Jul  4 11:01:04 XMS-GMI-01 kernel: mpt2sas1: Allocated physical memory: size(7445 kB) (Drive related)
Jul  4 11:01:04 XMS-GMI-01 kernel: mpt2sas1: Current Controller Queue Depth(3307), Max Controller Queue Depth(3432) (System)
Jul  4 11:01:04 XMS-GMI-01 kernel: mpt2sas1: Scatter Gather Elements per IO(128) (Drive related)
Jul  4 11:01:04 XMS-GMI-01 kernel: mpt2sas1: LSISAS2008: FWVersion(20.00.04.00), ChipRevision(0x03), BiosVersion(07.39.00.00) (Drive related)
Jul  4 11:01:04 XMS-GMI-01 kernel: mpt2sas1: Protocol=(Initiator,Target), Capabilities=(TLR,EEDP,Snapshot Buffer,Diag Trace Buffer,Task Set Full,NCQ) (Drive related)
Jul  4 11:01:04 XMS-GMI-01 kernel: scsi host16: Fusion MPT SAS Host (Drive related)
Jul  4 11:01:04 XMS-GMI-01 kernel: mpt2sas1: sending port enable !! (Drive related)
Jul  4 11:01:04 XMS-GMI-01 kernel: mpt2sas1: host_add: handle(0x0001), sas_addr(0x500605b0053c1650), phys( (Drive related)
Jul  4 11:01:04 XMS-GMI-01 kernel: mpt2sas1: port enable: SUCCESS (Drive related)

Can anybody check how the card is initialized in v5? Maybe it is different.

I am not experienced with Linux, but checking "modinfo mpt2sas" shows the possible parameters; maybe extended logging can reveal what happens?

On the LSI site, in the readme of the driver package, there is an explanation of all the option settings, including logging_level.
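
If someone wants to try it, logging_level can be set when the driver loads; a line like the one below in a modprobe config should do it (the bitmask here is only an illustration — the LSI readme explains the individual bits — and I believe it can also be changed at runtime through /sys/module/mpt2sas/parameters/logging_level):

options mpt2sas logging_level=0x310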

 


I just started a parity check on my Unraid server and I'm seeing a speed of around 30MB/sec (the parity disk is 6TB and the data disks are a mix of 4TB ReiserFS and 6TB XFS; SAS2LP cards on x16 PCIe 2.0).

 

I remember that before upgrading to v6, and with all ReiserFS drives (also 6TB and 4TB), the parity check speed was much faster (average at the end around 100MB/sec; SAS2LP cards on x16 PCIe 2.0).

 

Is there any news on this?

 

Cheers

Max


There’s an issue between Unraid 6 (or the current Linux driver) and the SAS2LP that affects at least some users.

 

I tested with Reiserfs and XFS and there was no difference.

 

You can try running the tunables tester; some people get improved speeds. In my case it didn’t make a significant difference, and I expect you won’t get near the speeds you got with v5.

 


I can only assume my system has fallen victim to the same issue here, but I'm not exactly sure.  The only place I really noticed it was the parity check, because I haven't had to do a rebuild or sync yet.  I was on unRAID 5.0 for some time and was using a slow Sempron processor.  That still resulted in parity checks in the 100+ MB/sec range.  After five years I started having issues with the server, and since I couldn't narrow down exactly which piece of hardware was causing the intermittent freezes, I simply elected to replace the motherboard, CPU, and RAM.

 

I wanted to see if I could stay within a rather tiny budget, since I mainly use unRAID as a NAS, so I bought a bundle on sale at Fry's: an AMD A8-7650K CPU and an MSI A78M-E35 motherboard for $89.  I added 4 gigs of Corsair RAM for $35.  I figured at this budget, if nothing worked I could simply scrap it and get something else.  What attracted me to this bundle was the fact that the CPU still provided some headroom for Plex transcoding.  I think the Passmark score was right around 5000.  Naturally, doing all this was a perfect time to upgrade from unRAID v5 to v6.

 

Anyway, because of the intermittent freezes in the previous system, the first thing v6 wanted to do was a parity check, which I did.  Initially it started around 100 MB/sec, but then quickly dropped to around 35 MB/sec and stayed there for almost the entire duration of the check.  Again, this is 1/3 the speed I was getting with v5 on the antiquated system from before.

 

I think (think being the operative word here) everything else is about where I would expect it to be.  Reads from the array are around 100 MB/sec.  Writes start pretty fast at 100 MB/sec, but quickly drop down and plateau at about 45 MB/sec.  I can't say for sure, but this seems normal for the system I have and some of the 5400 RPM drives in the array.  In the previous system I think I was getting closer to 25-30 MB/sec, so 45 MB/sec is an improvement.

 

Some of the dialogue in this thread is over my head, but the basic gist of what I'm taking home seems to be that some of the AMD processors are falling victim to very poor parity check speeds, and I guess we don't have much choice other than to just sort of live with them... right?


Some of the dialogue in this thread is over my head, but the basic gist of what I'm taking home seems to be that some of the AMD processors are falling victim to very poor parity check speeds, and I guess we don't have much choice other than to just sort of live with them... right?

 

There's one case in this thread that seems to be AMD board or CPU related; the others appear to be SATA controller related. Check your CPU usage during a parity check; if it’s very high, maybe you have the same AMD issue.

 

Also, please post your complete hardware; it may help diagnose your issue.

 


I have the same problem. On v5 during a parity check I had a speed of about 80 MB/sec, and on v6.1 I get about 35 MB/sec. My setup: Gigabyte M61P-S3, Athlon 64 3000+ @ 1800 MHz, and 1 GB of RAM. I use the onboard SATA ports. During the parity check the processor usage is 100% the whole time.

