[Partially SOLVED] Is there an effort to solve the SAS2LP issue? (Tom Question)


TODDLT


Finally got a couple of new H310's. Now I have to review the dozen or so articles on flashing these guys to IT mode. I'd love to do a nice parity check tonight and see normal speeds and no red balls. :)

Well, I guess I'm not a patient man. I flashed both cards successfully to IT mode, removed my SAS2 cards, and installed the H310's. The array came up fine as normal, all drives green, looks good. I went straight for a parity check and it immediately started at 102MB/sec. That's 40MB/sec more than my SAS2 cards. So in my system, with the same version of unRAID, I'm seeing a big improvement in parity-check speed. Because of my impatience I never ran diagnostics before I took the SAS2 cards out, but after the parity check I can put them back in and run diagnostics if that helps. Cross-flashing the cards was much easier than I thought, but then again I'm from the old DOS days and very comfortable around a command prompt.
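
(For anyone following along, the crossflash sequence the community guides describe is roughly the following, run from a DOS boot stick. The tool and image names here — megarec, sas2flsh, 6GBPSAS.fw, 2118it.bin — are the ones those guides assume, so treat this as a sketch rather than a definitive procedure, and note your card's SAS address before wiping anything:)

REM note the adapter's SAS address first
sas2flsh -listall
REM blank the Dell SBR on adapter 0 and erase the existing flash (reboot after each megarec step)
megarec -writesbr 0 sbrempty.bin
megarec -cleanflash 0
REM flash the intermediate Dell IT firmware, then the LSI 9211-8i IT firmware
sas2flsh -o -f 6GBPSAS.fw
sas2flsh -o -f 2118it.bin
REM restore the SAS address you noted earlier (placeholder digits shown)
sas2flsh -o -sasadd 500605bxxxxxxxxx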

Link to comment

I went straight for a parity check and it immediately started at 102MB/sec. That's 40MB/sec more than my SAS2 cards. [...]

 

While you saw a huge improvement, 102MB/s still seems low to me. I suspect you're bottlenecked by your controller card/PCIe slot.

 

My array of 2x 3TB WD Green and 2x 3TB WD Red starts at 136MB/s.

Link to comment

While you saw a huge improvement, 102MB/s still seems low to me. I suspect you're bottlenecked by your controller card/PCIe slot. [...]

 

Agree, it's a good speed, but you probably have a bottleneck somewhere, or you have some older disks with smaller platters.

 

Have you ever tried diskspeed?
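
(If you don't have the diskspeed script handy, a rough manual equivalent is a raw sequential read with dd; /dev/sdX is a placeholder for the disk under test, and iflag=direct bypasses the page cache:)

# read 4GiB from the start (outer tracks) of the disk; GNU dd prints MB/s when it finishes
dd if=/dev/sdX of=/dev/null bs=1M count=4096 iflag=direct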

Link to comment

Agree, it's a good speed, but you probably have a bottleneck somewhere... Have you ever tried diskspeed? [...]

 

One step at a time here. I'm up to 127MB/sec. Remember, I was at 60MB/sec with so-called "supported" controllers out of the box. Who's to say there isn't a native bug/issue somewhere in the OS? Something was causing the Supermicro controllers to respond with very low parity-check speeds, and who knows if my speeds now are related to whatever was causing that? Maybe I'll try the diskspeed test after the parity check. I have a high-performance system, so really the only things that should be holding back speed are the controller(s) or the OS.

 

My signature should show the correct hardware; I have to update it right now.

 

After flashing to IT firmware, the cards come up as:

Sep  9 18:19:48 SUN kernel: mpt2sas0: Dell 6Gbps SAS HBA: Vendor(0x1000), Device(0x0072), SSVID(0x1028), SSDID(0x1F1C)

Sep  9 18:19:48 SUN kernel: mpt2sas1: Dell 6Gbps SAS HBA: Vendor(0x1000), Device(0x0072), SSVID(0x1028), SSDID(0x1F1C)

 

 

Link to comment

70-80 MB/s parity check. 22 hours.

 

01:00.0 RAID bus controller: Marvell Technology Group Ltd. 88SE9485 SAS/SATA 6Gb/s controller (rev c3)

        Subsystem: Marvell Technology Group Ltd. Device 9480

        Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-

        Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-

        Latency: 0, Cache Line Size: 64 bytes

        Interrupt: pin A routed to IRQ 16

        Region 0: Memory at f0440000 (64-bit, non-prefetchable)

        Region 2: Memory at f0400000 (64-bit, non-prefetchable)

        Expansion ROM at f0460000 [disabled]

        Capabilities: [40] Power Management version 3

                Flags: PMEClk- DSI- D1+ D2- AuxCurrent=375mA PME(D0+,D1+,D2-,D3hot+,D3cold-)

                Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-

        Capabilities: [50] MSI: Enable- Count=1/1 Maskable- 64bit+

                Address: 0000000000000000  Data: 0000

        Capabilities: [70] Express (v2) Endpoint, MSI 00

                DevCap: MaxPayload 4096 bytes, PhantFunc 0, Latency L0s <1us, L1 <8us

                        ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset-

                DevCtl: Report errors: Correctable- Non-Fatal- Fatal- Unsupported-

                        RlxdOrd- ExtTag+ PhantFunc- AuxPwr- NoSnoop-

                        MaxPayload 128 bytes, MaxReadReq 512 bytes

                DevSta: CorrErr+ UncorrErr- FatalErr- UnsuppReq+ AuxPwr- TransPend-

                LnkCap: Port #0, Speed 5GT/s, Width x8, ASPM L0s L1, Latency L0 <512ns, L1 <64us

                        ClockPM- Surprise- LLActRep- BwNot-

                LnkCtl: ASPM Disabled; RCB 64 bytes Disabled- Retrain- CommClk+

                        ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-

                LnkSta: Speed 5GT/s, Width x8, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-

                DevCap2: Completion Timeout: Not Supported, TimeoutDis+, LTR-, OBFF Not Supported

                DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis-, LTR-, OBFF Disabled

                LnkCtl2: Target Link Speed: 5GT/s, EnterCompliance- SpeedDis-

                        Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-

                        Compliance De-emphasis: -6dB

                LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete-, EqualizationPhase1-

                        EqualizationPhase2-, EqualizationPhase3-, LinkEqualizationRequest-

        Capabilities: [100 v1] Advanced Error Reporting

                UESta:  DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-

                UEMsk:  DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-

                UESvrt: DLP+ SDES+ TLP- FCP+ CmpltTO- CmpltAbrt- UnxCmplt- RxOF+ MalfTLP+ ECRC- UnsupReq- ACSViol-

                CESta:  RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr+

                CEMsk:  RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr+

                AERCap: First Error Pointer: 00, GenCap+ CGenEn- ChkCap+ ChkEn-

        Capabilities: [140 v1] Virtual Channel

                Caps:  LPEVC=0 RefClk=100ns PATEntryBits=1

                Arb:    Fixed- WRR32- WRR64- WRR128-

                Ctrl:  ArbSelect=Fixed

                Status: InProgress-

                VC0:    Caps:  PATOffset=00 MaxTimeSlots=1 RejSnoopTrans-

                        Arb:    Fixed- WRR32- WRR64- WRR128- TWRR128- WRR256-

                        Ctrl:  Enable+ ID=0 ArbSelect=Fixed TC/VC=ff

                        Status: NegoPending- InProgress-

        Kernel driver in use: mvsas

        Kernel modules: mvsas
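
(For what it's worth, the LnkCap/LnkSta lines above show the card negotiated the full 5GT/s x8 link, so the slot itself isn't the limit. To pull just those lines on your own box, something like this works; 1b4b is the Marvell vendor ID, as in the dump command used later in this thread:)

lspci -vv -d 1b4b:* | grep -E 'LnkCap|LnkSta'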

 

Link to comment

 

One step at a time here. I'm up to 127MB/sec.

 

I see in your diskspeed results that you do have several 667GB/platter drives and at least one 500GB/platter drive; those disks will limit your starting speed to around 130MB/s. So if you're getting 127MB/s, that confirms the SAS2LP was the problem in your case, and the LSI is probably the best option at the moment for anyone in the market for a new 8-port controller.

 

You can maybe improve a little by running unraid-tunables-tester, but you'll only see big improvements if/when you replace your slower 2TB drives. In any case, I would be very happy with a speed close to or above 100MB/s; that usually means you can do a check overnight, or a little longer depending on the array size.

 

P.S.: The only results I find strange are both your SSDs; are they also on the H310?

Link to comment

 

The only results I find strange are both your SSDs; are they also on the H310? [...]

 

Yes, the SSDs are on the H310's. All my drives are. The diskspeed GRAPH results are correct (550 MB/sec), but the TEXT results display 119MB/sec.

 

Parity check complete. Not sure where that 1 error is coming from, but I was one of those users who had a constant 5 errors on every parity check I ran.

Last checked on Thu 10 Sep 2015 11:00:34 AM EDT (today), finding 1 error.

Duration: 10 hours, 34 minutes, 6 seconds. Average speed: 105.2 MB/sec

Link to comment

That's a respectable time for a 4TB parity check. My 3TB takes a hair over 8 hours to complete.

Maybe we should edit the wiki and note that there are odd issues with some SAS(2) 9480 controllers, since they're the path of least resistance to set up: no flashing needed, they work out of the box. As drives become larger and larger we will eventually do away with controller cards (just as the old Adaptec cards are dead) and use onboard controllers. But there will always be users who need extra controllers for full systems.

 

Link to comment

Maybe we should edit the wiki and note that there are odd issues with some SAS(2) 9480 controllers. [...]

 

 

I agree. I got the SAS2 because it was on the wiki and was plug and play.

Link to comment

... As drives become larger and larger we will eventually do away with controller cards ...

 

I very much doubt that. As drives get larger and larger, users find more and more things to fill up the space... and will STILL be building systems with more drives than their motherboards support. :) In fact, with dual parity, a cache drive, and perhaps an out-of-array app drive for Dockers/VMs, it's easy to use up 4 of the motherboard's SATA ports before you even start on the array drives.

 

Link to comment

I agree. I got the SAS2 because it was on the wiki and was plug and play. [...]

 

The SAS2LP note has been updated on the PCI SATA Controllers (Hardware Compatibility) list.

 

There is a very interesting thread linked there (found here) about this card in the v5 days. Out of the box, it was found to have issues with symptoms very like some of the issues known here, including parity errors and drives not visible. That was during v5, so no v6 long-parity-check issues yet. It was found that reflashing the firmware with the same or a newer version would correct the problems, so apparently the cards were not flashed correctly at the factory. A patch was included in v5.0.3 to handle both the older and newer PCI IDs for the card, but I saw NO confirming posts that it fixed the problems. (Side note to all users: if you ever report a problem and it's fixed later, PLEASE come back and report that it's fixed for you! It's really helpful to both the developers and other users.)

 

It might be worth testing by those having issues with the card: go to the Supermicro download site, grab the 1812 firmware, and flash your card. Instructions are in the thread I linked.

Link to comment

It might be worth testing by those having issues with the card: go to the Supermicro download site, grab the 1812 firmware, and flash your card. [...]

Hi Rob,

If you check the thread about my problems with the SAS2LP, you'll find that I reflashed the firmware on my card to both the older and the latest (1812) versions, to no avail...

I'm sorry.  It did seem to help everyone in the thread I linked.  I don't know what is different.

Link to comment

I'm sorry. It did seem to help everyone in the thread I linked. I don't know what is different. [...]

 

I know... I actually found that thread and many others when I searched for a solution, and only after that did I post about my problem.

Link to comment

Whenever I think about this issue I can't get past what I see when I boot, and in syslog:

 

Sep 12 05:06:35 BIGBOX kernel: mvsas 0000:01:00.0: mvsas: PCI-E x8, Bandwidth Usage: 5.0 Gbps

 

I've been told before that 5.0 Gbps is "per lane", so effective 40 Gbps.

 

But after 8b/10b encoding wouldn't this take the 5 Gbps down to 4 Gbps, which divided by 8 drives gives 0.5 Gbps each, or 62.5 MB/sec? That's pretty much the peak throughput I see whilst all 8 drives on the controller are parity checking...

Link to comment

But after 8b/10b encoding wouldn't this take the 5 Gbps down to 4 Gbps, which divided by 8 drives gives 0.5 Gbps each, or 62.5 MB/sec? [...]

 

I don't think so. Many, including myself, were getting 95-100MB/sec with these cards.

Link to comment

I said it years ago... nothing will be done! I have tried the old firmware and the newest firmware. My system with a SAS2LP only parity checks at 40MB/sec, yet parity rebuilds at 130MB/sec. My other system, with two SASLPs, checks and rebuilds at 75MB/sec. Both unRAID v5 and v6 give the same results. My system with the SAS2LP had the webGUI lockup problem, while the system with the two SASLPs had no webGUI lockup problem (6.1.2 fixed that). My system with the SAS2LP also had the IOMMU/HVM loss-of-shares/lockup problem, while my system with two SASLPs worked fine. Besides the SAS cards, the only differences between the two systems are the amount of memory and that the SAS2LP system has IPMI. Neither system has any plugins.

Link to comment

But after 8b/10b encoding wouldn't this take the 5 Gbps down to 4 Gbps, which divided by 8 drives gives 0.5 Gbps each, or 62.5 MB/sec? [...]

 

I believe you are mixing gigabits (Gb) with gigabytes (GB).

 

5 Gbps x 8 lanes = 40 Gbps raw; minus 20% for 8b/10b encoding = 32 Gbps = 4 GB/s; divided by 8 ports = 500 MB/s for each SATA port.

 

500MB/s is the max bandwidth; I don't expect to hit that, but I would expect something close to 400MB/s per port with fast SSDs.
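
(The same arithmetic as a quick sanity check, a minimal sketch assuming a PCIe 2.0 x8 link feeding 8 ports:)

# 5 Gbps/lane * 8 lanes = 40 Gbps raw; * 8/10 for 8b/10b coding = 32 Gbps usable;
# / 8 ports = 4 Gbps = 500 MB/s per SATA port
echo $(( 5 * 8 * 8 / 10 / 8 ))   # prints 4 (Gbps per port, i.e. 500 MB/s)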

 

You can see from my tests on page 9 of this thread that I got almost 200MB/s per port with unRAID v5, and can't get more because I'm using small and slow SSDs (by SSD standards).

 

Link to comment

I believe you are mixing gigabits (Gb) with gigabytes (GB). [...] 500MB/s is the max bandwidth; I don't expect to hit that, but I would expect something close to 400MB/s per port with fast SSDs.

 

I'm not mixing Gbit/GByte; I'm just not taking for granted that when it says 5.0 Gbps it really means 40 Gbps, which is also a factor of 8.

Link to comment

I very much doubt that. As drives get larger and larger, users find more and more things to fill up the space... [...]

 

Add-in controllers are disappearing. Sometimes you can't even find them in stock anymore, and when you buy them, you're getting firmware/BIOS from years ago. Companies just aren't putting much effort into add-in controllers. Years ago you could easily buy most of the ones in the wiki, but now you'll find more than half aren't sold anywhere, hence everyone buying used hardware from eBay. Solid-state storage will take over from mechanical, but that's a few years away. In my opinion, add-in controllers will just be a niche for users wanting to build their own NAS systems; no other real use for them with drives getting larger and larger.

Link to comment

It might be worth testing by those having issues with the card: go to the Supermicro download site, grab the 1812 firmware, and flash your card. [...]

 

I may have re-flashed mine when I got them, because I wanted both to have the same version. I can't remember now, but since I do have two SAS2 cards I can re-flash and try again.

Link to comment

I moved the mainboard with the SAS2LP into the backup server, replacing my old ASUS mobo. Attached to the controller are 2x WD20EARX, 1x HDS721010, 1x WD20EARS, and a Seagate Barracuda ST3000DM001 as the parity drive. The cache drive is a Crucial M4-CT128. Everything was up and running without any hassle after the rebuild... however, parity speed dropped to 60MB/sec (I always had >90MB/sec with the old board) and for the first time ever I saw parity errors on the backup server. Not sure if I reported this already:

root@Tower2:~# lspci -vv -d 1b4b:*
02:00.0 RAID bus controller: Marvell Technology Group Ltd. 88SE9485 SAS/SATA 6Gb/s controller (rev 03)
Subsystem: Marvell Technology Group Ltd. Device 9480
Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
Latency: 0, Cache Line Size: 64 bytes
Interrupt: pin A routed to IRQ 17
Region 0: Memory at dfa40000 (64-bit, non-prefetchable) [size=128K]
Region 2: Memory at dfa00000 (64-bit, non-prefetchable) [size=256K]
Expansion ROM at dfa60000 [disabled] [size=64K]
Capabilities: [40] Power Management version 3
	Flags: PMEClk- DSI- D1+ D2- AuxCurrent=375mA PME(D0+,D1+,D2-,D3hot+,D3cold-)
	Status: D0 NoSoftRst- PME-Enable- DSel=0 DScale=0 PME-
Capabilities: [50] MSI: Enable- Count=1/1 Maskable- 64bit+
	Address: 0000000000000000  Data: 0000
Capabilities: [70] Express (v2) Endpoint, MSI 00
	DevCap:	MaxPayload 4096 bytes, PhantFunc 0, Latency L0s <1us, L1 <8us
		ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset-
	DevCtl:	Report errors: Correctable- Non-Fatal- Fatal- Unsupported-
		RlxdOrd- ExtTag- PhantFunc- AuxPwr- NoSnoop-
		MaxPayload 128 bytes, MaxReadReq 512 bytes
	DevSta:	CorrErr+ UncorrErr- FatalErr- UnsuppReq+ AuxPwr- TransPend-
	LnkCap:	Port #0, Speed 5GT/s, Width x8, ASPM L0s L1, Latency L0 <512ns, L1 <64us
		ClockPM- Surprise- LLActRep- BwNot-
	LnkCtl:	ASPM L0s Enabled; RCB 64 bytes Disabled- Retrain- CommClk+
		ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
	LnkSta:	Speed 5GT/s, Width x8, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
	DevCap2: Completion Timeout: Not Supported, TimeoutDis+, LTR-, OBFF Not Supported
	DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis-, LTR-, OBFF Disabled
	LnkCtl2: Target Link Speed: 5GT/s, EnterCompliance- SpeedDis-
		 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
		 Compliance De-emphasis: -6dB
	LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete-, EqualizationPhase1-
		 EqualizationPhase2-, EqualizationPhase3-, LinkEqualizationRequest-
Capabilities: [100 v1] Advanced Error Reporting
	UESta:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
	UEMsk:	DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
	UESvrt:	DLP+ SDES+ TLP- FCP+ CmpltTO- CmpltAbrt- UnxCmplt- RxOF+ MalfTLP+ ECRC- UnsupReq- ACSViol-
	CESta:	RxErr+ BadTLP+ BadDLLP+ Rollover- Timeout+ NonFatalErr+
	CEMsk:	RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr+
	AERCap:	First Error Pointer: 00, GenCap+ CGenEn- ChkCap+ ChkEn-
Capabilities: [140 v1] Virtual Channel
	Caps:	LPEVC=0 RefClk=100ns PATEntryBits=1
	Arb:	Fixed- WRR32- WRR64- WRR128-
	Ctrl:	ArbSelect=Fixed
	Status:	InProgress-
	VC0:	Caps:	PATOffset=00 MaxTimeSlots=1 RejSnoopTrans-
		Arb:	Fixed- WRR32- WRR64- WRR128- TWRR128- WRR256-
		Ctrl:	Enable+ ID=0 ArbSelect=Fixed TC/VC=01
		Status:	NegoPending- InProgress-
Kernel driver in use: mvsas
Kernel modules: mvsas
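
(One thing that stands out versus the earlier dump in this thread: CESta here shows RxErr+, BadTLP+, BadDLLP+ and Timeout+, meaning the link has logged correctable PCIe errors, which would fit the speed drop and the new parity errors. To watch just the error-status lines, using the same Marvell vendor filter as above:)

lspci -vv -d 1b4b:* | grep -E 'UESta|CESta'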

Link to comment
