Extremely slow Drive Rebuild after M1015 Install (<5MB/s)



Guys, I finally installed a new M1015 after having issues with the first one I purchased, and everything boots fine. I'm trying to rebuild a red-balled disk, but the rebuild is going very slowly.  I couldn't rebuild on my old card because the drives would randomly drop from the array.

 

Here's the syslog; I have no idea how to troubleshoot this.

 

/usr/bin/tail -f /var/log/syslog
Jan 24 11:49:04 Tower kernel: cdb[0]=0x28: 28 00 00 24 6f b7 00 04 00 00
Jan 24 11:49:04 Tower kernel: scsi target1:0:0: handle(0x000b), sas_address(0x4433221103000000), phy(3)
Jan 24 11:49:04 Tower kernel: scsi target1:0:0: enclosure_logical_id(0x500605b0026917c0), slot(0)
Jan 24 11:49:04 Tower kernel: sd 1:0:0:0: task abort: SUCCESS scmd(f2d613c0)
Jan 24 11:49:39 Tower kernel: sd 1:0:0:0: attempting task abort! scmd(f2dd2300)
Jan 24 11:49:39 Tower kernel: sd 1:0:0:0: [sdh] CDB:
Jan 24 11:49:39 Tower kernel: cdb[0]=0x28: 28 00 00 24 f9 47 00 04 00 00
Jan 24 11:49:39 Tower kernel: scsi target1:0:0: handle(0x000b), sas_address(0x4433221103000000), phy(3)
Jan 24 11:49:39 Tower kernel: scsi target1:0:0: enclosure_logical_id(0x500605b0026917c0), slot(0)
Jan 24 11:49:39 Tower kernel: sd 1:0:0:0: task abort: SUCCESS scmd(f2dd2300)
Jan 24 11:52:25 Tower kernel: sd 1:0:0:0: attempting task abort! scmd(f2dd2c00)
Jan 24 11:52:25 Tower kernel: sd 1:0:0:0: [sdh] CDB:
Jan 24 11:52:25 Tower kernel: cdb[0]=0x28: 28 00 00 34 1a df 00 04 00 00
Jan 24 11:52:25 Tower kernel: scsi target1:0:0: handle(0x000b), sas_address(0x4433221103000000), phy(3)
Jan 24 11:52:25 Tower kernel: scsi target1:0:0: enclosure_logical_id(0x500605b0026917c0), slot(0)
Jan 24 11:52:25 Tower kernel: sd 1:0:0:0: task abort: SUCCESS scmd(f2dd2c00)
Jan 24 11:53:20 Tower kernel: sd 1:0:0:0: attempting task abort! scmd(f7586180)
Jan 24 11:53:20 Tower kernel: sd 1:0:0:0: [sdh] CDB:
Jan 24 11:53:20 Tower kernel: cdb[0]=0x28: 28 00 00 36 bc 3f 00 04 00 00
Jan 24 11:53:20 Tower kernel: scsi target1:0:0: handle(0x000b), sas_address(0x4433221103000000), phy(3)
Jan 24 11:53:20 Tower kernel: scsi target1:0:0: enclosure_logical_id(0x500605b0026917c0), slot(0)
Jan 24 11:53:20 Tower kernel: sd 1:0:0:0: task abort: SUCCESS scmd(f7586180)
Jan 24 11:54:58 Tower kernel: sd 1:0:0:0: attempting task abort! scmd(f75863c0)
Jan 24 11:54:58 Tower kernel: sd 1:0:0:0: [sdh] CDB:
Jan 24 11:54:58 Tower kernel: cdb[0]=0x28: 28 00 00 3e 2c e7 00 04 00 00
Jan 24 11:54:58 Tower kernel: scsi target1:0:0: handle(0x000b), sas_address(0x4433221103000000), phy(3)
Jan 24 11:54:58 Tower kernel: scsi target1:0:0: enclosure_logical_id(0x500605b0026917c0), slot(0)
Jan 24 11:54:58 Tower kernel: sd 1:0:0:0: task abort: SUCCESS scmd(f75863c0)
Jan 24 11:55:51 Tower emhttp: shcmd (47): /usr/sbin/hdparm -y /dev/sdf &> /dev/null
Jan 24 11:56:08 Tower kernel: sd 1:0:0:0: attempting task abort! scmd(f7586480)
Jan 24 11:56:08 Tower kernel: sd 1:0:0:0: [sdh] CDB:
Jan 24 11:56:08 Tower kernel: cdb[0]=0x28: 28 00 00 42 78 e7 00 04 00 00
Jan 24 11:56:08 Tower kernel: scsi target1:0:0: handle(0x000b), sas_address(0x4433221103000000), phy(3)
Jan 24 11:56:08 Tower kernel: scsi target1:0:0: enclosure_logical_id(0x500605b0026917c0), slot(0)
Jan 24 11:56:08 Tower kernel: sd 1:0:0:0: task abort: SUCCESS scmd(f7586480)
Jan 24 11:57:26 Tower kernel: sd 1:0:0:0: attempting task abort! scmd(f2dd2840)
Jan 24 11:57:26 Tower kernel: sd 1:0:0:0: [sdh] CDB:
Jan 24 11:57:26 Tower kernel: cdb[0]=0x28: 28 00 00 47 a0 e7 00 04 00 00
Jan 24 11:57:26 Tower kernel: scsi target1:0:0: handle(0x000b), sas_address(0x4433221103000000), phy(3)
Jan 24 11:57:26 Tower kernel: scsi target1:0:0: enclosure_logical_id(0x500605b0026917c0), slot(0)
Jan 24 11:57:26 Tower kernel: sd 1:0:0:0: task abort: SUCCESS scmd(f2dd2840)
Jan 24 11:58:11 Tower kernel: sd 1:0:0:0: attempting task abort! scmd(f74decc0)
Jan 24 11:58:11 Tower kernel: sd 1:0:0:0: [sdh] CDB:
Jan 24 11:58:11 Tower kernel: cdb[0]=0x28: 28 00 00 49 48 e7 00 04 00 00
Jan 24 11:58:11 Tower kernel: scsi target1:0:0: handle(0x000b), sas_address(0x4433221103000000), phy(3)
Jan 24 11:58:11 Tower kernel: scsi target1:0:0: enclosure_logical_id(0x500605b0026917c0), slot(0)
Jan 24 11:58:11 Tower kernel: sd 1:0:0:0: task abort: SUCCESS scmd(f74decc0)
Jan 24 11:58:42 Tower kernel: sd 1:0:0:0: attempting task abort! scmd(f2dd20c0)
Jan 24 11:58:42 Tower kernel: sd 1:0:0:0: [sdh] CDB:
Jan 24 11:58:42 Tower kernel: cdb[0]=0x28: 28 00 00 49 58 e7 00 04 00 00
Jan 24 11:58:42 Tower kernel: scsi target1:0:0: handle(0x000b), sas_address(0x4433221103000000), phy(3)
Jan 24 11:58:42 Tower kernel: scsi target1:0:0: enclosure_logical_id(0x500605b0026917c0), slot(0)
Jan 24 11:58:42 Tower kernel: sd 1:0:0:0: task abort: SUCCESS scmd(f2dd20c0)
Jan 24 12:00:21 Tower kernel: sd 1:0:0:0: attempting task abort! scmd(f2dd2e40)
Jan 24 12:00:21 Tower kernel: sd 1:0:0:0: [sdh] CDB:
Jan 24 12:00:21 Tower kernel: cdb[0]=0x28: 28 00 00 51 14 e7 00 04 00 00
Jan 24 12:00:21 Tower kernel: scsi target1:0:0: handle(0x000b), sas_address(0x4433221103000000), phy(3)
Jan 24 12:00:21 Tower kernel: scsi target1:0:0: enclosure_logical_id(0x500605b0026917c0), slot(0)
Jan 24 12:00:21 Tower kernel: sd 1:0:0:0: task abort: SUCCESS scmd(f2dd2e40)
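For anyone trying to make sense of a log like this: a couple of quick commands can summarize how often the aborts are happening and confirm which physical drive the SCSI address (sd 1:0:0:0 / sdh here) maps to. Nothing unRAID-specific, just standard tools:

    # Count how many task aborts have been logged, and show the most recent ones
    grep -c "attempting task abort" /var/log/syslog
    grep "attempting task abort" /var/log/syslog | tail -5

    # Confirm which SCSI target /dev/sdh sits on (should match sd 1:0:0:0 above)
    ls -l /sys/block/sdh/device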

 
Link to comment

No hotswap bays. 

 

Would cables suddenly go bad if I've been using them for 4 years without touching them?  The problem is very frustrating, considering how long I waited to get this adapter card working, lol.  I'll just buy a new set of SATA cables and see if that works.
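One cheap check before buying cables: the drive's SMART interface CRC counter (attribute 199). A nonzero, growing UDMA_CRC_Error_Count usually implicates the cable or connector rather than the drive itself. For example, on the drive flagged in the syslog above:

    # Interface CRC errors (attribute 199) usually point at the cable/connector
    smartctl -A /dev/sdh | grep -i crc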

 

I upgraded the power supply fairly recently because I had an issue with drives being dropped.  Once I replaced it, everything was fine again until this crop of issues.

 

Syslog attached.  Thanks for all the help (as usual).

syslog01-24.zip

Link to comment

Without hotswap bays, the odds of inadvertently touching the cables (and knocking them around) are pretty good when you replace the power supply feeds to the drives.

Link to comment

Yes, you're right, but I recabled everything after I replaced the power supply, using the same connectors.

 

I might have a few lying around; I'll try replacing the ones that look stressed (some of the 90-degree cables).

Link to comment

Ok, I'll try swapping cables around.

 

unRAID remembers the drive position via serial number, right?  So it won't matter if I swap actual cable positions, correct?  I remember reading somewhere that this is how it works, but I'd like clarification before I remove all the SATA cables.
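For reference, the kernel keeps stable, serial-based names for every disk under /dev/disk/by-id/, so it's easy to note which serial is on which sdX before and after pulling cables:

    # Disk names keyed to model and serial number; these survive cable/port swaps
    ls -l /dev/disk/by-id/ | grep -v part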

 

Thanks for the help

Link to comment

That is correct on 5.x+.  If you swap the breakout cable on the Samsung with another drive, we'll see whether the problem stays with the drive or moves to another one.
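A simple way to quantify the slowdown after the swap is a raw sequential read test on each drive, comparing the suspect against a known-good one (the device names below are just examples; check your own assignments first):

    # Raw read speed straight off the device; run while the array is idle
    hdparm -t /dev/sdh
    hdparm -t /dev/sdb    # a known-good drive for comparison (example device)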

Link to comment
  • 2 years later...

This is an old topic, but the issue was the M1015 card I was using: it would begin to slow down after more than 2 drives were attached to it.  The issue occurred on my old build and on my new one that I set up a year ago (I added the old M1015 two months ago).  Not sure if it was a firmware issue or not, but I'm using a different card now (as of this week, years later) and no longer suffer from the problem.

Just FYI to anyone who has the same issue.
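If anyone hitting this wants to rule firmware in or out before swapping hardware, the mpt2sas driver logs the controller and its firmware version at boot, so you can check what your card is running without rebooting into the BIOS:

    # The driver prints the controller's FWVersion(...) line at boot
    dmesg | grep -i mpt2sas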

Link to comment
