2.5 MB/s on drive rebuild


I originally posted this in the general forum, but because I run Unraid under ESXi, Johnnie Black suggested I post it here instead.

 

I have no idea what happened to my Unraid server. I simply wanted to upgrade a drive from 2.5TB to 4TB, and the rebuild speed has dropped to 2.5 MB/s. I have been running Unraid for many years with relatively slow speeds, around 40 MB/s, but nothing like this.

 

You can see the speed here: https://prnt.sc/o7qwld 

 

I've attached my diagnostics. 

 

Any help would be appreciated!

babel-diagnostics-20190628-0508.zip

1 hour ago, derekos said:

Btw - I see one of my drives is reading at 95 MB/s. So perhaps it is this new drive which is stuck at a 2.5 MB/s write?

 

Your rebuild speed is limited by your slowest disk(s): a rebuild reads every array disk in parallel to reconstruct the data, so the whole operation can only go as fast as the slowest member. One disk reading at 95 MB/s means nothing.

You have some disks with non-zero read error rate:

  • disk 5 (WCC4N08)
  • disk 18 (WCC1T04)
  • disk 21 (WCC1T14)
  • disk 8 (WMC1T03)

Disk 18, in particular, has 1 reallocated sector and 96 pending sectors. That sounds like a disk on its last legs.
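You can double-check those numbers yourself from the Unraid console with smartctl. A rough sketch (the /dev/sdX below is a placeholder for disk 18's device; if the disk sits behind a SAS HBA you may also need a -d option for your controller):

# hypothetical device name - check the Unraid GUI for disk 18's sdX assignment
smartctl -A /dev/sdX | grep -E 'Reallocated_Sector_Ct|Current_Pending_Sector'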

You might want to double-check with jonnie.black. He's well-versed in disk errors, recovery, etc.

At least it's a good thing you have dual parity.

 

 

 

Edited by testdasi

Okay, so either the new drive or one of the existing drives may be limited to 2.5 MB/s.

 

It seems like the thing to do is to test each of these drives individually to locate whichever ones have issues.
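A quick way to spot-check a single drive's raw read speed, assuming hdparm is available on the box (/dev/sdX is a placeholder; the dd line just reads the first gigabyte straight off the device and reports throughput):

hdparm -t /dev/sdX                                          # buffered sequential read timing
dd if=/dev/sdX of=/dev/null bs=1M count=1024 iflag=direct   # ~1 GB direct read, prints MB/s when done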

 

Meanwhile, I am trying to see if I can pass the new drive through to a different Linux VM.  

 


At Johnnie's suggestion in another thread, I decided to check the disk speed of all the drives first. I used the diskspeed.sh script from here on the forum. The output is pasted below.

 

I am guessing that I have a controller that has either gone bad or has bad or loose cables. Eight drives, all in sequence below, are very slow. How can I determine which controller runs those drives?

 

--- paste below --

 

diskspeed.sh for UNRAID, version 2.6.5
By John Bartlett. Support board @ limetech: http://goo.gl/ysJeYV
/dev/sdc (Disk 15): 140 MB/sec avg
/dev/sdd (Disk 16): 106 MB/sec avg
/dev/sde (Disk 11): 118 MB/sec avg
/dev/sdf (Disk 8): 117 MB/sec avg
/dev/sdg (Disk 20): 108 MB/sec avg
/dev/sdh (Disk 21): 110 MB/sec avg
/dev/sdi (Disk 18): 106 MB/sec avg
/dev/sdj (Disk 9): 113 MB/sec avg
/dev/sdk (Disk 14): 2 MB/sec avg
/dev/sdl (Disk 17): 10 MB/sec avg
/dev/sdm (Disk 13): 10 MB/sec avg
/dev/sdn (Disk 2): 10 MB/sec avg
/dev/sdo (Disk 7): 10 MB/sec avg
/dev/sdp (Disk 3): 10 MB/sec avg
/dev/sdq (Disk 19): 10 MB/sec avg
/dev/sdr (Disk 1): 10 MB/sec avg
/dev/sds (Disk 12): 118 MB/sec avg
/dev/sdt (Parity): 124 MB/sec avg
/dev/sdu (Disk 5): 115 MB/sec avg
/dev/sdv (Disk 23): 124 MB/sec avg
/dev/sdw (Parity 2): 119 MB/sec avg
/dev/sdx (Disk 4): 124 MB/sec avg
/dev/sdy (Disk 10): 118 MB/sec avg
/dev/sdz (Disk 6): 113 MB/sec avg
 


Okay, a little more progress: the slow drives are all on the same controller, 4:0:x, which is a Marvell controller. I have two of them.

 

I used this command to match the drives to their SCSI host. Then I looked in ESXi to see which PCI adapter mapped to 4:0:x:x.

 

ls -ld /sys/block/sd*/device

 

<snip>

lrwxrwxrwx 1 root root 0 Jun 28 02:11 /sys/block/sdk/device -> ../../../4:0:0:0/
lrwxrwxrwx 1 root root 0 Jun 28 02:11 /sys/block/sdl/device -> ../../../4:0:1:0/
lrwxrwxrwx 1 root root 0 Jun 28 02:11 /sys/block/sdm/device -> ../../../4:0:2:0/
lrwxrwxrwx 1 root root 0 Jun 28 02:11 /sys/block/sdn/device -> ../../../4:0:3:0/
lrwxrwxrwx 1 root root 0 Jun 28 02:11 /sys/block/sdo/device -> ../../../4:0:4:0/
lrwxrwxrwx 1 root root 0 Jun 28 02:11 /sys/block/sdp/device -> ../../../4:0:5:0/
lrwxrwxrwx 1 root root 0 Jun 28 02:11 /sys/block/sdq/device -> ../../../4:0:6:0/
lrwxrwxrwx 1 root root 0 Jun 28 02:11 /sys/block/sdr/device -> ../../../4:0:7:0/
<snip>
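In case it helps someone else, the SCSI host number (the leading 4 above) can be tied back to a PCI address and then to the physical card, roughly like this (sysfs paths vary a bit between kernels, and 0000:xx:xx.x is a placeholder for the address you find):

readlink -f /sys/block/sdk/device        # full sysfs path - the 0000:xx:xx.x segment is the controller's PCI address
readlink -f /sys/class/scsi_host/host4   # same idea, starting from the host number instead of the disk
lspci -s 0000:xx:xx.x                    # substitute the address from above to see which card owns it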

 


I have an unused LSI card that I am going to put in the case tomorrow; I need longer SAS cables.

 

But after re-seating the card and cables, things are much better. However, disk 14 is stuck at 9 MB/s.

 

I did find this thread after jonatham's advice, which is more evidence for replacing the Marvell cards.

 

 

 

---- speed results ---  

 

/dev/sdc (Disk 15): 138 MB/sec avg
/dev/sdd (Disk 8): 108 MB/sec avg
/dev/sde (Disk 16): 108 MB/sec avg
/dev/sdf (Disk 11): 118 MB/sec avg
/dev/sdg (Disk 20): 111 MB/sec avg
/dev/sdh (Disk 21): 112 MB/sec avg
/dev/sdi (Disk 9): 105 MB/sec avg
/dev/sdj (Disk 18): 109 MB/sec avg
/dev/sdk (Disk 14): 9 MB/sec avg
/dev/sdl (Disk 17): 142 MB/sec avg
/dev/sdm (Disk 13): 134 MB/sec avg
/dev/sdn (Disk 2): 112 MB/sec avg
/dev/sdo (Disk 7): 110 MB/sec avg
/dev/sdp (Disk 3): 116 MB/sec avg
/dev/sdq (Disk 19): 116 MB/sec avg
/dev/sdr (Disk 1): 139 MB/sec avg
/dev/sds (Disk 12): 116 MB/sec avg
/dev/sdt (Parity): 120 MB/sec avg
/dev/sdu (Disk 5): 114 MB/sec avg
/dev/sdv (Disk 23): 122 MB/sec avg
/dev/sdw (Parity 2): 117 MB/sec avg
/dev/sdx (Disk 4): 121 MB/sec avg
/dev/sdy (Disk 10): 112 MB/sec avg
/dev/sdz (Disk 6): 109 MB/sec avg
 


Okay, the new SAS cables arrived, and I had an LSI card available. I removed the Marvell card that was giving me trouble and replaced it with the LSI. Problem solved! And yes, I am replacing drive 18 as well.

 

Thank you, everyone, for your help. The Unraid community is by far the best.

 

Some details for anyone else who needs to do this in the future:

 

ESXi booted up and saw the card; it appeared under Storage Adapters. But I pass the Marvell and LSI cards through as RDMs, so I went into Configuration > Hardware > Advanced Settings and, using Edit, added the LSI controller as a DirectPath I/O device. I had to reboot ESXi after that.
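If you would rather do the lookup from the ESXi shell than the vSphere client, something along these lines shows the card and its PCI address (just a sketch; the exact output and fields differ between ESXi versions):

lspci | grep -i lsi          # quick way to spot the LSI HBA and its PCI address
esxcli hardware pci list     # full PCI device listing, handy for confirming the passthrough candidate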

 

Next, I had to edit the VM: I removed the PCI device that pointed to the Marvell controller I had replaced and added a new PCI device that pointed to the LSI controller.

 

Booted unRaid and benchmarked disks.

 

diskspeed.sh for UNRAID, version 2.6.5
By John Bartlett. Support board @ limetech: http://goo.gl/ysJeYV
/dev/sdc (Disk 14): 120 MB/sec avg
/dev/sdd (Disk 7): 114 MB/sec avg
/dev/sde (Disk 3): 117 MB/sec avg
/dev/sdf (Disk 19): 117 MB/sec avg
/dev/sdg (Disk 1): 141 MB/sec avg
/dev/sdh (Disk 17): 145 MB/sec avg
/dev/sdi (Disk 13): 140 MB/sec avg
/dev/sdj (Disk 2): 114 MB/sec avg
/dev/sdk (Disk 11): 118 MB/sec avg
/dev/sdl (Disk 15): 138 MB/sec avg
/dev/sdm (Disk 8): 112 MB/sec avg
/dev/sdn (Disk 16): 108 MB/sec avg
/dev/sdo (Disk 18): 109 MB/sec avg
/dev/sdp (Disk 20): 111 MB/sec avg
/dev/sdq (Disk 21): 112 MB/sec avg
/dev/sdr (Disk 9): 105 MB/sec avg
/dev/sds (Disk 12): 112 MB/sec avg
/dev/sdt (Parity): 111 MB/sec avg
/dev/sdu (Disk 5): 112 MB/sec avg
/dev/sdv (Disk 23): 122 MB/sec avg
/dev/sdw (Parity 2): 116 MB/sec avg
/dev/sdx (Disk 4): 122 MB/sec avg
/dev/sdy (Disk 10): 107 MB/sec avg
/dev/sdz (Disk 6): 108 MB/sec avg
 

 

 

 

Edited by derekos
replacing drive 18