derekos Posted June 28, 2019 I originally posted this in the general forum, but because I run Unraid under ESXi, johnnie.black suggested I post it here instead. I have no idea what happened to my Unraid. I simply wanted to upgrade a drive from 2.5TB to 4TB, and my rebuild speed has dropped to 2.5 MB/s. I have been running Unraid for many years with relatively slow speeds, around 40 MB/s, but nothing like this. You can see the speed here: https://prnt.sc/o7qwld I've attached my diagnostics. Any help would be appreciated! babel-diagnostics-20190628-0508.zip
uldise Posted June 28, 2019 How is your drive attached to the host? And how does Unraid access it?
uldise Posted June 28, 2019 OK, just for a test: can you assign that drive to another Linux VM and test it there?
derekos Posted June 28, 2019 (edited) Which drive? The one that I am rebuilding? I will do it tomorrow and report back. It's late now. Edited June 28, 2019 by derekos
derekos Posted June 28, 2019 Btw, I see one of my drives is reading at 95 MB/s. So perhaps it is this new drive which is stuck at 2.5 MB/s on writes?
uldise Posted June 28, 2019 I don't know, and this is why I'm asking you to test. When I add more drives to a server, I always preclear them first; with that procedure you can evaluate drive performance too.
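For a quicker spot check than a full preclear, a short dd read gives a rough sequential-throughput number from the console. The sketch below reads a temp file so it is safe to run anywhere; against a real (idle) disk you would point if= at /dev/sdX instead (device name hypothetical), which is a read-only operation and safe for live data.

```shell
# Demo: create a 64 MiB temp file and time a sequential read of it with dd.
# Note: a just-written file reads from page cache, so this demo only shows
# the command shape; a real raw-device read gives the meaningful number.
dd if=/dev/zero of=/tmp/speedtest.bin bs=1M count=64 2>/dev/null
dd if=/tmp/speedtest.bin of=/dev/null bs=1M 2>&1 | tail -n 1
rm -f /tmp/speedtest.bin
```

dd prints the throughput on its final status line, which is the number to compare across drives.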
testdasi Posted June 28, 2019 (edited) 1 hour ago, derekos said: Btw, I see one of my drives is reading at 95 MB/s. So perhaps it is this new drive which is stuck at 2.5 MB/s on writes? Your rebuild speed is limited by your slowest disk(s), so one disk running at 95 MB/s means nothing. You have some disks with a non-zero read error rate:
disk 5 (WCC4N08)
disk 18 (WCC1T04)
disk 21 (WCC1T14)
disk 8 (WMC1T03)
Disk 18, in particular, has 1 reallocated sector and 96 pending. That sounds like a disk on its last legs. You might want to double-check with johnnie.black; he's well versed in disk errors, recovery, etc. At least it's a good thing you have dual parity. Edited June 28, 2019 by testdasi
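For reference, the counters testdasi cites come from SMART data, which smartctl can dump. A minimal sketch of pulling the two relevant attributes out of that output; the sample lines below are illustrative, not from this server, and in practice you would pipe from `smartctl -A /dev/sdX` (device name hypothetical):

```shell
# Two SMART attributes worth watching: Reallocated_Sector_Ct (id 5) and
# Current_Pending_Sector (id 197). Sample smartctl -A style lines:
smart_sample='  5 Reallocated_Sector_Ct   0x0033   200   200   140    Pre-fail  Always       -       1
197 Current_Pending_Sector  0x0032   200   200   000    Old_age   Always       -       96'
echo "$smart_sample" | grep -E 'Reallocated_Sector_Ct|Current_Pending_Sector'
```

Non-zero raw values in the last column of either attribute are the warning sign testdasi is pointing at.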
derekos Posted June 28, 2019 Okay, so either the new drive or one of the existing drives may be limited to 2.5 MB/s. It seems the thing to do is test each of these drives individually to locate the one or more with issues. Meanwhile, I am trying to see if I can pass the new drive through to a different Linux VM.
derekos Posted June 28, 2019 If Drive 18 is the drive that is slowing things down and I need to replace it in order to rebuild Drive 17, what is the best approach?
JorgeB Posted June 28, 2019 Since you have dual parity, you can cancel the current rebuild and replace both at the same time.
derekos Posted June 28, 2019 At Johnnie's suggestion in another thread, I decided to check the disk speed of all the drives first. I used the diskspeed.sh script from here on the forum; the output is pasted below. I am guessing that I have a controller that has either gone bad or has bad or loose cables: eight drives, all in sequence below, are very slow. How can I determine which controller runs those drives?

diskspeed.sh for UNRAID, version 2.6.5
By John Bartlett. Support board @ limetech: http://goo.gl/ysJeYV

/dev/sdc (Disk 15): 140 MB/sec avg
/dev/sdd (Disk 16): 106 MB/sec avg
/dev/sde (Disk 11): 118 MB/sec avg
/dev/sdf (Disk 8): 117 MB/sec avg
/dev/sdg (Disk 20): 108 MB/sec avg
/dev/sdh (Disk 21): 110 MB/sec avg
/dev/sdi (Disk 18): 106 MB/sec avg
/dev/sdj (Disk 9): 113 MB/sec avg
/dev/sdk (Disk 14): 2 MB/sec avg
/dev/sdl (Disk 17): 10 MB/sec avg
/dev/sdm (Disk 13): 10 MB/sec avg
/dev/sdn (Disk 2): 10 MB/sec avg
/dev/sdo (Disk 7): 10 MB/sec avg
/dev/sdp (Disk 3): 10 MB/sec avg
/dev/sdq (Disk 19): 10 MB/sec avg
/dev/sdr (Disk 1): 10 MB/sec avg
/dev/sds (Disk 12): 118 MB/sec avg
/dev/sdt (Parity): 124 MB/sec avg
/dev/sdu (Disk 5): 115 MB/sec avg
/dev/sdv (Disk 23): 124 MB/sec avg
/dev/sdw (Parity 2): 119 MB/sec avg
/dev/sdx (Disk 4): 124 MB/sec avg
/dev/sdy (Disk 10): 118 MB/sec avg
/dev/sdz (Disk 6): 113 MB/sec avg
derekos Posted June 28, 2019 Okay, a little more progress: the slow drives are all on the same controller, 4:0:x, which is a Marvell controller (I have two of them). I used this command to match the drives to their PCI device, then looked in ESXi to see which PCI adapter mapped to 4:0:x:x.

ls -ld /sys/block/sd*/device
<snip>
lrwxrwxrwx 1 root root 0 Jun 28 02:11 /sys/block/sdk/device -> ../../../4:0:0:0/
lrwxrwxrwx 1 root root 0 Jun 28 02:11 /sys/block/sdl/device -> ../../../4:0:1:0/
lrwxrwxrwx 1 root root 0 Jun 28 02:11 /sys/block/sdm/device -> ../../../4:0:2:0/
lrwxrwxrwx 1 root root 0 Jun 28 02:11 /sys/block/sdn/device -> ../../../4:0:3:0/
lrwxrwxrwx 1 root root 0 Jun 28 02:11 /sys/block/sdo/device -> ../../../4:0:4:0/
lrwxrwxrwx 1 root root 0 Jun 28 02:11 /sys/block/sdp/device -> ../../../4:0:5:0/
lrwxrwxrwx 1 root root 0 Jun 28 02:11 /sys/block/sdq/device -> ../../../4:0:6:0/
lrwxrwxrwx 1 root root 0 Jun 28 02:11 /sys/block/sdr/device -> ../../../4:0:7:0/
<snip>
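The eyeballing step above can be automated: the first field of the H:C:T:L address in those symlinks (here "4") is the SCSI host, i.e. the controller. A minimal sketch, assuming a Linux sysfs layout like the one shown, that groups every sd* device by that field so a run of slow disks behind one controller stands out at a glance:

```shell
# Group every sd* block device by SCSI host (first field of H:C:T:L).
for dev in /sys/block/sd*/device; do
    [ -e "$dev" ] || continue                 # skip if no sd* devices exist
    hctl=$(basename "$(readlink "$dev")")     # e.g. 4:0:0:0
    printf 'host %s  %s\n' "${hctl%%:*}" "${dev%/device}"
done | sort
```

Cross-referencing the host number against /sys/class/scsi_host or lspci then identifies the physical card.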
JonathanM Posted June 29, 2019 32 minutes ago, derekos said: which is a Marvell controller Those have been problematic for some people in the past few years. I recommend picking up one of the LSI-chipset HBAs to replace it and seeing if that changes the symptoms.
derekos Posted June 29, 2019 If I were to swap out the Marvell card, would Unraid just pick up where it left off? What should I watch out for? First I am going to open the box and reseat everything, and maybe swap the cables out for new ones.
uldise Posted June 29, 2019 2 hours ago, derekos said: What should I watch out for? There are so many topics about this question, so I will quote @johnnie.black: "Any LSI with a SAS2008/2308/3008 chipset in IT mode, e.g., 9201-8i, 9211-8i, 9207-8i, 9300-8i, 9400-8i, etc."
derekos Posted June 29, 2019 I have an unused LSI card that I am going to put in the case tomorrow; I need longer SAS cables. But after reseating the card and cables, things are much better. However, disk 14 is stuck at 9 MB/s. I did find this thread after JonathanM's advice, which is more evidence for replacing the Marvell cards.

Speed results:
/dev/sdc (Disk 15): 138 MB/sec avg
/dev/sdd (Disk 8): 108 MB/sec avg
/dev/sde (Disk 16): 108 MB/sec avg
/dev/sdf (Disk 11): 118 MB/sec avg
/dev/sdg (Disk 20): 111 MB/sec avg
/dev/sdh (Disk 21): 112 MB/sec avg
/dev/sdi (Disk 9): 105 MB/sec avg
/dev/sdj (Disk 18): 109 MB/sec avg
/dev/sdk (Disk 14): 9 MB/sec avg
/dev/sdl (Disk 17): 142 MB/sec avg
/dev/sdm (Disk 13): 134 MB/sec avg
/dev/sdn (Disk 2): 112 MB/sec avg
/dev/sdo (Disk 7): 110 MB/sec avg
/dev/sdp (Disk 3): 116 MB/sec avg
/dev/sdq (Disk 19): 116 MB/sec avg
/dev/sdr (Disk 1): 139 MB/sec avg
/dev/sds (Disk 12): 116 MB/sec avg
/dev/sdt (Parity): 120 MB/sec avg
/dev/sdu (Disk 5): 114 MB/sec avg
/dev/sdv (Disk 23): 122 MB/sec avg
/dev/sdw (Parity 2): 117 MB/sec avg
/dev/sdx (Disk 4): 121 MB/sec avg
/dev/sdy (Disk 10): 112 MB/sec avg
/dev/sdz (Disk 6): 109 MB/sec avg
uldise Posted June 29, 2019 You can set up this Docker container and test your disks again: https://forums.unraid.net/topic/70636-beta-6a-diskspeed-hard-drive-benchmarking-unraid-6/ It can test multiple disks at once, so you can see how your controller is performing.
testdasi Posted June 29, 2019 Is your disk 14 the old disk 18? I think you have one disk that is already on its last legs, regardless of controller.
BRiT Posted June 29, 2019 I was going to point out what testdasi already said: in Linux your drive letters can (and will) change between reboots, especially when changing controllers. You need to track drives by serial number to keep accurate notes.
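A minimal way to tie letters back to serials from the console; both commands are standard Linux tools, though what they print depends on the system:

```shell
# /dev/disk/by-id names embed the model and serial, and survive reboots
# and controller swaps, unlike the /dev/sdX letters. Filter out the
# per-partition entries to see one line per physical drive:
ls /dev/disk/by-id/ 2>/dev/null | grep -v -- '-part' || true

# lsblk can also print the serial column directly:
lsblk -o NAME,SERIAL 2>/dev/null || true
```

Noting the by-id names once makes it unambiguous whether "disk 14" after a reboot is the same physical drive as before.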
derekos Posted June 29, 2019 Using serial numbers, I confirmed Disk 14 and Disk 18 did not swap. Disk 14 is reporting no errors in its SMART report, unlike Disk 18. Hopefully the longer SAS cables will be here tomorrow afternoon and I can report back on the LSI card.
derekos Posted June 30, 2019 (edited) Okay, the new SAS cables arrived. I had an LSI card available, so I removed the Marvell card that was giving me trouble and replaced it with the LSI. Problem solved! And yes, I replaced drive 18 as well. Thank you everyone for your help; the Unraid community is by far the best.

Some details for anyone else who needs to do this in the future: ESXi booted up, saw the new card, and listed it under Storage Adapters. But I pass the Marvell and LSI cards through as RDMs, so I went into Configuration > Hardware > Advanced Settings and, using Edit, added the LSI controller for DirectPath I/O. I had to reboot ESXi after that. Next, I had to edit the VM: I removed the PCI device that pointed to the Marvell controller I had replaced and added a new PCI device pointing to the LSI controller. Then I booted Unraid and benchmarked the disks.

diskspeed.sh for UNRAID, version 2.6.5
By John Bartlett. Support board @ limetech: http://goo.gl/ysJeYV

/dev/sdc (Disk 14): 120 MB/sec avg
/dev/sdd (Disk 7): 114 MB/sec avg
/dev/sde (Disk 3): 117 MB/sec avg
/dev/sdf (Disk 19): 117 MB/sec avg
/dev/sdg (Disk 1): 141 MB/sec avg
/dev/sdh (Disk 17): 145 MB/sec avg
/dev/sdi (Disk 13): 140 MB/sec avg
/dev/sdj (Disk 2): 114 MB/sec avg
/dev/sdk (Disk 11): 118 MB/sec avg
/dev/sdl (Disk 15): 138 MB/sec avg
/dev/sdm (Disk 8): 112 MB/sec avg
/dev/sdn (Disk 16): 108 MB/sec avg
/dev/sdo (Disk 18): 109 MB/sec avg
/dev/sdp (Disk 20): 111 MB/sec avg
/dev/sdq (Disk 21): 112 MB/sec avg
/dev/sdr (Disk 9): 105 MB/sec avg
/dev/sds (Disk 12): 112 MB/sec avg
/dev/sdt (Parity): 111 MB/sec avg
/dev/sdu (Disk 5): 112 MB/sec avg
/dev/sdv (Disk 23): 122 MB/sec avg
/dev/sdw (Parity 2): 116 MB/sec avg
/dev/sdx (Disk 4): 122 MB/sec avg
/dev/sdy (Disk 10): 107 MB/sec avg
/dev/sdz (Disk 6): 108 MB/sec avg

Edited June 30, 2019 by derekos (replacing drive 18)