Dephcon Posted April 28, 2015

I seem to be having a similar issue: it doesn't seem to like my thin-provisioned VMDK cache "disk". It's configured for 250GB, but the script is trying to read past that:

Performance testing /dev/sdb (Cache) at -309012 GB (hit end of disk) (100%)
Performance testing /dev/sdb (Cache) at -309022 GB (hit end of disk) (100%)
Performance testing /dev/sdb (Cache) at -309032 GB (hit end of disk) (100%)
Performance testing /dev/sdb (Cache) at -309042 GB (hit end of disk) (100%)
Performance testing /dev/sdb (Cache) at -309052 GB (hit end of disk) (100%)
Performance testing /dev/sdb (Cache) at -309062 GB (hit end of disk) (100%)
Performance testing /dev/sdb (Cache) at -309072 GB (hit end of disk) (100%)
tr0910 Posted April 29, 2015

I have an Areca with 2x2TB for parity, so that might be confusing it (but it shows problems with Disk 1, which is a normal 4TB Seagate).
TonyTheTiger Posted May 4, 2015

Thank you jbartlett. Your script (v2.2) works great on my system. The web page presents the information elegantly. Fascinating, really. I have all Seagate drives, and you can really see the performance differences between the models. (I definitely know which one I'm going to replace.) Thanks again.

Michael
jbartlett (Author) Posted May 14, 2015

I'm working on a new version that uses dd instead of hdparm, which will eliminate the rare end-of-disk bug. In the meantime, if the current version gets into negative values, you can stop it; it'll never finish on its own.
coppit Posted June 14, 2015

Hey jbartlett, have you thought about a UI-enabled plugin for v6? Would it be helpful if I worked on that? I might have a few cycles to spare.
jbartlett (Author) Posted June 16, 2015

> Hey jbartlett, have you thought about a UI-enabled plugin for v6? Would it be helpful if I worked on that? I might have a few cycles to spare.

I don't think it'll work as a UI plugin - it can take a long time to process, which would appear to hang the emhttp process. I'll ponder the feasibility of having an AJAX refresh fetch the current progress and update the UI...
tr0910 Posted July 28, 2015

> I'm working on a new version that uses dd instead of hdparm. It will eliminate the rare end-of-disk bug.

Looking forward to testing that. Hopefully it will work on my Areca.
interwebtech Posted July 30, 2015

I also have an Areca card, and it got lost on the first disk it tried to bench (disk 8, oddly enough).
jbartlett (Author) Posted August 10, 2015

Version 2.3 released. It utilizes the dd command, which should remove the issue people had with the script failing to read a drive. Instead of reading 1GB at each location, it now reads 200MB, which during my testing provided the highest read rate. This should cut testing time by nearly 75%. The download is in the first post of this thread.

Example console output (the drives are tested in the order they were assigned by the OS):

diskspeed.sh for UNRAID, version 2.3
By John Bartlett. Support board @ limetech: http://goo.gl/ysJeYV

/dev/sdb (Disk 4): 107 MB/sec avg
/dev/sdc (Disk 2): 158 MB/sec avg
/dev/sdd (Disk 5): 98 MB/sec avg
/dev/sde (Disk 10): 98 MB/sec avg
/dev/sdf (Disk 6): 100 MB/sec avg
/dev/sdg (Disk 7): 99 MB/sec avg
/dev/sdh (Disk 8): 97 MB/sec avg
/dev/sdi (Disk 9): 97 MB/sec avg
/dev/sdj (Disk 3): 123 MB/sec avg
/dev/sdk (Disk 11): 104 MB/sec avg
/dev/sdl (Disk 1): 112 MB/sec avg

To see a graph of the drives' speeds, please browse to the current directory and open the file diskspeed.html in your browser.

[Example graph]

Change log, version 2.3:
- Changed to use the "dd" command for speed testing, eliminating the risk of hitting the end of the drive. The app reads 200MB of data at each testing location.
- Before scanning each spot, uses the "dd" command to place the drive head at the start of the test location.
- Added -o / --output option to specify the report HTML file name and save location (credit pkn)
- Added report generation date & server name to the end of the report (credit pkn)
- Added a Y-axis floor of zero to keep the graph from displaying negative ranges
- Hid the graph that compared each drive by percentage. If you wish to re-enable it, change the line "ShowGraph1=0" to "ShowGraph1=1"
- Added average speed to the drive inventory list below the graph
- Added -x / --exclude option to ignore drives, comma separated. Ex: -x sda,sdb,sdc
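To illustrate the per-location testing described above: this is only a rough sketch, not the actual diskspeed.sh code - the helper name and the evenly spaced sampling are my assumptions. It computes MB offsets for dd's skip= (with bs=1M), keeping the final 200MB read window inside the drive so a test never runs past the end of the disk.

```shell
#!/bin/bash
# Illustrative sketch only -- not the real diskspeed.sh internals.
# Given a drive size in MB and a number of test points (>= 2), emit
# evenly spaced MB offsets for dd's skip=. Subtracting 200 keeps the
# last 200MB read window on the drive, avoiding the end-of-disk bug.
offsets_mb() {
  local size_mb="$1" points="$2" i
  for (( i = 0; i < points; i++ )); do
    echo $(( i * (size_mb - 200) / (points - 1) ))
  done
}

# e.g. a hypothetical 1000MB drive sampled at 3 points:
offsets_mb 1000 3
# -> 0, 400, 800
```

In real use, the size in MB could come from `blockdev --getsize64 /dev/sdX` divided by 1048576; each offset would then feed `dd if=/dev/sdX of=/dev/null bs=1M count=200 skip=$offset iflag=direct`.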
jbartlett (Author) Posted August 10, 2015

The big dip on drive 2 at the start is characteristic of Seagate drives, which actually perform poorly in the first 20-25 GB range.
interwebtech Posted August 10, 2015

Tests run to completion, but clearly the Areca card is causing some confusion... or I have invisible warp drives, lol. Odd number for the Seagate 6TB, too. I wouldn't think it was so slow.
jbartlett (Author) Posted August 10, 2015

> Tests run to completion but clearly the Areca card is causing some confusion... or I have invisible warp drives lol.

What do you get when you execute the following on the Areca card drives, and on drives that aren't on the card?

dd if=/dev/xxx of=/dev/null bs=1M count=200 skip=0 iflag=direct
dd if=/dev/xxx of=/dev/null bs=1M count=200 skip=100 iflag=direct

And can you give the output of "hdparm -I /dev/xxx" for one of the Areca drives? Replace "xxx" with the drive's three-letter designation.
interwebtech Posted August 10, 2015

This is Disk 4:

login as: root
root@tower's password:
Last login: Sun Aug 9 18:32:43 2015 from dell-i7.home
Linux 4.1.1-unRAID.
root@Tower:~# cd /boot
root@Tower:/boot# dd if=/dev/sdf of=/dev/null bs=1M count=200 skip=0 iflag=direct
200+0 records in
200+0 records out
209715200 bytes (210 MB) copied, 1.94571 s, 108 MB/s
root@Tower:/boot# dd if=/dev/sdf of=/dev/null bs=1M count=200 skip=100 iflag=direct
200+0 records in
200+0 records out
209715200 bytes (210 MB) copied, 1.30029 s, 161 MB/s
root@Tower:/boot# hdparm -I /dev/sdf

/dev/sdf:
SG_IO: bad/missing sense data, sb[]: f0 00 05 00 00 00 00 0b 00 00 00 00 20 00 00 00 02 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
ATA device, with non-removable media
Standards:
        Likely used: 1
Configuration:
        Logical         max     current
        cylinders       0       0
        heads           0       0
        sectors/track   0       0
        --
        Logical/Physical Sector size: 512 bytes
        device size with M = 1024*1024: 0 MBytes
        device size with M = 1000*1000: 0 MBytes
        cache/buffer size = unknown
Capabilities:
        IORDY not likely
        Cannot perform double-word IO
        R/W multiple sector transfer: not supported
        DMA: not supported
        PIO: pio0
root@Tower:/boot#
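For anyone scripting around results like these: the MB/s figure in dd's summary line (printed to stderr, hence a 2>&1 redirect in practice) can be pulled out with a small filter. The dd_speed name here is illustrative, not something in diskspeed.sh.

```shell
#!/bin/bash
# Extract the throughput figure (e.g. "108 MB/s") from dd's summary
# line. The summary is the line containing "copied"; the speed is its
# last two whitespace-separated fields.
dd_speed() {
  awk '/copied/ {print $(NF-1), $NF}'
}

# Example, using the summary line from the transcript above:
echo '209715200 bytes (210 MB) copied, 1.94571 s, 108 MB/s' | dd_speed
# -> 108 MB/s
```

A real invocation would look like `dd if=/dev/sdf of=/dev/null bs=1M count=200 iflag=direct 2>&1 | dd_speed`.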
jbartlett (Author) Posted August 10, 2015

Well, that's why the drives are invisible: hdparm isn't returning any identifying information on the drive. *ponders*
interwebtech Posted August 10, 2015

What info we do have has been added by tinkering and several lines in my go file: http://lime-technology.com/forum/index.php?topic=38487
jbartlett (Author) Posted August 10, 2015

> What info we do have has been added by tinkering and several lines in my go file.

Is there a command you can issue that will return identifying information on the drives attached to the Areca card? If so, can you give the command and the output in a code block? Such as:

lsscsi -g | grep "Areca"
tr0910 Posted August 10, 2015

> Is there a command you can issue that will return identifying information on the drives attached to the Areca card?

I have an Areca 1280 on v6.0.1 with 9 drives attached, and another 4 on the motherboard SATA ports. Does that help?

lsscsi -g | grep "Areca"
[1:0:0:4]   disk     Areca  ARC1280V2        R001  /dev/sdf  /dev/sg5
[1:0:16:0]  process  Areca  RAID controller  R001  -         /dev/sg10
interwebtech Posted August 10, 2015

> Is there a command you can issue that will return identifying information on the drives attached to the Areca card? If so, can you give the command and the output in a code block?

root@Tower:~# lsscsi -g | grep "Areca"
[1:0:16:0]  process  Areca  RAID controller  R001  -  /dev/sg9
root@Tower:~#
jbartlett (Author) Posted August 10, 2015

> [1:0:16:0]  process  Areca  RAID controller  R001  -  /dev/sg9

Okay, no identifying information there. Can you run the following to see if I can get the information from UNRAID?

cat /proc/mdcmd | grep "Id.4"
tr0910 Posted August 10, 2015

Better?

cat /proc/mdcmd | grep "Id.4"
diskId.4=WD30EZRS-00J99B0_WD-WCAWZ1999111
rdevId.4=WD30EZRS-00J99B0_WD-WCAWZ1999111

But look...

cat /proc/mdcmd | grep "Id.2"
diskId.2=ST3000DM001-9YN166_W1F0N4JK
rdevId.2=ST3000DM001-9YN166_W1F0N4JK
diskId.20=ST33000651AS_9XK0P1JR
rdevId.20=ST33000651AS_9XK0P1JR
diskId.21=
rdevId.21=
diskId.22=WDC_WD30EZRX-00MMMB0_WD-WCAWZ2195094
rdevId.22=WDC_WD30EZRX-00MMMB0_WD-WCAWZ2195094
diskId.23=ST3000DM001-9YN1_W1F0MED2
rdevId.23=ST3000DM001-9YN1_W1F0MED2
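A note on the stray Id.20-Id.23 lines above: grep treats "Id.2" as an unanchored regex, and "." matches any character, so the pattern also hits the higher-numbered disks. Escaping the dot and requiring the "=" delimiter after the disk number restricts the match to disk 2. A small demonstration with sample lines mimicking the output above:

```shell
#!/bin/bash
# 'Id.2' matches diskId.20..diskId.23 too, because grep patterns are
# unanchored regexes. 'Id\.2=' matches a literal dot and requires the
# '=' right after the 2, so only disk 2's lines survive.
printf '%s\n' \
  'diskId.2=ST3000DM001-9YN166_W1F0N4JK' \
  'diskId.20=ST33000651AS_9XK0P1JR' \
  'diskId.21=' | grep 'Id\.2='
# -> diskId.2=ST3000DM001-9YN166_W1F0N4JK
```

In the thread's terms, `cat /proc/mdcmd | grep "Id\.2="` would list only disk 2.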
interwebtech Posted August 10, 2015

Same here:

root@Tower:~# cat /proc/mdcmd | grep "Id.4"
diskId.4=WD20EZRX-00DC0B0_WD-WMC1T3951491
rdevId.4=WD20EZRX-00DC0B0_WD-WMC1T3951491
root@Tower:~#
jbartlett (Author) Posted August 10, 2015

Thanks! Looks like I can get the information from UNRAID. I'll update and post a new version soon, after I figure out why your drives are maxing out the graph... I used "Id.4" since I knew there wouldn't be a conflict between drives.
jbartlett (Author) Posted August 11, 2015

> Tests run to completion but clearly the Areca card is causing some confusion... or I have invisible warp drives lol.

Can you try this test version, execute it with the -l / --log option, and PM or post the generated diskspeed.log file?

http://strangejourney.net/Temp/diskspeed.v2.4.zip

I added logic to resolve your invisible drive issue. The log file should let me investigate the abnormal graphs.
interwebtech Posted August 11, 2015

> Can you try this test version and execute it with the -l / --log option and PM/post the generated diskspeed.log file?

Can you post the exact command line I need to run? Reduces operator errors. ;P