JorgeB Posted April 11, 2017
51 minutes ago, jbartlett said: Did you modify the script so it would test under-25GB drives? Don't care if you did alter it, just wondering if there's a bug.
Yes, I did.
Smitty2k1 Posted April 15, 2017
So I recently updated and ran this read speed test again. See attached for plots. I'm happy with the results, even though my two older 2TB drives are pretty slow. The REALLY slow one (disk5) currently has no data on it. When I get to the point of filling up my array, I will upgrade my parity drive and replace this 2TB drive with the current 4TB parity.
However, I have a question: most of my drives that contain data have read speeds over 100MB/sec, but if I try to COPY a file from one share to another using Midnight Commander or the Krusader docker, I am rarely able to exceed a measly 40MB/sec write speed. This is USING the SSD cache drive. I figured the write/copy speed would be limited by the read speed of the slowest drive involved in the process, but for whatever reason I can never come close to those lightning-fast write speeds to the cache. Any ideas?
diskspeed.html
trurl Posted April 15, 2017
3 minutes ago, Smitty2k1 said: …
That seems like the sort of speed you would get if you were reading and writing to the parity array. Are you saying you are copying from one User share to another User share, and the destination User share is configured so the file is written to cache first? Are you sure it is written to cache and not to an array disk?
jbartlett Posted April 15, 2017 (Author)
Is your copy overwriting an existing file? If so, that's a cache buster.
Smitty2k1 Posted April 15, 2017
I am copying a file from one user share to a different user share, not overwriting any files. The source files are on the array (not cache); the destination writes to cache (confirmed by viewing the cache files through the unRaid GUI after the copy). I used to have an old Atom CPU, so I always attributed the poor performance to that, but I've been running a Xeon for a while now and still get the same speeds. I've checked by copying a file from an SSD on a Windows PC to the unRaid cache (SSD) over a gigabit network, and it saturates the gig ethernet. Therefore I assumed it was a slow read speed from the array disks, but this script is telling me otherwise.
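One way to isolate where a ceiling like that 40MB/sec comes from is to benchmark the cache write path directly on the server, taking SMB, Midnight Commander, and Krusader out of the picture entirely. A minimal sketch with dd; /mnt/cache is the stock unRAID cache mount, and the /tmp default below is only so the snippet runs anywhere:

```shell
#!/bin/sh
# Quick sequential write benchmark. Point TARGET_DIR at the cache
# mount (/mnt/cache on a stock unRAID box) to test the SSD directly.
TARGET_DIR="${TARGET_DIR:-/tmp}"
TESTFILE="$TARGET_DIR/ddtest.bin"

# conv=fsync forces the data to disk before dd reports a speed,
# so the number reflects the drive rather than the page cache.
dd if=/dev/zero of="$TESTFILE" bs=1M count=64 conv=fsync 2>&1 | tail -n 1

rm -f "$TESTFILE"
```

If this prints a number far above 40MB/sec, the drive itself is fine and the bottleneck is somewhere in the copy path (user-share overhead, the copy tool, etc.).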
trurl Posted April 15, 2017
49 minutes ago, Smitty2k1 said: …
Do you use the Cache Dirs plugin?
Smitty2k1 Posted April 16, 2017
Yeah, I just installed cache_dirs a few days ago. The problem persisted before and after installing the plugin. Overall quality of life has improved significantly with cache_dirs, though!
1812 Posted May 7, 2017
So, here's a weird thing. I've been using the plugin. It won't push the disks over 7-10MB/s and pegs 1 CPU at 100%. I tried uninstalling and reinstalling the plugin. No dice. I know it's not a hardware problem because a quick non-correcting parity check shows: [screenshot]
Thoughts? It works flawlessly on my other server, which is nearly identical to this one except for processors/RAM quality/disks. This did work when I first installed the plugin a few days ago.
Squid Posted May 7, 2017
8 minutes ago, 1812 said: …
Strange. Does the standalone script work fine?
1812 Posted May 7, 2017
9 minutes ago, Squid said: Strange. Does the standalone script work fine?
Appears so, but I don't know what the warning is for:
diskspeed.sh for UNRAID, version 2.6.4
By John Bartlett. Support board @ limetech: http://goo.gl/ysJeYV
Warning: Files in the array are open. Please refer to /tmp/lsof.txt for a list
/dev/sdb (Cache): 268 MB/sec avg
Squid Posted May 7, 2017
2 hours ago, 1812 said: …
The warning is exactly what it says: there are open files, and open files may affect the testing results. 99% of the script is identical between the plugin and the bare script; the main differences are some setup steps prior to testing, just to get the raw script to work under a different environment. At this time I have no clue why you're seeing such a wide disparity between the two versions (I'm certainly not), as they are nearly identical, but I'll think about it...
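For anyone curious what sits behind that warning, a rough equivalent of the check can be reproduced by hand. This is a sketch rather than the script's actual code, and the mount paths are the stock unRAID ones (/mnt/disk*, /mnt/cache, /mnt/user); adjust for your system:

```shell
#!/bin/sh
# List anything open under the array/cache mounts and save the report,
# roughly what the diskspeed.sh warning is based on. Open files mean
# other I/O may be competing with the benchmark reads.
REPORT=/tmp/lsof.txt
lsof 2>/dev/null | grep -E '/mnt/(disk|cache|user)' > "$REPORT"

if [ -s "$REPORT" ]; then
    echo "Warning: files in the array are open. See $REPORT for a list"
else
    echo "No open files on the array mounts"
fi
```

Stopping dockers, VMs, and anything streaming from the array before a run should empty that list and give cleaner numbers.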
1812 Posted May 7, 2017
8 hours ago, Squid said: …
Something else interesting I noticed this morning: I ran the test on an SSD with no open file activity, which still produced a result of around 7MB/s. So after the first segment finished, I pressed cancel. The page refreshed to show it as not running. I then opened the stats page, where it still showed disk activity at 7MB/s, cycling through the different read intervals that were specified. I opened another tab to the dashboard and could also see 1 CPU thread pegged at 100%. So despite cancelling, it still continued on with the test. The log only showed this:
May 7 07:44:33 Brahms1 emhttp: cmd: /usr/local/emhttp/plugins/dynamix.plugin.manager/scripts/plugin checkall
May 7 07:45:41 Brahms1 root: kill 12110
CraziFuzzy Posted May 18, 2017
Wonder if it would be possible to modify the script to optionally run each disk's test simultaneously; it might provide insights into controller bottlenecks. So all selected disks run their 0% test, then all run the 10%, etc. I haven't looked into the script to see if the loops are set up in a way that could be easily modified to do it this way, and obviously each individual test would need to be forked.
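The forking described above is straightforward in shell: background one timed read per disk, then wait on all of them. A rough sketch; the device names are placeholders, and a real run would add skip= offsets for the 10%, 20%, etc. marks:

```shell
#!/bin/sh
# Launch one sequential read per disk in parallel so the controller
# carries all of them at once. If every disk then reports well below
# its solo speed, the shared controller/bus is the bottleneck.
DISKS="/dev/sdb /dev/sdc /dev/sdd"   # placeholder device names

for d in $DISKS; do
    # iflag=direct bypasses the page cache; dd writes its speed
    # summary to a per-disk file via stderr. The trailing & forks
    # each test into the background.
    dd if="$d" of=/dev/null bs=1M count=64 iflag=direct \
        2> "/tmp/speed-$(basename "$d").txt" &
done

wait  # block until every backgrounded dd has finished
cat /tmp/speed-*.txt
```

Comparing these numbers against the one-at-a-time results from the existing script is what would expose a controller limit.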
jbartlett Posted May 18, 2017 (Author)
That's in the plan for the next version.
CraziFuzzy Posted May 18, 2017
8 hours ago, jbartlett said: That's in the plan for the next version.
Excellent! Looking forward to it!
jbartlett Posted June 7, 2017 (Author)
The plugin version doesn't hang up the GUI while running if you're on 6.4 or higher, since 6.4 is no longer 100% single-threaded for web calls.
Squid Posted June 7, 2017
25 minutes ago, jbartlett said: …
Even under 6.3 it won't hang the GUI while it's running... Everything is done in the background.
jbartlett Posted June 8, 2017 (Author)
6 hours ago, Squid said: Even under 6.3 it won't hang the GUI while it's running... Everything is done in the background.
My bad, I had forgotten that you had fixed that.
interwebtech Posted June 8, 2017
I have an Areca ARC-1231ML whose attached drives unRaid on its own is unable to identify. I have the Dynamix SCSI Devices plug-in installed, which allows the drives to be properly identified in the Web GUI. Would it be possible to check for and make use of that translation if your script gets an "Unable to determine" value for a drive ID?
chaosratt Posted June 9, 2017
So neither the plugin nor the bare script seems to be generating graphs for me now. I get the small table with disk names, but the area where the graph should be is empty.
Squid Posted June 9, 2017
17 minutes ago, chaosratt said: …
unRaid version?
jbartlett Posted June 9, 2017 (Author)
31 minutes ago, chaosratt said: …
If you run the stand-alone script with the -l (log) option, you can send that along and I'll be able to debug what's happening.
jbartlett Posted June 9, 2017 (Author)
On 6/8/2017 at 10:11 AM, interwebtech said: …
Does the command "lsscsi -i" reveal the information?
chaosratt Posted June 9, 2017
Log file and HTML attached. For sanity's sake I only ran it against my parity drives this time.
diskspeed.zip
chaosratt Posted June 9, 2017
Here's a full-run HTML output; no log, unfortunately.
2017-06-09 14-25-52.html