jaybee Posted September 19, 2012
I wish people wouldn't use mbps!! Parity checks are measured in MB/s, which is completely different to Mb/s (mbps)!
And still you knew exactly what this user meant. ?
I think that was his point: he doesn't know. The problem is that people are lazily writing Mega Bytes per second as "mbps", which is not just lazy, it's wrong, and means something totally different. It's not about being picky; it's about writing the speeds you are getting clearly so others can compare. We should be talking about parity speeds in MB/s (Mega Bytes per second), so it would be good if people who have stated their speeds could confirm that this is what they meant, as opposed to Mb/s (Mega Bits per second).
dalben Posted September 19, 2012
But when you guys are debating MB/s and Mbps, are you counting in units of 1024 or 1000?
pras1011 Posted September 19, 2012
The 1024 vs 1000 difference is very minor compared to the difference between Mbps and MBps!
jowi Posted September 19, 2012
[...] which is completely different to Mb/s (mbps)!
But... Mb/s is Mega-bits per second, whereas "mbps", strictly read, would be millibits per second. I'm not sure what millibits are, though. Really small bits, I think.
WeeboTech Posted September 19, 2012
Can we keep this topic "on topic" so that Tom has clear, concise information to review? Thanks guys!
mrow Posted September 19, 2012
Parity check is 65-75 MB/s so far (9%), about what I was getting with RC6. All array disks except one are on an MV8; the other is on an onboard SATA port. Speeds usually pick up a bit after the small disk not on the HBA has completed and the rest of the check runs on the HBA only. Specs: Core i3-2120 3.3GHz, Supermicro X9SCM-F, 8GB 1600MHz RAM (running at 1333MHz because of the CPU) and a Supermicro AOC-SASLP-MV8. Disks are all WD Greens, apart from the parity drive, a 7200rpm Seagate ST3000DM001, and a 320GB 7200rpm WD Scorpio Black 2.5-inch drive.
Influencer Posted September 19, 2012
Sorry for the confusion. Although it should be obvious, if I had parity checks running at a rate of ~70 megabits per second I probably would not be very thrilled. To clarify, with end results: at the beginning of the check I was running about 40 MB/s; after around 7% it went up to 94 MB/s. I let the parity check finish overnight and it completed with no errors at an average of 75 MB/s. This has been my average no matter what release I use. Specs: Motherboard: ECS 880GM-M7; RAM: Crucial DDR3-1600 2x4GB; CPU: AMD Sempron 145. All disks are on the onboard SATA ports; not sure what chipset it is.
kingpin Posted September 19, 2012
Duration: 16 hours, 41 minutes, 17 seconds. Average speed: 49.9 MB/sec. 27TB server with 22.6TB of data; a mix of 1TB, 2TB, and 3TB Hitachi, WD and Seagate drives. I am happy with the results so far.
WeeboTech Posted September 19, 2012
I've found that parity check speeds depend on the controller, the model of the drives, and the positions where the drives are installed. Continually refreshing to get the parity speed tends to slow down the parity check, as each drive is polled for information (temperature, etc.); at least this is what I've noticed. I've also noticed that usage of the array has a dramatic effect. Another factor: parity creation vs. check shows a dramatic speed difference on my system with the ARC-1200. Parity creation is faster than checking, since the controller caches the writes. I guess keeping a historical record of your parity checks' durations and speeds might be worthwhile to see if issues are present.
pras1011 Posted September 19, 2012
I have now upgraded to 2 x SAS2LP (from 1 x SASLP and 1 x SAS2LP) and the parity check is around 70 to 75 MB/s (up from 50 to 65 MB/s). No change in write speed, though.
dalben Posted September 19, 2012
I guess keeping a historical review of your ending parity checks, duration and speed might be worthwhile to see if issues are present.
If this were built into Unraid, that would be very cool. Every time a parity check or create is run, a summary gets copied to a log file on the boot device: mobo, controller, drives, OS version, etc., plus the average times. Good for those of us who get a kick out of info like that, but also invaluable to Tom if he's trying to track down an issue related to parity speeds.
StevenD Posted September 19, 2012
My parity check is WAY slower as well. I just crossed the 10% mark and it's still running at ~24MB/s. See config in signature.
PeterB Posted September 19, 2012
At the beginning of the check I was running about 40 MB/s, after around 7% it went up to 94 MB/s.
I wonder whether there is a problem with one of your drives; a significant positive change part way through the check does not make sense (assuming there was no other use of the machine during that time). I have a drive, currently assigned to the 'spares' shelf, which shows a dramatic drop in speed at around 70-80% across the span of the drive; outside of that range (i.e. before and after) the drive operates at a decent speed. No SMART report, or any other test, has ever reported an identifiable fault with the drive.
WeeboTech Posted September 19, 2012
I guess keeping a historical review of your ending parity checks, duration and speed might be worthwhile to see if issues are present.
If this were built into Unraid, that would be very cool.
I'm going to try this for a while and see how it works out.

root@atlas /etc/logrotate.d # cat /etc/logrotate.d/syslog
/var/log/syslog {
#   size 1M
    dateext
    sharedscripts
    prerotate
        grep -i 'md: sync' /var/log/syslog >> /boot/logs/syslog-mdsync-history
    endscript
    postrotate
        /bin/kill -HUP `cat /var/run/syslogd.pid 2>/dev/null` 2>/dev/null || true
    endscript
}
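Once that history file accumulates a few rotations, something like the sketch below could pull the check durations back out of it. This is just an illustration: the `md: sync done. time=NNNsec` message format and the function name are my assumptions, so check the exact wording in your own syslog and adjust the regex to match.

```python
import re

def parity_history(lines):
    """Collect durations (seconds) from 'md: sync done' entries.

    ASSUMPTION: the kernel logs 'md: sync done. time=NNNsec'; adjust
    the pattern below if your syslog uses a different format.
    """
    durations = []
    for line in lines:
        m = re.search(r"md: sync done\. time=(\d+)sec", line)
        if m:
            durations.append(int(m.group(1)))
    return durations

# Hypothetical lines in the style of /boot/logs/syslog-mdsync-history:
sample = [
    "Sep 19 07:10:01 atlas kernel: md: sync done. time=21709sec",
    "Sep 19 07:10:01 atlas kernel: md: recovery thread sync completion status: 0",
]
print(parity_history(sample))  # [21709]
```

From there it's a short step to computing average MB/s per check if you also record the array size, which is roughly what dalben is asking to have built in.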
WeeboTech Posted September 19, 2012
Here's a thought. Since there have been a number of 'parity check' speed changes, could there be an issue with the order of initialization of the drives? Maybe certain controllers are now initialized before others, changing the order and thus altering the md: sync speeds. Can someone confirm the same controller initialization order and drive order by comparing both syslogs, i.e. pre-RC8a and RC8a?
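A quick way to compare the two syslogs is to extract the order in which the kernel first reports each sdX device. A minimal sketch, assuming the stock kernel `[sdX]` probe messages (the sample log lines below are made up for illustration):

```python
import re

def probe_order(syslog_lines):
    """Return sdX device names in the order they first appear.

    ASSUMPTION: devices show up in bracketed '[sdX]' kernel messages;
    adjust the regex if your syslog format differs.
    """
    order = []
    for line in syslog_lines:
        m = re.search(r"\[(sd[a-z]+)\]", line)
        if m and m.group(1) not in order:
            order.append(m.group(1))
    return order

# Hypothetical excerpts from a pre-RC8a and an RC8a boot:
old_log = ["kernel: sd 0:0:0:0: [sda] 5860533168 512-byte logical blocks",
           "kernel: sd 1:0:0:0: [sdb] 3907029168 512-byte logical blocks"]
new_log = ["kernel: sd 1:0:0:0: [sdb] 3907029168 512-byte logical blocks",
           "kernel: sd 0:0:0:0: [sda] 5860533168 512-byte logical blocks"]
print(probe_order(old_log) == probe_order(new_log))  # False -> order changed
```

If the two orders differ between releases, that would support the initialization-order theory.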
DoeBoye Posted September 19, 2012
In case it slipped under the radar, I have another theory re: slower speeds on the newer release candidates. I posted it in general support, but I realize now it might have some value to this discussion as well: http://lime-technology.com/forum/index.php?topic=22633.0
WeeboTech Posted September 19, 2012
This is a good case to check out too. Folks with slower parity checks, I would suggest you review the tangent thread.
StevenD Posted September 19, 2012
I looked at that, but I have an Intel processor and it didn't seem to apply. That being said, my proc is running at ~3% during the parity check. I was thinking about going into the BIOS and making sure SpeedStep and similar features are all disabled.
DoeBoye Posted September 19, 2012
An easy way to confirm is to use the CPU info option of unMENU to see what speed your CPU is running at during parity checks. If it's running at default speed, then it's not being throttled down...
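For anyone without unMENU, the same check can be scripted against /proc/cpuinfo. A sketch under stated assumptions: the "cpu MHz" field layout is the standard /proc/cpuinfo format, the sample text is made up, and the 90% threshold is an arbitrary cutoff for "not throttled":

```python
def reported_mhz(cpuinfo_text):
    """Pull the 'cpu MHz' values out of /proc/cpuinfo-style text."""
    return [float(line.split(":")[1])
            for line in cpuinfo_text.splitlines()
            if line.startswith("cpu MHz")]

# Hypothetical /proc/cpuinfo excerpt; on a live box, read the real file.
sample = ("model name\t: Intel(R) Core(TM)2 Quad CPU Q8300 @ 2.50GHz\n"
          "cpu MHz\t\t: 2491.305\n")
rated_mhz = 2500.0  # the CPU's nominal clock
throttled = any(m < 0.9 * rated_mhz for m in reported_mhz(sample))
print(throttled)  # False -> running at (close to) full speed
```

Run it mid-parity-check: if the reported MHz sits well below the rated clock, the governor is throttling during the check.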
StevenD Posted September 19, 2012
Looks fine.

CPU Info (from /proc/cpuinfo)
processor       : 0
vendor_id       : GenuineIntel
cpu family      : 6
model           : 23
model name      : Intel(R) Core(TM)2 Quad CPU Q8300 @ 2.50GHz
stepping        : 10
microcode       : 0xa07
cpu MHz         : 2491.305
cache size      : 2048 KB
physical id     : 0
siblings        : 4
core id         : 0
cpu cores       : 4
apicid          : 0
initial apicid  : 0
fdiv_bug        : no
hlt_bug         : no
f00f_bug        : no
coma_bug        : no
fpu             : yes
fpu_exception   : yes
cpuid level     : 13
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe nx lm constant_tsc arch_perfmon pebs bts aperfmperf pni dtes64 monitor ds_cpl est tm2 ssse3 cx16 xtpr pdcm sse4_1 xsave lahf_lm dtherm
bogomips        : 4982.61
clflush size    : 64
cache_alignment : 64
address sizes   : 36 bits physical, 48 bits virtual
power management:
jumperalex Posted September 19, 2012
I updated my earlier post for posterity, but I don't expect current thread followers to see it, so... Despite starting at 80MB/s, then dropping to the 20s, followed about 5% later by the 60s, this is what I saw this morning after the check was done:
Last checked on Tue Sep 18 23:12:00 2012 EDT (today), finding 0 errors. Duration: 6 hours, 1 minute, 49 seconds. Average speed: 92.1 MB/sec
So I too am seeing slow starts followed by stronger finishes. And I too noticed a drop in transfer speeds, to the point that I even tested using an SSD for my cache vice my original 640GB WD Black... no real difference, so I swapped back to the 640. Apparently I am running the ondemand governor too. But looking at that other thread and the reasons postulated... I mean, sheesh, writing to the cache drive over a 1000Mbit network should not be impacted by a dual-core Athlon II 7750 running at low speed. That said, I'll run some testing to see [shrug]
DoeBoye Posted September 19, 2012
Apparently I am running the ondemand governor too. [...] That said, I'll run some testing to see [shrug]
Please do! I would love to see the results (please post them in the other thread). It's possible that people running faster CPUs may not see any difference at all, because even their throttled performance is adequate to max out transfers. My CPU is an X4, but it is a low-power version, so perhaps that is the difference...
jumperalex Posted September 19, 2012
OK, this is now just silly. In an effort to test DoeBoye's theory I did some transfer tests from my PC's SSD to my unRAID cache. I saw 65-ish MB/s per TeraCopy using a 1GB file, regardless of governor. I validated the CPU speed with cat /proc/cpuinfo. I then put the governor back to test parity... and using the ondemand governor this is what I saw (see attached): at 3.9% now and strong at 115MB/s.
distracted Posted September 20, 2012
but when you guys are debating MBs and Mbps, are you counting to 1024 or 1000?
1000. 1024 would be MiB/s and Mib/s.
Edit: Typed it backwards
Edit2: Reference http://physics.nist.gov/cuu/Units/binary.html
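For reference, a quick sketch of the conversions being debated in this thread (the helper names are mine, not from any library):

```python
def mbits_to_mbytes(mbps):
    """Megabits/s -> megabytes/s. Both decimal SI; 8 bits per byte."""
    return mbps / 8.0

def mb_to_mib(mb_per_s):
    """Decimal MB/s -> binary MiB/s (1 MB = 10**6 bytes, 1 MiB = 2**20)."""
    return mb_per_s * 10**6 / 2**20

print(mbits_to_mbytes(1000))      # 125.0 -> gigabit line rate in MB/s
print(round(mb_to_mib(92.1), 1))  # 87.8 -> a 92.1 MB/s check in MiB/s
```

This shows why the distinction matters: an 8x factor between bits and bytes versus only ~4.9% between decimal MB and binary MiB, which is pras1011's point above.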
brent112 Posted September 20, 2012
So I too am seeing slow starts followed by stronger finishes. [...] That said, I'll run some testing to see [shrug]
Same here. I have a gigabit network and an AMD FX-4100 processor (unRAID virtualized in ESXi). I am getting ~60 MB/s when writing to my cache drive, which is an SSD. I remember back in some of the earlier betas I was getting network speed (110-112 MB/s) when transferring from my gigabit-connected laptop to the cache drive.