JackBauer Posted September 20, 2012
So far it seems the only potential concern is slower parity checks on some configurations, but no true "defects" found?
Influencer Posted September 20, 2012
None found so far. I'm guilty too, but may I suggest (since a concern for parity speed has already been noted in this thread) that users post in the other linked thread concerning the issue. That way this thread doesn't become 30 pages of nothing but parity speeds, and stays reserved for any other issues that may come to light.

[quoting an earlier post] I think we are close to final! I also believe we should wait for a plug-in manager as well as any other additions.

I think a plug-in manager will be great (and will possibly make my life easier from a maintenance viewpoint), but I think it is more important to release the stable version and then release any subsequent changes as 5.1 Beta or Alpha builds.
jinjorge Posted September 20, 2012
Installed and currently doing a parity sync. Nothing out of the ordinary so far. Also want to add my +1 to pushing new features to .1 and/or .2 releases. J
garycase Posted September 20, 2012
A brief note on parity checks ... Remember that all modern drives use zoned sectoring -- so their performance is far better on the outer cylinders than the innermost ones. So, regardless of the controller-related performance issues (i.e. PCI controllers vs. PCIe x1 controllers vs. PCIe x4 controllers vs. on-motherboard controllers), the performance for a given drive will decrease notably as it reaches the innermost cylinders -- this is most noticeable in the last 15% or so of the drive. The result is that parity checks on a system with mixed drive sizes will slow down much more often than those on systems with all drives the same size. E.g. if you have 1TB, 1.5TB, and 2TB drives mixed, the parity check will slow down as it gets to about the 850GB point; then jump back up in speed at 1TB (as the 1TB drives are no longer part of the process); then slow down again as it gets close to the 1.5TB point; then jump back up until it reaches the last bit of the 2TB drives. Unless you have a slow controller (e.g. a 4-port PCI controller) that can't keep up with the transfer rate of the drives, these drive performance characteristics are the most significant reason for the parity check rate varying during the process.
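The pattern garycase describes can be sketched with a toy model. This is purely illustrative (the 140/70 MB/s outer/inner speeds and the linear falloff are assumptions, not measurements of any real drive): the check runs at the speed of the slowest drive still participating at the current position, so it dips near each drive's inner cylinders and jumps back up once that drive drops out.

```python
# Toy model of parity-check speed across a mixed-size array.
# Assumed numbers: each drive falls linearly from 140 MB/s (outer
# cylinders) to 70 MB/s (innermost); real drives differ.

def drive_speed_mb_s(position_tb, size_tb, outer=140.0, inner=70.0):
    """Approximate one drive's sequential speed at a given position."""
    if position_tb >= size_tb:
        return None  # this drive has already dropped out of the check
    fraction = position_tb / size_tb
    return outer - (outer - inner) * fraction

def check_speed(position_tb, drive_sizes_tb):
    """Overall check speed: limited by the slowest active drive."""
    speeds = [s for s in (drive_speed_mb_s(position_tb, d)
                          for d in drive_sizes_tb) if s is not None]
    return min(speeds) if speeds else None

drives = [1.0, 1.5, 2.0]  # the mixed array from the example above
for pos in [0.0, 0.5, 0.95, 1.05, 1.45, 1.55, 1.95]:
    print(f"{pos:.2f} TB: {check_speed(pos, drives):.0f} MB/s")
```

Running this shows the dip just before the 1TB mark, the jump once the 1TB drive is out, and the same shape again approaching 1.5TB and 2TB.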
jaybee Posted September 20, 2012
[quoting an earlier post] I updated my earlier post for posterity, but don't expect current thread followers to see it, so ... Despite starting at 80MB/s, then dropping to the 20's, followed about 5% later by the 60's, this is what I saw this morning after the check was done: Last checked on Tue Sep 18 23:12:00 2012 EDT (today), finding 0 errors. * Duration: 6 hours, 1 minute, 49 seconds. Average speed: 92.1 MB/sec. So I too am seeing slow starts followed by stronger finishes. And I too noticed a drop in transfer speeds, to the point that I even tested using an SSD for my cache in place of my original 640GB WD Black ... no real difference, so I swapped back to the 640. Apparently I am running the ondemand governor too. But looking at that other thread and the reasons postulated ... I mean, sheesh, writing to the cache drive over a 1000mbit network should not be impacted by a dual-core Athlon II 7750 running at low speed. That said, I'll run some testing to see [shrug]

[quoting another post] Same here, I have a gigabit network and an FX-4100 AMD processor (unRAID virtualized in ESXi). I am getting ~60mbps when writing to my cache drive, which is an SSD. I remember back in some of the earlier Betas I was getting network speed (110-112mbps) when transferring from my gigabit-connected laptop to the cache drive.

Well, either is very poor, assuming you mean megabits per second on a gigabit network.
downloadski Posted September 20, 2012
[quoting garycase's zoned-sectoring explanation above] Good explanation. I see this as well (on 5.0RC6, I have not yet upgraded) with a mix of 4TB, 2TB, 1.5TB, and 1TB drives in the server.
GFOviedo Posted September 20, 2012
I never really looked into parity check speeds since my server is set up to run them while I'm asleep and not using it. However, I just installed RC8a and started a parity check. Speeds varied from 55 MB/s to 116 MB/s.
mr-hexen Posted September 20, 2012
Taking a speed reading at a single point in a parity check or sync is not accurate. Let it completely finish. Then review the syslog for md sync xxxxxxx and it will say complete in xxxxxx seconds. Take the size of the parity drive in TB, multiply it by 1,000,000 (to convert to MB), then divide by the seconds it took to complete the parity sync/check.
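The arithmetic mr-hexen describes is simple enough to sketch directly (using decimal units throughout, as drive makers do, so 1 TB = 1,000,000 MB):

```python
# Average parity check/sync speed from parity drive size and duration,
# per the method described above.

def average_parity_speed(parity_size_tb, duration_seconds):
    """Average speed in MB/s over the whole run."""
    return parity_size_tb * 1_000_000 / duration_seconds

# e.g. a 2 TB parity drive checked in 6h 1m 49s (21,709 s), the duration
# jaybee reported earlier in the thread:
print(f"{average_parity_speed(2.0, 6*3600 + 1*60 + 49):.1f} MB/s")
```

For that example the result works out to 92.1 MB/s, matching the average unRAID itself reported.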
jumperalex Posted September 20, 2012
[quoting garycase's zoned-sectoring explanation above] True indeed; however, in my case you should notice that I have 3 drives, all 2TB and in fact all the same type. So I would have expected a fast start on the outer tracks followed by a gradual decline. But on my first few partial attempts, as far as 2%-ish, I was seeing checks in the 20's. Yet oddly, once I let it go to completion it sped up overnight to complete with an average of 92. A subsequent check started at 110 and stayed there through 3+%, but I stopped it after that. Anyway, the point is, it is very strange behavior but in fairness NOT a bug. At best I was going to deem it a speed regression, but not one so bad that Tom should delay release. Now ... well hell, now I'm just happy as a clam and forgot what I was complaining about.
brent112 Posted September 20, 2012
[quoting the exchange above about ~60mbps writes to the cache drive] Sorry... I meant MBps.
WeeboTech Posted September 20, 2012
[quoting garycase's and jumperalex's posts above] The more you refresh the front page, the slower your parity check goes. I think it interrupts things while the drives are probed. I've noticed this on my own 4.7 system.
jumperalex Posted September 20, 2012
[quoting WeeboTech's note above about page refreshes slowing parity checks] Yes, I've noticed that too. I try to limit my refreshes to about once every minute or two. That is about what I've found I need for the numbers to climb back up after doing, say, 5 back-to-back refreshes where I see them fall. But even then the effect isn't usually on the order of dropping from 100+ down to 20.
GFOviedo Posted September 20, 2012
My parity check was completed in 4.35 hours for a total of 6.54 TB.
mvdzwaan Posted September 20, 2012
[quoting GFOviedo above] It's not the total size of the array that determines the time needed; it's the size of your parity disk, and then you have to factor in the different drive sizes, etc., etc.
joshpond Posted September 20, 2012
It seems some of the parity tests people are running last only a few minutes. Perhaps to test the speed you have to run the whole thing and see whether the total time differs between the different RCs. It could also be that it is reading/outputting the wrong info but everything runs as it should. Josh
Interstellar Posted September 20, 2012
All running at full speed here. Good stuff
SCSI Posted September 21, 2012
I have not downloaded any of the RCs, but this is half the size of the betas. I just checked the size of RC6 and it is also small, at 35MB. Time to update from b12a.
S80_UK Posted September 21, 2012
[quoting Interstellar above] Me too.
Joe L. Posted September 21, 2012
[quoting SCSI above] Correct. A more efficient compression scheme is being used for the distribution. It is much smaller, even though it contains more.
michael123 Posted September 21, 2012
Upgraded last night; looks fine.
Harpz Posted September 21, 2012
Upgraded this morning and everything appears to be fine. No issues that I can see, but I'm new to unRAID so I could have missed something; the syslog flags nothing I can see in unMENU that could be a problem.
ajgoyt Posted September 21, 2012
Let's go final already ... the plugins etc. can wait, like others have mentioned! Currently a little paranoid to upgrade from 4.7 Pro, as I'm kind of a noob with unRAID. Looks like it's ready, Tom?
jowi Posted September 21, 2012
What I don't understand: everybody talks about how they do NOT want plugin support for the final, but does that mean I can't use e.g. the SimpleFeatures webgui + its plugins? If that's the case, I will keep running v5.0-rc5...
Joe L. Posted September 21, 2012
[quoting jowi above] You misunderstood. We do not want to wait for a plugin manager to be built by Tom prior to 5.0 being released. We will continue to manage the installation of plugins ourselves until 5.1, 5.2, whatever, using the existing framework in the 5.0 series. It has the "event" processing needed, even if we have to manage downloads, installation, dependencies, etc. of the plugins on our own. (And that process will evolve and mature over time, as it did with the unMENU packages, with input and feedback from the user community.) If anything, the plugin manager will probably (and should) be developed as a plugin itself... That will allow it to be installed and improved without having to wait on a specific release of unRAID. Joe L.
prostuff1 Posted September 21, 2012
[quoting jowi above] No, plugins will still work; there just won't be a plugin manager (a la Firefox).