unRAID Server Release 5.0-rc8a Available


limetech


 

I'm guilty too, but may I suggest (since concerns about parity speed have already been noted in this thread) that users post in the other linked thread concerning the issue. That way this thread doesn't become 30 pages of nothing but parity speeds, and stays reserved for any other issues that may come to light.

 

I think we are close to final!

 

I also believe the plug-in manager, as well as any other additions, should wait. I think a plug-in manager will be great (and will possibly make my life easier from a maintenance viewpoint), but it is more important to release the stable version first and then release any subsequent changes as 5.1 beta or alpha builds.

Link to comment

A brief note on parity checks ...

 

Remember that all modern drives use zoned recording (more sectors per track on the outer cylinders), so their performance is far better on the outer cylinders than on the innermost ones.  So, regardless of the controller-related performance issues (i.e. PCI controllers vs. PCIe x1 controllers vs. PCIe x4 controllers vs. on-motherboard controllers), the performance of a given drive will decrease notably as the check reaches the innermost cylinders -- this is most noticeable in the last 15% or so of the drive.

 

The result is that parity checks on a system with mixed drive sizes will vary in speed much more than those on systems with all drives the same size.  E.g. if you have 1TB, 1.5TB, and 2TB drives mixed, the parity check will slow down as it gets to about the 850GB point; then jump back up in speed at 1TB (as the 1TB drives are no longer part of the process); then slow down again as it gets close to the 1.5TB point; then jump back up and taper off until it reaches the last bit of the 2TB drives.

 

Unless you have a slow controller (e.g. a 4-port PCI controller) that can't keep up with the transfer rate of the drives, these drive performance issues are the most significant reason for the parity check rate varying during the process.
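
A rough way to picture this: at any point in the check, the rate is limited by the slowest drive still participating, each drive slows toward its inner cylinders, and when a smaller drive is passed it drops out and the floor jumps back up. A minimal toy model (the speeds below are illustrative assumptions, not measurements from any particular drive):

```
# Toy model of parity-check speed across a mixed-size array.
# Assumptions: each drive reads ~150 MB/s on its outer cylinders and
# ~half that on its innermost ones, falling off linearly; the check is
# limited by the slowest drive still in play at the current position.
def drive_speed(pos_tb, size_tb, outer=150.0, inner=75.0):
    frac = pos_tb / size_tb            # 0.0 at outer edge, 1.0 at inner edge
    return outer - (outer - inner) * frac

def check_speed(pos_tb, sizes_tb):
    active = [s for s in sizes_tb if pos_tb < s]   # drives not yet finished
    return min(drive_speed(pos_tb, s) for s in active) if active else 0.0

sizes = [1.0, 1.5, 2.0]                # mixed 1TB / 1.5TB / 2TB array
for pos in [0.1, 0.5, 0.9, 1.1, 1.4, 1.6, 1.9]:
    print(f"{pos:.1f} TB: ~{check_speed(pos, sizes):.0f} MB/s")
```

The dips just before each smaller drive's capacity and the jumps just after are the pattern described above; the absolute numbers depend entirely on the specific drives and controllers.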

 

Link to comment

I updated my earlier post for posterity, but I don't expect current thread followers to see it, so ...

 

Despite starting at 80MB/s, then dropping into the 20s, and climbing back into the 60s about 5% later, this is what I saw this morning after the check was done:

 

Last checked on Tue Sep 18 23:12:00 2012 EDT (today), finding 0 errors.
* Duration: 6 hours, 1 minute, 49 seconds. Average speed: 92.1 MB/sec

 

So I too am seeing slow starts followed by stronger finishes.  And I too noticed a drop in transfer speeds, to the point that I even tested using an SSD for my cache in place of my original 640GB WD Black ... no real difference, so I swapped back to the 640.

 

Apparently I am running the ondemand governor too.  But looking at that other thread and the reasons postulated ... I mean ... sheesh, writing to the cache drive over a gigabit network should not be impacted by a dual-core Athlon II 7750 running at low speed :o  That said, I'll run some testing to see [shrug]
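
For anyone wanting to confirm which governor they're actually on before testing: the kernel exposes it under sysfs, so you can simply cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor from the console. A small sketch that checks every core (it assumes Python is available on the box, e.g. installed via an unMENU package; otherwise the shell one-liner does the same thing):

```
# Print the cpufreq scaling governor for each core by reading the
# standard sysfs files (same data as `cat .../scaling_governor`).
from pathlib import Path

for gov in sorted(Path("/sys/devices/system/cpu").glob("cpu[0-9]*/cpufreq/scaling_governor")):
    core = gov.parts[-3]                      # e.g. "cpu0", "cpu1"
    print(core, gov.read_text().strip())
```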

 

Same here: I have a gigabit network and an AMD FX-4100 processor (unRAID virtualized in ESXi). I am getting ~60mbps when writing to my cache drive, which is an SSD. I remember back in some of the earlier betas I was getting network speed (110-112mbps) when transferring from my gigabit-connected laptop to the cache drive.

 

Well, either is very poor, assuming you mean megabits per second on a gigabit network.

Link to comment

A brief note on parity checks ... [quote of the parity-check note above, trimmed]

Good explanation; I see this as well (5.0-rc6, I have not yet upgraded) with a mix of 4TB, 2TB, 1.5TB, and 1TB drives in the server.

Link to comment

A brief note on parity checks ... [quote of the parity-check note above, trimmed]

 

True indeed; however, in my case you should notice that I have 3 drives, all 2TB and in fact all the same model.  So I would have expected a fast start on the outer tracks followed by the gradual decline.  But on my first few partial attempts, as far as 2%-ish, I was seeing checks in the 20s.  Yet oddly, once I let it go to completion, it sped up overnight to finish with an average of 92.  A subsequent check started at 110 and stayed there through 3+%, but I stopped it after that.

 

Anyway the point is, it is very strange behavior but in fairness NOT a bug.  At most I was going to deem it a speed regression, but not one so bad that Tom should delay release.  Now ... well, hell, now I'm just happy as a clam and forgot what I was complaining about :)

Link to comment

I updated my earlier post for posterity ... [quote of the cache-drive transfer-speed exchange above, trimmed]

 

Sorry..I meant MBps.
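
For reference on the units: gigabit Ethernet is 1000 megabits per second, i.e. 125 MB/s before protocol overhead, so 110-112 MB/s is essentially a saturated link while ~60 MB/s is roughly half of it. A quick back-of-the-envelope (the ~8% overhead figure is just a rough rule of thumb for TCP/IP + SMB framing, not a measured value):

```
# Gigabit Ethernet throughput, back-of-the-envelope
link_mbit = 1000                      # gigabit link, megabits per second
raw_MBps  = link_mbit / 8             # 125 MB/s before any overhead
usable    = raw_MBps * 0.92           # assume ~8% protocol/framing overhead
print(f"theoretical: {raw_MBps:.0f} MB/s, realistic ceiling: ~{usable:.0f} MB/s")
# 110-112 MB/s is essentially a full gigabit link; ~60 MB/s is about half.
```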

Link to comment

A brief note on parity checks ... [quote of the parity-check note and reply above, trimmed]

 

The more you refresh the front page, the slower your parity check goes. I think it interrupts things while the drive is probed.

I've noticed this on my own 4.7 system.
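
If you want to keep an eye on the rate without refreshing the webGui (and so without triggering whatever probing slows things down), one option is to estimate read throughput straight from /proc/diskstats. A rough sketch; the device name "sda" is only an example, substitute one of your array drives:

```
# Estimate read throughput of one drive from /proc/diskstats.
# Field 6 of each line is sectors read since boot; diskstats sectors
# are always 512 bytes regardless of the drive's physical sector size.
import time

def sectors_read(dev):
    with open("/proc/diskstats") as f:
        for line in f:
            fields = line.split()
            if fields[2] == dev:
                return int(fields[5])          # sectors read since boot
    raise ValueError(f"{dev} not found")

DEV, INTERVAL = "sda", 10                      # example device, 10-second sample
before = sectors_read(DEV)
time.sleep(INTERVAL)
after = sectors_read(DEV)
print(f"{DEV}: {(after - before) * 512 / INTERVAL / 1e6:.1f} MB/s")
```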

Link to comment

A brief note on parity checks ... [quote of the parity-check note and replies above, trimmed]

 

Yes, I've noticed that too.  I try to limit my refreshes to about once every minute or two -- that's roughly how long I've found it takes for the numbers to climb back up after, say, 5 back-to-back refreshes where I watch them fall.  But even then the effect isn't usually on the order of dropping from 100+ down to 20 :o

Link to comment

I had not downloaded any of the RCs, but this one is half the size of the betas. I just checked the size of RC6 and it is also small, at 35MB :)

 

Time to update from b12a

Correct.  A more efficient compression scheme is being used for the distribution.  It is much smaller, even though it contains more.
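
The release notes don't say exactly which compressor is now used, but as an illustration of how much the scheme alone can matter: gzip-style compression only looks back 32 KB for repeated data, while xz/LZMA-style compression uses a much larger dictionary, so it does far better on large archives with long-range redundancy. A deliberately contrived comparison (sample data only; real rootfs images will show different, less extreme ratios):

```
# Illustration only: gzip-style (zlib) compression has a 32 KB window,
# so it cannot exploit redundancy that repeats over long distances,
# while xz-style (LZMA) compression uses a much larger dictionary.
import zlib, lzma, os

blocks = [os.urandom(64 * 1024) for _ in range(8)]   # 8 distinct 64 KB chunks
data = b"".join(blocks) * 32                          # ~16 MiB with long-range repeats

gz = zlib.compress(data, 9)
xz = lzma.compress(data, preset=6)
print(f"original: {len(data) / 2**20:.1f} MiB")
print(f"zlib -9 : {len(gz) / 2**20:.1f} MiB")
print(f"lzma    : {len(xz) / 2**20:.1f} MiB")
```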
Link to comment

What I don't understand: everybody talks about how they do NOT want plugin support in the final, but does that mean I can't use e.g. the SimpleFeatures webGui + its plugins? If that's the case, I will keep running v5.0-rc5...

You misunderstood.

 

We do not want to wait for a plugin manager to be built by Tom before 5.0 is released.  We will continue to manage the installation of plugins ourselves until 5.1, 5.2, whatever ... using the existing framework in the 5.0 series.  It has the "event" processing needed, even if we have to manage downloads, installation, dependencies, etc. of the plugins on our own.  (And that process will evolve and mature over time as it did with the unMENU packages, with input and feedback from the user community.)

 

If anything, the plugin manager will probably (should) be developed as a plugin itself...  That will allow the installation and improvement of it without having to wait on a specific release of unRAID.

 

Joe L.

 

Link to comment
