unRAID Server Release 6.2.0-rc4 Available



Single core Sempron CPUs should be taken out to the back paddock and shot.  Tell your kids it's "gone to the farm".

 

Any bargain-basement Sandy Bridge/Ivy Bridge Celeron will run rings round a Sempron.  Heck, even an ancient Pentium G620 can do a 100MB/s parity build, and that's running on an old B75 board with 3Gb/s SATA.

Even my quad-core Sempron on an AM1 platform (single-channel memory only) can max out my read/write speeds on parity checks and rebuilds.  Not using dual parity, though.

 

Sent from my LG-D852 using Tapatalk

Dual parity is where the PROBLEM starts!  Until you install a dual parity setup, you have no idea what is going to happen to your parity check speeds!  You may be right about your setup, BUT until you actually test it, be careful about bragging.  (Oh, and that old Sempron 140 will do 105MB/s on a single-parity check...)

Certainly wasn't bragging about a Sempron 3850 running on an AM1 platform.  I don't think anyone has ever bragged about that....

 

And I did point out "not using dual parity though"

Link to comment

What if your monthly parity checks are taking fifteen-plus hours for a 3TB array?  What is your WAF?  Remember, in order to implement dual parity, you have to procure (most likely purchase) a drive that is at least as large as your largest data drive.  If you are running older hardware, it is because you are concerned a bit about cost...

Is your array unusable during a parity check? If so, wouldn't it be better to tune it so the array IS usable, even if it takes longer to complete the check? Parity check time to complete is a red herring; the only elapsed time I care about is how long the array is down.
Link to comment

Single core Sempron CPUs should be taken out to the back paddock and shot.  Tell your kids it's "gone to the farm".

 

Any bargain-basement Sandy Bridge/Ivy Bridge Celeron will run rings round a Sempron.  Heck, even an ancient Pentium G620 can do a 100MB/s parity build, and that's running on an old B75 board with 3Gb/s SATA.

Even my quad-core Sempron on an AM1 platform (single-channel memory only) can max out my read/write speeds on parity checks and rebuilds.  Not using dual parity, though.

 

Sent from my LG-D852 using Tapatalk

Dual parity is where the PROBLEM starts!  Until you install a dual parity setup, you have no idea what is going to happen to your parity check speeds!  You may be right about your setup, BUT until you actually test it, be careful about bragging.  (Oh, and that old Sempron 140 will do 105MB/s on a single-parity check...)

Certainly wasn't bragging about a Sempron 3850 running on an AM1 platform.  I don't think anyone has ever bragged about that....

 

And I did point out "not using dual parity though"

 

Not picking on anyone about their hardware, but trying to point out that hardware that performs very well using single parity may not be so stellar  ::)  when running dual parity.  This applies particularly to any hardware from before Sandy Bridge and Bulldozer, when the AVX instruction set was added to those platform bases.  Until we can get some more data, using dual parity could come with a significant performance penalty.  (I have never even really investigated whether it has an impact on the time required to save files, as that measurement has many more variables in it than the one for parity check speed and time.)

 

Is your array unusable during a parity check? If so, wouldn't it be better to tune it so the array IS usable, even if it takes longer to complete the check? Parity check time to complete is a red herring; the only elapsed time I care about is how long the array is down.

 

To be honest, I have never tried to play a Blu-ray ISO (what I consider a tough task to perform properly without pauses in playback) from the server while a check is running.  But I can tell you, it does significantly impact the responsiveness of the GUI!

 

EDIT: And some of you are saying that everything will be OK; just procure the hardware and test it for yourself...

Link to comment

If you look near the top of your system log, there is a set of messages that report the rate at which various gen/xor algorithms run using different instruction sets.  The kernel then picks the best-performing function to use.  Here 'gen' means "generate Q" and 'xor' means "generate P", i.e., simple XOR.

 

The first ones, which measure xor speed (the P calculation), begin with "kernel: xor:"; then there is a set beginning with "kernel: raid6:" that measures the algorithms used to calculate Q.

 

Here is what I see for a lowly Atom D510 @ 1.66GHz:

 

Aug 18 10:13:21 Test1 kernel: xor: measuring software checksum speed
Aug 18 10:13:21 Test1 kernel:   prefetch64-sse:  5424.000 MB/sec
Aug 18 10:13:21 Test1 kernel:   generic_sse:  4912.000 MB/sec
Aug 18 10:13:21 Test1 kernel: xor: using function: prefetch64-sse (5424.000 MB/sec)
:
:
Aug 18 10:13:21 Test1 kernel: raid6: sse2x1   gen()   113 MB/s
Aug 18 10:13:21 Test1 kernel: raid6: sse2x1   xor()   714 MB/s
Aug 18 10:13:21 Test1 kernel: raid6: sse2x2   gen()   371 MB/s
Aug 18 10:13:21 Test1 kernel: raid6: sse2x2   xor()  1193 MB/s
Aug 18 10:13:21 Test1 kernel: raid6: sse2x4   gen()   632 MB/s
Aug 18 10:13:21 Test1 kernel: raid6: sse2x4   xor()  1316 MB/s
Aug 18 10:13:21 Test1 kernel: raid6: using algorithm sse2x4 gen() 632 MB/s
Aug 18 10:13:21 Test1 kernel: raid6: .... xor() 1316 MB/s, rmw enabled
Aug 18 10:13:21 Test1 kernel: raid6: using ssse3x2 recovery algorithm

 

The best it can do is 632 MB/s for gen and 5424 MB/s for xor.  If we assume an average throughput of 100 MB/s for the attached HDDs, then barring other bottlenecks, this says that with 6 or fewer array devices a dual-parity sync/check will be disk-bandwidth limited, but if you go wider than that, the CPU will start limiting performance.  In a single-parity setup there is no CPU bottleneck until you go to 54 drives or more (i.e., never a bottleneck).
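If you want to run the same back-of-the-envelope estimate against your own syslog numbers, here is a minimal Python sketch (the 100 MB/s per-drive figure is just the assumption used above, not a measured value):

# Rough estimate: how many array drives can the CPU feed before the
# parity math itself becomes the bottleneck?  Ignores bus/memory limits
# and assumes every drive streams at the same average rate.
def max_drives(parity_speed_mb_s, per_drive_mb_s=100):
    return int(parity_speed_mb_s // per_drive_mb_s)

xor_p = 5424   # 'xor' speed (P, single parity) from the Atom's syslog
gen_q = 632    # 'gen' speed (Q, dual parity) from the Atom's syslog

print("single parity:", max_drives(xor_p), "drives")   # -> 54
print("dual parity:  ", max_drives(gen_q), "drives")   # -> 6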

 

Here is what I see for a Core i5-4430 @ 3.00 GHz (a modest CPU by today's standards):

 

Aug 18 09:59:07 test2 kernel: xor: automatically using best checksumming function:
Aug 18 09:59:07 test2 kernel:   avx       : 41780.000 MB/sec
:
:
Aug 18 09:59:07 test2 kernel: raid6: sse2x1   gen()  9300 MB/s
Aug 18 09:59:07 test2 kernel: raid6: sse2x1   xor()  7027 MB/s
Aug 18 09:59:07 test2 kernel: raid6: sse2x2   gen() 11988 MB/s
Aug 18 09:59:07 test2 kernel: raid6: sse2x2   xor()  8070 MB/s
Aug 18 09:59:07 test2 kernel: raid6: sse2x4   gen() 14296 MB/s
Aug 18 09:59:07 test2 kernel: raid6: sse2x4   xor()  9363 MB/s
Aug 18 09:59:07 test2 kernel: raid6: avx2x1   gen() 18269 MB/s
Aug 18 09:59:07 test2 kernel: raid6: avx2x2   gen() 21593 MB/s
Aug 18 09:59:07 test2 kernel: raid6: avx2x4   gen() 25238 MB/s
Aug 18 09:59:07 test2 kernel: raid6: using algorithm avx2x4 gen() 25238 MB/s
Aug 18 09:59:07 test2 kernel: raid6: sse2x1   xor()  7031 MB/s
Aug 18 09:59:07 test2 kernel: raid6: sse2x2   xor()  8330 MB/s
Aug 18 09:59:07 test2 kernel: raid6: sse2x4   xor()  9363 MB/s
Aug 18 09:59:07 test2 kernel: raid6: .... xor() 9363 MB/s, rmw enabled
Aug 18 09:59:07 test2 kernel: raid6: using avx2x2 recovery algorithm

 

This should support a 25-drive array before CPU becomes bottleneck in dual-parity sync/check. Notice that for the xor calculation, if AVX instruction set is present, kernel uses it and doesn't bother measuring others.

Link to comment

 

Added support in user share file system and mover for "special" files: fifos (pipes), named sockets, device nodes.

 

The user who brought the mknod issue on RC3 to my attention has let me know that it is working properly on RC4.  I don't run the affected containers myself, so I cannot speak to it personally.

 

That's great!  Thanks for the report.

Link to comment

I wanted to know if there are changes being made to the unRAID driver or if they are caused by the different kernels (or other factors).

Not that I'm aware of.

 

I looked at the algorithm used for RAID6, and though it is always the same, speed varies with each release, with lower CPU usage coinciding with the fastest results (b18 and rc4), so I guess this answers my question.

That is odd; speed should not vary.  I'll put this on our "strange things to look into" list.  :o

Link to comment

The Q calculation is pretty expensive, especially for less-capable CPUs.  The kernel has a number of hand-coded assembly functions to do the math, and it picks the "best" one at boot (by measuring them).  Refer to:

http://lxr.free-electrons.com/source/lib/raid6/algos.c?v=4.4

 

Best performance is had when your CPU supports the AVX2 instruction set:

https://en.wikipedia.org/wiki/Advanced_Vector_Extensions#CPUs_with_AVX2
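For anyone curious what those hand-coded routines are actually computing: P is a plain XOR of the data blocks, while Q sums each data block multiplied by successive powers of 2 in GF(2^8) (see the algos.c link above).  A byte-at-a-time Python sketch, purely for illustration; the kernel does the same math with wide SIMD registers:

# Illustrative RAID-6 P/Q syndrome calculation (not the kernel's code).
def gf_mul2(x):
    # Multiply by 2 in GF(2^8) using the 0x11d polynomial RAID-6 uses.
    x <<= 1
    if x & 0x100:
        x ^= 0x11d
    return x & 0xff

def pq_syndromes(blocks):
    # blocks: one equal-length bytes object per data drive (stripe slices).
    p = bytearray(len(blocks[0]))
    q = bytearray(len(blocks[0]))
    for block in reversed(blocks):          # Horner's rule over the drives
        for i, byte in enumerate(block):
            p[i] ^= byte                    # P: simple XOR ('xor' in the log)
            q[i] = gf_mul2(q[i]) ^ byte     # Q: weighted sum ('gen' in the log)
    return bytes(p), bytes(q)

p, q = pq_syndromes([b"\x11" * 16, b"\x22" * 16, b"\x33" * 16])

With single parity only the cheap XOR is needed on every stripe; with dual parity the GF(2^8) work above is needed as well, which is why the 'gen' figures in the syslog are the ones that matter for dual-parity check speed.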

 

Thanks for the informative links, but sorry I wasn't clear: my question is not about the high CPU usage (I was using a very low-end CPU for those tests, so high CPU utilization is expected), but about the difference in CPU usage between releases.  If you click to expand the animated GIF I posted above, you can see how it varies considerably with each v6.2 release.  I wanted to know if there are changes being made to the unRAID driver or if they are caused by the different kernels (or other factors).

 

I looked at the algorithm used for RAID6, and though it is always the same, speed varies with each release, with lower CPU usage coinciding with the fastest results (b18 and rc4), so I guess this answers my question.

As I explained earlier, you can look for changes yourself in the md kernel source code between releases. No need to rely on LimeTech for that.

Link to comment

If you look near the top of your system log, there is a set of messages that report the rate at which various gen/xor algorithms run using different instruction sets.  The kernel then picks the best-performing function to use.  Here 'gen' means "generate Q" and 'xor' means "generate P", i.e., simple XOR.

 

The first ones, which measure xor speed (the P calculation), begin with "kernel: xor:"; then there is a set beginning with "kernel: raid6:" that measures the algorithms used to calculate Q.

 

Here is what I see for a lowly Atom D510 @ 1.66GHz:

 

Aug 18 10:13:21 Test1 kernel: xor: measuring software checksum speed
Aug 18 10:13:21 Test1 kernel:   prefetch64-sse:  5424.000 MB/sec
Aug 18 10:13:21 Test1 kernel:   generic_sse:  4912.000 MB/sec
Aug 18 10:13:21 Test1 kernel: xor: using function: prefetch64-sse (5424.000 MB/sec)
:
:
Aug 18 10:13:21 Test1 kernel: raid6: sse2x1   gen()   113 MB/s
Aug 18 10:13:21 Test1 kernel: raid6: sse2x1   xor()   714 MB/s
Aug 18 10:13:21 Test1 kernel: raid6: sse2x2   gen()   371 MB/s
Aug 18 10:13:21 Test1 kernel: raid6: sse2x2   xor()  1193 MB/s
Aug 18 10:13:21 Test1 kernel: raid6: sse2x4   gen()   632 MB/s
Aug 18 10:13:21 Test1 kernel: raid6: sse2x4   xor()  1316 MB/s
Aug 18 10:13:21 Test1 kernel: raid6: using algorithm sse2x4 gen() 632 MB/s
Aug 18 10:13:21 Test1 kernel: raid6: .... xor() 1316 MB/s, rmw enabled
Aug 18 10:13:21 Test1 kernel: raid6: using ssse3x2 recovery algorithm

 

The best it can do is 632 MB/s for gen and 5424 MB/s for xor.  If we assume an average throughput of 100 MB/s for the attached HDDs, then barring other bottlenecks, this says that with 6 or fewer array devices a dual-parity sync/check will be disk-bandwidth limited, but if you go wider than that, the CPU will start limiting performance.  In a single-parity setup there is no CPU bottleneck until you go to 54 drives or more (i.e., never a bottleneck).

 

Here is what I see for a Core i5-4430 @ 3.00 GHz (a modest CPU by today's standards):

 

Aug 18 09:59:07 test2 kernel: xor: automatically using best checksumming function:
Aug 18 09:59:07 test2 kernel:   avx       : 41780.000 MB/sec
:
:
Aug 18 09:59:07 test2 kernel: raid6: sse2x1   gen()  9300 MB/s
Aug 18 09:59:07 test2 kernel: raid6: sse2x1   xor()  7027 MB/s
Aug 18 09:59:07 test2 kernel: raid6: sse2x2   gen() 11988 MB/s
Aug 18 09:59:07 test2 kernel: raid6: sse2x2   xor()  8070 MB/s
Aug 18 09:59:07 test2 kernel: raid6: sse2x4   gen() 14296 MB/s
Aug 18 09:59:07 test2 kernel: raid6: sse2x4   xor()  9363 MB/s
Aug 18 09:59:07 test2 kernel: raid6: avx2x1   gen() 18269 MB/s
Aug 18 09:59:07 test2 kernel: raid6: avx2x2   gen() 21593 MB/s
Aug 18 09:59:07 test2 kernel: raid6: avx2x4   gen() 25238 MB/s
Aug 18 09:59:07 test2 kernel: raid6: using algorithm avx2x4 gen() 25238 MB/s
Aug 18 09:59:07 test2 kernel: raid6: sse2x1   xor()  7031 MB/s
Aug 18 09:59:07 test2 kernel: raid6: sse2x2   xor()  8330 MB/s
Aug 18 09:59:07 test2 kernel: raid6: sse2x4   xor()  9363 MB/s
Aug 18 09:59:07 test2 kernel: raid6: .... xor() 9363 MB/s, rmw enabled
Aug 18 09:59:07 test2 kernel: raid6: using avx2x2 recovery algorithm

 

This should support a 25-drive array before CPU becomes bottleneck in dual-parity sync/check. Notice that for the xor calculation, if AVX instruction set is present, kernel uses it and doesn't bother measuring others.

 

The first thing I question is your math (i.e., 25238/100 is not ~25...).  But that aside, here is the section of the syslog for my Testbed server (Rose):

 

Aug 18 14:59:24 Rose kernel: xor: measuring software checksum speed
Aug 18 14:59:24 Rose kernel:   prefetch64-sse: 10844.000 MB/sec
Aug 18 14:59:24 Rose kernel:   generic_sse: 10264.000 MB/sec
Aug 18 14:59:24 Rose kernel: xor: using function: prefetch64-sse (10844.000 MB/sec)
:
:
Aug 18 14:59:24 Rose kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Aug 18 14:59:24 Rose kernel: PCI: not using MMCONFIG
Aug 18 14:59:24 Rose kernel: PCI: Using configuration type 1 for base access
Aug 18 14:59:24 Rose kernel: PCI: Using configuration type 1 for extended access
Aug 18 14:59:24 Rose kernel: raid6: sse2x1   gen()  3574 MB/s
Aug 18 14:59:24 Rose kernel: raid6: sse2x1   xor()  3529 MB/s
Aug 18 14:59:24 Rose kernel: raid6: sse2x2   gen()  5753 MB/s
Aug 18 14:59:24 Rose kernel: raid6: sse2x2   xor()  5962 MB/s
Aug 18 14:59:24 Rose kernel: raid6: sse2x4   gen()  7386 MB/s
Aug 18 14:59:24 Rose kernel: raid6: sse2x4   xor()  3783 MB/s
Aug 18 14:59:24 Rose kernel: raid6: using algorithm sse2x4 gen() 7386 MB/s
Aug 18 14:59:24 Rose kernel: raid6: .... xor() 3783 MB/s, rmw enabled

 

Using your method, I get a figure of 108 disks for single parity and 73 disks for dual parity.  Yet, the dual parity speeds for recent releases appear to be CPU limited for this server and they were not for beta 19.  Any insight you can provide would be appreciated.

 

Link to comment

The first thing I question is your math (i.e., 25238/100 is not ~25...).

Doh!

 

But that aside, here is the section of the syslog for my Testbed server (Rose):

 

Aug 18 14:59:24 Rose kernel: xor: measuring software checksum speed
Aug 18 14:59:24 Rose kernel:   prefetch64-sse: 10844.000 MB/sec
Aug 18 14:59:24 Rose kernel:   generic_sse: 10264.000 MB/sec
Aug 18 14:59:24 Rose kernel: xor: using function: prefetch64-sse (10844.000 MB/sec)
:
:
Aug 18 14:59:24 Rose kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Aug 18 14:59:24 Rose kernel: PCI: not using MMCONFIG
Aug 18 14:59:24 Rose kernel: PCI: Using configuration type 1 for base access
Aug 18 14:59:24 Rose kernel: PCI: Using configuration type 1 for extended access
Aug 18 14:59:24 Rose kernel: raid6: sse2x1   gen()  3574 MB/s
Aug 18 14:59:24 Rose kernel: raid6: sse2x1   xor()  3529 MB/s
Aug 18 14:59:24 Rose kernel: raid6: sse2x2   gen()  5753 MB/s
Aug 18 14:59:24 Rose kernel: raid6: sse2x2   xor()  5962 MB/s
Aug 18 14:59:24 Rose kernel: raid6: sse2x4   gen()  7386 MB/s
Aug 18 14:59:24 Rose kernel: raid6: sse2x4   xor()  3783 MB/s
Aug 18 14:59:24 Rose kernel: raid6: using algorithm sse2x4 gen() 7386 MB/s
Aug 18 14:59:24 Rose kernel: raid6: .... xor() 3783 MB/s, rmw enabled

 

Using your method, I get a figure of 108 disks for single parity and 73 disks for dual parity.  Yet, the dual parity speeds for recent releases appear to be CPU limited for this server and they were not for beta 19.  Any insight you can provide would be appreciated.

 

At the moment, can't explain it.

 

To add to my earlier post, I did say "barring other bottlenecks"... As the array width increases, the overall "strain" on the system increases.  Usually the first thing to top out is a PCI bus, where, for example, you might have a single disk controller with more disk-side bandwidth than PCI bus bandwidth.  Next would be the memory controller, where there is more overall I/O bandwidth than the memory controller can support.
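As a rough illustration of that kind of controller-side bottleneck (the bus figures below are theoretical maxima chosen for illustration, not measurements from this thread):

# Rough check: does a controller's host link throttle the drives behind it?
# Theoretical bus maxima; real-world usable bandwidth is lower.
bus_mb_s = {
    "PCI 32-bit/33MHz": 133,
    "PCIe 1.0 x1": 250,
    "PCIe 2.0 x1": 500,
    "PCIe 2.0 x4": 2000,
}

drives_on_controller = 8
per_drive_mb_s = 100            # same per-drive assumption as above

needed = drives_on_controller * per_drive_mb_s
for bus, bw in bus_mb_s.items():
    verdict = "bus-limited" if bw < needed else "OK"
    print(f"{bus:18s} {bw:5d} MB/s vs {needed} MB/s of drives -> {verdict}")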

 

That probably doesn't explain differences between releases.  Could be a lot of things.

Link to comment

 

Added support in user share file system and mover for "special" files: fifos (pipes), named sockets, device nodes.

 

The user who brought the mknod issue on RC3 to my attention has let me know that it is working properly on RC4.  I don't run the affected containers myself, so I cannot speak to it personally.

That would be me. Yes I can confirm it works on rc4.

 

Sent from my SM-N920V using Tapatalk

 

 

Link to comment

 

Added support in user share file system and mover for "special" files: fifos (pipes), named sockets, device nodes.

 

The user who brought the mknod issue on RC3 to my attention has let me know that it is working properly on RC4.  I don't run the affected containers myself, so I cannot speak to it personally.

That would be me. Yes I can confirm it works on rc4.

 

Sent from my SM-N920V using Tapatalk

 

8)  thanks!

Link to comment
.... wouldn't it be better to tune it so the array IS usable, even if it takes longer to complete the check? Parity check time to complete is a red herring; the only elapsed time I care about is how long the array is down.

 

Not if you have an average of 2 power cuts a day (four in the last seven hours).  It's almost a year since I had a completed parity check.

Link to comment

If you look near the top of your system log, there is a set of messages that report the rate at which various gen/xor algorithms run using different instruction sets.  The kernel then picks the best-performing function to use.  Here 'gen' means "generate Q" and 'xor' means "generate P", i.e., simple XOR.

 

The first ones, which measure xor speed (the P calculation), begin with "kernel: xor:"; then there is a set beginning with "kernel: raid6:" that measures the algorithms used to calculate Q.

 

Here is what I see for a lowly Atom D510 @ 1.66GHz:

 

Aug 18 10:13:21 Test1 kernel: xor: measuring software checksum speed
Aug 18 10:13:21 Test1 kernel:   prefetch64-sse:  5424.000 MB/sec
Aug 18 10:13:21 Test1 kernel:   generic_sse:  4912.000 MB/sec
Aug 18 10:13:21 Test1 kernel: xor: using function: prefetch64-sse (5424.000 MB/sec)
:
:
Aug 18 10:13:21 Test1 kernel: raid6: sse2x1   gen()   113 MB/s
Aug 18 10:13:21 Test1 kernel: raid6: sse2x1   xor()   714 MB/s
Aug 18 10:13:21 Test1 kernel: raid6: sse2x2   gen()   371 MB/s
Aug 18 10:13:21 Test1 kernel: raid6: sse2x2   xor()  1193 MB/s
Aug 18 10:13:21 Test1 kernel: raid6: sse2x4   gen()   632 MB/s
Aug 18 10:13:21 Test1 kernel: raid6: sse2x4   xor()  1316 MB/s
Aug 18 10:13:21 Test1 kernel: raid6: using algorithm sse2x4 gen() 632 MB/s
Aug 18 10:13:21 Test1 kernel: raid6: .... xor() 1316 MB/s, rmw enabled
Aug 18 10:13:21 Test1 kernel: raid6: using ssse3x2 recovery algorithm

 

The best it can do is 632 MB/s for gen and 5424 MB/s for xor.  If we assume an average throughput of 100 MB/s for the attached HDDs, then barring other bottlenecks, this says that with 6 or fewer array devices a dual-parity sync/check will be disk-bandwidth limited, but if you go wider than that, the CPU will start limiting performance.  In a single-parity setup there is no CPU bottleneck until you go to 54 drives or more (i.e., never a bottleneck).

 

Here is what I see for a Core i5-4430 @ 3.00 GHz (a modest CPU by today's standards):

 

Aug 18 09:59:07 test2 kernel: xor: automatically using best checksumming function:
Aug 18 09:59:07 test2 kernel:   avx       : 41780.000 MB/sec
:
:
Aug 18 09:59:07 test2 kernel: raid6: sse2x1   gen()  9300 MB/s
Aug 18 09:59:07 test2 kernel: raid6: sse2x1   xor()  7027 MB/s
Aug 18 09:59:07 test2 kernel: raid6: sse2x2   gen() 11988 MB/s
Aug 18 09:59:07 test2 kernel: raid6: sse2x2   xor()  8070 MB/s
Aug 18 09:59:07 test2 kernel: raid6: sse2x4   gen() 14296 MB/s
Aug 18 09:59:07 test2 kernel: raid6: sse2x4   xor()  9363 MB/s
Aug 18 09:59:07 test2 kernel: raid6: avx2x1   gen() 18269 MB/s
Aug 18 09:59:07 test2 kernel: raid6: avx2x2   gen() 21593 MB/s
Aug 18 09:59:07 test2 kernel: raid6: avx2x4   gen() 25238 MB/s
Aug 18 09:59:07 test2 kernel: raid6: using algorithm avx2x4 gen() 25238 MB/s
Aug 18 09:59:07 test2 kernel: raid6: sse2x1   xor()  7031 MB/s
Aug 18 09:59:07 test2 kernel: raid6: sse2x2   xor()  8330 MB/s
Aug 18 09:59:07 test2 kernel: raid6: sse2x4   xor()  9363 MB/s
Aug 18 09:59:07 test2 kernel: raid6: .... xor() 9363 MB/s, rmw enabled
Aug 18 09:59:07 test2 kernel: raid6: using avx2x2 recovery algorithm

 

This should support a 25-drive array before CPU becomes bottleneck in dual-parity sync/check. Notice that for the xor calculation, if AVX instruction set is present, kernel uses it and doesn't bother measuring others.

 

Very interesting and useful information.  Given that the forum, the wiki, and Google have a decade of CPU recommendations that do not take this into account, we should promote these figures out of the logs and into the GUI.

 

As a path of least resistance, perhaps system info would be a sane place to present the figures in a human-readable way.

Link to comment

As a path of least resistance, perhaps system info would be a sane place to present the figures in a human-readable way.

 

You mean something like this?

 

That is a nice place to display the information.  Now all that remains is to figure out what the numbers are telling us in the real world, in terms of things like write speeds and parity check speeds and times.

Link to comment

This is a bit off-topic, but how many of you are actually running a server that is using dual parity at this time?  And what are its size and drive types?

I have 2 x 6TB WD Reds for parity on my main server.

Parity is valid
Last checked on Mon 01 Aug 2016 02:59:38 PM EDT (19 days ago), finding 0 errors.
Duration: 14 hours, 59 minutes, 37 seconds. Average speed: 111.2 MB/s

Link to comment

Hi there,

 

I just upgraded from RC3 to RC4 and, although everything upgraded without issue, I am now unable to launch my Windows 10 VM with VGA passthrough, which worked fine before the upgrade.  I did the upgrade via the plugin.

 

Attached are my diagnostic file, error message, and VM config.

 

Please let me know what I can do to get this working again.

 

Thanks

fosterserver-diagnostics-20160820-1147.zip

Screen_Shot_2016-08-20_at_11_48.55_AM.png

Screen_Shot_2016-08-20_at_11_48.00_AM.png

Link to comment

As a path of least resistance, perhaps system info would be a sane place to present the figures in a human-readable way.

 

You mean something like this?

 

Meh - I don't see the value in presenting this info because there's nothing you can do with it.  The rate "is what it is".

Link to comment

As a path of least resistance, perhaps system info would be a sane place to present the figures in a human-readable way.

 

You mean something like this?

 

Meh - I don't see the value in presenting this info because there's nothing you can do with it.  The rate "is what it is".

 

Sure, for the system you have. But it is very helpful for what-if situations.

 

The community wants to have a database of various setups and their performance, so users on lower-end systems know what to expect IF they go dual parity.  It also helps users who are getting lower performance than they'd like by showing what a possible upgrade would give them.

Link to comment

You mean something like this?

 

Meh - I don't see the value in presenting this info because there's nothing you can do with it.  The rate "is what it is".

 

Sure, for the system you have. But it is very helpful for what-if situations.

 

The community wants to have a database of various setups and their performance, so users on lower-end systems know what to expect IF they go dual parity.  It also helps users who are getting lower performance than they'd like by showing what a possible upgrade would give them.

 

I like the idea of having a number that would indicate expected dual parity performance, but from what I've been reading, I don't know that this is it:

 

Taken alone these numbers are pretty impressive. Of course this isn’t surprising; a lot of people have put a great deal of effort into making these routines extremely efficient. For comparison, at this throughput it would require somewhere around 2% of a single core’s cycles to rebuild an eight-drive array (assuming each drive pushes somewhere around 60MByte/s, which is itself a rather generous number).

 

Looking at the code responsible for these messages, these figures arise from timing a simple loop to generate syndromes from a dummy buffer. As this buffer fits in cache these performance numbers clearly aren’t what we will see under actual workloads.

http://bgamari.github.io/posts/2015-04-27-software-raid-performance.html
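For scale, the quoted ~2% figure works out as follows if you read the boot-time gen() number the same way as the estimates earlier in the thread (the 25238 MB/s figure is the i5-4430's measurement above; this of course ignores the cache-residency caveat the blog is making):

# Back-of-the-envelope for the "~2% of a single core" claim in the quote.
gen_mb_s = 25238                 # boot-time gen() figure from the i5-4430 above
drives, per_drive_mb_s = 8, 60   # the blog's assumptions
array_mb_s = drives * per_drive_mb_s
print(f"~{100 * array_mb_s / gen_mb_s:.1f}% of one core")   # ~1.9%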

 

 

Also, from my own experience, AMD FM2 CPUs have considerably higher results compared to equivalent Intel CPUs, yet are much slower with dual parity.

 

 

Link to comment

As a path of least resistance, perhaps system info would be a sane place to present the figures in a human-readable way.

 

You mean something like this?

 

Meh - I don't see the value in presenting this info because there's nothing you can do with it.  The rate "is what it is".

 

Sure, for the system you have. But it is very helpful for what-if situations.

 

The community wants to have a database of various setups and their performance, so users on lower-end systems know what to expect IF they go dual parity.  It also helps users who are getting lower performance than they'd like by showing what a possible upgrade would give them.

 

Not disputing that; in fact, we have something in the works.  My objection is that there's no point putting it in the Info box.

Link to comment
