unRAID Server Release 6.1.3 Available


limetech

Recommended Posts

...  I believe we should remove the first 2 classes from this discussion, not because they aren't serious issues, but because they really aren't parity check software issues.

 

EXCEPT that a CPU should NOT be maxed out for a simple parity check ... so WHY that's happening -- which did NOT happen with v5 or previous v6 versions -- should be a matter of interest.

 

... I agree the SAS2LP issue is a separate problem that I'm sure is already being looked into.

 

 

... If your CPU is maxed out (or close to it), then NOTHING is going to work right!  It's not just parity checks.  For some, it's single-core, low-speed CPUs at fault; don't know about others.  Gary is right, and they *should* work fine with the minimal demands of only running a NAS, and nothing more.

 

And they DO work just fine with v5 ... and apparently with pre-6.1.3 versions of v6 (as I noted I haven't tried earlier v6 versions).

 

I think I'll reset back to 6.1.3 for one more test -- the "no GUI" until it's done.  I might even try using 6.1.2 just for grins ... although candidly I'm quite happy with 5.0.6 for this system, so it's not worth too much "fiddling".
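As a side note, that "no GUI" run doesn't even need the webGui to start the check -- it can be kicked off from a console/SSH session instead.  A minimal sketch, assuming the usual /root/mdcmd interface (the exact syntax may differ between releases, so treat it as illustrative only):

# Start a non-correcting (read-only) parity check from the console,
# then log out -- no webGui session ever has to be opened.
/root/mdcmd check NOCORRECT      # assumed syntax for a read-only check

# Later, from another console login, see how far it has gotten:
/root/mdcmd status | grep -i resync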

 

 

Link to comment

ikosa, your first post (the one that started this) appears to be the only one that implicates 6.1.3 (changes in 6.1.3 from 6.1.2).  But later in this post, you never mention 6.1.2, only 6.0.1 and 6.1.3.  Do you possibly have a test result from 6.1.2?  It's still available on the LimeTech download page, if you want to test again with 6.1.2.

Sorry about that. At first I didn't examine my old parity checks closely for the unRAID version, and I wasn't sure whether the upgrade affected it or not, so I just asked that way. After trying to eliminate other factors, and feeling that I'm not alone, I looked closely at all my parity checks since 6.0 (from the stored notifications) and at which unRAID version was running on each date. That's when I realized I had never done any parity checks with 6.1, 6.1.1 or 6.1.2 because of the close release dates, and I tried to make that clear in this post.

I'm planning to downgrade to 6.1.2 and test with it, but if the issue continues I'll have to test the older versions back to 6.0.1. That's a lot of long hours without unRAID and without dockers. Is there a faster way to test this? For instance, running a parity check for a fixed time (1 or 2 hours, for example) and comparing the position; or running the check until it reaches a set position (10% or 15%, for example); or running the tunables test script and comparing results?

 

 

If your CPU is maxed out (or close to it), then NOTHING is going to work right!  It's not just parity checks.  For some, it's single-core, low-speed CPUs at fault; don't know about others.  Gary is right, and they *should* work fine with the minimal demands of only running a NAS, and nothing more.  But that's a separate issue.  My advice for those in this class is to try another parity check with NO webGui running at all.  You already know how long a parity check can run.  Start the parity check from the webGui, then close the browser completely on all stations running the webGui, and don't start it again until you know it's done.  (You can use browsers for other things, but don't try opening the webGui anywhere.)  With this test, you can determine what percentage of impact the webGui is having on the CPU.

Indeed, dockers, VMs and the webGui affect the speed of a parity check, but our old checks were also done under the same conditions. So to be fair, all tests should be repeated under those same conditions.

When I realized something was wrong with my parity check speed (during my first, and longest, check with 6.1.3), I disabled Docker to gain speed and to check whether something in Docker was causing the issue (not sure exactly when, but before 50%). I can't give accurate numbers, but it didn't change much.

EDIT: I also disabled Cache Dirs.

Link to comment

To look at the full picture you need to take Dockers and VMs into the equation as well.

 

When timing the parity check execution it would be better to have both Dockers and VMs disabled and minimize 'external' factors.

ok...  Explain this to me.

 

Results from the unRAID tunables tester.  All tests done 2x to verify it's not a fluke.  And no other computer accessing unRAID at all.

 

Test #1:  Docker disabled.  Cache Dirs enabled.  No other plugins active.  Fresh boot.  Test started within 30 seconds of array being mounted.

Test 1   - md_sync_window=384  - Completed in 20.115 seconds  =  44.0 MB/s
Test 2   - md_sync_window=512  - Completed in 20.113 seconds  =  49.7 MB/s
Test 3   - md_sync_window=640  - Completed in 20.114 seconds  =  58.3 MB/s
Test 4   - md_sync_window=768  - Completed in 20.112 seconds  =  65.6 MB/s
Test 5   - md_sync_window=896  - Completed in 20.114 seconds  =  69.8 MB/s
Test 6   - md_sync_window=1024 - Completed in 20.112 seconds  =  75.4 MB/s
Test 7   - md_sync_window=1152 - Completed in 20.115 seconds  =  82.4 MB/s
Test 8   - md_sync_window=1280 - Completed in 20.115 seconds  =  86.8 MB/s
Test 9   - md_sync_window=1408 - Completed in 20.119 seconds  =  91.4 MB/s
Test 10  - md_sync_window=1536 - Completed in 20.110 seconds  =  96.1 MB/s
Test 11  - md_sync_window=1664 - Completed in 20.108 seconds  = 101.2 MB/s
Test 12  - md_sync_window=1792 - Completed in 20.110 seconds  =  97.1 MB/s
Test 13  - md_sync_window=1920 - Completed in 20.108 seconds  = 110.1 MB/s
Test 14  - md_sync_window=2048 - Completed in 20.104 seconds  = 112.6 MB/s

 

Test #2: Docker enabled.  Cache Dirs enabled.  No other plugins active.  Fresh boot.  Test started within 30 seconds of array being mounted.

Cache drive being hammered by all the containers booting up.  Array being scanned by Cache Dirs.

Test 1   - md_sync_window=384  - Completed in 20.117 seconds  =  64.1 MB/s
Test 2   - md_sync_window=512  - Completed in 20.128 seconds  =  80.8 MB/s
Test 3   - md_sync_window=640  - Completed in 20.118 seconds  =  92.9 MB/s
Test 4   - md_sync_window=768  - Completed in 20.124 seconds  = 102.1 MB/s
Test 5   - md_sync_window=896  - Completed in 20.126 seconds  = 113.1 MB/s
Test 6   - md_sync_window=1024 - Completed in 20.117 seconds  = 119.2 MB/s
Test 7   - md_sync_window=1152 - Completed in 20.145 seconds  = 116.4 MB/s
Test 8   - md_sync_window=1280 - Completed in 20.116 seconds  = 122.5 MB/s
Test 9   - md_sync_window=1408 - Completed in 20.112 seconds  = 113.1 MB/s
Test 10  - md_sync_window=1536 - Completed in 20.112 seconds  = 122.5 MB/s
Test 11  - md_sync_window=1664 - Completed in 20.116 seconds  = 122.9 MB/s
Test 12  - md_sync_window=1792 - Completed in 20.109 seconds  = 121.2 MB/s
Test 13  - md_sync_window=1920 - Completed in 20.110 seconds  = 122.7 MB/s
Test 14  - md_sync_window=2048 - Completed in 20.118 seconds  = 123.0 MB/s

My SAS2LP and/or CPU/mobo prefers to have a full load on it when doing parity checks.    :o  And like I said, this pattern goes all the way back to 6b14e with my testing on the preemptible kernel (when I basically ran straight parity checks for a week for Tom), and is repeatable for my system.  As an aside, the speeds I got under 6.1.3 were actually an improvement over 6.1.2 (all else stayed the same) by ~15 MB/s.

 

I guess the real problem here is that all of these parity check speed discussions are highly hardware dependent, and it's very difficult to pin down 100% where the actual fault lies.

Link to comment

Squid: the results seem miscalculated to me, am I missing something?

 

Test#1

Test 1 Completed in 20.115 seconds  =  44.0 MB/s

Test#2

Test 1 Completed in 20.117 seconds  =  64.1 MB/s

 

Test#1 completed in a shorter time but calculated speed is lower.

The tunables tester calculates the MB/s from how much data is transferred in the time period.
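A quick back-of-the-envelope check of the first rows shows why: the test window is fixed at roughly 20 seconds, and only the amount of data moved in that window changes (rates taken from Squid's results above; the arithmetic just recovers the implied data moved):

awk 'BEGIN {
  printf "Test #1, row 1: %.0f MB moved in 20.115 s at 44.0 MB/s\n", 44.0 * 20.115
  printf "Test #2, row 1: %.0f MB moved in 20.117 s at 64.1 MB/s\n", 64.1 * 20.117
}'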
Link to comment

... Is there a faster way to test this? For instance, running a parity check for a fixed time (1 or 2 hours, for example) and comparing the position, or running the check until it reaches a set position (10% or 15%, for example).

 

Running for a fixed amount of time and noting the position will give a pretty good idea of the relative speeds.

 

I'd run 30 min or an hour for each different version and/or settings that you want to check.
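A rough sketch of how that could be scripted from the console, assuming the usual /root/mdcmd interface and that 'mdcmd status' reports the check position as mdResyncPos (the command syntax, the variable name and its units are assumptions here -- only the relative numbers between versions/settings matter):

#!/bin/bash
# Run a non-correcting parity check for a fixed interval, note how far it got,
# then cancel it.  Repeat on each unRAID version and compare the positions.
DURATION=3600                               # 1 hour per test run

/root/mdcmd check NOCORRECT                 # assumed: start a read-only check
sleep "$DURATION"
POS=$(/root/mdcmd status | grep -o 'mdResyncPos=[0-9]*' | cut -d= -f2)
echo "Position after ${DURATION}s: ${POS}"  # units as reported by the md driver
/root/mdcmd nocheck                         # assumed: cancel the running check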

 

Link to comment

To look at the full picture you need to take Dockers and VMs into the equation as well.

 

When timing the parity check execution it would be better to have both Dockers and VMs disabled and minimize 'external' factors.

 

+1.  Whenever I care to look at "true" parity check speeds, I turn off all dockers and VMs and make sure the server is not being taxed (e.g. by streaming BDRips).

Link to comment

Agree -- you especially don't want it streaming ANYTHING ... that would just thrash the disk repeatedly and slow things WAY down.    When I do a parity check, the server is never used for ANYTHING until it completes.  No reads; no writes; absolutely nothing except an occasional check of the status.

 

Link to comment

My SAS2LP and/or CPU/mobo prefers to have a full load on it when doing parity checks.    :o  And like I said, this pattern goes all the way back to 6b14e with my testing on the preemptible kernel (when I basically ran straight parity checks for a week for Tom), and is repeatable for my system.  As an aside, the speeds I got under 6.1.3 were actually an improvement over 6.1.2 (all else stayed the same) by ~15 MB/s.

 

I guess the real problem here is that all of these parity check speed discussions are highly hardware dependent, and it's very difficult to pin down 100% where the actual fault lies.

 

I think your SAS2LP is a wildcard, too many strange behaviors, and that taints any conclusions.  The fact that *sometimes* it causes spurious parity errors makes it hard to ever trust it.  We need to see if someone can replicate your results, without that card.

Link to comment

Agree -- you especially don't want it streaming ANYTHING ... that would just thrash the disk repeatedly and slow things WAY down.    When I do a parity check, the server is never used for ANYTHING until it completes.  No reads; no writes; absolutely nothing except an occasional check of the status.

 

That all sounds well and good but with the size of my disks (8TB) I'd have to take my server offline for 16 hours in order to follow those guidelines.  That's not really feasible for many users.

Link to comment

Agree -- you especially don't want it streaming ANYTHING ... that would just thrash the disk repeatedly and slow things WAY down.    When I do a parity check, the server is never used for ANYTHING until it completes.  No reads; no writes; absolutely nothing except an occasional check of the status.

 

That all sounds well and good but with the size of my disks (8TB) I'd have to take my server offline for 16 hours in order to follow those guidelines.  That's not really feasible for many users.

 

Agree that with modern very large drives (6-8TB and growing) the parity check times can get long enough that it's not always practical to effectively be offline for the time the check takes.    Note, of course, that the server isn't really "off-line" -- it can still be used, it will just be slower than normal (and will also slow down the parity check).    There have been several suggestions over the years to make parity checks "self-throttling" ... i.e. to essentially pause themselves during other activity on the server => this would make a check take much longer, but would pretty much eliminate slowdowns in other activity.

 

In any event, it doesn't "hurt" anything to be using the server during a check ... it just slows down all involved activities.
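Just to illustrate what such "self-throttling" could look like, here is a purely hypothetical sketch (this is not an existing unRAID feature; the mdcmd syntax, the md_sync_window values, the eth0 interface name and the busy threshold are all assumptions):

#!/bin/bash
# Hypothetical throttle loop: shrink md_sync_window while users are actively
# hitting the server over the network, restore it once the server is idle.
FAST=2048        # md_sync_window to use when the server is otherwise idle
SLOW=384         # md_sync_window to use while the server is busy
BUSY_KB=5000     # >~5 MB/s of LAN traffic in a 1 s sample counts as "busy"

net_bytes() {    # total rx+tx bytes on eth0 (interface name assumed)
    awk -F: '/eth0/ {split($2, f, " "); print f[1] + f[9]}' /proc/net/dev
}

while sleep 30; do
    b=$(net_bytes); sleep 1; a=$(net_bytes)
    if [ $(( (a - b) / 1024 )) -gt "$BUSY_KB" ]; then
        /root/mdcmd set md_sync_window $SLOW   # assumed mdcmd syntax
    else
        /root/mdcmd set md_sync_window $FAST
    fi
done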

 

Link to comment

FWIW I just finished another parity check on v6.1.3.    Reset my server back to 6.1.3; changed my disk tunables from the defaults to the same values I had been using on 5.0.6;  checked the "don't update during parity checks" box for display updates; and then did a full parity check -- and except for a few quick status checks, didn't have any browser windows open to the GUI.    Result still wasn't as good as with 5.0.6 [8:05:42]; but was a significant improvement over the time I had noted when I first loaded 6.1.3 [10:34:04].    This time, with the parameter changes, it took 8:33:59.  Still a bit disappointing, but at least instead of 31% longer it's now only 6% longer  :)

 

I loaded the GUI 4-5 times for status checks, and noted the CPU % each time.  The CPU was at 98% at the 2 hr point; 90% at the 3 hr point; and then dropped to 45-50% when I looked at the 5, 7, and 8 hr points.

 

Although I still think it shouldn't take longer than with v5 for this basic array operation, it's good enough that I'll leave it on 6.1.3, at least for a while.  I'm hopeful that 6.1.4 will show improvement  :)
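For reference, those percentages follow directly from the reported times; a quick arithmetic check using only the durations quoted above:

awk 'BEGIN {
  base  =  8*3600 +  5*60 + 42   # 5.0.6 run:               8:05:42
  first = 10*3600 + 34*60 +  4   # first 6.1.3 run:         10:34:04
  tuned =  8*3600 + 33*60 + 59   # 6.1.3 with new tunables: 8:33:59
  printf "first 6.1.3 run: +%.0f%% vs 5.0.6\n", (first/base - 1)*100   # ~31%
  printf "tuned 6.1.3 run: +%.0f%% vs 5.0.6\n", (tuned/base - 1)*100   # ~6%
}'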

 

Link to comment

That leaves the rest, and I'm not sure how many there are.  For those comparing with v5 speeds, both of the first 2 classes involve changes from v5 - it seems very possible that there may be other v5-to-v6 changes that could be responsible.  Then others believe there are changes in recent v6 versions.  We need to know exactly which versions are responsible.  Apart from that, testing without the webGui running still seems like a good test, to see what impact it has.

 

Some additional data points below.  TL;DR: All 6.x releases are performing about the same for me.  My 6.0 parity checks are faster than my 5.x parity checks, but I've replaced some older drives since then, so that is not a valid comparison.

 

My environment:  unRAID is a VM on VMware ESXi 6.0.  The VM has 1.8 GB of memory allocated to it, as well as two processor cores.  The underlying CPU is an 8-core AMD FX-8350.  I am running 2 TB WD RED drives (3 data, 1 parity).  My disk controller is dedicated ("passed through") to the unRAID VM.  It's an LSI SAS3041E controller.  The array is about 35% utilized.  Disk format is XFS.

 

Release  -  Duration  -  Average speed

6.1.3    -  5:15:36   -  105.6 MB/sec
6.1.1    -  4:59:28   -  113.3 MB/sec
6.0.1    -  5:05:53   -  109.0 MB/sec
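As a rough sanity check, those averages line up with the drive size: with 2 TB drives a full check covers 2 TB, so for the 6.1.3 run, for example:

awk 'BEGIN {
  secs = 5*3600 + 15*60 + 36               # 6.1.3 duration: 5:15:36
  printf "%.1f MB/sec\n", 2000000 / secs   # 2 TB in decimal MB -> ~105.6 MB/sec
}'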

 

John

Link to comment

Agree -- you especially don't want it streaming ANYTHING ... that would just thrash the disk repeatedly and slow things WAY down.    When I do a parity check, the server is never used for ANYTHING until it completes.  No reads; no writes; absolutely nothing except an occasional check of the status.

 

That all sounds well and good but with the size of my disks (8TB) I'd have to take my server offline for 16 hours in order to follow those guidelines.  That's not really feasible for many users.

 

Agree that with modern very large drives (6-8TB and growing) the parity check times can get long enough that it's not always practical to effectively be offline for the time the check takes.    Note, of course, that the server isn't really "off-line" -- it can still be used, it will just be slower than normal (and will also slow down the parity check).    There have been several suggestions over the years to make parity checks "self-throttling" ... i.e. to essentially pause themselves during other activity on the server => this would make a check take much longer, but would pretty much eliminate slowdowns in other activity.

 

In any event, it doesn't "hurt" anything to be using the server during a check ... it just slows down all involved activities.

 

Right, that's how I've been doing parity checks for months since I got my 8TB drives.  I have Plex running and people streaming from it at some point during the 16hr parity check.

 

I was just reacting to your statement that you have NOTHING happening with regards to your server during parity checks, and that's just not feasible for me.

Link to comment

If that test setup is still available, I hope johnnie.black will consider adding data points for 6.1.2 and 6.1.3 (with and without the SAS2LP)!

 

I had to repurpose some of the SSDs used in those tests so unfortunately can’t do direct comparisons but I did a few parity checks comparing v5.0.6 and stable v6 releases.

 

Hardware used:

 

Server 1 – Supermicro S9SCL-F, Celeron G1620, 4GB DDR3

Server 2 – Supermicro X7SBE, Celeron E1200, 2GB DDR2

 

Common hardware: 12 SSDs (4x32GB+4x80GB+2x120GB+2x180GB) on Dell Perc H310 and Adaptec 1430SA

 

To keep the tests as accurate as possible, they were done on a clean unRAID install with the webGUI closed during the check. The only plugin installed on v6 was Dynamix System Stats, but it was not used during the tests on server 2 because it impacts performance there (about 5 to 10 MB/s); it does not impact performance on server 1, and I used it once for a second pass on server 2 for the screens below.

 

Average speed for a full parity check (MB/sec):

 

                    Server 1 –  Server 2

 

V5.0.6    -    223.1    -  182.8

V6.0.0    -    223.7    -  167.9

V6.0.1    -    223.9    -  173.6

V6.1.0    -    223.7    -  167.9

V6.1.1    -    223.9    -  173.3

V6.1.2    -    224.5    -  172.9

V6.1.3    -    224.2    -    173.8

 

For low-power CPUs there's a small hit going from v5 to v6, but I can't find any significant difference between the v6 releases. Naturally this does not mean there isn't an issue with some specific hardware, but anyone experiencing it should try to pin down in which version it starts to occur and post the hardware used.

 

The screens below are a sample from each server; the slowest one with v6 can still do 100 MB/s+ when reading all 12 disks, which I find pretty acceptable for a dual-core Celeron @ 1.6GHz. Expect slower performance for servers with similar CPUs and more disks.

[Attached screenshots: server1.jpg, server2.jpg]

Link to comment

The main conclusion has to be that there are no general software defects introduced into unRAID, related to parity check performance.  (Thanks Johnnie!)

 

Yet there are many reports of slower parity check speeds.  Can we put them into 2 separate classes, those related to specific hardware components and those related to insufficient CPU power?

 

The SAS2LP is a known problem, affecting many but not all.  And it's probably not the only hardware component that is not working as well in the 64-bit, virtualization-enabled, v4 kernel environment.  I believe some hardware needs driver or firmware updates and better optimization for the modern environment, and that includes the motherboard BIOS.  Older hardware may never be updated, so may need replacement.  In general, it's always best when hardware is designed for the OSes and OS environments it will run in.

 

As to increased CPU requirements, that's always been normal with the move to GUI-based interfaces.  A little nostalgia: some of us can remember the days before the first GUIs, when you could scroll textual data up the screen MUCH faster than you could possibly read - and then saw the first GUIs attempt it!  I can remember being open-mouthed, watching the laughably absurd view of the top line being rewritten, then the second line, then the third, and finally the bottom line being blanked then written with a new line of text, then back to the top to repeat this ridiculous process, taking many seconds to scroll the whole page a single line up!  We didn't have CPU monitoring tools then, but we didn't need them; the CPU was obviously doing everything in its power!  I remember being terribly embarrassed for the poor programmer responsible for that.  Later GUIs and better programming produced faster scrolls, but you could still see the ripple down the page with each line advanced.  My point is, GUIs do tens of thousands of times as much work as textual consoles, no matter how well written.  Slower CPUs will feel more of the load, and any time CPU usage is very high, all operations are going to be affected.

 

I strongly recommend putting a check in the box, for no screen updates during operations such as the parity check, especially if you have a low power CPU.  And close the webGui whenever you don't need it.

Link to comment

The main conclusion has to be that there are no general software defects introduced into unRAID, related to parity check performance.  (Thanks Johnnie!)

 

Yet there are many reports of slower parity check speeds.  Can we put them into 2 separate classes, those related to specific hardware components and those related to insufficient CPU power?

 

The SAS2LP is a known problem, affecting many but not all.  And it's probably not the only hardware component that is not working as well in the 64-bit, virtualization-enabled, v4 kernel environment.  I believe some hardware needs driver or firmware updates and better optimization for the modern environment, and that includes the motherboard BIOS.  Older hardware may never be updated, so may need replacement.  In general, it's always best when hardware is designed for the OSes and OS environments it will run in.

 

As to increased CPU requirements, that's always been normal with the move to GUI-based interfaces.  A little nostalgia: some of us can remember the days before the first GUIs, when you could scroll textual data up the screen MUCH faster than you could possibly read - and then saw the first GUIs attempt it!  I can remember being open-mouthed, watching the laughably absurd view of the top line being rewritten, then the second line, then the third, and finally the bottom line being blanked then written with a new line of text, then back to the top to repeat this ridiculous process, taking many seconds to scroll the whole page a single line up!  We didn't have CPU monitoring tools then, but we didn't need them; the CPU was obviously doing everything in its power!  I remember being terribly embarrassed for the poor programmer responsible for that.  Later GUIs and better programming produced faster scrolls, but you could still see the ripple down the page with each line advanced.  My point is, GUIs do tens of thousands of times as much work as textual consoles, no matter how well written.  Slower CPUs will feel more of the load, and any time CPU usage is very high, all operations are going to be affected.

 

I strongly recommend putting a check in the box, for no screen updates during operations such as the parity check, especially if you have a low power CPU.  And close the webGui whenever you don't need it.

 

+1

Link to comment

I strongly recommend putting a check in the box, for no screen updates during operations such as the parity check, especially if you have a low power CPU.  And close the webGui whenever you don't need it.

 

In SETTINGS > DISPLAY SETTINGS, there is an option near the bottom: PAGE UPDATE FREQUENCY.

Options are Disabled, Real-Time, Regular, Slow

 

I set mine to disabled.

Perhaps those who are having performance issues should check this setting and disable it temporarily.

 

Perhaps we need a hook into the SMART monitoring system to either automatically expand the monitoring window or disable it temporarily during parity checks (i.e. the attribute polling).

 

For those who are having performance issues, perhaps setting this to some higher value, e.g. 86400 (a full day), would help.

From what I remember from another conversation, a poll for attributes causes the SATA channel to be flushed.

Link to comment

If that test setup is still available, I hope johnnie.black will consider adding data points for 6.1.2 and 6.1.3 (with and without the SAS2LP)!

 

I had to repurpose some of the SSDs used in those tests so unfortunately can’t do direct comparisons but I did a few parity checks comparing v5.0.6 and stable v6 releases.

 

Hardware used:

 

Server 1 – Supermicro S9SCL-F, Celeron G1620, 4GB DDR3

Server 2 – Supermicro X7SBE, Celeron E1200, 2GB DDR2

 

Common hardware: 12 SSDs (4x32GB+4x80GB+2x120GB+2x180GB) on Dell Perc H310 and Adaptec 1430SA

 

To keep the tests as accurate as possible, they were done on a clean unRAID install with the webGUI closed during the check. The only plugin installed on v6 was Dynamix System Stats, but it was not used during the tests on server 2 because it impacts performance there (about 5 to 10 MB/s); it does not impact performance on server 1, and I used it once for a second pass on server 2 for the screens below.

 

Average speed for a full parity check (MB/sec):

 

                    Server 1 –  Server 2

 

V5.0.6    -    223.1    -  182.8

V6.0.0    -    223.7    -  167.9

V6.0.1    -    223.9    -  173.6

V6.1.0    -    223.7    -  167.9

V6.1.1    -    223.9    -  173.3

V6.1.2    -    224.5    -  172.9

V6.1.3    -    224.2    -    173.8

 

For low-power CPUs there's a small hit going from v5 to v6, but I can't find any significant difference between the v6 releases. Naturally this does not mean there isn't an issue with some specific hardware, but anyone experiencing it should try to pin down in which version it starts to occur and post the hardware used.

 

The screens below are a sample from each server; the slowest one with v6 can still do 100 MB/s+ when reading all 12 disks, which I find pretty acceptable for a dual-core Celeron @ 1.6GHz. Expect slower performance for servers with similar CPUs and more disks.

 

Thanks for your testing, really useful  :)

Link to comment

If that test setup is still available, I hope johnnie.black will consider adding data points for 6.1.2 and 6.1.3 (with and without the SAS2LP)!

 

I had to repurpose some of the SSDs used in those tests so unfortunately can’t do direct comparisons but I did a few parity checks comparing v5.0.6 and stable v6 releases.

 

Hardware used:

 

Server 1 – Supermicro S9SCL-F, Celeron G1620, 4GB DDR3

Server 2 – Supermicro X7SBE, Celeron E1200, 2GB DDR2

 

Common hardware: 12 SSDs (4x32GB+4x80GB+2x120GB+2x180GB) on Dell Perc H310 and Adaptec 1430SA

 

To keep the tests as accurate as possible, they were done on a clean unRAID install with the webGUI closed during the check. The only plugin installed on v6 was Dynamix System Stats, but it was not used during the tests on server 2 because it impacts performance there (about 5 to 10 MB/s); it does not impact performance on server 1, and I used it once for a second pass on server 2 for the screens below.

 

Average speed for a full parity check (MB/sec):

 

                    Server 1 –  Server 2

 

V5.0.6    -    223.1    -  182.8

V6.0.0    -    223.7    -  167.9

V6.0.1    -    223.9    -  173.6

V6.1.0    -    223.7    -  167.9

V6.1.1    -    223.9    -  173.3

V6.1.2    -    224.5    -  172.9

V6.1.3    -    224.2    -    173.8

 

For low-power CPUs there's a small hit going from v5 to v6, but I can't find any significant difference between the v6 releases. Naturally this does not mean there isn't an issue with some specific hardware, but anyone experiencing it should try to pin down in which version it starts to occur and post the hardware used.

 

The screens below are a sample from each server; the slowest one with v6 can still do 100 MB/s+ when reading all 12 disks, which I find pretty acceptable for a dual-core Celeron @ 1.6GHz. Expect slower performance for servers with similar CPUs and more disks.

 

Nice test => the low-powered server shows almost exactly the same amount of degradation that mine does with the settings I used in my last test of this ... i.e. about a 6% reduction in speed.

 

6% isn't all that bad (much better than the 31% I saw with my first test before adjusting some settings) ... but nevertheless I don't understand what might have changed to cause this.    It's also resulting in VERY high CPU utilization during the checks ... something that was not the case in previous versions.    The CPU demands of a GUI don't account for this, as these tests were done with no GUI loaded and (at least in my case) the "disable updates during parity checks" box checked.

 

It's pretty much in the "no big deal" category, as any new systems I build will have CPU's with at least 8-10k Passmark ratings, but it's nevertheless frustrating that a simple NAS function like a parity check is now maxing out the CPU even if there are no other add-ons in the system.

 

Link to comment

I had to repurpose some of the SSDs used in those tests so unfortunately can’t do direct comparisons but I did a few parity checks comparing v5.0.6 and stable v6 releases.

 

Hardware used:

 

Server 1 – Supermicro S9SCL-F, Celeron G1620, 4GB DDR3

Server 2 – Supermicro X7SBE, Celeron E1200, 2GB DDR2

 

Common hardware: 12 SSDs (4x32GB+4x80GB+2x120GB+2x180GB) on Dell Perc H310 and Adaptec 1430SA

 

To keep the tests as accurate as possible, they were done on a clean unRAID install with the webGUI closed during the check. The only plugin installed on v6 was Dynamix System Stats, but it was not used during the tests on server 2 because it impacts performance there (about 5 to 10 MB/s); it does not impact performance on server 1, and I used it once for a second pass on server 2 for the screens below.

 

Average speed for a full parity check (MB/sec):

 

                    Server 1 –  Server 2

 

V5.0.6    -    223.1    -  182.8

V6.0.0    -    223.7    -  167.9

V6.0.1    -    223.9    -  173.6

V6.1.0    -    223.7    -  167.9

V6.1.1    -    223.9    -  173.3

V6.1.2    -    224.5    -  172.9

V6.1.3    -    224.2    -    173.8

 

For low-power CPUs there's a small hit going from v5 to v6, but I can't find any significant difference between the v6 releases. Naturally this does not mean there isn't an issue with some specific hardware, but anyone experiencing it should try to pin down in which version it starts to occur and post the hardware used.

 

The screens below are a sample from each server; the slowest one with v6 can still do 100 MB/s+ when reading all 12 disks, which I find pretty acceptable for a dual-core Celeron @ 1.6GHz. Expect slower performance for servers with similar CPUs and more disks.

 

Nice test :)

Can you tell us which process is eating almost all of the CPU?

Link to comment
