unRAID Server Release 6.1.3 Available


limetech

Recommended Posts

Interesting observation.  The slow switching from docker to another tab is more likely to occur on fast switches between tabs (e.g. less than 10 seconds on docker tab before switching to another); however, it sometimes is slow to switch to another tab even after a couple of minutes on the docker tab.  I will say it is much less likely to occur if the docker tab has been open a while.

 

Again, in my case (100% repeatable thus far), slow switching only happens from docker to another tab and never between any other tabs.

Then let me try the magic ball again...

 

Ignoring that it sometimes happens after sitting on the tab for a while, the quick switches that are consistently slow can be explained away like this:

 

Your internet is either slow or consistently maxed out, and you also are running Community Applications.

 

Basically what happens with the docker tab when CA is installed is that CA automatically downloads an updated list of the apps available (~70k download).  You can see that this is in progress if the spinning ring is still displayed on the CA tab.  Until that process is finished, getting out of the tab can be slow.  (On my pretty average internet speeds, it usually is done in ~1/2 second if nothing else is happening with regards to downloads, or ~5 seconds if my connection is maxed out).
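If you want to rule the feed download in or out, you can also time the fetch by hand from a console instead of watching the spinning ring. A rough sketch (the URL is just a placeholder -- I don't have the exact feed address in front of me, so substitute whatever CA actually pulls):

    # time just the application-feed download, independent of the browser
    curl -o /dev/null -s -w "downloaded %{size_download} bytes in %{time_total}s\n" \
         "http://example.com/path/to/ca-application-feed.json"

If that consistently comes back in well under a second, the slow tab switches are probably not the feed itself.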

The CA explanation makes sense.  However, I don't think my Internet connection is either slow or constantly maxed out.  I have a 20 meg fibre optic connection and speeds seem reasonable.  I did, however, do this test from a laptop connected via wifi and not a wired connection.  Where I am in the house, connection speed seems to fluctuate between 144 Mbps and 217 Mbps. Nothing else is actively using the Internet connection.

Link to comment


Like anything else with the internet, your mileage will vary depending upon your location and wherever Kode has the app feed hosted.
Link to comment


Works for me.  The slow switching doesn't really bother me, as I assumed something similar to what you have explained.  Some days and from some computers, there is more of an issue than others.  I never considered it a "bug", just an observation with a likely explanation.  I don't usually do fast switches between tabs anyway.  I suppose this issue is related less to the unRAID/WebUI version and more to the growing size of the CA apps list.

Link to comment

Clearly SOMETHING was changed in 6.1.3 that's causing this -- hopefully 6.1.4 will restore the performance that's been lost.

 

I have kept a record of all my parity check speeds since version 6.0, and in my case there isn't any noticeable difference between unRAID versions (it varies between 10h17m and 10h22m).

 

Whatever it is, it isn't impacting everyone...

 

I also note my parity check times -- it's the first thing I do every time I update to a new version.  This was in fact the first time I've run my oldest server on v6 (thus I don't know what its performance is on 6.0, 6.1, 6.1.1, or 6.1.2 -- just on 6.1.3).  But what I HAVE noticed is that a LOT of folks are noting significant degradation of their times with 6.1.3.

 

I suspect it's a function of CPU loading -- SOMETHING must be using significantly more CPU with the newest version.  I didn't do a comprehensive search, but I did look through several of the posts in this thread re: folks who ARE or ARE NOT having speed issues with their parity checks.  I noted the following with regard to the CPUs those various folks have ...

 

My impacted system has a Pentium E6300 (PassMark = 1707) => and parity checks take 31% longer than with v5.0.6

 

A user with a Sempron 145  (PassMark = 799) noted a 38% increase in parity check times.

 

A user with an AMD A6-6400K (PassMark = 2292) noted a 69% increase in parity check times !!

 

Several users indicated no increase in their parity check times ... the ones I noted had the following CPUs:

 

Core i3-4130T  (PassMark = 4138)

Core i5-4570S  (PassMark = 6620)

Xeon E3-1230  (PassMark = 7888)

Core i7-3770S  (PassMark = 8920)

 

The fairly clear common denominator among those with no issues is they have far more CPU "horsepower" available.

 

Note that ALL of these folks indicated the checks were on the SAME hardware -- the difference was only in the UnRAID version.

 

 

I don't mind the significant increases in CPU power that are needed for virtualization, video rendering, etc. for those who use their servers as more than a basic NAS.  But the ability to use relatively modest hardware to create a very nice NAS with UnRAID seems to have gone out the window with the latest version.  A user in another thread (Russ Uno) posted some interesting stats about how much higher CPU utilization is for some fairly basic tasks with 6.1.3 ... e.g. recording 3 streams took 15% in v5, 45% in v6 !!  [ http://lime-technology.com/forum/index.php?topic=43103.msg412665#msg412665 ]

 

The good news is that all of my newer systems have either i7-4790s or E3-1271s ... and I plan to continue to use at least that range of CPUs, so "horsepower" won't be an issue with any new systems => but it'd be nice to be able to use v6 on my older servers without losing so much performance in a fundamental task like a parity check or disk rebuild.

 

 

Link to comment

My parity check speeds seemed very stable at ~90 MB/s before 6.1.3:

01-10-2015 15:22	Duration: 2 seconds Average speed: 1.5 TB/sec	[6.1.3] (something is wrong with the duration and speed calculation, syslog attached; the actual duration was 10h 52m)
30-09-2015 15:41	Duration: Average speed: 0.0 B/sec	[6.1.3] (I don't know why the duration wasn't calculated, maybe log rotation; the actual duration was 10h 59m)
29-09-2015 02:53	Duration: unavailable (system reboot or log rotation)	[6.1.3] (power outage just before it finished; the actual duration was 12h 25m)
31-08-2015 03:21	Duration: 9 hours, 22 minutes Average speed: 89.0 MB/sec	[6.0.1]
17-08-2015 18:43	Duration: 9 hours, 4 minutes, 46 seconds Average speed: 91.8 MB/sec	[6.0.1]
15-08-2015 19:44	Duration: 9 hours, 4 minutes, 57 seconds Average speed: 91.8 MB/sec	[6.0.1]

***After this point I upgraded parity to 3TB and added a SASLP and 2 disks***
01-08-2015 11:28	Duration: 6 hours, 57 minutes, 6 seconds Average speed: 79.9 MB/sec	[6.0.1]
12-07-2015 09:50	Duration: 6 hours, 57 minutes, 26 seconds Average speed: 79.9 MB/sec	[6.0.1]
01-07-2015 11:27	Duration: 6 hours, 56 minutes, 41 seconds Average speed: 80.0 MB/sec	[6.0.1]

According to this summary, the parity check slowdown issue could be related to an update between 6.0.1 and 6.1.3.

tower-syslog-20151002-1025.zip
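As a rough sanity check on that bogus "1.5 TB/sec" entry (assuming the 3TB parity noted above), the real average for a 10h 52m run works out to roughly 76 MB/s:

    # 3,000,000 MB over 10h 52m (39,120 s)
    echo $(( 3000000 / (10*3600 + 52*60) ))    # prints 76 (MB/s) -- vs ~91.8 MB/s on 6.0.1 with the same array

So the slowdown on 6.1.3 here looks real, on the order of 15-20%, not just a reporting glitch.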

Link to comment

 

My impacted system has a Pentium E6300 (PassMark = 1707) => and parity checks take 31% longer than with v5.0.6

 

A user with a Sempron 145  (PassMark = 799) noted a 38% increase in parity check times.

 

A user with an AMD A6-6400K (PassMark = 2292) noted a 69% increase in parity check times !!

 

 

But the ability to use relatively modest hardware to create a very nice NAS with UnRAID seems to have gone out the window with the latest version.   

 

To be fair, 2 of those systems are extremely ancient, and the other is low-performance, budget hardware.  Not what you should be running your home server on if you expect blistering performance or any sort of reliability (in the case of the two older machines, which must both be heading towards 7 years old).

 

All of those machines can be easily out-performed by a Celeron J1900.

 

 

Link to comment

A couple more data points (see server specs below):

 

  My Media Server did the parity CHECK (No update) in 7 hours, 44 minutes, 57 seconds. Average speed: 107.6 MB/sec.

 

  My Test Bed Server did the Parity Check in 7 hours, 50 minutes, 39 seconds. Average speed: 106.3 MB/sec.

 

As I recall, these times are close to what they have always been -- say a half hour one way or the other.

 

 

Link to comment

Duration: 14 hours, 43 minutes, 52 seconds. Average speed: 75.4 MB/sec

 

38TB parity check - same for me as in previous versions

 

Myk

If anything, 6.1.3 increased my speed by a good margin on my SAS2LP controller.  Shaved an hour off of the parity check times and brought me back to the times that I had before the pre-emptible kernel was introduced in 6.0b14d.

I wish I could say the same.:(

Average speed: 22.8 MB/s

Looks like it'll take 50+ hours to complete on 17TB.

Link to comment

My parity check seems to be running around the same, maybe slightly slower, but then again I'm using an i7. That said, has anyone noticed the web GUI being very sluggish with this release? I've tried clearing the history, but that only resolves it for a bit and then switching tabs goes really slow again.

 

Can you be more precise about the pages where this is observed?

 

I don't have any sluggishness myself, apart from the VMs page, which sometimes takes a bit longer to load.

 

The worst pages are Dashboard, Docker, and VMs. The vast majority of the time, those pages take a good 5-10 seconds to load. Another tab that is hit or miss is the Settings page.
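If anyone wants to put a number on the sluggishness rather than eyeballing it, the page load times are easy to sample from a console. A quick sketch (it assumes the default tower hostname, that the webGui answers curl without authentication on your LAN, and that the page URLs match the tab names -- adjust all of those to suit):

    # three timing samples for each of the slow tabs
    for page in Dashboard Docker VMs Settings; do
      for i in 1 2 3; do
        curl -o /dev/null -s -w "$page: %{time_total}s\n" "http://tower/$page"
      done
    done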

Link to comment

Parity check on my test server, Celeron G1620 with 11 SSDs of various sizes

 

V6.1.2 - Duration: 11 minutes, 34 seconds. Average speed: 259.4 MB/sec

V6.1.3 - Duration: 11 minutes, 34 seconds. Average speed: 259.4 MB/sec

 

I have an older test server based on an Intel Core 2 Duo; I will test on the weekend if I can.

 

Link to comment

A couple more data points (see server specs below):

 

  My Media Server did the parity CHECK (No update) in 7 hours, 44 minutes, 57 seconds. Average speed: 107.6 MB/sec.

 

  My Test Bed Server did the Parity Check in 7 hours, 50 minutes, 39 seconds. Average speed: 106.3 MB/sec.

 

As I recall, these times are close to what they have always been -- say a half hour one way or the other.

We have pretty much the same hardware except SASLP.

What is your cpu usage when parity check is active?

Link to comment


 

Unfortunately, I have to use the GUI (Dashboard page) to check CPU usage.  I have the GUI set not to update the screen without a manual refresh during parity checks.  The CPU usage ranged between 29% and 98%, with the majority of the readings being above 80%.

 

In my experience, the GUI really seems to suck up CPU cycles, so I am not so sure that these are accurate readings.
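For what it's worth, a GUI-free reading is easy to grab from a telnet/SSH session while the check runs, which sidesteps the question of whether the Dashboard itself is inflating the numbers. Just a sketch; any of the stock tools will do:

    # sample overall CPU usage every 5 seconds for about a minute
    vmstat 5 12
    # or take a one-shot snapshot of the busiest processes
    top -bn1 | head -15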

Link to comment

Main server in my sig: average speed on 6.1.2 and 6.1.3 was 82 MB/sec.

 

Test build on a G2020 with 4 GB memory running 6.0.0: 103 MB/sec.

 

I used to get average speeds of over 115 MB/s on my main server running V5.

 

I wonder how much the inclusion of Dynamix has "contributed" to these lower speeds, especially on lower-powered hardware.

Or was it the move to XFS?

Link to comment

ikosa, your first post (the one that started this) appears to be the only one that implicates 6.1.3 (changes in 6.1.3 from 6.1.2).  But later in this post, you never mention 6.1.2, only 6.0.1 and 6.1.3.  Do you possibly have a test result from 6.1.2?  It's still available on the LimeTech download page, if you want to test again with 6.1.2.

 

rick.p, you show a significant difference between 6.1 and 6.1.3.  I just want to confirm that you feel the test regime was identical, no known support issues since 6.1?  Would it be possible to restore bzroot and bzimage from a flash backup that has your old 6.1 on it, and repeat the 6.1 test?  As above, you can also try 6.1.2, see what difference that makes.
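In case it helps, rolling back that way is just a matter of swapping two files on the flash drive. A rough sketch, assuming the backup sits in a folder on the flash (the backup path is an example -- adjust it, and keep copies of the current files so you can come straight back to 6.1.3):

    # save the 6.1.3 kernel/rootfs, then drop the 6.1 pair back in place
    cp /boot/bzimage /boot/bzimage-6.1.3
    cp /boot/bzroot  /boot/bzroot-6.1.3
    cp /boot/backup-6.1/bzimage /boot/bzimage
    cp /boot/backup-6.1/bzroot  /boot/bzroot
    # stop the array and reboot to load the restored version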

 

Please feel free to correct me, but the cases mentioned here and elsewhere seem to fall into 3 classes - those with maxed out CPU, those with SAS2LP cards, and the rest.  I believe we should remove the first 2 classes from this discussion, not because they aren't serious issues, but because they really aren't parity check software issues.

 

If your CPU is maxed out (or close to it), then NOTHING is going to work right!  It's not just parity checks.  For some, it's single-core, low-speed CPUs at fault; I don't know about the others.  Gary is right, and they *should* work fine with the minimal demands of only running a NAS, and nothing more.  But that's a separate issue.  My advice for those in this class is to try another parity check with NO webGui running at all.  You already know how long a parity check can run.  Start the parity check from the webGui, then close the browser completely on all stations running the webGui, and don't start it again until you know the check is done.  (You can use browsers for other things, but don't try opening the webGui anywhere.)  With this test, you can determine what percentage of impact the webGui is having on the CPU.
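One way to capture those numbers without touching the webGui at all is to kick off a small logger from a telnet/SSH session before starting the check and closing the browser. Just a sketch (the log path is an example):

    # append a timestamped CPU snapshot to the flash drive once a minute until killed
    while true; do
      { date; top -bn1 | head -5; echo; } >> /boot/cpu_during_check.log
      sleep 60
    done &

Comparing that log against a run with the webGui open should show pretty quickly how much of the load is the GUI itself.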

 

If you have a SAS2LP card, the problem is well known.  johnnie.black's outstanding work clearly shows the severe drop in performance from v5 to v6, especially after v6-beta6.  (As a side note, the numbers in parentheses show that not using the SAS2LP results in no change at all from v5 to v6.1.1.  If that test setup is still available, I hope johnnie.black will consider adding data points for 6.1.2 and 6.1.3, with and without the SAS2LP!)

 

That leaves the rest, and I'm not sure how many there are.  For those comparing with v5 speeds, both of the first 2 classes involve changes from v5 - it seems very possible that there may be other v5 to v6 changes that could be responsible.  Then others believe there are changes in recent v6 versions.  We need to know exactly which versions are responsible.  Apart from that, testing without the webGui running still seems a good test, to see what impact it has.

Link to comment

Parity checks are glacially slow for me too. My 68TB server just took 30 hours @ 55.4 MB/sec. Normally it would take around 19-20 hours @ about 80 MB/sec. There was no activity on the server during the parity check. The CPU is an X3470, the HBAs are 2 x SASLP-MV8 and 1 x SAS2LP-MV8, and the motherboard is a Supermicro X8SIL.

 

The only Docker I run is Plex, no VMs (unfortunately enabling VM features causes unRAID to randomly crash).

Link to comment

The only Docker I run is Plex, no VMs (unfortunately enabling VM features causes unRAID to randomly crash).

 

Plex runs daily library scans and scheduled tasks for backup and optimization purposes; these can interfere with your parity check.

 

The schedule I have set for those tasks should exclude them from being a significant factor. During the parity check I was monitoring the storage throughput window in System Stats, and it was rare to see reads go above 800 MB/sec. Normally I would see reads well above 1 GB/sec, generally around 1.25 - 1.5 GB/sec. CPU usage was reasonable at around 20%. Ultimately it's not a problem, just unusual.

Link to comment

To look at the full picture you need to take Dockers and VMs into the equation as well.

 

When timing the parity check execution it would be better to have both Dockers and VMs disabled and minimize 'external' factors.

Here's the really, really strange thing with my system.  (And it's 100% repeatable back down to 6.0b14d, when I was first testing the pre-emptible kernel for Tom.)

 

If I disable docker, my parity check speeds actually take a big hit (~30-40 MB/s).  If I leave it enabled (and just doing its normal day to day things), then I get my speed back.

 

In retrospect, I guess that throws more of a loop into the SAS2LP equation, since with docker enabled, all the channels on the SAS2LP are active.  If I disable docker, then only 7 of the 8 channels are active.
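For anyone who wants to verify which drives are actually awake during a run, a quick check from the console (device names are generic examples; adjust to your own):

    # show the power state (active/idle vs standby) of every SATA/SAS drive
    for d in /dev/sd?; do
      echo -n "$d: "
      hdparm -C "$d" | grep "drive state"
    done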

 

The server is actually very, very busy right now chewing through md5s on the entire system, but after it's done I'll rerun the tests and post the results (hopefully sometime tonight).

Link to comment
