unRAID Server Version 6.2.2 Available


limetech


The file at https://raw.githubusercontent.com/limetech/unRAIDServer/master/unRAIDServer.plg is 6.2.1.  I'm not sure if they changed the download location in 6.2.1; I didn't see 6.2.2 released on github.com.

 

After updating to 6.2.1, I see that they did change the location.  Now it is grabbing the plugin from https://raw.github.com/limetech/webgui/6.2/plugins/unRAIDServer/unRAIDServer.plg

 

So in theory, someone going from < 6.2.1 could download that plg file, drop it in /tmp/plugins, and then update without having to go through multiple steps.
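In case anyone wants to try that shortcut, something along these lines should do it from the console (an untested sketch; the URL is just the 6.2 branch location mentioned above):

```bash
# Untested sketch: fetch the 6.2 branch plg directly and drop it where the
# updater looks for it, then update as described in the post above.
mkdir -p /tmp/plugins
wget -O /tmp/plugins/unRAIDServer.plg \
  https://raw.github.com/limetech/webgui/6.2/plugins/unRAIDServer/unRAIDServer.plg
```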


When NCQ first became a thing here, there was a fair amount of discussion and testing, and we found it wasn't helpful; it actually resulted in slower performance most of the time, with only rare exceptions.  So it was turned off by default, but left as an option for anyone to experiment with and change.  There have been a couple of generations of hard drives since then, and it may be time to test again, if anyone is willing.

 

I haven't tested this for a while, so I did some quick tests; I was especially curious how my SSD server would perform:

 

SSD server - 4% slower on average with NCQ on

Normal server - 5% slower on average with NCQ on - HDDs used were 4TB WD Blues (same as Greens)

 

I only tested sequential write speed because that's what interests me; results may be different for random writes.
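For anyone who wants to repeat the test, NCQ is already exposed as an option in the unRAID settings (as noted above), and the effective per-drive queue depth can also be checked and changed from the console through sysfs. A rough sketch (sdX is a placeholder for whichever drive you're testing):

```bash
# Rough sketch: inspect and change the NCQ queue depth for one drive.
# sdX is a placeholder; a depth of 1 effectively disables NCQ, while 31 is
# the usual maximum with NCQ enabled.
cat /sys/block/sdX/device/queue_depth         # show current depth
echo 1  > /sys/block/sdX/device/queue_depth   # NCQ off
echo 31 > /sys/block/sdX/device/queue_depth   # NCQ on
```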


Very interesting results re: NCQ.  I'm surprised the NCQ overhead results in an actual slowdown, but clearly it does.  The whole purpose of NCQ is to reduce seek overhead by minimizing average seek distances.  This would normally be much more apparent with random writes; but in unRAID I'm not sure you'd benefit from it due to the 4 I/Os per write that take place (read old data, read old parity, write new data, write new parity -- actually that's now 6 with dual parity systems) ... so the "next" I/O likely doesn't happen in time to benefit from NCQ.

 

But I'd have thought the impact would be neutral -- i.e. no seek benefit, but not requiring more time.  It'd be very interesting to see the actual head movements that are taking place with/without NCQ, but without modified, instrumented drives there's no way to do that.

 


The update to 6.2.2 went fine, but I have since noticed that using the system buttons plugin to shut the server down no longer works.  I get the "shutting down" screen but it never actually shuts down; I have to physically press the server's power button.

 

Not sure if that is a problem with the OS or the plugin, but this was working fine in 6.2.1 to my knowledge.

 

I'll have to dig out some diagnostics, as the server appears to start back up fine without a parity check.

 

 


Is there some general guideline for the increase in parity check time when adding a second parity drive?  My checks were approximately 11 hours with one parity drive on 6.2.0.  I've since added a second parity drive.  I'm at 18 hours and it looks like I have about another 6 hours to go.  Normal?  Or some issue I should investigate?

 

I'm on a mix of built-in motherboard SATA ports and a Supermicro SASLP-MV8.


I'd recommend looking for other threads about this, but here is what I remember:

 

1. If your second parity drive is twice as big as the first, then you can expect that parity check times will double since it has to confirm that all the extra bits are zero.

 

2. Another possibility: if you used the tunables tester on an earlier version of unRAID, the results may not work as well on 6.2 or with two parity drives.  A tunables tester for 6.2 is underway, but not released yet:

  https://lime-technology.com/forum/index.php?topic=29009.msg507866#msg507866

 

3. If you have a really low-end CPU, it might not support the AVX2 instruction set (a quick way to check is shown after this list):

  https://lime-technology.com/forum/index.php?topic=51308.msg491919#msg491919

This is mentioned (at a high level) in the release notes.

 

4. If your SATA card is bandwidth-constrained, it will slow down as you add more drives.

 

If none of that applies, then maybe there is a loose cable or something?  Someone may be able to tell from your diagnostics.
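For point 3 above, the quickest way to check is to look at the CPU flags from the unRAID console; a one-liner like this should do it:

```bash
# Check whether the CPU advertises the AVX2 flag (relevant to the dual-parity
# code paths mentioned in the release notes).
grep -m1 -o 'avx2' /proc/cpuinfo || echo 'AVX2 not supported'
```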


Thanks, that's very helpful.  It finished up at 25 hours, an average of 43.9MB/s, which is a little more than twice my previous average time.  To answer your questions:

 

1.  Both drives are the same size: 4TB

2.  I had run tunables.  I did it using the hacked script when I upgraded to 6.2.0, but before the second parity drive was added.  Will investigate this.  Might need to revert or change parameters given the new drive.

3.  I don't think this is it.  I have a relatively recent Haswell Xeon, and Intel's ARK seems to indicate AVX2 support.

4.  Could be this.  I should probably also confirm both parity drives are on motherboard ports instead of the SATA card (a quick way to check from the console is shown below).
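For checking that without opening the case, one way is to look at the by-path links, which show which controller each drive sits on (a quick sketch; the PCI addresses will obviously differ per board):

```bash
# Sketch: list drives by PCI path so you can see which ones hang off the
# onboard SATA controller versus the SASLP (partition entries filtered out).
ls -l /dev/disk/by-path/ | grep -v part
```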

 

Many thanks.


That is on the slow side.  The SASLP is somewhat bandwidth-limited, but it's still capable of around 80MB/s fully loaded.  If you post the board you're using (including which slots are used by the controller(s)), which SATA ports are in use, and your current tunable settings, we might be able to give you some advice to improve that.


I've been pretty solidly averaging ~89MB/s for a while now with the same general drive makeup.

 

I'm on this motherboard: http://www.newegg.com/Product/Product.aspx?Item=N82E16813182821

It has 14 SATA ports, and it looks like I wired 12 of them into my case.  My case's other 8 bays all go to the SASLP (for now, at least).

 

It looks like my current layout has 6 drives (5 in array + 1 warm spare) on the SASLP and 8 on the motherboard (7 in array + 1 mounted non-array drive). 

 

It looks like Parity 2 is on the SASLP, so I should rectify that.  Notably, that would have been the 5th array drive on that card.  I have 3 empty wired slots on the mobo I could easily relocate drives to, so I'll probably just move 3 drives over... re-run tunables... and see how it goes?  Seem reasonable?

 

If that goes well, maybe rewire the case to move 2 of the bays currently assigned to the SASLP over to the two unconnected motherboard ports for future use.


The tunables script is not yet optimized for v6.2.  With that board and the number of disks in use you should get much better performance; the SASLP will be the limiting factor.

 

You didn't post your tunables, but although they're not universal, try the ones below; they give good performance with almost all the hardware I've tried and should be better than whatever settings you're using.

[attached image: tunables.jpg (suggested tunable settings)]


I thought I read earlier in the thread that the script was suboptimal because one of the tunables was removed, but the normal scans didn't change that value anyway.  Maybe I misunderstood.  In any case, I think sync_window increased to the next power of two and that was it; I only changed that one parameter.

 

Using the same order as that image, my settings are 128, 1408, 512, 192... so, vastly different.  Will give those a shot.

 

I have 16GB of ECC RAM and typically sit at about 25% utilization, if it matters.
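If it helps with comparing, the values the md driver is actually running with can be dumped from the console; something like this should work (hedged: the exact variable names in the output vary a little between releases, and the saved values live in /boot/config/disk.cfg):

```bash
# Hedged sketch: show the array driver's current state/tunables and pick out
# the stripe- and sync-related entries.
mdcmd status | grep -i -E 'stripes|sync'
```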

  • 1 month later...

The update broke Community Applications because the DockerClient.php file moved from /usr/local/emhttp/plugins/dynamix.docker.manager to /usr/local/emhttp/plugins/dynamix.docker.manager/include.

 

Now I'm going to have to figure out which version of unRAID is running and then require_once the appropriate path.
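A minimal sketch of the check being described, written as a console one-off rather than the plugin's actual PHP (the two paths are just the old and new locations named above):

```bash
# Hedged sketch: report which of the two known locations holds DockerClient.php
# on this install, i.e. which path a plugin would need to require_once.
NEW=/usr/local/emhttp/plugins/dynamix.docker.manager/include/DockerClient.php
OLD=/usr/local/emhttp/plugins/dynamix.docker.manager/DockerClient.php
if [ -f "$NEW" ]; then echo "use $NEW"; else echo "use $OLD"; fi
```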

Pretty sure that happened during the rc phase of 6.1

 

But the simple solution is to first make sure CA is up to date, and second, update unRAID to 6.2.4.

 

Sent from my SM-T560NU using Tapatalk

 

 


Ironically, you have replied to a bot that copy/pasted your own post from here.

>:(  I'm just impressed I managed to actually type anything at 3:30 am

The only reason I went looking for it is because their only other post was obvious spam.