unRAID Server Release 6.2.0-rc4 Available



First time using 6.2? Please read the Original 6.2-beta Announcement Post first.

 

Also please review the Announcement posts for previous 6.2-rc releases, especially if this is your first upgrade to 6.2-rc.

 

IMPORTANT

- Your server must have access to the Internet to use the unRAID 6.2 rc.

- Posts in this thread should be to report bugs and comment on features ONLY.

 

HOW TO REPORT A BUG

Think you've found a bug or other defect?  Ask yourself these questions before posting about it here:

- Have I reproduced this bug with all of my plugins disabled (booting into safe mode)?

- Can I recreate the bug consistently, and have I documented the steps to do so?

- Have I downloaded my diagnostics from the unRAID webGui after the bug occurred, but before I rebooted the system?

Do not post about a bug unless you can confidently answer "Yes" to all three of those questions.  Once you can, be sure to follow these guidelines, but make sure to post as a reply in this thread, not as a new topic under defect reports (we track bug reports for beta/rc releases independently of the stable release).
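
If you prefer the command line, the same diagnostics zip can also be generated from a console/SSH session; it is written to the logs folder on your flash device:

   diagnostics    # writes <server>-diagnostics-<yyyymmdd>-<hhmm>.zip to /boot/logs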

 

INSTALLING AND UPDATING

If you are currently running a previous 6.2-beta/rc release, clicking 'Check for Updates' on the Plugins page is the preferred way to upgrade.

 

Alternatively, navigate to Plugins/Install Plugin, copy this text into the box, and click Install:

https://raw.githubusercontent.com/limetech/unRAIDServer-6.2/master/unRAIDServer.plg

 

You may also download the release and create a fresh install.

 

RELEASE NOTES

 

Added support in user share file system and mover for "special" files: fifos (pipes), named sockets, device nodes.
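
For anyone who wants to verify the new special-file support, a quick test from the console (using a hypothetical share named 'test'; creating device nodes needs root, which the unRAID console gives you):

   mkfifo /mnt/user/test/my.pipe            # fifo (named pipe)
   mknod /mnt/user/test/null.dev c 1 3      # character device node (major 1, minor 3 = the null device)
   ls -l /mnt/user/test/                    # the mode strings should start with 'p' and 'c'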

 

Fixed array autostart: behavior is back to how it used to be, i.e., autostart only if there was no config change vs. the last boot.

 

Various other small fixes/improvements.

 

unRAID Server OS Change Log
===========================

Version 6.2.0-rc4 2016-08-18
----------------------------

Base distro:

- curl: version 7.50.1 (CVE-2016-5419, CVE-2016-5420, CVE-2016-5421)
- libidn: version 1.33 (CVE-2015-8948, CVE-2016-6261, CVE-2016-6262, CVE-2016-6263)
- openssh: version 7.3p1 (CVE-2015-8325, CVE-2016-6210)

Linux kernel:

- version 4.4.18
- upgraded out-of-tree Intel 10Gbit Ethernet driver ixgbe: version 4.3.15

Management:

- handle the situation when no network config file exists
- improved mac "make bootable" script device detection
- md/unraid: correct "new array" detection (so as to only permit autostart if no config change)
- mover: support moving 'special files'
- shfs: add 'mknod' support

webGui:

- display date as numeral in Parity history
- expanded CPU thread pairings
- expanded Share status and additional help
- expanded system devices text
- fixed disk name in SMART report of diagnostics
- fixed invalid Docker placeholder icons on dashboard
- fixed network counters and errors on dashboard, when VLANs are used
- fixed selection of list elements
- fixed: suppress disk utilization threshold settings for additional disks in cache pool
- fixed: suppress spinup group on ALL cache devices
- improve responsiveness of unassigned devices
- include nvme disks in SMART report of diagnostics
- purge Docker icon after container removal, also helps reload outdated icons

Link to comment

Glad to see autostart behave as before.

 

Tom, just out of curiosity: are these differences in CPU usage during a parity check with dual parity due to something changed in unRAID, or related to the Linux kernel updates (or other factors)?

 

The Q calculation is pretty expensive, especially for less-capable CPUs.  The kernel has a number of hand-coded assembly functions to do the math, and it picks the "best" one at boot (by benchmarking them).  Refer to:

http://lxr.free-electrons.com/source/lib/raid6/algos.c?v=4.4

 

Best performance is had when your CPU supports the AVX2 instruction set:

https://en.wikipedia.org/wiki/Advanced_Vector_Extensions#CPUs_with_AVX2
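
If you want to see which routine your kernel selected, the boot-time benchmark is logged, and /proc/cpuinfo shows whether your CPU has AVX2 at all:

   dmesg | grep -i raid6            # lists each gen() routine's measured speed and the one chosen
   grep -o -m1 avx2 /proc/cpuinfo   # prints 'avx2' if the CPU supports it (no output otherwise)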

 

Link to comment

Dang >:(, just switched over from unRAID 6.1.9.

 

Now my web-ui no longer works.

 

Terminal shows

 

nohup: redirecting stderr to stdout

/var/local/emhttp/network.ini: line 1: [eth0]: command not found

/var/local/emhttp/network.ini: line 16: DESCRIPTION:0=: command not found

/var/local/emhttp/network.ini: line 17: USE_DHCP:0=no: command not found

/var/local/emhttp/network.ini: line 18: IPADDR:0=192.168.1.201: command not found

/var/local/emhttp/network.ini: line 19: NETMASK:0=255.255.255.0: command not found

 

any ideas?

Link to comment


 

Run diagnostics from your console and post the resulting zip file. This is stored in the /logs folder on your flash device.
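
For what it's worth, those messages are what bash prints when an INI-style file is executed as a shell script rather than parsed, which suggests something sourced network.ini directly. A minimal reproduction with a hypothetical temp file:

   printf '[eth0]\nUSE_DHCP:0=no\n' > /tmp/test.ini
   . /tmp/test.ini
   # bash: /tmp/test.ini: line 1: [eth0]: command not found
   # bash: /tmp/test.ini: line 2: USE_DHCP:0=no: command not found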

 

Link to comment

The Q calculation is pretty expensive, especially for less-capable CPUs.  The kernel has a number of hand-coded assembly functions to do the math, and it picks the "best" one at boot (by benchmarking them).  Refer to:

http://lxr.free-electrons.com/source/lib/raid6/algos.c?v=4.4

 

Best performance is had when your CPU supports the AVX2 instruction set:

https://en.wikipedia.org/wiki/Advanced_Vector_Extensions#CPUs_with_AVX2

 

Thanks for the informative links, but sorry I wasn't clear: my question is not about the high CPU usage (I was using a very low-end CPU for those tests, so high CPU utilization is expected), but about the difference in CPU usage between releases. If you click to expand the animated GIF I posted above, you can see how it varies considerably with each v6.2 release. I wanted to know whether changes are being made to the unRAID driver, or whether the differences are caused by the different kernels (or other factors).

 

I looked at the algorithm used for RAID6, and though it is always the same one, speed varies with each release, with lower CPU usage coinciding with the fastest results (b18 and rc4), so I guess this answers my question.

Link to comment


 

Let me expand on Johnnie.black's observations.  I have converted my Testbed server (specs below) to dual parity.  These are the non-correcting parity check times and speeds for three recent beta/rc releases.

 

6.2 beta 19     7 hr 53 min 25 sec    105.3 MB/s
6.2 rc3        17 hr 15 min 42 sec     48.3 MB/s
6.2 rc4        12 hr 58 min 22 sec     64.2 MB/s

 

While rc4 times are much improved over rc3, they are still much longer than they were for beta 19.  (In fact, the beta 19 time was within a few minutes of the single parity check times!) 
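
As a sanity check on those numbers (assuming the reported speeds are averages over the full run), elapsed time multiplied by average speed should land near the parity size, and all three rows come out at roughly 3 TB:

   awk 'BEGIN { print (7*3600 + 53*60 + 25) * 105.3 / 1e6 " TB" }'    # beta 19 -> ~2.99 TB
   awk 'BEGIN { print (17*3600 + 15*60 + 42) * 48.3 / 1e6 " TB" }'    # rc3     -> ~3.00 TB
   awk 'BEGIN { print (12*3600 + 58*60 + 22) * 64.2 / 1e6 " TB" }'    # rc4     -> ~3.00 TB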

 

I have attached my diagnostics file from 6.2 rc4.

rose-diagnostics-20160819-0751.zip

Link to comment


I haven't installed rc4 yet but I had b19 and am now running rc3. None of the releases have made a significant difference in my parity check speed, including going to dual parity. I'm running an i5 CPU so maybe this is just a CPU issue.
Link to comment

I am getting the following error when I reboot unRAID.

 

/sbin/ldconfig: /usr/lib64/libffi.so.6 is not a symbolic link

 

Not sure where to look to try and resolve it.

 

Any ideas?

 

Check if you have any plugins installing a different version of this file. Start in safe mode to run without plugins.
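
A quick way to check is to look at the file itself; on a clean install libffi.so.6 is normally a symlink to the versioned library, so a regular file there suggests something overwrote it:

   ls -l /usr/lib64/libffi.so*    # expect libffi.so.6 -> libffi.so.6.x.y; a plain file means a plugin likely replaced it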

Link to comment

Let me expand on Johnnie.black's observations.  I have converted my Testbed server (specs below) to dual parity.  These are the non-correcting parity check times and speeds for three recent beta/rc releases.

 

6.2 beta 19     7 hr 53 min 25 sec    105.3 MB/s
6.2 rc3        17 hr 15 min 42 sec     48.3 MB/s
6.2 rc4        12 hr 58 min 22 sec     64.2 MB/s

 

While rc4 times are much improved over rc3, they are still much longer than they were for beta 19.  (In fact, the beta 19 time was within a few minutes of the single parity check times!) 

 

I have attached my diagnostics file from 6.2 rc4.

I haven't installed rc4 yet but I had b19 and am now running rc3. None of the releases have made a significant difference in my parity check speed, including going to dual parity. I'm running an i5 CPU so maybe this is just a CPU issue.

 

Yes, it is.  First, look back at this post:

 

      http://lime-technology.com/forum/index.php?topic=51308.msg491919#msg491919

 

Support for the AVX2 instruction set began with the Haswell processors released in 2013.  The PassMark number also has a big influence on the speed.  Ideally, the parity check speed should be limited by the hard drive read speed, but there is a point at which it becomes CPU limited.  But there is also some issue going on with the kernel in unRAID.  Perhaps Limetech can resolve this issue; I certainly don't have the information to determine that.  There is also some evidence that Intel processors handle dual parity better than AMD processors.  (This again could be a kernel issue.)  The problem as I see it is that some folks with older systems and low-end processors (and no one has the slightest idea of what counts as a low-end processor at this point) are going to get ambushed if they decide to make the leap to dual parity unless they are forewarned about what to expect...

Link to comment

 

Added support in user share file system and mover for "special" files: fifos (pipes), named sockets, device nodes.

 

The user who brought the mknod issue on RC3 to my attention has let me know that it is working properly on RC4.  I don't run the affected containers myself, so I cannot speak to this first-hand.
Link to comment
The problem as I see it is that some folks with older systems and low-end processors (and no one has the slightest idea of what counts as a low-end processor at this point) are going to get ambushed if they decide to make the leap to dual parity unless they are forewarned about what to expect...

Does this meaningfully affect copy speeds as well? If so, then yes, it's a pretty big deal.

 

If the only thing it changes is bulk parity computation, then it's not nearly as big a deal. Yes, it's less than ideal to extend disk rebuild times, but given the price you are paying, I think the benefit is well worth it. Consider if it takes twice as long to rebuild a disk with dual parity, but you still are protected from a second failure during that process. Seems like a worthwhile tradeoff. Worth mentioning, yes, but warning of an ambush? Seems a little overblown.

Link to comment

Single core Sempron CPUs should be taken out to the back paddock and shot.  Tell your kids it's "gone to the farm".

 

Any bargain-basement Sandy Bridge/Ivy Bridge Celeron will run rings round a Sempron.  Heck, even an ancient Pentium G620 can do 100MB/s parity build, and that's running on an old B75 board with 3Gb/s SATA.

Link to comment


Even my quad-core Sempron on an AM1 platform (single-channel memory only) can max out my read/write speeds on parity checks and rebuilds.  Not using dual parity, though.

 


 

 

Link to comment

The problem as I see it is that some folks with older systems and low-end processors (and no one has the slightest idea of what counts as a low-end processor at this point) are going to get ambushed if they decide to make the leap to dual parity unless they are forewarned about what to expect...

Does this meaningfully affect copy speeds as well? If so, then yes, it's a pretty big deal.

 

If the only thing it changes is bulk parity computation, then it's not nearly as big a deal. Yes, it's less than ideal to extend disk rebuild times, but given the price you are paying, I think the benefit is well worth it. Consider if it takes twice as long to rebuild a disk with dual parity, but you still are protected from a second failure during that process. Seems like a worthwhile tradeoff. Worth mentioning, yes, but warning of an ambush? Seems a little overblown.

 

What if your monthly parity checks are taking fifteen-plus hours for a 3TB array?  What is your WAF?  Remember, in order to implement dual parity, you have to procure (most likely purchase) a drive that is at least as large as your largest data drive.  If you are running older hardware, it is probably because you are somewhat concerned about cost...

Link to comment

Single core Sempron CPUs should be taken out to the back paddock and shot.  Tell your kids it's "gone to the farm".

 

Any bargain-basement Sandy Bridge/Ivy Bridge Celeron will run rings round a Sempron.  Heck, even an ancient Pentium G620 can do 100MB/s parity build, and that's running on an old B75 board with 3Gb/s SATA.

Even my quad-core Sempron on an AM1 platform (single-channel memory only) can max out my read/write speeds on parity checks and rebuilds.  Not using dual parity, though.

 


Dual parity is where the PROBLEM starts!  Until you install a dual parity setup, you have no idea what is going to happen to your parity check speeds!  You may be right about your setup, BUT until you actually test it, be careful about bragging.  (Oh, that old Sempron 140 will do 105MB/s on a single parity check...)

Link to comment
