unRAID Server Release 6.2.0-rc2 Available



Bumping the kernel within an RC cycle is pretty unusual - I assume this was an exception to fix something?

It's a kernel "maintenance" release, i.e., bug fixes only. unRAID 6.2 will stay on kernel 4.4.

Excellent, thanks for the clarification.


Hi,

 

I've been running all the betas and RCs of 6.2. I just installed rc2 and rebooted, and everything went fine until my TeamSpeak server docker didn't start :(

I can't decide if it's an rc2 issue or a docker issue, but my other 3 dockers all worked fine without any intervention. No matter what I do, I can't get TS to start :P

 

 

I've attached a diagnostics bundle.

 

And here's the TS log:

 

-------------------------------------
[linuxserver.io ASCII-art banner]

Brought to you by linuxserver.io
We do accept donations at:
https://www.linuxserver.io/donations
-------------------------------------
GID/UID
-------------------------------------
User uid: 99
User gid: 100
-------------------------------------

*** Running /etc/my_init.d/20_apt_update.sh...
finding fastest mirror
Getting list of mirrors...done.
Testing latency to mirror(s)
Getting list of launchpad URLs...done.
Looking up 3 status(es)
1. mirror.sax.uk.as61049.net
Latency: 189.02 ms
Org: Exascale
Status: Up to date
Speed: 1 Gbps
2. mirror.as29550.net
Latency: 189.49 ms
Org: XILO Communications Ltd.
Status: Up to date
Speed: 1 Gbps
3. mirror.sov.uk.goscomb.net
Latency: 208.49 ms
Org: Goscomb Technologies Limited
Status: Up to date
Speed: 1 Gbps
New config file saved to /defaults/sources.list
We are now refreshing packages from apt repositories, this *may* take a while
Ign http://mirror.sax.uk.as61049.net trusty InRelease
Get:1 http://mirror.sax.uk.as61049.net trusty-updates InRelease [65.9 kB]
Get:2 http://mirror.sax.uk.as61049.net trusty-security InRelease [65.9 kB]
Get:3 http://mirror.sax.uk.as61049.net trusty Release.gpg [933 B]
Get:4 http://mirror.sax.uk.as61049.net trusty-updates/main Sources [351 kB]
Get:5 http://mirror.sax.uk.as61049.net trusty-updates/restricted Sources [5,217 B]
Get:6 http://mirror.sax.uk.as61049.net trusty-updates/universe Sources [199 kB]
Get:7 http://mirror.sax.uk.as61049.net trusty-updates/multiverse Sources [5,946 B]
Get:8 http://mirror.sax.uk.as61049.net trusty-updates/main amd64 Packages [992 kB]
Get:9 http://mirror.sax.uk.as61049.net trusty-updates/restricted amd64 Packages [23.5 kB]
Get:10 http://mirror.sax.uk.as61049.net trusty-updates/universe amd64 Packages [470 kB]
Get:11 http://mirror.sax.uk.as61049.net trusty-updates/multiverse amd64 Packages [14.3 kB]
Get:12 http://mirror.sax.uk.as61049.net trusty-security/main Sources [149 kB]
Get:13 http://mirror.sax.uk.as61049.net trusty-security/restricted Sources [3,920 B]
Get:14 http://mirror.sax.uk.as61049.net trusty-security/universe Sources [44.1 kB]
Get:15 http://mirror.sax.uk.as61049.net trusty-security/multiverse Sources [2,550 B]
Get:16 http://mirror.sax.uk.as61049.net trusty-security/main amd64 Packages [629 kB]
Get:17 http://mirror.sax.uk.as61049.net trusty-security/restricted amd64 Packages [20.2 kB]
Get:18 http://mirror.sax.uk.as61049.net trusty-security/universe amd64 Packages [170 kB]
Get:19 http://mirror.sax.uk.as61049.net trusty-security/multiverse amd64 Packages [4,850 B]
Get:20 http://mirror.sax.uk.as61049.net trusty Release [58.5 kB]
Get:21 http://mirror.sax.uk.as61049.net trusty/main Sources [1,335 kB]
Get:22 http://mirror.sax.uk.as61049.net trusty/restricted Sources [5,335 B]
Get:23 http://mirror.sax.uk.as61049.net trusty/universe Sources [7,926 kB]
Get:24 http://mirror.sax.uk.as61049.net trusty/multiverse Sources [211 kB]
Get:25 http://mirror.sax.uk.as61049.net trusty/main amd64 Packages [1,743 kB]
Get:26 http://mirror.sax.uk.as61049.net trusty/restricted amd64 Packages [16.0 kB]
Get:27 http://mirror.sax.uk.as61049.net trusty/universe amd64 Packages [7,589 kB]
Get:28 http://mirror.sax.uk.as61049.net trusty/multiverse amd64 Packages [169 kB]
Fetched 22.3 MB in 19s (1,155 kB/s)
Reading package lists...
*** Running /etc/my_init.d/30_install_gsm.sh...
*** Running /etc/my_init.d/40_update_gsm_ts3.sh...
Infomation! The current user (abc) does not have ownership of the following files:
/config/lgsm/functions/check_permissions.sh: line 20: column: command not found
find: `standard output': Broken pipe
find: write error

Infomation! The current user (abc) does not have ownership of the following files:
/config/lgsm/functions/check_permissions.sh: line 20: column: command not found
find: `standard output': Broken pipe
find: write error

*** /etc/my_init.d/40_update_gsm_ts3.sh failed with status 1


*** Killing all processes...

 

The update and the failure of the docker may be entirely coincidental. Here is the support thread for that particular docker.

 

http://lime-technology.com/forum/index.php?topic=43603
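For what it's worth, the failing step in that log is check_permissions.sh calling the column utility, which doesn't appear to be installed inside the container (on Ubuntu trusty it ships in the bsdmainutils package). A rough way to check from the host - the container name "teamspeak" is just an assumption, substitute your own:

docker exec -it teamspeak which column || echo "column is missing"
docker exec -it teamspeak apt-get update
docker exec -it teamspeak apt-get install -y bsdmainutils

Note that anything installed this way is lost when the container is recreated, so treat it as a diagnostic step rather than a fix.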

 

Thanks sparky - I'll see what the good old Linuxserver guys say :)

 

I'm one of those guys, lol.


Is this safe?

The safest option for you is to wait for the final release. A release candidate is for experienced users only. If you use it on your regular array, you might lose data or run into complicated issues.

 

On the other hand, and to keep things in perspective, I'm not aware of anyone reporting data loss that can be attributed to running any of the 6.2 beta or RC versions.

 


Has anybody recovered from a dual drive failure yet with dual parity?


Multiple times with the first betas on my test server, always successfully.

 

Out of curiosity, does this include a scenario where you lose a parity drive and a data drive at once? Or both parity drives but no data drives? Just curious as to the results.


All the scenarios I could think of, including the ones you mention, plus a dual data disk failure, and rebuilding one failed disk while adding a second parity drive at the same time - and I think that's it.


I'm assuming it's DNS-related, as I tried to ping google.com from the server and it failed, but it will ping IP addresses. I will try to update the DNS settings and reboot.

 

EDIT: Yes, it was the DNS server settings. I updated them and now everything works again. If you have the same problem, check your DNS settings.
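For anyone hitting the same thing, a minimal way to separate routing from name resolution (the addresses are just examples):

ping -c1 8.8.8.8      # succeeds -> the gateway/routing is fine
ping -c1 google.com   # fails -> name resolution is the problem
cat /etc/resolv.conf  # shows which DNS servers the box is actually using

If the first ping works and the second fails, the DNS entries under Settings > Network Settings are the place to look.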


I'm trying to add a new(ish) disk to the array, but after I've assigned it to an empty slot (the disk 2 slot) and started the array, it just loops straight back to "click start to bring array online and pre-clear disk, etc." without the array actually starting. The disk hasn't been used on unRAID before and was freshly cleared in OS X.

 

I recall an issue like this quite some time ago (perhaps V5 era). Diagnostics attached.

 

Cheers

tower-diagnostics-20160716-1035.zip


Same problem here. I pre-cleared and formatted a drive on my test machine, but when I moved it to my main machine and assigned it to a slot, the array refuses to start.

 



Possibly we have an issue with the driver for LSI cards. After using the plugin to upgrade to 6.2.0, all disks are missing.

All was working in 6.1.9.

dmesg log: mpt2sas_cm0: failure

LSI 2008 in IT mode (IBM M1015), running passthrough in ESXi 5.5.

 

Check the following thread: https://lime-technology.com/forum/index.php?topic=49481.0

 

"The workaround solution was to add mpt3sas.msix_disable=1 option to the syslinux config."

 

It worked for me when I upgraded to 6.2 beta 21 and it still works in RC2.
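For anyone unsure where that option goes: it's added to the append line of /boot/syslinux/syslinux.cfg on the flash drive. A sketch of the relevant stanza, assuming an otherwise stock config (your labels and existing options may differ):

label unRAID OS
  menu default
  kernel /bzimage
  append mpt3sas.msix_disable=1 initrd=/bzroot

Reboot after saving for the parameter to take effect.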


I'm trying to add a new(ish) disk to the array, but after I've assigned it to an empty slot (the disk 2 slot) and started the array, it just loops straight back to "click start to bring array online and pre-clear disk, etc." without the array actually starting.

I had this happen to me. I ended up having to do a New Config before the Start array button would work and show the option to format the new disk.


"The workaround solution was to add mpt3sas.msix_disable=1 option to the syslinux config."

Yep, that worked. It seems to be an issue needing a patch for ESXi; looking into it.

Thanks!


I'm trying to add a new(ish) disk to the array, but after I've assigned it to an empty slot (the disk 2 slot) and started the array, it just loops straight back to "click start to bring array online and pre-clear disk, etc." without the array actually starting.

Yes, this is a bug.  Fixed in next release.


Anybody else seeing loads of XFS errors since this update?

 

My log is filled with the constant errors shown below, and I'm getting I/O errors on Windows-based machines trying to write to the array as well.

 

"Jul 16 12:00:58 AVALON kernel: XFS (md3): xfs_log_force: error -5 returned."

My experience is that this normally means the disk has dropped offline.
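Error -5 is EIO, which fits a dropped disk. A rough way to confirm from the console - /dev/sdX is a placeholder for whichever device backs md3:

dmesg | grep -iE 'md3|I/O error'   # look for the disk detaching or erroring out
smartctl -a /dev/sdX               # check the SMART health of the disk behind md3

If the disk really has dropped, the Main page of the webGui and the diagnostics zip will show it as missing or disabled.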
This topic is now closed to further replies.