unRAID Server Release 6.0-rc3-x86_64 Available



I've upgraded my two systems from b15 to RC3 as well. Overall it's been a smooth experience. I really like the improvements that have been made to docker and VM management. If nothing else, it's allowed me to shut down a separate ESXi host I had running a couple of CentOS servers and consolidate some of my hardware, which is much appreciated.

 

I do have two issues to report, though. One of my systems has 10x 3TB platter drives in the array and 1x 500GB cache drive, all connected through a SAS expander to an IBM M1015 flashed to IT mode. The first time I booted the machine after applying (via GUI) the upgrade to RC3, I encountered all sorts of btrfs errors and timeouts in the log. Eventually I figured out that, for whatever reason, the cache drive (which is formatted btrfs) had changed from /dev/sde to /dev/sdk. Unfortunately the array had already started automatically and was "using" /dev/sde as the cache drive. I put "using" in quotes since there was obviously no device at that location, even though the GUI showed the cache as green-balled and docker was trying to load the docker configs from /mnt/cache/docker-data/. Once I noticed the change, I stopped the array, reassigned the cache to /dev/sdk, and restarted. This seemed to resolve the issue, though I had to delete and rebuild docker.img.

 

The second issue is as jimbobulator reported; I was seeing btrfs checksum errors in the syslog for /dev/loop0. Originally I assumed the corruption was a result of the cache issue mentioned above. However, now that I see someone else experiencing the same thing I'm wondering if they're not related. After all, if the cache drive was assigned to the wrong location, docker shouldn't be able to access it to write to it at all, corruption or otherwise.

 

Anyway, my systems are now up and running smoothly. I just thought I'd chime in and document my experiences. I'm going to be watching to see if the btrfs errors or the drive assignment issues recur.

 

-A


... The first time I booted the machine after applying (via GUI) the upgrade to RC3, I encountered all sorts of btrfs errors and timeouts in the log. Eventually I figured out that, for whatever reason, the cache drive (which is formatted btrfs) had changed from /dev/sde to /dev/sdk...

Don't know exactly what your problem was, but this was not it. It is completely normal for the drive "letter" to change between boots. unRAID keeps track of which disk is which by using the drive's serial number.
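For anyone curious how that works under the hood, here's a minimal sketch of the idea (my own illustration, not unRAID's actual code): udev keeps stable names under /dev/disk/by-id that embed each drive's model and serial number, and each of those names is just a symlink to whatever /dev/sdX node the kernel happened to assign on that boot.

# list the stable by-id names and the sdX node each one currently points to;
# the sdX side can change between boots, the by-id side should not
import os

by_id = '/dev/disk/by-id'
for name in sorted(os.listdir(by_id)):
    target = os.path.realpath(os.path.join(by_id, name))
    print('%-60s -> %s' % (name, target))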

Also completed an upgrade to RC3 recently, and I was so impressed with how easy it was to set up VMs that I shut down my VMware ESX server and transplanted the MB/CPU/RAM from that server into my unRAID server.

 

I now have unRAID running on a quad-core 3.4GHz CPU with 32GB RAM and am slowly migrating all the VMs I had running on ESX to it (well, rebuilding them and restoring data backups). It's really nice to be able to mount unRAID shares directly inside a VM without having to bother with SMB/NFS, and I now have one less machine contributing to my power bill!

 

Only issue I picked up so far was this:

 

May 25 22:55:45 pooh kernel: python[21802]: segfault at 58 ip 000000000052f1cb sp 00002aedb4a02ac0 error 4 in python2.7[400000+2bd000]
..
May 26 00:04:05 pooh kernel: python[30095]: segfault at 58 ip 000000000052c8d8 sp 00002b4a97574140 error 4 in python2.7[400000+2bd000]
May 26 01:03:44 pooh kernel: python[29942]: segfault at 58 ip 000000000052c8d8 sp 00002acb5e8926e0 error 4 in python2.7[400000+2bd000]
May 26 01:08:37 pooh kernel: python[9038]: segfault at 58 ip 000000000052c8d8 sp 00002b87bacfb220 error 4 in python2.7[400000+2bd000]

 

I presume that's caused by a docker, since Python has been removed from unRAID? Any idea what might be causing this, or where I can look to see which docker is responsible? The syslog has a PID, but by the time it's logged that PID is gone. I also don't know if this is RC3 related or due to the new MB/CPU.
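One thought (just a sketch, and it assumes the segfaulting process is still alive when you look, which it wasn't by the time the syslog entries above were logged): on a docker host, /proc/<pid>/cgroup includes the container ID for processes running inside a container, so a small script can map a PID back to its container. The PID below is only an example taken from one of the segfault lines.

# rough sketch: find the docker container (if any) that owns a given PID
import os

def container_id_for_pid(pid):
    path = '/proc/%d/cgroup' % pid
    if not os.path.exists(path):
        return None  # process already exited, as happened with the logged PIDs
    with open(path) as f:
        for line in f:
            if 'docker' in line:
                # lines look roughly like 4:cpu:/docker/<64-character container id>
                return line.strip().rsplit('/', 1)[-1]
    return None

print(container_id_for_pid(29942))  # example PID from the syslog above
# match the printed ID against 'docker ps --no-trunc' output to get the container name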

 

 


... The first time I booted the machine after applying (via GUI) the upgrade to RC3, I encountered all sorts of btrfs errors and timeouts in the log. Eventually I figured out that, for whatever reason, the cache drive (which is formatted btrfs) had changed from /dev/sde to /dev/sdk...

Don't know exactly what your problem was, but this was not it. It is completely normal for the drive "letter" to change between boots. unRAID keeps track of which disk is which by using the drive's serial number.

 

Fair enough. Now that you mention it, I think I knew that actually...  :P

 

In spite of that, I still suspect a problem with the changing drive assignment, regardless of whether unRAID keeps track of the drives by label, UUID, path, etc. I realize a drive's reported serial number isn't likely to change... the only response I have to that is to shrug.  ;) As I mentioned, while I was seeing the errors and the array was started, the GUI showed the cache drive as green-balled. When I stopped the array, the cache was listed as "unassigned", and /dev/sdk was the only drive available in the dropdown. I mentally latched onto the most obvious difference -- the drive path. There were likely other differences I didn't notice. To me, this implies that the drive was at one location when the array started, but that the reference somehow changed after the array started and was invalidated.

 

Anyway, if it happens again I'll pay more attention. At the time I was just trying to restore functionality ASAP.

 

-A


The second issue is as jimbobulator reported; I was seeing btrfs checksum errors in the syslog for /dev/loop0. Originally I assumed the corruption was a result of the cache issue mentioned above. However, now that I see someone else experiencing the same thing I'm wondering if they're not related. After all, if the cache drive was assigned to the wrong location, docker shouldn't be able to access it to write to it at all, corruption or otherwise.

 

Not sure about your case, but for me the checksum errors are within the btrfs docker image file.  My drive is formatted ext4.  Here's another case I found on the support forum with no responses [b15]:

http://lime-technology.com/forum/index.php?topic=39372.0

 

edit: Nevermind - I see you specifically mentioned loop0.  Sorry.  So probably the same issue.
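If it helps anyone confirm the same thing, here's a quick sketch that checks which file is actually backing /dev/loop0 (on unRAID 6 with docker enabled it should point at the docker.img btrfs image; the sysfs attribute only exists for loop devices that are currently attached):

# print the backing file of each active loop device
import glob

for path in glob.glob('/sys/block/loop*/loop/backing_file'):
    dev = path.split('/')[3]  # e.g. 'loop0'
    with open(path) as f:
        print('%s -> %s' % (dev, f.read().strip()))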


 

The second issue is as jimbobulator reported; I was seeing btrfs checksum errors in the syslog for /dev/loop0. Originally I assumed the corruption was a result of the cache issue mentioned above. However, now that I see someone else experiencing the same thing I'm wondering if they're not related. After all, if the cache drive was assigned to the wrong location, docker shouldn't be able to access it to write to it at all, corruption or otherwise.

 

Not sure about your case, but for me the checksum errors are within the btrfs docker image file.  My drive is formatted ext4.  Here's another case I found on the support forum with no responses [b15]:

http://lime-technology.com/forum/index.php?topic=39372.0

 

edit: Nevermind - I see you specifically mentioned loop0.  Sorry.  So probably the same issue.

I've experienced the same message regarding loop0 checksum errors, along with dockers freezing (unable to start or stop them). It seemed worse when my cache pool was circa 50% full. Unfortunately I forgot to save the log (not very helpful, I know), but I wanted to voice another experience of this problem.


... The first time I booted the machine after applying (via GUI) the upgrade to RC3, I encountered all sorts of btrfs errors and timeouts in the log. Eventually I figured out that, for whatever reason, the cache drive (which is formatted btrfs) had changed from /dev/sde to /dev/sdk...

Don't know exactly what your problem was, but this was not it. It is completely normal for the drive "letter" to change between boots. unRAID keeps track of which disk is which by using the drive's serial number.

 

Fair enough. Now that you mention it, I think I knew that actually...  :P...

 

For reference, this wasn't always the case. I believe it was changed in 5.x (or earlier), which explains why you might have thought that.


 

Only issue I picked up so far was this:

 

May 25 22:55:45 pooh kernel: python[21802]: segfault at 58 ip 000000000052f1cb sp 00002aedb4a02ac0 error 4 in python2.7[400000+2bd000]
..
May 26 00:04:05 pooh kernel: python[30095]: segfault at 58 ip 000000000052c8d8 sp 00002b4a97574140 error 4 in python2.7[400000+2bd000]
May 26 01:03:44 pooh kernel: python[29942]: segfault at 58 ip 000000000052c8d8 sp 00002acb5e8926e0 error 4 in python2.7[400000+2bd000]
May 26 01:08:37 pooh kernel: python[9038]: segfault at 58 ip 000000000052c8d8 sp 00002b87bacfb220 error 4 in python2.7[400000+2bd000]

 

 

 

I saw a similar error in my syslog today and also assumed it was docker related.  I'll see if I can find the entry.

 

EDIT:  Found it...

 

May 25 17:17:07 unRAID kernel: python[29037]: segfault at 58 ip 000000000052c8d8 sp 00002b3de8c040e0 error 4 in python2.7[400000+2bd000]

 

Here is my list of containers...maybe we have one in common.

 

binhex-delugevpn	binhex/arch-delugevpn:latest
CouchPotato		hurricane/docker-couchpotato:latest
KODI-Headless		sparklyballs/headless-kodi-helix:latest
MariaDB			needo/mariadb:latest
MediaBrowser		mediabrowser/mbserver:latest
nzbgetvpn		jshridha/docker-nzbgetvpn:latest
Sonarr			hurricane/docker-nzbdrone:latest

 

John


Hi guys!

I am migrating from my old v5 to v6b14b, and my v6 Tower will have 1 disk outside the array (hosting my VMs) and 9 disks in the array.

At the moment I still need to migrate 2 disks from my old v5 to v6.

Is it safe to upgrade now to rc3?

If yes, how should I proceed? Just click "Check for Updates" on the Plugins page?

Rgds.

EDIT: I own 2 keys (Pro and Plus), bought when only v5 existed. One of them is still on v5 and the other on v6b14b. At the moment I still need to migrate 2 disks from my old v5 to v6, and I would need the extra licensed drives of the Plus license in RC3 (since I have reached the 1 parity + 7 disks limitation). Since I am in the middle of the disk migration (done one disk at a time using Teracopy), how would you proceed?


 

Only issue I picked up so far was this:

 

May 25 22:55:45 pooh kernel: python[21802]: segfault at 58 ip 000000000052f1cb sp 00002aedb4a02ac0 error 4 in python2.7[400000+2bd000]
..
May 26 00:04:05 pooh kernel: python[30095]: segfault at 58 ip 000000000052c8d8 sp 00002b4a97574140 error 4 in python2.7[400000+2bd000]
May 26 01:03:44 pooh kernel: python[29942]: segfault at 58 ip 000000000052c8d8 sp 00002acb5e8926e0 error 4 in python2.7[400000+2bd000]
May 26 01:08:37 pooh kernel: python[9038]: segfault at 58 ip 000000000052c8d8 sp 00002b87bacfb220 error 4 in python2.7[400000+2bd000]

 

 

 

I saw a similar error in my syslog today and also assumed it was docker related.  I'll see if I can find the entry.

 

EDIT:  Found it...

 

May 25 17:17:07 unRAID kernel: python[29037]: segfault at 58 ip 000000000052c8d8 sp 00002b3de8c040e0 error 4 in python2.7[400000+2bd000]

 

Here is my list of containers...maybe we have one in common.

 

binhex-delugevpn	binhex/arch-delugevpn:latest
CouchPotato		hurricane/docker-couchpotato:latest
KODI-Headless		sparklyballs/headless-kodi-helix:latest
MariaDB			needo/mariadb:latest
MediaBrowser		mediabrowser/mbserver:latest
nzbgetvpn		jshridha/docker-nzbgetvpn:latest
Sonarr			hurricane/docker-nzbdrone:latest

 

John

 

I have:

 

binhex/arch-madsonic

needo/couchpotato

needo/deluge

sparklyballs/headless-kodi-helix

needo/mariadb

needo/nzbdrone

gfjardim/nzbget

gfjardim/pyload

 

Looks like headless-kodi is common between the three of us?


I have segfaults in my log too.

 

And whilst I have koma, which is kodi-headless plus mariadb, I also have CouchPotato, which we all have in common.

 

We may have different containers for Couch, but they're still Couch.

 

So don't hang me just yet; tie the first knot around my neck perhaps, but make it loose, lol.


I am currently trying to upgrade to RC3 from 6.0-beta10a using the preferred upgrade method (clicking "check for updates" in the Plugins tab) and it is saying that no updates are available.  Am I doing something wrong, or is it just that you are unable to upgrade to RC3 from such an old version?  I have no problem with using the manual update process, but I am curious as to why this isn't working for me.

 

I've attached a syslog just in case it offers any information.  If there is no clear reason why this isn't working for me, I'll just use the manual process, which (correct me if I'm wrong) is as follows; a rough sketch of how I'd script the copy step is included after the file list:

 

To manually upgrade, it is only necessary to copy these files from the zip file to the root of your USB Flash device:

 

- bzimage

- bzroot

- license.txt

- readme.txt

- syslinux/syslinux.cfg (see below)
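If I do end up doing it by hand, something like this is what I have in mind (just a sketch: the zip name and the /boot mount point are assumptions on my part, and I'd deal with syslinux.cfg separately as the note says):

# copy the release files from the zip to the root of the flash device
import os, zipfile

ZIP_NAME = 'unRAIDServer-6.0-rc3-x86_64.zip'  # assumed name of the downloaded zip
FLASH_ROOT = '/boot'                          # unRAID normally mounts the USB flash here
FILES = ['bzimage', 'bzroot', 'license.txt', 'readme.txt']

with zipfile.ZipFile(ZIP_NAME) as z:
    for name in FILES:
        with open(os.path.join(FLASH_ROOT, name), 'wb') as dst:
            dst.write(z.read(name))
# syslinux/syslinux.cfg is intentionally left out here (see the note above)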

 

Thank you in advance!!!

syslog-2015-05-27.txt


I have segfaults in my log too.

So don't hang me just yet; tie the first knot around my neck perhaps, but make it loose, lol.

 

Hahah, no worries. :) You're right, we all have that one in common too, and it could be any docker using Python 2.7.

 

From your logs, can you tell when you upgraded to RC3, so we can see whether it only started happening after that?


I have segfaults in my log too.

So don't hang me just yet; tie the first knot around my neck perhaps, but make it loose, lol.

 

Hahah, no worries. :) You're right, we all have that one in common too, and it could be any docker using Python 2.7.

 

From your logs, can you tell when you upgraded to RC3, so we can see whether it only started happening after that?

 

I am still on b15.
