unRAID OS version 6.3.1 Stable Release Available


limetech

Recommended Posts

I still get a KP (kernel panic) at boot up. Had to go back to 6.2.4. The diag attached is from 6.2.4.

 

Kernel Panic - not syncing: Attempted to kill init! exit code=0x0000009

 

It won't let me attach the diag, but you can find them from 2 days ago here: http://lime-technology.com/forum/index.php?topic=56204.msg537031#msg537031
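As an aside (a hedged suggestion, not something from this post): on unRAID 6.2 and later the same diagnostics zip the web GUI produces can also be generated from a console or SSH session, which sidesteps forum upload problems:

```shell
# The built-in `diagnostics` command writes an anonymized zip to the flash
# drive (conventionally /boot/logs; the exact path may vary by version),
# from where it can be copied off and attached to a post manually.
diagnostics
ls -lt /boot/logs | head
```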

 

Good to know, I won't bother trying to update then. They didn't even acknowledge this was a problem in the last build, so I wasn't expecting it to be resolved in this one.

Just because this particular problem was not addressed in the last release doesn't mean it's not "acknowledged". There were several other issues that necessitated 6.3.1 being released. We look at all issues that are reported. It becomes difficult to track things down if we can't reproduce them and if people don't work with us on it.

Link to comment
Thanks for the update. It wasn't responded to in the other thread, so I just wanted to make sure it didn't get lost. I replied to your post on the other topic.

 

Link to comment

Just did a parity check with 6.3.1, and interestingly it's 18 minutes faster than 6.3 was  :)

 

With 6.2.4 my times (for about 6 parity checks) varied between 8:59:47 and 9:00:53 -- i.e. VERY consistent.

With 6.3 (one check) it took 9:27:22

With 6.3.1 (one check) it took 9:09:58

 

With v5 on this same system they were always within a few seconds of each other, and were almost an hour faster.

 

It IS an older system -- a Pentium E5300 -- so I suspect it's just a function of the higher CPU demands of v6.
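The step differences quoted above are easy to sanity-check. A small sketch (times taken from this post; the helper name is mine, and bash is assumed) that converts the H:MM:SS figures to seconds and compares releases:

```shell
# hms_to_s converts an H:MM:SS duration to whole seconds.
hms_to_s() {
  IFS=: read -r h m s <<< "$1"
  echo $(( 10#$h * 3600 + 10#$m * 60 + 10#$s ))
}

v624=$(hms_to_s 9:00:53)   # slowest observed 6.2.4 check
v630=$(hms_to_s 9:27:22)   # single 6.3.0 check
v631=$(hms_to_s 9:09:58)   # single 6.3.1 check

echo "6.3.0 was $(( v630 - v624 )) s slower than 6.2.4"   # 1589 s (~26 min)
echo "6.3.1 recovered $(( v630 - v631 )) s of that"       # 1044 s (~17-18 min)
```

So 6.3.1 clawed back about two thirds of the regression 6.3.0 introduced on this box, consistent with the "18 minutes faster" figure above.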

 

Link to comment

Things are getting quite a bit more complex these days, especially with all kinds of background processes and browser-based javascript polling.  Kinda hard to make apples/apples comparisons.

Link to comment

VM performance is still horrible for me with 6.3.1. Downgrading to 6.2.4.

 

Diagnostics attached; however, nothing out of the ordinary.

 

Anyone running X58 motherboards? All normal with this release?

 

 

EDIT: Could it be my drive setup? I don't have a cache disk, and for the VM drives I pass through one SSD to each VM.

 

EDIT 2: Even when it freezes the CPU load is normal; perhaps the issue is with the passthrough GPU? Is there any way to test it?

 

We need to get some more information from you to help diagnose. Please explain what you mean by "horrible performance". How are you testing/validating this? What was the experience in 6.2.4 vs. 6.3.0/6.3.1?
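Not from the thread, but one hedged way to turn "horrible performance" into a comparable number: run the same quick sequential-write test inside the guest under 6.2.4 and under 6.3.1 (Linux guest assumed; the file path is just an example):

```shell
# Write 64 MiB and let dd report elapsed time and throughput on its last
# line; conv=fsync flushes to disk before dd reports, so the number is not
# just page-cache speed.
TESTFILE=/tmp/vm-disktest.bin
dd if=/dev/zero of="$TESTFILE" bs=1M count=64 conv=fsync 2>&1 | tail -n 1
```

The interesting part is the MB/s figure on dd's last line: a large drop between releases on identical hardware would point at the storage path rather than the passthrough GPU.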

Link to comment

By "horrible performance" I mean:

6.2.4 - Able to use Adobe Premiere, After Effects, Photoshop and Google Chrome with several tabs.

6.3.0/1 - Barely able to use Google Chrome with 5 tabs. Premiere takes 10x longer to open.

 

Is there any diagnostic software I could run, or anything else I could do to help?

 

--------------

 

I made a clean 6.3.1 install, clean parity, 3 SSDs as cache (diagnostics attached). Performance is still as bad as before.  :-\

 

Are there any new hardware limitations so far?

My hardware:

Evga SR-2 X58 - NF200 bridge chip - Intel 5520 chipset

2x Xeon X5675

Quadro NVS 295 + 2x GTX 960 4gb

inefavel-diagnostics-20170209-2227.zip

Link to comment

Understood. FWIW, when I run a check I simply start it, shut down the GUI, and then look at it the next morning -- so there are no web GUI updates. Display updates are also disabled in the settings. But I understand that there's a lot more "under the hood" with the v6 releases -- and the E5300 is likely struggling with the CPU demands. I just found it interesting that it actually improved with the 6.3.1 update ... it had been getting longer with every v6 release before that.

 

Link to comment

I just finished one and my results are actually about 10% worse.

 

I think no conclusions can be drawn from these figures; we would need more statistical mass for that.

 

Interesting. I HAVE noted that for any given release, my timings are VERY consistent -- generally no more than a minute's difference between checks. Significant differences are only evident between different releases.

 

 

Link to comment

Just to ask, why are people running a parity check after an upgrade? I wouldn't think a new release would break parity.

 

Those who have slow machines had issues with parity speed after the v6 upgrade compared to v5. There have been ongoing changes throughout the v6 releases to hopefully address those issues.

 

When I say slow, I'm talking single core Semprons, Core2Duos, Atoms and the likes.

Link to comment

When you write "clean parity", what do you mean?

 

Fresh formatted drives.

 

If you did that, your system will be rebuilding parity right now, and that WILL slow it down... Is that what is happening, maybe?

 

Also, there is no reason I can imagine why formatting your parity drives would have any effect... They do not even require formatting...
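A hedged sketch of how to check this on the box itself (the field names here are from memory and may differ between unRAID versions):

```shell
# unRAID's md driver reports its sync state through the mdcmd tool; a
# non-zero mdResync value (with mdResyncPos advancing) means a parity
# sync/check is still running, which would skew any performance test.
/usr/local/sbin/mdcmd status | grep -E 'mdState|mdResync'
```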

Link to comment

Why are you trying to shift the focus? That's really weird.

 

I just made a fresh install to test. I thought the performance issue could be caused by my 6.2.4 settings being wrong under 6.3.1, which is why I tested with a fresh install. However, on both a fresh 6.3.1 install and an upgrade from 6.2.4, performance took a giant hit.

Link to comment

Sorry, I do not understand. You mention that you are having performance issues, and you also mention that you have formatted your parity drives. Formatting those drives will trigger a parity rebuild, and that will certainly have an impact on performance, so it seemed like a relevant remark to make. Also, I just wanted to mention that formatting parity drives is not something you need to do; there is no benefit at all in that action. I just thought that would be good for you to know.

 

Also.... just trying to help..

 

 

Sent from my iPhone using Tapatalk

Link to comment

Sorry, but in this case it's irrelevant. The main point is to figure out what is incompatible: the hardware or my settings.

Again, I had performance issues upgrading from 6.2.4 to 6.3.0 or 6.3.1, so I wanted to test a new config from scratch.

Obviously, my performance tests were made when there was no parity check or parity rebuild running.

 

A lot of stuff in the 6.3.0 release got updated. Is there a way to download 6.3.0-rc1, rc2, etc., so I can try to narrow down what is causing it?

Link to comment

Sorry, can't help you there. Someone else probably can, though. Good luck solving your issues; it'll get fixed, it always does.

Link to comment

So I hit another kernel panic this morning. I forgot to turn on the Fix Common Issues troubleshooting mode last night, but I was able to capture a picture.

 

Same scenario as I posted about in 6.3.0:

 

CA Backup kicks off at 5 am from the cache drive to /mnt/disk17

Web GUI is unavailable

SSH into unRAID (surprisingly) still works

VMs remain working
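Since SSH keeps working during these hangs, one hedged way to make sure the log survives the eventual hard reset (standard unRAID paths assumed; the RAM-backed /var/log is lost on reboot) is to copy it to the flash drive beforehand:

```shell
# Snapshot the current syslog onto persistent flash storage with a
# timestamped name, so it can be attached to a report after rebooting.
cp /var/log/syslog "/boot/logs/syslog-$(date +%Y%m%d-%H%M%S).txt"
```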

 

[Attached photo of the kernel panic screen: ozefEmx.jpg]

 

Link to comment
