unRAID OS version 6.5.0 Stable Release Available


Recommended Posts

3 hours ago, Dhovin said:

@bonienl I updated my DNS settings as you suggested but the issue remains. It seems my original post wasn't very clear. CA Autoupdate works fine; it updates the plugins without an issue. The problem is that the plugin page loads slowly and does appear to check for updates, but then reloads the page completely with the same "unknown" status on all plugins.

 

Version 6.5.0 always makes an online check for the plugin status; this should not happen when "Check for Updates" is displayed. This is corrected in the next version. Perhaps you would like to test the next version when it is available.

 

Link to comment
2 hours ago, Shyrka973 said:

Hi,

 

Does the Swap File Plugin work with this version 6.5.0 ?

 

Thanks.

I wonder how many people still use a swap file, and why? 64-bit unRAID V6 with enough RAM is surely a better approach, and if you are trying to do too much with too little RAM then I don't know how much a swap file will help. Maybe someone can correct me.

Link to comment
On 3/16/2018 at 9:52 AM, yippy3000 said:

I did reboot after the upgrade. A second reboot hung, but now things seem to be working after I forced the reboot via SSH.

And after a few days the issue is back.

 

The WebGUI won't load the Docker status and the logs show this:

 

Mar 18 10:23:59 Aeris nginx: 2018/03/18 10:23:59 [error] 2797#2797: *349587 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 192.168.1.103, server: , request: "POST /plugins/dynamix.docker.manager/include/DockerUpdate.php HTTP/2.0", upstream: "fastcgi://unix:/var/run/php5-fpm.sock", host: "192.168.1.11", referrer: "https://192.168.1.11/Docker"
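For reference, a quick way to check whether the PHP backend behind the WebGUI is still responding when this happens is to look at the php-fpm processes and the FastCGI socket from an SSH session. A minimal sketch, using the socket path shown in the log above:

# Check that php-fpm worker processes are still running
ps aux | grep "[p]hp-fpm"

# Confirm the FastCGI socket referenced in the nginx error still exists
ls -l /var/run/php5-fpm.sock

# Watch the syslog for further upstream timeouts while reloading the Docker page
tail -f /var/log/syslog | grep nginx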

Link to comment
13 minutes ago, yippy3000 said:

And after a few days the issue is back.

 

The WebGUI won't load the Docker status and the logs show this:

 

Mar 18 10:23:59 Aeris nginx: 2018/03/18 10:23:59 [error] 2797#2797: *349587 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 192.168.1.103, server: , request: "POST /plugins/dynamix.docker.manager/include/DockerUpdate.php HTTP/2.0", upstream: "fastcgi://unix:/var/run/php5-fpm.sock", host: "192.168.1.11", referrer: "https://192.168.1.11/Docker"

You need to post a complete diagnostics file. You can do this from the command line by using the diagnostics command.
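For anyone not familiar with it, a minimal example from an SSH or console session (the archive is normally written to the logs folder on the flash drive):

# Generate a diagnostics archive
diagnostics

# The resulting zip ends up on the flash drive, e.g.
ls /boot/logs/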

Link to comment
1 hour ago, trurl said:

I wonder how many people still use a swap file, and why? 64-bit unRAID V6 with enough RAM is surely a better approach, and if you are trying to do too much with too little RAM then I don't know how much a swap file will help. Maybe someone can correct me.

Swap files can actually do a lot of good, because when a machine runs out of RAM it's often some program leaking memory, or sleeping processes that very seldom need their RAM.

 

And memory leaks are normally allocations whose data is never accessed again, so the LRU logic can page that data out to the swap file at very low cost to reclaim the RAM.

 

Also, lots of programs consume a significant amount of RAM at startup that they never touch again, and Linux will speculatively write this data to the swap file while keeping a copy in RAM, all so it can instantly release more RAM if a program needs it. So unless the machine actively needs the majority of its RAM for regularly accessed data, swap can be used at a very low cost.
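For anyone who does want to experiment with swap, the usual Linux pattern is a file-backed swap area plus a conservative swappiness value. A minimal, generic sketch, not unRAID-specific; the path and size are only examples:

# Create and enable a 4 GiB swap file (path and size are examples)
dd if=/dev/zero of=/swapfile bs=1M count=4096
chmod 600 /swapfile
mkswap /swapfile
swapon /swapfile

# Prefer keeping data in RAM; only page out rarely used data
sysctl -w vm.swappiness=10

# Verify
swapon --show
free -h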

Link to comment

Perhaps unRAID should do a startup check for known incompatible settings/plugins/etc. and warn the user in the log with a line that gets flagged as red. Every time I upgrade unRAID I look at the syslog screen for anything red and make sure nothing new is there to warn me before starting the array. It seems like an easy way to warn people they need to remove plugin X or change setting X, versus having them search the forums or glean it from the changelog. Something like the rough sketch below.
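Purely as an illustration of that idea, a startup hook could compare the installed plugins against a known-bad list and log a warning at error level so it stands out in the syslog viewer. A hypothetical sketch; the plugin names in the list are made up:

#!/bin/bash
# Hypothetical boot-time compatibility check (illustration only)
KNOWN_BAD="some.incompatible.plugin another.old.plugin"

for plg in /boot/config/plugins/*.plg; do
    name=$(basename "$plg" .plg)
    for bad in $KNOWN_BAD; do
        if [ "$name" = "$bad" ]; then
            # Log at error level so the line stands out in the syslog view
            logger -p user.err "Compatibility warning: plugin '$name' is known to cause issues on this release"
        fi
    done
done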

Link to comment
On 3/13/2018 at 9:13 PM, Hoopster said:

I noticed my Main server motherboard had a very recent BIOS update available with fixes specifically related to spectre/meltdown.  I updated the BIOS and attempted to upgrade once again to 6.5.0.

 

No joy, it still hangs at Loading /bzroot...ok

 

Once again, I had to roll back to 6.4.1

 

What can I check?  I can't get diagnostics since it won't boot with 6.5.0.

Which motherboard and CPU did you have the issue with? I am still going through the posts before I upgrade.

Link to comment
1 hour ago, Paul_Ber said:

Which motherboard and CPU did you have the issue with? I am still going through the posts before I upgrade.

 

The issue was resolved.  All four of us who reported the issue have the same motherboard (ASRock Rack C236 WSI) and it turns out booting in UEFI mode is now required.  This is the fix that worked for all of us with this board:

 

Conclusion: With unRAID 6.5.0, the ASRock C236 WSI can no longer be booted in "legacy" mode (non-UEFI boot and EFI- folder on the flash drive). UEFI boot is now required: boot priority #1 must be UEFI: {Flash Drive} with all others disabled, and the flash folder EFI- must be renamed to EFI.
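For anyone else hitting this with the same board, the folder rename can also be done from a console or SSH session while the flash drive is mounted (a minimal sketch, assuming the stock /boot mount point); the boot-order change itself still has to be made in the BIOS:

# Rename the EFI- folder on the flash drive to enable UEFI boot
mv /boot/EFI- /boot/EFI

# Confirm the rename
ls -d /boot/EFI*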

Link to comment
19 minutes ago, Dephcon said:

So all my shares just disappeared.  This has happened before in the past (waaay back), but now I have diags!

 

I can probably sort it by stopping/starting the array; I just wanted to provide the diags.

vault13-diagnostics-20180320-1150.zip

 

Several users on v6.5 are getting this:

 

Mar 19 19:49:31 vault13 kernel: BUG: unable to handle kernel NULL pointer dereference at 0000000000000038

Not sure what's causing them; for some (most?) it appears to be related to Plex.

Link to comment

Upgraded from 6.5.0-RC6. Everything works well.

 

@bonienl One minor thing I noticed is that on the VMs page, when hovering over the CPUs, the checked CPUs are not accurate in the popup (the ones assigned to the VM). I thought this might be due to having a custom sort order set, so I tried deleting config\plugins\dynamix.vm.manager\userprefs.cfg and checked it again, but it is still wrong. Diagnostics attached.

 

Thanks to all for your hard work!

 

 

filesvr-diagnostics-20180320-1207.zip

Link to comment
23 minutes ago, GHunter said:

One minor thing I noticed is that on the VMs page, when hovering over the CPUs, the checked CPUs are not accurate in the popup

 

Yeah, I can reproduce this. Need to find out what is causing it and how to fix it.

Thanks for reporting.

 

Fixed in next release

Edited by bonienl
Link to comment
33 minutes ago, Dephcon said:

So all my shares just disappeared.  This has happened before in the past (waaay back), but now I have diags!

 

I can probably sort it by stopping/starting the array; I just wanted to provide the diags.

vault13-diagnostics-20180320-1150.zip

 

4 minutes ago, bonienl said:

 

Yeah, I can reproduce this. Need to find out what is causing it and how to fix it.

Thanks for reporting.

 

 

@bonienl Just an update: I had to reboot to recover my shares; an array stop/start didn't cut it. And I am using the LS.IO Plex container, if that matters/relates.

Link to comment
18 minutes ago, johnnie.black said:

appears to be related to Plex.

I'm also seeing this call trace, but I don't have Plex. My call trace is referencing privoxy for the most part, and also Embyserver.

 

Probably kernel related? Tom posted this on page 4: 

 

Link to comment
17 minutes ago, Dephcon said:

 

 

@bonienl Just an update: I had to reboot to recover my shares; an array stop/start didn't cut it. And I am using the LS.IO Plex container, if that matters/relates.

 

Your log shows a large mover action kicking in just after midnight on the 19th (yesterday). Do you know if shares were still visible before that time, or when you first noticed the shares had disappeared?

Link to comment
59 minutes ago, bonienl said:

 

Your log shows a large mover action kicking in just after midnight on the 19th (yesterday). Do you know if shares were still visible before that time, or when you first noticed the shares had disappeared?

 

I just noticed it about an hour ago when all my jobs in NZBGet started failing because it was unable to create tmp files for the downloads.

 

According to the NZBGet log, the first of a huge string of failures started around Tue Mar 20 2018 12:26:05. I did retry a bunch of the failed downloads, so I'm not sure they still show up as failures in the history anymore; it might have started 20-30 minutes before then.

Edited by Dephcon
Link to comment

Hi, I think I am missing something here, but I cannot find the "edit XML" option when I click any of my VMs. Can anybody point me in the right direction, please? Thanks, and sorry if it is mentioned somewhere; the search did not come up with anything useful to me.

Link to comment
On 3/18/2018 at 11:30 AM, bonienl said:

 

Version 6.5.0 always makes an online check for the plugin status; this should not happen when "Check for Updates" is displayed. This is corrected in the next version. Perhaps you would like to test the next version when it is available.

 

Actually, I think I figured it out. The plugin page appears to trigger ping requests to Google DNS regardless of the DNS settings in Network Settings. My firewall blocks all outgoing ping requests. I created an exception for the two Google DNS IP addresses and everything started working fine. I had to do the same thing for GitHub previously because of CA.
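If anyone wants to verify the same behaviour on their own network, a quick test from the server's console is enough. A minimal sketch, assuming the addresses involved are the usual Google DNS ones (8.8.8.8 and 8.8.4.4):

# These will time out if the firewall blocks outgoing ICMP to Google DNS
ping -c 2 8.8.8.8
ping -c 2 8.8.4.4

# And a quick reachability check for GitHub (used by CA)
curl -sI https://github.com | head -n 1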

Link to comment
