Unraid OS version 6.9.1 available


Recommended Posts

3 hours ago, JorgeB said:

If you have an LSI HBA and 8TB Ironwolf drive(s) (only model ST8000VN004), it's probably best to stick with v6.8 for now; there have been multiple users with issues that can result in disabled disks after upgrading to v6.9. On the other hand, if anyone using that combination upgraded without issues, please post back.

 

 

I have one ST8000VN004 on an LSI SAS2008 that dropped out due to read errors. This was about two days after I upgraded to 6.9.0 and right after I changed fans. I assumed a bad connection, reseated the plug (which was indeed not fully inserted), and everything has been running fine again since. Still on 6.9.0, how can I help?

Link to comment
6 minutes ago, YB96 said:

Still on 6.9.0, how can I help?

For now I'm mostly interested in hearing from someone not having issues with that combo, to see if there's something different, like a specific LSI/drive firmware or even HBA model. But in case your issue really was a connection problem, and if it doesn't happen again in the next couple of weeks or so, then please PM me your diags at that time.

Link to comment
20 minutes ago, JorgeB said:

For now I'm mostly interested in hearing from someone not having issues with that combo, to see if there's something different, like a specific LSI/drive firmware or even HBA model. But in case your issue really was a connection problem, and if it doesn't happen again in the next couple of weeks or so, then please PM me your diags at that time.

Forgot to mention: before I updated to 6.9.0, I ran the betas and both RCs since about December. No other problems occurred.

 

HW-information:

LSISAS2008: FWVersion(20.00.07.00)

disk ATA ST8000VN004-2M21 SC60 /dev/sde 8.00TB

 

In case you want the full logs now, tell me. Hope I remember to send them in a few weeks if not.
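For anyone who wants to pull the same details from their own system to post here, something along these lines should work from the console (a rough sketch only, assuming smartctl is available and that the mpt2sas driver prints its FWVersion banner to the kernel log; adjust the device node for your drive):

```python
# Rough sketch: collect the HBA firmware and drive model/firmware details
# discussed above. Assumes smartctl is installed and the LSI (mpt2sas)
# driver logs a "FWVersion(...)" line to the kernel ring buffer.
import subprocess

def drive_info(dev: str) -> str:
    """Return the model/firmware lines from 'smartctl -i' for one device."""
    out = subprocess.run(["smartctl", "-i", dev],
                         capture_output=True, text=True).stdout
    return "\n".join(line for line in out.splitlines()
                     if "Model" in line or "Firmware" in line)

def hba_firmware() -> str:
    """Grep the kernel log for the LSI firmware banner."""
    dmesg = subprocess.run(["dmesg"], capture_output=True, text=True).stdout
    return "\n".join(line for line in dmesg.splitlines()
                     if "FWVersion" in line)

if __name__ == "__main__":
    print(hba_firmware())
    print(drive_info("/dev/sde"))  # change the device node for your system
```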

Edited by YB96
Link to comment
25 minutes ago, YB96 said:

HW-information:

LSISAS2008: FWVersion(20.00.07.00)

disk ATA ST8000VN004-2M21 SC60 /dev/sde 8.00TB

Those are the most usual ones, so no need for diags. If you don't have more issues, especially after some heavy IO like a parity check, it might not be a general problem, which unfortunately will make it harder to fix.

Link to comment

FYI - obviously not a common problem, but 6.9.1 refused to show up for me in Tools --> Update OS. I was able to update the old-fashioned way by downloading 6.9.1 from unraid.net and manually copying over the bz* files to the flash.
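For anyone else stuck the same way, the manual method boils down to something like this (just a sketch, not an official procedure: RELEASE_DIR and the backup folder name are placeholders for illustration, and /boot is where Unraid normally mounts the flash drive):

```python
# Sketch of the "old-fashioned" update: back up the current bz* files,
# then copy the bz* files from an extracted release zip onto the flash.
# RELEASE_DIR and BACKUP are hypothetical paths -- adjust for your setup.
import glob, os, shutil

RELEASE_DIR = "/mnt/user/isos/unraid-6.9.1"  # wherever you unpacked the zip
FLASH = "/boot"                              # the Unraid flash mount
BACKUP = os.path.join(FLASH, "previous-bz")  # somewhere to keep the old files

os.makedirs(BACKUP, exist_ok=True)
for path in glob.glob(os.path.join(FLASH, "bz*")):
    shutil.copy2(path, BACKUP)               # keep a copy of the old release
for path in glob.glob(os.path.join(RELEASE_DIR, "bz*")):
    shutil.copy2(path, FLASH)                # drop in the new bz* files
print("Done - reboot to load the new release.")
```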

 

Even though it now shows 6.9.1, this is what I saw in Update OS when on 6.9.0 as well (status unknown, no update appearing, and no way to check):

 

[Screenshot: Update OS page showing "unknown" status with no update listed]

 

EDIT: And just to be clear, there was no "check for updates" button in the GUI then, just as there is not one now.

 

On the backup server, things functioned normally.

Edited by Hoopster
Link to comment
5 hours ago, Zonediver said:

 

That's the solution? Uninstalling a "needed" Plugin?

In my case, it's working, BUT... look at this... this is new since v6.9...

By the way: what's up with bonienl? He hasn't been seen since 11 Nov 2020...

He is responsible for this plugin...

 

You catch more flies with honey.

 

craigr

  • Like 1
Link to comment
11 minutes ago, Gunny said:

Upgraded from 6.9 to 6.9.1; on restart a notification popped up saying an unsafe shutdown was detected, so a parity check was started automatically.

Diags saved on the flash drive would confirm it, but there have been a couple of reports where it looks like the shutdown time-out is not being honored, i.e., Unraid forces the shutdown after a couple of seconds even if the set time is much higher. Changing the setting (Settings -> Disk Settings) to re-apply it fixes the issue.

Link to comment
14 minutes ago, JorgeB said:

Diags saved on the flash drive would confirm it, but there have been a couple of reports where it looks like the shutdown time-out is not being honored, i.e., Unraid forces the shutdown after a couple of seconds even if the set time is much higher. Changing the setting (Settings -> Disk Settings) to re-apply it fixes the issue.

I had no issues upgrading from 6.9 RC2, but I had already turned off Docker, shut down my VMs, and stopped my array to make the flash backup. In that state, I upgraded to 6.9.1 and restarted.

 

I think on both RC1 and RC2 I had the unsafe shutdown notification after upgrading. So if you are running Docker containers or VMs, it may be a good idea to shut them down before the upgrade. However, when I got the unsafe shutdown, there were no parity sync errors anyway, so it's probably not a big deal.

 

craigr

Link to comment
22 minutes ago, JorgeB said:

Diags saved on the flash drive would confirm it, but there have been a couple of reports where it looks like the shutdown time-out is not being honored, i.e., Unraid forces the shutdown after a couple of seconds even if the set time is much higher. Changing the setting (Settings -> Disk Settings) to re-apply it fixes the issue.

Thanks for the Settings tip. I'm not too worried; I was overdue for a parity check anyway. The weird thing is I had already manually shut down all VMs and Docker containers, so I don't know what would have caused it to hang for 90 seconds (my previously set timeout), unless of course the bug you mentioned doesn't use the default value as the lower bound and instead forces shutdowns immediately. Either way, I've changed the timeout to 120s just to be safe.

Link to comment
2 minutes ago, Gunny said:

you mentioned doesn't use the default value as the lower bound and instead forces shutdowns immediately.

Yes, that's what was happening to the other users: the shutdown time-out was set to more than 30 secs, but Unraid started forcing the shutdown almost immediately, after just a couple of seconds.

Link to comment
6 minutes ago, Gunny said:

Thanks for the Settings tip. I'm not too worried; I was overdue for a parity check anyway. The weird thing is I had already manually shut down all VMs and Docker containers, so I don't know what would have caused it to hang for 90 seconds (my previously set timeout), unless of course the bug you mentioned doesn't use the default value as the lower bound and instead forces shutdowns immediately. Either way, I've changed the timeout to 120s just to be safe.

Well that shoots my theory down.  Did you have the array stopped or was it still running?

 

craigr

Link to comment
10 minutes ago, craigr said:

Well that shoots my theory down.  Did you have the array stopped or was it still running?

 

craigr

The array was started, but nothing was using it since all VMs and containers were stopped.  Is it good practice to manually stop the array prior to restarting?

Link to comment
2 minutes ago, SavellM said:

I have ST8000VN0022

And I have a ton of issues with the drives going into a disabled state!

So I would include these drives too; when they sleep, they do not wake up cleanly.

Which firmware do you have?

 

Edited by SimonF
Link to comment
19 minutes ago, Gunny said:

The array was started, but nothing was using it since all VMs and containers were stopped.  Is it good practice to manually stop the array prior to restarting?

Not usually necessary, and a pain too because you have to stop Docker and all VMs first. I was just thinking it might be a workaround for others and also give some insight into the problem.

 

craigr

Link to comment

Is there any documentation on how to configure the watchdog timer? I have searched the forums and looked briefly through the system options. I'm not sure I want to enable it from the command line if it's supported somewhere else.

 

Quote

Linux kernel:

version 5.10.21

CONFIG_WATCHDOG: Watchdog Timer Support
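For context, that kernel option exposes the generic /dev/watchdog device: opening it arms the timer, and something in userspace then has to keep "petting" it or the board resets. A minimal sketch of that interface (not Unraid-specific guidance; it assumes a hardware watchdog driver has created /dev/watchdog, nothing else already holds it open, and the driver supports the magic close):

```python
# Minimal sketch of the generic Linux watchdog interface behind
# CONFIG_WATCHDOG -- illustration only, not Unraid-specific guidance.
import time

PET_INTERVAL = 10  # seconds; must be shorter than the hardware timeout

with open("/dev/watchdog", "wb", buffering=0) as wd:
    try:
        for _ in range(6):           # pet the timer for about a minute as a demo
            wd.write(b"\0")          # any write resets the countdown
            time.sleep(PET_INTERVAL)
    finally:
        wd.write(b"V")               # "magic close": ask the driver to disarm
```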

 

Edited by BCinBC
Typo.
Link to comment

Dumb question, but I'm a Linux newbie. I see a fair few comments about it being easier to add modules with 6.9.*. I can tinker and screw around some weekend, so I'm not asking for someone to do the work for me, but my motherboard has a module for reading more sensors: https://github.com/electrified/asus-wmi-sensors

 

Is this feasible for a relative newbie to do? Can I build it as a plugin, or even put it on the app store if I get that far?
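From what I understand, once a module like asus-wmi-sensors is built and loaded, its readings appear through the standard hwmon sysfs interface, so a plugin would mostly just read those files. A minimal sketch of that part (assuming the module is already loaded; these are the generic hwmon paths, nothing Unraid-specific):

```python
# Sketch: list temperature readings from every registered hwmon device.
# Assumes a sensor module (e.g. asus-wmi-sensors) is already loaded.
import glob, os

for hwmon in sorted(glob.glob("/sys/class/hwmon/hwmon*")):
    try:
        name = open(os.path.join(hwmon, "name")).read().strip()
    except OSError:
        continue
    for temp in sorted(glob.glob(os.path.join(hwmon, "temp*_input"))):
        label_file = temp.replace("_input", "_label")
        label = (open(label_file).read().strip()
                 if os.path.exists(label_file) else os.path.basename(temp))
        millidegrees = int(open(temp).read().strip())  # hwmon reports m°C
        print(f"{name}: {label} = {millidegrees / 1000:.1f} °C")
```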

Link to comment
18 hours ago, limetech said:

We added a 'no-cache' header to NoVNC web access so that future Unraid OS releases will no longer have stale web components.

@limetech I saw this in the fixes. Do these requests also have `max-age=0` set to guarantee the stale content is flushed? If not, 'no-cache' will simply lock the stale cache in place indefinitely, or until the age is hit.


Edit: It looks like `cache-control: max-age=0` is set in 6.9.0 (Woot, not an issue).

Is the intent never to have NoVNC resources cached, so Unraid can guarantee updated front-end bundles? If so, you may want to add `no-store` in addition to `no-cache` so anyone using a reverse proxy to access noVNC will not end up with resources cached at a higher level, like their ISP.
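For illustration only (this is not Unraid's actual implementation), the header combination I mean looks like this: `no-cache` forces revalidation, `max-age=0` expires anything already cached, and `no-store` keeps browsers and shared proxies from storing a copy at all.

```python
# Toy server showing the Cache-Control combination discussed above --
# an illustration, not Unraid's code.
from http.server import BaseHTTPRequestHandler, HTTPServer

class NoCacheHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"pretend this is a noVNC asset\n"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Cache-Control", "no-cache, no-store, max-age=0")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), NoCacheHandler).serve_forever()
```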

 

 

 

 

Edited by paperblankets
Suggested adding no-store header.
Link to comment
