Unraid OS version 6.9.1 available



2 hours ago, Gunny said:

The array was started, but nothing was using it since all VMs and containers were stopped.  Is it good practice to manually stop the array prior to restarting?

 

Personally, and I do mean personally, I always do the following:

Stop all Dockers

Spin up all drives (the steps below do it anyway, but...)

Stop the Array

Shutdown/Reboot 

 

I do that simply because if a Docker container hangs I can wait for it to shut down, vs. wondering what's hung and why my machine isn't shutting down. So I assume control of each step, because I don't like unclean shutdowns and having to wait for a parity check to fire up if something goes sideways. ;)

 

I've skipped that a few times and had good results, but there were a few times in the past when I eventually had to log in and pray that nothing would go wrong when I forced it to shut down.
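For anyone who wants to script the container-stop step, here is a minimal sketch using the plain Docker CLI (the 120-second timeout is just an example value; adjust to taste):

    # stop every running container, giving each up to 120 seconds to exit cleanly
    running=$(docker ps -q)
    [ -n "$running" ] && docker stop --time 120 $running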

1 hour ago, paperblankets said:

@limetech I saw this in the fixes. Do these requests also have `max-age=0` set to guarantee the stale content is flushed? If not, 'no-cache' will simply lock the stale cache in place indefinitely/until the age is hit.


Edit: It looks like `cache-control: max-age=0` is set in 6.9.0 (Woot, not an issue).

Is the intent never to have noVNC resources cached, so Unraid can guarantee updated front-end bundles? If so, you may want to add `no-store` in addition to `no-cache`, so anyone using a reverse proxy to access noVNC will not end up with resources cached at a higher level, like their ISP.

 

 

 

 

 

We are following the instructions at the bottom of this page:

https://github.com/novnc/noVNC/blob/master/docs/EMBEDDING.md

 

You can see where we do this starting on line 229 of /etc/rc.d/rc.nginx

 

Do you think this should be tweaked some more?

2 hours ago, BCinBC said:

Is there any documentation on how to configure the Watchdog timer? I have searched the forums and looked briefly through the system options. I'm not sure I want to enable it from the command line if it's supported somewhere else.

 

 

 

We just turned on CONFIG_WATCHDOG=y because that was a necessary precondition to get the nct7904 driver compiled. We didn't turn on any of the actual drivers that activate watchdog support.
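For reference, if a matching hardware driver were ever compiled in, the usual pattern would be to load it and then check for the device node; a rough sketch (the module name here is only an illustrative example, it is not shipped in this build):

    # load a hardware watchdog driver (example module name, not included in this build)
    modprobe it87_wdt
    # confirm the watchdog device node appeared
    ls -l /dev/watchdog*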

 

1 hour ago, limetech said:

 

We are following the instructions at the bottom of this page:

https://github.com/novnc/noVNC/blob/master/docs/EMBEDDING.md

 

You can see where we do this starting on line 229 of /etc/rc.d/rc.nginx

 

Do you think this should be tweaked some more?

I'll upgrade this evening, take a look at that line, and get back to you. `no-store` will help in cases where a user has set up a reverse proxy/VPN that they access noVNC-based containers through, if Unraid's intent is to make sure the user always gets the most up-to-date front-end build. Currently ISP-level caches will cache the noVNC files; only browsers respect the `no-cache` header.
https://codeburst.io/demystifying-http-caching-7457c1e4eded#885d

Outside of the reverse proxy use case I believe you guys are properly cache-busting for noVNC. 🎉
I made a note to ask the noVNC guys if they think they should add `no-store` to the embedding docs or if it's too much of an edge case for all users.
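For anyone who wants to verify the headers themselves, something like this works from any machine that can reach the server (the host and path are placeholders for wherever your noVNC session is served from):

    # print only the response headers for one of the noVNC resources
    curl -skI https://your-unraid-host/path/to/novnc/app.js | grep -i cache-control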


I am having an issue attempting to upgrade to 6.9.1. Every time I attempt to apply the upgrade I get the following error:

plugin: updating: unRAIDServer.plg
plugin: downloading: https://s3.amazonaws.com/dnld.lime-technology.com/stable/unRAIDServer-6.9.1-x86_64.zip ... done
plugin: downloading: https://s3.amazonaws.com/dnld.lime-technology.com/stable/unRAIDServer-6.9.1-x86_64.md5 ... done

writing flash device - please wait...
Archive: /tmp/unRAIDServer.zip
plugin: run failed: /bin/bash retval: 1

Once, I received the 'flash drive is not read/write' error. Is it possible my flash drive is failing?

 

If I need to replace the flash drive, is it possible to re-use an old 16 GB SSD or eMMC module attached to a USB enclosure? I think it would have better endurance than a USB flash drive.


Upgraded from 6.9.0 and it seems to have resolved the issues I had with 6.9.0.

There are some things that could still use fixing, like /etc/profile forcing bash to start in /root, which breaks some of my tmux scripts.

Also, I personally don't think it's necessary to force 777 access to /dev/dri, but I understand why you would want to take the easy way out and make things easier for users. Though it should be up to the Docker image creators/maintainers to account for this.

 

It simply is not the Unix mentality to just willy-nilly assign all access to entire directories (I'm looking at the default/recommended Unraid file permissions), nor is it good security practice.
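For what it's worth, a tighter alternative would be group-based access instead of 777; a rough sketch of the idea (the group name and render node are just examples, and containers would then need to run with the matching group):

    # grant the video group read/write access to the render node instead of everyone
    chgrp video /dev/dri/renderD128
    chmod 660 /dev/dri/renderD128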

43 minutes ago, Ender331 said:

I am having an issue attempting to upgrade to 6.9.1. Every time I attempt to apply the upgrade I get the following error:


plugin: updating: unRAIDServer.plg
plugin: downloading: https://s3.amazonaws.com/dnld.lime-technology.com/stable/unRAIDServer-6.9.1-x86_64.zip ... done
plugin: downloading: https://s3.amazonaws.com/dnld.lime-technology.com/stable/unRAIDServer-6.9.1-x86_64.md5 ... done

writing flash device - please wait...
Archive: /tmp/unRAIDServer.zip
plugin: run failed: /bin/bash retval: 1

Once, I received the 'flash drive is not read/write' error. Is it possible my flash drive is failing?

 

If I need to replace the flash drive, is it possible to re-use an old 16 GB SSD or eMMC module attached to a USB enclosure? I think it would have better endurance than a USB flash drive.

How much memory does your system have?
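It might also be worth checking whether the flash is still mounted read/write before replacing it; a rough check from the console (assuming the flash is mounted at /boot, as it is by default):

    # show the mount options for the flash device
    grep ' /boot ' /proc/mounts
    # try creating and removing a small test file to confirm it is writable
    touch /boot/.rwtest && rm /boot/.rwtest && echo "flash is writable"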

4 hours ago, limetech said:

 

We are following the instructions at the bottom of this page:

https://github.com/novnc/noVNC/blob/master/docs/EMBEDDING.md

 

You can see where we do this starting on line 229 of /etc/rc.d/rc.nginx

 

Do you think this should be tweaked some more?

 

@limetech I think

                add_header Cache-Control "no-cache, no-store";

would be a safe change.

 

I opened a pull request for this change to get feedback:  https://github.com/novnc/noVNC/pull/1532

If I'm overlooking something I imagine they will call it out there.


Is anyone having the issue where, when you go to VMs and click Edit, it pulls up the edit page for the wrong VM? Moreover, when I click Add VM, it pre-populates the form with the config from the same VM it keeps pulling up in the Edit screen. This just started today, I believe, and I upgraded to 6.9.1 last night.

 

I should add, it's happening on multiple browsers, which makes me think it's a server-side issue. Maybe the nginx cache change from the changelog? I don't know.


I don't have a way of testing, but do the options for GPU drivers need to be looked at? I.e., for people getting black screens in GUI mode, etc.

 

Disabling modesetting

You may want to disable KMS for various reasons, such as getting a blank screen or a "no signal" error from the display, when using the Catalyst driver, etc. To disable KMS, add nomodeset as a kernel parameter. See Kernel parameters for more info. Along with the nomodeset kernel parameter, for Intel graphics cards you need to add i915.modeset=0, and for Nvidia graphics cards you need to add nouveau.modeset=0. For an Nvidia Optimus dual-graphics system, you need to add all three kernel parameters (i.e. "nomodeset i915.modeset=0 nouveau.modeset=0").
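On Unraid those parameters would go on the append line of the boot entry in the syslinux config; a rough example (label names and existing parameters may differ per install):

    # /boot/syslinux/syslinux.cfg -- default boot entry with modesetting disabled might read:
    #   append nomodeset i915.modeset=0 nouveau.modeset=0 initrd=/bzroot
    grep -A4 'label Unraid OS' /boot/syslinux/syslinux.cfg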

 

3 minutes ago, hawihoney said:

For those of us who use the NVIDIA driver:

 

Do we need to delete the plugin before the update and reinstall it after the update (with Docker stop, Docker start)? Will it stay that way?

 

See my response above. I am not sure if I am the exception but the plugin was removed after the upgrade/reboot. I had to reinstall it.

51 minutes ago, hawihoney said:

For those of us who use the NVIDIA driver:

 

Do we need to delete the plugin before the update and reinstall it after the update (with Docker stop, Docker start)? Will it stay that way?

 

Try to upgrade the plugin prior to updating Unraid; the worst-case scenario is it will fail to install after rebooting into the new Unraid version and you will need to reinstall the plugin.
I upgraded prior to the plugin update being ready, so I had to do the latter.

1 hour ago, ultimz said:

Update went fine except for the Nvidia plugin that was removed... I had to reinstall it.

Do you have an active internet connection on boot?

The plugin will automatically download the new version for you, but you need to have an internet connection on boot, otherwise it will fail.

 

1 hour ago, hawihoney said:

Do we need to delete the plugin before the update and reinstall it after the update (with Docker stop, Docker start)? Will it stay that way?

No, you only need an active internet connection on boot and it will download the new driver (keep in mind that the boot will take a little longer since it's downloading the new driver, ~130 MB).

As @tjb_altf4 said, if you don't have an internet connection, the worst that can happen is that you have to reinstall the plugin and also disable and re-enable the Docker daemon, or reboot once more.

 

17 minutes ago, tjb_altf4 said:

I upgraded prior to the plugin update being ready, so I had to do the latter.

Hopefully the next time an update is released this won't happen again.

I now check for new versions every 15 minutes and have everything automated, so that about 1 hour and 15 minutes after a release the plugins are updated, even if I'm sleeping... :D

18 minutes ago, ich777 said:

Do you have an active internet connection on boot?

The plugin will automatically download the new version for you, but you need to have an internet connection on boot, otherwise it will fail.

I should have... but not 100% sure. Thanks for a great plugin... it was quick to solve my issue after the reboot.

5 minutes ago, ultimz said:

I should have... but not 100% sure. Thanks for a great plugin... it was quick to solve my issue after the reboot.

I can eventually add a pause so the plugin waits for about 20, or a maximum of 30, seconds for an internet connection.
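If I do add that pause, the logic would be something along these lines (the timings and ping target are just placeholders):

    # wait up to ~30 seconds for an internet connection before downloading the driver
    for i in $(seq 1 30); do
        ping -c1 -W1 8.8.8.8 >/dev/null 2>&1 && break
        sleep 1
    done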

 

You don't virtualize pfSense/IPFire/OPNsense or have Pi-hole running on Unraid, or something similar, so that you have no connection at boot?

