Controlled shutdown on reboot after upgrade


I think this has been the case for a while, but I've really noticed it on the past three upgrades: to 6.9, 6.9.1 and 6.9.2. After the upgrade files are downloaded and installed, a banner appears on the Dashboard stating an upgrade has been installed, with a clickable Reboot link. When the link is clicked, it seems that a hard reboot occurs: no VMs or Docker containers appear to be shut down, and the array isn't taken offline. On reboot a parity check starts.

It would be great if the link performed a proper shutdown of Docker containers, VMs and the array before rebooting.

50 minutes ago, Bluecube said:

It would be great if the link performed a proper shutdown of Docker containers, VMs and the array before rebooting.

The reboot after an upgrade is meant to perform a ‘clean’ shutdown of the array. There has been some indication that the timeout values in Settings for closing Docker containers and VMs may not be honoured (or may be lost) after the upgrade, which would be a bug, and that resetting them can help. Before rebooting, it is worth checking whether you can successfully Stop the array, and if not, finding out why before you reboot: if the array cannot Stop successfully, you will always get an unclean shutdown/reboot.
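If the array does Stop cleanly but you suspect one particular container is eating the whole timeout, a rough way to check is to time how long each container takes to stop on its own. This is just a minimal diagnostic sketch, assuming the docker CLI is on the PATH and that you can afford to stop your containers while testing; the 120-second cap here is an arbitrary ceiling for the test, not an Unraid setting:

```python
#!/usr/bin/env python3
"""Time how long each running container takes to stop cleanly.

A container that takes close to (or hits) the cap below is the likely
culprit for blowing past the Docker stop timeout in Settings.
"""
import subprocess
import time

# List the names of all currently running containers.
out = subprocess.run(["docker", "ps", "--format", "{{.Names}}"],
                     capture_output=True, text=True, check=True).stdout

for name in filter(None, out.splitlines()):
    started = time.time()
    # -t is how long docker waits for a clean stop before killing the
    # container; 120s is an arbitrary test ceiling, not an Unraid value.
    subprocess.run(["docker", "stop", "-t", "120", name], check=True)
    print(f"{name}: stopped in {time.time() - started:.1f}s")
```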

Just now, Bluecube said:

I have no problems shutting down the array. If I do this and restart, all is well.

Strange that you get a problem on reboot then, as the shutdown/reboot first issues an array Stop, and only if the timeouts for Docker containers and/or VMs are reached does it start trying to force them down. You might want to check the timeouts for Docker containers and VMs under the settings for the respective services (you may need to be in Advanced View to see them) and try increasing them slightly to see if that helps.
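For anyone curious what that sequence amounts to in practice, here is a minimal sketch of the same pattern using the standard virsh CLI: ask each VM to shut down cleanly, poll until it is off or a timeout expires, and only then force it off. The 90-second timeout is a placeholder of my own, not Unraid's actual default, and this illustrates the pattern rather than the code Unraid itself runs:

```python
#!/usr/bin/env python3
"""Graceful-then-forced VM shutdown: the pattern the timeouts control."""
import subprocess
import time

TIMEOUT_SECONDS = 90  # placeholder for the VM shutdown timeout in Settings

def running_vms():
    """Names of all currently running libvirt domains."""
    out = subprocess.run(["virsh", "list", "--name"],
                         capture_output=True, text=True, check=True).stdout
    return [name for name in out.splitlines() if name.strip()]

def stop_vm(name):
    """Request a clean shutdown; force the VM off only after the timeout."""
    subprocess.run(["virsh", "shutdown", name], check=True)  # ACPI request
    deadline = time.time() + TIMEOUT_SECONDS
    while time.time() < deadline:
        state = subprocess.run(["virsh", "domstate", name],
                               capture_output=True, text=True).stdout.strip()
        if state == "shut off":
            return True
        time.sleep(2)
    subprocess.run(["virsh", "destroy", name], check=True)  # hard power-off
    return False

for vm in running_vms():
    clean = stop_vm(vm)
    print(f"{vm}: {'clean shutdown' if clean else 'forced off after timeout'}")
```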
