[SOLVED] Upgraded ESXi hypervisor, array won't start...



Happy New Year Folks,

 

So I have just upgraded my ESXi host from 5.5 to 6.5d. 

Array HDDs are physically attached via RDM. ESXi 6 uses a slightly different naming convention for the drives: what used to be called WDC_WD4000F9YZ-09N20L1_WD-WMCxxxxxxxx is now WDC_WD4000F9YZ-0_WD-WMCxxxxxxxx (basically, the firmware version has been dropped from the disk ID - sensible, actually). So now unRAID believes the config is stale and all its drives are gone... Oh well. Trying to just "seat" them back into their old slots makes the UI flag each one as "wrong", and consequently the array won't start.
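(For context, the new device names are visible from the ESXi shell, and if the RDM pointer vmdk files reference the old names, they can be recreated with vmkfstools. A rough sketch - the device name and the datastore/VM paths below are placeholders, not my actual ones:

    ls /vmfs/devices/disks/
    vmkfstools -z /vmfs/devices/disks/<new_device_name> /vmfs/volumes/<datastore>/<vm_dir>/disk1-rdm.vmdk

"-z" creates a physical-compatibility RDM pointer; "-r" would create a virtual-compatibility one.)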

 

I think the solution is to go "New Config", assign the disks back to their original slots, without "preserve current assignments", and restart. But I need some confirmation / assurance that this will do what I expect: find the drives in their correct slots and start the array successfully (btw, the drives are encrypted, if that changes anything). Specifically, I don't want to lose any data 🙂
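(Side note: to be safe, I'll double-check the serials from the unRAID console first, so each disk goes back to its original slot - something like:

    ls -l /dev/disk/by-id/ | grep -v part

which lists every device together with its model/serial string, to cross-check against my old assignment notes/screenshots.)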

 

So: is the above the correct sequence I should go through? Anything else I should be mindful of?

 

Thanks!!

5 hours ago, johnnie.black said:

Yes, just make sure all assignments are correct, especially the parity drive, and you can check parity is already valid before starting the array for the first time.

Thank you. Indeed, new config worked as expected and without a hitch.

58 minutes ago, StevenD said:

RDMs are a bad idea.  Why don’t you pass through the entire controller?

Basically because the controller is shared with the hypervisor. I could move drives around to get around that, but from my vantage point, RDMs have been working for me flawlessly for years and years now - if it ain't broke, you know...

39 minutes ago, doron said:

Basically because the controller is shared with the hypervisor. I could move drives around to get around that, but from my vantage point, RDMs have been working for me flawlessly for years and years now - if it ain't broke, you know...

But now it is somewhat broken.

 

You really should be able to boot bare metal Unraid, without ESXi, just in case you run into real issues and nobody will help you because you're virtualized.

 

Controllers are dirt cheap. 


 

7 hours ago, StevenD said:

You really should be able to boot bare metal Unraid, without ESXi, just in case you run into real issues and nobody will help you because you're virtualized.

 

Oh, agreed 100%. And I can: once I set the BIOS to boot from the Unraid flash, it will, and will then see all the array drives (that are currently assigned raw via RDM) natively. I will not have all the other VMs currently under this hypervisor, but Unraid will work bare metal.

 

Thanks!

 

13 hours ago, doron said:

once I set the BIOS to boot from the Unraid flash, it will, and will then see all the array drives (that are currently assigned raw via RDM) natively.

But they won't be recognized; you'd have to set a new config and reassign them all, check the box saying parity is valid, and then endure a parity check.
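(If you do go that route, you can also start and watch the check from the console, not just the UI - assuming a recent Unraid, something along these lines, though the exact mdcmd verbs may differ by version:

    mdcmd check NOCORRECT
    mdcmd status | egrep "sbSynced|mdResyncPos"

The first starts a read-only check, the second shows sync state and progress.)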

