All drive names change with any OS update



My setup is 2 parity drives and 5 array drives. When I perform ANY update to the OS, it changes all of my drive names (adds a bunch of characters to the original names). Because the first part is the same, I can place them in their appropriate slots through the dropdown, but they all show as wrong. I can roll back the OS version and it's all fine; this has been the case for multiple versions I've attempted updating to.

Please help? Pic attached for context.

Screenshot 2022-08-05 152812.png
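
If it helps to pin down exactly what changes, here is a minimal sketch (a hypothetical helper, not an Unraid tool; it assumes Python 3 is available on the server) that prints each /dev/disk/by-id name next to the device it resolves to:

#!/usr/bin/env python3
# Hypothetical snippet, not part of Unraid: print each /dev/disk/by-id name
# next to the device node it resolves to, so the output can be saved before
# and after an OS update and the two files compared to see exactly which
# characters get added to the identifiers.
import os

BY_ID = "/dev/disk/by-id"

for name in sorted(os.listdir(BY_ID)):
    target = os.path.realpath(os.path.join(BY_ID, name))
    print(target, name)

Run it once on the current OS version and once after updating, then diff the two outputs; the extra characters should show up there.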

  • 7 months later...
On 8/6/2022 at 3:17 AM, JorgeB said:

This can happen with some RAID controllers, or with an LSI HBA using very old firmware. Please post the diagnostics.

Resurrecting this, but I'm experiencing the same issue after moving to 6.12-rc2.

 

Suggestions? How do I get them back to their original names, and what happened to my other drives? They show up as a pool to be mounted ...

My cache drives show like this:

[screenshot: cache drives]

 

hpunraid-diagnostics-20230324-0705.zip


Question: I did a new config and my data and parity drives are coming back online now, but the pool for my cache drives doesn't show up. They are all in Unassigned Devices; they all show a mount point but do not show as valid drives to use in the pool that was created.

Screenshots to follow so you can see what I'm trying to get across ...

[screenshot: drives listed under Unassigned Devices]

 

But I cannot assign the drives to the existing pool. Should I just format the cache and then reassign them? I don't think much was stored on it.

 

[screenshot: pool assignment]

6 minutes ago, Wiseone001 said:

But I cannot assign the drives to the existing pool. Should I just format the cache and then reassign them? I don't think much was stored on it.

Those are connected to a RAID controller, which is not recommended. Was the cache that single 600GB logical volume you can assign?

1 minute ago, JorgeB said:

Those are connected to a RAID controller, which is not recommended. Was the cache that single 600GB logical volume you can assign?

No, that was a single disk I left unassigned as a possible backup for a failed drive.

So you think the controller created a RAID config of all those drives and is combining them outside of Unraid, instead of passing them through as individual disks?

Is there any harm in resetting the cache pool and rebuilding the drives? (i.e., unmounting/removing them from the cache pool, then formatting them and re-adding them to the cache pool as new drives?)

 


Not familiar with that controller, but taking a second look I see the problem:

 

Mar 24 03:57:54 HPUnraid emhttpd: device /dev/sdj problem getting id
Mar 24 03:57:54 HPUnraid emhttpd: device /dev/sdh problem getting id
Mar 24 03:57:54 HPUnraid emhttpd: device /dev/sdg problem getting id
Mar 24 03:57:54 HPUnraid emhttpd: device /dev/sde problem getting id
Mar 24 03:57:54 HPUnraid emhttpd: device /dev/sdf problem getting id
Mar 24 03:57:54 HPUnraid emhttpd: device /dev/sdi problem getting id

 

This happens when Unraid sees multiple disks with the same identifier, in this case "600GB_DISK_B". See if you can change that; until you do, they won't work with Unraid.
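
For anyone else chasing the same "problem getting id" messages, here is a minimal sketch (a hypothetical check, not part of Unraid; it assumes Python 3 on the server) that maps each whole disk to its /dev/disk/by-id names and flags disks that end up without a unique identifier, which is the condition emhttpd is complaining about above:

#!/usr/bin/env python3
# Hypothetical check, not an Unraid tool: list each whole disk and the
# /dev/disk/by-id names that resolve to it. When two disks report the same
# identifier (e.g. "600GB_DISK_B"), their by-id symlinks collide and at
# least one disk is left without a unique entry.
import os
from collections import defaultdict

BY_ID = "/dev/disk/by-id"

# Group the by-id symlinks by the device node they resolve to.
links = defaultdict(list)
for name in sorted(os.listdir(BY_ID)):
    target = os.path.realpath(os.path.join(BY_ID, name))
    links[target].append(name)

# Walk the whole disks the kernel sees and report their identifiers.
for dev in sorted(os.listdir("/sys/block")):
    if dev.startswith(("loop", "ram", "md", "dm-", "zram")):
        continue
    node = "/dev/" + dev
    ids = links.get(node, [])
    if ids:
        print(node, "->", ", ".join(ids))
    else:
        print(node, "-> NO UNIQUE ID (check the controller's volume naming)")

Giving each logical volume a unique name/serial in the controller's own configuration utility should make the by-id entries, and therefore Unraid, see them as distinct devices again.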
