Disk IDs changed. Disaster


Recommended Posts

Hello and Help!

 

I've just replaced a failing cache drive using the method described in one of the tutorials. However...

 

After powering the machine back up, all the drive IDs changed! For example, my first data drive, which was "sde", is now "sdb", and this is now the case for all the drives, including parity and cache. When I tried to start the array, all drives showed red crosses.

 

None of the other drives were unplugged, so how can this happen? I thought the only way to change the drive ID (sda, sdb, sdc, etc.) was to physically plug them into different headers on the motherboard?

 

Anyway, I then foolishly used the "New Config" tool without the "retain all" switch, and this has just made matters worse. Now all of the drives are showing blue squares next to them and are threatening to wipe all contents if I attempt to start the array!

 

Where do I go from here? I do have a screenshot of my original disk config, but not a USB Unraid backup.

 

I'm running version 6.6.6

 

Many thanks, Gerald.

Link to comment

P.S.

 

I did a parity check yesterday which showed 100% correct.

 

If I tick the "Parity is already Valid" box, can I then start the array without my two parity disks being erased?

 

Also, why does Unraid want to erase (format?) my two parity disks anyway? No doubt this is due to the incorrectly issued "New Config" instruction. 🤥😳

Link to comment
2 minutes ago, Jetjockey said:

P.S.

 

I did a parity check yesterday which showed 100% correct.

 

If I tick the "Parity is already Valid" box, can I then start the array without my two parity disks being erased?

 

Also, why does Unraid want to erase (format?) my two parity disks anyway? No doubt this is due to the incorrectly issued "New Config" instruction. 🤥😳

 

Haven't checked your diagnostics yet. I still don't understand this part:

54 minutes ago, Jetjockey said:

When I tried to start the array, all drives showed red crosses.

 

Let's take a look at your diagnostics just to see if anything is obviously wrong.

 

The default for New Config is to rebuild parity, since New Config lets you make changes that would invalidate parity, such as adding or removing disks or, in the case of parity2, just changing their order.
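For single parity, the parity disk holds a bytewise XOR across all the data disks, so changing which disks are in the array changes the result. A toy sketch of why a rebuild is needed (not Unraid's actual code; the disk contents here are invented):

```python
# Toy single-parity (parity1) illustration: parity is the XOR of all
# data disks, byte by byte. Changing the set of disks changes the
# result, which is why New Config defaults to rebuilding parity.
from functools import reduce

def parity(disks):
    """XOR the disks together byte by byte."""
    return bytes(reduce(lambda a, b: a ^ b, block) for block in zip(*disks))

d1, d2 = b"\x0f\xf0", b"\x33\xcc"
p = parity([d1, d2])

assert parity([d2, d1]) == p               # parity1 doesn't care about order
assert parity([d1, d2, b"\x01\x01"]) != p  # adding a disk invalidates parity
```

Parity2 uses a different calculation that does depend on disk order, which is why even just reordering disks invalidates it.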

 

"Format" is definitely the wrong word when talking about parity. It is perhaps the most misused and misunderstood word on these forums.

 

Link to comment

Errrm. OK, I think I get that.

 

One question though, if I may?

 

All of my disks now have blue squares next to them, which makes me very nervous! If I start the array now, will the disks be accepted and show all the folders and contents (mostly movies)? Or will Unraid now apply the new config I requested in error and wipe those data disks as well?

 

Many thanks.

Link to comment
Just now, Jetjockey said:

Errrm. OK, I think I get that.

 

One question though, if I may?

 

All of my disks now have blue squares next to them, which makes me very nervous! If I start the array now, will the disks be accepted and show all the folders and contents (mostly movies)? Or will Unraid now apply the new config I requested in error and wipe those data disks as well?

 

Many thanks.

New Config doesn't write to any disks assigned as data. It only (optionally) rewrites all of parity from the parity calculation.

 

Looking at your diagnostics though, I have to wonder if Unraid knows what filesystem they are. Probably it will figure it out when you start. Do you know what filesystem they were? If it gives you a checkbox to Format anything, don't check it.
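If in doubt, `blkid /dev/sdX1` reports a partition's filesystem without mounting (or writing) anything; the idea is just to read the superblock magic. A minimal sketch of that idea, using the published magic values for XFS and btrfs (the usual Unraid array filesystems) and a made-up helper name `probe`:

```python
# Sketch: identify a filesystem by peeking at its superblock magic,
# read-only, without mounting. XFS puts b"XFSB" at offset 0; btrfs
# puts b"_BHRfS_M" at offset 0x10040 (64 KiB superblock + 0x40).
MAGICS = [
    (0x0,     b"XFSB",     "xfs"),
    (0x10040, b"_BHRfS_M", "btrfs"),
]

def probe(path):
    """Return the filesystem name for a device/image, or 'unknown'."""
    with open(path, "rb") as f:
        for offset, magic, name in MAGICS:
            f.seek(offset)
            if f.read(len(magic)) == magic:
                return name
    return "unknown"
```

This is only an illustration of what tools like `blkid` do internally; on a real system just use `blkid` itself.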

Link to comment

Yup, I understand where you're coming from on the format front; please excuse my use of generalisation.

 

Also, sorry, not all the disks had red crosses next to them; I obviously didn't word that sentence correctly. What I should have said is that those drives whose IDs had changed (sda, sdb, etc.) all had red crosses next to them.

 

I still don't understand why these IDs changed, but what I did to rectify the situation was this:

 

Unplug all drives from the SATA ports and then, one by one, plug them back in in the same order they were originally listed in my screenshot, e.g.:

Parity 0 = sda, Parity 1 = sdb, and so on. I rebooted between each drive being added. The IDs now stay the same on each and every reboot.

 

Thanks

Link to comment

It really doesn't matter which device designation is given to a disk. Linux uses dynamic assignment, and this may vary each time you reboot your system.

 

Unraid uses the serial number of the disks; this is unique and never changes unless you replace the disk with another disk.
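A toy sketch of the idea (the serials and slot names here are invented): the array slot is keyed by the serial number, so the assignment survives any reshuffle of the sdX names.

```python
# Toy illustration: array slots are keyed by disk serial number, so a
# reshuffle of the kernel's sdX names after a reboot is harmless.
# Serials below are made up for the example.
slots = {"S3Z9NB0K111111": "parity", "WD-WCC4E222222": "disk1"}

def resolve(slots, detected):
    """Map each detected (sdX name, serial) pair back to its array slot."""
    return {dev: slots.get(serial, "unassigned") for dev, serial in detected}

# Before reboot parity was sda; after reboot the same serial came up as sdb.
before = resolve(slots, [("sda", "S3Z9NB0K111111"), ("sdb", "WD-WCC4E222222")])
after  = resolve(slots, [("sdb", "S3Z9NB0K111111"), ("sda", "WD-WCC4E222222")])

assert before["sda"] == after["sdb"] == "parity"  # same disk, same slot
```

On a live system, `ls -l /dev/disk/by-id/` shows the persistent serial-based names alongside the sdX devices they currently point to.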

 

Not sure how you unplugged the disks, but this should never be done while the array is operational. Some systems support hot-swap (it really depends on the hardware you have), but swapping or replacing disks must always be done with the array stopped. In any case, when in doubt, always shut down the system before fiddling with the disks.

 

Link to comment
2 minutes ago, Jetjockey said:

Yup, I understand where you're coming from on the format front; please excuse my use of generalisation.

 

Also, sorry, not all the disks had red crosses next to them; I obviously didn't word that sentence correctly. What I should have said is that those drives whose IDs had changed (sda, sdb, etc.) all had red crosses next to them.

 

I still don't understand why these IDs changed, but what I did to rectify the situation was this:

 

Unplug all drives from the SATA ports and then, one by one, plug them back in in the same order they were originally listed in my screenshot, e.g.:

Parity 0 = sda, Parity 1 = sdb, and so on. I rebooted between each drive being added. The IDs now stay the same on each and every reboot.

 

Thanks

 

Probably what made the sdX designations change was simply replacing the cache earlier. Since Unraid doesn't care about that, there was no reason for concern and no reason to take any action. Please don't pay any attention to sdX in future unless we specifically tell you to (unlikely).

 

Disks showing a red X aren't normal based on your (possibly incomplete) description, though. And of course you rebooted before getting us the diagnostics, so we can't see anything of the events leading up to this. We will probably never know, since we have no reliable evidence.

 

2 minutes ago, bonienl said:

Not sure how you unplugged the disks, but this should never be done while the array is operational.

If you unplugged disks with the system running, then that would explain the red Xs. But if so, you left that part out of your description.

 

2 minutes ago, Jetjockey said:

Yes, I do have the filesystem details, luckily, as they were shown in my original screenshot. Does that help me?

Not sure it matters but maybe.

Link to comment

Looking at the diagnostics, I see all disks in the array set to status "NEW DISK" (because you did New Config).

In this state Unraid will attempt to use the existing filesystem of each disk when starting the array.

 

Go ahead and start the array, but as @trurl mentioned, stop immediately, take no further action, and ask for advice if any disks show up as unmountable and the format option is offered.

Edited by bonienl
Link to comment

No disks were unplugged or replugged with the system running; hence the statement about the system being rebooted between each action.

 

The description in the first post was accurate. After changing the cache drive, several other drives' sdX designations changed.

When I then started the array, those drives whose IDs had changed all had red crosses next to them.

 

At that point the troubles began.

Link to comment
2 minutes ago, Jetjockey said:

When I then started the array, those drives whose IDs had changed all had red crosses next to them.

Changing IDs will not cause disks to go "red-crossed". Something else must be at play in your system.

Since you did a New Config, the only way forward is to start the array with the disk assignments as you had them before.

Link to comment

Cheers for that, I'll set it to run overnight (it's 9pm here).

 

Plex has been reinstalled and seems to be fine; I'm going to go and run a few films now just to check.

 

The new SSD cache drive (500 GB Evo 860) is very quick; it puts the outgoing mechanical 1 TB Samsung to shame.

I can only imagine how fast these new Intel Optane drives must be. It's a shame that 3 TB SSDs are not the norm; too rich for my blood. I suppose I could install an SSD into the array just for documents or lightly used material: less heat, quieter, and less power draw.

 

Has anybody any experience with that?

 

Thank you.

Link to comment
11 minutes ago, Jetjockey said:

I suppose I could install an SSD into the array just for documents or lightly used material.

SSDs are a bad idea in the array. Any writes to them will be no faster, because parity has to update. Also, it is unclear whether some models may invalidate parity behind the scenes, since they do some housekeeping of their own.

 

Just put them in a redundant cache pool and make a cache-prefer user share for anything you really need speed for. You probably don't need that speed in many cases anyway, since the network may limit certain uses.

Link to comment

That all makes good sense. Yes, I think I’ll pop another SSD in to make a redundant pool instead. 

 

I'm thinking of upsizing my existing 3 TB Toshiba drives (5400 rpm). Can anybody recommend a good NAS-suitable 6 or 8 TB drive from personal experience?

 

The last lot of Seagate 500 GB enterprise drives I bought (10 years ago) all failed miserably!

 

Thanks.

Link to comment
