EMPTY and Remove a Drive Without Losing Parity


NAS

Recommended Posts

@-Daedalus @garycase

 

Gary -

 

I think you're missing the point. If unRAID seeks to move more into appliance mode and away from requiring a skilled IT person to operate it, features like "I want to remove this disk" would make sense, without a lot of "you have to do this first" steps that a person would just have to know about, or go digging knee-deep in the forums to find.

Link to comment

I understand what you're saying ... I just don't think it's even close to a common function.  WHY would you want to remove a disk that had data on it??   Replace it? -- Yes.   But remove it?   Folks who manipulate their data -- moving it around to different disks, restructuring their shares, etc. -- aren't likely in the "naïve appliance user" category.   And for them, simply moving all the data off a disk they want to remove from the array isn't a daunting chore at all.

 

Personally, if I DID want to remove a disk, I'd simply remove it and do a New Config.   (Obviously I'd confirm all was well before doing that)

 

I simply don't think a "Remove a disk" function is needed -- but I agree it would be a nice function for those cases where somebody wants it.   Zeroing a drive with DD and then doing a New Config with the "parity is already valid" box is definitely more complex than a simple "Remove an empty drive" function.    I just don't think that "emptying the drive" needs to be a GUI function => in fact, there's already a plugin that will do this [ UnBalance ].
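
For anyone who does go the manual route, here is a rough sketch of that dd-plus-New-Config sequence. The disk number and md device path are examples only (md device naming can also differ between Unraid versions), so double-check your own assignments before running anything:

```bash
# Manual removal outline -- disk5 is a placeholder, verify your own slot.
# Key point: zero the disk THROUGH its md device so parity is updated with
# every write; zeroing the raw /dev/sdX device would invalidate parity.

ls -A /mnt/disk5                                    # 1. confirm the disk is empty
dd if=/dev/zero of=/dev/md5 bs=1M status=progress   # 2. zero it (takes many hours)

# 3. When dd finishes: stop the array, Tools -> New Config (preserve
#    assignments), unassign disk5, tick "Parity is already valid",
#    start the array, and run a parity check afterwards to be safe.
```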

 

Link to comment

It is a common ask. I agree with you that most experienced users do not need it often, or at all. But every newbie asks (if not on the forum, then in his head). And there is one use case that is relatively common: the desire to use a recently added disk to rebuild a failing disk. The answer "you shouldn't have added it in the first place" is not exactly helpful, and typically the user is convinced to buy a new drive instead. But the removal method we've discussed (emptying then zeroing) would enable that as painlessly as possible, and there is no other way I know of.

Link to comment

It could also be argued that it's somewhat expected. I'll grant you removing a disk is far less common than adding one, but unRAID's big callout is how easily disks can be added, and that you don't have to faff about with matched drives, pools, etc. like you do with something like ZFS. By that logic, you'd think an obvious extension would be easily adding, removing, and replacing disks.

Link to comment
7 hours ago, trurl said:

 

So you meant this

 

 

No, actually I meant just what I said. An extension of the current approach, "Easily add and replace disks", would be "Easily add, replace, and remove disks". These are the main operations a user would expect to be able to do with something designed around easy data management, and it stands to reason all options should be (roughly) equally easy.

Link to comment
3 hours ago, -Daedalus said:

 

 it stands to reason all options should be (roughly) equally easy.

No, removing a disk is much more difficult to engineer to be idiot-proof, and it's much more rarely used. The first tenet of unRAID is data safety and integrity. Adding blank capacity is easy; replacing disks is a little more rigorous, because you have to ensure that the data is safely transferred from the old disk to the new. REMOVING disks is risky. How do you deal with the data currently on the disk to be removed? You have to take into account many more scenarios, all much more risky than a simple add or replace.

 

Many of the possible methods are outlined in this forum, but none are completely without risk.

 

Removing disks comes up so rarely compared to adding and replacing, it's not a high priority for limetech to develop an easy button, especially when there are methods already in place to accomplish it if the user really needs to remove a disk.

Link to comment
4 hours ago, jonathanm said:

No, removing a disk is much more difficult to engineer to be idiot-proof, and it's much more rarely used. The first tenet of unRAID is data safety and integrity. Adding blank capacity is easy; replacing disks is a little more rigorous, because you have to ensure that the data is safely transferred from the old disk to the new. REMOVING disks is risky. How do you deal with the data currently on the disk to be removed? You have to take into account many more scenarios, all much more risky than a simple add or replace.

 

Many of the possible methods are outlined in this forum, but none are completely without risk.

 

Removing disks comes up so rarely compared to adding and replacing, it's not a high priority for limetech to develop an easy button, especially when there are methods already in place to accomplish it if the user really needs to remove a disk.

 

I agree with some of this but respectfully disagree with the conclusion.

 

The priority of the enhancement should certainly be a consideration, but so should the level of effort. This is not like implementing dual parity, or figuring out how to make Ryzen C-states work. It is well defined, the techniques are clear, and by and large the code already exists. This would not be a lengthy enhancement.

 

There are three parts:

1 - Copy the data off - a lot like the mover script, but moving from the "drive to remove" to the array. It could get snagged up if there is not enough space. It would take some testing to confirm everything got moved off. Not trying to say it's trivial; there are some exception cases that would have to be thought through. But very doable. And if things got too complex, it could punt the job back to the user.

2 - Zero the disk - dead simple

3 - Remove the disk from the config - maybe a little tricky if the array is not brought offline. But I think the stop array button could be used to stop the array and remove the disk from the array.
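
To make part 1 concrete, here is a minimal sketch of the data-move step. The disk numbers are placeholders, and it ignores the exception cases mentioned above (split levels, permissions, files in use are the sort of thing a real implementation, or UnBalance, has to handle):

```bash
#!/bin/bash
# Part 1 only: empty the disk-to-remove onto another array disk.
# disk4 (source) and disk2 (destination) are placeholders.

SRC=/mnt/disk4      # disk you want to remove
DST=/mnt/disk2      # disk with enough free space

# Punt the job back to the user if the destination can't hold the data.
NEED=$(du -sb "$SRC" | awk '{print $1}')
FREE=$(df -B1 --output=avail "$DST" | tail -1)
if [ "$NEED" -gt "$FREE" ]; then
    echo "Not enough space on $DST" >&2
    exit 1
fi

# Copy with attributes preserved; the second --checksum pass re-copies
# anything that didn't transfer cleanly. Only clear the source once you
# are satisfied everything made it across.
rsync -aHAX "$SRC"/ "$DST"/
rsync -aHAX --checksum "$SRC"/ "$DST"/
find "${SRC:?}" -mindepth 1 -delete
```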

 

And I DO think it would be used. I already gave one use case ("I added a drive to the array and now want to use it to rebuild a failed disk"). But here is another, maybe more satisfying, use case.

 

Say a user buys a new 8T drive and wants to use it to replace two 3T drives (freeing a slot and adding 2T of free space). He can replace one of the disks with the 8T drive. But now he wants to distribute the contents of the other disk across the array and remove it. Today he would be using unbalance or Krusader, performing a parity check, doing a new config, performing a parity build, and, if he's a good citizen, doing one more parity check at the end. If this feature were in place, he'd push the remove-drive button and the whole operation would happen with parity maintained. Stop the array and the disk is free. People would use this feature!

  • Like 2
  • Upvote 1
Link to comment

It's been a long time since I was active on this forum (unfortunately not much free time), but I just want to say that after all this time I still fully agree that this should be implemented, and in fact I'm surprised it hasn't been yet, after all this time and all the changes to unRaid. I remember Tom mentioning that such a feature could lead to mistakes, but there are certainly ways to prevent them, like only allowing it for a drive with an empty file system, or only for a file system containing a single file with a specific filename, just to fully avoid any possible mistake. But IMO it is definitely logical that users may want to remove a drive from the array without needing to do a parity resync (which would leave the array unprotected while it runs), and without needing to use the command line, which is much more prone to mistakes (e.g. if the user zeros the wrong drive by mistake).

Link to comment
15 hours ago, bjp999 said:

There are three parts:

1 - Copy the data off - a lot like the mover script, but moving from the "drive to remove" to the array. It could get snagged up if there is not enough space. It would take some testing to confirm everything got moved off. Not trying to say it's trivial; there are some exception cases that would have to be thought through. But very doable. And if things got too complex, it could punt the job back to the user.

2 - Zero the disk - dead simple

3 - Remove the disk from the config - maybe a little tricky if the array is not brought offline. But I think the stop array button could be used to stop the array and remove the disk from the array.

 

Basically agree, but as I noted earlier it's more complex if you consider the non-NAS functions that are now common in UnRAID.   Dockers and/or VMs that use the to-be-removed disk would have to be either shut down during the process or somehow notified not to use that disk.

 

Excluding those, it is indeed a fairly simple function => step (1) is already available in UnBalance, which could perhaps be invoked to empty the disk (and STOP the process with a "NOT ENOUGH SPACE" error if there isn't sufficient space to do it); step (2) is indeed very simple (albeit LONG) -- but would also require the system be set to NOT do any further writes to the disk (this would perhaps mitigate the Dockers/VM issue as far as the removal goes ... but could generate questions as to "why isn't my Docker working anymore?"); and step (3) should be virtually instantaneous (just modifying the config) ... although as bjp999 noted it may be tricky with the array online.
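
Shutting the Dockers/VMs down for the duration is at least easy to sketch with standard docker/virsh commands (nothing Unraid-specific; this assumes stopping everything is acceptable rather than selectively steering them away from the one disk):

```bash
# Stop all running containers so nothing writes to the disk being emptied.
docker ps -q | xargs -r docker stop

# Gracefully shut down any running libvirt VMs.
virsh list --name | while read -r vm; do
    [ -n "$vm" ] && virsh shutdown "$vm"
done

# With nothing writing to the disk, it can be emptied and zeroed as
# described above; re-enable Docker/VMs once the removal is finished.
```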

 

Link to comment
  • 3 months later...
4 minutes ago, nick5429 said:

I know the P+Q parity scheme is a lot more complex than just a simple XOR.

 

Is the manual procedure in the first post valid with dual parity?

 

It's valid, just make sure you keep the other disks' assignments after the new config, i.e., say you remove disk3 from a 5-disk array, you need to leave that slot empty or parity2 won't be valid anymore. Also don't forget to enable turbo write before starting.
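
For anyone curious why the slot assignments matter, here is a rough sketch of the math. The exact internals of Unraid's parity2 are an assumption on my part (it follows the standard RAID-6-style P/Q construction), but the slot-dependence is the relevant point:

```latex
% Single parity (P) and dual parity (Q), standard RAID-6-style construction:
P = D_1 \oplus D_2 \oplus \cdots \oplus D_n
\qquad
Q = \sum_{i=1}^{n} g^{i} D_i \quad \text{(arithmetic in } \mathrm{GF}(2^8)\text{)}
```

A fully zeroed disk contributes nothing to either sum, so unassigning it leaves both P and Q correct. But each disk's contribution to Q is weighted by its slot index i, so renumbering the remaining disks would change their coefficients and invalidate parity2 -- which is exactly why the emptied slot has to stay empty.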

  • Like 1
Link to comment
6 minutes ago, trurl said:

Anytime you write to an md device, parity is maintained.

Of course.  But can I then remove that device, do a "New Config", and tell unraid "trust me, the parity is still good even though I removed a device" with dual parity mode active?

 

The answer is trivially yes in single parity mode, where P is a simple XOR; I didn't see this directly addressed for dual parity, where the calculations are much more complex (and the procedure was defined before dual parity mode existed), so I wanted to ask.

Edited by nick5429
Link to comment

As Johnnie noted, as long as you maintain all of the same slots for the other disks, it will still be valid.    This does, however, add another element of risk to using this procedure instead of simply doing a New Config and letting parity rebuild -- which is still how I remove disks.  [A very rare process in my case]

 

Link to comment
  • 2 weeks later...

I agree that this would be a nice feature to have.  I've actually been wanting to downgrade my server to use SSDs instead of the big HDDs that are in it right now; I don't need as much storage as I once did and I'd love an easy way to replace them all one-by-one without starting entirely from scratch on the whole thing.

Link to comment
  • 2 months later...

I know this is an older thread but wanted to add something. I am a newish unRaid user and made the mistake of dumping every drive I had lying around into the array (yes, I know now that it was stupid), including 3 tiny 320GB drives. These drives are old and haven't been written to yet. I want to pull them and drop in a 5TB drive. At the same time I want to add more SSDs to my cache pool.

 

Now I could just leave them all in, but that will push me past my 12-drive limit and I don't want to go to a Pro license. I, for one, would think a simple button solution would be great!

Link to comment
  • 11 months later...

Some years later: I'd like to remove a drive because I have one that's just slow. It's old and it's slowing down the whole array when calculating parity.

I get that it's not a common practice, but a GUI button that said "remove this drive", with the option to trust parity or not (with an "OMG make sure you zero the drive before you trust it" message), would be nice. The idea of reconfiguring the array makes my bum twitch; there's so much room for error.

 

 

Here is a brand-new config vs. the same config minus a drive: the latter means I'm not having to triple-check that the parity drive is definitely the parity drive (yes, definitely, sure sure, that's the parity drive, for sure). Or I could just click a button that said "remove drive" and know that the system has kept everything else as it was.  Whether to rebuild parity or trust parity is a matter of user preference, but a total rebuild of the array config should be something reserved for MAJOR hardware changes or lost flash drives.

Link to comment

Using the New Config tool with the option to retain current assignments effectively allows this!   After doing the New Config you can go to the Main tab and remove the drive in question without much chance of getting it wrong.    You can then start the array to rebuild parity.

 

One thing that I do not think is currently enforced is disabling the option to trust parity before starting the array if you have removed a drive.   Obviously, if you have removed a drive, parity will be invalid and need rebuilding.

Link to comment

To get back to the point of this feature request: no doubt something like what I am going to say is already somewhere in this 8-page thread, but since we have awakened it again...

 

What you would want is a button to clear an empty array disk while it is in the array and then unassign it, without requiring any additional user intervention. The button would only appear if a disk is mountable and empty.  Some of the code to do something like this already exists.
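
Roughly, the check behind that button would be something like this (a sketch of the shape of it, not Limetech's code; the mount point is a placeholder, and the marker-file idea mentioned earlier in the thread could be layered on top as an extra confirmation):

```bash
DISK=/mnt/disk4                     # placeholder slot

# Only offer "clear this disk" if it is mounted...
mountpoint -q "$DISK" || { echo "disk not mounted" >&2; exit 1; }

# ...and genuinely empty (nothing on it at all, hidden files included).
if [ -n "$(find "$DISK" -mindepth 1 -print -quit)" ]; then
    echo "disk still holds data -- refusing to clear" >&2
    exit 1
fi

echo "disk is mountable and empty -- OK to zero it through the md device"
```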

  • Upvote 1
Link to comment

One thing to consider is what happens if the process is interrupted such as a power outage.

 

In that event, parity is still valid (except for the effects of an unclean shutdown) but the disk is not mountable. When you boot up, you would have an unformatted disk still assigned to the array. The user could format the disk, then the button would appear and they could restart the process from the beginning.

 

A parity check following the procedure would be a good idea whether or not it was interrupted.

Link to comment

Should the user be allowed to stop the array while the procedure is taking place?

 

Been a while since I have needed to clear a disk. What happens if you add a disk to a new slot and let Unraid clear it? Can you stop the array while it is clearing?

 

Not really a comparable scenario since parity isn't updating while clearing a new disk to be added, but it would be updating parity while clearing a disk to be removed.

 

I guess stopping the array would be just like the case of the power outage in my previous post. On starting, there would be an unformatted disk assigned, etc.

Link to comment
