
I need to remove the parity drive from my array. What's the proper way?


Recommended Posts

OK, so for now, I want to remove my parity drive and run my array without parity (I understand the risk).

 

I'll add a parity drive later.

 

For now, how do I remove it properly?

 

I saw this on the wiki page

 

1. Stop the array by pressing "Stop" on the management interface.
2. Un-assign the drive on the Devices page, then return to the unRAID Main page.
3. Select the 'Utils' tab.
4. Choose "New Config".
5. Agree and create a new config.

 

I stopped the array, and unassigned the parity drive.

 

Now, when I go to the 'Utils' tab and click "New Config", I get this warning, which is scaring me off, so I didn't do it:

 

This is a utility to reset the array disk configuration so that all disks appear as "New" disks, as if it were a fresh new server.

 

This is useful when you have added or removed multiple drives and wish to rebuild parity based on the new configuration.

 

DO NOT USE THIS UTILITY THINKING IT WILL REBUILD A FAILED DRIVE - it will have the opposite effect of making it impossible to rebuild an existing failed drive - you have been warned!

 

Can someone help?


If you are scared, that is a good thing. Removing parity is not smart, as you become susceptible to losing data if one of your drives develops errors or fails.

 

"New config" is your ticket if this is really what you want to do. I might suggest a parting parity check and final review of the smart attributes of all your drives before doing it.

 

(Your question is a little like asking if it is okay to remove the spare tire from your car despite a message on the spare tire that says not to, or you may get stranded!)

"New config" is your ticket if this is really what you want to do. I might suggest a parting parity check and final review of the smart attributes of all your drives before doing it.

 

I'm reluctant to do this, as I'm going to put the drive back in later on as a new parity drive.


"New config" is your ticket if this is really what you want to do. I might suggest a parting parity check and final review of the smart attributes of all your drives before doing it.

 

I'm reluctant to do this, as I'm going to put the drive back in later on as a new parity drive.

 

You can always add a parity drive to your configuration later, but a New Config IS what you want to do if your goal is to run without parity and you don't want constant warnings about configuration errors (i.e. a "missing" drive).

 


... What does it exactly do?  Does it ease my data on all the drives in the array?

 

I presume you mean "erase" => and the answer is NO ... it has NO impact on your data.  You're simply defining a configuration without parity.

 

First, be sure you know WHICH drive is currently the parity drive (the safest thing to do is print out the Main Web GUI page that shows all your drive assignments before you start).

 

Then just do the New Config and assign all of your data drives to the new configuration.

 

Done :-)    All of your data will be intact, and you'll have an array without a parity drive.    Not sure why you want to do this, but as long as you understand the risk, this will do it.

 


"New config" is your ticket if this is really what you want to do. I might suggest a parting parity check and final review of the smart attributes of all your drives before doing it.

 

I don't want to do this, as I'm going to insert the drive back again later on, as if it's a new parity drive.

 

You can always add a parity drive to your configuration later, but a New Config IS what you want to do if your goal is to run without parity and you don't want constant warnings about configuration errors (i.e. a "missing" drive).

 

OK, will "New Config" erase any data on my drives? This is what's scaring me after reading the message "...as fresh disks" when I click "New Config".


"New config" is your ticket if this is really what you want to do. I might suggest a parting parity check and final review of the smart attributes of all your drives before doing it.

 

I don't want to do this, as I'm going to insert the drive back again later on, as if it's a new parity drive.

 

You can always add a parity drive to your configuration later, but a New Config IS what you want to do if your goal is to run without parity and you don't want constant warnings about configuration errors (i.e. a "missing" drive).

 

Ok, will "new config" erase any data on my drives? this is what is scaring me after reading the message "......as fresh disks" when i click the "new config"

 

Absolutely NOT.  Your data will all be just fine  :)


"New config" is your ticket if this is really what you want to do. I might suggest a parting parity check and final review of the smart attributes of all your drives before doing it.

 

I don't want to do this, as I'm going to insert the drive back again later on, as if it's a new parity drive.

 

You can always add a parity drive to your configuration later, but a New Config IS what you want to do if your goal is to run without parity and you don't want constant warnings about configuration errors (i.e. a "missing" drive).

 

Ok, will "new config" erase any data on my drives? this is what is scaring me after reading the message "......as fresh disks" when i click the "new config"

You would only lose data if you have a 'failed' drive when you click New Config, as the system would stop emulating the failed drive. That is why the warning is there: New Config is not the way to recover from a failed drive without losing data.

Done, and it worked. Thanks, everyone!

 

I will explain why I'm removing my parity, I have a good reason:

 

- So I'm using an LSI card, which is supposed to support 24 drives, attached to an Intel SAS expander.

 

When I added this new LSI card to replace the old Dell PERC H200 (which supported only 16 drives), I was running my system with 16-17 drives, and things were fine.

 

Over time, as I added more drives (reaching around 21, including the parity drive), I started facing an issue: parity checks were extremely slow.

 

Another issue I had, on several different occasions, is these errors; while they occur, the system hangs for a few seconds and then resumes:

 

Jun 22 11:45:42 Tower kernel: scsi target0:0:16: enclosure_logical_id(0x5001e674641b1fff), slot(17)
Jun 22 11:45:43 Tower kernel: sd 0:0:16:0: task abort: SUCCESS scmd(f3edf3c0)
Jun 22 11:46:19 Tower kernel: sd 0:0:10:0: attempting task abort! scmd(f748d480)
Jun 22 11:46:19 Tower kernel: sd 0:0:10:0: [sdl] CDB: 
Jun 22 11:46:19 Tower kernel: cdb[0]=0x88: 88 00 00 00 00 00 00 ae 2d 30 00 00 04 00 00 00
Jun 22 11:46:19 Tower kernel: scsi target0:0:10: handle(0x0014), sas_address(0x5001e674641b1feb), 
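For anyone debugging similar SAS errors, one quick triage step is to tally the "task abort" events per SCSI device, so you can see whether they cluster on particular ports of the card or expander. A minimal Python sketch (the log format is taken from the lines above; reading /var/log/syslog is an assumption about where your syslog lives):

```python
import re
from collections import Counter

def abort_counts(log_text: str) -> Counter:
    """Count 'task abort' kernel messages per SCSI device (host:channel:target:lun)."""
    pattern = re.compile(r"sd (\d+:\d+:\d+:\d+): .*task abort")
    return Counter(m.group(1) for m in pattern.finditer(log_text))

# Sample lines copied from the syslog excerpt above; in practice you would
# read the file instead, e.g. abort_counts(open("/var/log/syslog").read())
sample = (
    "Jun 22 11:45:43 Tower kernel: sd 0:0:16:0: task abort: SUCCESS scmd(f3edf3c0)\n"
    "Jun 22 11:46:19 Tower kernel: sd 0:0:10:0: attempting task abort! scmd(f748d480)\n"
)
print(abort_counts(sample))
```

If the counts all land on devices hanging off the same expander port, that points at cabling or the expander rather than the drives themselves.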

 

I posted previously about this error, and many people told me the power supply might not be sufficient for the drives; others suggested the cables could be loose, etc.

 

Because these errors only appeared as I added disks, I concluded that my card doesn't seem to handle 21 drives (by the way, I bought the LSI card on eBay from a seller in Hong Kong, so I don't know whether it's a fake or not...).

 

So, for the time being, to avoid these errors and the poor performance, and since higher-capacity drives are available now, I'll accept (even though I have 24 slots in the rig) that my system can handle a maximum of 20 drives (or even 18).

 

So you'll ask, what does this have to do with removing the parity? lol, it's coming...

 

Before removing the parity drive, I was planning to remove one of the 2 TB drives from the array, and to do that I had to manually copy the data from that drive to another one. That's where I hit the issue: copying was extremely slow and kept hanging.

 

So I decided to remove the parity drive to speed up migrating the data off the 2 TB drive; once that's complete, I'll remove the 2 TB drive, re-add the parity drive, and rebuild parity from scratch.

 

and that's why I had to go through all this hassle...


Be SURE you understand the "user share bug" BEFORE you copy the data off that drive ==> THAT will cause a complete loss of the data you copy !!

 

... in short, just don't copy using user share references => do it with the disk shares.    That's not absolutely necessary, but it's the easiest way to ensure you don't make a mistake.    e.g. don't copy to or from \\Tower\MyShare  ... use \\Tower\diskx\MyShare  instead.
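To make the distinction above concrete, here's a small sketch that maps a user-share path to the corresponding disk-share path. The `to_disk_share` helper is hypothetical and purely illustrative, not part of unRAID; "Tower" and the disk number are examples, and you must know which disk actually holds the files.

```python
def to_disk_share(user_share_path: str, disk_number: int) -> str:
    r"""Rewrite a user-share UNC path (\\Tower\MyShare\...) into the
    equivalent disk-share path (\\Tower\diskN\MyShare\...).
    Hypothetical helper for illustration only."""
    prefix = "\\\\Tower\\"
    if not user_share_path.startswith(prefix):
        raise ValueError("expected a \\\\Tower\\... UNC path")
    rest = user_share_path[len(prefix):]          # e.g. "MyShare\Movies"
    return f"{prefix}disk{disk_number}\\{rest}"

print(to_disk_share("\\\\Tower\\MyShare\\Movies", 3))  # \\Tower\disk3\MyShare\Movies
```

The point of the advice is simply that copies should name a specific disk on both ends, so the copy can never read and write the "same" file through two different views of the array.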

 


Be SURE you understand the "user share bug" BEFORE you copy the data off that drive ==> THAT will cause a complete loss of the data you copy !!

 

... in short, just don't copy using user share references => do it with the disk shares.    That's not absolutely necessary, but it's the easiest way to ensure you don't make a mistake.    e.g. don't copy to or from \\Tower\MyShare  ... use \\Tower\diskx\MyShare  instead.

 

I do that anyway; I never knew about any bug until you mentioned it. What is it?

 

Now, when I'm ready to use the same old parity drive, what are the steps to put it back in? I think the parity on that drive is invalid, so I want it rebuilt from scratch. I'd appreciate the help.

 

thanks


When you want to add a parity drive, you simply Stop the array, assign a parity drive, and then Start the array.

 

The system will then do a parity sync to the newly assigned drive.

 

Does the same apply to a disk that has already been used for parity (i.e. my old parity drive)?


Yes, it still applies. Since it's a new configuration, unRAID has no "memory" of which disks were in previous configurations, nor does it matter what's on the disk you assign: it's all going to be wiped out anyway when it does the parity sync.

 



This is to update you guys on my earlier post.

 

After removing the parity drive, moving files worked fine (write speed was about 40-45 MB/s). I emptied the two 2 TB drives, removed them from the array, and re-added the parity, but the problem still existed. BUT the parity-sync speed improved significantly: it was previously about 2 MB/s (with 21 disks in the array), and it improved to 12 MB/s (19 disks).
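For a sense of what those rates mean in practice, here's some back-of-the-envelope arithmetic (illustrative only; it assumes a 2 TB parity drive and a constant average rate, which real syncs don't have):

```python
def sync_hours(drive_tb: float, rate_mb_s: float) -> float:
    """Rough parity-sync duration in hours at a constant average rate.
    Uses decimal units (1 TB = 10**12 bytes, 1 MB = 10**6 bytes)."""
    return drive_tb * 1e12 / (rate_mb_s * 1e6) / 3600

# At 2 MB/s a 2 TB sync takes roughly 278 hours; at 12 MB/s, roughly 46 hours.
print(f"{sync_hours(2, 2):.0f} h vs {sync_hours(2, 12):.0f} h")
```

So even the "improved" 12 MB/s still means a sync measured in days, which is why it's worth chasing the controller problem further.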

 

However, while rebuilding the parity, I sometimes had difficulty accessing my array (via the network-mapped drives)... and the errors still appeared, though much less often than before.

 

So my assumption remains that I need to remove more drives, which I'm planning to do. I've un-assigned the parity drive for now.

 

Will keep you posted on the progress/updates.

 


Archived

This topic is now archived and is closed to further replies.
