
Swap active SMR parity drive with active CMR data drive



Newer to unRaid and NAS tech in general. After finally getting my array set up the way I wanted, I came across the sticky thread on SMR drives. I have since realized that it is possible that every drive I've had issues with in the last 6 years has been an SMR or TGMR drive, perhaps because I've been using them like regular drives. Now that I understand the purpose and technology behind SMR, I think I should rearrange my array. Here is a copy of my array stats from the last 11 days of uptime, which is representative of normal weekly usage. I'm constantly writing to the Disk 1 data drive (backups/data/small and big files) and reading from the rest (mostly media/large files).

[Attached screenshot: Unraid array read/write statistics]

The two parity drives are Seagate Compute SMR drives. The four data drives are IronWolf Pro CMR drives. After reviewing this usage, it seems the best decision would be to swap Disks 2 and 3 with the parity drives. If I understand everything correctly, I'll get better performance and longevity by using the CMR drives as parity, and the more economical SMR drives will perform well as mostly-read drives in Disks 2 and 3.
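One way to double-check which drives are actually SMR or CMR is to pull each drive's model number with smartctl from the console and compare it against Seagate's published SMR/CMR lists. A minimal sketch, assuming a drive shows up as /dev/sdb (substitute the device letters shown on your Main page):

    # Print drive identity; the model number is what identifies the recording technology
    smartctl -i /dev/sdb | grep -E 'Model|Capacity'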

 

Assuming all of that is correct, what is the best method of performing this swap? My assumed order of operations is:

 

1) Remove Parity 2; reassign it as Disk 5.

2) Copy Disk 2 to Disk 5 (see the copy sketch after this list).

3) Move Disk 2 to Parity 2. Rebuild parity. 

4) Remove Parity 1; reassign it as Disk 2.

5) Copy Disk 3 to Disk 2. 

6) Move Disk 3 to Parity 1.

7) Move Disk 5 to Disk 3.

8) Rebuild parity. 
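For the copy steps (2 and 5), something like rsync run from the console should carry everything over intact. This is only a sketch, assuming Unraid's standard per-disk mount points and that the new Disk 5 is already formatted and part of the array:

    # Copy Disk 2's contents onto Disk 5, preserving permissions and extended attributes
    rsync -avX --progress /mnt/disk2/ /mnt/disk5/
    # Optional second pass with checksums to confirm nothing was missed
    rsync -avXc --progress /mnt/disk2/ /mnt/disk5/

(The trailing slashes matter: they copy the contents of disk2 into disk5 rather than creating a disk2 folder inside it.)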

solidsnake-diagnostics-20200422-1210.zip


There is a better way to do what you want! 

First decide exactly which data disks you want to move to parity.

N.B. I am assuming that you currently have valid parity.

  1.  Stop array and unassign one parity disk and one data disk
  2. Start array to commit these changes.  At this point Unraid will say it is emulating the missing data drive.  You should still be able to see all its contents just as if it were still assigned (a quick sanity check is sketched below).
  3. Stop array and assign the parity drive you have just unassigned in place of the missing data drive
  4. Start array to rebuild the data drive by writing the emulated drive's contents to the data drive (which was the parity drive).  Keep the data drive you have just unassigned intact just in case anything goes wrong with the rebuild.
  5. After the rebuild completes successfully, stop the array and assign the old data drive to replace the parity drive, then start the array to build parity

In theory you could combine steps 4 and 5 to save time, but then you would have no recovery option if anything went wrong with the rebuild of the data drive.  Using two separate steps means you keep the original data drive intact until you can confirm that the rebuild completed successfully, at the expense of additional elapsed time.
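If you want extra reassurance before committing to step 4, you can confirm from the console that the emulated disk is readable while the array is started with the drive unassigned. A sketch, assuming the unassigned data disk was Disk 2:

    # The missing disk is emulated at its normal mount point while the array is started
    ls /mnt/disk2 | head
    # Total used space should roughly match what the physical disk reported before it was unassigned
    du -sh /mnt/disk2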

 

After the first parity/data drive swap has completed successfully, you can repeat the procedure for the other parity drive and a different data drive.

 

BTW:  Although there is nothing wrong with having dual parity drives, it is probably overkill for such a small array.  You might be better off using the second parity drive as an additional data drive instead.  Do you have good backups of all critical data on your array?  Parity should not be treated as a substitute for backups.

2 minutes ago, itimpi said:

There is a better way to do what you want!...

Thank you for the detailed response. This makes it perfectly clear what to do.

 

I have offline backups of all critical data. I ended up with those Compute drives left over from another purpose while I was building this server out, so I just added them to the array; otherwise it would have only been 4 IronWolf Pros and single parity. Typing this response is reminding me why I had set up the Compute drives as parity in the first place: the IronWolf Pros have data recovery included. In the case of catastrophic failure of everything, I'd rather have Seagate working on recovering a drive that has data on it, not parity bits. On the other hand, every drive I have that has read errors is an SMR-type drive. So, do I want my SMR parity drives to underperform and fail faster because of the writes, just to preserve the chance of Seagate recovering data from a failed drive that is already covered by parity and offline backups? This turned into me thinking out loud through the forum. I think I'm going to perform the change as planned and move the IronWolves to parity, since I'm rarely going to be writing large files to the SMR disks, so they should last a while under those conditions.
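For anyone weighing the same trade-off, the read-error history on each drive can be checked from the console with smartctl; a sketch, using /dev/sdc purely as a placeholder device:

    # SMART attribute table; reallocated and pending sectors are the usual early-warning signs
    smartctl -A /dev/sdc | grep -E 'Reallocated_Sector_Ct|Current_Pending_Sector|Raw_Read_Error_Rate'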

