One2go

Added 4TB drives to 9 drive Array of 2TB drives. How to proceed?

22 posts in this topic


Just need a bit of direction on how to proceed with the increase in the array size. I hated to muck with it, but it was either delete content or increase the array size, and I decided it was time for an increase.

 

I have had an array of eight 2TB data drives plus a 2TB parity drive. I purchased four 4TB HGST NAS drives. I upgraded to UnRaid v6.1.4, added all the needed plugins (especially the excellent preclear plugin), and finished preclearing the four 4TB drives.

 

So at present I have a functioning array and four precleared 4TB unassigned drives. I know I could use a New Config and assign the drives accordingly, with a new 4TB parity drive and three 4TB data drives, but what should I do with the old 2TB parity drive? I have read numerous pages but can't quite come to grips with it. Do I need to format the old parity drive, or just let it be and assign it to the array as a data drive?

 

Before starting the upgrade I ran a parity check on the whole array of 2TB drives; no corrections needed to be made and the parity is valid. One place I read said to just replace the parity drive and let it rebuild parity on the new 4TB drive. Another said to just back up the config directory of the flash drive and start with a New Config. What is the easiest method to get the 4TB parity and data drives running and use the old 2TB parity drive as a data drive?

 

I really do like version 6 and the excellent plugins that make administering the server so much easier. I have enjoyed the ride from version 4.7 to the latest, and my hat is off to the UnRaid community.

 

Thanks for your help

O2G


I assume you want to keep the data on your current array, so the first thing you need to do is replace the parity drive with one of your new 4TB drives. Follow this article:

 

https://lime-technology.com/wiki/index.php/The_parity_swap_procedure

 

Once that is done, begin swapping out the existing drives one at a time with your remaining 4TB drives; let each drive rebuild, then replace the next one.

 

As for your old parity drive, you could preclear it and then keep it as a spare, or add it to the array as a data drive.

 

What license do you have?


Thanks for your help. I do have a PRO license.

 

You assumed correctly: I want to keep the data that I have in the array. Just add the blank 4TB drives and keep the eight 2TB data drives the way they are, using the new 4TB drives for added storage.


Since you want to increase not only the size of the array but also the number of disks in it, the simplest approach is to do a New Config and assign one of the 4TB drives as parity, and ALL other drives as data -- your current data drives, your old parity drive, and your other 4TB drives.

 

When you Start the array, it will do a parity sync on the new 4TB parity drive -- and it will also show 4 "unmountable" drives: the 3 other 4TB drives and your old 2TB parity drive. Wait for the parity sync to complete; then check the box to Format those drives.

 

Doing this eliminates the need to either preclear the old 2TB parity drive or wait for the array to clear it => it will only require formatting (a very quick process), just like the precleared drives.

 


I would do the same, except I would keep the old parity disk out of the new config; just in case there's a problem with one of the data disks, you could use the old parity drive with the Trust Parity procedure.

 

If all goes well, I would later preclear the old parity drive and add it to the array; it's not like you're going to need the extra 2TB for now.

 


Actually the old parity disk will be intact even if it's in the config, as long as it's not formatted.  That's why I said to wait for the parity sync to complete before formatting the "unmountable" drives.

 

I should, however, have made it clear why it was important to wait for the parity sync to finish before doing the formatting.

 

 

 


Thank you all very much for your assistance. I will proceed as follows:

 

1. Backup the Flash drive

2. Start a New Config from the Tools menu

3. Assign ALL drives to the new configuration, with a 4TB drive as parity.

4. Start the array and let the parity sync run.

5. After parity is finished and valid, format the three new 4TB drives and the old parity drive.

 

If everything goes according to plan I should have a new array of 12 data drives plus a parity drive, for 30TB of storage.
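The arithmetic behind that figure can be sanity-checked quickly (drive counts taken from the posts above; the fourth 4TB drive becomes parity and adds no usable space):

```python
# Data drives after the New Config: eight original 2TB drives,
# the old 2TB parity drive repurposed as data, and three 4TB drives.
# The fourth 4TB drive is the new parity and contributes no storage.
data_drives_tb = [2] * 8 + [2] + [4] * 3

print(len(data_drives_tb))   # number of data drives
print(sum(data_drives_tb))   # usable storage in TB
```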

 

Thanks again for all your help. Hopefully someone else in my situation, fearful of changing a working setup, will feel comfortable changing his rig after reading these most helpful comments.

 

Thanks again,

O2G


That'll work fine.  Note that step #5 is all done at once -- you simply check a box that says to format the unmountable drives and they'll all be done together.  It won't take more than 2-3 minutes.

 


I think I just discovered that a parity sync takes a lot longer than a parity check. The parity rebuild from eight 2TB Samsung Spinpoint F3 drives to an HGST 4TB Deskstar NAS drive still has some 24 hours to go after 50% completion. Well, better safe than sorry, and no redballing or defective drives. If everything goes according to plan the server should be finished with the upgrades and ready once the weekend is over  :(

 

Well, it just went past the 50% mark and the remaining time dropped drastically: just 3 hours to go, and the speed increased severalfold. It looks like it is done with the filled 2TB data drives.
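That speed-up is what a two-phase model of the sync predicts: until the 2TB mark every drive participates and the slowest sets the pace; past it, only the 4TB drives remain. A rough sketch (the MB/s figures below are invented for illustration, not measured from this server):

```python
# Rough two-phase model of a parity sync onto a larger parity drive.
# Speeds are assumed example values, not measurements.
MB_PER_TB = 1_000_000  # decimal megabytes per terabyte, as drive vendors count

def sync_hours(slow_mb_s, fast_mb_s, small_tb=2, large_tb=4):
    """Phase 1: every drive participates, so the slowest sets the pace.
    Phase 2: past the small-drive capacity, only the large drives remain."""
    phase1 = small_tb * MB_PER_TB / slow_mb_s            # seconds
    phase2 = (large_tb - small_tb) * MB_PER_TB / fast_mb_s
    return (phase1 + phase2) / 3600

# e.g. 30 MB/s while the 2TB drives are in play, 130 MB/s afterwards
print(round(sync_hours(30, 130), 1))
```

With those assumed speeds, the first 2TB dominates the total time, which matches the "24 hours to go at 50%" observation in spirit.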


Usually the parity sync duration is very similar to a parity check, but doing the sync after adding the 4TB drives is going to take much longer than with 2TB drives only. Also, depending on what controller(s) you're using, you could be hitting a bottleneck with the 4 extra disks.


Thanks, the remaining time just dropped drastically, with only 3 hours to go. The parity drive plus 5 data drives are on the MB ports, 3 of the drives are on an AOC-SASMV8 controller, and 4 drives are on an Adaptec Serial ATA II RAID 1430SA controller. From what I read, the slowest drive and configuration sets the maximum speed until that drive is out of the picture.
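That "slowest link sets the pace" rule is just a minimum over the active paths. The throughput ceilings below are invented for illustration only:

```python
# Invented per-path throughput ceilings in MB/s; the parity operation
# is gated by the slowest of whatever drives/controllers are active.
active_mb_s = {"motherboard ports": 140, "AOC-SASMV8": 110, "1430SA": 95}
print(min(active_mb_s.values()))
```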

 

Since this was my first time enlarging an existing array in both size and number of disks, I am sure there would have been shortcuts, but if it finishes with the data intact and the disks protected and up and running, I don't mind taking the longer road.

 

I do appreciate all the help, comments and advice, which make UnRaid such an excellent solution for content storage.


The Parity rebuilding of eight 2TB Samsung Spinpoint F3 drives to a HGST 4TB Deskstar NAS drive after 50% completion still has some 24 hours to go.

 

This looks way too long for the controllers you are using. Are any of the Samsung disks model HD203WI?


ALL is WELL. 12 data drives online and protected in the array, with a 4TB parity drive assigned. Everything went as planned and I almost doubled my storage. Judging from past usage this should be good for a few years, and longer with good content management and no hoarding  ;D

 

Seven of the Samsung drives are HD203WIs and one is a newer HD204UI. The HD203WIs have worked flawlessly for over 5 years: just write content to them and do a parity check every so often, with no hiccups whatsoever; they just did what it says on the tin  ;D The HD204UI had some bad publicity, but it likewise has now worked for over 4 years without a hitch. But as I said, once I reached the 50% point of 2TB in the array, the computed remaining time dropped drastically, and the total sync of all drives finished in just under 24 hours.

 

Behind this keyboard is one happy puppy. Thanks again for all the assistance, and hopefully another 5 totally uneventful years with the 2 UnRaid servers are ahead of me.


These disks appear to have an issue with Unraid V6 during parity checks/syncs and disk rebuilds. I had 3 servers with one or more HD203WIs and they were causing big slowdowns in all of them; see my post here.

 

I ended up selling some of them and moving the rest to a backup server; I still have the issue, but at least it's only on one server.

 

For no reason I can find, sometimes it is much worse than others, as you can see from my last 2 parity checks on an HP N54L with 4 x HD203WI 2TB:

 

Description: Duration: 8 hours, 20 minutes, 46 seconds. Average speed: 66.6 MB/sec

Description: Duration: 18 hours, 37 minutes, 25 seconds. Average speed: 29.8 MB/sec

 

For comparison, the last one took longer than on another HP N54L with 3 x 8TB Seagates + 1 x 3TB Toshiba:

 

Description: Duration: 15 hours, 53 minutes, 3 seconds. Average speed: 139.9 MB/sec
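Those averages are consistent with capacity divided by duration (2TB covered per check on the N54L with the HD203WIs, 8TB on the comparison box), which a quick check confirms:

```python
def avg_mb_s(tb, h, m, s):
    """Average speed of a parity check covering `tb` decimal terabytes,
    given its duration in hours, minutes, and seconds."""
    return tb * 1_000_000 / (h * 3600 + m * 60 + s)

print(round(avg_mb_s(2, 8, 20, 46), 1))    # fast HD203WI check
print(round(avg_mb_s(2, 18, 37, 25), 1))   # slow HD203WI check
print(round(avg_mb_s(8, 15, 53, 3), 1))    # 8TB comparison server
```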

 

 

I suspect some firmware issue, and I'm not optimistic about finding a solution.


Thanks for the information. I also have an HP MicroServer with six HGST 3TB drives and have yet to compare the parity check times. I don't know if I am up to changing the HD203WI disks since they are 98% full; none of them has more than 40GB of free space. Basically they just sit there with content, and I normally do a parity check just twice a year to make sure I can restore content if a drive fails. Once I delete content from them I may consider replacing them one at a time, especially if the HGST 4TB drives are available at a bargain price like on Black Friday, $254 for 2 of them.

 

Again, thanks for your valuable information.

N2L


I also would not replace them just because of this; the disks have been very reliable, for me more so than the newer HD204UI, and the issue is not present during normal reads/writes to the array.

 

On another note, I believe most users recommend doing a parity check once a month, that’s what I do.

 


The HD203WI's are indeed very reliable drives.    I still have one in one of my desktops ... but none in any of my UnRAID servers.    They're only 500GB/platter drives, but clearly even that doesn't explain the really slow speeds Johnnie is seeing on some parity checks -- and the lack of consistency is really strange.

 

Question for Johnnie:  Do you have v6 set to NOT update the display [settings - Display Settings - Page Update Frequency - Disabled]  and have you set nr_requests set to 8 for all drives?    These changes made a BIG difference in the parity check performance under v6 on my older system.    FWIW I've still got that system running v6.1.3, as the changes to 6.1.4 didn't work as well, and based on the feedback from 6.1.5 and 6.1.6 I think I'll wait for a bit more stability before moving forward.  But 6.1.3 with the nr_requests changes and the display updates disabled provides a very stable parity check that matches the v5 timings.
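For reference, `nr_requests` lives in sysfs at `/sys/block/<dev>/queue/nr_requests`. A small root script along these lines would apply the tweak to every sd* disk (a sketch only; the `sys_root` parameter is not part of any real tool, it just lets the loop be tried against a scratch directory):

```python
import glob
import os

def set_nr_requests(value=8, sys_root="/sys"):
    """Write `value` into every sd* device's nr_requests setting.

    `sys_root` exists only so the loop can be exercised against a
    scratch directory; on a real server it stays "/sys" and the
    script must run as root.
    """
    pattern = os.path.join(sys_root, "block", "sd*", "queue", "nr_requests")
    changed = []
    for path in glob.glob(pattern):
        with open(path, "w") as f:
            f.write(str(value))
        changed.append(path)
    return changed

# Typical use on the server (as root):
#     set_nr_requests(8)
```

Note this setting does not survive a reboot, so it would need to be reapplied at startup (e.g. from the `go` file).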

 

I agree there's no reason to replace the HD203WI's ...especially since they're effectively static in their content.  I have a bunch of old 500GB/platter 2TB WD EADS units in my media server that haven't had anything written to them in years -- 8 are completely full (< 1GB of free space) ... so they are only used for streaming movies from them and for the periodic parity checks (I do these once/quarter).

 

 


Question for Johnnie:  Do you have v6 set to NOT update the display [settings - Display Settings - Page Update Frequency - Disabled]  and have you set nr_requests set to 8 for all drives?    These changes made a BIG difference in the parity check performance under v6 on my older system.    FWIW I've still got that system running v6.1.3, as the changes to 6.1.4 didn't work as well, and based on the feedback from 6.1.5 and 6.1.6 I think I'll wait for a bit more stability before moving forward.  But 6.1.3 with the nr_requests changes and the display updates disabled provides a very stable parity check that matches the v5 timings.

 

I tried the nr_requests tweak when it was discovered and it didn't make any difference, same goes for the change in the parity check engine made in v6.1.4.

 

It's visible in the graphs I posted in the other thread that the disks spend some time at ~1/3 speed; what varies is how long they stay like that during a parity check -- sometimes 50% of it, other times almost 100%, like that last parity check.


The variance in the speeds is indeed very perplexing.  Did this all work fine at normal speed with v5?

 


I'm not 100% sure; I've only noticed this behavior on V6, but I'm almost certain I would have noticed on V5 if it was the same, as I am somewhat obsessed with parity check speeds.


... as I am somewhat obsessed with parity check speeds.

 

:) :)  [I've noticed]

 

But I'm REALLY glad you have the test gear you do (as I'm sure many others are as well) ... your set of test SSDs allows you to do some excellent testing that most of us couldn't reasonably do because of the time it would take;  and the flexibility of your testing gear is equally impressive.

 

IF, however, this wasn't a problem with v5, that makes it even stranger.    I know there are some controller-related issues that have surfaced with v6, but it's REALLY strange that the drives themselves would cause this inconsistent behavior.

 

Any chance you have an active plugin that may be using the array during the checks, or perhaps a user who's streaming a video ... or the mover emptying the cache?  Any activity on the array slows things down VERY significantly during parity checks (as I'm sure you know); I'm just curious if there's something you may not have thought about (perhaps some new v6 plugin that you hadn't previously used).

 

 


The disks are now in a small backup server; the only plugins installed are powerdown and Dynamix System Stats, and I'm not using the server during parity checks. In fact, except for the monthly parity check, this server is only on for a couple of hours once a week to sync the backups.

 

Note also that as soon as I removed these disks from 3 other servers that have no common hardware with the HP N54L, the issue disappeared.

 

Maybe I’ll try doing the next monthly parity check with V5 just to confirm if it’s a V6 issue.

 

 

