Seagate targeting Q4 for 14TB TDMR Drive release


Tybio


2 hours ago, pwm said:

So maybe I should hold off on switching a couple of 10 TB parity drives to 12 TB, sit tight until later this year, and go directly to 14 TB.

Heh, my OCD wants me to wait for 16TB. I upgraded from 1 -> 2 -> 4 -> 8. Probably confirmation bias, but I can point to issues with some models of 3, 5, and 6 TB drives. I'm in no hurry to replace my 8's. (Yes, I know there are issues with some models of 1, 2, 4, and 8, but shhh, don't tell my OCD.)

Link to comment
2 minutes ago, jonathanm said:

Heh, my OCD wants me to wait for 16TB. I upgraded from 1 -> 2 -> 4 -> 8. Probably confirmation bias, but I can point to issues with some models of 3, 5, and 6 TB drives. I'm in no hurry to replace my 8's. (Yes, I know there are issues with some models of 1, 2, 4, and 8, but shhh, don't tell my OCD.)

 

It's just that I have a couple of 12 TB drives that have been lying around for 4-5 months because I've never taken the time to swap them in as new parity. A bit of a waste to keep them just sitting there. But it would be irritating to spend the time switching, only to buy 14 TB disks a couple of months later and once more have to wait until I find the time to replace parity before I can make use of them.

Link to comment
1 minute ago, pwm said:

 

It's just that I have a couple of 12 TB drives that have been lying around for 4-5 months because I've never taken the time to swap them in as new parity. A bit of a waste to keep them just sitting there. But it would be irritating to spend the time switching, only to buy 14 TB disks a couple of months later and once more have to wait until I find the time to replace parity before I can make use of them.

If you are using dual parity (it sounds like you are), then I don't think the parity swap procedure is risky at all, and it fits that scenario exactly. You just start the array once with the drive removed from your intended target slot, then stop the array, assign the new 14TB drive as one of your parity drives and the replaced parity drive to the target slot. A single operation, with minimal downtime.
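For a rough sense of the rebuild time such a swap implies, here is a back-of-envelope sketch in Python; the ~150 MB/s average is an assumption for a 7200 rpm drive over its whole surface, not a measured figure:

```python
# Rough estimate of how long rebuilding a new parity drive takes.
# The average speed is an assumption; real drives start faster on the
# outer tracks and slow down toward the inner ones.
capacity_tb = 14       # new parity drive
avg_speed_mb_s = 150   # assumed whole-surface average

seconds = capacity_tb * 1e12 / (avg_speed_mb_s * 1e6)
print(f"~{seconds / 3600:.0f} hours per rebuild pass")  # ~26 hours
```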

Link to comment
2 minutes ago, bonienl said:

 

You make it sound like 12TB is already old technology. I wish I had some of those "lying around", but my latest search ended up empty-handed.

 

And that's not what I meant. But the parity size does limit what size of data disks I can buy and add, since no data disk can be larger than the parity drive.

 

Right now, I prefer not to add more 10 TB disks to the array, which means I need to upgrade parity before the next time I grow the array.

 

I was planning to migrate about 100 TB more data to the machine during my vacation, which does mean I need to add quite a lot of disk. If I delay that migration until the Christmas vacation, I can manage with the remaining disk space without buying new disks, until I get the option to decide between 12TB and 14TB disks.

Right now, most of the data is split over four machines, and I want to end up with one machine holding just about everything, a second machine as hot standby with a significant amount of backup data, and offline USB drives for some of the data. So in the end there is lots and lots of disk surface to buy, while the number of spindles and drive connectors per machine matters a lot - I don't have the option of going for one of those extremely noisy 19" cases that take 36 or 48 drives but need jet-engine ventilation to keep everything cool in the summer.

Link to comment
1 hour ago, bonienl said:

I am in the process of replacing 4TB disks with 10TB disks, but it seems disk prices are on the rise again...

 

 

Yes, I have 30+ 4TB drives to replace. But for those, I can settle for 10 TB drives, since on secondary machines I don't have the same strong need to keep the spindle count down. 14+2 10TB disks are enough for the secondary server's array, plus two or three 10TB 2-disk mirrors in some app server or even a workstation.

 

Besides the drives I have already bought that are waiting to be inserted, I need to buy at least 4-6 disks of 12TB or larger for the main server and at least 10 disks of 10TB or larger for the backup server. And 4-8 8TB USB disks for tertiary storage.

 

Disks can quickly add up to quite serious money. The last time I did a somewhat larger disk replacement round, it cost about $5000 for the disks.

Link to comment
39 minutes ago, remotevisitor said:

One issue with these growing disk sizes is that the parity check time is moving into the 2-day timescale. I keep hoping that the ability to break the check into partial runs, so it could be performed overnight across multiple days, becomes available.

 

Yes, it would be good if unRAID could support a setting where x% of the capacity is checked every night. Or a new check started every x days and spread out over a maximum of y hours per night.
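As a sketch of what such a setting would have to schedule, here is a small Python estimate; the array size, check speed, and nightly window are all assumed example values, not unRAID settings:

```python
# Hypothetical planning numbers for a "y hours per night" parity check:
# how much of the array one night covers and how many nights a full
# check would need. All inputs are assumptions.
parity_tb = 12      # check length is set by the parity drive size
speed_mb_s = 150    # assumed average check speed
window_h = 6        # allowed hours per night

tb_per_night = speed_mb_s * 1e6 * window_h * 3600 / 1e12
print(f"{tb_per_night:.2f} TB per night")                    # ~3.24 TB
print(f"{parity_tb / tb_per_night:.1f} nights per full check")  # ~3.7
```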

Link to comment
3 hours ago, remotevisitor said:

One issue with these growing disk sizes is that the parity check time is moving into the 2-day timescale. I keep hoping that the ability to break the check into partial runs, so it could be performed overnight across multiple days, becomes available.

 

That seems long. I just did a parity check on my older-than-dirt 1275v2 with all drives on HBAs, and it took ~18 hours to do 10TB. Granted, I'm using the Seagate 7200rpm drives... but even with 5400s I can't see it taking over twice as long. I went the pure-HBA route since my motherboard has mostly SATA2 ports ;).
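For what it's worth, the ~18 hour figure works out to a plausible average throughput for spinning drives; a quick check:

```python
# Average throughput implied by a 10 TB parity check in ~18 hours.
tb, hours = 10, 18
mb_per_s = tb * 1e12 / (hours * 3600) / 1e6
print(f"~{mb_per_s:.0f} MB/s average")  # ~154 MB/s
```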

Link to comment

You are lucky... I have a mixture of 4TB, 6TB, and 8TB disks (admittedly still on Supermicro SASLP-MV8 cards), which currently takes around 27 hours for a parity check.

 

My brother, who has a similar setup, has improved his times a bit by moving to an LSI card.

 

I am just in the process of upgrading a 4TB data disk to my first 10TB data disk (+ 10TB parity), so I expect my times to increase a bit more, which might finally make me decide to move to an LSI card as well.

 

I have previously had issues with my 6TB disks dropping offline on the SASLP-MV8 cards, which I found I could work around by setting the 6TB disks to never spin down; so a move to an LSI card should remove the need to keep them spinning. This matches an observation made some time ago by Squid (if I remember correctly) that some of the Marvell controller issues appear to be related to specific disk firmware.

Link to comment

I just moved from LSI cards back to the SAS2LP. I had an issue where streaming from one disk would pause for 15-20 seconds when a second disk on the card spun up. I've never had a problem with my Supermicro cards, and I only swapped because I was thinking about running some VMs. In my experience it was a big mistake to try the LSIs; now I have useless cards that cost money :).

 

For reference, on the SAS2LP I'm seeing the same speed as on the LSI: the parity check starts at about 240MB/sec... obviously that's not where it ends!

Link to comment
12 minutes ago, LammeN3rd said:

The parity check on my array with 10TB disks (7.2K) takes about 16 hr 52 min; without the 6TB drives it would be just over 14 hours :)

 

These big drives are quick, as long as your controller can handle it ;)

 

Your results are in line with mine. I still have 3x8TB drives, so the slowdown toward the end of those drives accounts for the extra time on my check. They are still fast drives, but any time you mix different sizes you will get longer checks.

Link to comment
2 hours ago, LammeN3rd said:

The parity check on my array with 10TB disks (7.2K) takes about 16 hr 52 min; without the 6TB drives it would be just over 14 hours :)

 

These big drives are quick, as long as your controller can handle it ;)

 

The disadvantage of the small drives is that they slow the check down at the start, when all drives are involved. And that is exactly where the large drives have their best raw transfer rates - which they can't make use of because of the slower drives.
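A toy model of that effect: over each region of the address space, the check can only run as fast as the slowest drive still participating there. The sizes and per-region speeds below are made-up illustrations, not measurements:

```python
# Why mixed sizes stretch a parity check: each region is paced by the
# slowest drive that still spans it. (size_tb, speed_mb_s) pairs are
# illustrative assumptions; real speeds also fall toward a drive's end.
drives = [(6, 180), (8, 200), (10, 230)]

total_s, pos = 0.0, 0.0
for size, _ in sorted(drives):
    region_tb = size - pos                 # span ending at this drive's size
    pace = min(sp for sz, sp in drives if sz >= size)  # slowest drive present
    total_s += region_tb * 1e12 / (pace * 1e6)
    pos = size
print(f"~{total_s / 3600:.1f} h estimated check time")  # ~14.5 h
```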

Link to comment
  • 2 weeks later...

So when you get a 12TB drive, how long does it take to prepare it for the array? I usually like to run the check 10 times to make sure the drive is working properly, but it sounds like it will take several days just to zero out the drive. Repeating this 10 times would take a month just to upgrade one drive - and that is if you do them one at a time.
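Back-of-envelope for that worry, counting one preclear-style cycle as roughly a full write pass plus a full read pass (the ~180 MB/s average and the two-pass cycle model are assumptions; real preclear tools add extra read phases):

```python
# How long 10 write+read cycles on a 12 TB drive might take at an
# assumed 180 MB/s whole-surface average.
capacity_tb = 12
avg_speed_mb_s = 180
cycles = 10

pass_h = capacity_tb * 1e12 / (avg_speed_mb_s * 1e6) / 3600
print(f"one write+read cycle: ~{2 * pass_h:.0f} h")              # ~37 h
print(f"{cycles} cycles: ~{cycles * 2 * pass_h / 24:.0f} days")  # ~15 days
```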

Link to comment

10 times is a huge number of passes.

 

If each pass takes longer, you should consider reducing the number of passes.

 

In the end, a "good" test keeps the drive busy for a reasonable number of hours of break-in time and then manages one or two write + read passes.
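A minimal sketch of what one such write + read pass could look like, in the spirit of badblocks -w. This is a hypothetical illustration, it is DESTRUCTIVE to the target device, and /dev/sdX and burnin.py are placeholders - run it only on a disk that holds no data:

```python
# One destructive write-then-verify pass over a whole block device.
# Hypothetical sketch: wipes the target; use only on an empty disk.
import sys

CHUNK = 8 * 1024 * 1024          # 8 MiB per I/O
PATTERN = bytes([0xAA]) * CHUNK  # uniform test pattern

def write_read_pass(dev: str) -> None:
    with open(dev, "r+b", buffering=0) as f:
        written = 0
        while True:              # write pass until the device is full
            try:
                n = f.write(PATTERN)
            except OSError:      # no space left: end of device
                break
            if not n:
                break
            written += n
            if n < CHUNK:        # short write also means end of device
                break
        f.seek(0)
        checked = 0
        while checked < written: # read pass, verifying the pattern
            block = f.read(min(CHUNK, written - checked))
            if not block:
                raise RuntimeError(f"short read at byte offset {checked}")
            if block != PATTERN[:len(block)]:
                raise RuntimeError(f"mismatch at byte offset {checked}")
            checked += len(block)
    print(f"verified {written} bytes on {dev}")

if __name__ == "__main__":
    write_read_pass(sys.argv[1])  # e.g. python burnin.py /dev/sdX
```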

Link to comment
