Array of all 2.5-inch drives / 4 TB or 5 TB Seagates?



It will soon be more cost effective, and it is already more space efficient, to build our arrays with 2.5-inch drives.  I haven't taken the plunge yet, but I have a case ready with 2.5-inch hot-swap bays and may try it soon.  As long as your case accepts drives up to 15 mm high, you can take advantage of these tiny 4 TB and 5 TB monsters.

 

Anybody have experience with these little 4 TB or 5 TB Seagates?

 

https://www.amazon.com/Seagate-Barracuda-2-5-Inch-Internal-ST4000LM024/dp/B01LZMUNGR/ref=dp_ob_title_ce


I don't think it will be more cost effective for an entire server to use 2.5-inch drives, because they still cost more per GB.  I am using the 2 TB version as my cache drive (the only slot I had left in my case was a 2.5-inch slot). It is great: no issues, and it survived a three-cycle preclear.

I can consistently buy low-power 4 TB 3.5-inch drives for under $90. Over the life of the drive, I don't think the difference in electricity cost will make up the $40 difference in price.

But that's just my opinion. When 2.5-inch drives start showing up in 8 TB+ sizes, I will switch.

6 hours ago, whipdancer said:

I don't think it will be more cost effective for an entire server to use 2.5-inch drives, because they still cost more per GB.

 

Sorry, I said "soon" but didn't define "soon".  Yes, I agree that today it is still cheaper to go with 3.5-inch drives.  But the WAF (Wife Acceptance Factor) definitely leans towards the much smaller 2.5-inch solutions.

 

It seems most of us are still sticking with 3.5-inch at present?


Bear in mind that these drives use SMR technology to achieve their impressive capacity. They hide it quite well with a large (volatile) RAM cache and a persistent cache that is implemented in NAND flash rather than using a non-shingled band on the platters, which reduces the amount of head seeking enormously when emptying the persistent cache.
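The destaging behaviour described above can be sketched as a toy model. Everything here is an illustrative assumption - the class, the cache size and the fast/slow split are not Seagate's actual (proprietary) firmware logic:

```python
# Toy model of an SMR drive's persistent write cache (illustrative only;
# real drive firmware is proprietary and far more sophisticated).

class SMRDrive:
    def __init__(self, cache_mb):
        self.cache_mb = cache_mb      # size of the NAND persistent cache
        self.cache_used = 0

    def write(self, size_mb):
        """Writes land in the NAND cache first; once it fills, the drive
        must destage to shingled bands and throughput drops sharply."""
        if self.cache_used + size_mb <= self.cache_mb:
            self.cache_used += size_mb
            return "fast (cached)"
        return "slow (destaging to shingled bands)"

    def idle(self):
        # During idle time the firmware empties the cache into SMR bands.
        self.cache_used = 0

drive = SMRDrive(cache_mb=100)
print(drive.write(60))   # fast (cached)
print(drive.write(60))   # slow (destaging to shingled bands)
drive.idle()
print(drive.write(60))   # fast (cached)
```

The point of the NAND (rather than a shingled band on the platters) is that the destage step doesn't compete with the heads for seeks, which is exactly the seek reduction mentioned above.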

Bear in mind that these drives use SMR technology to achieve their impressive capacity. They hide it quite well with a large (volatile) RAM cache and a persistent cache that is implemented in NAND flash

Does this mean that they work better than the Seagate 8 TB Archive drives? Those are quite good.
1 hour ago, tr0910 said:

Does this mean that they work better than the Seagate 8 TB Archive drives? Those are quite good.

 

I agree that the Seagate Archive drives are pretty good, and I was hoping that by now Seagate would have launched other, higher-capacity Archive drives with NAND-based persistent cache. They are certainly better than the (also SMR-based) BarraCuda Compute drives found in their external USB cases. Instead, they seem to be using the technology to make high-capacity laptop drives possible. I have a couple of 2.5-inch 4 TB Seagate Expansion drives (external, USB) that I use for Time Machine backups, and they are perfectly fine in that role. I also have several of the 2 TB version, some of which I use for other backup purposes and two of which I've shucked and used to breathe new life into a pair of white MacBooks - you can tell they are ageing because they actually have space for a hard disk and an optical drive. I replaced the optical drives with SSDs and combined the 240 GB SSD and the 2 TB HDD into a DIY Apple Fusion Drive, and the combination works brilliantly.

 

So, yes, they seem pretty good. It's almost as though Seagate is embarrassed about SMR technology - but that's understandable, as they have received a lot of criticism from people who are ignorant of the technology. At the moment you can buy an 8 TB IronWolf for around the same price as an 8 TB Archive, while a helium-filled 8 TB WD Red is just a little more. On that basis there's no point in buying the Archive, because it has lost its one advantage - high capacity at low cost. You can now get 10 TB, 12 TB and even 14 TB non-SMR drives (at a price!), so it's disappointing that 16 TB and 20 TB Archive drives haven't been launched yet.


But nobody has done an in-depth unRAID evaluation of the 4 TB and 5 TB 2.5-inch drives yet?

 

I wonder if they are being used in enterprise cloud storage for archival purposes. Backblaze could certainly increase the density of their storage pods with these drives.

 

The fact that no 16 or 20 TB 3.5-inch drives exist may already point to a tipping point having been reached.  The end of the 3.5-inch drive?

On 4/1/2018 at 1:45 PM, John_M said:

So, yes, they seem pretty good. It's almost as though Seagate is embarrassed about SMR technology

It is rather complicated to implement - if they cache to flash, they run into wear problems. Look at the total-wear figures for a normal SSD, then consider the total wear a much smaller flash cache would have to handle if you assume the user will rewrite the drive's full capacity a number of times during its lifetime.

 

Much of the wear can be reduced by also making use of RAM - but then either the RAM must be small enough that all of its content can be moved to flash after a power loss, or you need SRAM and a supercap to keep the data in RAM for months until the drive is powered again. And what if the supercap can handle 3 months but the user leaves the unit unpowered for 6 months? So RAM-based caches work best when the cache is relatively small, or in server-class equipment where you can expect the system to be powered up again relatively quickly (like the RAM cache in better RAID controller cards).
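The wear concern above is easy to put rough numbers on. A back-of-envelope sketch - every figure below is an assumption chosen for illustration, not a published specification:

```python
# Back-of-envelope flash-wear estimate for an SMR drive's NAND cache.
# All figures are illustrative assumptions, not vendor specifications.

drive_tb = 5            # drive capacity, TB
cache_gb = 8            # hypothetical NAND persistent cache size, GB
full_rewrites = 50      # full-capacity rewrites over the drive's life
pe_cycles = 3000        # typical endurance budget for TLC NAND, P/E cycles

# Worst case: assume every host write passes through the NAND cache.
total_writes_gb = drive_tb * 1000 * full_rewrites
cache_cycles = total_writes_gb / cache_gb

print(f"Data funnelled through cache: {total_writes_gb} GB")
print(f"P/E cycles on the cache:      {cache_cycles:.0f}")
print(f"Within NAND endurance budget: {cache_cycles <= pe_cycles}")
```

With these assumed numbers the small cache would see over 31,000 P/E cycles - an order of magnitude past the assumed endurance - which is why the firmware must bypass the flash cache for large sequential writes rather than funnelling everything through it.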

 

On 4/2/2018 at 2:07 AM, tr0910 said:

The fact that no 16 or 20 TB 3.5-inch drives exist may already point to a tipping point having been reached.  The end of the 3.5-inch drive?

If you look at the release date of the 10 TB drives and how the manufacturers have since continued with larger sizes, it simply hasn't been time for 16 or 20 TB drives yet.

 

For a number of years, disk capacities have grown by almost a factor of two every two years. But over a longer time span the actual growth rate has varied, because the growth has required very specific technological breakthroughs, and because varying economic conditions have affected market demand.
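The "factor of two every two years" rule of thumb is easy to extrapolate, and the extrapolation shows why the real growth rate must have varied. A naive projection, assuming the 10 TB drives as a 2016 baseline (the baseline year is an assumption):

```python
# Naive capacity extrapolation from an assumed 2016 baseline of 10 TB,
# doubling every two years. A rule-of-thumb projection, not a roadmap.

capacity_tb, year = 10, 2016
for _ in range(4):
    year += 2
    capacity_tb *= 2
    print(f"{year}: {capacity_tb} TB")
```

This prints 20 TB for 2018 and 160 TB for 2024 - far ahead of anything actually shipping, which is exactly the point: the growth depends on specific technological breakthroughs arriving on schedule, and they don't.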

 

10+ TB disks aren't standard disks for laptops or desktops, and end-user backup/archive requirements aren't really a driving factor for the HDD manufacturers. So the open question is what the main economic factors are for enterprise large-scale storage, where much greater weight is put on storage density, power density, connector density and cost per TB-year.

 

You can fit many more 2.5" drives in the same-size rack case, but there is a limit to the number of drives that can be hot-swapped from the front or back. So there will be a challenge as more and more storage servers move to top-mounted drives that can't be hot-swapped: redundancy then has to be handled through spare drives and through clustered servers, where a server runs until a large percentage of its drives are bad, is then powered off and has every drive replaced, while the other servers in the cluster continue to serve the redundant file data from other disks.
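Some rough density arithmetic for the rack argument. The bay counts below are typical vendor figures assumed for illustration; the capacities are the largest sizes mentioned in this thread:

```python
# Rough front-bay density comparison for a 2U chassis.
# Bay counts are assumed typical figures, not a specific product.

bays_35 = 12      # 3.5" bays in a front-loading 2U chassis
bays_25 = 24      # 2.5" bays in the same form factor

tb_per_35 = 14    # largest 3.5" drive mentioned in this thread
tb_per_25 = 5     # largest 2.5" drive mentioned in this thread

print(f'3.5": {bays_35 * tb_per_35} TB per 2U')
print(f'2.5": {bays_25 * tb_per_25} TB per 2U')
```

With these assumed numbers the 2.5" chassis wins on spindle count (which helps IOPS) but still loses on raw capacity per rack unit - consistent with the thread's sense that the tipping point hasn't arrived yet.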

10 hours ago, pwm said:

It is rather complicated to implement - if they cache to flash, they run into wear problems. Look at the total-wear figures for a normal SSD, then consider the total wear a much smaller flash cache would have to handle if you assume the user will rewrite the drive's full capacity a number of times during its lifetime.

The 2.5" SMR drives have a three-tier cache: DRAM, then flash, and finally a PMR zone.

5 hours ago, johnnie.black said:

The 2.5" SMR drives have a three-tier cache: DRAM, then flash, and finally a PMR zone.

 

Yes, they need to implement a classic multi-tier solution and then figure out how large the different tiers must be to work well without becoming too costly, without wearing out the flash, and without running out of cache space too often and having the write speed drop. And the flash must always have enough free space that a capacitor or supercap can quick-dump the DRAM data to flash in case of power loss.
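The power-loss dump is easy to put rough numbers on. Both figures below are assumptions for illustration:

```python
# Rough estimate of how long a power-loss dump of DRAM to flash takes.
# DRAM size and flash write speed are assumed, illustrative figures.

dram_mb = 128            # hypothetical on-drive DRAM cache
flash_write_mb_s = 200   # hypothetical sustained NAND write speed

flush_s = dram_mb / flash_write_mb_s
print(f"Flush time: {flush_s * 1000:.0f} ms")
```

A flush measured in hundreds of milliseconds is what makes a supercap feasible at all; holding the data in RAM for months, as discussed earlier in the thread, is a completely different engineering problem.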

 

And your earlier benchmark indicates that they probably use dynamic remapping of tracks. Did you post a newer benchmark after filling the drive?

 

SSDs, by the way, also need multi-tier logic with DRAM and flash to handle the indexing of the flash block remapping.


A number of file operations produce many small writes that don't work well with flash blocks on an SSD or with the cache on a shingled drive.

 

An interesting paper about the amount of work involved in even quite trivial disk updates:

https://arxiv.org/pdf/1707.08514.pdf

 

Note, for example, that a file rename on BTRFS followed by an fsync can result in 648 kB being written to the disk. Alas, the report doesn't mention the actual number of writes, so we can't compute the average block size the drive gets hit with.
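That 648 kB-per-rename figure translates into surprising write volumes. A quick calculation - the renames-per-day figure is an assumption chosen for illustration:

```python
# What 648 kB of writes per rename+fsync implies over a day.
# The renames-per-day figure is an illustrative assumption.

bytes_per_rename = 648 * 1024
renames_per_day = 100_000        # e.g. a busy build or mail server

daily_gb = bytes_per_rename * renames_per_day / 1e9
print(f"Metadata writes per day: {daily_gb:.1f} GB")
```

Around 66 GB a day of pure metadata traffic from one operation type - the kind of small-write churn that hits an SMR drive's persistent cache (and an SSD's flash blocks) hardest.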

 

Caches are really extremely important for most types of storage block devices.

