
SSD overprovisioning on Array drive?


craigr


I use one SSD drive in my array, and it works perfectly well with parity because it supports "Deterministic read data after TRIM" ("DX_TRIM").  However, my current 64GB SSD drive has gotten too small for my needs, so I just ordered a "new" Samsung 840 PRO.  Limetech says they are likely to support TRIM on array drives in the future.  However, Limetech also says that they may only support "Deterministic read zeros after TRIM."

 

The reason I went for the 840 PRO is because it's the only drive I could find that I was sure supports "DZ_TRIM" mode and I want to be as certain as possible that my SSD will support unRAID array TRIM in the future.  My question is, is it necessary (or good practice), and is it possible, to overprovision my new SSD drive?  When I replace the old small drive with the new larger drive unRAID will automatically want to format it and rebuild the array.  Is there any way that I can make unRAID leave 10% of the drive unformatted so that the drive can utilize overprovisioning?

 

Here is a link to Limetech about TRIM on array drives:

 

 

Thanks for any help!

Kind regards,

craigr

 


I don't think you can get real overprovisioning if you haven't bought an SSD with a controller designed for overprovisioning.

 

Overprovisioning isn't just that the SSD has more flash blocks than are visible in the file system. Overprovisioning means the drive can move flash blocks in and out of the overprovisioning pool as old data gets overwritten and new data needs erased flash blocks for storage.

 

Leaving a small part of the drive outside of the main partition can still give an advantage, since the full LBA range can't be addressed and written to, so the drive gets some additional choices for wear leveling. But the actual extent of the advantage depends on how the specific wear-leveling logic works. To measure it, you would need to fill your 90% partition completely, then perform overwrites of existing data and see how much slowdown you get compared to a disk that makes use of 100% of the capacity.
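The "leave part of the drive unpartitioned" approach can be sketched with a bit of shell arithmetic. The device name is a placeholder, and the sector count is just the example value that appears later in this thread; nothing here touches a real disk (the `parted` command is only echoed):

```shell
# Sketch: size a partition to 90% of the drive so ~10% of the LBA range
# is never written. /dev/sdX and the sector count are placeholders.
TOTAL=234441648                    # sectors, as reported by `hdparm -N`
END=$(( TOTAL * 90 / 100 ))        # last sector of a 90% partition
echo "parted /dev/sdX mklabel gpt mkpart primary 2048s ${END}s"
```

Remove the `echo` only if you really intend to repartition the drive.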

 

With a bit of luck, the specific drive may take good advantage of your forced 90% fill limit.

On 3/20/2018 at 4:21 PM, pwm said:

I don't think you can get real overprovisioning if you haven't bought an SSD with a controller designed for overprovisioning.

 

Overprovisioning isn't just that the SSD has more flash blocks than are visible in the file system. Overprovisioning means the drive can move flash blocks in and out of the overprovisioning pool as old data gets overwritten and new data needs erased flash blocks for storage.

As I understand it, the Samsung 840 PRO has the necessary provisions in its firmware to use unallocated space on the disk for overprovisioning.  The installer simply has to determine how much space is warranted for overprovisioning and then leave that space unallocated.  On the Samsung 840 the recommended amount is 10%.

 

I have read that in Windows this is all that the Samsung "Magician" tool does.  It suggests that you leave some "X" amount of space unallocated, and then the hard drive controller firmware already knows what to do with it for wear leveling.

 

I may write Limetech directly about this.  It seems there must be a way to instruct unRAID to not allocate the entire disk during a new disk rebuild when using a larger disk in the old disk's place.  I also need to make sure this won't have a deleterious effect on parity...

 

Thanks for your input,

craigr

On 3/20/2018 at 1:29 PM, johnnie.black said:

It should work by creating an HPA.

 

 

Neat idea, but I don't think my Supermicro board supports this.  Also, I want to leave 10% of the disk unallocated and I don't think HPA will reserve that much space.  I had an HPA board once and I think that it only consumed a couple megabytes.

 

craigr

2 hours ago, craigr said:

Neat idea, but I don't think my Supermicro board supports this.  Also, I want to leave 10% of the disk unallocated and I don't think HPA will reserve that much space.  I had an HPA board once and I think that it only consumed a couple megabytes.

 

craigr

It's not a board thing, it's a parameter sent to the drive with the correct set of commands, and yes, you can allocate ANY amount to HPA, even to the point of consuming all of the space, leaving only a few MB able to be partitioned. The space reserved by HPA, whether done by a rogue Gigabyte BIOS or manually set using the commands, is simply not reported to the OS as available. When you tell the OS to fill the drive with a single partition, that partition will be only as large as the HPA allows, and still report 100% filled. The rest of the space is simply not there as far as the OS is concerned.

3 hours ago, jonathanm said:

It's not a board thing, it's a parameter sent to the drive with the correct set of commands, and yes, you can allocate ANY amount to HPA, even to the point of consuming all of the space, leaving only a few MB able to be partitioned. The space reserved by HPA, whether done by a rogue Gigabyte BIOS or manually set using the commands, is simply not reported to the OS as available. When you tell the OS to fill the drive with a single partition, that partition will be only as large as the HPA allows, and still report 100% filled. The rest of the space is simply not there as far as the OS is concerned.

Well, regardless, my Supermicro board does not have it.

 

Thanks for the help.

 

craigr

 

 


I'm thinking the best thing to do may be to create a new config in unRAID.  I used "Unassigned Devices" to create an XFS partition on the new Samsung SSD.  I then used cfdisk to resize the partition, leaving a bit over 10% overprovisioned.  I copied all the contents from my existing SSD drive in the array to the new drive in "Unassigned Devices."

 

When I created this array, I stupidly assigned my 64GB SSD to disk15, which at the time was my last disk... now I have a 64GB disk in the middle of my array.

 

I am planning to copy the unRAID flash drive with my current config as a backup.  I'll then create a new config.  I have extra 8TB drives on hand, so I will remove the existing parity drive and swap it with a new one.  I'll pull the old SSD and install the new one.  I'll then assign the new 8TB drive as parity, assign my new SSD as disk 1, and assign the rest of my data disks while making sure my cache drive is the cache drive again.  At that point I can rebuild parity with my overprovisioned new SSD.

 

If anything goes wrong during parity rebuild, I can reinstall the original parity drive and SSD drive, and restore the unRAID flash backup... I think that will all work?

 

I just hope overprovisioning doesn't disrupt parity or I will be going back to rebuilding with the new SSD without overprovisioning.

 

craigr

5 hours ago, craigr said:

Well, regardless, my Supermicro board does not have it.

 

Thanks for the help.

 

craigr

 

 

It's not set in the board BIOS, it's set on the command line with hdparm, and you choose at which sector the HPA starts, so you can set the HPA to any size you want, like 10%, 20%, 50%, etc.

 

 

7 hours ago, johnnie.black said:

It's not set in the board BIOS, it's set on the command line with hdparm, and you choose at which sector the HPA starts, so you can set the HPA to any size you want, like 10%, 20%, 50%, etc.

 

 

Oh.  I must have obviously been missing something!  I may look into this more now.  It would be nice to not have to totally redo parity and a lot of other stuff... clearly.

 

Thanks for the help guys.

 

craigr

11 hours ago, craigr said:

I'm thinking the best thing to do may be to create a new config in unRAID.  I used "Unassigned Devices" to create an XFS partition on the new Samsung SSD.  I then used cfdisk to resize the partition with a bit over 10% overprovisioned.  I copied all the contents from my existing SSD drive in the array to the new drive in "Unassigned Devices."

FYI, if, as I understand, you plan to add this disk to the array, it won't work: unRAID requires that the partition extend to the end of the device, so the disk will appear unmountable with an invalid partition.

 

17 minutes ago, craigr said:

Oh.  I must have obviously been missing something!  I may look into this more now.  It would be nice to not have to totally redo parity and a lot of other stuff... clearly.

Note that to create an HPA it's best to use the onboard controller, as some HBAs won't accept the command; also note that some devices won't accept an HPA, i.e. the size will remain the same, but it's easy to try:

 

1) check the number of sectors, e.g.:

root@Test:~# hdparm -N /dev/sdg

/dev/sdg:
 max sectors   = 234441648/234441648, HPA is disabled

2) create HPA at 90% size:

root@Test:~# hdparm -N p210997483 --yes-i-know-what-i-am-doing /dev/sdg

/dev/sdg:
 setting max visible sectors to 210997483 (permanent)
 max sectors   = 210997483/234441648, HPA is enabled

3) reboot and hopefully the new size will stick.

 

For example, I'm using this in one of my backup servers with all 2TB disks. I already had a spare 3TB disk, so I changed it to 2TB as well; since I don't plan to have any other >2TB disks on this server anytime soon, this way I avoid parity checking 3TB instead of 2TB.
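The arithmetic behind step 2 can be reproduced directly: a 90% HPA means 90% of the max sector count from `hdparm -N` becomes the new visible size. The sector count below is the example value from the listing above, and `/dev/sdX` is a placeholder; the destructive command is only echoed here:

```shell
# Derive the `hdparm -N p<sectors>` argument for a 90% HPA from the
# drive's reported max sector count (example value from this thread).
MAX=234441648
VISIBLE=$(( MAX * 90 / 100 ))      # sectors left visible to the OS
echo "hdparm -N p${VISIBLE} --yes-i-know-what-i-am-doing /dev/sdX"
```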

 


 

 


Hmmm... but will an HPA partition be usable for overprovisioning by the drive?  Could I delete the HPA partition after the array is set up and parity is finished?  Or maybe if I just create the HPA partition and don't format it?

 

If I later delete the HPA partition, based on what you say I think it likely that unRAID will then see the disk as unmountable and want me to format it using the entire disk...

 

Thanks for all the help!

craigr

1 minute ago, craigr said:

Hmmm... but will an HPA partition be usable for overprovisioning by the drive?

It should be, but can't be sure.

 

2 minutes ago, craigr said:

Could I delete the HPA partition after the array is set up and parity is finished?

Yes, but you'll need to rebuild the disk.

 

2 minutes ago, craigr said:

If I later delete the HPA partition, based on what you say I think it likely that unRAID will then see the disk as unmountable and want me to format it using the entire disk...

No, it will see it as the wrong disk because of the new size, but since it will be larger you can rebuild.

1 hour ago, johnnie.black said:

It should be, but can't be sure.

 

Yes, but you'll need to rebuild the disk.

 

No, it will see it as the wrong disk because of the new size, but since it will be larger you can rebuild.

...but that defeats the purpose then, doesn't it ;-)

 

Thanks again,

craigr

2 hours ago, johnnie.black said:

For the overprovisioning to work (if it does work with this) you'll need to leave the HPA enabled; I was describing what happens if you remove it.

 

This link describes that using HPA is indeed the way to overprovision an SSD; however, read the section on that page under Preparation, where he states it's also necessary to "delete" all the blocks first, which can be done with the blkdiscard command.  YMMV
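For reference, blkdiscard tells the controller that every block is unused, so the area later hidden by the HPA starts out fully erased. The commands below are only echoed because they are destructive, `/dev/sdX` is a placeholder, and the sector count is just the example value used earlier in this thread:

```shell
# blkdiscard marks blocks as unused before the HPA is created.
DEV=/dev/sdX                         # placeholder -- substitute your SSD
echo "blkdiscard ${DEV}"             # whole-device discard
# Per-range form: discard only the first 210997483 sectors (512 B each).
BYTES=$(( 210997483 * 512 ))
echo "blkdiscard --offset 0 --length ${BYTES} ${DEV}"
```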

 

On 3/20/2018 at 11:16 AM, craigr said:

However, Limetech also says that they may only support "Deterministic read zeros after TRIM."

 

Yes, DZAT support would be necessary in order to properly update parity.  An optimization as part of this would be to zero-check the data being written to parity and, instead of writing zeros, issue a TRIM command.  This support is planned but not implemented at this time.
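The zero-check idea can be illustrated with a tiny shell sketch. This is only an illustration of the concept, not unRAID's actual parity code, and the helper function and file names are made up for the demo:

```shell
# If a block about to be written is all zero bytes, a TRIM (discard)
# could replace the write. is_all_zeros succeeds when the file contains
# only NUL bytes (tr strips NULs; anything left means real data).
is_all_zeros() {
  [ "$(tr -d '\0' < "$1" | wc -c)" -eq 0 ]
}

printf '\0\0\0\0\0\0\0\0' > /tmp/zero_block.bin
printf 'not-zero' > /tmp/data_block.bin

is_all_zeros /tmp/zero_block.bin && echo "all zeros: issue TRIM instead"
is_all_zeros /tmp/data_block.bin || echo "has data: write normally"
```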

 

BTW, does anyone know the current industry trend in supporting DZAT?  For example, Samsung supports it on 840 PRO but not 850 PRO.  Is the industry moving away from this?

 


Thank you so much.  That was very easy and it worked perfectly.  I now have a Samsung 840 PRO 256GB that shows 230GB in Unassigned Devices.  Now all I have to do is swap out the old SSD and rebuild.  Awesome!

 

Do you mind if I share your instructional post on the public forum?

 

5 hours ago, limetech said:

BTW, does anyone know the current industry trend in supporting DZAT?  For example, Samsung supports it on 840 PRO but not 850 PRO.  Is the industry moving away from this?

 

I certainly do not know. I found it impossible to even discover what drives support DZAT.  I did a lot of googling and the only reference I found was a guy on a random forum stating that his 840 PRO supports DZAT and that his 850 PRO does not.  He wanted to know if Samsung had any plans to support DZAT on the 850 PRO in the future.  I would like to know if Samsung supports DZAT on the 860 or 960 as well.

 

Based on that one piece of information I rolled the dice and bought the 840 PRO on eBay for $65 with only 31 days of use and about 1TB written.  I probably saved myself some money anyway versus buying a newer drive ;-) 

 

I really expected to find a compiled list of known drives that support DZAT someplace, but I uncovered nothing of the sort.  I would have liked to buy a more modern drive, but I didn't want to get stuck with a drive that did not support DZAT.  I did buy an "Inland Professional 240 GB" budget drive at Microcenter for $50 and checked it.  The drive did not support "DZ_TRIM" or "DX_TRIM", so I returned it.

 

However, one of the difficulties in searching is what to search for: DZAT, DZ_TRIM, Deterministic read zeros after TRIM...
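For what it's worth, `hdparm -I` reports the drive's TRIM determinism directly, which sidesteps the naming confusion. A small helper can classify the output; the sample lines below mimic real `hdparm -I` identify output, `trim_mode` is a made-up helper name, and `/dev/sdX` is a placeholder:

```shell
# Classify a drive's TRIM behaviour from `hdparm -I` output on stdin.
trim_mode() {
  local out; out=$(cat)
  if printf '%s' "$out" | grep -qi 'deterministic read zeros after trim'; then
    echo "DZAT"
  elif printf '%s' "$out" | grep -qi 'deterministic read data after trim'; then
    echo "DRAT"
  else
    echo "non-deterministic or no TRIM support"
  fi
}

# Real use (as root): hdparm -I /dev/sdX | trim_mode
# Demo with sample identify output:
printf '%s\n' '   *    Data Set Management TRIM supported (limit 8 blocks)' \
              '   *    Deterministic read ZEROs after TRIM' | trim_mode
```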

 

I suspect if you build unRAID to support DZAT the community will quickly compile a list.  That said, surely there are other situations besides unRAID where this information must be required?

 

Thanks again,

Craig Rounds

On 3/20/2018 at 1:29 PM, johnnie.black said:

It should work by creating an HPA.

 

 

 

On 3/22/2018 at 6:12 PM, jonathanm said:

It's not a board thing, it's a parameter sent to the drive with the correct set of commands, and yes, you can allocate ANY amount to HPA, even to the point of consuming all of the space, leaving only a few MB able to be partitioned. The space reserved by HPA, whether done by a rogue Gigabyte BIOS or manually set using the commands, is simply not reported to the OS as available. When you tell the OS to fill the drive with a single partition, that partition will be only as large as the HPA allows, and still report 100% filled. The rest of the space is simply not there as far as the OS is concerned.

OK, you guys were totally correct.  I messaged Limetech directly about this and he sent me links with detailed instructions on how to get overprovisioning to work based on these methods.  It looks like he posted the details to this thread already ;)  Here are the two paramount links:

 

You must first perform a secure erase:

https://www.thomas-krenn.com/en/wiki/SSD_Secure_Erase

 

Then use hdparm:

https://www.thomas-krenn.com/en/wiki/SSD_Over-provisioning_using_hdparm
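For reference, the sequence from those two links boils down to something like the following. The commands are only echoed because they are destructive; `PASSWD` and `/dev/sdX` are placeholders, the sector count is the example value from earlier in this thread, and per the Thomas-Krenn guide the drive must not be in a "frozen" state (check `hdparm -I` first):

```shell
# ATA secure erase per the Thomas-Krenn guide: set a temporary user
# password, issue the erase, then create the HPA. Echoed, not executed.
DEV=/dev/sdX
echo "hdparm --user-master u --security-set-pass PASSWD ${DEV}"
echo "hdparm --user-master u --security-erase PASSWD ${DEV}"
# Afterwards, create the HPA (example 90% figure from this thread):
echo "hdparm -N p210997483 --yes-i-know-what-i-am-doing ${DEV}"
```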

 

I now have a 256GB 840PRO that shows 230GB in Unassigned Devices :)

 

Very kind regards,

craigr

12 hours ago, johnnie.black said:

FYI, if, as I understand, you plan to add this disk to the array, it won't work: unRAID requires that the partition extend to the end of the device, so the disk will appear unmountable with an invalid partition.

 

Note that to create an HPA it's best to use the onboard controller, as some HBAs won't accept the command; also note that some devices won't accept an HPA, i.e. the size will remain the same, but it's easy to try:

 

1) check the number of sectors, e.g.:


root@Test:~# hdparm -N /dev/sdg

/dev/sdg:
 max sectors   = 234441648/234441648, HPA is disabled

2) create HPA at 90% size:


root@Test:~# hdparm -N p210997483 --yes-i-know-what-i-am-doing /dev/sdg

/dev/sdg:
 setting max visible sectors to 210997483 (permanent)
 max sectors   = 210997483/234441648, HPA is enabled

3) reboot and hopefully the new size will stick.

 

For example, I'm using this in one of my backup servers with all 2TB disks. I already had a spare 3TB disk, so I changed it to 2TB as well; since I don't plan to have any other >2TB disks on this server anytime soon, this way I avoid parity checking 3TB instead of 2TB.

 


 

 

This seems correct, but you must perform a secure erase or blkdiscard first.  I'm only posting this so others in the future will not be confused.

 

Link to secure erase procedure:

https://www.thomas-krenn.com/en/wiki/SSD_Secure_Erase

 

Thanks again,

craigr


Archived

This topic is now archived and is closed to further replies.
