
(SOLVED) Encrypted Cache SSD incredibly slow



Hey guys,

my encrypted SSD got really slow after a couple of days (I'm talking 20-30 MB/s writes vs. over 200 MB/s unencrypted).

I assume this is because it's not possible to TRIM encrypted SSDs.

The only other thread concerning the trimming of encrypted devices is this one: SSD Trim not working, incompatible with encryption?

So either I'm doing something wrong, or this is a really uncommon use case.


Any recommendations on SSDs that work particularly well without TRIM? Or SSDs to avoid? Or do you use HDDs if you want to encrypt your cache?


It's possible to work around this protection: https://wiki.archlinux.org/index.php/Dm-crypt/Specialties#Discard.2FTRIM_support_for_solid_state_drives_.28SSD.29

But I don't get how I would do this in unRAID, or what the impact on security would be.


Further Information on how TRIM affects encryption security: http://asalor.blogspot.com/2011/08/trim-dm-crypt-problems.html




Any SSD that ships with overprovisioning as standard, or where you can create a slightly smaller partition and let the SSD use the remaining blocks as an overprovisioning pool, will keep its write speeds even without TRIM.


Next thing - it is possible to TRIM encrypted SSDs. It's just that trimming them will leak information about which parts of the disk contain data and which parts are empty. But that is normally not a big issue. So the issue here is the ability to configure unRAID to allow TRIM. Trimming doesn't break the encrypted file system and doesn't leak the content of files - just the pattern of used/unused disk.
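For reference, on a plain Linux system the discard passthrough from the Arch wiki link above is enabled per mapping when the container is opened. This is only a sketch - /dev/sdb1 and "cache" are example names, and unRAID's own encryption handling may differ:

```shell
# Open the LUKS container with discard requests passed through
# to the SSD ("/dev/sdb1" and "cache" are example names)
cryptsetup open --allow-discards /dev/sdb1 cache

# Check that the mapping really carries the flag
dmsetup table cache | grep allow_discards

# With the filesystem mounted, TRIM now reaches the drive
fstrim -v /mnt/cache
```

After that, a periodic fstrim (or mounting with the discard option) keeps the drive informed about free blocks.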


Thanks, overprovisioning was what I needed. You can create a "Host Protected Area" (HPA) on your SSD, so that unRAID doesn't include this area in the partition.

I hope this will keep my SSD performance up.

I used those sources for the instructions: 
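Roughly, the hdparm part looks like this (a sketch - the sector counts are examples for a 480GB drive and /dev/sdX is a placeholder; a wrong -N value can shrink the visible drive, so double-check your numbers):

```shell
# Show current and native max sector count of the drive
hdparm -N /dev/sdX

# Example: a 480GB drive reports 937703088 sectors.
# Reserving ~10% leaves 843932779 visible sectors; the rest
# becomes a Host Protected Area the OS never partitions.
# "p" makes the setting persist across power cycles.
hdparm -N p843932779 --yes-i-know-what-i-am-doing /dev/sdX

# Power-cycle the drive, then verify the new visible size
hdparm -N /dev/sdX
```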

1 minute ago, Fenix said:

You can create a "Host Protected Area" (HPA) on your SSD, so that unRAID doesn't include this area in the partition.


Remember to do a full disk erase before creating your HPA - it's important that the drive knows the sectors in the HPA are empty, or it won't be able to use them for overprovisioning.
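For reference, the usual ATA secure erase sequence looks roughly like this (a sketch - "p" is just an example password, /dev/sdX is a placeholder, the drive must not be in the "frozen" state, and this destroys all data on it):

```shell
# The drive must report "not frozen" for the next steps to work
hdparm -I /dev/sdX | grep -i frozen

# Set a temporary user password (required before erasing)
hdparm --user-master u --security-set-pass p /dev/sdX

# Issue the erase; afterwards every block is marked empty
hdparm --user-master u --security-erase p /dev/sdX
```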

10 minutes ago, Fenix said:


Just for clarification, is a full disk erase the same as a secure erase as mentioned in this link: https://www.thomas-krenn.com/en/wiki/SSD_Secure_Erase ?


Yes - it marks every single block as empty. And since the drive will never see any writes into the HPA region, it can pick up blocks from this region and remap them into the used range, while erasing old blocks from the used area and remapping them back to the HPA region. Ergo - a pool of pre-erased blocks constantly available when needed.

  • 5 months later...

I know this is really old and already marked as solved, but I think this might be good info for someone with similar problems, e.g. slow reads/writes even though the device is overprovisioned.


The procedure described in this post works fine, but my SSD performance was still not great. So I finally sat down and googled how many IOPS my SSD can handle. Turns out this specific SSD only manages "496 random write IOPS at QD1 for the 240GB SanDisk SSD Plus". I have the 480GB model, which I think is basically the same: same controller, just a different count or size of flash chips. At least that's my conclusion, and it seems to be true - a friend of mine has a "Samsung 850 Pro 512GB" and has none of my issues after overprovisioning his device.
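If you want to check your own drive's QD1 random write performance, fio can measure it (a sketch - the file path and sizes are examples, and testing a file on the mounted cache measures the whole stack including encryption, not just the bare SSD):

```shell
# 30 seconds of 4K random writes at queue depth 1
fio --name=qd1-randwrite --filename=/mnt/cache/fio.test \
    --rw=randwrite --bs=4k --iodepth=1 --ioengine=libaio \
    --direct=1 --size=1G --runtime=30 --time_based \
    --group_reporting

# Clean up the test file afterwards
rm /mnt/cache/fio.test
```

The "write: IOPS=..." line in the output is the number to compare against the spec sheet.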


Also, don't think SanDisk makes bad SSDs - that's just not true. The "SanDisk Ultra II SSD" does above 80K IOPS in all sizes, which is quite frankly really good and would only be a problem for insane workloads; but I think those folks should look at more sophisticated stuff like enterprise U.2 SSDs or high-end desktop SSDs with PCIe or M.2.


This might be trivial for most of you, but I never realized this could be the cause, because I thought "Meh, that's an SSD, of course it's fast and sufficient for my needs". Turns out it only has about 5x the IOPS of an HDD. That works fine if you're only using it for caching the reads and writes you occasionally do via SMB, but not so much if you're storing Docker containers, VMs and IO-generating scripts on that drive. The constant reading and writing of certain apps can then cause your super fast caching SSD to misbehave: Dockers hung randomly, things just felt weird/off, and writes went at full GBit for a few seconds, then at only 20 MB/s for 10 seconds, then GBit for 5 seconds, then maybe 15 MB/s. I think you get the idea at this point.


I learned that kind of the hard way, and it took me a few months to figure it out, so I hope this serves as a shortcut for some of you to make use of your hardware's full potential.


Also, tomorrow I'll edit this thread to confirm that this was the reason, by replacing the drive with a slightly older "Samsung 840 EVO 250GB" - that one apparently does 97K read and 66K write IOPS. I hope this confirms my theory so I can finally enjoy full-speed GBit writes to unRAID :)


Also, if you're a mod, please tell me where this should be moved if it belongs elsewhere, or let me know if it's insignificant.


Woah, that was three times the word "also"; I should finally get to the point 👉.

(Also Unicode is awesome, if your device can handle it.)


EDIT: grammar and formatting


EDIT2: I installed the "Samsung 840 EVO 250GB" and overprovisioned it as described in this thread. Performance is much better than before; writes via SMB are now as expected.



This topic is now archived and is closed to further replies.
