SSD Array Drive Experiment


JonathanM


I'd like to propose an experiment to determine whether SSDs as array members could be better optimized with the tools we have right now.

Physical Requirements.

1. Test unraid server (duh)

2. One large(ish) SSD for parity

3. Two identical SSDs for data drives; both must support automatic background cleanup (idle garbage collection)
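(A quick way to check whether a given SSD at least advertises TRIM/discard support — the idle garbage collection itself isn't directly queryable — assuming the drive shows up as /dev/sdb, which is just a placeholder:)

# Non-zero DISC-GRAN / DISC-MAX columns mean the device supports discard (TRIM)
lsblk -D /dev/sdb
# hdparm reports the same from the ATA identify data
hdparm -I /dev/sdb | grep -i trim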

Methodology.

1. blkdiscard all 3 SSDs (commands for steps 1, 2 and 4 are sketched after this list)

2. set an HPA (host protected area) to reserve 10% of one of the data SSDs

3. set up array as normal, XFS formatted

4. benchmark read and write speed on both data SSDs

5. copy large amounts of data identically to both data drives, then delete and refill multiple times, to the point where performance suffers due to lack of TRIM.

6. benchmark again and compare

7. leave server running idle for hours

8. benchmark again and compare
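A rough command sketch for steps 1, 2 and 4, assuming the SSDs show up as /dev/sdb, /dev/sdc and /dev/sdd and disk1 is mounted at /mnt/disk1 (device names, sector count and paths are placeholders; double-check everything before running, since these commands are destructive):

# Step 1: discard every cell on each SSD (wipes all data on the device)
blkdiscard /dev/sdb
blkdiscard /dev/sdc
blkdiscard /dev/sdd

# Step 2: hide ~10% of one data SSD behind a persistent HPA.
# hdparm -N sets the visible sector count; ~90% of a 120GB drive's
# 234441648 sectors is roughly 211000000. 'p' makes the setting permanent.
hdparm -N p211000000 /dev/sdc

# Step 4: sequential write and read benchmark on the mounted data disk, e.g. with fio
fio --name=seqwrite --rw=write --bs=1M --size=8G --direct=1 --directory=/mnt/disk1
fio --name=seqread  --rw=read  --bs=1M --size=8G --direct=1 --directory=/mnt/disk1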

 

Hypothesis: a 10% HPA may give the SSD enough spare area to regain performance by "defragmenting" (garbage collecting) written areas during idle time.

 

Disclaimer: I have not used SSDs as array members, so I have no first-hand knowledge of performance loss due to lack of TRIM. For me, this is just a thought experiment.

I'm hoping @johnnie.black may be able to shed some light on this, given the great work he has already done with SSD arrays.

 

Thoughts?

11 minutes ago, jonathanm said:

I have not used SSDs as array members, so I have no first-hand knowledge of performance loss due to lack of TRIM. For me, this is just a thought experiment.

 

Some SSDs are better than others, but in my experience write performance was terrible after a little time due to the lack of TRIM, so much so that I stopped using them as an active array. I'm now using ZFS with my biggest SSDs, but I still have several 128GB and smaller drives I can use to run some tests; I'll do that when I have the time.

  • 2 weeks later...
On 18/08/2017 at 4:40 PM, jonathanm said:

2. set an HPA (host protected area) to reserve 10% of one of the data SSDs

 

Started doing some tests. I'm going to change the methodology because, to my thinking, if I only over-provision one of the data SSDs the test could be limited by the parity SSD, since it won't be over-provisioned, and any performance improvement may not be visible. So I'm first testing a 3-SSD array with all drives at full capacity, then I'll over-provision all 3 and repeat the tests.
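For reference, over-provisioning all three before the second round could be done with hdparm, something like this (device names and the sector count are placeholders for 120GB drives; check the numbers for your own drives before running):

# Hide roughly 10% of each SSD behind a persistent HPA ('p' makes it survive power cycles)
for dev in /dev/sdb /dev/sdc /dev/sdd; do
    hdparm -N p211000000 "$dev"
done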

 

P.S. Now that I want them to slow down, they haven't yet. I'm using 120GB 850 EVOs and I've written more than 1TB to disk1, and it's still writing at the maximum speed of 157MB/s (150MB/s is the announced maximum sustained write speed for this model). Also, for anyone interested, if I use normal read/modify/write instead of turbo write, write speed is 112MB/s due to the parity write penalty, so as expected the parity penalty is much lower than with HDDs.
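For anyone wanting to reproduce the write numbers, a simple sustained-write check against a data disk could look something like this (the path and size are just placeholders):

# Write an 8GiB file straight to the disk share and let dd report throughput.
# Note: some controllers compress zeros; use a pre-generated random file if in doubt.
dd if=/dev/zero of=/mnt/disk1/speedtest.bin bs=1M count=8192 oflag=direct status=progress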


Question... does parity have an unformatted partition defined? In my head, I had envisioned something like a 250GB for parity and a couple of 120s for data drives. That way parity would probably be faster than either of the data members, and not constrained by the same write limits as far as TRIM goes.

 

What type of data is in your test set? I envisioned a media library, with a good mix of small metadata files and large media files.

 

Thank you very much for testing this; it really would be nice to find a way to use cheap SSDs as array devices without a speed penalty over time.

20 minutes ago, jonathanm said:

I had envisioned something like a 250GB for parity and a couple of 120s for data drives.

 

That would be a good option, but at the moment I only have 120GB SSDs available.

 

21 minutes ago, jonathanm said:

What type of data is in your test set? I envisioned a media library, with a good mix of small metadata files and large media files.

 

That's what I'm using.

 

Stopped for today; I must have copied about 2TB and it's still writing at max speed. Will continue tomorrow.


Still no slowdown O.o. I purposely used the 850 EVOs because I remember them getting very slow when untrimmed, slow like <50MB/s. Wear level is already down a couple of percentage points and it's still writing at max speed. If there's no change by tomorrow I'll probably stop the test until I can again set up an SSD-only server for normal daily usage.
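(For anyone following along, the wear figure comes from the drive's SMART data; something like this shows it, assuming the SSD is /dev/sdb, which is a placeholder, and that the drive reports the usual Samsung attribute names:)

# Wear_Leveling_Count and Total_LBAs_Written are the relevant Samsung attributes
smartctl -A /dev/sdb | grep -Ei 'wear_leveling|lbas_written'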


I've now written over 10TB and there are still no signs of any slowdown. I'm surprised, because IIRC when I was using the SSD server it didn't take that long for the slowdown to become very noticeable.

 

If no one does it sooner, I'll revisit this when I have the chance to rebuild my SSD server and see how it performs after normal daily usage, so that at least I'm not just wasting the SSDs' life.

