
Seeking testing methodology advice


Recommended Posts

Posted (edited)

So, I have been working on a project using all SSDs in an array, and I fear that my testing methodology is giving me invalid results and leading me to the wrong conclusions.

 

The Array

  • Single parity: one 1TB SSD
  • Two data drives, 1TB each
  • Same brand and model SSD for all three
  • No cache

 

The Test

  • 1 run = transferring a 48GB video file 42 times
  • After each run I do a parity check
  • I randomly select one of the copies and check that it plays
  • After each run I delete all the data and start again
  • The array's usable capacity is 2TB
  • My SSDs do support trim; I am not using a trim plugin (on purpose)

I am trying to determine when (or even if) write degradation happens in an all-SSD array. I have no idea how to properly test for such a thing, so any advice would be appreciated.
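For concreteness, the logging I have in mind looks roughly like the sketch below. It is not my actual script; the paths, file names, and run size are placeholders. It simply times each copy of the test file and appends the throughput to a CSV so the runs can be compared later instead of eyeballed.

```python
# Rough sketch only: time each copy of the test file and log MB/s per copy.
# Paths and sizes are placeholders, not the real setup.
import csv
import os
import shutil
import time
from pathlib import Path

SOURCE = Path("/mnt/disks/scratch/video.mkv")   # hypothetical 48GB source file
DEST_DIR = Path("/mnt/user/ssd_test")           # hypothetical share on the array
COPIES_PER_RUN = 42
LOG = Path("run_log.csv")

def one_run(run_id: int) -> None:
    size_mb = SOURCE.stat().st_size / 1_000_000
    with LOG.open("a", newline="") as log:
        writer = csv.writer(log)
        for i in range(COPIES_PER_RUN):
            dest = DEST_DIR / f"run{run_id:02d}_copy{i:02d}.mkv"
            start = time.monotonic()
            shutil.copyfile(SOURCE, dest)
            os.sync()                      # flush the page cache so the timing is honest
            elapsed = time.monotonic() - start
            writer.writerow([run_id, i, round(size_mb / elapsed, 1)])  # MB/s

if __name__ == "__main__":
    one_run(run_id=1)
```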

 

Run #21

I just completed my 21st successful run and the transfer speeds have not varied significantly yet. I have not yet averaged the data from each run and each parity check, but a cursory glance has not shown any noticeable change in reads or writes.

As you can imagine, after completing my 21st successful run, or transferring an approximate total of 42,336GB (about 42TB) of data over the course of one and a half weeks-ish, I figured I would have noticed something by now.

There has been little deviation in parity check times as well, at least none that my eye has perceived.
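When I do get around to averaging the numbers, the summary I have in mind looks something like the sketch below. It assumes the CSV layout from the logging sketch above and just prints a per-run mean and standard deviation plus a least-squares slope across runs, so a slow drift would show up as a consistently negative trend rather than something my eye has to catch.

```python
# Rough sketch: summarize per-run throughput instead of eyeballing it.
import csv
from collections import defaultdict
from statistics import linear_regression, mean, stdev  # linear_regression needs Python 3.10+

runs = defaultdict(list)
with open("run_log.csv", newline="") as f:
    for run_id, _copy, mbps in csv.reader(f):
        runs[int(run_id)].append(float(mbps))

for run_id in sorted(runs):
    speeds = runs[run_id]
    print(f"run {run_id:2d}: mean {mean(speeds):6.1f} MB/s, stdev {stdev(speeds):5.1f}")

# Slope of mean throughput across runs; a consistently negative value = degradation.
slope, _ = linear_regression(sorted(runs), [mean(runs[r]) for r in sorted(runs)])
print(f"trend: {slope:+.2f} MB/s per run")
```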

 

Crossroads

I am still compiling my data and am not quite ready to share what I have recorded, but I have reached a point where either I am testing incorrectly or I am misunderstanding what is actually going on.

 

What am I doing wrong? What should I be testing for instead? Is there perhaps a better test than doing network file transfers? In what form will I see problems?

Yes, I am aware this isn't officially supported. I am just a curious individual and was wondering what the threshold is for performance degradation and/or parity issues with an SSD array. Furthermore, I am aware that if I wanted the ultimate performance I could use a RAID1 cache or some other setup. That is not the point of this project/test. Thank you though.

Edited by spx404
I suck at typing.
Posted
10 hours ago, spx404 said:

My SSDs do support trim; I am not using a trim plugin (on purpose)

Trim (manual or plugin) won't work on any array devices.

 

10 hours ago, spx404 said:

There has been little deviation in parity check times as well, at least none that my eye has perceived.

That's expected. If anything slows down due to the lack of trim it would be writes; reads should always be good and consistent.

 

The only issue with SSDs in the array is the lack of trim, but if you use reasonable-quality SSDs it should be fine. I also recommend using a faster/higher-endurance SSD for parity, like an NVMe device.

 

I've been using a small SSD array for a few months and it's still performing great, basically the same as it was when new, and I've written every SSD about 4 times over (parity 20 times over).

 

 

Posted
4 minutes ago, johnnie.black said:

I've been using a small SSD array for a few months and it's still performing great, basically the same as it was when new, and I've written every SSD about 4 times over (parity 20 times over).

What kind of parity check speeds do you get? Are they higher than with regular spinners?

Posted

I have a pool of 4 nvme devices in my main server, normally used as cache.

For kicks, I temporarily reconfigured the main array to use these 4 NVMe devices (pardon the Dutch language in the screenshot).

[screenshot: main array configured with the four NVMe devices]

 

Running a parity sync yields 930 MB/s.

[screenshot: parity sync running at 930 MB/s]

 

Posted (edited)
27 minutes ago, bonienl said:

I have a pool of 4 nvme devices in my main server, normally used as cache.

For kicks, I temporarily reconfigured the main array to use these 4 NVMe devices (pardon the Dutch language in the screenshot).

[screenshot: main array configured with the four NVMe devices]

 

Running a parity sync yields 930 MB/s.

[screenshot: parity sync running at 930 MB/s]

 

Could you report the parity check speed after the sync, thanks. (I just want to get the figure.)

Edited by Benson
Posted
1 minute ago, Benson said:

Could you report the parity check speed after the sync, thanks.

I didn't wait for the parity sync to complete, and my main server is meanwhile back to its original state.

 

The speed does drop over time, and at 70% completion it was doing around 400 MB/s.

I guess what comes into play here is how well an NVMe device performs with sustained writes.
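A quick way to see where that falloff happens on a given device is to write a few tens of GB in fixed-size chunks and log each chunk's throughput. A minimal sketch, with the path and sizes as placeholders rather than anything from my server:

```python
# Minimal sketch: write incompressible data in fixed chunks and log MB/s per
# chunk, so the point where the SLC cache runs out shows up as a sharp drop.
import os
import time

TEST_FILE = "/mnt/disks/nvme_scratch/sustained.bin"  # placeholder path
CHUNK_MB = 256
TOTAL_GB = 64

chunk = os.urandom(CHUNK_MB * 1024 * 1024)

with open(TEST_FILE, "wb") as f:
    for i in range(TOTAL_GB * 1024 // CHUNK_MB):
        start = time.monotonic()
        f.write(chunk)
        f.flush()
        os.fsync(f.fileno())   # make sure the chunk actually hits the device
        print(f"chunk {i:4d}: {CHUNK_MB / (time.monotonic() - start):7.1f} MB/s")
```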

 

Posted (edited)
3 minutes ago, bonienl said:

The speed does drop over time

Too bad. (Expected, since SSD writes fall off once out of SLC cache mode; I just wanted to get the read speed figure.)

Thanks

Edited by Benson
Posted (edited)
12 minutes ago, Benson said:

Too bad. Thanks

Still a lot faster than traditional hard disks ...

 

With a parity sync, it is the write speed which determines the overall operation. Doing a parity check would show the read speed.

Edited by bonienl
Posted
20 minutes ago, bonienl said:

The speed does drop over time, and at 70% completion it was doing around 400 MB/s.

It's a very important thing to keep in mind with flash devices: cheaper devices can't sustain high speeds for long, and it usually also depends on how full they are. Here's an example from my test server with a cheap TLC SSD, starting with the SSD like new after a full-device trim:

 

[screenshot: rebuild speed with the SSD freshly trimmed]

 

After 30% rebuilding:

 

[screenshot: rebuild speed after 30% completion]

 

And it stays like that until the end. Good SSDs like the 860 EVO, MX500, etc., can always sustain good write speeds. Large-capacity models are also usually faster at writing than small-capacity models, since they can write in parallel to the various NAND chips:

 

[screenshot: sustained write speed on a better-quality SSD]

Posted

@johnnie.black

I guess my problem is my failure to understand why I haven't experienced any negative side effects. If the SSDs are unable to zero out, then why is the performance still so amazing after roughly 42,000GB of data written to the array?
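For reference, one rough way to sanity-check how much has actually been written to each SSD is to read the SMART attributes with smartctl. The sketch below is only an illustration: the attribute name, the 512-byte LBA size, and the device paths are assumptions that vary by vendor.

```python
# Rough sketch: read total LBAs written from SMART (smartmontools 7+ JSON output).
# "Total_LBAs_Written" and 512-byte LBAs are common but not universal -- adjust
# for the specific drives.
import json
import subprocess

def lbas_written(device: str):
    out = subprocess.run(["smartctl", "-A", "--json", device],
                         capture_output=True, text=True).stdout
    table = json.loads(out).get("ata_smart_attributes", {}).get("table", [])
    for attr in table:
        if attr.get("name") == "Total_LBAs_Written":
            return attr["raw"]["value"]
    return None

for dev in ("/dev/sdb", "/dev/sdc", "/dev/sdd"):  # placeholder device names
    lbas = lbas_written(dev)
    if lbas is None:
        print(dev, "attribute not reported; check smartctl -A output manually")
    else:
        print(dev, f"~{lbas * 512 / 1e12:.1f} TB written")
```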

Posted

It should mostly depend on the SSDs used. Like I mentioned, I also haven't noticed any slowdown on my SSD array so far. I'm using WD Blue 3D SSDs for data and a WD Black NVMe device for parity. What are you using?
