spxlabs Posted July 2, 2020 (edited)

So, I have been working on a project using all SSDs in an array, and I fear that my testing methodology is giving me invalid beliefs and results.

The Array
- Single parity, 1TB SSD
- 2 data drives, 1TB each
- Same brand and model SSD for all 3
- No cache

The Test
- 1 run = transferring a 48GB video 42 times
- After each run I do a parity check
- I randomly select a video to view and check that it plays
- After each run I delete all the data and start again
- My array is 2TB in usable capacity
- My SSDs do support trim; I am not using a trim plugin (on purpose)

I am trying to determine when (or even if) write degradation happens in an all-SSD array. I have no idea how to properly test for such a thing, so any advice would be appreciated.

Run #21
I just completed my 21st successful run and the transfer speeds have not varied significantly yet. I have not yet averaged the data from each run and each parity check, but a cursory glance has not shown any noticeable change in writes or reads. As you can imagine, after completing my 21st successful run, i.e. transferring approximately 42,336GB (about 42TB) of data over the course of roughly a week and a half, I figured I would have noticed something by now. There has been little deviation in parity check times either, at least none that my eye has perceived.

Crossroads
I am still compiling my data and am not truly ready to reveal what I have recorded, but I have reached a point where either I am testing incorrectly or I am misunderstanding what is actually going on. What am I doing wrong? What should I be testing for instead? Is there perhaps a better test than network file transfers? In what form will I see problems?

Yes, I am aware this isn't officially supported. I am just a curious individual wondering what the threshold is for performance degradation and/or parity issues with an SSD array.
Furthermore, I am aware that if I wanted the ultimate performance I could use a RAID1 cache or some other setup. That is not the point of this project/test. Thank you though. Edited July 2, 2020 by spx404 I suck at typing.
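The test loop above can be sketched in a few lines of Python (the paths and helper names are hypothetical, purely illustrative of the methodology described, not a script posted in the thread). It also makes the throughput math explicit: 48GB per copy × 42 copies per run × 21 runs works out to 42,336GB, i.e. roughly 42TB.

```python
import shutil
import time
from pathlib import Path

FILE_GB = 48          # size of the test video, per the post
COPIES_PER_RUN = 42   # transfers per run, per the post

def total_written_gb(runs: int) -> int:
    """Total data written to the array after `runs` complete runs."""
    return FILE_GB * COPIES_PER_RUN * runs

def run_once(src: Path, dest_dir: Path) -> float:
    """Copy the test file to the array once and return throughput in MB/s."""
    dest = dest_dir / src.name
    start = time.monotonic()
    shutil.copy(src, dest)
    elapsed = time.monotonic() - start
    return (FILE_GB * 1024) / elapsed

# After 21 runs: 48 * 42 * 21 = 42,336 GB written, about 42 TB (not 42,336 TB).
```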
BRiT Posted July 2, 2020 Do you mean Parity Sync or Parity Check? They are different. One builds the data, the other validates it.
JorgeB Posted July 3, 2020 10 hours ago, spx404 said: my SSDs do support trim, I am not using a trim plugin (on purpose) Trim (manual or via plugin) won't work on any array devices. 10 hours ago, spx404 said: There has been little deviation in parity check times as well, at least none that my eye has perceived. That's expected; if anything slows down from the lack of trim it would be writes, reads should always remain good and constant. The only issue with SSDs in the array is the lack of trim, but if you use reasonable-quality SSDs you should be fine. I'd also recommend using a faster/higher-endurance SSD for parity, like an NVMe device. I've been using a small SSD array for a few months and it's still performing great, basically the same as when it was new, and I've written every SSD about 4 times over (parity 20 times over).
bonienl Posted July 3, 2020 4 minutes ago, johnnie.black said: I've been using a small SSD array for a few months and it's still performing great, basically the same as it was when new, and I've written every SSD about 4 times over (parity 20 times over). What kind of parity check speeds do you get? Are they higher than with regular spinners?
JorgeB Posted July 3, 2020 Just now, bonienl said: What kind of parity check speeds do you get, is it higher than regular spinners? About 400MB/s, but it mostly depends on the SSDs/controllers used; on my test server I can get up to 500MB/s with small SSD arrays of up to 8 devices.
bonienl Posted July 3, 2020 I have a pool of 4 NVMe devices in my main server, normally used as cache. For kicks, I temporarily reconfigured the main array to use these 4 NVMe devices (pardon the Dutch language in the screenshot). Running a parity sync yields 930 MB/s.
Vr2Io Posted July 3, 2020 (edited) 27 minutes ago, bonienl said: Running a parity sync, yields 930 MB/s. Could you report the parity check speed after the sync completes, thanks. (Just want to get the figure.) Edited July 3, 2020 by Benson
bonienl Posted July 3, 2020 1 minute ago, Benson said: Could you report the parity check speed after the sync completes, thanks. I didn't wait until the parity sync completed, and meanwhile my main server is back to its original state. The speed does drop over time; at 70% completion it was doing around 400 MB/s. I guess this is where it matters how well an NVMe device performs with sustained writes.
Vr2Io Posted July 3, 2020 (edited) 3 minutes ago, bonienl said: The speed does drop over time Too bad. (Expected for SSD writes once out of SLC-cache mode; I just wanted to get the read speed figure.) Thanks Edited July 3, 2020 by Benson
bonienl Posted July 3, 2020 (edited) 12 minutes ago, Benson said: Too bad. Thanks Still a lot faster than traditional hard disks... With a parity sync, it is the write speed that determines the overall operation. A parity check would show the read speed. Edited July 3, 2020 by bonienl
JorgeB Posted July 3, 2020 20 minutes ago, bonienl said: The speed does drop over time, and at 70% completion it was doing around 400 MB/s. It's a very important thing to keep in mind with flash devices: cheaper devices can't sustain high speeds for long, and it usually also depends on how full they are. Here's an example from my test server with a cheap TLC SSD, started with the SSD like new after a full device trim: [speed screenshot] After 30% of the rebuild: [speed screenshot] And it stays like that until the end. Good SSDs like the 860 EVO, MX500, etc., can always sustain good write speeds, and large-capacity models are also usually faster at writing than small-capacity models, since they can write in parallel to the various NAND chips: [speed screenshot]
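The SLC-cache falloff described above can be measured directly: write sequentially in fixed-size chunks and log per-chunk throughput; a sharp drop partway through marks the point where the drive's fast cache runs out. A minimal sketch (the function name and sizes are placeholders, not a tool mentioned in the thread):

```python
import os
import time

def sustained_write_profile(path: str, total_mb: int, chunk_mb: int = 256) -> list[float]:
    """Write `total_mb` MB to `path` in `chunk_mb` chunks and return MB/s per chunk.

    A sharp, sustained drop partway through the list suggests the drive's
    SLC cache was exhausted and it fell back to its native NAND write speed.
    """
    buf = os.urandom(chunk_mb * 1024 * 1024)  # incompressible data
    speeds = []
    with open(path, "wb") as f:
        for _ in range(total_mb // chunk_mb):
            start = time.monotonic()
            f.write(buf)
            f.flush()
            os.fsync(f.fileno())  # force the chunk to the device, not the page cache
            speeds.append(chunk_mb / (time.monotonic() - start))
    return speeds
```

Plotting the returned list (or just eyeballing it) gives the same picture as the screenshots: a flat fast region, then a cliff on cheap TLC drives, versus a roughly flat line on drives that sustain their write speed.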
spxlabs Posted July 3, 2020 (Author) @johnnie.black I guess my problem is my failure to understand why I haven't experienced any negative side effects. If the SSDs are unable to zero out (trim) freed blocks, then why is the performance still so good after roughly 42TB of data written to the array?
JorgeB Posted July 3, 2020 It should mostly depend on the SSDs used. As mentioned, I also haven't noticed any slowdown on my SSD array so far; I'm using WD Blue 3D SSDs for data and a WD Black NVMe device for parity. What are you using?
JorgeB Posted July 3, 2020 Enterprise devices should still maintain good performance even without trim.