Slow Drive Write Speed and Network Transfers - 30 - 70 MB/s


pish180


1 minute ago, pish180 said:

Weird... maybe it's a UI glitch.

Umm.... WTH... 
So I put it on High Water and I get significantly different results! It was able to transfer a 16GB video and held steady at around 200MB/s, peaking at 400-500MB/s. Once it changed to another video in the transfer queue, it dropped down to 36MB/s and didn't return to full speed. WTF is going on here?!

Are those the speeds you're seeing on the main page, or are you getting the speed from whatever you are transferring from?

 

Did the disk being written to change when the next video queued up? If so, I suspect the system is still reading the data needed for parity from the first movie when the second movie begins writing. In reconstruct write, all disks are read except the one being written to. In theory, if the disk being written to changes before the parity is generated from the last transfer, you will end up with one disk being written to for movie #2 while it is still being read to create parity for movie #1.

 

These are all educated guesses on my part; they're logical, but someone could prove me wrong.
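A minimal sketch of the two write modes being described, assuming simple XOR parity. This is only an illustrative model (byte strings standing in for drives), not Unraid's actual md driver code:

# Illustrative model only: XOR stands in for Unraid's parity maths and the
# "disks" are plain byte strings. The point is which drives each mode touches.

def rmw_write(disks, parity, target, new_data):
    """Read/modify/write: touches only the target disk and the parity disk."""
    old_data = disks[target]                                   # read target disk
    new_parity = bytes(p ^ o ^ n for p, o, n in zip(parity, old_data, new_data))
    disks[target] = new_data                                   # write target disk
    return new_parity                                          # (and write parity disk)

def reconstruct_write(disks, parity, target, new_data):
    """Turbo write: reads every *other* data disk, never the target or parity."""
    new_parity = new_data
    for i, d in enumerate(disks):
        if i != target:                                        # read all other data disks
            new_parity = bytes(a ^ b for a, b in zip(new_parity, d))
    disks[target] = new_data                                   # write target disk
    return new_parity                                          # (and write parity disk)

disks = [bytes([0b01010101]), bytes([0b10110110])]
parity = bytes(a ^ b for a, b in zip(*disks))
new = bytes([0b11110000])
assert rmw_write(list(disks), parity, 0, new) == reconstruct_write(list(disks), parity, 0, new)

Both modes end up with identical parity; the difference is purely which drives must be read, which is why a write landing on a new disk while the previous disk is still being read for parity could stall things.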

Link to comment


1 minute ago, david11129 said:

Are those the speeds you're seeing on the main page, or are you getting the speed from whatever you are transferring from?

 

Did the disk being written to change when the next video queued up? If so, I suspect the system is still reading the data needed for parity from the first movie when the second movie begins writing. In reconstruct write, all disks are read except the one being written to. In theory, if the disk being written to changes before the parity is generated from the last transfer, you will end up with one disk being written to for movie #2 while it is still being read to create parity for movie #1.

 

These are all educated guesses on my part; they're logical, but someone could prove me wrong.

No, the 200MB/s is what I was seeing from the remote system sending the transfer, and it was legit because it transferred those files quickly! I ran 3 more tests, and Fill-Up is the most consistent; I guess that's because it's not writing to other drives... This is more than just switching drives, though: when I use Most Free, a single file is being written to a single drive, but I only get 30 seconds of fast transfer. If I use Fill Up, it will transfer a full 16GB file without going below 200MB/s. Single-file operations should not be affected... there is something clearly going on here.

 

Almost seems like Turbo Write doesn't work when you use the Most Free setting.

 

Link to comment

If I were you, I would screenshot the Main page so you have the disk assignments available. Then stop the array, change both parity drives to none, and start the array again. Once you are finished transferring files, do the same thing: assign each disk to the parity position it was in previously and start the array. Then run a parity check and leave "Write corrections to parity" checked. This should let you transfer way quicker and build parity later.

 

I don't see a reason why that wouldn't work, but I suggest you read a bit and confirm. Afaik doing it this way shouldn't lead to any negative consequences.

Link to comment
1 minute ago, david11129 said:

If I were you, I would screenshot the Main page so you have the disk assignments available. Then stop the array, change both parity drives to none, and start the array again. Once you are finished transferring files, do the same thing: assign each disk to the parity position it was in previously and start the array. Then run a parity check and leave "Write corrections to parity" checked. This should let you transfer way quicker and build parity later.

 

I don't see a reason why that wouldn't work, but I suggest you read a bit and confirm. Afaik doing it this way shouldn't lead to any negative consequences.

That's hot garbage, but you are probably right. I have 18TB written already and parity is good for that data... dropping it now when I'm 3/4 of the way through... UGHHH, frustrating.

I'm likely going to have to do that... I have 8TB to go, and this data consists of many more, smaller files.

 

Honestly, I'd really like someone from Unraid to look into this. Is this a bug? Is it by design? Is Most Free broken? Why does Turbo Write not retain its setting when Unraid changes the drive being written to?

Another user reported this issue... can anyone else confirm that Most Free does the same thing? 

Link to comment
8 hours ago, pish180 said:

Just keep track of what needs parity so you don't have to rewrite the existing 14TB of parity info again.

That's not how parity works. Parity has no concept of files, full or empty drives, or anything like that. It simply calculates the sum of a specific address across all the data drives, so if any single data drive is missing, it can do the math and replace the bits that make the equation correct at that address.

 

If parity is ever not valid, it needs to be checked and corrected across the entire capacity of the parity drive. If it's anticipated that parity is going to be significantly wrong, like when it's been disabled for a period of time, it's faster to rebuild it instead of checking and correcting.

Link to comment
10 minutes ago, pish180 said:

That's hot garbage, but you are probably right. I have 18TB written already and parity is good for that data... dropping it now when I'm 3/4 of the way through... UGHHH, frustrating.

I'm likely going to have to do that... I have 8TB to go, and this data consists of many more, smaller files.

 

Honestly, I'd really like someone from Unraid to look into this. Is this a bug? Is it by design? Is Most Free broken? Why does Turbo Write not retain its setting when Unraid changes the drive being written to?

Another user reported this issue... can anyone else confirm that Most Free does the same thing? 

Once the data is written,  the parity will be created at the same speed it would during any normal check. I do those monthly anyways. So worst case, you're just having to redo the parity an extra time.

Link to comment
5 minutes ago, jonathanm said:

That's not how parity works. Parity has no concept of files, full or empty drives, or anything like that. It simply calculates the sum of a specific address across all the data drives, so if any single data drive is missing, it can do the math and replace the bits that make the equation correct at that address.

 

If parity is ever not valid, it needs to be checked and corrected across the entire capacity of the parity drive. If it's anticipated that parity is going to be significantly wrong, like when it's been disabled for a period of time, it's faster to rebuild it instead of checking and correcting.

I didn't say files. My thought was that it could index changed sectors, calculate parity for them at a later time, and store that index in a cache or RAM. I get it, though. I've been using other forms of RAID 5/6, and the same goes for those: rebuilding parity takes a rather significant amount of time on 20+ TB of data...

Link to comment
2 minutes ago, david11129 said:

Once the data is written,  the parity will be created at the same speed it would during any normal check. I do those monthly anyways. So worst case, you're just having to redo the parity an extra time.

Roughly how long would it take to redo the parity with UnRaid for 30TB of data?

Link to comment
33 minutes ago, pish180 said:

Roughly how long would it take to redo the parity with UnRaid for 30TB of data?

The amount of data actually has nothing to do with parity speed. The only thing that matters is the size of the parity drives, in your case 10TB. I am only running single parity with an 8TB drive, and my monthly checks take ~18 hours.

Edit: just checked the history and my parity time is ~18 hours, not 22 as originally stated.
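A rough sanity check on those numbers, assuming the check runs at a constant average speed (real checks slow down towards the inner tracks, so treat this as a ballpark only):

# Ballpark figures from this thread: an 8TB parity drive checked in ~18 hours,
# scaled to a 10TB parity drive at the same implied average speed.

def check_hours(parity_tb, avg_mb_per_s):
    return parity_tb * 1e12 / (avg_mb_per_s * 1e6) / 3600

avg_mb_per_s = 8e12 / (18 * 3600) / 1e6          # ~123 MB/s implied by 8TB in 18h
print(f"implied average speed: {avg_mb_per_s:.0f} MB/s")
print(f"estimated check for a 10TB parity drive: {check_hours(10, avg_mb_per_s):.1f} hours")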

 

Edit: I'll try my best to simplify how parity works. Here's a link for more info: https://wiki.unraid.net/Parity#How_parity_works

 

Drives store data in 0s and 1s; your 10TB disk can hold 10TB of them. Let's imagine you have two data drives and a single parity disk. Disk one contains 01010101. Disk two contains 10110110.

Parity in this case would be 11100011. If you add them together from the beginning, 0+1 is 1; 1 is an odd number, so set the parity bit to 1. In the 4th position, 1+1 is 2; 2 is even, so set the parity bit to 0, and so on for each position. If disk one died, you would calculate in reverse: since the first bit of parity is a 1, and we know the first bit on disk two is also a 1, the first bit on the dead disk has to be 0, because that is the only value that keeps the sum odd.

 

This is super simplified, and ignores dual parity completely, but it's how I think about it when attempting to wrap my head around what is actually happening. Now you see why data size is irrelevant: data is simply stored as 0s and 1s, and the parity drive is only going to hold so many of those, 10TB in your case.
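The same worked example expressed as a short Python sketch, using the exact bit patterns above (XOR is the even/odd sum being described):

# Parity is the bit-wise XOR of the data disks, and a lost disk is rebuilt by
# XORing parity with the surviving disk. Same bit patterns as in the post.
disk_one = 0b01010101
disk_two = 0b10110110

parity = disk_one ^ disk_two
print(f"parity: {parity:08b}")        # prints 11100011, matching the example

# Now pretend disk one died: recover its contents from parity and disk two.
recovered = parity ^ disk_two
assert recovered == disk_one          # 01010101 comes back exactly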

Edited by david11129
Link to comment

I did read that but... and thanks for the explanation. 
A few things:


1.  As for rebuilding, that makes sense: everything must be redone, or at least done up front. I'm a little vague on this, but when a RAID 5 array rebuilds, is it just looking at stripes (data) to rebuild? When Unraid rebuilds, does it go over every sector on the entire drive because it's not data-aware?

 

2.  With Unraid, every future parity calculation is triggered by a delta, i.e. a write operation, correct? My earlier thought, when I was suggesting a "calculate parity later" option, was a delta-tracking feature or snapshot (if you will). The deltas could go into a memory buffer, or even use space on a cache SSD, until IOs are freed up on the mechanical storage to write it all out. I'm not talking about writing the actual data to the cache; I'm talking about writing to the array and then using the cache/RAM/(whatever is fast enough) as a fast delta-tracking feature so the system can write out parity later. IDK, just a thought.

 

3.  This is all great stuff but it seems I have another issue that is slowing my transfers down when using the "Most Free" option.  So I'm not really sure what to do at this point. Has anyone confirmed this happens to them when using the same share options? 

Link to comment

Attached are 2 screenshots.  I used TeraCopy to graph the transfers.   

 

Comparing Most Free to Fill up Share settings: 

 

NO other changes were made at all.

Same files from same source. 

 

 

NOTES:

  • The first 8GB file didn't drop below 300MB/s using Most Free
  • The 2nd file transferred about 1GB of data before it slowed down to 50MB/s and then dropped to 30MB/s using Most Free
  • There was a surge in the 3rd file, but it then slowed down to 20-60MB/s using Most Free
  • The first file and 3/4 of the 2nd file transferred at 300MB/s+ using Fill Up; speeds only started to drop towards the end of file 2
  • The entire transfer never slowed below 130MB/s using Fill Up and mostly stayed at around 150MB/s afterwards

 

Anyone else???

 

 

MostFree.jpg

Fillup.jpg

Link to comment

Taking Parity out of the loop completely... 

 

I'm so confused about what is happening; I'm pretty sure there is a bug.
 

Tests:

Test 1: Transfer to just Cache Drives (2x 1TB SSDs RAID1) (defaults) using Most Free, High Water and Fill Up

Test 2: Transfer to just SSD without parity (Evo 860 SSD) using Most Free, High Water and Fill Up

 

Attached are screenshots named with each operation.

Same 3 files from same source for each transfer (roughly 8GB each)

 

Share settings: 

New share created for each drive test (total of 2 shares were created) and modified Most Free and Fill up settings. 

Cache Setting: "Only" for Cache Test

Cache Setting: "No" for Evo 860 Test

 

 

Notes:

  • I tried Most Free, High Water and Fill Up with no difference in results to the cache drives
  • The SSDs are easily capable of a consistent 400MB/s locally, but I never saw above 260MB/s transferring to the cache
  • When using the mechanical drives I would get 350-400MB/s for the first 2 files
  • Really odd results from Test 2...
  • Transferred to the 860 using High Water and it pretty much saturated 350-400MB/s the entire time, but then dropped to 200MB/s? (Same results with the Fill Up setting)
  • Transferred to the 860 using Most Free and... WTF? It's not doing parity nor changing drives here, but for some reason it drops down to 20-80MB/s. There is a surge at the start of File 2 and File 3, but it drops back down. This is what I am seeing from my mechanical drives using the same Most Free option.

 

Conclusion:

Something is wrong with the Most Free Option!

Something is wrong with the cache (there is no way there is that much overhead from a RAID 1 to drop it down 100MB/s)... or is there?

IDK wtf is going on with my 10G network link.  This is another issue outside of this topic but I can only get 3.5Gb/s peak. 

 

Anyone else care to run some tests? 

 

EDITED: Added specific tests for Fill Up in addition to the High Water tests. No screenshots for Fill Up, as the results were the same as the High Water tests.

 

HighWater-Cache.jpg

HighWater-Evo-NoCache.jpg

MostFree-Cache.jpg

MostFree-Evo-NoCache.jpg

Edited by pish180
Updating High Water and Fill Up - Updated several times - Sorry.
Link to comment

Well, it isn't helpful in solving the problem, but I can confirm I experience the exact same behavior. I duplicated your setup, and using TeraCopy or File Explorer in Windows, I see the same results. I am disabling parity right now and will confirm whether that solves it or not.

 

Though, regardless of the disk fill-up setting, both transfers dropped off for me after about 8GB were written.

Edited by david11129
Link to comment
5 minutes ago, david11129 said:

Well, it isn't helpful in solving the problem, but I can confirm I experience the exact same behavior. I duplicated your setup, and using TeraCopy or File Explorer in Windows, I see the same results. I am disabling parity right now and will confirm whether that solves it or not.

 

Though, regardless of the disk fill-up setting, both transfers dropped off for me after about 8GB were written.

I disabled parity and it transfers the entire time at line speed. I suspect this is why they suggest disabling parity until your initial data transfer is finished. 

 

Presently I am running a transfer to my cache drive and will update when it's finished.

Link to comment
2 hours ago, pish180 said:

I did read that but... and thanks for the explanation. 
A few things:


1.  As for rebuilding, that makes sense: everything must be redone, or at least done up front. I'm a little vague on this, but when a RAID 5 array rebuilds, is it just looking at stripes (data) to rebuild? When Unraid rebuilds, does it go over every sector on the entire drive because it's not data-aware?

 

2.  With Unraid, every future parity calculation is triggered by a delta, i.e. a write operation, correct? My earlier thought, when I was suggesting a "calculate parity later" option, was a delta-tracking feature or snapshot (if you will). The deltas could go into a memory buffer, or even use space on a cache SSD, until IOs are freed up on the mechanical storage to write it all out. I'm not talking about writing the actual data to the cache; I'm talking about writing to the array and then using the cache/RAM/(whatever is fast enough) as a fast delta-tracking feature so the system can write out parity later. IDK, just a thought.

 

3.  This is all great stuff but it seems I have another issue that is slowing my transfers down when using the "Most Free" option.  So I'm not really sure what to do at this point. Has anyone confirmed this happens to them when using the same share options? 

Just theorizing here, but it may be that when the second transfer occurs, the first is still being written out from RAM. Linux uses RAM as a cache for writes and lots of other things. So instead of it being a synchronous write, the disk spindle is having to move a bunch to write the new data plus the data that was still in RAM from the first transfer. They most likely aren't being written right next to each other, and the extra spindle movement is going to increase overhead. I have 96GB of RAM, and I believe I changed the amount of RAM used to cache writes.

 

After parity completes, I will attempt to increase the RAM cache size and report results.

 

Edit: just checked, and my RAM cache is set to 10% of RAM size. That means up to 9.6GB will be used to cache writes. That makes sense, because that is about when the transfer falls off for me. So after the RAM cache is filled, it is both writing that data to the disk and, I am sure, writing the second transfer straight to the disk.

 

Edit: Increasing the RAM cache to 35% did work. I transferred 32GB of files at full line speed. Once the cache fills up, that would decrease a bunch, especially with the parity operation going on now. I believe this is probably just normal behavior for a parity-protected array that isn't striped. Spindle overhead and parity calculations are going to come into play once the RAM cache is filled.
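For anyone who wants to watch this happen, the kernel reports the amount of write-back data still sitting in RAM in /proc/meminfo. A small sketch (my own illustration, not an Unraid tool) that polls it once a second during a transfer:

# Poll /proc/meminfo during a transfer to watch the Linux write-back cache
# fill and drain. "Dirty" is data accepted into RAM but not yet flushed to
# disk; once it stops growing, incoming writes are throttled to disk speed.
import time

def meminfo_mib(field):
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith(field + ":"):
                return int(line.split()[1]) / 1024    # /proc/meminfo reports kB
    return 0.0

while True:
    print(f"Dirty: {meminfo_mib('Dirty'):8.1f} MiB   "
          f"Writeback: {meminfo_mib('Writeback'):8.1f} MiB")
    time.sleep(1)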

 

I hope someone more helpful can chime in here. I believe I have done what I can as far as confirming what you are seeing. Ultimately, I guess the answer is to disable parity during the initial data sync. I can say that I have experienced no issues with Unraid in the 3 years since I set it up. My 1TB cache is large enough to absorb all writes until the mover runs, and even when downloading TBs to the array with no cache, my internet speed becomes a bottleneck before the disks do. I have 400Mbps down and all downloads run at max speed. Unraid is great for media serving and data protection: because it isn't striped, even if you lose more disks than parity can recover, you don't lose the data on the remaining disks and can read them on any Linux distro. If you need constant writes at your maximum line speed at all times, I suspect a striped solution is going to fit your needs better.

 

Edited by david11129
Link to comment

@david11129
Thanks for helping to confirm all this.  

 

I have 48GB of RAM, so if the default is 10%, that's 4.8GB. That is not consistent with my results, as all my files were around the 8GB mark. Where is this setting?

 

It seems many of the tests actually exceed your 1Gb/s NIC speed, so I'm not sure if you will see the same issues. Perhaps your cache doesn't fill up as fast, and your network speed won't exceed your max mechanical drive speed? IDK, but that part of our systems is a pretty significant difference, as many of my tests exceed your max network throughput.

 

I am unclear on which exact tests you ran and which results go with them. If possible, could you describe which tests you ran in each of your previous posts? You can just use my numbering below (Test 1, Test 2, Test 3).

-----------------------

 

I ran several tests with many different results.  

 

Summary: 

Tests I ran:

  • Test 1: Transferring to Mechanical drives, no Cache, using parity with Most Free, High Water and Fill Up settings
  • Test 2: Transferring to Cache Only drives (parity is out of the loop here) using Most Free, High Water and Fill Up
  • Test 3: Transferring to a stand alone SSD (no parity here and no cache) using Most Free, High Water and Fill Up

 

Same 3 files from the same Source.

3 Files were roughly 8 GB each

 

Results:

  • Test 1 and Test 3 had similar results when using the Most Free share setting: MASSIVE slowdowns! One had parity, one did not.
  • Test 1 and Test 3 had similar peak throughput speeds (350MB/s). One had parity, one did not.
  • Test 2 had slower peak throughput than both Test 1 and Test 3.
  • Test 2 showed no difference between share settings (Most Free, High Water and Fill Up).
  • Test 2 NEVER slowed down (same speed with all 3 files).
  • Test 3 had an odd slowdown at the end of file 2 (roughly 15GB into the transfer) using High Water: it dropped to 160-200MB/s.
  • All tests had similar results between the High Water and Fill Up settings.
  • Results were consistent between TeraCopy and the Windows transfer, so the app on the sending side was not a factor.
  • Test 1 using High Water (or Fill Up) had expected results around the middle of file 2, where it maxes out the mechanical drive speed (130-160MB/s). Likely when the cache ran out.
  • Test 1 will transfer an entire 8GB file at 350MB/s (maxing the NIC speed) and about 1GB of the 2nd file, at which point it slows to 30-70MB/s, when using the Most Free option.

 

Conclusions:

  • Something is wrong with the Most Free option when NOT using the cache.
  • Something is causing a massive slowdown on the cache when writing directly to it (250MB/s is really slow for SSDs).
  • I'm not convinced that Turbo Write is being applied properly with the Most Free share setting (perhaps a bug).
  • Parity does not seem to be the issue when transferring to the HDDs; more than likely the Most Free share setting is bugged. Results are as expected when using Fill Up or High Water.
  • Not sure why the direct transfer to the SSD slows down at all... no cache, no parity. It should stay at the initial transfer speed (350MB/s) as it did with the first 2 files (the slowdown started just at the end of the 2nd file, ~15GB in).

 

 

EDIT: 

Another note about my system:

I don't think it is relevant, but I do have 2 separate SATA controllers; both support 6Gb/s, and all SATA drives are connected at that speed.

PERC H200

LSI SAS2008

 

I have not tried tests to isolate one or the other.  

It should be noted:

The cache SSDs are on a H200

4 of the 10TB drives are on the H200

5 of the 10TB drives are on the LSI

Standalone SSD is on the LSI 

1 Parity drive on LSI and 2nd parity on H200
 

 

Let me know please. 

 

 

 

Edited by pish180
Removing quote for cleanliness. Adding edit section.
Link to comment

I'll expand on my test results later, but I wanted to let you know that the default cache ratio for RAM is 20%, 9.6GB in your case.

 

I changed it via the Tips and Tweaks plugin. The setting "Disk Cache 'vm.dirty_ratio' (%):" is where you can change it. I had it set to 10 because I was having out-of-memory errors a while back, related to something I had misconfigured, not sure what anymore. When I changed it to 35% today, I did notice it stated the default is 20%.

 

I've never really thought about changing it back to the default, mainly because I have enough sticks on hand to increase my RAM to 256GB.

 

Also, realize that increasing the RAM cache is just going to put off the point at which your transfer slows down.
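That plugin field maps to the standard Linux vm.dirty_ratio sysctl, so the effective write cache is just that percentage of total RAM. A small sketch (my own illustration, not part of the plugin) that reads the live values from /proc and converts them to absolute sizes:

# vm.dirty_ratio is a percentage of total RAM: once that much un-flushed data
# accumulates, processes writing to the share are stalled until it drains.

def read_int(path):
    with open(path) as f:
        return int(f.read().strip())

def mem_total_bytes():
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith("MemTotal:"):
                return int(line.split()[1]) * 1024    # reported in kB

ram = mem_total_bytes()
ratio = read_int("/proc/sys/vm/dirty_ratio")                    # hard limit, e.g. 20 (%)
bg_ratio = read_int("/proc/sys/vm/dirty_background_ratio")      # background flush threshold

print(f"RAM: {ram / 2**30:.1f} GiB")
print(f"dirty_ratio {ratio}% -> writers stall after ~{ram * ratio / 100 / 2**30:.1f} GiB of dirty data")
print(f"dirty_background_ratio {bg_ratio}% -> background flushing starts at ~{ram * bg_ratio / 100 / 2**30:.1f} GiB")

With 48GB of RAM and the 20% default, that works out to roughly 9.6GB, which lines up with where the transfers in this thread fall off.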

 

Edited by david11129
Link to comment
15 minutes ago, david11129 said:

I'll expand on my test results later, but I wanted to let you know that the default cache ratio for RAM is 20%, 9.6GB in your case.

 

I changed it via the Tips and Tweaks plugin. The setting "Disk Cache 'vm.dirty_ratio' (%):" is where you can change it. I had it set to 10 because I was having out-of-memory errors a while back, related to something I had misconfigured, not sure what anymore. When I changed it to 35% today, I did notice it stated the default is 20%.

 

I've never really thought about changing it back to the default, mainly because I have enough sticks on hand to increase my RAM to 256GB.

 

Thanks for this.  Even more interesting results after messing with this.  I'm not able to explain what is happening here... 

 

Notes: 

  • Set Dirty Ratio to 1% so 480MB (basically nothing)
  • As expected no massive surge in file transfer
  • However... I did NOT have a massive drop in speed, and the UI didn't report reads on the parity drives as it did with the Most Free setting before. Is the problem gone now that I reduced the dirty ratio???
  • Fill Up and High Water just maxed out the drive speed, roughly 150-200MB/s the entire time. Consistent with the other results, just without the surge.
  • The cache drive showed no change (it appears not to use this dirty ratio setting). Still not sure why the cache SSDs are so slow (250MB/s).

MostFree-HDD-1pDirty1.jpg

Link to comment

That's amazing!! I would love to know what's going on behind the scenes that is fixing it, but I'm sure you're just glad to have decent write speeds now! 

 

Did you set the CPU governor to something else too, by chance? If you open the terminal and enter "watch -n1 lscpu" you can see what speed the CPU is operating at. I used conservative for a long time because it ramped down well when extra power wasn't needed, and ramped up when it was. With conservative off, I save about 15 watts according to my Kill A Watt. I have it set to performance now, because with 17 drives, dual CPUs, and 3 SSDs, low power is out of my reach. My idle isn't that bad actually: with performance mode and all disks spun down, I idle a hair below 100 watts. With everything ramped up and moderate CPU usage, it maxes around 400 watts.

Link to comment

Following on from my last test, this time I set the dirty ratio to 40% and tried Test 1 again (writing to the HDDs).

 

 

Using Most Free:

Just cruising along at 350MB/s and then BAMMMM, down to 10MB/s after the RAM maxes out. 4m22s total. WHY DOES IT DROP?

Using the Fill Up setting:

It just crushes all 3 files at 350MB/s non-stop, transferring 24GB of data in 1m22s. Clearly this exceeds 40% of 48GB of RAM (19.2GB-ish), yet it kept going at 350MB/s. WHY?

Fillup-HDD-40pDirty.jpg

MostFree-HDD-40pDirty.jpg

Link to comment
6 minutes ago, david11129 said:

That's amazing!! I would love to know what's going on behind the scenes that is fixing it, but I'm sure you're just glad to have decent write speeds now! 

 

Did you set the CPU governor to something else too, by chance? If you open the terminal and enter "watch -n1 lscpu" you can see what speed the CPU is operating at. I used conservative for a long time because it ramped down well when extra power wasn't needed, and ramped up when it was. With conservative off, I save about 15 watts according to my Kill A Watt. I have it set to performance now, because with 17 drives, dual CPUs, and 3 SSDs, low power is out of my reach. My idle isn't that bad actually: with performance mode and all disks spun down, I idle a hair below 100 watts. With everything ramped up and moderate CPU usage, it maxes around 400 watts.


I'm so confused about what is happening. It has to be a bug, and honestly, all these unsolved cases in the forums reporting slow speeds... it's probably this.

As for the CPU, I have it set to performance (I plan to run Plex on here and maybe some VMs for testing), and I have a P2000 GPU to set up for hardware transcoding as well; that's the entire reason behind switching from FreeNAS to Unraid in the first place. I also like the idea of not losing data if the array fails, so that's a plus. I don't really need 10G speeds to stream; my internet upload is the real bottleneck there at 1Gb/s (up and down), so I will max out my internet before I max out the drives. At least that is my thinking on bandwidth. However, for internal network transfers I still need to resolve this 10G link only working at 3.5Gb/s. That is my next task after I figure out this issue.

 

I think we have run enough tests, and I'd really like to know if this is a bug or a "feature". I can't imagine my last tests are "working as intended".

Link to comment

RECAP

I just wanted to recap where we are at this point, for clarity's sake. There has been a lot of testing and back and forth, and it may be hard to follow.

 

Problem:

I was experiencing slow transfer speeds when not using a cache drive (i.e. SSDs) and writing directly to the HDDs during the initial transfer of data to Unraid (~30TB). Transfer speeds would start strong (350MB/s on a 10G link) and then drop down to 30MB/s.

 

Issues:

There seems to be an issue involving the Most Free share setting, RAM caching (dirty ratio), and Turbo Write.

 

 

Work Around:

1.  Best option: Reduce your dirty ratio to 1% using the Tips and Tweaks plugin. The default is 20%, which uses 20% of the available RAM as a write cache; for some reason this causes problems when using the Most Free share option. Reducing it removes most of the peak transfer rates (if you have a 10G link) and leaves you with mechanical drive maximums (150-180MB/s).

 

2.  Option 2: Don't use the Most Free option; use the Fill Up option. If you are like me and want your data spread evenly across many disks for a number of reasons, and therefore want to keep the Most Free option, then you must use option 1.

 

 

Needs UnRaid Team Help:

  • I believe there is something wrong in the interaction between the Most Free share setting, the dirty ratio, and Turbo Write that is causing issues.

 

Ideally, I would like to use RAM as a cache without it breaking my large transfer queues when using the Most Free share option. I should be able to get a nice surge of line speed and then taper off to a reasonable mechanical drive speed after the RAM cache is filled. My guess is that it is trying to service 2 different queues at the same time: the full RAM cache queue and the existing transfer queue. Perhaps when 2 queues are created, the 2nd queue doesn't have Turbo Write applied? This problem is NOT visible when using the Fill Up share option: for example, with 19GB of effective dirty-ratio cache and 24GB of data transferred, I never saw it drop below line speed (350MB/s). Turning down the dirty ratio just seems to be a way to reduce how badly the problem is exacerbated.

 

Update: 

18 March 2020

Edited by pish180
Added more details.
Link to comment

I didn't read the entire thread, but in case it's related: the Most Free option is always slower, with or without turbo write, since parity writes will overlap when changing disks, so that setting should never be used if best performance is the goal. It was always like this, and since v6.8.x, with the new automatic switch to non-turbo write, it can be even worse (if using turbo write), since at those points (when writes overlap) it will change to read/modify/write.

Link to comment
3 hours ago, johnnie.black said:

I didn't read the entire thread, but in case it's related: the Most Free option is always slower, with or without turbo write, since parity writes will overlap when changing disks, so that setting should never be used if best performance is the goal. It was always like this, and since v6.8.x, with the new automatic switch to non-turbo write, it can be even worse (if using turbo write), since at those points (when writes overlap) it will change to read/modify/write.

Thanks for that info.  That is probably part of it for sure!  

 

On that same note, it's all software; it can be changed. There is no reason each drive can't have its own RAM cache (dirty ratio): if you have it set to 20%, that could be split across however many disks you have in the array. There are probably many ways to solve the problem. I don't understand why Most Free wouldn't be the default. We all talk about using an array for redundancy, performance, scalability... so why would you put all your eggs in one basket, so to speak? Meaning, why put all your data on a single drive when you have 3+ drives to choose from? That will likely spark a heated discussion, so I digress. The point is this problem should not exist, but here we are.

Link to comment
