
[Solved] Array write speed issue - very slow even with turbo write enabled


Solved by JorgeB


Hey all,

 

After reading through countless posts I am out of ideas, so I'm kindly asking for some pointers.

 

The issue:

New (first) Unraid build. I'm trying to fill up my array with data. Since it is the initial fill (~8TB of data), I do not have any cache disk enabled. I did, however, switch on md_write_method = reconstruct write after reading that it makes sense for my use-case, as I don't care about spin-ups and such while initially loading the data. Unfortunately, I still only see 4-30MB/s, with various ups and downs over longer periods.
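For anyone unfamiliar with the setting: the trade-off behind reconstruct write ("turbo write") can be sketched with a toy model. This is my own illustration, not Unraid's implementation; the function names are made up:

```python
# Toy model (my own sketch, not Unraid internals) of per-stripe disk
# operations for the two parity write methods.

def read_modify_write_ops(num_data_disks: int) -> dict:
    """Default method: read the old data block and old parity, then
    write the new data and new parity. Only two disks are touched,
    but each must alternate between a read and a write per stripe."""
    return {"reads": 2, "writes": 2, "disks_spinning": 2}

def reconstruct_write_ops(num_data_disks: int) -> dict:
    """Turbo write: read all *other* data disks, recompute parity from
    scratch, then write the target disk and the parity disk. Every
    disk spins, but each one streams sequentially in one direction."""
    return {"reads": num_data_disks - 1, "writes": 2,
            "disks_spinning": num_data_disks + 1}
```

With turbo write no single disk has to interleave reads and writes, which is why the array can normally sustain roughly the sequential speed of its slowest member during a bulk load.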

 

Things I've tested/observed:

  • copy from client to array: 4-30MB/s after cache is full, before that ~110MB/s (GBit networking)
  • copy from client to pool (hdd): ~110MB/s
  • copy from pool to array: same as first, with faster initial speed
  • read speeds always ~ 1GB/s maxing out my network
  • write speeds when pre-clearing were as expected per HDD, ranging from 260MB/s down to low three-digit numbers, all as I'd expect from spinning drives
     
  • switching between md_write_methods does not help
  • no errors reported in log, 8800k is bored load-wise
  • "Fix common problems" and "Tips and Tweaks" plugins didn't offer any additional insights
  • I used the same hardware / controllers in Proxmox/TrueNAS Core before without any speed issues - didn't have the 16TB drive back then though.
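In case anyone wants to reproduce the numbers above, this is roughly how I measured sustained write speed. `measure_write_mbps` is a hypothetical helper of my own, not a standard tool; the fsync per chunk keeps the RAM page cache from producing the misleading ~110MB/s burst:

```python
import os
import time

def measure_write_mbps(path: str, total_mb: int = 256, chunk_mb: int = 4) -> float:
    """Write `total_mb` of random data, fsyncing after each chunk so the
    OS page cache cannot hide the real disk speed. Returns MB/s."""
    chunk = os.urandom(chunk_mb * 1024 * 1024)
    start = time.monotonic()
    with open(path, "wb") as f:
        for _ in range(total_mb // chunk_mb):
            f.write(chunk)
            f.flush()
            os.fsync(f.fileno())   # force the chunk onto the physical disk
    elapsed = time.monotonic() - start
    os.remove(path)                # clean up the benchmark file
    return total_mb / elapsed
```

Run it against a path on the array vs. the pool to compare sustained speeds directly, without the network in the picture.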

 

Diagnostic file attached while moving data onto the array with the described slow speeds.

 

Hope someone can offer a pointer or two. Thank you for your time & help!

//Chris

 

 

Some additional screenshots:

 

With md_write_method = reconstruct:

[screenshot]

 

 

With md_write_method = auto:

[screenshot]

 

Copy from array while write activity is still ongoing from previous transfer TO array:

[screenshot]

 

Copy from array with no other write activity to array:

[screenshot]

 

Writing to pool (didn't have Windows screenshot here, but it was maxing my NIC)

[screenshot]

 

sovereign-diagnostics-20230721-2121.zip

9 minutes ago, FitzZZ said:

Thank you! So my situation might totally be "works as designed"?

 

So in that case I should:

  1. move already transferred stuff off
  2. new config w/o parity disk
  3. load data
  4. add parity disk & enable cache pool

Is that correct?


Step 1 is unnecessary.

 

Instead of step 2, simply unassign the parity drive and restart the array.


Just wanted to add this - even though @itimpi's approach is a good lesson, it turned out it was the first HDD in my array that was causing this issue in the first place. I checked them all individually in a single-disk array, and the issue would only come up with that one disk. No SMART errors, nothing. Weird failure, but at least I caught it 🙂 I thought this might be of interest to someone else at some point.

19 minutes ago, JorgeB said:

Most likely because it's an SMR drive.

 

I was only faintly aware of this attribute, but after checking: you are right! This also answers my question of whether it is faulty - which it is not. I just seemingly never saw the issue as I ran them in a RaidZ configuration, which apparently masked the performance issue while writing initially.

 

Thank you @JorgeB - now I don't have to open up the server again and chase shadows ;-) (and can actually use it, as it's a data grave anyway)

1 hour ago, FitzZZ said:

I just seemingly never saw the issue as I ran them in a RaidZ configuration which apparently masked the performance issue while writing initially. […]


My experience is that as long as you are not writing large amounts of data, SMR drives perform fine. It is during the initial load that you overload the cache on the drive and get the severe slowdown. Having said that, there no longer seems to be a significant cost saving in buying SMR drives.
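That persistent-cache behaviour can be illustrated with a toy model (a sketch of mine; the capacity and speed numbers are illustrative, not from any datasheet):

```python
def smr_write_speed(written_gb: float, cache_gb: float = 25.0,
                    cmr_mbps: float = 180.0, smr_mbps: float = 20.0) -> float:
    """Toy model of a drive-managed SMR disk: sequential writes land in
    a fast CMR-style cache zone until it fills, then drop to the slow
    shingled rewrite speed. All numbers are illustrative defaults."""
    return cmr_mbps if written_gb < cache_gb else smr_mbps
```

This matches the pattern in the original post: ~110MB/s for the first stretch of a big copy, then a collapse into the tens of MB/s once the on-disk cache is exhausted.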

