
Slow writes to Array


Solved by JorgeB


I have an array with 10 disks: 8 identical 2.5" SATA drives for data and 2 (almost) identical 3.5" SAS drives for parity. All are 2TB.

My parity was built at speeds ranging from 8MB/s to 65MB/s.

Now I'm copying large files from a cache disk (SSD) to my array using mover, but I'm getting a max write speed of 2MB/s, which is pretty slow compared to the 65MB/s I was getting at parity build.

 

I don't know where to look to figure out what is causing my writes to be slow. Is there any methodology I can follow?

  • Outside of the array, each disk works fine with max speeds up to 140MB/s (tested with a drive dock hooked via USB 3 to my desktop PC)
  • I know the write speed in an Unraid array is slower because of the parity read-modify-write overhead, but as explained, I was getting 65MB/s at parity build.
  • After a reboot, speeds are the same (turning it off and on again didn't help :o )
  • With all Dockers and VMs turned off, speeds are the same (appdata and system are on a separate cache pool, so they should have little to no impact)

 

All suggestions are welcome

 

 

3 hours ago, JorgeB said:

2TB 2.5" drives will be SMR. What brand/model? Some SMR models, particularly by Seagate, perform decently; SMR drives by Toshiba or WD can have several minutes writing below 5MB/s, even when doing sequential writes.

It says SMR in the description, so basically I'm screwed.

Now I know why I got the drives for free 😛

Are there solutions to speed these drives up?

1 hour ago, HisEvilness88 said:

Are there solutions to speed these drives up?

Not that I'm aware of, but if you use your array like the vast majority of people, it shouldn't hurt you too badly. Once you get your data loaded, read speeds should be fine, and once your writes are limited to daily updates, you can let mover take care of things when you're not actively using the array.

 

Typical Unraid use is WORM-style (write once, read many), with daily updates being relatively small.


I have done some reading on SMR drives and I had an idea.

I have scheduled mover to start moving every two hours (even hours).

I made another script stopping the mover every odd hour.

Hoping this will give the drives time to sort stuff out, as they would during expected downtime, which should bump up overall write speeds.

 

Might need some timing tweaks, but this might be a good starting point.
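A minimal sketch of that alternating schedule as cron entries. This assumes the mover script lives at /usr/local/sbin/mover and that your Unraid version accepts `start`/`stop` arguments; both are worth verifying on your own system before relying on it:

```shell
# Config sketch, not a tested setup:
# kick off mover on even hours, ask it to stop on odd hours,
# giving the SMR drives idle time to reshingle their cached writes.
0 0-22/2 * * * /usr/local/sbin/mover start
0 1-23/2 * * * /usr/local/sbin/mover stop
```

On Unraid you'd typically put these in a User Scripts schedule or the go file rather than editing crontab directly, since the flash-backed config survives reboots.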

 

 

15 hours ago, HisEvilness88 said:

scheduled mover every two hours

It is impossible to move from fast cache to slow array as fast as you can write to fast cache. Mover is intended for idle time. If you have a lot of idle time then scheduling mover more often might make some sense. Where people get in trouble is thinking they can keep writing to cache and hoping mover can keep up. It won't.

 

16 hours ago, HisEvilness88 said:

copying a lot of data at the moment because of a replacement

 

Not sure what you mean there. The usual way to replace drives doesn't involve copying anything. Rebuilding to replacement is the whole point of parity.

7 hours ago, trurl said:

It is impossible to move from fast cache to slow array as fast as you can write to fast cache. Mover is intended for idle time. If you have a lot of idle time then scheduling mover more often might make some sense. Where people get in trouble is thinking they can keep writing to cache and hoping mover can keep up. It won't.

I know, but apparently SMR drives need some time "to take a breath" when a lot of data is written to them.

I did notice higher write speeds (+/- 18MB/s) just after such a pause. It helps a little.

 

7 hours ago, trurl said:

Not sure what you mean there. The usual way to replace drives doesn't involve copying anything. Rebuilding to replacement is the whole point of parity.

New server, new config, new disks.

Data is moving from old to new so the old one can retire.

22 minutes ago, HisEvilness88 said:

To be technically superior: rsync 

As long as the source is still there, whatever works for you. rsync would be my choice; a binary verify after the copy is a must when doing things like this. I've seen many cases over the years where people thought moving the data was the best option and ended up with corrupt data and no way to recover or verify.

