
(SOLVED) Slow copy speeds. Diags attached


intoran


Recently moved to a new server.  Using Unassigned Devices to mount disks from the old server one at a time, then using mc to copy the files to shares on the new array.

Speeds are hovering between 35-45MB/s.  That seems way too slow considering these HGST 4TB drives are faster than the old drives, and I was seeing faster speeds on the old server.  I have already enabled turbo write, I think.
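
For reference, this is roughly how turbo write can be checked and forced on from the command line.  Just a sketch from memory, so the exact value names may differ on your version; double-check against Settings, Disk Settings.

# Show the current write method reported by unRAID's md driver
grep md_write_method /proc/mdstat

# Force reconstruct write (turbo write); if I remember right, 0 = read/modify/write, 1 = reconstruct write
/usr/local/sbin/mdcmd set md_write_method 1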

 

Any help would be great, this is frustrating and I have a lot of data to copy into the new array.

 

Link to comment

Your syslog is full of these errors, repeating every second:

Jun 23 18:48:32 Tower kernel: md: do_drive_cmd: disk3: ATA_OP e0 ioctl error: -5
Jun 23 18:48:32 Tower kernel: mdcmd (8598): spindown 4
Jun 23 18:48:32 Tower emhttpd: error: mdcmd, 2639: Input/output error (5): write
Jun 23 18:48:32 Tower kernel: md: do_drive_cmd: disk4: ATA_OP e0 ioctl error: -5
Jun 23 18:48:32 Tower kernel: mdcmd (8599): spindown 5
Jun 23 18:48:32 Tower emhttpd: error: mdcmd, 2639: Input/output error (5): write
Jun 23 18:48:32 Tower kernel: md: do_drive_cmd: disk5: ATA_OP e0 ioctl error: -5
Jun 23 18:48:33 Tower emhttpd: error: mdcmd, 2639: Input/output error (5): write

Not quite sure what to make of them as I don't think I've seen them before, but it appears that unRAID is trying to spin down the drives and hitting an error.
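
If you want to poke at it by hand, something along these lines would show whether the drives accept the ATA standby command (the 0xe0 op in those log lines) or need the SCSI equivalent.  Just a sketch; sdX is a placeholder for one of your array devices.

# ATA standby-immediate, the same command unRAID is issuing; SAS drives often reject it
hdparm -y /dev/sdX

# SCSI/SAS equivalent via sdparm: spin the drive down, then back up
sdparm --command=stop /dev/sdX
sdparm --command=start /dev/sdX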

Link to comment

I've removed all but one SAS drive on the M1015 and one SATA drive on the onboard controller.  Still slow, around 40-45MB/s.

Copying to one SATA drive on the M1015 is around 80-100MB/s.

I thought SAS was supposed to be faster.
Is there anything special that needs to be done on unRAID or the controllers when using SAS vs SATA?

Also, I flashed the controller to the latest firmware version I could find, which was 20.  No help.  Still at 45MB/s.
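
In case it helps anyone following along, this is roughly how I compared the two drives from the console.  Rough sketch only; sdX and sdY stand in for the SAS and SATA disks, and /mnt/disk1 is just an example target.

# List devices with their per-device queue depth and other attributes
lsscsi -l

# Queue depth the kernel is using for each drive
cat /sys/block/sdX/device/queue_depth
cat /sys/block/sdY/device/queue_depth

# Crude sequential write test (writes a 1GB file straight to the disk mount)
dd if=/dev/zero of=/mnt/disk1/testfile bs=1M count=1024 oflag=direct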
 

Link to comment

Solved.  Maybe...

 

So by default unRAID has these SAS drives' NCQ queue depth set to 1.  Changing that to 32 or higher results in sustained writes of over 115MB/s, which is more like what I was expecting from these drives.
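
For anyone who hits the same thing, this is roughly what that change looks like from the shell.  Sketch only; replace sdX with each SAS device, and note the value does not survive a reboot unless you re-apply it from the go file or a script.

# Current command queue depth for the device
cat /sys/block/sdX/device/queue_depth

# Bump it to 32 for testing
echo 32 > /sys/block/sdX/device/queue_depth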

 

Anyone know why it would be set that way?  This is on a new unRAID trial setup with no previous config; I wanted to test before moving my license over.

Link to comment

The default is there because it works for most other setups.  Your hardware just happens to need non-default values.  There is a thread somewhere with all sorts of tests and performance numbers and lengthy discussion about this setting and what setups need different values. ... But because the built-in search on the forums still sucks, I can't direct you to it.

Link to comment
40 minutes ago, BRiT said:

The default is there because it works for most other setups.  Your hardware just happens to need non-default values.  There is a thread somewhere with all sorts of tests and performance numbers and lengthy discussion about this setting and what setups need different values. ... But because the built-in search on the forums still sucks, I can't direct you to it.

 

Is this the one?

 

    https://lime-technology.com/forums/topic/3038-write-performance-within-the-unraid-server/

 

 

I use Google to search for unRAID forum topics.  These are the search terms I used to find this: unraid SAS drives with NCQ.  Google does a much better job than the default forum search engine!

Link to comment

Unfortunately not.  It's not about NCQ but about the command queue depth set in the tunables under Settings, Disk Settings.

 

Field name: "Tunable (nr_requests):"

Help text: "This defines the nr_requests device driver setting for all array devices."

 

For me the setting is at 128, and it says that's the default.

 

Of course, perhaps I'm thinking of the wrong setting, but that's the field I had in mind.  It didn't exist previously, and there was a lot of discussion around it; it seemed to 'break' at some specific point in the newer Linux kernels, and many were reverting to older versions to get better performance.
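
For completeness, the value that field controls can also be read and changed per device from the shell for testing.  Rough sketch; sdX is a placeholder and the change is not persistent across reboots.

# Block-layer request queue size the "Tunable (nr_requests)" field refers to
cat /sys/block/sdX/queue/nr_requests

# Try a different value temporarily
echo 128 > /sys/block/sdX/queue/nr_requests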

 

Link to comment

Archived

This topic is now archived and is closed to further replies.
