Slow READS from all drives? + Script to test drive speed



Ok, I am finally preparing to transfer my unassigned devices over to the array, so naturally doing backups and all that jazz in preparation.

 

I have noticed that the performance is horrible during these backups.

 

Generally the whole backup process takes me ~1 hour, but this time it took 30 mins to 1 hour JUST to scan the files on some of the drives.

 

Then the copy speeds were, at best, half of what they should be and what they were on Windows.

 

Watching netdata, reads seem to be the bottleneck; here is an example from a copy that is happening now:

 

[screenshot: netdata disk read/write graph during the copy]

 

The reads (using krusader) are coming from the cache pool, which is capable of over 1 GB/s, yet they are capped at ~60 MB/s, which is basically the highest I saw with any of my copies. Meanwhile, the writes go out in bursts at the expected speed of ~110-120 MB/s.

 

It took me an entire day to do a backup that normally would have taken a bit over an hour.

 

I ran a speed test using dd and the results from the cache are as expected, even while this copy is happening:

[screenshots: dd speed test results from the cache pool]

 

Here is the script I put together to test drive speeds. Does anyone know how to run multiple dd commands in parallel? I would like to test all the drives at the same time to see the hardware limits, even though I know they are not the issue. (A rough sketch of one way to parallelize it is below, after the script.)

 

#!/bin/bash
#description=Simple script for testing cache read and write speed. It can be used for other drives as well by adjusting the PATH_TO_TEST variable in the script.
#argumentDescription=Set size of test here in MB
#argumentDefault=1024

PATH_TO_TEST=/mnt/cache/tempfile
SIZE=${1:-1024}   # size of the test in MB; falls back to 1024 if no argument is passed

echo
echo "Testing write speed to $PATH_TO_TEST"
dd if=/dev/zero of="$PATH_TO_TEST" bs=1M count="$SIZE"; sync

echo
echo "Testing read speed with the read buffer (page cache) enabled"
dd if="$PATH_TO_TEST" of=/dev/null bs=1M count="$SIZE"

echo
echo "Clearing the read buffer"
sync; /sbin/sysctl -w vm.drop_caches=3

echo
echo "Testing read speed again with the buffer cleared"
dd if="$PATH_TO_TEST" of=/dev/null bs=1M count="$SIZE"

echo
echo "Deleting the test file"
rm -v "$PATH_TO_TEST"
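
Something like this is what I had in mind for the parallel version, in case it helps anyone. It is just an untested sketch that backgrounds each dd with '&' and waits for them all; the /mnt/disk1 - /mnt/disk3 paths are only examples.

#!/bin/bash
# Untested sketch: run the read test on several drives at once by
# backgrounding each dd with '&' and waiting for all of them to finish.
# The drive paths below are examples only, adjust to match your system.
DRIVES="/mnt/disk1 /mnt/disk2 /mnt/disk3"
SIZE=${1:-1024}   # test size in MB

# Write a test file to each drive first (sequentially, so the writes don't fight each other)
for d in $DRIVES; do
    dd if=/dev/zero of="$d/tempfile" bs=1M count="$SIZE"
done
sync

# Drop the page cache so the reads come from disk rather than RAM
/sbin/sysctl -w vm.drop_caches=3

# Read from all drives at the same time; 'wait' blocks until every background dd is done
for d in $DRIVES; do
    dd if="$d/tempfile" of=/dev/null bs=1M count="$SIZE" &
done
wait

# Clean up the test files
for d in $DRIVES; do
    rm -f "$d/tempfile"
done

The dd output from the parallel reads will be interleaved, so it may take a moment to match each result to its drive.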

 

Is this normal? I have tried copying files with krusader, with a Windows VM using SMB, and with a Windows VM using drive passthrough. All had more or less the same results (although file scanning was faster with passthrough, it was still slower than SMB scanning was on Windows).

 

Sadly, this is dealbreaker-level slow for me. I can't spend a whole day babysitting a backup process when I know it can be done in an hour.

5 hours ago, ChatNoir said:

There is a "Follow" button at the top right of the page, just above "Reply to this post".

Regarding your problem, I am sorry I cannot help you.

Thanks, I knew there had to be another way. Anytime I see "Follow" I just see some kind of social media button and my brain blocks it out, lol.

 

To add a bit more info to the above: the steady-state speeds are slow as shown, but the real time sink is the small files.

 

It is not uncommon to need to update 100-200k+ small files / text documents during a backup. On Windows it would generally handle hundreds of files/second. When running the backup yesterday I could watch the numbers ticking up at maybe 10 files/second. This is what really slowed things to a crawl.
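
For reference, here is a rough, untested sketch of how small-file throughput on a given path could be measured; the test directory is just an example.

#!/bin/bash
# Untested sketch: create and read back a pile of tiny files and time each phase.
# TEST_DIR is an example path only, point it at whatever drive you want to test.
TEST_DIR=/mnt/cache/smallfile_test
COUNT=10000

mkdir -p "$TEST_DIR"

echo "Creating $COUNT small files..."
time for i in $(seq 1 "$COUNT"); do
    echo "test data $i" > "$TEST_DIR/file_$i.txt"
done
sync

# Drop the page cache so the read pass hits the disk
/sbin/sysctl -w vm.drop_caches=3

echo "Reading $COUNT small files back..."
time cat "$TEST_DIR"/file_*.txt > /dev/null

echo "Cleaning up..."
rm -rf "$TEST_DIR"

Dividing COUNT by the reported times gives a rough files/second figure to compare against the ~10/second I was seeing.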

 

It is not a drive or hardware limitation, since these are the same hardware and drives I was using on Windows.


Ok, been doing some more testing and running into a really strange issue.

 

The issue seems to be tied to NTFS file systems. The strange part, and why I didn't consider it before, is that if I read or write directly to the NTFS drive (mounted with UD), the speeds are as expected.

 

The issue also goes away if both ends of the transfer are btrfs file systems (locally with krusader, anyway).

 

The really strange part is when I copy from a BTRFS or XFS file system to an NTFS file system.

 

The reads will be really slow, as seen above, but the writes will burst out through the buffer at full speed. Somehow the NTFS file system is causing the reads to be very slow for no apparent reason, even from BTRFS or XFS formatted drives.

 

If I reverse it and go from the NTFS drive to the BTRFS drive, everything is fine as well.

 

I can't explain it.
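
If anyone wants to try reproducing it, here is a rough, untested sketch that times the same copy in both directions from the command line; the NTFS and cache mount points below are examples only.

#!/bin/bash
# Untested sketch: time a 1 GB copy from the btrfs cache pool to an NTFS drive
# mounted with Unassigned Devices, and then back the other way.
# Both mount points are examples, adjust to the real paths.
NTFS=/mnt/disks/ntfs_drive
CACHE=/mnt/cache

copy_and_sync() {
    cp "$1" "$2"
    sync   # include the flush to disk in the timing
}

# Create a 1 GB test file on the cache pool (urandom avoids compression skewing the result)
dd if=/dev/urandom of="$CACHE/testfile" bs=1M count=1024
sync; /sbin/sysctl -w vm.drop_caches=3

echo "btrfs cache -> NTFS:"
time copy_and_sync "$CACHE/testfile" "$NTFS/testfile"

sync; /sbin/sysctl -w vm.drop_caches=3

echo "NTFS -> btrfs cache:"
time copy_and_sync "$NTFS/testfile" "$CACHE/testfile_back"

# Clean up the test files
rm -f "$CACHE/testfile" "$CACHE/testfile_back" "$NTFS/testfile"

If the slow direction really is btrfs/XFS to NTFS, the first timing should come out noticeably worse than the second.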

 

 

Before I put my data at risk and transfer everything over to a new format (I will only have 1 copy of the data during the transfer), can others confirm that, using native file systems, the speeds should easily max out the drives' bandwidth?

 

I am sure others have to scan/back up many small files as well; any comments on that speed?

 

I wish there was a good GUI-based Linux sync program, but I have yet to find one.

