syee

Horrendously slow write speeds

16 posts in this topic

Not sure if this entirely belongs here, but since I'm using it in a virtualized environment, I thought I'd start here because I'm sure someone will eventually point me here anyways.

 

As per my last thread, I've set up a new server running VMware ESXi 6.7.  Hardware is as follows:

  • 3x 8TB WD Red disks for storage
  • 500GB Samsung Evo 860 SSD storing the VM files
  • Just acquired a 2TB Seagate HD used for cache drive

 

As per my other thread, the motherboard is a Gigabyte GA-7PESH2 board, with the onboard SAS controller flashed to IT mode (LSI 2008).

 

Since the 3 WD drives had data on them that I don't want to lose, and I don't really have anywhere else to store it, I've managed to cram enough data onto the other drives to free up 1 drive to "play around with".

 

What I have connected right now is 1x 8TB WD Red drive, the SSD and the Seagate cache drive.  Both the spinning drives have been added to Unraid.

Since I want to ultimately add all 3 drives to Unraid, I'm copying data off the existing drives to the newly formatted drive in Unraid.  However, I'm getting some really low transfer speeds (consistently <10MB/s, and usually around 5MB/s), so copying 8TB of data will literally take a week.  I don't know whether this is due to the virtualization or something I didn't configure correctly.  I'm seeing most people report about 100MB/s, so I'm obviously doing something wrong here.  I'm copying this over a local network - gigabit on the source end, and gigabit on the Unraid server end.  I've switched out a few network cables to verify that it's not a cable issue.

 

The two drives are connected to the SAS controller, which is set to pass-through mode to the Unraid VM.  Using the cache drive doesn't appear to make any difference at all.  (I was running without a cache drive for a few days since I didn't have another physical drive I could use as cache, and went out today and bought this 2TB drive thinking it would speed things up.)  I've read that a single drive with no cache drive would be slow, but I didn't think it would be this slow.

 

Any thoughts or suggestions on what I might be doing wrong?  Logs are attached.

 

 

stinkynas-diagnostics-20190208-2307.zip


That's what I was thinking - hardwired network usually wouldn't be an issue.  In any case, here's the iperf results from the machine I'm copying from to the Unraid server:

 

Connecting to host 192.168.1.13, port 5201
[  4] local 192.168.1.10 port 58446 connected to 192.168.1.13 port 5201
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-1.00   sec  98.9 MBytes   829 Mbits/sec
[  4]   1.00-2.00   sec   104 MBytes   872 Mbits/sec
[  4]   2.00-3.00   sec   109 MBytes   912 Mbits/sec
[  4]   3.00-4.00   sec   110 MBytes   927 Mbits/sec
[  4]   4.00-5.00   sec   101 MBytes   846 Mbits/sec
[  4]   5.00-6.00   sec   109 MBytes   915 Mbits/sec
[  4]   6.00-7.00   sec   111 MBytes   932 Mbits/sec
[  4]   7.00-8.00   sec   103 MBytes   863 Mbits/sec
[  4]   8.00-9.00   sec   105 MBytes   882 Mbits/sec
[  4]   9.00-10.00  sec   111 MBytes   932 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-10.00  sec  1.04 GBytes   891 Mbits/sec                  sender
[  4]   0.00-10.00  sec  1.04 GBytes   891 Mbits/sec                  receiver

iperf Done.

 

Speed seems pretty decent.  The drive is currently hooked up to the PC via an eSATA dock (though this shouldn't be the limiting factor here).

 

I figured the cache drive would rule out parity as well, but alas, the speed with or without cache drive is the same.  Share has been set to use cache disk.


If you get the same speed writing to the cache or to the array (without parity), that rules out the disks.  It could also be virtualization related, but I can't help there.
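One way to run that comparison directly on the server is a quick dd write test against each mount point.  This is just a sketch: /mnt/cache and /mnt/disk1 are Unraid's usual mount points, and the demo call at the bottom uses a temp dir so the script can run anywhere.

```shell
#!/bin/sh
# Time a sync'd write into a given directory (e.g. /mnt/cache or /mnt/disk1).
write_test() {
    # oflag=dsync forces each block to reach the device before dd reports
    # a rate, so the number reflects the disk, not the page cache.
    dd if=/dev/zero of="$1/speedtest.img" bs=1M count=64 oflag=dsync 2>&1 | tail -n1
    rm -f "$1/speedtest.img"
}

# Demo against a temp dir; on Unraid you would call:
#   write_test /mnt/cache
#   write_test /mnt/disk1
write_test "$(mktemp -d)"
```

If the two numbers match and both are low, the bottleneck is upstream of the disks (network, source machine, or the virtualization layer).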


Yeah, it seems to be writing to the cache disk currently, as I can see it filling up.  I also just set the share to use the cache disk only, so that should definitely be writing to the cache disk.  I'm watching the file copy and it's hovering around 2 - 7MB/s copying relatively large files (they're mostly 4GB files from my NVR software).  Says it's going to take 8 hours for 162GB.   :D

 

Copying to a Windows VM on the same server seems pretty snappy (about 100MB/s), so I don't know if it's really all VM related.  Maybe some weird Unraid stuff, or a bit of both...

 

Thanks anyways for taking a peek!  I appreciate it!  


I used the DiskSpeed plugin and got these results.  Looks decent from what I can tell - at least 100MB/s based on the test itself.

 

 

Disk_Speed.PNG


Ah, understood.

I tried running the command the page recommended, and I get the following result:

 

root@STINKYNAS:/dev# dd if=/dev/zero of=/tmp/test1.img bs=1G count=1 oflag=dsync
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 0.748555 s, 1.4 GB/s
 


You are only copying 1GB of data, which is too small an amount.  Try again with bs=10G, for example.

Also, "of" is not your disk share - right now it's pointing at RAM.  Try disk1 instead, for example: of=/mnt/disk1/test1.img

So the whole command for disk1 is as follows: dd if=/dev/zero of=/mnt/disk1/test1.img bs=10G count=1 oflag=dsync


Ah, that makes sense.  My apologies - Linux is a new beast to me, so I'm still trying to get the hang of it.  I was kind of wondering how I'd get 1.4GB/s from a spinning disk.  :D

 

So I tried it again and got the following - I tried tinkering with the file size because of the out-of-memory message:

 

root@STINKYNAS:/# dd if=/dev/zero of=/mnt/disk1/test1.img bs=10G count=1 oflag=dsync
dd: memory exhausted by input buffer of size 10737418240 bytes (10 GiB)
root@STINKYNAS:/# dd if=/dev/zero of=/mnt/disk1/test1.img bs=5G count=1 oflag=dsync
dd: memory exhausted by input buffer of size 5368709120 bytes (5.0 GiB)
root@STINKYNAS:/# dd if=/dev/zero of=/mnt/disk1/test1.img bs=2G count=1 oflag=dsync
0+1 records in
0+1 records out
2147479552 bytes (2.1 GB, 2.0 GiB) copied, 12.3935 s, 173 MB/s
root@STINKYNAS:/# dd if=/dev/zero of=/mnt/disk1/test1.img bs=3G count=1 oflag=dsync
0+1 records in
0+1 records out
2147479552 bytes (2.1 GB, 2.0 GiB) copied, 12.5788 s, 171 MB/s
root@STINKYNAS:/# dd if=/dev/zero of=/mnt/disk1/test1.img bs=4G count=1 oflag=dsync
dd: memory exhausted by input buffer of size 4294967296 bytes (4.0 GiB)
root@STINKYNAS:/#

 

3 hours ago, syee said:

root@STINKYNAS:/# dd if=/dev/zero of=/mnt/disk1/test1.img bs=10G count=1 oflag=dsync
dd: memory exhausted by input buffer of size 10737418240 bytes (10 GiB)

I'd change the command to use bs=1G count=10.
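For context on those errors: dd allocates its transfer buffer at the full bs size (the "memory exhausted by input buffer" message is reporting exactly that), so bs=10G asks for a 10 GiB buffer, while bs=1G count=10 writes the same 10 GiB total through a 1 GiB buffer.  A scaled-down illustration using a temp file so it runs anywhere:

```shell
#!/bin/sh
# count x bs sets the total bytes written; bs alone sets the RAM buffer size.
f=$(mktemp)
dd if=/dev/zero of="$f" bs=1M count=16 oflag=dsync 2>&1 | tail -n1  # 16 MiB total via a 1 MiB buffer
dd if=/dev/zero of="$f" bs=16M count=1 oflag=dsync 2>&1 | tail -n1  # 16 MiB total via a 16 MiB buffer
rm -f "$f"
```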

 

In parallel to that, I'd do a read test from the remote (networked) file into /tmp/blah (i.e. into the RAM disk).  Use a reasonable file size (this filesystem is in RAM and you don't want to exhaust it), but a few hundred MB up to 1GB will give you a rough estimate of the incoming stream speed.

 

With these two, you should quickly be able to tell which part of the copy process is the culprit, and zoom in on it.


Thanks!  I get the following with the 1GB block size and a count of 10:

root@STINKYNAS:/# dd if=/dev/zero of=/mnt/disk1/test1.img bs=1G count=10 oflag=dsync
10+0 records in
10+0 records out
10737418240 bytes (11 GB, 10 GiB) copied, 62.8169 s, 171 MB/s
 

Just so I'm understanding correctly, this test (the one above) would just be testing disk to disk transfer?  (so internal to that disk?)  The numbers seem to be pretty consistent with the other test numbers.

 

I'll have to look into how to do a read test and give that a try.

11 minutes ago, syee said:

10737418240 bytes (11 GB, 10 GiB) copied, 62.8169 s, 171 MB/s

This looks to be a decent write speed (quite good actually).

11 minutes ago, syee said:

Just so I'm understanding correctly, this test (the one above) would just be testing disk to disk transfer?  (so internal to that disk?)  The numbers seem to be pretty consistent with the other test numbers.

In fact that test is testing writes to the disk only.  The source of the copy is /dev/zero, which is a kernel device generating a stream of binary zeroes, which are then written to the disk file you specified.

 

To test the read part, you could either manually mount something from a different OS on /mnt/test and then copy from it to /tmp/blah, or use SCP to copy directly from a remote file to /tmp/blah.  It won't be an extremely accurate test, but it will give you a pretty good idea of the performance limits.
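A runs-anywhere sketch of that read-side test, using a locally generated scratch file as a stand-in for the remote source (in practice you would scp from the source machine into /tmp instead; the stand-in file here is purely illustrative):

```shell
#!/bin/sh
# Stand-in for the remote file: 32 MiB of scratch data.
src=$(mktemp)
dd if=/dev/zero of="$src" bs=1M count=32 2>/dev/null
# Timed copy into /tmp, which on Unraid is RAM-backed (tmpfs), so the
# destination cannot be the bottleneck.
dd if="$src" of=/tmp/blah bs=1M 2>&1 | tail -n1
rm -f "$src" /tmp/blah
```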


Man, I'm dumb.  I'm 0/2 so far.  The issue ended up being the eSATA connector on the motherboard of the PC I was transferring the data from (which was hooked up to a dock where I had temporarily plugged in the hard drive to transfer the data over to the NAS).

 

I suspect the eSATA connection is faulty - I plugged in a USB 3.0 cable and used that instead, and I'm getting the 100MB/s that I was expecting.  Thanks guys for all the help you provided.  I actually learned a bit from this experience, so it wasn't all for nothing.  :)


Glad to see you sorted it out :)  At least you know your Unraid disk is just fine :)

