
Slow read and write SMB speeds to SSD pool



Hi,

 

I've created a separate pool of two 2TB SSDs to store my Steam library on, using the default Btrfs file system.

 

drive-pool.jpg

 

However, after making sure the share only uses this cache pool and not the main array, it has really slow speeds of around 36MB/s write and 100MB/s read.

 

lan-speed-test.jpg 

 

I've checked my SMB settings and enabled multichannel, but there has been no improvement.
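
For reference, this is roughly what I added under Settings > SMB > SMB Extras to turn it on (a minimal sketch; multichannel mainly helps with multiple NICs or RSS-capable adapters, so on a single gigabit link it may do nothing):

[global]
server multi channel support = yes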

 

I'm fairly certain the problem is SMB, as running dd locally shows write speeds of 186MB/s:

 

~# dd if=/dev/zero of=/mnt/user/Games-Windows/test.img bs=1G count=1 oflag=dsync
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 5.77645 s, 186 MB/s
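
For completeness, the matching read test I can run looks like this (iflag=direct bypasses the page cache so it measures the SSDs rather than RAM; test.img is the file written above):

~# dd if=/mnt/user/Games-Windows/test.img of=/dev/null bs=1G count=1 iflag=direct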

 

Is there anything I can do to improve these speeds over SMB? 

 

On 9/28/2023 at 9:18 AM, JorgeB said:

Assuming you have gigabit, the read speeds are normal but the writes are low. Try transferring an actual large file using Windows Explorer. Also note that those SSDs are QLC; write speed will slow down a lot once the small SLC cache is full, to around 80MB/s IIRC.

 

I know they're not the best, but being SSDs I would still expect them to perform better than 30-40MB/s write...

 

I've tried copying a 10GB file and I get around the same speeds.

 

On 9/28/2023 at 9:28 AM, itimpi said:

If you are using an Unraid 6.12.x release, have you enabled the Exclusive Share option for that share to bypass the overhead of the FUSE layer?

 

I have enabled this, but it hasn't improved the speeds, unfortunately.
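
To be sure the FUSE layer is fully out of the picture, I can also write to the pool mount directly instead of going through /mnt/user (the pool name below is just a placeholder for whatever mine is called):

~# dd if=/dev/zero of=/mnt/ssd-pool/Games-Windows/test.img bs=1G count=1 oflag=dsync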

14 hours ago, JorgeB said:

Post the results of a single stream iperf test in both directions.

 

Looks fine to me for 1Gbps?

 

Desktop to Unraid server

 

C:\iperf3>iperf3.exe -c 10.5.0.5
Connecting to host 10.5.0.5, port 5201
[  4] local 10.5.0.128 port 57907 connected to 10.5.0.5 port 5201
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-1.00   sec   113 MBytes   951 Mbits/sec
[  4]   1.00-2.00   sec   113 MBytes   949 Mbits/sec
[  4]   2.00-3.00   sec   113 MBytes   949 Mbits/sec
[  4]   3.00-4.00   sec   113 MBytes   948 Mbits/sec
[  4]   4.00-5.00   sec   113 MBytes   949 Mbits/sec
[  4]   5.00-6.00   sec   113 MBytes   949 Mbits/sec
[  4]   6.00-7.00   sec   113 MBytes   949 Mbits/sec
[  4]   7.00-8.00   sec   113 MBytes   948 Mbits/sec
[  4]   8.00-9.00   sec   113 MBytes   949 Mbits/sec
[  4]   9.00-10.00  sec   113 MBytes   949 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-10.00  sec  1.10 GBytes   949 Mbits/sec                  sender
[  4]   0.00-10.00  sec  1.10 GBytes   949 Mbits/sec                  receiver

iperf Done.

 

Unraid server to desktop
 

root@Cobra:/# iperf3 -c 10.5.0.128
Connecting to host 10.5.0.128, port 5201
[  5] local 10.5.0.5 port 57730 connected to 10.5.0.128 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec   111 MBytes   932 Mbits/sec  1214    259 KBytes       
[  5]   1.00-2.00   sec   113 MBytes   950 Mbits/sec   12    254 KBytes       
[  5]   2.00-3.00   sec  89.1 MBytes   748 Mbits/sec  328    259 KBytes       
[  5]   3.00-4.00   sec   113 MBytes   950 Mbits/sec   18    254 KBytes       
[  5]   4.00-5.00   sec   113 MBytes   949 Mbits/sec   12    257 KBytes       
[  5]   5.00-6.00   sec   113 MBytes   949 Mbits/sec   19    257 KBytes       
[  5]   6.00-7.00   sec   113 MBytes   949 Mbits/sec    6    257 KBytes       
[  5]   7.00-8.00   sec   113 MBytes   949 Mbits/sec    6    259 KBytes       
[  5]   8.00-9.00   sec   113 MBytes   949 Mbits/sec    0    257 KBytes       
[  5]   9.00-10.00  sec   113 MBytes   949 Mbits/sec    0    254 KBytes       
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  1.08 GBytes   928 Mbits/sec  1615             sender
[  5]   0.00-10.00  sec  1.08 GBytes   926 Mbits/sec                  receiver
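
The only oddity I can see is the retransmits in the server-to-desktop direction (the 748 Mbits/sec dip). As a rough check from the desktop end, iperf3's -R flag reverses the direction (the server sends while the desktop receives) and -P 4 runs four parallel streams:

C:\iperf3>iperf3.exe -c 10.5.0.5 -R
C:\iperf3>iperf3.exe -c 10.5.0.5 -P 4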

 

20 hours ago, JorgeB said:

You have an NVMe-based pool, and you also have fast enough disks that should write at 100MB/s+ with turbo write enabled; try writing to one or both and see if performance is better.

 

Because it's a pool I only have the one mount point, so I have to write to both drives. Since switching to RAID 0, if I dd to the pool the speed is great at ~685MB/s:

 

root@Cobra:~# dd if=/dev/zero of=/mnt/user/Games-Windows/test.img bs=1G count=1 oflag=dsync
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 1.56801 s, 685 MB/s

 

I think it must be something to do with SMB, as the drive speeds are fine and I have tested network throughput with iperf and that was fine too.

12 minutes ago, JorgeB said:

Not sure I follow; I asked you to test transferring directly to the array with turbo write enabled.

Sorry, when you said "try writing to one or both and see if performance is better", I assumed you meant the individual disks, and then explained that I can't write to the individual disks because they're configured as a pool.
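
Just so we're testing the same thing, I assume a local write to a single array disk with turbo write on would look like this (disk1 is a placeholder for whichever array disk):

root@Cobra:~# dd if=/dev/zero of=/mnt/disk1/test.img bs=1G count=1 oflag=dsync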

 

I have tried again writing via SMB with turbo write enabled, and I still get around 70MB/s.
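
In case Explorer's buffering is a factor, I could also try an unbuffered copy from the Windows side using robocopy's /J switch (the source path and filename are placeholders for my actual test file):

C:\>robocopy C:\temp \\10.5.0.5\Games-Windows test10gb.bin /J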
