sonic_reaction Posted September 28, 2023

Hi, I've created a separate pool of two 2TB SSDs to store my Steam library on, using the default BTRFS file system. However, after making sure the share only uses this cache pool and not the main array, it has really slow speeds of around 36MB/s write and 100MB/s read. I've checked my SMB settings and enabled multichannel, but there has been no improvement. I'm fairly certain the problem is SMB, as running dd shows write speeds of 186MB/s:

~# dd if=/dev/zero of=/mnt/user/Games-Windows/test.img bs=1G count=1 oflag=dsync
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 5.77645 s, 186 MB/s

Is there anything I can do to improve these speeds over SMB?
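In case it's useful, the way I understand it you can confirm multichannel is actually being negotiated from the Windows side by running these in PowerShell during an active transfer (cmdlets from Microsoft's built-in SmbShare module):

Get-SmbConnection
Get-SmbMultichannelConnection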
JorgeB Posted September 28, 2023

Assuming you have gigabit, read speeds are normal but writes are low. Try transferring an actual file, a large one, using Windows Explorer. Also note that those SSDs are QLC; write speed will slow down a lot after the small SLC cache is full, to around 80MB/s IIRC.
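If you want to see the sustained speed once the SLC cache is exhausted, one rough local test is to write more data than the cache can plausibly absorb while bypassing the page cache (the 40GB size below is only a guess; adjust for your drives):

dd if=/dev/zero of=/mnt/user/Games-Windows/slc-test.img bs=1M count=40960 oflag=direct
rm /mnt/user/Games-Windows/slc-test.img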
itimpi Posted September 28, 2023

If you are using an Unraid 6.12.x release, have you enabled the Exclusive Share option for that share to bypass the overheads of the FUSE layer?
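A quick way to measure the FUSE overhead itself is to repeat your dd test against the pool's direct mount point and compare (replace <poolname> with your pool's actual name):

dd if=/dev/zero of=/mnt/user/Games-Windows/test.img bs=1G count=1 oflag=dsync
dd if=/dev/zero of=/mnt/<poolname>/Games-Windows/test.img bs=1G count=1 oflag=dsync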
sonic_reaction Posted October 3, 2023

On 9/28/2023 at 9:18 AM, JorgeB said: Assuming you have gigabit, read speeds are normal but writes are low. Try transferring an actual file, a large one, using Windows Explorer. Also note that those SSDs are QLC; write speed will slow down a lot after the small SLC cache is full, to around 80MB/s IIRC.

I know they're not the best, but I would still expect them to perform better than 30-40MB/s write for SSDs... I've tried copying a 10GB file and I get around the same speeds.

On 9/28/2023 at 9:28 AM, itimpi said: If you are using an Unraid 6.12.x release, have you enabled the Exclusive Share option for that share to bypass the overheads of the FUSE layer?

I have enabled this but it hasn't improved the speeds, unfortunately.
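If I've understood the 6.12 release notes correctly, an exclusive share should appear in /mnt/user as a symlink pointing straight at the pool rather than as a FUSE directory, so it can be verified with something like:

ls -la /mnt/user/
# expecting something like: Games-Windows -> /mnt/<poolname>/Games-Windows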
JorgeB Posted October 3, 2023

Post the results of a single-stream iperf test in both directions.
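For example, with iperf3 -s running as the server on the Unraid box, from the Windows client (-R reverses the direction so the server sends; IP taken from your setup):

iperf3.exe -c 10.5.0.5
iperf3.exe -c 10.5.0.5 -R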
sonic_reaction Posted October 3, 2023

14 hours ago, JorgeB said: Post the results of a single-stream iperf test in both directions.

Looks fine to me for 1Gbps?

Desktop to Unraid server

C:\iperf3>iperf3.exe -c 10.5.0.5
Connecting to host 10.5.0.5, port 5201
[  4] local 10.5.0.128 port 57907 connected to 10.5.0.5 port 5201
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-1.00   sec   113 MBytes   951 Mbits/sec
[  4]   1.00-2.00   sec   113 MBytes   949 Mbits/sec
[  4]   2.00-3.00   sec   113 MBytes   949 Mbits/sec
[  4]   3.00-4.00   sec   113 MBytes   948 Mbits/sec
[  4]   4.00-5.00   sec   113 MBytes   949 Mbits/sec
[  4]   5.00-6.00   sec   113 MBytes   949 Mbits/sec
[  4]   6.00-7.00   sec   113 MBytes   949 Mbits/sec
[  4]   7.00-8.00   sec   113 MBytes   948 Mbits/sec
[  4]   8.00-9.00   sec   113 MBytes   949 Mbits/sec
[  4]   9.00-10.00  sec   113 MBytes   949 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-10.00  sec  1.10 GBytes   949 Mbits/sec            sender
[  4]   0.00-10.00  sec  1.10 GBytes   949 Mbits/sec            receiver

iperf Done.

Unraid server to desktop

root@Cobra:/# iperf3 -c 10.5.0.128
Connecting to host 10.5.0.128, port 5201
[  5] local 10.5.0.5 port 57730 connected to 10.5.0.128 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec   111 MBytes   932 Mbits/sec  1214    259 KBytes
[  5]   1.00-2.00   sec   113 MBytes   950 Mbits/sec    12    254 KBytes
[  5]   2.00-3.00   sec  89.1 MBytes   748 Mbits/sec   328    259 KBytes
[  5]   3.00-4.00   sec   113 MBytes   950 Mbits/sec    18    254 KBytes
[  5]   4.00-5.00   sec   113 MBytes   949 Mbits/sec    12    257 KBytes
[  5]   5.00-6.00   sec   113 MBytes   949 Mbits/sec    19    257 KBytes
[  5]   6.00-7.00   sec   113 MBytes   949 Mbits/sec     6    257 KBytes
[  5]   7.00-8.00   sec   113 MBytes   949 Mbits/sec     6    259 KBytes
[  5]   8.00-9.00   sec   113 MBytes   949 Mbits/sec     0    257 KBytes
[  5]   9.00-10.00  sec   113 MBytes   949 Mbits/sec     0    254 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  1.08 GBytes   928 Mbits/sec  1615         sender
[  5]   0.00-10.00  sec  1.08 GBytes   926 Mbits/sec               receiver
JorgeB Posted October 4, 2023

Yes, that looks fine. Post diagnostics so we can see the rest of the hardware and array config, and whether it can be used for testing.
sonic_reaction Posted October 5, 2023

Sure, attached. Just to update: I noticed the BTRFS pool was set to raid1. I've rebalanced to raid0, as this pool only holds games, and the write speed has now gone up to 70MB/s, which is better, but I would still expect it to be higher.

cobra-diagnostics-20231005-0830.zip
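For reference, on btrfs the profile conversion is an online balance with a convert filter; something like this (pool mount point is a placeholder, and -dconvert changes the data profile only):

btrfs balance start -dconvert=raid0 /mnt/<poolname>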
JorgeB Posted October 5, 2023

You have an NVMe-based pool, and you also have array disks fast enough that they should write at 100MB/s+ with turbo write enabled; try writing to one or both and see if performance is better.
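(Turbo write is the "reconstruct write" method under Settings > Disk Settings > Tunable (md_write_method). If I remember correctly it can also be toggled from the console, but treat the exact values as from memory:

/usr/local/sbin/mdcmd set md_write_method 1    # reconstruct write (turbo)
/usr/local/sbin/mdcmd set md_write_method 0    # back to read/modify/write
)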
sonic_reaction Posted October 6, 2023

20 hours ago, JorgeB said: You have an NVMe-based pool, and you also have array disks fast enough that they should write at 100MB/s+ with turbo write enabled; try writing to one or both and see if performance is better.

Because it's a pool I only have the one mount point, so I have to write to both drives. If I dd to the pool, since switching to raid0 the speed is great at ~600MB/s:

root@Cobra:~# dd if=/dev/zero of=/mnt/user/Games-Windows/test.img bs=1G count=1 oflag=dsync
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 1.56801 s, 685 MB/s

I think it must be something to do with SMB, as the drive speeds are fine and I have tested network throughput with iperf and that was fine too.
sonic_reaction Posted October 6, 2023

I've also tried enabling disk shares and running write and read tests that way, and I get the same speeds of ~70MB/s writes. The common factor is SMB.
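(For anyone else debugging this: running smbstatus on the server during a transfer shows the negotiated SMB dialect and whether signing or encryption is active on the session, either of which can cost a lot of throughput:

smbstatus
)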
JorgeB Posted October 6, 2023

38 minutes ago, sonic_reaction said: Because it's a pool I only have the one mount point, so I have to write to both drives.

Not sure I follow; I asked you to test transferring directly to the array with turbo write enabled.
sonic_reaction Posted October 6, 2023

12 minutes ago, JorgeB said: Not sure I follow; I asked you to test transferring directly to the array with turbo write enabled.

Sorry, when you said "try writing to one or both and see if performance is better" I assumed you meant the individual pool disks, which is why I explained that I can't write to the individual disks when they're configured as a pool. I have now tried writing via SMB with turbo write enabled and I still get around 70MB/s.
JorgeB Posted October 6, 2023

That suggests some networking issue, despite the iperf results; it could also be a problem with the source PC.
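If you want to rule out Explorer itself, try the copy with robocopy using unbuffered I/O, which skips the Windows client cache (folder and file names below are placeholders; the UNC path assumes the share is exported under its Unraid name):

robocopy C:\SomeFolder \\10.5.0.5\Games-Windows bigfile.bin /J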