Kuleinc Posted September 30, 2022 I have confirmed I get near line speed with iperf: 970 Mb/s down and 850 Mb/s up. However, no matter whether I use an SSD cache or an NVMe cache, I can only transfer files to my server at about 70 MB/s, which is roughly half of line speed (500-550 Mb/s). I can't find the bottleneck. Help? Am I just expecting too much from such an old machine? nas-diagnostics-20220929-2339.zip
DarthKegRaider Posted September 30, 2022 Are you running through a network switch? I have had a switch port fail at 1Gbit and had to change to another port to get back to speed. For the record, your dual Xeon is newer than my dual Xeon (X5690) Z800 machine. My data transfers are frequently 130 MB/s to the cache from any machine on the network. From cache to the rust spinners, the 8TB IronWolf drives hover between 190-210 MB/s. I'm not using my onboard SAS controller, as the 8TB drives wouldn't detect on it; it worked fine with 2TB drives during my testing, so I had to use the AHCI headers instead. The old beast has both, so 12 drives if I could fit them internally.
JorgeB Posted September 30, 2022 From where/how are you transferring the data? If you enable turbo write and transfer directly to the array, are speeds better or the same?
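For reference, turbo write can also be toggled from the Unraid terminal for a quick test; this is a rough sketch assuming the stock mdcmd helper and the md_write_method tunable (the same setting lives under Settings > Disk Settings in the GUI):

```bash
# Show the current write method (0 = read/modify/write, 1 = reconstruct, i.e. "turbo")
mdcmd status | grep md_write_method

# Switch to reconstruct write for the test, then revert afterwards
mdcmd set md_write_method 1
mdcmd set md_write_method 0
```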
Kuleinc Posted September 30, 2022 I haven't tried writing to the array directly. The other machine is a fairly modern AMD 5600X with a fast NVMe drive and 32GB of RAM. The iperf tests indicate the network switches are not the problem, right? I do have turbo write enabled. I don't think I have any shares set up for direct copy; I guess I could change one and test it?
Kuleinc Posted September 30, 2022 3 hours ago, DarthKegRaider said: Are you running through a network switch? ... I actually had an old X8DTN board with dual 5600-series Xeons in the case when I bought it. I swapped it for my current motherboard because mine has many more PCIe slots instead of PCI.
JorgeB Posted September 30, 2022 6 minutes ago, Kuleinc said: The iperf tests indicate the network switches are not the problem right? Assuming it was a single-stream test, yes. 6 minutes ago, Kuleinc said: I guess I could change one and test it? Yep.
Kuleinc Posted September 30, 2022 I pulled a 9.5 GB file off the array to my local machine at 75-80 MB/s, and when I copied it back to the array it was about 70-73 MB/s. What's the command for a single-stream test? Want to make sure I did it right. I think I followed SpaceInvader One's video...
JorgeB Posted September 30, 2022 That is single stream; a multi-stream test can give misleading results since it's running several transfers at the same time.
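For reference, the usual iperf3 invocations look something like this (run `iperf3 -s` on the server first; the hostname is a placeholder):

```bash
# Single stream -- this is what one SMB file copy will actually see
iperf3 -c tower.local -t 30

# Four parallel streams -- can report higher totals than any single copy will reach
iperf3 -c tower.local -t 30 -P 4

# Reverse direction: server sends, client receives
iperf3 -c tower.local -t 30 -R
```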
Kuleinc Posted September 30, 2022 Any ideas on how to fix the bottleneck?
JorgeB Posted September 30, 2022 Everything being limited to around 70 MB/s independent of the device used still suggests a LAN issue, despite the iperf results. Do you have another NIC you could try?
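If the server's NIC is suspect, a quick sanity check can be run from the Unraid console (the interface name eth0 is an assumption; yours may differ, and the statistics names vary by driver):

```bash
# Confirm the negotiated link speed and duplex
ethtool eth0 | grep -Ei 'speed|duplex'

# Driver-level counters; a climbing error/CRC count points at cabling or the port
ethtool -S eth0 | grep -iE 'err|drop|crc'
```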
Kuleinc Posted September 30, 2022 The server has four ports; I have two hooked up in failover mode. Would this cause the issue? This PC only has one 2.5Gb NIC, running at 1Gb since I don't have any 2.5Gb switches. I could try my PC downstairs, although it only has a SATA SSD... Is there another network throughput test I could run? I have to go service my Bobcat and start laying a gravel road for a neighbor; I'll check in again when I'm done.
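Since two server ports are bonded for failover, it may also be worth confirming which port the bond is actually using and whether it has been flapping; a quick check from the server console, assuming the default bond0 name:

```bash
# Shows the bonding mode, the currently active slave, and link failure counts
cat /proc/net/bonding/bond0
```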
Kuleinc Posted October 1, 2022 13 hours ago, JorgeB said: Everything being limited to around 70 MB/s independent of the device used still suggests a LAN issue, despite the iperf results. Do you have another NIC you could try? The client PC only has one NIC, but I added a 10Gb Mellanox card and put a MikroTik 305 switch inline to hook up the DAC cable. Not ideal, but it can't possibly be worse. I also set up the Brocade 6610 in the rack the server is in and connected the Unraid server to the 6610 with a DAC. iperf results are similar. I can't get the SFP+ adapters that connect to the RJ45 copper cables in the wall to negotiate between the MikroTik 305 and the Brocade 6610 at 10Gb, or even 1Gb, which is odd. I did have a problem in the past with 1Gb switches on either side of that in-wall cable run where they would only negotiate 100Mb. I discovered that the jack upstairs wasn't punched down correctly and so wasn't using all four pairs of wires; ever since I fixed that, the switches have negotiated 1Gb on that run. So to get communication going, I had to attach that cable run to the Brocade on a 1Gb Ethernet port and hook the MikroTik switch upstairs to the wall port through its 1Gb port. I'm still only getting 65-72 MB/s data transfer. I wonder if the cable run in the wall is damaged somehow, or maybe runs past something causing EMI... I'm thinking it's not the NIC or the switches, maybe cable related, and it's not a cable I can easily bypass or replace since it's a three-story house; drywall would have to be cut in several places to run a new cable. I think the next step is to go to the second floor and test speeds with my old i7 computer and see what it can transfer at. It only has a SATA SSD in it, but that should be fast enough to eclipse gigabit speeds on the network, I hope...
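One way to test the suspicion about the in-wall run without cutting drywall: a marginal cable usually shows up as errors rather than a lower negotiated speed, so the interface counters can be compared before and after a long copy (eth0 is a placeholder for whichever interface faces that run):

```bash
# Note the RX/TX errors and dropped counts, run a large transfer, then check again
ip -s link show eth0
```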
Kuleinc Posted October 1, 2022 OK, so I went and used my older Dell with an SSD in it (not sure anymore what SATA interface speed it's using). I got about 80-88 MB/s downloading a 9GB file from the array to that PC. I then uploaded the file back to the Unraid server onto a cache share using an NVMe drive and got 74-75 MB/s the whole time... I also tried a share that goes directly to the array and got 72-73 MB/s. I guess it's not the cable in the wall going upstairs to my wife's PC? Not sure what else to test to narrow the problem down.
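To take the network out of the picture entirely, one option is to write a few gigabytes straight to the cache pool from the Unraid terminal; this is a sketch assuming the cache pool is mounted at /mnt/cache:

```bash
# Direct (unbuffered) sequential write to the cache -- should far exceed 70 MB/s on an NVMe
dd if=/dev/zero of=/mnt/cache/ddtest.bin bs=1M count=4096 oflag=direct status=progress
rm /mnt/cache/ddtest.bin
```

If that number is high while SMB stays around 70 MB/s, the disks are not the limit.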
Kuleinc Posted October 8, 2022 OK, so a bit of an update: I wonder if the speed issues I'm having are a limitation of the GUI file copy utility I'm using in Linux Mint. I happened to be booted into Windows on my PC (the one whose Linux transfer speeds you can see above), and in Windows I'm basically getting line-speed transfers minus overhead, so about 100-113 MB/s to and from the Unraid server over a gigabit connection. Has anyone run into this limitation and found a fix? Everything is up to date, except Unraid is on 6.9, I believe...
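One way to test whether the Mint file manager's GVFS/SMB backend is the limit would be to mount the share with the kernel CIFS client and repeat the copy; the share name, username, and file path below are placeholders:

```bash
sudo mkdir -p /mnt/unraid-test
sudo mount -t cifs //tower/share /mnt/unraid-test -o username=youruser,vers=3.0

# Copy the same large test file and watch the rate
rsync --progress /path/to/testfile /mnt/unraid-test/
```

If the kernel mount reaches ~110 MB/s like Windows does, the copy utility, not the network, was the bottleneck.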