Fma965 Posted December 8, 2019

Hey guys, I recently added a 10GbE card (Aquantia AQC107) to my Unraid server; the same chip is in my desktop. I have an Unraid array etc., but that's not relevant in this instance. I have an NVMe cache drive, in fact the exact same drive in both client and server: a Samsung 960 EVO 256GB. When I copy from a cache-enabled share to my local PC's C:\ (NVMe) I get an almost solid 1 GB/s, as shown. However, if I copy the same file back to a cache-enabled share I get this: [screenshot]. Now this is too fast to be an HDD, so it has to be the NVMe, but why is it only ~250 MB/s and not 1 GB/s? I would almost expect it to be the other way around, with my writes slower and not my reads.

Windows client SSD benchmark: [screenshot]
JorgeB Posted December 8, 2019

Try enabling Direct I/O; if it's still the same, see if reading directly from the disk share is faster (//tower/cache instead of //tower/share).
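If it helps, a quick way to compare the two from the Windows client is to time the same large file from both paths. This is just a sketch: the share and file names are placeholders, it assumes C:\temp exists, and the second run may benefit from the server's page cache, so use different files or rerun a few times.

# User share vs. cache disk share, same file (PowerShell on the Windows client)
Measure-Command { Copy-Item \\tower\myshare\bigfile.bin C:\temp\ }
Measure-Command { Copy-Item \\tower\cache\myshare\bigfile.bin C:\temp\ }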
Fma965 Posted December 8, 2019

5 hours ago, johnnie.black said:
"Try enabling Direct I/O; if it's still the same, see if reading directly from the disk share is faster (//tower/cache instead of //tower/share)."

Thanks. I have tried with and without Direct I/O, and there is no real noticeable difference. I will try reading from the cache disk share later; is there a reason that would make a difference?
Fma965 Posted December 8, 2019

I have just tried this and unfortunately it hasn't made any difference. I am currently using a Cat5e cable (which, although not officially rated for 10GbE, does work); I have a Cat6 cable coming tomorrow so I will test with that.
JorgeB Posted December 9, 2019

16 hours ago, Fma965 said:
"I will try reading from the cache disk share later; is there a reason that would make a difference?"

Disk shares can be faster, in some cases much faster, than user shares.
Fma965 Posted December 9, 2019

10 hours ago, johnnie.black said:
"Disk shares can be faster, in some cases much faster, than user shares."

Fair enough, but unfortunately not in this case. I have just tried a cable that claims to be Cat6 and there is no change. I also swapped the two cards over and the issue persists in the same way: reading from the server is still slow despite the cards being reversed, which surely indicates a software issue?
JorgeB Posted December 9, 2019

First run iperf to confirm LAN bandwidth, then you can try copying to a RAM disk to rule out other bottlenecks. I can get 1 GB/s reading from my cache pool, but it's slower when writing to my desktop since the NVMe devices can't keep up: [screenshot]
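A minimal sketch of those checks, assuming iperf3 is installed on both ends and the cache pool is mounted at /mnt/cache (hostname, paths and sizes are placeholders; plain iperf uses the same -s/-c flags, but -R is iperf3 only):

# On the Unraid server: start the iperf server
iperf3 -s

# On the Windows client: test client -> server, then server -> client (-R reverses direction)
iperf3 -c tower
iperf3 -c tower -R

# On the Unraid server: rough NVMe write/read test that bypasses the page cache
dd if=/dev/zero of=/mnt/cache/testfile bs=1M count=2048 oflag=direct
dd if=/mnt/cache/testfile of=/dev/null bs=1M iflag=direct

If both iperf directions reach roughly 9-10 Gbit/s and the dd numbers are well above 250 MB/s, the bottleneck is more likely SMB or the client than the network or the cache drive.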
Fma965 Posted December 9, 2019

21 minutes ago, johnnie.black said:
"First run iperf to confirm LAN bandwidth, then you can try copying to a RAM disk to rule out other bottlenecks."

I thought I had already mentioned this but it seems not; I've been talking to so many people about this issue. iperf shows the same pattern: when Unraid is the server ("-s") I get this: [screenshot], and when my PC is the server I get this: [screenshot]. I'm running out of ideas on what the issue could be. Do you have jumbo frames on or off? I've tried with both.
JorgeB Posted December 9, 2019

4 minutes ago, Fma965 said:
"iperf shows the same pattern"

OK, so the problem is network related. If iperf doesn't get more than 2 Gbit/s, neither will any single transfer. You need to try different things (another NIC, another PC, etc.) until iperf performs normally.
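One related check that can help narrow this down (an extra suggestion, not something tested in this thread): compare a single stream against several parallel streams. If -P 4 gets close to line rate while a single stream stays around 2 Gbit/s, the limit is per-connection (CPU, interrupt handling, window size) rather than the cable or NIC.

# From the Windows client: single stream, then four parallel streams
iperf3 -c tower
iperf3 -c tower -P 4
# Repeat with -R to test the slow direction (server -> client)
iperf3 -c tower -P 4 -R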
Fma965 Posted December 9, 2019

17 minutes ago, johnnie.black said:
"OK, so the problem is network related. If iperf doesn't get more than 2 Gbit/s, neither will any single transfer."

I don't have any other 10 Gbit equipment so I'm not able to do any other testing, and iperf works fine in one direction but not the other.

EDIT: I think I have figured it out. I just tried Unraid to Unraid via a test USB stick and it worked at full speed, so I double-checked my Windows settings and saw that the MTU was 9000 on Unraid but on Windows the jumbo frame setting had turned itself back off, so I guess that's the issue.
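For anyone hitting the same mismatch, a rough sketch of how to check both ends; the adapter/interface names and the exact "Jumbo Packet" display value are driver dependent, so treat them as placeholders:

# Unraid / Linux side: check the current MTU and set it to 9000 (eth0 or br0, depending on setup)
ip link show eth0
ip link set eth0 mtu 9000

# Windows side (elevated PowerShell): check and set the jumbo frame property
Get-NetAdapterAdvancedProperty -Name "Ethernet" -DisplayName "Jumbo Packet"
Set-NetAdapterAdvancedProperty -Name "Ethernet" -DisplayName "Jumbo Packet" -DisplayValue "9014 Bytes"

# Verify from Windows: an 8972-byte payload (plus 28 bytes of headers) should pass unfragmented
ping -f -l 8972 tower

Both ends (and any switch in between) need to agree on jumbo frames; in this thread the mismatch between MTU 9000 on Unraid and the reverted default on Windows is what capped one direction.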
Fma965 Posted December 9, 2019

Yeah, all is working now; stupid Windows reverted my setting.