falconexe Posted August 19, 2020

Hey everyone. I have been able to get true 10GbE read/write speeds directly to the cache drive to/from a Windows 10 client. However, if you are trying to write to the array shares (on cache), you may be running into the "known" SHFS overhead. I dealt with this in a very unique way a few months back. I will try some of the additional SMB settings you guys mentioned (my config is also in the below thread) and report back. If one of us figures this out, I would LOVE to know. My workaround works, but it is kind of a PITA. The gold standard for me is true 10GbE writes to the cache through the native user shares, not the cache disk share itself.

I also have Ubiquiti equipment, but I am actually running a direct CAT7 NIC-to-NIC 10GbE link for this purpose. I built a new house and put in a second dedicated network drop from my main client to UNRAID. I also have a fully dedicated server room for my network equipment, servers, and Control4 gear, plus a dedicated Carrier ceiling HVAC unit, so the room is a cool 68F at all times. This massive server runs at about 20C across all 30 drives, even during parity checks 🥶😅.

Anyway, hopefully the below helps! I'll keep checking back for any news.
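The "additional SMB settings" aren't spelled out in this post, but for anyone following along, a commonly tried Samba tuning fragment for 10GbE links (in Unraid it goes under Settings → SMB → SMB Extras) looks like the sketch below. All of these are real Samba options, but the specific combination and its benefit on any given setup are assumptions to test, not a recommendation:

```ini
; Hypothetical 10GbE tuning fragment for smb-extra.conf -- benchmark before/after.
[global]
; Let SMB3 clients (Windows 10) open multiple TCP streams to one share
server multi channel support = yes
; Enable async I/O for all read/write sizes
aio read size = 1
aio write size = 1
; Serve file data via sendfile() where possible
use sendfile = yes
```

Note these only affect the SMB layer; they will not remove the SHFS/user-share overhead described above, which sits between Samba and the disks.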
ryann Posted August 19, 2020 (Author)

16 hours ago, Pourko said: "Right."

Parity check finished. Reads are right at 1GB/s, so that's a small improvement over the 800MB/s I was seeing, but still not the 1.2GB/s of a fully saturated connection. I can live with that, however. Writes, on the other hand, are still around 300MB/s.

For those talking about the UDMP: I'm now on the same VLAN as the server, so traffic is not getting routed through the UDMP.
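For context on the 1.2GB/s "fully saturated" figure quoted above: 10GbE is 10 gigabits per second on the wire, which is 1.25GB/s before protocol overhead. A quick sanity check, assuming roughly 6% framing/TCP overhead (the exact overhead depends on MTU and protocol, so treat that number as an assumption):

```python
# Rough 10GbE throughput ceiling: line rate minus assumed protocol overhead.
line_rate_bits_per_s = 10e9                      # 10 Gb/s on the wire
raw_bytes_per_s = line_rate_bits_per_s / 8       # 1.25 GB/s before overhead
framing_efficiency = 0.94                        # assumed ~6% Ethernet/IP/TCP overhead
usable_bytes_per_s = raw_bytes_per_s * framing_efficiency

print(f"{usable_bytes_per_s / 1e9:.2f} GB/s")    # ~1.18 GB/s, close to the 1.2 GB/s quoted
```

So ~1GB/s reads is within striking distance of the practical ceiling; the ~300MB/s writes are the outlier.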