
Ethernet speed issue: 10Gb NIC is not recognised properly


Solved by itimpi


Hi all,

Got an AQC107 10Gb NIC. Checked all cables etc. and it is definitely connected to a 10Gb port on the router.

However, I am getting slow speeds on it, so I ran an "ethtool eth0" command and it returned the following. No 10G.

Any ideas?

Transfer speeds are quite slow too: I get around 300-500 Mb/s on LAN. Transferring from a PC with a 10Gb NIC to Unraid gives me about 12 megabytes (not bits) per second, which is definitely too slow for this hardware. When I had a QNAP with two 2.5GbE ports link-aggregated I could transfer around 100 MB/s no problem.

 

[Attachment: etht.jpg (ethtool eth0 output)]
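When chasing a problem like this, it can help to separate a link-negotiation issue from a disk bottleneck by checking the negotiated speed and measuring raw TCP throughput first. A minimal sketch, assuming the interface is eth0 and iperf3 is installed on both machines:

```shell
# Check what speed the NIC actually negotiated (assumption: interface eth0):
#   ethtool eth0 | grep -E 'Speed|Duplex'
#
# Raw TCP throughput, independent of any disk I/O:
#   on the Unraid box:  iperf3 -s
#   on the client PC:   iperf3 -c <unraid-ip>
#
# A healthy 10GbE link reports close to 10 Gbit/s, i.e. roughly this many MB/s:
echo "$((10 * 1000 / 8)) MB/s theoretical maximum"
```

If iperf3 shows near line speed but file transfers are slow, the problem is in the storage stack rather than the network.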


This is a typical client card, not meant for servers.

Your speed problem comes from "Supported pause frame use: Symmetric Receive-only" (aka "flow control").

For servers it should be either "send+receive" or at least "Send-only"; a server usually sends more data out than it receives.

 

I don't know if the driver allows you to change the mode; usually it is pushed from the switch to the card, so maybe you should take a look there?

(There are drivers for some cards that do not honour this feature at all; usually those cards cannot be used either.)
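If the driver does expose it, the pause-frame (flow control) mode can be inspected and changed with ethtool. A minimal sketch, assuming the interface is eth0; whether the atlantic driver for the AQC107 honours the change is exactly the open question here:

```shell
# Show the current pause (flow control) parameters:
ethtool -a eth0

# Try to enable flow control in both directions; both the driver
# and the switch port have to support and honour this:
ethtool -A eth0 rx on tx on

# Verify whether the change stuck:
ethtool -a eth0
```

If the switch pushes its own setting, the matching option usually has to be enabled per port in the switch's management interface as well.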

 

Your QNAP was already running too slow as well: 100 MB/s is just 1GbE speed (link aggregation does not help you here at all; it is a failover mechanism and mostly a marketing gimmick). And of course, to reach the ~250 MB/s a 2.5GbE link makes possible, all components (including the hard drives) need to keep up with that speed, so usually you end up at ~180-200 MB/s.

 


OK, so I restarted the whole thing and 10G now appears. I think the slow performance is actually due to ZFS. I installed the latest RC4 and formatted all array drives to ZFS.

I noticed that when I start a transfer, for about 1-2 minutes the speeds are in the hundreds of megabytes per second, then they drop to 30 MB/s.

As an experiment I converted one of the drives to btrfs and it gave me a stable speed.

I've got 64 GB of RAM and assigned 48 GB to ZFS via the zfs.conf thing, but it doesn't seem to do much, hmm.
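For reference, the ZFS ARC limit is controlled by the zfs module parameter zfs_arc_max. A sketch of what a 48 GB cap looks like; the exact persistence method on Unraid is an assumption, check your release's documentation:

```shell
# 48 GiB expressed in bytes:
ARC_BYTES=$((48 * 1024 * 1024 * 1024))
echo "$ARC_BYTES"   # 51539607552

# Apply at runtime (as root; the path exists once the zfs module is loaded):
#   echo "$ARC_BYTES" > /sys/module/zfs/parameters/zfs_arc_max

# Persist via a modprobe options line, e.g. in a zfs.conf file:
#   options zfs zfs_arc_max=51539607552
```

Note that the ARC mainly accelerates reads; a large ARC will not fix sustained write throughput to parity-protected array disks.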


@MAM59, you have me worried now. I have a Mellanox ConnectX-3 card which shows the same thing in ethtool. It's connected via a DAC cable to a PC, and I seem to get the full 10 Gbps when there are no other bottlenecks on that connection.

 

~# ethtool eth2
Settings for eth2:
        Supported ports: [ FIBRE ]
        Supported link modes:   1000baseX/Full
                                10000baseCR/Full
                                10000baseSR/Full
        Supported pause frame use: Symmetric Receive-only
        Supports auto-negotiation: No
        Supported FEC modes: Not reported
        Advertised link modes:  1000baseX/Full
                                10000baseCR/Full
                                10000baseSR/Full
        Advertised pause frame use: Symmetric
        Advertised auto-negotiation: No
        Advertised FEC modes: Not reported
        Speed: 10000Mb/s
        Duplex: Full
        Auto-negotiation: off
        Port: Direct Attach Copper
        PHYAD: 0
        Transceiver: internal
        Supports Wake-on: d
        Wake-on: d


Should I be concerned? 

2 hours ago, apandey said:

Should I be concerned? 

No, my quote above was a little bit wrong.

The essential line is "Advertised pause frame use": if it says "Symmetric" it's OK and flow control will work in both directions; if it says "No" (like in his listing), flow control is off and stuttering with speed degradation will happen.

 

Also, the flow control problems only apply to twisted-pair cabling.

 

 

Edited by MAM59
10 hours ago, Tomb_of_ash said:

As an experiment I converted one of the drives to btrfs and it gave me a stable speed.

There are some known issues with zfs write performance in the array only, if the source is faster than the array disks, also depending on the disk models and array config. Do you mind posting diags?


Changed to BTRFS and rolled back to a stable build, and now it's around 100 MB/s write performance, even writing from a USB device via Double Commander. I think it's an HDD issue tbh (WDC_WD101KRYZ). I wonder if enabling compression made it that way?

2 hours ago, JorgeB said:

There are some known issues with zfs write performance in the array only, if the source is faster than the array disks, also depending on the disk models and array config. Do you mind posting diags?

Do you know the ETA for when the ZFS write performance in the array will be fixed? To be honest, I will probably stick with BTRFS since it allows compression and isn't clogging memory too much.

I'm attaching the diags for RC4 with ZFS and stable with BTRFS

btrfs diags.zip ZFS diags.zip

4 minutes ago, Tomb_of_ash said:

Do you know the ETA for when the ZFS write performance in the array will be fixed?

Nope, but first let's see if it was really this: can you confirm you were writing directly to the array, not to cache? I see you are using the default writing mode, and with parity the write speed to the array should be well below 100MB/s, with zfs or any other fs; it should be >100MB/s with turbo write enabled.
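For anyone following along: "turbo write" is Unraid's reconstruct-write mode, found under Settings > Disk Settings as Tunable (md_write_method). A sketch of toggling it from the console, on the assumption that the mdcmd interface works the same way in this release:

```shell
# 1 = reconstruct write ("turbo write"), 0 = read/modify/write (the default):
mdcmd set md_write_method 1

# Confirm the current value:
mdcmd status | grep md_write_method
```

Reconstruct write spins up all data disks and recomputes parity from them, avoiding the slower read/modify/write cycle on just the target and parity disks.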


Yes, direct writing. My cache is only for apps, domains and such.

Just tried turbo write and it fixed it all! Got multi-gig speed! Thanks so much! Would you suggest going ZFS for the main array or BTRFS? I mainly use this to archive various video editing projects, so I occasionally work from the NAS. Plus a Plex library.

 

UPD: spoke too soon, the speed dropped back to the usual levels after a couple of file transfers.

Edited by Tomb_of_ash

With turbo write and a fast source, you should get close to line speed for a few seconds while the transfer is being cached to RAM, then drop to the max speed of your slowest disk at that point; with the disks you are using, I would estimate around 160-180MB/s for an empty disk. Without turbo write, you should see the same initial close-to-line speed, then a drop to around 50-75MB/s. This is with xfs/btrfs; it is possibly slower with zfs, usually most noticeable with turbo write. If this is not what you are seeing, there could be other issues involved.
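One way to locate the drop is to measure a single disk's sequential write speed directly, taking the network out of the picture. A sketch; the /mnt/disk1 path is an assumption, point TARGET at the disk you want to test:

```shell
# Pick the disk to test (e.g. TARGET=/mnt/disk1); fall back to a
# temp directory so the sketch runs anywhere:
TARGET=${TARGET:-$(mktemp -d)}

# Write 256 MiB of zeros and force them to disk, so RAM caching
# does not inflate the number (dd prints the throughput at the end):
dd if=/dev/zero of="$TARGET/ddtest.bin" bs=1M count=256 conv=fsync

# Clean up the test file afterwards:
rm "$TARGET/ddtest.bin"
```

Caveat: on a compressed zfs/btrfs filesystem, zeros compress almost completely, so use a non-compressible source file instead of /dev/zero if compression is enabled.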

On 4/30/2023 at 9:19 AM, JorgeB said:

No.

 

Try transferring from cache to array, instead of using the USB device.

Thanks, this has improved things a bit. I get about 80 MB/s or so, but I can use multiple devices to transfer and they are all consistent at that speed at the same time.

Just to check, I enabled btrfs compression on both cache and array. Would that significantly impact transfers?
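On btrfs, compression is a mount option or a per-path property, and zstd is usually cheap on modern CPUs. A sketch of checking and setting it; the paths are only assumptions for an Unraid setup:

```shell
# See whether compression is set on a given path:
btrfs property get /mnt/cache compression

# Enable zstd compression for data written from now on:
btrfs property set /mnt/cache compression zstd

# Existing files are not recompressed automatically; defragment can do it:
#   btrfs filesystem defragment -r -czstd /mnt/cache/some-share
```

Compression happens per extent at write time, so its main cost is CPU during transfers rather than a fixed throughput penalty.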

 

Would you also recommend switching the array to ZFS for future-proofing, since Unraid is going that way? What I use it for is mostly storing old projects (After Effects etc.), photos and a Plex library.

Got 64 GB of RAM and two Xeon E5-2687W CPUs @ 3.10 GHz.

Compression is important, but I would also like features like deduplication. I was previously on XFS for the array and used dupeGuru to create hardlinks. It would be great if that were automatic at the FS level, like ZFS offers.
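On the deduplication wish: ZFS does offer it per dataset, but its dedup table has to stay in RAM to perform well (a common rule of thumb is several GB of RAM per TB of unique data), so it is usually enabled only on datasets known to hold many duplicates. A sketch with hypothetical pool and dataset names:

```shell
# Enable inline deduplication on one dataset only (hypothetical names):
zfs set dedup=on tank/projects

# Later, check how much space it is actually saving:
zpool list -o name,dedupratio tank
```

For archive workloads, offline tools like dupeGuru plus hardlinks often give most of the benefit without the RAM cost.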

 

  • Solution
13 minutes ago, Tomb_of_ash said:

Would you also recommend switching array to ZFS for future-proofing since Unraid is going that way?

It is not clear that ZFS in the array offers significant benefits over BTRFS, as performance is still capped by the way the array maintains parity, and there have been reports that ZFS is slower.

 

The big benefit of ZFS will come when it is used in a pool, where it can give significant performance gains. Long term, the current Unraid array is going to become just another pool type that you can use when appropriate for a particular use case.

2 minutes ago, itimpi said:

It is not clear that ZFS in the array offers significant benefits over BTRFS, as performance is still capped by the way the array maintains parity, and there have been reports that ZFS is slower.

 

The big benefit of ZFS will come when it is used in a pool, where it can give significant performance gains. Long term, the current Unraid array is going to become just another pool type that you can use when appropriate for a particular use case.

Thank you. Will stay with BTRFS for now then.

 

What about the BTRFS compression impact: yes or no on the array, the cache, or both?

Edited by Tomb_of_ash
21 minutes ago, Tomb_of_ash said:

What about the BTRFS compression impact: yes or no on the array, the cache, or both?

I have no real evidence-based experience to give a view either way :( It would be nice to have some feedback from people who have made measurements, to give a good answer to this question :)
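Lacking published numbers, the ratio btrfs compression actually achieves on a given path can be measured directly with the compsize tool (packaged as btrfs-compsize on many distros; the path is an assumption):

```shell
# Per-algorithm compressed vs. uncompressed sizes for everything under a path:
compsize /mnt/disk1
```

Comparing transfer speeds with compression on and off for the same file set would answer the throughput question for this particular hardware.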

