EvilUSB

  1. https://www.intel.com/content/www/us/en/download/19363/non-volatile-memory-nvm-update-utility-for-intel-ethernet-adapters-550-series-efi.html I found this NVM update utility for the 550-series adapters. I will try the update soon and see if anything changes (see the version-check sketch after this list).
  2. Hello, I've noticed the following errors in dmesg:

     [Fri Jul 21 11:04:31 2023] ixgbe 0000:02:00.0: Warning firmware error detected FWSM: 0x80000000
     [Fri Jul 21 11:04:33 2023] ixgbe 0000:02:00.0: Warning firmware error detected FWSM: 0x80000000
     [Fri Jul 21 11:04:35 2023] ixgbe 0000:02:00.0: Warning firmware error detected FWSM: 0x80000000
     [Fri Jul 21 11:04:37 2023] ixgbe 0000:02:00.0: Warning firmware error detected FWSM: 0x80000000
     [Fri Jul 21 11:04:39 2023] ixgbe 0000:02:00.0: Warning firmware error detected FWSM: 0x80000000
     [Fri Jul 21 11:04:41 2023] ixgbe 0000:02:00.0: Warning firmware error detected FWSM: 0x80000000

     The server works fine for now, but the errors come every 2 seconds. Is there a way to suppress them, or better, to fix them? I've seen people on other forums update their NIC kernel driver, but can I do that in Unraid (see the driver-check sketch after this list)? Thanks!
  3. Problem solved. Thanks big time JorgeB! I followed your advice and ran iperf in both directions; it was holding 1Gbps. Then I double-checked the interfaces: Windows was reporting 10Gbps, and on Unraid ethtool reported 10Gbps for eth0 ... but only 1Gbps for bond0. My setup has a single PCIe 10Gbps interface and two integrated 1Gbps interfaces, which are bonded by default. After disabling interface bonding (leaving bridging enabled) everything went back to normal. In my case that solved the mystery: ethtool now reports 10Gbps for both eth0 and br0, and iperf shows around 8-8.5Gbps, which is amazing. If anyone is wondering how to disable interface bonding (the speed checks themselves are sketched after this list), you have to:
     - stop all VMs and disable VM Manager (Settings > VM Manager)
     - stop all Dockers and disable Docker (Settings > Docker)
     - make sure your primary NIC is eth0 (Settings > Network Settings > Interface Rules)
     - change Enable bonding to No
     - re-enable and start your VMs and Dockers
     Unraid forums are the best!
  4. I don't want to hijack berta's thread, but my case is nearly identical. I have NVMe and SSD cache drives (two separate single-disk caches) and a 10 gig network (both the servers and the desktops have 10 gig NICs). After upgrading to 6.10.3 my speeds are fixed at 100MB/s, no more, no less. It does not matter whether I copy from or to the cache drives (I tried both of them in both directions). The reporting on Unraid's Main tab is also weird: transferring 1.5TB of data from my Win 10 desktop PC to the server (the cache drive is empty and 2TB in size), the Unraid > Main > Pool devices > cache_ssd > write column reports 0 MB/s for some time, then 300-450 MB/s, then 0 MB/s again. The same behaviour can be observed with smaller 3-4GB files. Before the version change the speeds were close to the maximum supported by the drives/network, and the only change in the setup was the new Unraid version. Hopefully it will be fixed in a future upgrade, because I hate the idea of downgrading the OS. (A local disk test that separates the pool from the network is sketched after this list.)
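
For the firmware update mentioned in post 1, it may help to record the current NVM/firmware version before and after running Intel's utility. A minimal sketch, assuming the 10GbE port is eth0 as in the posts above:

    # Driver name, driver version, and NVM firmware version for the port
    ethtool -i eth0
    # bus-info should match the PCI address seen in dmesg (0000:02:00.0)
    ethtool -i eth0 | grep bus-info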
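
Regarding the ixgbe warnings in post 2: before swapping drivers, it is worth confirming what is currently loaded and how often the error fires. A sketch, again assuming the affected interface is eth0:

    # Show the loaded ixgbe module details (version fields, source file)
    modinfo ixgbe | grep -i version
    # Follow the kernel log with readable timestamps and watch for new FWSM warnings
    dmesg -wT | grep -i 'firmware error detected'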
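
The checks described in post 3 look roughly like this from the command line (iperf3 is shown; 'tower' is a placeholder for the server's hostname or IP):

    # On the Unraid server: start a listener
    iperf3 -s
    # On the desktop: test both directions against the server
    iperf3 -c tower        # desktop -> server
    iperf3 -c tower -R     # server -> desktop (reverse mode)
    # On Unraid: confirm the negotiated speed per interface
    ethtool eth0 | grep Speed
    ethtool br0 | grep Speed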
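
For the slowdown in post 4, one way to tell whether the cache pool or the network path is the bottleneck is to write to the pool locally, bypassing SMB entirely. A sketch, assuming the pool is mounted at /mnt/cache_ssd (Unraid mounts pools under /mnt/<pool name>):

    # Direct local write to the pool, bypassing the network and the page cache
    dd if=/dev/zero of=/mnt/cache_ssd/ddtest bs=1M count=4096 oflag=direct status=progress
    rm /mnt/cache_ssd/ddtest
    # If this is fast but SMB transfers are still pinned at 100MB/s, the network
    # path (link speed, bonding, SMB settings) is the more likely suspect.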