xyzeratul

Everything posted by xyzeratul

  1. I have hit this bug a few times before. Once was on 6.11.2, where the cause was a messed-up HTTP proxy setting; clearing the proxy setting fixed it. The most recent time was when I switched to a 10G SFP+ connection, possibly a NIC or cable issue, which I am still troubleshooting, but I do know this happens when the network connection is bad. So I suggest you check your network hardware and network settings to rule out any configuration issue. Your best bet is to make a fresh USB boot drive and see if the problem is still there, because safe mode can still show this bug (bad proxy or network settings are still present in safe mode). (See the proxy-check sketch after this list.)
  2. I know the NIC is probably OK and the switch is brand new, so maybe it's the cable or the SFP+ module? I ordered a different brand this time and will update after it arrives.
  3. After a reboot and reseating the card and SFP cable, NAS-to-PC speed is still stuck around 100 MB/s, and PC-to-NAS speed is stuck at 10 to 30 MB/s. I am a bit lost; any idea where I should start looking for the problem? (See the iperf3/ethtool sketch after this list.)
  4. Recently I bought a new switch with 2 10G SFP+ ports and 4 2.5G ports. With my laptop and Unraid NAS both connected to 2.5G ports, the speed is normal and I can transfer files at 280 MB/s. Today I upgraded my NAS with an X520-DA2 card connected to one of the switch's SFP+ ports and found the speed is very slow: downloading files from the NAS to my laptop runs around 40 to 80 MB/s, and I can't even upload files from my laptop to the NAS, it just gets stuck. The NIC interface shows nothing wrong with the card (screenshot attached). Running iperf shows extremely low speed in both directions, PC to NAS and NAS to PC (screenshots attached). Since this is my first time trying a 10Gb network with SFP+ cards and cables, I wonder where I messed up big time.
  5. OK, got it, so there's no need to change the MTU on the 2.5G or 10G NICs.
  6. Hi, I recently upgraded my main switch to a 4-port 2.5G + 2-port 10G SFP+ model. I plan to install 10G NICs in my Unraid server and desktop PC, and use a 2.5G USB NIC on my laptop. Because I was on a 1G network before, I never touched the MTU and always left it at 1500, but I hear that 2.5G and 10G NICs may need jumbo frames (MTU 9000) set up on them, and that it's best for every client on the network to have the same MTU. Should I enable jumbo frames on my PC, NAS, and laptop in this case? (See the MTU verification sketch after this list.)
  7. I got tripped up by the Realtek ("little crab") 2.5G NIC at first, but overall the upgrade went fairly smoothly; two days in, I haven't found any problems so far.
  8. I did a clean install and the onboard NIC shows up in network settings, so my old config must have had something messed up. Edit: found it. In /config/modprobe.d there is a config file blacklisting the r8169 driver; I think it's leftover from the patch. Deleting it brought the NIC back. (See the driver-blacklist sketch after this list.)
  9. No, I mean I didn't uninstall the patch before the upgrade, nor reinstall it after the upgrade.
  10. Yes, this patch only replaces these files (list attached). I think after upgrading to 6.12 they will be replaced anyway, so I didn't remove them.
  11. When I was on 6.11.5, I used this patch, mainly to put the i350-T2 into another IOMMU group so I could bind one of its ports to my VM; I'm not sure whether that's related. I checked my network.cfg file and the old settings are still there:
      # Generated settings:
      IFNAME[0]="eth0"
      DHCP_KEEPRESOLV="yes"
      DNS_SERVER1="1.1.1.1"
      DNS_SERVER2="8.8.8.8"
      DNS_SERVER3="114.114.114.114"
      DHCP6_KEEPRESOLV="no"
      DESCRIPTION[0]="Onboard"
      PROTOCOL[0]="ipv4"
      USE_DHCP[0]="no"
      IPADDR[0]="192.168.1.200"
      NETMASK[0]="255.255.255.0"
      GATEWAY[0]="192.168.1.1"
      METRIC[0]="1"
      USE_DHCP6[0]="yes"
      IFNAME[1]="eth1"
      PROTOCOL[1]="ipv4"
      METRIC[1]="1"
      MTU[1]="1500"
      IFNAME[2]="eth2"
      DESCRIPTION[2]="Onboard 2.5G"
      PROTOCOL[2]="ipv4"
      USE_DHCP[2]="no"
      IPADDR[2]="10.10.10.1"
      NETMASK[2]="255.255.255.0"
      MTU[2]="9000"
      SYSNICS="3"
  12. Never mind, this did nothing on my NAS; I'll just uninstall it.
  13. Here are the diags, without installing the driver app, as MAM59 mentioned: 185409-diagnostics-20230617-1937.zip
  14. I went back to 6.11.5; I'll try the upgrade again tonight to get diags.
  15. I tried this; after a reboot the onboard NIC is still not showing up in network settings, but I didn't reboot twice. Edit: rebooted twice, the NIC is still not showing up.
  16. Just upgraded from 6.11.5 to 6.12. I have an Intel i350-T2 card installed and also an onboard Realtek 8125 NIC, all working fine in 6.11.5. In system devices it shows all 3 NICs, but in network settings only the 2 NICs from the Intel i350-T2 card show up, as eth0 and eth1. A bit more detail: my Unraid management port is one of the NICs on that i350-T2 card, and the onboard 2.5G NIC is the port I use to connect directly to my PC for faster file transfers. So without the 2.5G NIC showing up in network settings, my direct drive-mapping connection to my PC is gone, but I can still manage my Unraid server over the local LAN.
  17. Thanks, I think they need to add this function; I'd rather have local storage for this.
  18. But why is a manual update fine? I don't understand dockerman's logic behind this.
  19. I am using a custom icon for each Docker container, stored in /mnt/user/isos/icons, paired with the auto-update plugin. But every time a container auto-updates to a new image, my custom icon disappears; if I update it manually, it's fine. Is this a bug or some setting I messed up? (See the icon-restore sketch after this list.)
  20. ??? Most mainstream cases on the market support mATX, and they are no more expensive than small ITX-only cases. If you don't care about looks or other features, you only need to consider how many hard drives you want to fit and what form factor of power supply you'll use.
  21. Recently I have been looking into a 10G SFP+ upgrade for my office setup. I have: one Unraid server running an 8125B 2.5G NIC, one Windows desktop running the same 8125B 2.5G NIC, one laptop with a USB 8156B NIC, and an 8-port 2.5G switch with 2 10G ports. My plan is to install one 10G SFP+ NIC in the Unraid server and one in the desktop. Talking to my local "IT guy", he told me he can get me a bunch of these cards left over from a server upgrade, but the ones that support RDMA are 30% more expensive. From what I can find online, I still don't know how much difference RDMA makes in my daily work; should I pay more to get it? (See the RDMA check sketch after this list.)
  22. This situation does indeed happen; I also think this design is silly.
  23. Cross-border transfers are basically voodoo to begin with. I have tried from different countries in Africa, the Middle East, and Oceania; some can max out my home NAS's upload, while others top out at 50 KB/s. It has little to do with your local network.
  24. Thanks, Multi-Gen LRU does sound nice; I never thought Unraid would support it so soon. Anyway, where can I find the full feature list for this upgrade?
  25. I'm not really into the whole ZFS upgrade, so is there anything else interesting in 6.12?
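
Proxy-check sketch (re post 1): a minimal way to look for leftover proxy settings on an Unraid box from the console. The environment-variable names are the standard Linux ones; the exact file the webGUI writes proxy settings to is not shown in the posts, so grepping the flash config is only an assumption.

    # Look for proxy variables in the current shell environment
    env | grep -i proxy
    # Search the flash-drive config for any saved proxy entries (path assumed)
    grep -ri proxy /boot/config 2>/dev/null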
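Link-speed sketch (re posts 3 and 4): a rough way to separate a link-negotiation problem from an SMB or disk problem, assuming iperf3 is available on both ends. The interface name eth2 and the address 192.168.1.200 are examples taken loosely from the posted network.cfg; substitute whatever your SFP+ port actually shows up as.

    # On the NAS: confirm the port really negotiated 10000Mb/s full duplex
    ethtool eth2 | grep -E 'Speed|Duplex|Link detected'
    # On the NAS: start an iperf3 server
    iperf3 -s
    # On the PC: test raw TCP throughput in both directions, 4 parallel streams
    iperf3 -c 192.168.1.200 -P 4        # PC -> NAS
    iperf3 -c 192.168.1.200 -P 4 -R     # NAS -> PC (reverse)

If iperf3 is already near line rate, the bottleneck is more likely SMB settings or disks; if it is as slow as the file copies, suspect the cable, module, or negotiated link speed.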
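MTU verification sketch (re posts 5 and 6): if you do try jumbo frames, every device in the path (both NICs and the switch) has to accept them. A quick end-to-end check is a ping with the don't-fragment bit set: 8972 bytes of payload plus 28 bytes of IP/ICMP headers equals a 9000-byte packet. The interface name eth2 and the address 10.10.10.1 are just the direct-connect examples from the posted network.cfg.

    # Linux/Unraid side: set MTU 9000 on the interface (example name eth2)
    ip link set dev eth2 mtu 9000
    # Verify the path really passes 9000-byte packets without fragmentation
    ping -M do -s 8972 10.10.10.1
    # Windows equivalent from the PC/laptop:
    #   ping -f -l 8972 10.10.10.1
    # If these pings fail while normal pings work, some hop is still at MTU 1500.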
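Driver-blacklist sketch (re post 8): how you might spot a leftover blacklist file like the one described. On Unraid the flash drive is mounted at /boot, so /config/modprobe.d is normally reached as /boot/config/modprobe.d from the console; treat the exact filename as unknown.

    # List any modprobe config carried over on the flash drive
    ls -l /boot/config/modprobe.d/
    cat /boot/config/modprobe.d/*.conf
    # Check which kernel driver is actually bound to each Ethernet device
    lspci -k | grep -A 3 -i ethernet

A line such as "blacklist r8169" in one of those files would keep the onboard Realtek NIC's driver from loading, which matches the symptom in the post.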
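Icon-restore sketch (re post 19): one workaround is to re-copy the custom icons after an auto-update, for example from the User Scripts plugin. The dockerman icon-cache path and the icon naming scheme below are assumptions and may not match your install; only the source folder comes from the post.

    #!/bin/bash
    # Re-apply custom container icons after an image update.
    # SRC comes from the post; DST and the naming convention are assumed.
    SRC=/mnt/user/isos/icons
    DST=/var/lib/docker/unraid/images
    cp -f "$SRC"/*.png "$DST"/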
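RDMA check sketch (re post 21): for plain SMB copies between Unraid and a Windows desktop, RDMA (SMB Direct) generally only pays off if the client OS, the server's SMB stack, and both NICs all support it, which is not a given with desktop Windows editions and Samba. If you do get an RDMA-capable card, a quick way to see whether the driver exposes it, assuming the rdma-core tools are installed (stock Unraid may not ship them):

    # List RDMA-capable link devices the kernel knows about
    rdma link show
    # Show details of any RDMA devices (from rdma-core / libibverbs utilities)
    ibv_devinfo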