
Helmonder

Members
  • Posts: 2,818

Everything posted by Helmonder

  1. I would run it again. Also, maybe check your drives? If the segfaults are unrelated, then maybe there is a disk having issues.
  2. Hi, I am using Syncthing to sync my main Unraid server to my backup system. Works great. I recently made some networking changes: my Unraid servers (and with that the dockers) used to be connected with 1-gig UTP, and the situation is now as follows:
     - The main server is connected with 10G SFP+ to my internal network, running on 192.168.1.0/24.
     - The backup server is connected with 1G SFP+ to my internal network, also on 192.168.1.0/24.
     - The main server and backup server have a direct 10G SFP+ connection; this link has its own network: 10.10.10.0/24 (primary .1, secondary .2).
     - The two Syncthing dockers have IP addresses in 192.168.1.0/24.
     What I would now like to do is have both Syncthing dockers communicate over the 10G private network between the two servers, while I can still reach the servers on their 192.168.1.0/24 addresses. The interfaces all work: if I give the dockers a 10.10.10.0/24 address they sync fine, but the web GUI is no longer reachable. Any idea how I could do this within a docker? In the end I could use two VMs, but I am pretty happy with the docker setup and would like to keep it.
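     One possible approach for the setup above (an editorial sketch, not taken from the post) is to leave the containers on their 192.168.1.x bridge for the web GUI and attach them to a second docker network on the direct 10G link, then point Syncthing at the peer's 10.10.10.x address. The interface name eth2, the network name net10g, the container name syncthing and the addresses .11/.12 are all assumptions:

       # Create a macvlan network on the (assumed) 10G interface of this host.
       docker network create -d macvlan \
         --subnet=10.10.10.0/24 \
         -o parent=eth2 \
         net10g

       # Attach the existing container to it with a fixed second address;
       # it keeps its original 192.168.1.x interface for the web GUI.
       docker network connect --ip 10.10.10.11 net10g syncthing

     In Syncthing itself the remote device's address can then be set to something like tcp://10.10.10.12:22000 instead of "dynamic", so sync traffic stays on the private link. Whether this plays nicely with Unraid's own docker network management is untested.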
  3. And it works! For anyone interested, the following eBay card works out of the box immediately: https://www.ebay.nl/itm/MELLANOX-MNPH29D-XTR-DUAL-PORT-10G-ADAPTER-CARD-MNPH29D-XTR-NO-GBICS/264238911491?ssPageName=STRK%3AMEBIDX%3AIT&_trksid=p2060353.m2749.l2649
  4. I just added a Mellanox SFP+ interface and GBIC to both my PC and my Unraid server. Both are connected to a Mikrotik 10-gig switch. The whole system functions and I do have an increased transfer speed. However... I used to have an active-backup bond as my network connection; this used to be two 1-gig UTP connections and things were fine. Initially I changed the bond and interface assignment to have the 10-gig Mellanox as eth0, with eth3 (the old primary 1-gig UTP) still in the bond. When I set it up like that, my transfer speeds seem to be limited by the speed of eth3. I solved this for now by removing eth3 from the bond (so the bond now only contains eth0); this works and gives a big increase in transfer speed. Of course I would like to get my bond back. Is there some way of making this work with the 10-gig being the primary link that is always used, and the 1-gig only used when the 10-gig fails?
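     For the bond question above, the Linux bonding driver's "primary" option does exactly this in active-backup mode: the named slave is preferred whenever it is up, and the other slave only carries traffic after a failure. A minimal sketch, assuming the bond is bond0 with the 10-gig port as eth0 and the 1-gig port as eth3 (as described in the post); changes made this way are not persistent across reboots:

       # Prefer the 10-gig slave; the 1-gig slave becomes standby only.
       echo eth0 > /sys/class/net/bond0/bonding/primary

       # Fail back to the primary as soon as it comes back up.
       echo always > /sys/class/net/bond0/bonding/primary_reselect

       # Verify: the output should show "Primary Slave: eth0".
       cat /proc/net/bonding/bond0

     Whether the Unraid network settings page exposes this option directly may depend on the Unraid version.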
  5. I took the easy way out and bought myself a new Mellanox card that -does- do eth out of the box 🙂 These things are really cheap on eBay... I now have a MELLANOX MNPH29D-XTR DUAL PORT 10G ADAPTER CARD - MNPH29D-XTR. This one does eth out of the box; now waiting for the DAC cable... Btw: I ordered through SF.COM... These guys are really extremely cheap.
  6. Don't need it at all... I am already testing with it... It's just that there are a lot of users who do not feel very comfortable with changing files, that's all.
  7. Would be a great plugin... I would love to do a live test to see if it really makes any difference to turn it off or leave it on...
  8. Out of my comfort zone here 🙂 I'll think of another solution.
  9. Would this help: http://www.mellanox.com/page/products_dyn?product_family=193&mtag=freebsd_driver There is a tarball file there...
  10. Is it possible to add MFT (Mellanox Firmware Tools) to the Unraid distribution? It helps greatly in getting all the second-hand Mellanox cards that are flooding the market working. http://www.mellanox.com/page/management_tools
  11. Spotweb has been a challenge to keep running as a docker over the last few months... Something to do with database requirements changing... So I changed it to run in a VM. I use the same VM to host xTeVe in. I was also able to get that to run as a docker, but it was extremely slow. It is running a lot quicker (though still slow) in a VM.
  12. Would love to do that, but how? At the moment my log is flooded with the following error messages:
      May 11 19:11:03 Tower kernel: mlx4_core 0000:01:00.0: command 0x54 failed: fw status = 0x2
      May 11 19:11:03 Tower kernel: mlx4_core 0000:01:00.0: command 0x54 failed: fw status = 0x2
      May 11 19:11:03 Tower kernel: mlx4_core 0000:01:00.0: command 0x54 failed: fw status = 0x2
      May 11 19:11:04 Tower kernel: mlx4_core 0000:01:00.0: command 0x54 failed: fw status = 0x2
      May 11 19:11:04 Tower kernel: mlx4_core 0000:01:00.0: command 0x54 failed: fw status = 0x2
  13. I have made an addition at the very beginning of my go file to make the needed setting:
      # Set Mellanox cards to ethernet
      echo eth > /sys/bus/pci/devices/0000\:01\:00.0/mlx4_port1
      echo eth > /sys/bus/pci/devices/0000\:01\:00.0/mlx4_port2
      This works. There is however one major disadvantage... The interfaces come up as being down in Unraid. Putting them up is not an issue in itself, but docker will only recognize the interfaces that are up when docker is started, which means that after every reboot I need to:
      - enable the interfaces for the Mellanox card
      - stop docker
      - start docker
      There appears to be an option to make the setting persistent, but I do not seem to find the necessary folders in the Unraid filesystem (and also, since Unraid is rebuilt at every reboot, I think making these settings there will not work anyhow):
      Option 2 (in case RDMA is in use): Edit the file /etc/rdma/mlx4.conf.
      Note: This file is read when the mlx4_core module is loaded and used to set the port types for any hardware found.
      Format: <pci_device_of_card> <port1_type> [port2_type]
      port1 and port2: one of "auto", "ib", or "eth". port1 is required at all times, port2 is required for dual-port cards.
      For example: 0000:05:00.0 eth eth
      Perform a reboot to reload the modules:
      # reboot
      I cannot find the mlx4.conf file in the Unraid file system, but it -does- appear in the filesystem of the dockers: /etc/modprobe.d/mlx4.conf. Anyone any idea?
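     A possible way around the manual steps listed above (an editorial sketch, not a tested Unraid recipe) is to bring the ports up in the go file immediately after switching them to Ethernet mode, so that they are already up by the time the docker service starts. The interface names eth2 and eth3 are assumptions; they depend on how the system enumerates the Mellanox ports:

       # /boot/config/go (fragment)
       # Set Mellanox cards to ethernet
       echo eth > /sys/bus/pci/devices/0000\:01\:00.0/mlx4_port1
       echo eth > /sys/bus/pci/devices/0000\:01\:00.0/mlx4_port2

       # Bring the (assumed) new interfaces up before docker starts,
       # so docker registers them without a manual stop/start.
       ip link set eth2 up
       ip link set eth3 up

     Whether this runs early enough relative to Unraid's own network initialization is not guaranteed; it is only a starting point.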
  14. My backup server still has Marvell... This worked (but this has been an intermittent issue for a long time).
  15. Did you see my post? I got mine working: https://forums.unraid.net/topic/79988-mellanox-interface-not-showing/?tab=comments#comment-742772
  16. Primary server upgraded from latest beta to 6.7 in one go, no issues
  17. Backup server upgraded from the previous stable to 6.7 in one go, no issues.
  18. Did that... but in the options dialog there is no option to change the port.
  19. Checking this out:
      # lspci | grep Mellanox
      01:00.0 InfiniBand: Mellanox Technologies MT26428 [ConnectX IB QDR, PCIe 2.0 5GT/s] (rev b0)
      # echo ib > /sys/bus/pci/devices/0000\:01\:00.0/mlx4_port1
      # echo eth > /sys/bus/pci/devices/0000\:01\:00.0/mlx4_port2
      From: https://community.mellanox.com/s/article/howto-change-port-type-in-mellanox-connectx-3-adapter
      I actually found these files in the file system and was able to change the port type; the ports are now active! The whole thing is not persistent the way I did it now. There is an option to do it with persistence, but I think in the case of Unraid I cannot do that myself.
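     To double-check the result of the commands above, the same sysfs files can simply be read back; a small sketch, assuming the card is still at PCI address 0000:01:00.0:

       # Shows the current mode of each port ("ib", "eth" or "auto").
       cat /sys/bus/pci/devices/0000\:01\:00.0/mlx4_port1
       cat /sys/bus/pci/devices/0000\:01\:00.0/mlx4_port2

       # After switching to "eth", the new Ethernet interfaces should be listed.
       ip link show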
  20. Arrghh.... I was afraid of something like that... That's what I get for getting enthusiastic too soon. Just to be sure: is there nothing I can do about it? What would your best guess be: wait it out to see if Unraid will support it, or go out and eBay new cards?
  21. Thanks!
      - Stopped the array. The file now shows the following:
      # PCI device 0x8086:0x1533 (igb)
      SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="ac:1f:6b:94:71:62", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"
      # PCI device 0x8086:0x1533 (igb)
      SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="ac:1f:6b:94:71:63", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth1"
      - Removed the file
      - Rebooting...
      - System back up. In settings there are still only eth0 and eth1.
  22. I just installed my Mellanox card in my server (and another one in my backup server). The card is an HP 592520-B21 4X QDR CX-2 Dual Port Adapter (Mellanox ConnectX-2 MHQH29B-XTR).
      Interface - System devices shows:
      IOMMU group 1:
      [8086:1901] 00:01.0 PCI bridge: Intel Corporation Xeon E3-1200 v5/E3-1500 v5/6th Gen Core Processor PCIe Controller (x16) (rev 05)
      [15b3:673c] 01:00.0 InfiniBand: Mellanox Technologies MT25408A0-FCC-QI ConnectX, Dual Port 40Gb/s InfiniBand / 10GigE Adapter IC with PCIe 2.0 x8 5.0GT/s In... (rev b0)
      I am not fully Linux savvy when it comes to this stuff, but I think the driver is loading:
      01:00.0 InfiniBand: Mellanox Technologies MT25408A0-FCC-QI ConnectX, Dual Port 40Gb/s InfiniBand / 10GigE Adapter IC with PCIe 2.0 x8 5.0GT/s In... (rev b0)
      Subsystem: Mellanox Technologies MT26428 [ConnectX VPI PCIe 2.0 5GT/s - IB QDR / 10GigE]
      Kernel driver in use: mlx4_core
      Kernel modules: mlx4_core
      I am expecting to see an extra NIC in Settings - Network, but nothing is visible. There are no errors in the log (diagnostics attached). Specific to the Mellanox card I see the following, which does not look like an error:
      May 10 14:12:48 Tower kernel: mlx4_core: Mellanox ConnectX core driver v4.0-0
      May 10 14:12:48 Tower kernel: mlx4_core: Initializing 0000:01:00.0
      And a bit further down:
      May 10 14:12:48 Tower kernel: mlx4_core 0000:01:00.0: 32.000 Gb/s available PCIe bandwidth (5 GT/s x8 link)
      What am I looking at here? Am I correct in thinking that a second interface should "appear" in Unraid settings? Or does it not work that way?
      ip link show shows:
      1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
         link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
      2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN mode DEFAULT group default qlen 1000
         link/ipip 0.0.0.0 brd 0.0.0.0
      3: gre0@NONE: <NOARP> mtu 1476 qdisc noop state DOWN mode DEFAULT group default qlen 1000
         link/gre 0.0.0.0 brd 0.0.0.0
      4: gretap0@NONE: <BROADCAST,MULTICAST> mtu 1462 qdisc noop state DOWN mode DEFAULT group default qlen 1000
         link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
      5: erspan0@NONE: <BROADCAST,MULTICAST> mtu 1450 qdisc noop state DOWN mode DEFAULT group default qlen 1000
         link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff
      6: ip_vti0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN mode DEFAULT group default qlen 1000
         link/ipip 0.0.0.0 brd 0.0.0.0
      7: sit0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN mode DEFAULT group default qlen 1000
         link/sit 0.0.0.0 brd 0.0.0.0
      10: eth0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP mode DEFAULT group default qlen 1000
         link/ether ac:1f:6b:94:71:62 brd ff:ff:ff:ff:ff:ff
      11: eth1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP mode DEFAULT group default qlen 1000
         link/ether ac:1f:6b:94:71:62 brd ff:ff:ff:ff:ff:ff
      12: bond0: <BROADCAST,MULTICAST,PROMISC,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue master br0 state UP mode DEFAULT group default qlen 1000
         link/ether ac:1f:6b:94:71:62 brd ff:ff:ff:ff:ff:ff
      13: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
         link/ether ac:1f:6b:94:71:62 brd ff:ff:ff:ff:ff:ff
      14: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default
         link/ether 02:42:dc:26:eb:4b brd ff:ff:ff:ff:ff:ff
      netstat -i shows:
      Iface     MTU    RX-OK  RX-ERR RX-DRP RX-OVR   TX-OK  TX-ERR TX-DRP TX-OVR Flg
      bond0    1500  6333525      0    181      0  4445360      0      0      0 BMPmRU
      br0      1500  6304380      0     39      0  4293148      0      0      0 BMRU
      docker0  1500        0      0      0      0        0      0      0      0 BMU
      eth0     1500  6304562      0      0      0  4445358      0      0      0 BMsRU
      eth1     1500    28962      0      0      0        0      0      0      0 BMsRU
      lo      65536      112      0      0      0      112      0      0      0 LRU
      Any help is appreciated.
      tower-diagnostics-20190510-1306.zip
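     A possible next diagnostic step for the post above (an editorial sketch, assuming the card sits at 0000:01:00.0 as in the lspci output): mlx4_core on its own does not create Ethernet interfaces, so nothing appears under Settings - Network while the ports are still in InfiniBand mode. Checking the loaded modules and the current port mode narrows this down, which is essentially what the go-file posts (13 and 19 above) resolve:

       # Ethernet needs mlx4_en on top of mlx4_core; InfiniBand uses mlx4_ib.
       lsmod | grep mlx4

       # Confirm which kernel driver is bound to the card.
       lspci -k | grep -A 3 Mellanox

       # Current mode of each port; "ib" means no ethN interface will appear.
       cat /sys/bus/pci/devices/0000\:01\:00.0/mlx4_port1
       cat /sys/bus/pci/devices/0000\:01\:00.0/mlx4_port2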
  23. They seem to be gone since the last update..
  24. And it works out great! All my dockers are back up and running. It cost me a cache drive crash to start it off, but effectively I have now "powerwashed" my complete cache drive.