About Tom3

  1. The problem interface appears correct. Depending on age and vendor, some Ethernet NICs had autonegotiation problems. You can turn off autonegotiation on the problem interface and force 1000 Mb/s full duplex to see whether it comes up correctly:

         $ ethtool -s eth0 autoneg off speed 1000 duplex full

     This is not a 'sticky' setting; it should revert to the default on the next boot. -- Tom
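     After forcing the link, you can confirm what the NIC actually settled on. A minimal sketch, assuming eth0 is the problem interface as in the command above; it only reads the NIC state:

     ```shell
     # Re-read the negotiated state once the link retrains.
     # 'ethtool <iface>' prints the Speed / Duplex / Auto-negotiation
     # fields; grep narrows the output to the interesting lines.
     ethtool eth0 | grep -E 'Speed|Duplex|Auto-negotiation'
     ```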
  2. Ok. I misunderstood the direction of the problem in the original post. Check the interface settings on the problem Unraid server using the CLI ethtool command. Example from my system:

         root@Tower:~# ethtool eth0
         Settings for eth0:
                 Supported ports: [ TP ]
                 Supported link modes:   10baseT/Half 10baseT/Full
                                         100baseT/Half 100baseT/Full
                                         1000baseT/Full
                 Supported pause frame use: Symmetric
                 Supports auto-negotiation: Yes
                 Supported FEC modes: Not reported
                 Advertised link modes:  10baseT/Half 10baseT/Full
                                         100baseT/Half 100baseT/Full
                                         1000baseT/Full
                 Advertised pause frame use: Symmetric
                 Advertised auto-negotiation: Yes
                 Advertised FEC modes: Not reported
                 Speed: 1000Mb/s
                 Duplex: Full
                 Port: Twisted Pair
                 PHYAD: 1
                 Transceiver: internal
                 Auto-negotiation: on
                 MDI-X: off (auto)
                 Supports Wake-on: pumbg
                 Wake-on: g
                 Current message level: 0x00000007 (7)
                                        drv probe link
                 Link detected: yes
         root@Tower:~#
  3. If the link comes up at 1G in one direction, then it's unlikely to be a cable problem: 1G uses all 4 pairs, full duplex, so if it works in one direction the cable is good with high probability. You may want to try a different client device. The symptoms suggest that the client is not auto-negotiating a 1G link, but the server is. This could be due to settings on the client, or a defect in the client. -- Tom
  4. 1 GbE has not used half duplex in a long time. The original spec did permit half duplex with a hub, but hubs have not been produced in many years. Since the advent of switches, 1000BASE-T (Cat 5 twisted pair) interfaces use full duplex. I'm going to guess that most, or perhaps even all, available interfaces do not support half-duplex 1 GbE. -- Tom
  5. On the Main screen, under Boot Device, click Flash. On the next screen, click 'Flash Backup'. It will ask where on the client you want to put the backup. -- Tom
  6. Tried it today and the MD5 hash problem is resolved. Everything appears to be working correctly. Thanks for repairing this! -- Tom
  7. It may depend on how you are doing the deletion. For example, the Ubuntu GUI file manager will by default move each file to the 'Trash' directory so it can be recovered later, which slows down massive file deletions terribly. Shift-Delete in the Ubuntu GUI deletes the files without this extra step and runs much faster. If you are deleting files from the command line in a terminal, then by default it doesn't use the Trash directory. -- Tom
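     The difference is easy to see from a terminal. A minimal sketch (the path and file count are made up for illustration; on GNOME-based desktops `gio trash` is roughly what the GUI's recoverable delete does):

     ```shell
     # Make a directory full of small scratch files
     mkdir -p /tmp/delete_demo
     for i in $(seq 1 1000); do touch "/tmp/delete_demo/file_$i"; done

     # Command-line deletion unlinks the files directly -- no copy into
     # ~/.local/share/Trash -- which is why it is so much faster than a
     # default GUI delete (the GUI path is roughly `gio trash <file>`):
     rm -rf /tmp/delete_demo
     ```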
  8. Thanks! It's back upon re-installing Community Applications, and "Fix Common Problems" is now updatable as well. -- Tom
  9. So I finally got the container to use eth1. What made this take so long is that the Docker tab in UNRAID showed the container as mapping to eth0 even when bridged to eth1; it is actually bridged to both eth0 and eth1. Here are my notes (for my own future sanity):
     --------------------------------------------------------------------
     There are multiple uses of the word 'bridge' in Docker networking.

     1. It is a Linux networking object. This can be seen with:

            root@Tower:~# route
            Kernel IP routing table
            Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
            default         modem.domain                    UG    216    0        0 br0
                                                            UG    227    0        0 br1
                                                            U     227    0        0 br1
                                                            UG    227    0        0 br1
                                                            U     216    0        0 br0
                                                            U     0      0        0 docker0
                                                            U     0      0        0 virbr0

        which shows that the networking bridge br1 is handling the route to the gateway, and:

            root@Tower:~# brctl show
            bridge name     bridge id               STP enabled     interfaces
            br0             8000.ac1f6b6c1fc4       no              eth0
            br1             8000.ac1f6b6c1fc5       no              eth1
            docker0         8000.02421a0073f7       no
            virbr0          8000.525400dab422       yes             virbr0-nic

        which shows that br1 is using eth1 as its interface.

     2. There is a separate bridge object in Docker that is a virtual Ethernet switch, with ports in the private address space (for the first virtual bridge). Each subsequent bridge gets the next block of internal addresses.

     3. A new Docker bridge object needs to be created that uses the Linux bridge br1. This is not to be confused with the UNRAID preconfigured Docker br1 network of type macvlan.

            root@Tower:~# docker network create -o "com.docker.network.bridge.name=br1" my-net

        This creates a new Docker network named my-net of type bridge (the type you get when no type is specified with the -d switch) that uses the br1 Linux bridge as the external interface.

     Then select the Docker tab, pick the Docker image you want, left-click and select Edit, and choose Custom : my-net for the virtual bridge from the drop-down list. The UNRAID window showing the container mapping doesn't show this; it shows the host default interface mapping. Both interfaces work in this specific container. -- Tom
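     The sequence above can be sketched end to end. A hedged sketch, not verified on UNRAID: the network name my-net follows the post, busybox is just a stand-in image for checking connectivity, and it assumes the Linux bridge br1 already exists and enslaves eth1:

     ```shell
     # Create a user-defined network with the default 'bridge' driver,
     # told to use the existing Linux bridge br1 instead of a new one:
     docker network create -o "com.docker.network.bridge.name=br1" my-net

     # Confirm the driver really is 'bridge' (not 'macvlan'):
     docker network inspect -f '{{.Driver}}' my-net

     # Attach a throwaway container and look at its addresses:
     docker run --rm --network my-net busybox ip addr
     ```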
  10. I opened up Apps this morning and it advised that there was an update, so I clicked Update. That caused the APPS menu to disappear, and rebooting did not bring it back. Anyone know how to get it back? -- Tom
  11. Thank you again for your patience! I created my-bridge of type -d bridge with the --subnet= option, and docker network inspect my-bridge looks good. When selecting Custom : my-bridge in the GUI for the Docker container, the displayed mappings are App to Host and bridge to, so it's setting the Docker app bridge internal IP address and routing that out to eth0. If I force the -p as an additional parameter, the Docker container crashes on run. I was able to get the web GUI to work on eth1 by changing the webgui parameter in the edit panel from:

          http://[IP]:[PORT:6080]/vnc.html?resize=remote&host=[IP]&port=[PORT:6080]&autoconnect=1

      to:

          http://[IP:]:[PORT:6080]/vnc.html?resize=remote&host=[IP]&port=[PORT:6080]&autoconnect=1

      Now the container responds with a GUI on both eth0 and eth1. Progress, sort of. -- Tom
  12. Thanks. Absolutely no luck. The eth1 interface is working, as I can log in to the management interface on it. However, when I set Custom : br1 as the network for the Docker container, it assigns an IP address within the pool of 5 available, so I think it is using macvlans. I can successfully ping both addresses but cannot reach the Docker container on its defined ports. When setting the container to use Bridge, it uses the address shared with the management GUI, is pingable at that address, and responds on the appropriate ports. The port mapping, etc. is the same between 'Bridge' and 'Custom : br1'; the only thing changed is the specific network. Can one add a custom bridge that is of type bridge and not of type macvlan? Does that even make sense? I cannot find 'add' anywhere. -- Tom
  13. So I've found the following file: /boot/config/network.cfg. It appears to be the one that sets up Custom : br0 and Custom : br1:

          # Generated settings:
          IFNAME[0]="br0"
          DHCP_KEEPRESOLV="no"
          DHCP6_KEEPRESOLV="no"
          BRNAME[0]="br0"
          BRNICS[0]="eth0"
          BRSTP[0]="no"
          BRFD[0]="0"
          PROTOCOL[0]="ipv4"
          USE_DHCP[0]="yes"
          USE_DHCP6[0]="yes"
          IFNAME[1]="br1"
          BRNAME[1]="br1"
          BRSTP[1]="no"
          BRFD[1]="0"
          BRNICS[1]="eth1"
          PROTOCOL[1]="ipv4"
          USE_DHCP[1]="yes"
          SYSNICS="2"

      One can see how to edit that to add another bridge; however, I don't see what parameter tells the new custom bridge whether to use macvlan or bridge. Some posts refer to the network settings page for configuring the bridges / macvlans, but I don't see any options there to add a new bridge, nor to set its parameters. eth1 (the interface I'm trying to get one specific Docker container to use) shows up in ifconfig -a with all the correct parameters (correct IP address, netmask, up, etc.). Is there some definition of the parameters in this file that shows how to configure the new bridge into bridge mode? -- Tom
  14. Same request. This has been a long rabbit trail today. I have attempted to create a new bridge on eth1 (10. . . address):

      1. Stop the Docker & VM services with the GUI.
      2. Create a new bridge:
         2A. $ ip link add new_bridge_name type bridge
         2B. $ ip addr add dev new_bridge_name
         2C. $ ip link set new_bridge_name up
         This creates the appropriate entry as a bridge with the correct external IP address, bridge name, etc.
      3. Edit /boot/config/docker.cfg, appending:
             DOCKER_OPTS="-b=new_bridge_name"
      4. Start the Docker & VM services with the GUI.

      The only file I could find to edit is /boot/config/docker.cfg; I'm not sure this is the right place, or that DOCKER_OPTS is the right directive. This just makes br1 disappear, and new_bridge_name does not appear, so it can't be selected. Commenting out the DOCKER_OPTS line and restarting Docker & VM gets br1 back. I am not using br1 because it is of type macvlan. Any advice here? -- Tom, N5EG
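      Steps 2A through 2C can be sketched as a runnable sequence. A hedged sketch: the bridge name follows the post, the 10.0.0.1/24 address is a placeholder for the elided one, and it requires root. Note that dockerd's -b/--bridge option replaces docker0 as the default bridge rather than adding a new selectable network, which may be why br1 vanished from the list:

      ```shell
      # Create the bridge, assign an address, and bring it up
      ip link add new_bridge_name type bridge
      ip addr add 10.0.0.1/24 dev new_bridge_name   # placeholder address
      ip link set new_bridge_name up

      # Verify the bridge exists and is administratively up
      ip -br link show new_bridge_name
      ```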