meh301

Everything posted by meh301

  1. I run a ZFS pool mounted to /mnt/gaochan, on which I store all my VMs and Docker containers. Post-update, this pool is fully detected and functional, and no errors are being logged in relation to the startup of Docker. Furthermore, the docker.img image seems to be correctly detected (i.e. not shown in red) in the settings. The following is a screenshot of my Docker settings:

     I took a quick look at every mention of Docker in the diagnostics, but the docker.txt log is blank and there does not seem to be anything out of the ordinary in syslog.txt... As this also affects VMs, I do not think it is only a Docker issue. I am aware that this could be due to some incompatibility with the ZFS plugin itself, and will move this to the plugin thread if instructed to do so, though there seem to be mentions of similar issues from different configurations as well.

     Next-day update: I have tried copying all my data (docker.img + appdata) to a normal Unraid pool. Again, no errors, but no containers either.

     kenchitaru-serv-diagnostics-20210313-0510.zip
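     For anyone hitting something similar, these are the checks I would run next from the console. This is just a sketch: the commands are standard Docker/Linux tools, and the log path assumes a stock Unraid install:

         losetup -a                        (is docker.img actually loop-mounted?)
         docker info                       (does the daemon itself respond?)
         docker ps -a                      (are any containers registered at all?)
         tail -n 50 /var/log/docker.log    (daemon log, if your build keeps one here)

     If docker ps -a comes back empty while docker.img is loop-mounted, the daemon is most likely starting against a fresh or unreadable image rather than the one that holds the existing containers.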
  2. I had previously been using a ZFS pool for VM and Docker storage. Post-update, all Docker containers and VMs are no longer detected, even with the path explicitly written in. The ZFS pool is alive and well, so I am not sure what is wrong at this point... I can attach more information if required.
  3. eth3 was down while the diagnostics were being saved because it was disconnected... I have two 1gbe interfaces linked to the router, giving Unraid two separate IPs (br0 and br1), and I have two client PCs with 10gbe connectivity connected to the Unraid server directly (no 10gbe switch). Unraid bridges the 10gbe connections to the LAN, with each 10gbe network having its own dedicated 1gbe link to the router.

     The issue with doing this was that only one of the client links would get full 10gbe speed to Unraid, while the other would be limited to 1gbe, even though everything reported it as 10gbe. Bridging both 10gbe links to a single 1gbe link also does not allow both client links to achieve 10gbe. I managed to get it to work by putting one of the Unraid 1gbe links to the router on a different subnet; it only works if the interfaces connecting Unraid to the router are on entirely different subnets. A sketch of the resulting config is below.

     I'm doing this for two reasons:
     1. I am in a small Japanese apartment and can't really route any more cables or fit a switch anywhere.
     2. I have a 2gbe internet connection but only 1gbe interfaces to the router; this at the very least allows me to saturate the WAN with two computers.
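     Roughly what the working layout looks like in /boot/config/network.cfg. The key names follow what I believe is Unraid's standard network.cfg format, and the addresses are examples only, so treat this as a sketch rather than a drop-in config:

         # br0: first 1gbe uplink to the router bridged with the first 10gbe direct link
         IFNAME[0]="br0"
         BRNICS[0]="eth0 eth2"
         USE_DHCP[0]="no"
         IPADDR[0]="192.168.1.10"
         NETMASK[0]="255.255.255.0"
         GATEWAY[0]="192.168.1.1"

         # br1: second 1gbe uplink bridged with the second 10gbe direct link,
         # on an entirely different subnet (this is what let both links do 10gbe)
         IFNAME[1]="br1"
         BRNICS[1]="eth1 eth3"
         USE_DHCP[1]="no"
         IPADDR[1]="192.168.2.10"
         NETMASK[1]="255.255.255.0"
         GATEWAY[1]="192.168.2.1"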
  4. Okay, so, umm... apparently Unraid cannot do two 10gbe bridges on the same subnet! If you have two IPs to the server, they need to be on clearly separate subnets. I am not sure if this is a bug or a limitation.
  5. I'm at a complete loss and can't find anything remotely similar online. I have an Intel X550-T2 card in my Unraid server for direct 10gbe connectivity to two external computers. One of the ports works perfectly fine, getting around 9gbe, while the other port stays fixed at 1gbe, even though it shows up as having a 10gbe link.

     Initially I thought it could have been a bridging issue. My network setup (for convenience) is to bridge eth2 (10g) to eth0 (1g, to the internet) and eth3 (10g) to eth1 (1g, also to the internet). (In the screenshot they are inverted because I was testing things out.) This results in eth3 achieving 10g but eth2 only getting 1g. I also tried inverting the bridges, but eth2 does not go beyond 1g. I then tried a direct connection between server and client, and eth2 can do 10g without any issues. Is there some sort of bridging limitation? I also tried bridging just one of the 10g connections to eth0: eth3 can do 10g, but eth2 in the same situation again gets stuck at 1g... Both show up with the exact same settings in ethtool:

     root@Kenchitaru-Serv:~# ethtool eth3
     Settings for eth3:
             Supported ports: [ TP ]
             Supported link modes:   100baseT/Full
                                     1000baseT/Full
                                     10000baseT/Full
             Supported pause frame use: Symmetric
             Supports auto-negotiation: Yes
             Supported FEC modes: Not reported
             Advertised link modes:  100baseT/Full
                                     1000baseT/Full
                                     10000baseT/Full
             Advertised pause frame use: Symmetric
             Advertised auto-negotiation: Yes
             Advertised FEC modes: Not reported
             Speed: 10000Mb/s
             Duplex: Full
             Port: Twisted Pair
             PHYAD: 0
             Transceiver: internal
             Auto-negotiation: on
             MDI-X: Unknown
             Supports Wake-on: d
             Wake-on: d
             Current message level: 0x00000007 (7)
                                    drv probe link
             Link detected: yes

     root@Kenchitaru-Serv:~# ethtool eth2
     Settings for eth2:
             Supported ports: [ TP ]
             Supported link modes:   100baseT/Full
                                     1000baseT/Full
                                     10000baseT/Full
             Supported pause frame use: Symmetric
             Supports auto-negotiation: Yes
             Supported FEC modes: Not reported
             Advertised link modes:  100baseT/Full
                                     1000baseT/Full
                                     10000baseT/Full
             Advertised pause frame use: Symmetric
             Advertised auto-negotiation: Yes
             Advertised FEC modes: Not reported
             Speed: 10000Mb/s
             Duplex: Full
             Port: Twisted Pair
             PHYAD: 0
             Transceiver: internal
             Auto-negotiation: on
             MDI-X: Unknown
             Supports Wake-on: d
             Wake-on: d
             Current message level: 0x00000007 (7)
                                    drv probe link
             Link detected: yes

     I've also attached a diagnostics zip just in case: kenchitaru-serv-diagnostics-20200810-1937.zip
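     For anyone wanting to reproduce the throughput numbers above: ethtool only shows the negotiated link speed, so the actual measurement needs something like a plain iperf3 run between the two ends (the address here is an example; substitute the client's IP on the 10gbe link):

         iperf3 -s                        (on the client PC)
         iperf3 -c 192.168.1.100 -t 30    (on the Unraid server, pointed at the client)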
  6. I ran it and it works! Guess I'll have to buy a licence for Unraid now
  7. Here are the system logs: kenchitaru-serv-diagnostics-20200702-0958.zip
  8. Hi, I am building a virtualization + Docker server and decided I wanted to test out Unraid to make sure it is a viable solution. I purchased a Supermicro X10DRi from the used market for the task and installed a trial of Unraid to a USB stick. On initial boot, the only available ethernet ports were those of my add-in cards (two 4x1gbe cards and one 2x10gbe card), and these worked fine for networking once set up. The issue is that the onboard LAN ports do not appear in the device list at all, and I kind of want to forward all my add-in cards to a VM. Initially I suspected that a jumper on the motherboard was in the "disabled" state, but that was not the case, and I know the ports work: even though they aren't present in the interface, they do light up when plugged in.

     I'm thoroughly stumped at this point and not quite sure what else I could do. The onboard controller is the Intel I350, so it shouldn't be an issue, and both ports are present in the PCIe device list in their own respective groups. Running lshw -class network (IPMI FTW) shows an UNCLAIMED state for both ethernet ports, and I'm not sure how to claim them (some checks worth trying are sketched below). It is somewhat hard to access files on the server right now due to the connection issue, so I'll only upload system logs if absolutely necessary.
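     For reference, these are standard Linux checks that should show whether the igb driver (which handles the I350) is binding to those ports; nothing here is Unraid-specific:

         lspci -nnk | grep -iA3 ethernet    (which kernel driver, if any, is bound to each NIC)
         lsmod | grep igb                   (is the I350's igb module loaded at all?)
         dmesg | grep -i igb                (did the driver log an error while probing?)
         modprobe igb                       (try loading the module manually if it isn't)

     An UNCLAIMED entry in lshw usually just means no driver is currently bound to the device, so the dmesg output is the most likely place for a clue.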