kennymc.c

  1. Duplicati is supposed to reach a backup server as its target that is only reachable over the VPN. curl localhost:8200 returns no result in either container, but I can ping the remote backup server. Inside the Docker bridge network it should actually work; I just can't reach the web UI from outside. I also just noticed that, strangely, I can reach the Duplicati API on the same port from the Unraid host, and can even run a backup to the server that way via a script. So it really only seems to affect the web interface.
  2. I'm currently trying to connect a Duplicati Docker container (linuxserver/duplicati) to a remote server through a PPTP VPN container (adito/vpnc). Up to a certain point this works as intended, but I can't get the Duplicati port (8200) routed to the outside through the VPN container, even though it is mapped accordingly in the VPN container's config. In the Duplicati container's console, curl ifconfig.io shows me the external IP of the VPN server, and in the VPN container I can download the Duplicati login page via wget localhost:8200. So it should really just be the host port, which apparently is not being forwarded to the host. nmap reports port 8200 as filtered. Changing the host port didn't help. Without the VPN running, Duplicati is also served on port 8200. I never had problems like this with other containers. I have also successfully run Duplicati through an OpenVPN container before, and port mapping wasn't a problem there either. However, the VPN connection now has to go through PPTP.

     Docker run Duplicati:

        /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker create --name='duplicati-vpn' -e TZ="Europe/Berlin" -e HOST_OS="Unraid" -e 'PUID'='99' -e 'PGID'='100' -v '/mnt/user/':'/source':'rw' -v '/mnt/user/appdata/duplicati/cert/':'/cert':'rw' -v '/mnt/user/appdata/duplicati/config/':'/config':'rw' --name=duplicati-vpn --net=container:vpnc 'linuxserver/duplicati'

     Docker run VPN:

        /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker create --name='vpnc' --net='bridge' -e TZ="Europe/Berlin" -e HOST_OS="Unraid" -e 'VPNC_GATEWAY'='xxx' -e 'VPNC_ID'='xxx' -e 'VPNC_SECRET'='xxx' -e 'VPNC_USERNAME'='xxx' -e 'VPNC_PASSWORD'='xxx' -p '8200:8200/tcp' --name=vpnc --privileged=true 'adito/vpnc'
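     Since duplicati-vpn shares the vpnc container's network namespace, Docker's published port is DNAT'ed to that namespace's bridge address, not to localhost. A minimal debugging sketch along those lines, assuming busybox-style tools are present in the image (which is not guaranteed for adito/vpnc):

        # Open a shell in the container that owns the network namespace.
        docker exec -it vpnc sh

        # Which address is Duplicati bound to? A listener on 127.0.0.1:8200 would
        # explain why wget localhost:8200 works while Docker's DNAT from the host
        # (which targets the container's bridge IP) shows up as "filtered".
        netstat -tln | grep 8200

        # Did the PPTP tunnel replace the default route? If replies to outside
        # clients are routed into the tunnel instead of back through the docker0
        # gateway, the port also appears filtered from the LAN.
        ip route show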
  3. So instead of using the 4 physical 1GbE interfaces, I create 4 VLANs with 2 subnets for the Mellanox card? Since only my router supports VLANs and my switch is unmanaged, I will have to configure this in Unraid.
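     For illustration only: what Unraid's VLAN settings do under the hood is create Linux VLAN sub-interfaces on the parent NIC, roughly like this (the interface name and VLAN ID are made-up examples, not taken from the thread):

        # Tag VLAN 10 on the Mellanox port (here assumed to be eth4) and give it
        # an address in its own subnet; Unraid's GUI generates equivalent config.
        ip link add link eth4 name eth4.10 type vlan id 10
        ip addr add 192.168.10.205/24 dev eth4.10
        ip link set eth4.10 up

     Note that an unmanaged switch forwards tagged frames untouched at best, so both endpoints would have to tag and untag the traffic themselves.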
  4. Disconnecting all 1G NICs or putting them into a different subnet is not an option for me; they need to communicate with other devices in the 1GbE subnet. I read that round-robin DNS by default does not check which IP is reachable, so I have to find another solution.
  5. The problem is that the USB interface will not always be connected to the client. I'm looking for a way to automatically choose the 10GbE subnet IP for the server when the USB interface is connected, and the router subnet IP when I'm connected via Wi-Fi. I googled a little bit about DNS load balancing; it looks like this could be the solution for this.
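     A minimal sketch of one way to automate that on the macOS client, assuming the USB NIC appears as en7 and the server is reachable as 192.168.10.205 on the 10GbE subnet and 192.168.1.205 on the router subnet (the interface name, addresses, and hostname are all assumptions):

        #!/bin/sh
        # Point a hosts entry for the server at whichever subnet is usable,
        # based on whether the USB interface currently has an active link.
        SERVER_NAME="unraid.example"      # hypothetical name used for the server
        FAST_IP="192.168.10.205"          # assumed 10GbE subnet address
        SLOW_IP="192.168.1.205"           # router subnet address
        if ifconfig en7 2>/dev/null | grep -q "status: active"; then
            IP="$FAST_IP"
        else
            IP="$SLOW_IP"
        fi
        # Replace any previous entry for the name, then append the current one.
        sudo sed -i '' "/[[:space:]]$SERVER_NAME\$/d" /etc/hosts
        echo "$IP $SERVER_NAME" | sudo tee -a /etc/hosts >/dev/null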
  6. Thanks, that seems to work. Now I get up to 2.33 Gbit/s with 4 parallel connections. Via SMB I get around 1.27 Gbit/s to the cache drive with a 4 GB file, but I think this could be related to the missing macOS driver that enables jumbo frames. Maybe I will try this later with the Windows VM. But now I have to be connected with 2 network interfaces to reach both subnets. Is there a way to combine these? Or is this not possible because my switch is unmanaged? Are there other ways to achieve higher speeds without a separate 10GbE subnet? Sorry for that question, but I never worked with different subnets before. I noticed that Unraid lost the DNS server config after I changed the IP configuration, although my router IP was shown as the DNS server for all interfaces. I had this issue before, and setting the DNS server assignment to manual and back to automatic solved it.
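     One technique that can raise SMB throughput without a second subnet is Samba's multichannel support, which lets a client open several TCP connections in parallel. A sketch of what that could look like in Unraid's SMB extras (the file path is Unraid's usual /boot/config/smb-extra.conf; treat the whole snippet as an untested assumption, since multichannel was still considered experimental in Samba at the time):

        # Hypothetical smb-extra.conf additions:
        server multi channel support = yes
        # Samba's extended interface syntax can advertise a speed so clients
        # prefer the 10GbE address (value in bits per second):
        interfaces = "192.168.1.205;speed=10000000000"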
  7. No, all Intel NICs are using 192.168.1.201-204 and the Mellanox card is using 192.168.1.205.
  8. I have now tried disconnecting the internal NICs from the switch and suddenly get about 1.37 Gbit/s to the server and 1.8 Gbit/s in the other direction with 4 parallel connections. With one connection it is only 1.23 Gbit/s. Still not good values, but at least faster than 1 Gbit/s. If I connect the NICs to the switch again, the speed drops back to the old level. Does Unraid still combine several interfaces internally despite deactivated bonding, and is the speed set to the lowest common value? I can't explain it any other way at the moment. Or am I missing something in iperf?
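     One way to verify which port actually carries the test traffic is to watch the interface byte counters on the server while iperf3 runs; a sketch assuming the Mellanox card shows up as eth4 (the interface name is an assumption):

        # Server side: listen only on the Mellanox IP, as in the earlier posts.
        iperf3 -s --bind 192.168.1.205

        # In a second shell, watch whether the bytes really cross eth4:
        watch -n1 'cat /sys/class/net/eth4/statistics/rx_bytes \
                       /sys/class/net/eth4/statistics/tx_bytes'

        # And confirm the negotiated link speed of that port:
        ethtool eth4 | grep -i Speed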
  9. I've been trying for some time to find the reason why I can't get speeds above 1 Gbit/s to my Unraid server with a built-in Mellanox ConnectX-3 10G SFP+ Ethernet card (MCX311A). My macOS client is connected via a 2.5G USB NIC with a Realtek RTL8156 chip. In between is a Zyxel XGS1010-12 switch, with the server connected to the SFP+ port via a DAC cable. A 10G link is shown on the SFP+ port and a 2.5G link on the 2.5G port. However, via SMB and iperf3 I only get about 1 Gbit/s in both directions:

     Unraid: iperf3 -s --bind 192.168.1.205
     Client:

     iperf3 -c 192.168.1.205
     Connecting to host 192.168.1.205, port 5201
     [  5] local 192.168.1.120 port 52161 connected to 192.168.1.205 port 5201
     [ ID] Interval           Transfer     Bitrate
     [  5]   0.00-1.01   sec   116 MBytes   962 Mbits/sec
     [  5]   1.01-2.00   sec   111 MBytes   940 Mbits/sec
     [  5]   2.00-3.00   sec   112 MBytes   941 Mbits/sec
     [  5]   3.00-4.01   sec   113 MBytes   941 Mbits/sec
     [  5]   4.01-5.00   sec   111 MBytes   941 Mbits/sec
     [  5]   5.00-6.01   sec   112 MBytes   939 Mbits/sec
     [  5]   6.01-7.00   sec   112 MBytes   944 Mbits/sec
     [  5]   7.00-8.01   sec   112 MBytes   937 Mbits/sec
     [  5]   8.01-9.00   sec   112 MBytes   943 Mbits/sec
     [  5]   9.00-10.00  sec   112 MBytes   941 Mbits/sec
     - - - - - - - - - - - - - - - - - - - - - - - - -
     [ ID] Interval           Transfer     Bitrate
     [  5]   0.00-10.00  sec  1.10 GBytes   943 Mbits/sec   sender
     [  5]   0.00-10.00  sec  1.10 GBytes   941 Mbits/sec   receiver
     iperf Done.

     iperf3 -c 192.168.1.205 -R
     Connecting to host 192.168.1.205, port 5201
     Reverse mode, remote host 192.168.1.205 is sending
     [  5] local 192.168.1.120 port 52205 connected to 192.168.1.205 port 5201
     [ ID] Interval           Transfer     Bitrate
     [  5]   0.00-1.00   sec   112 MBytes   939 Mbits/sec
     [  5]   1.00-2.00   sec   112 MBytes   939 Mbits/sec
     [  5]   2.00-3.00   sec   112 MBytes   941 Mbits/sec
     [  5]   3.00-4.00   sec   112 MBytes   941 Mbits/sec
     [  5]   4.00-5.00   sec   112 MBytes   941 Mbits/sec
     [  5]   5.00-6.00   sec   112 MBytes   941 Mbits/sec
     [  5]   6.00-7.00   sec   112 MBytes   941 Mbits/sec
     [  5]   7.00-8.00   sec   112 MBytes   941 Mbits/sec
     [  5]   8.00-9.00   sec   107 MBytes   898 Mbits/sec
     [  5]   9.00-10.00  sec   101 MBytes   843 Mbits/sec
     - - - - - - - - - - - - - - - - - - - - - - - - -
     [ ID] Interval           Transfer     Bitrate         Retr
     [  5]   0.00-10.00  sec  1.08 GBytes   930 Mbits/sec   26   sender
     [  5]   0.00-10.00  sec  1.08 GBytes   927 Mbits/sec        receiver
     iperf Done.

     My other 4 internal Intel NICs are also connected to the same switch, but the IP used is that of the Mellanox card. Even with multiple parallel streams it does not get faster:

     iperf3 -c 192.168.1.205 -P 4
     Connecting to host 192.168.1.205, port 5201
     [  5] local 192.168.1.120 port 52233 connected to 192.168.1.205 port 5201
     [  7] local 192.168.1.120 port 52234 connected to 192.168.1.205 port 5201
     [  9] local 192.168.1.120 port 52235 connected to 192.168.1.205 port 5201
     [ 11] local 192.168.1.120 port 52236 connected to 192.168.1.205 port 5201
     [ ID] Interval           Transfer     Bitrate
     [  5]   0.00-1.00   sec  29.9 MBytes   251 Mbits/sec
     [  7]   0.00-1.00   sec  29.8 MBytes   250 Mbits/sec
     [  9]   0.00-1.00   sec  29.8 MBytes   250 Mbits/sec
     [ 11]   0.00-1.00   sec  29.3 MBytes   246 Mbits/sec
     [SUM]   0.00-1.00   sec   119 MBytes   997 Mbits/sec
     - - - - - - - - - - - - - - - - - - - - - - - - -
     [  5]   1.00-2.00   sec  28.1 MBytes   236 Mbits/sec
     [  7]   1.00-2.00   sec  27.6 MBytes   231 Mbits/sec
     [  9]   1.00-2.00   sec  28.1 MBytes   236 Mbits/sec
     [ 11]   1.00-2.00   sec  28.1 MBytes   236 Mbits/sec
     [SUM]   1.00-2.00   sec   112 MBytes   939 Mbits/sec
     - - - - - - - - - - - - - - - - - - - - - - - - -
     [  5]   2.00-3.00   sec  28.2 MBytes   236 Mbits/sec
     [  7]   2.00-3.00   sec  28.1 MBytes   236 Mbits/sec
     [  9]   2.00-3.00   sec  28.2 MBytes   236 Mbits/sec
     [ 11]   2.00-3.00   sec  28.2 MBytes   236 Mbits/sec
     [SUM]   2.00-3.00   sec   113 MBytes   945 Mbits/sec
     - - - - - - - - - - - - - - - - - - - - - - - - -
     [  5]   3.00-4.00   sec  27.9 MBytes   234 Mbits/sec
     [  7]   3.00-4.00   sec  27.3 MBytes   229 Mbits/sec
     [  9]   3.00-4.00   sec  27.9 MBytes   234 Mbits/sec
     [ 11]   3.00-4.00   sec  28.3 MBytes   238 Mbits/sec
     [SUM]   3.00-4.00   sec   111 MBytes   935 Mbits/sec
     - - - - - - - - - - - - - - - - - - - - - - - - -
     [  5]   4.00-5.00   sec  28.1 MBytes   236 Mbits/sec
     [  7]   4.00-5.00   sec  28.1 MBytes   236 Mbits/sec
     [  9]   4.00-5.00   sec  28.1 MBytes   235 Mbits/sec
     [ 11]   4.00-5.00   sec  28.1 MBytes   236 Mbits/sec
     [SUM]   4.00-5.00   sec   112 MBytes   942 Mbits/sec
     - - - - - - - - - - - - - - - - - - - - - - - - -
     [  5]   5.00-6.00   sec  28.2 MBytes   236 Mbits/sec
     [  7]   5.00-6.00   sec  28.1 MBytes   236 Mbits/sec
     [  9]   5.00-6.00   sec  28.1 MBytes   236 Mbits/sec
     [ 11]   5.00-6.00   sec  28.2 MBytes   236 Mbits/sec
     [SUM]   5.00-6.00   sec   113 MBytes   944 Mbits/sec
     - - - - - - - - - - - - - - - - - - - - - - - - -
     [  5]   6.00-7.00   sec  28.1 MBytes   235 Mbits/sec
     [  7]   6.00-7.00   sec  28.0 MBytes   235 Mbits/sec
     [  9]   6.00-7.00   sec  28.1 MBytes   236 Mbits/sec
     [ 11]   6.00-7.00   sec  28.1 MBytes   236 Mbits/sec
     [SUM]   6.00-7.00   sec   112 MBytes   942 Mbits/sec
     - - - - - - - - - - - - - - - - - - - - - - - - -
     [  5]   7.00-8.00   sec  28.2 MBytes   236 Mbits/sec
     [  7]   7.00-8.00   sec  28.3 MBytes   237 Mbits/sec
     [  9]   7.00-8.00   sec  28.2 MBytes   236 Mbits/sec
     [ 11]   7.00-8.00   sec  27.7 MBytes   232 Mbits/sec
     [SUM]   7.00-8.00   sec   112 MBytes   942 Mbits/sec
     - - - - - - - - - - - - - - - - - - - - - - - - -
     [  5]   8.00-9.00   sec  27.7 MBytes   232 Mbits/sec
     [  7]   8.00-9.00   sec  27.6 MBytes   232 Mbits/sec
     [  9]   8.00-9.00   sec  27.6 MBytes   232 Mbits/sec
     [ 11]   8.00-9.00   sec  27.8 MBytes   233 Mbits/sec
     [SUM]   8.00-9.00   sec   111 MBytes   928 Mbits/sec
     - - - - - - - - - - - - - - - - - - - - - - - - -
     [  5]   9.00-10.00  sec  27.3 MBytes   229 Mbits/sec
     [  7]   9.00-10.00  sec  29.2 MBytes   245 Mbits/sec
     [  9]   9.00-10.00  sec  27.3 MBytes   229 Mbits/sec
     [ 11]   9.00-10.00  sec  26.1 MBytes   219 Mbits/sec
     [SUM]   9.00-10.00  sec   110 MBytes   922 Mbits/sec
     - - - - - - - - - - - - - - - - - - - - - - - - -
     [ ID] Interval           Transfer     Bitrate
     [  5]   0.00-10.00  sec   282 MBytes   236 Mbits/sec   sender
     [  5]   0.00-10.00  sec   281 MBytes   235 Mbits/sec   receiver
     [  7]   0.00-10.00  sec   282 MBytes   237 Mbits/sec   sender
     [  7]   0.00-10.00  sec   281 MBytes   236 Mbits/sec   receiver
     [  9]   0.00-10.00  sec   281 MBytes   236 Mbits/sec   sender
     [  9]   0.00-10.00  sec   280 MBytes   235 Mbits/sec   receiver
     [ 11]   0.00-10.00  sec   280 MBytes   235 Mbits/sec   sender
     [ 11]   0.00-10.00  sec   279 MBytes   234 Mbits/sec   receiver
     [SUM]   0.00-10.00  sec  1.10 GBytes   944 Mbits/sec   sender
     [SUM]   0.00-10.00  sec  1.09 GBytes   940 Mbits/sec   receiver
     iperf Done.

     I first thought it was due to the missing macOS Big Sur drivers for the RTL8156, but I learned in another forum that, despite the missing drivers, other users can achieve speeds over 1 Gbit/s with it, even if only a 1000BaseT connection is shown in the system settings. I tried to pass the USB NIC through to a Windows VM, but even there I get the same result with jumbo frames activated and drivers installed. What could be the reason for this? I can only explain it by assuming that, despite using the correct IP, in reality one of the 1 Gbit/s Intel NICs is being used. I have also attached a diagnostics file: unraid-diagnostics-20210503-1848.zip
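     The symptoms (multiple NICs in one subnet, exactly 1 Gbit/s, speeds improving once the Intel ports are unplugged) match Linux's default ARP behavior: the server answers ARP requests for any of its addresses on any interface, so the client may learn 192.168.1.205 with the MAC of a 1GbE Intel port. A quick check and a commonly suggested mitigation (the sysctl names are standard Linux; applying them on Unraid is an assumption):

        # On the server: list interfaces and their MACs.
        ip -br link

        # On the macOS client: which MAC was learned for the server IP?
        # If it matches an Intel port instead of the Mellanox card, ARP
        # flux is the likely culprit.
        arp -n 192.168.1.205

        # Mitigation: only answer ARP on the interface that owns the address.
        sysctl -w net.ipv4.conf.all.arp_ignore=1
        sysctl -w net.ipv4.conf.all.arp_announce=2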
  10. I changed the fan speed minimum on the Fan Control/Fan Settings tab of the IPMI plugin page. When I originally installed my server, I set this to 31.2%, since this was the threshold at which my fans did not report a 0 rpm reading while running at their lowest speed. Noctua fans do not seem to be able to report rpm readings under 300 rpm, although they might actually be running slower for a short time. Instead, a 0 rpm reading is reported, causing all fans to speed up. I never saw this issue with one of my CPU fans from Scythe, which should even be able to run 50 rpm slower. Maybe this is related to some special voltage tuning by Noctua to reduce noise.
  11. The same happened to me this morning. All fans suddenly started to run at full speed, although I didn't change anything in the fan control config. There are no fan log entries or higher temperatures at the time it happened. After turning fan control off and on again, all fans run normally for about 15-35 minutes but then start to rotate at full speed again. ps -aux | grep ipmifan shows the php ipmifan script as running during this time. I'm also using a Supermicro X11SCH-LN4F board, which is the same board as yours but with 4 NICs. Did you find a solution for this, or has it not happened since?

      Edit: I tried to set the fan speed minimum to a higher level so that no fan reports a 0 rpm reading, which would result in Supermicro's fan control taking over and speeding up all fans due to a hard-coded failsafe feature on Supermicro boards. Low-rpm fans like Noctua will sometimes report 0 rpm for a short time although they are still rotating at very low speed. Restarting the ipmifan script takes fan control back from the Supermicro failsafe. Since I haven't re-adjusted the fan speed minimum recently, I'm wondering why this happened today. Nevertheless, the fans still speed up, and now at even shorter intervals. Restarting the server didn't help. Changing the minimum values and clicking on Apply also resets all fans to normal speed. I tried to run the script with --debug, but there were no entries during the time the fans sped up. I suspect the Supermicro failsafe is somehow involved in this, but I can't seem to get telegraf to log the rpms at a shorter interval than 1 minute, although I set the global metric interval to 10 seconds.

      Edit 2: As I assumed, it was one specific fan that was running too low for a short time. I had to adjust the minimum threshold from approx. 30 to 50 percent until the fan no longer reported 0 rpm after a while. Since I didn't change any thresholds or minimums, I am still wondering why this happened today, when there have never been such problems in the 7 months the server has been running. Perhaps the electronics of the fan have been somewhat affected by continuous operation? Are there other reasons why something like this could occur?
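     The hard-coded failsafe mentioned above is the BMC ramping all fans to full speed when a sensor reading crosses its lower thresholds. Besides raising the minimum duty cycle, the thresholds themselves can be lowered with stock ipmitool; a sketch with example values (the sensor name FAN1 and the numbers are illustrative, not taken from this board):

        # Show current fan sensor readings and thresholds (LNR/LCR/LNC columns):
        ipmitool sensor list | grep -i fan

        # Lower the non-recoverable, critical, and non-critical lower thresholds
        # so that brief sub-300-rpm readings from Noctua fans no longer trip
        # the failsafe; Supermicro boards typically accept steps of 100 rpm.
        ipmitool sensor thresh FAN1 lower 100 200 300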
  12. Just installed 6.9.2 and updated the Unraid.net plugin. But after that, my custom theme was reset to the basic black theme, and I cannot load any saved themes anymore. Loading a base theme still works. I tested this in Chrome, Firefox, and Safari on macOS. Uninstalling or reinstalling the Unraid.net plugin didn't help. This console error is shown only in Chrome when I try to load a theme:

      DevTools failed to load SourceMap: Could not parse content for https://unraid.local/plugins/theme.engine/include/FileSaver.min.js.map: Unexpected token < in JSON at position 0

      Edit: Reinstalling the Theme Engine plugin and re-importing my custom theme seems to have solved the problem.
  13. Thanks, never noticed this before. I probably reinstalled a previously manually configured container with a CA template version.
  14. I noticed (probably since the 6.9 update) that previously deleted port mappings of docker containers reappear after an update. This causes some containers to no longer start because other containers are already using these ports. Is there any way to prevent this?
  15. @kannznichkaufen Did you change the host or container path? The host path can still point to appdata/influxdb. Only the container path needs to be changed to /var/lib/influxdb2.
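     For illustration, the same distinction in a plain docker run (the image tag is an example; the paths follow the post):

        # The host path (left of the colon) can keep pointing at the existing
        # appdata folder; only the container path (right) changes, because
        # InfluxDB 2.x stores its data in /var/lib/influxdb2.
        docker run -d --name influxdb \
          -v /mnt/user/appdata/influxdb:/var/lib/influxdb2 \
          influxdb:2.0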