bavism

Members · 8 posts

  1. Any good way of diagnosing speed issues with SMB mounted shares? I have a remote SMB share on my cloud server from which I transfer large files to my unraid box. If I browse the share directly from my Windows machine and copy a file from it to that machine, I saturate my incoming bandwidth (300 Mbps; the server's upload is much greater). I can also verify this amount of bandwidth between the two machines using iperf3. However, if I copy from the remote share mounted in unraid (either directly on unraid, using `rsync -ahP` to watch the speed, or by sharing the mount and copying in Windows), I only average about 5 MB/s, or 40 Mbps. What should I look at next?
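     For reference, this is roughly how I've been measuring; the host, share, and file names are placeholders, smbclient may need to be installed separately, and the mount path is just where my remote share happens to live:

     ```bash
     # 1) Raw SMB throughput, bypassing the mount entirely (smbclient prints an average rate)
     smbclient //cloud-server/share -U myuser -c 'get bigfile.bin /dev/null'

     # 2) The same file through the unraid-side mount, for comparison
     rsync -ahP /mnt/remotes/cloud-server_share/bigfile.bin /mnt/user/scratch/

     # 3) What the mount actually negotiated (SMB version, rsize/wsize)
     mount | grep cifs
     ```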
  2. The plugin install script for some reason doesn't seem to write to `/boot/config/smb-extra.conf`. At first I thought it was because that file didn't exist on my system (since `add-smb-extra` only appends to it), but even after touching the file and reinstalling the plugin, it never gets written to. I just ended up running `/tmp/unassigned-devices/add-smb-extra` manually. I'm not sure what the reason behind this is, although I'm also curious why the `add-smb-extra` script is invoked through `at` with a time of now...
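     For anyone hitting the same thing, this is the workaround I ended up with (the grep pattern is a guess on my part; the exact line the script appends may differ between plugin versions):

     ```bash
     # If the plugin's deferred `at` job never updated smb-extra.conf,
     # run its helper script directly
     if ! grep -q unassigned /boot/config/smb-extra.conf 2>/dev/null; then
         bash /tmp/unassigned-devices/add-smb-extra
     fi
     ```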
  3. Hi all, running into some issues passing my 960 to a Windows 7 VM. It sounds like Windows 10 might give me better luck, but unfortunately I need Windows 7 to do some compatibility testing for work.

     What I've tried:
     • bios from techpowerup
     • bios as dumped with https://github.com/SpaceinvaderOne/Dump_GPU_vBIOS/blob/master/dump_vbios.sh (interestingly, I could only dump the bios with this script when the GPU was set as the primary display GPU in my BIOS, but I've also heard that some cards flag a change in the bios after they've been initialized, which the drivers can look for to throw code 43)
     • 960 GTX set as primary display in BIOS
     • iGPU set as primary display in BIOS (on an i7-4770)
     • 960 GTX set to bind to vfio on boot
     • 960 GTX NOT set to bind to vfio on boot
     • Hyper-V on/off
     • Passing through both video and audio devices on the same bus/slot as a multifunction PCI device

     I'm using the 466.27 nvidia drivers in Windows 7 (curiously, sometimes if I uninstall the drivers, the first time I reinstall them [before restarting] the driver doesn't report error 43, but programs relying on the GPU don't run correctly, so I'm not actually sure the GPU is fully enabled in those cases before the restart). I've got the hyperv vendor_id set to a 12-character string, and the KVM hidden state is on. I've given almost every combination of the above choices a try, and still only get error 43 (verified via remote desktop, as I don't get any video out on the GPU).

     Things I *haven't* tried:
     • Q35 machine type: I can't switch this without changing bus 0 to a pcie-root, but that seems to conflict with my main drive being a SATA drive, and I can't get the drivers for a VirtIO drive to work in Windows 7 as they aren't signed (even with test signing on, and forcing the signed driver requirement off)
     • As mentioned before, a Windows 10 VM. I'd be interested to see if that works, but it won't help me get the machine I actually need up and running
     • Older NVIDIA drivers?

     Any thoughts folks?
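     In case it helps anyone reproduce, this is roughly how I verify the anti-detection settings actually landed in the domain XML ("Windows7" is a placeholder VM name, and the vendor_id value is just an example 12-character string):

     ```bash
     # Check that the hyperv vendor_id and the KVM-hiding flag are in the VM definition
     virsh dumpxml Windows7 | grep -E "vendor_id|hidden"
     # expecting something like:
     #   <vendor_id state='on' value='0123456789ab'/>
     #   <hidden state='on'/>
     ```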
  4. Adding INTERFACE didn't solve the problem. Current run command:

     ```
     /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='openvpn-as' --net='bridge2' --log-opt max-size='50m' --log-opt max-file='1' -e TZ="America/Los_Angeles" -e HOST_OS="Unraid" -e 'PGID'='100' -e 'PUID'='99' -e 'INTERFACE'='eth0' -p '943:943/tcp' -p '9443:9443/tcp' -p '1194:1194/udp' -v '/mnt/user/appdata/openvpn-as':'/config':'rw' --cap-add=NET_ADMIN 'linuxserver/openvpn-as'
     ff1ad02a88e6e8bcfaf27a54fb73364c371ca262c0bf217464b237662bad3c6c
     ```

     ifconfig from inside the container shows eth0 with the correct IP:

     ```
     root@ff1ad02a88e6:/# ifconfig
     eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
             inet 172.18.0.3  netmask 255.255.0.0  broadcast 172.18.255.255
             ether 02:42:ac:12:00:03  txqueuelen 0  (Ethernet)
             RX packets 53  bytes 7403 (7.4 KB)
             RX errors 0  dropped 0  overruns 0  frame 0
             TX packets 37  bytes 3074 (3.0 KB)
             TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

     lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
             inet 127.0.0.1  netmask 255.0.0.0
             loop  txqueuelen 1000  (Local Loopback)
             RX packets 0  bytes 0 (0.0 B)
             RX errors 0  dropped 0  overruns 0  frame 0
             TX packets 0  bytes 0 (0.0 B)
             TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
     ```

     The network config on bridge2 also seems correct (at least, nothing is missing compared to the default bridge):

     ```
     {
         "Name": "bridge2",
         "Id": "e2b9cc8c0b99a6067ccba2f885a97dbc098a51ea66f2146a2c3b38820ff3303d",
         "Created": "2020-01-24T00:04:57.831164279-08:00",
         "Scope": "local",
         "Driver": "bridge",
         "EnableIPv6": false,
         "IPAM": {
             "Driver": "default",
             "Options": {},
             "Config": [
                 {
                     "Subnet": "172.18.0.0/16",
                     "Gateway": "172.18.0.1"
                 }
             ]
         },
         "Internal": false,
         "Attachable": false,
         "Ingress": false,
         "ConfigFrom": {
             "Network": ""
         },
         "ConfigOnly": false,
         "Containers": {
             "df12779932b70052012b8850741b0ddccb5caf9aeb84e2e65d70be221495a183": {
                 "Name": "ddclient",
                 "EndpointID": "c953404ed51e638bf0e761e5d2fcabf37b3600f36309361f19ebf2ff72c4e5db",
                 "MacAddress": "02:42:ac:12:00:02",
                 "IPv4Address": "172.18.0.2/16",
                 "IPv6Address": ""
             },
             "ff1ad02a88e6e8bcfaf27a54fb73364c371ca262c0bf217464b237662bad3c6c": {
                 "Name": "openvpn-as",
                 "EndpointID": "5db7e0a0ccb7832b6098b8da46ae2bf3bfd11a9d54a090bc3bdb3638a8406694",
                 "MacAddress": "02:42:ac:12:00:03",
                 "IPv4Address": "172.18.0.3/16",
                 "IPv6Address": ""
             }
         },
         "Options": {
             "com.docker.network.bridge.enable_icc": "true",
             "com.docker.network.bridge.enable_ip_masquerade": "true",
             "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
             "com.docker.network.driver.mtu": "1500"
         },
         "Labels": {}
     }
     ```

     Still the same error in the OpenVPN-AS web UI. I would be interested in seeing your settings where you have OpenVPN running on a custom network.
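     (In case it helps narrow things down, this is how I've been comparing the two networks' options for a side-by-side diff:)

     ```bash
     # Dump just the Options block of the default bridge and of bridge2,
     # to spot a flag set on one but not the other
     docker network inspect -f '{{json .Options}}' bridge
     docker network inspect -f '{{json .Options}}' bridge2
     ```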
  5. The only interface that presents itself in containers connected to bridge2 is eth0, which is what openvpn is already set to listen on (and what the openvpn-as docker docs suggest the default is, after they removed the need to explicitly set the INTERFACE variable). Clearly that part is working, as I can connect to the web UI for openvpn-as when it's on bridge2 (the web UI is set to use eth0 as its interface). What version of unraid and openvpn-as are you using? What docker command did you use to set up the custom bridge network? Can I see your openvpn-as docker config? Thanks all for the help.
  6. Yes, I realize that is the cause, since that's literally the only config I changed... what I want to know is why. What's missing on that network interface that's present on the docker default? I've read about the differences, but I can't imagine what in there is causing the daemon to not even start. You're probably right, but I want to let people reach the client UI to grab their .ovpn files remotely; I'll lock down the OpenVPN admin UI once everything is set up. I suppose for now I can use --link options on the default bridge to get openvpn working and visible to nginx (rough sketch below), but I'm still curious what's causing the problem above...
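     Something like this, assuming both containers stay on the default bridge (the names and the trimmed-down options are placeholders pulled from my existing run command; --link is legacy, but it still works there):

     ```bash
     # Keep openvpn-as on the default bridge and let nginx reach it by name via --link
     docker run -d --name=openvpn-as --cap-add=NET_ADMIN \
       -p 943:943/tcp -p 9443:9443/tcp -p 1194:1194/udp \
       -v /mnt/user/appdata/openvpn-as:/config linuxserver/openvpn-as

     docker run -d --name=nginx --link openvpn-as:openvpn-as \
       -p 80:80 -p 443:443 nginx
     ```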
  7. Thanks for the help. I'm not sure where the interaction with macvlan is... the custom bridge is running on the bridge driver, not macvlan. It's the same driver as the default bridge, as "docker network ls" reports:

     ```
     7addae9b988f   bridge    bridge   local   // the default docker network
     e2b9cc8c0b99   bridge2   bridge   local   // my custom network
     ```

     (The only thing on macvlan is the br0 network, which openvpn-as isn't configured to use anywhere.) However, I can cross that "bridge" when I come to it. For the moment, I can't even test access to docker via that IP, as the OpenVPN-AS daemon won't start. I suppose my configuration might make this an "artificial problem", but I wasn't intending to allow access to the unraid GUI via WAN through nginx. I wanted access via openvpn only... if I can get OpenVPN-AS to start... The only reason I mention nginx is that I want access to the OpenVPN-AS UI from WAN, so nginx and openvpn must run on the same bridge, and nginx must run on a custom bridge to allow DNS discovery of all the other services I want it to proxy for.
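     (For completeness, this is how I'm double-checking the drivers; the names match the listing above:)

     ```bash
     docker network inspect -f '{{.Driver}}' bridge2   # -> bridge
     docker network inspect -f '{{.Driver}}' br0       # -> macvlan
     ```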
  8. I have an OpenVPN-AS instance running on my unraid box connected to the default docker bridge, and everything works fine. However, I want to use a custom bridge network so that my nginx instance (which I use to proxy all WAN traffic to the various services on my home server) can reach all the internal services via docker's DNS resolution, which only works on a custom bridge (if I don't do it this way, I have to manually go in and fix the IPs in the nginx "proxy_pass"es every time I do enough starting/stopping of containers that the IP assignments change).

     When I switch OpenVPN-AS to the custom bridge, without changing any other configuration, the openvpn server daemon fails to start, with this message in the admin UI:

     ```
     Error: service failed to start due to unresolved dependencies: set(['user'])
     service failed to start due to unresolved dependencies: set(['iptables_openvpn'])
     Service deferred error: IPTablesServiceBase: failed to run iptables-restore [status=2]: ["Bad argument `[unsupported'", 'Error occurred at line: 108', "Try `iptables-restore -h' or 'iptables-restore --help' for more information."]: internet/defer:653,sagent/ipts:134,sagent/ipts:51,util/daemon:28,util/daemon:69,application/app:384,scripts/_twistd_unix:258,application/app:396,application/app:311,internet/base:1243,internet/base:1255,internet/epollreactor:235,python/log:103,python/log:86,python/context:122,python/context:85,internet/posixbase:627,internet/posixbase:252,internet/abstract:313,internet/process:312,internet/process:973,internet/process:985,internet/process:350,internet/_baseprocess:52,internet/process:987,internet/_baseprocess:64,svc/pp:141,svc/svcnotify:32,internet/defer:459,internet/defer:567,internet/defer:653,sagent/ipts:134,sagent/ipts:51,util/error:67,util/error:48
     service failed to start due to unresolved dependencies: set(['user', 'iptables_live', 'iptables_openvpn'])
     ... line above repeated several times ...
     service failed to start due to unresolved dependencies: set(['iptables_live', 'iptables_openvpn'])
     ```

     As far as I can tell, my custom bridge has all the same settings as the default docker bridge (icc, ip masquerading, etc.). The OpenVPN-AS container starts fine, and I can still access the web UIs correctly (either directly or via nginx); it's just the server daemon that doesn't start, with the error above. I have verified that iptables-restore is present in the container.

     If needed, I can post all of my OpenVPN-AS config (or the docker custom bridge config), although not much has changed from the defaults (I just don't push a gateway, and I only expose the subnet 172.17.0.1/32, which is the gateway (host) on the bridge network, since all I want to do over VPN is access the unraid management UI). But again, my OpenVPN-AS config works perfectly fine and does everything I want on the default bridge.
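     One more data point I plan to collect (container name as above; I'm not claiming this is the cause, but mismatched iptables generations between the host and a container have produced similar iptables-restore complaints elsewhere):

     ```bash
     # Compare the iptables build on the unraid host vs. inside the container
     iptables --version
     docker exec openvpn-as iptables --version
     ```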