Posts posted by dnLL

  1. Just noticed the VPN IP is part of the RFC 1918 reserved range for private (local) networks, is that normal? Tried the Spain and Israel servers.

    2020-04-12 10:02:52,603 DEBG 'watchdog-script' stdout output:
    [info] qBittorrent process listening on port 8080
    
    2020-04-12 10:02:52,666 DEBG 'watchdog-script' stdout output:
    [debug] VPN incoming port is 38328
    [debug] qBittorrent incoming port is 38328
    [debug] VPN IP is 10.12.11.6
    [debug] qBittorrent IP is 10.12.11.6

    That's confusing. As for qbittorrent.log, with 10.1.1.54 being the Docker IP:

    
    (N) 2020-04-12T10:02:52 - qBittorrent v4.2.3 started
    (N) 2020-04-12T10:02:52 - Using config directory: /config/qBittorrent/config/
    (N) 2020-04-12T10:02:52 - qBittorrent v4.2.3 started
    (N) 2020-04-12T10:02:52 - Using config directory: /config/qBittorrent/config/
    (I) 2020-04-12T10:02:52 - Trying to listen on: 0.0.0.0:49121,[::]:49121
    (N) 2020-04-12T10:02:52 - Peer ID: -qB4230-
    (N) 2020-04-12T10:02:52 - HTTP User-Agent is 'qBittorrent/4.2.3'
    (I) 2020-04-12T10:02:52 - DHT support [ON]
    (I) 2020-04-12T10:02:52 - Local Peer Discovery support [ON]
    (I) 2020-04-12T10:02:52 - PeX support [ON]
    (I) 2020-04-12T10:02:52 - Anonymous mode [OFF]
    (I) 2020-04-12T10:02:52 - Encryption support [FORCED]
    (I) 2020-04-12T10:02:52 - IP geolocation database loaded. Type: DBIP-Country-Lite. Build time: Tue Mar 31 19:49:13 2020.
    (N) 2020-04-12T10:02:52 - Using built-in Web UI.
    (N) 2020-04-12T10:02:52 - Web UI translation for selected locale (en) has been successfully loaded.
    (N) 2020-04-12T10:02:52 - Web UI: Now listening on IP: *, port: 8080
    (I) 2020-04-12T10:02:52 - Successfully listening on IP: 127.0.0.1, port: TCP/49121
    (I) 2020-04-12T10:02:52 - Successfully listening on IP: 127.0.0.1, port: UDP/49121
    (I) 2020-04-12T10:02:52 - Successfully listening on IP: 10.12.11.6, port: TCP/49121
    (I) 2020-04-12T10:02:52 - Successfully listening on IP: 10.12.11.6, port: UDP/49121
    (I) 2020-04-12T10:02:52 - Successfully listening on IP: 10.1.1.54, port: TCP/49121
    (I) 2020-04-12T10:02:52 - Successfully listening on IP: 10.1.1.54, port: UDP/49121
    (N) 2020-04-12T10:02:52 - Web UI: Now listening on IP: *, port: 8080
    (N) 2020-04-12T10:02:52 - Web UI: Now listening on IP: *, port: 8080
    (I) 2020-04-12T10:02:52 - Trying to listen on: 0.0.0.0:38328,[::]:38328
    (I) 2020-04-12T10:02:52 - Successfully listening on IP: 127.0.0.1, port: TCP/38328
    (I) 2020-04-12T10:02:52 - Successfully listening on IP: 127.0.0.1, port: UDP/38328
    (I) 2020-04-12T10:02:52 - Successfully listening on IP: 10.12.11.6, port: TCP/38328
    (I) 2020-04-12T10:02:52 - Successfully listening on IP: 10.12.11.6, port: UDP/38328
    (I) 2020-04-12T10:02:52 - Successfully listening on IP: 10.1.1.54, port: TCP/38328
    (I) 2020-04-12T10:02:52 - Successfully listening on IP: 10.1.1.54, port: UDP/38328
    (I) 2020-04-12T10:02:55 - Detected external IP: 185.77.248.2
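
    (For what it's worth, a private tunnel IP is expected with most providers: the VPN hands you an RFC 1918 address inside the tunnel and NATs you out through the exit server's public address, which is why the log still detects 185.77.248.2 externally. A quick sketch with Python's stdlib ipaddress module to double-check which of those addresses are private:)

```python
import ipaddress

# Addresses from the log above: tunnel IP, Docker bridge IP, detected external IP
for ip in ["10.12.11.6", "10.1.1.54", "185.77.248.2"]:
    addr = ipaddress.ip_address(ip)
    kind = "private (RFC 1918)" if addr.is_private else "public"
    print(f"{ip}: {kind}")
```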

     

  2. Just switched servers since Canadian ones are broken (for port forwarding). According to supervisord.log, everything is working. I don't see anything special in qBittorrent.log either aside from the correct VPN IP. However, the webUI doesn't work, despite both logs saying it's now listening on port 8080. Is there a different log for the webUI hidden somewhere?

  3. 1 hour ago, ken-ji said:

    If the devices are on the same VLAN/subnet they will ignore the router and communicate directly.

    I feel like an idiot. That's the part I was missing. Thank you for the info. Basically, if you really want all communications to go through pfSense, you need a dedicated VLAN for every VM, which makes sense now that I think about it, since I've been reading about some people doing exactly that...

  4. Just tested it and it works with a VLAN, I'm fully able to isolate the VM however I want through regular pfSense rules. I guess my remaining questions are just to help me understand what makes the traffic go through pfSense when the VM is on a VLAN, but not when it isn't.

  5. 7 hours ago, ken-ji said:

    To limit host-to-VM and VM-to-host communications, you want them to go through a firewall - this can be done at the Unraid level via iptables, but that's a non-scalable, ugly hack.

     

    What you want is easy to do if you have VLAN support on your switches (or at least they happily pass VLAN tagged packets)

    Enable a VLAN in Unraid network settings. Make sure not to add an IP address to the new VLAN (this will create a new network interface eth0.2/br0.2 for VLAN ID 2). Configure pfSense to support this VLAN (DHCP, DNS, gateway). Connect a VM to this network interface; the VM should then get a DHCP IP from pfSense. You can then firewall the IP/subnet as needed.

     

    But how does it work by default? Like, if I create a VM, it does get a DHCP address from pfSense, but traffic doesn't route through pfSense. How does that make sense? Put another way, why does a VLAN force traffic to go through pfSense?

     

    That will work for my need (since I basically want to isolate 1 VM), so I'll go and test it now, but I'm trying to understand the inner workings: why default routing doesn't go through the router, and why it does with a VLAN.
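
    (If it helps anyone else with the same question: a host only hands a packet to its gateway when the destination is outside its own subnet; on-link destinations are resolved with ARP and reached directly at layer 2, which is why pfSense never sees same-subnet traffic. A toy sketch of that forwarding decision, with hypothetical addresses rather than anyone's actual config:)

```python
import ipaddress

# Hypothetical layout: everything on br0 shares 10.1.1.0/24, while a
# VLAN (br0.2) uses 10.1.2.0/24 with pfSense routing between the two.
LOCAL_SUBNET = ipaddress.ip_network("10.1.1.0/24")
GATEWAY = "10.1.1.1"  # pfSense

def next_hop(dst: str) -> str:
    """Mimic the kernel's basic forwarding decision for one interface."""
    if ipaddress.ip_address(dst) in LOCAL_SUBNET:
        # On-link: ARP for the destination and send directly,
        # bypassing the router (and its firewall rules) entirely.
        return "direct (layer 2)"
    # Off-link (e.g. the VLAN subnet): send to the gateway,
    # so pfSense rules finally apply.
    return f"via {GATEWAY}"

print(next_hop("10.1.1.54"))  # same subnet -> never reaches pfSense
print(next_hop("10.1.2.20"))  # VLAN subnet -> routed through pfSense
```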

  6. From another thread, I now understand I can't really have the communication between dockers and the Unraid host go through my pfSense router because of the way the Docker engine is built, sharing resources with its host. I can't get DHCP to work with dockers either.

     

    Now, my question remains about how to have VM-to-host and host-to-VM communications go through pfSense rather than be handled within the Unraid host itself. I probably need to edit the routes, but the last time I played with the routes, I locked myself out of my Unraid host.

  7. 40 minutes ago, ken-ji said:

    This seems to be caused by the simple fact that there is no DHCPv4 client running in any container. Add to that the fact that userland processes are usually not allowed to touch the network settings of the container, so the engine has to assign the IP (or the container specifies the IP to be assigned). I guess it's possible to have a DHCP-like plugin for the docker network system, but the developers were never interested in developing such a plugin.

     

    In IPv6, the same is in effect, with the exception that SLAAC is configured at the kernel level, so the container can auto-learn and set IPv6 networking; but again, DHCPv6 also doesn't assign IPv6 addresses to containers.

     

    Still won't work, as the container engine does not actually consult what's on the LAN and just obeys how the docker network has been configured.

    Interesting. So static IPs that magically "fit" into my network design are the best option if I can't have DHCP reservations and do need fixed IPs for whatever reason, correct?

     

    And if I want network isolation... well, that means I need hardware isolation, which means docker isn't suited to that specific need and I should use VMs, correct?

     

    I'm looking at Wireshark right now, and when my dockers are talking to the WAN, they do go through my router and the firewall (i.e., with a static IP, I could block a specific docker's communications to the WAN). They also go through pfSense if they have to talk to my desktop. However, they don't go through pfSense when they're talking to anything related to Unraid (host, VM or docker); it stays inside the Unraid network. Is there any way to configure Unraid networks to actually go through my firewall? I made a separate thread for this: 

     

  8. On 11/11/2018 at 2:00 AM, ken-ji said:

    Just to make things clear for everybody.

    Docker networks do not have a real DHCP server in the usual sense.

    Docker networks do not interact with a DHCP server on that subnet either.

    What docker simply does is grab the next free IP in the docker network and assign it to the container. If the container stops (or leaves the network), the IP is automatically marked as free and available for the next container that requests an IP.

     

    The correct way to have DNS-like names is to do docker linking - which I don't like, as it's messy and doesn't persist across reboots.

    The other correct way, which is persistent, is to assign IP addresses to each container that needs one and add them to your LAN's DNS server.

    This is an old thread/post, but I can't find accurate information anywhere. Can't dockers get IP addresses assigned by my DHCP server (which is my pfSense router) rather than having Unraid just give them the first "free" address (which completely bypasses the DHCP server and can create duplicate IPs if not configured in a separate subnet)? What if I would like to have my router (pfSense) in between my dockers and the LAN? I guess it's just a limitation of the docker engine, since it shares the host's resources such as the NIC.
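
    To illustrate the allocation behavior described above, the engine's IPAM is roughly "hand out the next free address in the configured subnet", with no DHCP traffic at all. A toy sketch of that idea (not Docker's actual code; the subnet and reserved addresses are made up):

```python
import ipaddress

class ToyIpam:
    """Toy model of Docker-style sequential IP allocation (no DHCP)."""

    def __init__(self, subnet: str, reserved=()):
        self.pool = list(ipaddress.ip_network(subnet).hosts())
        self.in_use = {ipaddress.ip_address(r) for r in reserved}

    def allocate(self) -> str:
        # Grab the next free IP; the LAN's real DHCP server is never asked,
        # so collisions with existing leases are entirely possible.
        for ip in self.pool:
            if ip not in self.in_use:
                self.in_use.add(ip)
                return str(ip)
        raise RuntimeError("subnet exhausted")

    def release(self, ip: str) -> None:
        # A stopped container frees its IP for the next one.
        self.in_use.discard(ipaddress.ip_address(ip))

# Hypothetical br0 network: .1 is the router, .10 is Unraid itself.
ipam = ToyIpam("10.1.1.0/24", reserved=["10.1.1.1", "10.1.1.10"])
print(ipam.allocate())  # 10.1.1.2 -- regardless of what DHCP has leased
```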

  9. Hi yall,

     

    Currently, when I create a VM, it gets an IP address from my pfSense router. However, if I create a docker on br0, it gets the first IP after Unraid's IP, even if that IP is already in use according to pfSense. In fact, I don't see the dockers at all in pfSense, as if Unraid were doing the DHCP itself. I would like that to change and have pfSense act as the DHCP server for both my VMs and my dockers.

     

    Another thing I noticed about VMs: when VMs communicate with each other (or with the host), they don't go through my pfSense router at all; all the communication is handled within Unraid. So even if I create a rule within pfSense to prevent VM 1 from talking to VM 2, it doesn't work, since the traffic never reaches pfSense. That's also something I would like to change: I would like all the traffic to go through my firewall.

     

    I pretty much use the default network settings, with bonding between my 2 network interfaces on my server.

     

    Here is the configuration of my 2 network interfaces:

     

    [screenshot: network interface configuration]

     

    And here is pretty much what I think is the default routing table for Unraid; at least, I didn't make any change that I am aware of (and I need help understanding what's really going on with 172.17.0.0/16 and 192.168.122.0/24, since I don't use these networks and don't really want Unraid to use them):

     

    [screenshot: Unraid routing table]

     

    My goal in the end is to be able to actually use my pfSense firewall to prevent one specific VM from reaching anything else on the local network besides port 53 on pfSense for DNS purposes. There are most likely multiple ways to do that, but I kinda like the idea of having the traffic go through pfSense; this way I can properly monitor what's going on on my local network.
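
    Once the VM sits on its own VLAN subnet, that isolation goal maps onto ordinary first-match pfSense rules on the VLAN interface: pass DNS to pfSense, block the rest of the LAN, pass everything else. A toy evaluation of that rule order (hypothetical addresses and rule set, not pfSense syntax):

```python
import ipaddress

# First-match rule list for the VM's VLAN interface (hypothetical):
# (action, source, destination network, destination port or None = any)
RULES = [
    ("pass",  "10.1.2.20", "10.1.2.1/32", 53),    # DNS to pfSense only
    ("block", "10.1.2.20", "10.1.1.0/24", None),  # no access to the LAN
    ("pass",  "10.1.2.20", "0.0.0.0/0",   None),  # everything else (WAN)
]

def evaluate(src: str, dst: str, port: int) -> str:
    for action, rule_src, rule_dst, rule_port in RULES:
        if src != rule_src:
            continue
        if ipaddress.ip_address(dst) not in ipaddress.ip_network(rule_dst):
            continue
        if rule_port is not None and port != rule_port:
            continue
        return action  # first matching rule wins
    return "block"  # default deny

print(evaluate("10.1.2.20", "10.1.2.1", 53))    # DNS query: pass
print(evaluate("10.1.2.20", "10.1.1.54", 445))  # LAN share: block
```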

  10. If all you have is the AP, then all you had to do was configure the two WiFi Networks and assign one a VLAN ID.  The Network settings in the UniFi controller does nothing if you do not use UBNT gateway products.
    Just tested it and you are 100% right, I had these set up for no reason. I can't delete the last one (LAN), however; the UI is preventing me from deleting the last network.
  11. 10 hours ago, mifronte said:

    I am running pfSense and I used the UniFi Controller SW to tag my guest WiFi with a different VLAN tag from my main WIFI.  My pfSense handles all the networking services for each WiFi network.  I do have a smart switch between the AP and pfSense, so my AP is not plugged directly into my pfSense.

    I just configured the whole thing this morning and it went like a breeze. Very, very easy to figure everything out with some basic knowledge and the ability to read the manuals (which are both awesome resources, whether it's for UniFi or pfSense).

     

    My AP is directly behind pfSense. All I had to do in the UniFi controller was basically create 2 networks in 2 different subnets, give one of them a VLAN tag ID, create my 2 wireless networks with the guest one assigned to that VLAN ID, and relay all the DHCP stuff to pfSense. On pfSense, I had to enable the interface the AP is plugged into and add my main subnet there (the safer WiFi one); then I created a VLAN under that interface with the same ID as previously and configured it in the same different subnet as I did in UniFi, and bam, job's done, IPs get assigned properly. After that it's only a matter of creating firewall rules.

  12. I'm hijacking this thread a little bit, but I had Duplicati taking care of backing up my /boot/config stuff amongst other things, and it doesn't work anymore with the new permissions and... I can't change the permissions, apparently, even as root on my server.

     

    I know I could use the backup tools in Community Applications, but I like having everything centralized.

  13. I'm not sure I fully understand the issue here. You can pass GPUs to VMs (whether they're from Nvidia or AMD). You can use your iGPU to transcode inside a docker (I currently do it with Plex and my Intel iGPU). What do Nvidia users want to do exactly? Use hardware decoding directly in Unraid?

     

    Don't throw rocks at me, just trying to understand.

  14. Noticed the server I'm using doesn't support port forwarding? Can't use anything in the US???

    2020-02-29 22:30:59,893 DEBG 'start-script' stdout output:
    [info] ca-toronto.privateinternetaccess.com
    [info] ca-montreal.privateinternetaccess.com
    [info] ca-vancouver.privateinternetaccess.com
    [info] de-berlin.privateinternetaccess.com
    [info] de-frankfurt.privateinternetaccess.com
    [info] sweden.privateinternetaccess.com
    [info] swiss.privateinternetaccess.com
    [info] france.privateinternetaccess.com
    [info] czech.privateinternetaccess.com
    [info] spain.privateinternetaccess.com
    [info] ro.privateinternetaccess.com
    [info] israel.privateinternetaccess.com
    [info] Attempting to get dynamically assigned port...

    Got it working with the Toronto VPN server, but eh...

  15. Got it working WITHOUT the VPN enabled. Now, if I enable the VPN, I get this:

     

    2020-02-29 22:26:14,822 DEBG 'start-script' stdout output:
    [warn] Response code 000 from curl != 2xx
    [warn] Exit code 7 from curl != 0
    [info] 10 retries left
    [info] Retrying in 10 secs...

    I'm using PIA and not sure what's wrong; the error is very generic. Do I need to port forward something?

    root@server:~# ls -l /mnt/user/appdata/qbittorrentvpn/openvpn/
    total 16
    -rwxrwxr-x 1 nobody users 2025 Oct 22 17:06 ca.rsa.2048.crt*
    -rwxrwxr-x 1 nobody users   20 Feb 29 22:25 credentials.conf*
    -rwxrwxr-x 1 nobody users  869 Oct 22 17:06 crl.rsa.2048.pem*
    -rwxrwxr-x 1 nobody users 3170 Feb 29 22:25 us2-aes-128-cbc-udp-dns.ovpn*
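
    Curl's exit code 7 just means "failed to connect to host", so the script can't reach its check endpoint through the tunnel yet. The retry loop in the log can be sketched like this (hypothetical URL; this is not the container's actual script):

```python
import time
import urllib.request
import urllib.error

def wait_for_endpoint(url: str, retries: int = 10, delay: int = 10) -> int:
    """Poll a URL until it answers with a 2xx, like the start script does."""
    for attempt in range(retries, 0, -1):
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                if 200 <= resp.status < 300:
                    return resp.status
        except (urllib.error.URLError, OSError):
            # Connection refused/unreachable -- curl's exit code 7 case.
            pass
        print(f"[info] {attempt - 1} retries left")
        print(f"[info] Retrying in {delay} secs...")
        time.sleep(delay)
    raise RuntimeError("endpoint never became reachable through the tunnel")
```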
    

     

  16. So... totally new to this docker... I installed it from CA and put my PIA credentials in there... but the docker won't start. There's nothing in /var/log/docker.log, and I have this in /var/log/syslog:

    Feb 29 11:08:14 server kernel: docker0: port 1(veth979aba4) entered blocking state
    Feb 29 11:08:14 server kernel: docker0: port 1(veth979aba4) entered disabled state
    Feb 29 11:08:14 server kernel: device veth979aba4 entered promiscuous mode
    Feb 29 11:08:14 server kernel: IPv6: ADDRCONF(NETDEV_UP): veth979aba4: link is not ready
    Feb 29 11:08:14 server kernel: docker0: port 1(veth979aba4) entered blocking state
    Feb 29 11:08:14 server kernel: docker0: port 1(veth979aba4) entered forwarding state
    Feb 29 11:08:14 server kernel: docker0: port 1(veth979aba4) entered disabled state
    Feb 29 11:08:14 server kernel: eth0: renamed from vethda9dccf
    Feb 29 11:08:14 server kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth979aba4: link becomes ready
    Feb 29 11:08:14 server kernel: docker0: port 1(veth979aba4) entered blocking state
    Feb 29 11:08:14 server kernel: docker0: port 1(veth979aba4) entered forwarding state
    Feb 29 11:08:15 server kernel: vethda9dccf: renamed from eth0
    Feb 29 11:08:15 server kernel: docker0: port 1(veth979aba4) entered disabled state
    Feb 29 11:08:15 server kernel: docker0: port 1(veth979aba4) entered disabled state
    Feb 29 11:08:15 server kernel: device veth979aba4 left promiscuous mode
    Feb 29 11:08:15 server kernel: docker0: port 1(veth979aba4) entered disabled state

    I tried bridge, host and custom modes as network types. I also tried the debug flag. I haven't done anything else yet (I didn't see any special instructions in the first post of this thread, so I assumed it would work without any extra setup). I've never had any problem with anything else from binhex, or with dockers in Unraid in general.

  17. On 2/21/2020 at 11:07 AM, Hoopster said:

    It's been that way for months now.  Fortunately, CA Backup restarts it often enough to keep the memory bloat manageable.

     

    I have no idea if the issue is the controller or the container.  I can observe memory usage on the container increment slightly but steadily constantly until it is well over 2.5GB before a weekly CA Backup restarts the container and resets the memory usage.

    Completely new to the UniFi world and thinking of purchasing an AP, I just installed the LTS version of this docker to see what I can expect.

     

    Is the memory leak issue only happening in the latest version, or is it also the case in LTS (5.6.x)? If their software is flawed, it's hard to trust their hardware and the company as a whole, unless the memory problem is purely related to lsio's implementation of the software inside docker.

     

    Bonus question, not really related to the docker: I assume this software, with one AP plugged into my pfSense box, will give me pretty much complete control over my WiFi? I want guests on a separate SSID, and ideally a different subnet, to restrict their access to the local network, and I want pfSense to handle most of it (DHCP/DNS/firewall/etc.) if possible.

  18. Hi y'all,

     

    My check_mk monitors my syslog just in case something weird happens, and it detected the print_req_error below, twice today simultaneously and the same thing last week. It happened both times while I was updating plugins... so while writing to the flash drive.

     

    Is my flash drive about to die? I currently have 226 days of uptime and don't plan on rebooting for a while still (probably at least until ~6.9.0); I'm on 6.7.0. I do have weekly backups of my flash drive with smart retention (12 months), so it's not a problem to restore my config if need be. It's the same 2 sectors both times... not sure if that's good or not. The Unraid webUI still shows it at 0 errors.

    Jan 17 02:35:41 server kernel: sd 0:0:0:0: [sda] tag#0 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08
    Jan 17 02:35:41 server kernel: sd 0:0:0:0: [sda] tag#0 Sense Key : 0x3 [current]
    Jan 17 02:35:41 server kernel: sd 0:0:0:0: [sda] tag#0 ASC=0x11 ASCQ=0x0
    Jan 17 02:35:41 server kernel: sd 0:0:0:0: [sda] tag#0 CDB: opcode=0x28 28 00 00 00 f8 1e 00 00 40 00
    Jan 17 02:35:41 server kernel: print_req_error: critical medium error, dev sda, sector 63518
    Jan 17 02:35:41 server kernel: sd 0:0:0:0: [sda] tag#0 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08
    Jan 17 02:35:41 server kernel: sd 0:0:0:0: [sda] tag#0 Sense Key : 0x3 [current]
    Jan 17 02:35:41 server kernel: sd 0:0:0:0: [sda] tag#0 ASC=0x11 ASCQ=0x0
    Jan 17 02:35:41 server kernel: sd 0:0:0:0: [sda] tag#0 CDB: opcode=0x28 28 00 00 00 f8 5e 00 00 80 00
    Jan 17 02:35:41 server kernel: print_req_error: critical medium error, dev sda, sector 63582
    Jan 24 18:50:39 server kernel: sd 0:0:0:0: [sda] tag#0 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08
    Jan 24 18:50:39 server kernel: sd 0:0:0:0: [sda] tag#0 Sense Key : 0x3 [current]
    Jan 24 18:50:39 server kernel: sd 0:0:0:0: [sda] tag#0 ASC=0x11 ASCQ=0x0
    Jan 24 18:50:39 server kernel: sd 0:0:0:0: [sda] tag#0 CDB: opcode=0x28 28 00 00 00 f8 1e 00 00 40 00
    Jan 24 18:50:39 server kernel: print_req_error: critical medium error, dev sda, sector 63518
    Jan 24 18:50:39 server kernel: sd 0:0:0:0: [sda] tag#0 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x08
    Jan 24 18:50:39 server kernel: sd 0:0:0:0: [sda] tag#0 Sense Key : 0x3 [current]
    Jan 24 18:50:39 server kernel: sd 0:0:0:0: [sda] tag#0 ASC=0x11 ASCQ=0x0
    Jan 24 18:50:39 server kernel: sd 0:0:0:0: [sda] tag#0 CDB: opcode=0x28 28 00 00 00 f8 5e 00 00 80 00
    Jan 24 18:50:39 server kernel: print_req_error: critical medium error, dev sda, sector 63582
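
    Those CDBs actually confirm it's the same two spots each time: opcode 0x28 is a SCSI READ(10), where bytes 2-5 are the starting LBA and bytes 7-8 the transfer length in blocks. Decoding the two CDBs from the log (a quick sketch):

```python
def decode_read10(cdb_hex: str):
    """Decode a SCSI READ(10) CDB into (starting LBA, blocks to read)."""
    cdb = bytes.fromhex(cdb_hex.replace(" ", ""))
    assert cdb[0] == 0x28, "not a READ(10) opcode"
    lba = int.from_bytes(cdb[2:6], "big")     # bytes 2-5: logical block address
    length = int.from_bytes(cdb[7:9], "big")  # bytes 7-8: transfer length
    return lba, length

# The two CDBs repeated in the log:
print(decode_read10("28 00 00 00 f8 1e 00 00 40 00"))  # (63518, 64)
print(decode_read10("28 00 00 00 f8 5e 00 00 80 00"))  # (63582, 128)
```

    The decoded LBAs match the sectors in the print_req_error lines (63518 and 63582), so the same two reads are failing each time rather than random corruption.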
  19. For anybody stressing out, the best way to keep calm is to always have another copy of anything important and irreplaceable. Parity, even dual parity, is no substitute for a backup plan.
    I agree. But I don't back up everything (such as my Plex library). While losing any of the data I don't back up wouldn't bother me that much, I would still rather not. And when you use your server for some small projects that eventually grow over time, you want to avoid long downtime for your friends (I host a webserver, a gaming server, Discord bots and so on).

    For the most part, in my case, it's more about the time involved in restoring from backups than stressing out about actual data loss.