jzawacki

Everything posted by jzawacki

  1. And one more time. I was already using macvlan with v6.12.4 and had read the release notes about it.
  2. I am using macvlan because ipvlan never worked properly; ipvlan wasn't even an option for me.
  3. Good luck with that. When I attempted to use ipvlan, the entire unraid server had issues talking to the network. I was hoping to no longer be a part of this topic as my server had been running fine for over 6 months without issues. Then, being super dumb, I updated to 6.12.4 and the server kernel panic'd literally the next day. Hopefully, after it finishes a parity check, it'll go back to normal. Fingers crossed.
  4. It does. It's a: 08:01.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Rage 3 [Rage XL PCI] (rev 27)
  5. Same for me. I couldn't even get the host to update after switching to ipvlan, and I'm not using UniFi. So I believe that if ipvlan fixes it for anyone, it's because it stops the actual cause (one of the dockers?) from being able to talk to the network. But that's just a guess.
  6. I wish there was a solid answer/fix. For me personally, it had to do with one of the dockers. I installed Docker and Portainer on my backup server and moved a bunch of dockers to it, and although I still have 9 dockers running full time on Unraid, my server uptime is currently 21 days and I can't remember the last time it kernel panic'd. Now, since I'm posting this, it'll kernel panic by the end of the night.
  7. Well, I had to switch back to macvlan due to some network weirdness. Unraid had been running for 20 days without issues, but I had some dockers that wouldn't stay connected, and it wasn't until I realized that Unraid couldn't even check whether a version update was available that I blamed it on the docker ipvlan setting. Switched back to macvlan and all the network weirdness went away, and the server hasn't kernel panic'd yet, so I'm keeping my fingers crossed.
  8. I'm afraid to post because I don't want to jinx myself, but Unraid has been up for 5 days after switching to ipvlan, running all dockers except for one (that is known to cause issues). I hope I don't have to post again any time soon.
  9. There is a lot of info here, so I'm just going to chime in so that I receive notifications as this thread progresses. Not speaking to the cause of anyone else's kernel panics, but mine match a few of these screenshots verbatim. I've been running Unraid for as long as I can remember and running dockers since they were originally introduced. My server has been rock solid this entire time, even after upgrading MB/Proc/RAM, adding 10G NICs, replacing HDDs, etc. Even with 6.9.2, I never had any issues.

     Out of the blue, I decided to upgrade to 6.10.0 and everything went south: random kernel panics. Obviously blaming it on the upgrade, I reverted to the previous version, 6.9.2, but the kernel panics continued (in hindsight, is it possible that 6.10.0 made changes to the docker configs, causing 6.9.2 to continue having issues?). Doing the research, I assumed it was hardware related and replaced the MB/Proc/RAM, and the kernel panics continued. I noticed that the kernel panics seemed to happen around the time Appdata Backup/Restore was scheduled, so I started diving into that. Found some errors accessing files, so I thought it might be corruption issues. Resolved all that, moved appdata to different drives, on to/off of cache, etc. Turned off all containers and slowly turned them on, one at a time (over the course of a month), to see if I could pinpoint the cause, but nothing was repeatable. Everything appeared to be random.

     As of two days ago, my server had been running for 28 days with all dockers running without issues. So, when it kernel panic'd the other day, I removed Appdata Backup/Restore and moved everything docker related onto the protected array, and it kernel panic'd that evening. Anyway, the latest change I am testing (fingers crossed) is changing the docker setup to use ipvlan instead of macvlan. If that doesn't work, I'm going to plug another NIC in and try using eth# instead of br0 for the dockers that need a static IP to operate properly. PS: I learned of ctop, which is like top for docker containers, and I'm loving it. Wish it was built into Unraid or that Unraid had that information available within the web GUI.
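     Since ctop came up: a minimal sketch of running it without installing anything on the host, assuming the image path from the ctop project's README (verify the image and tag before relying on it):

     # ctop reads container stats through the Docker socket; --rm cleans up on exit
     docker run --rm -ti \
       -v /var/run/docker.sock:/var/run/docker.sock \
       quay.io/vektorlab/ctop:latest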
  10. Since PiHole has the ability to track individual devices, I would configure the DHCP server to hand out 192.168.1.3 and a fake IP address as the second DNS entry. Then, on PiHole, set its upstream DNS to custom and give it 192.168.1.4. Then, on the LanCache server, set its upstream DNS to your ISP's DNS, or 8.8.8.8, or whoever you prefer. Using this method, you retain all the function of PiHole with the benefit of the LanCache server. Just keep in mind that if DHCP hands out any valid DNS address in addition to 192.168.1.3, things might not always work correctly: Windows doesn't treat the entries as "primary first, secondary only if the primary is unreachable"; it will randomly use either of them depending on how it feels at the time. You may also have to clear the DNS cache on the PiHole if it already knows the real IP address for the services you are trying to cache. James
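     A rough sketch of the DHCP side of that chain, assuming a dnsmasq-based DHCP server, with 192.168.1.3 as PiHole (from the post above) and 192.168.1.250 standing in for the dead/fake second address (my placeholder):

     # Hand clients PiHole plus a dead address as the "secondary" DNS, so Windows
     # can't silently fall back to a resolver that bypasses the cache chain
     dhcp-option=option:dns-server,192.168.1.3,192.168.1.250

     PiHole's half is just its normal custom upstream setting pointed at 192.168.1.4, and the LanCache server's upstream is whatever public resolver you prefer, exactly as described above.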
  11. It's funny you say this. The docker run command is correct and all configured properly, but it doesn't make it into the container, I promise. (IP and path redacted)

     docker run -d --name='unifi-controller' --net='br0' --ip='1.1.1.13' -e TZ="America/Denver" -e HOST_OS="Unraid" -e 'UDP_PORT_3478'='3478' -e 'TCP_PORT_8080'='6759' -e 'TCP_PORT_8443'='6760' -e 'TCP_PORT_8880'='6761' -e 'TCP_PORT_8843'='6762' -e 'UDP_PORT_10001'='10001' -e 'PUID'='99' -e 'PGID'='100' -v '/REMOVED/unifi-controller':'/config':'rw' 'linuxserver/unifi-controller'

     What I ended up doing was putting the docker config in a location that is easily accessible via an SMB share, manually changing the permissions of the system.properties file so I could edit it, saving the file, and restarting the docker. Bingo, it's listening on all the custom ports I wanted:

     debug.device=warn
     debug.mgmt=warn
     debug.sdn=warn
     debug.system=warn
     is_configured_and_restarted=true
     is_default=false
     portal.http.port=6761
     portal.https.port=6762
     unifi.http.port=6759
     unifi.https.port=6760
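     If you'd rather do the same thing from the Unraid terminal instead of over SMB, a rough sketch using the (redacted) appdata path from the run command above:

     # Make system.properties writable, add/adjust the port lines shown above, restart the container
     chmod 644 /REMOVED/unifi-controller/system.properties
     nano /REMOVED/unifi-controller/system.properties
     docker restart unifi-controller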
  12. Apologies if this has been addressed; I saw notes around page 8 of others having issues using custom ports, but I couldn't find a solution. It's unfortunate that so much of this thread is standard UniFi controller support rather than container support. Anyway, I have been unsuccessful using custom ports for this container as well. I have configured the docker properly, but the defaults are still used. If I go into the container and look at the system.properties file, the port options are all commented out, so I'm not sure how the UniFi controller is supposed to know to use the custom ports. Thanks, James
  13. EXCELLENT FIND! Just as a note, I also had to "fix" the /etc/nginx/sites-available/10_generic.conf you referenced in the link, as mine had the error_log listed twice. I too am now seeing proper "HIT" and "MISS"! Thanks for the info!
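     If anyone wants to check their own container for the same duplicate before editing anything, a quick hedged check (container name and file path taken from the posts above):

     # List every error_log directive with its line number; more than one hit means the same fix applies
     docker exec -it lancache-bundle grep -n "error_log" /etc/nginx/sites-available/10_generic.conf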
  14. I was thinking the same thing, but if all of your DNS servers are running on unRaid, you are basically taking the internet down with unRaid anyway. With that said, I'm still running pfSense on dedicated hardware because I don't trust a docker not to be compromised, and your firewall is important enough to care a little more about than standard dockers. Your firewall is there to protect your network, and you would be putting an interface on unRaid directly on the internet. Not really a good idea, IMO.
  15. Interesting. Having both .69 and .13 in use, depending on how your router caches lookups (if it does at all), you may be randomly bypassing the lancache-bundle server. I can't tell you how to set up your network, but I can tell you how I have mine set up, since it sounds like we have similar thoughts.
     1) DHCP is handing out the IP address of PiHole and a second IP address that is dead on my network. This keeps Windows from using whatever it wants. If your router isn't able to provide custom DNS IPs for DHCP, I would suggest switching to a DHCP server that does. I haven't used it myself, but PiHole includes a DHCP server.
     2) PiHole's upstream DNS servers are: 1- the lancache-bundle IP, 2- the same fake IP DHCP is handing out. Again, this forces PiHole to use the lancache-bundle IP.
     3) The lancache-bundle's upstream DNS is the OpenDNS IP address only.
     This setup lets me blacklist sites easily on PiHole and also gives me correct statistics in its interface. If you have your router asking PiHole, all the statistics will show the requests coming from the router and not the individual devices, so if you have a ton of blocked lookups from a specific device, you will not be able to track it down.
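     To confirm the chain is actually being followed, a couple of hedged test queries from any box with dig installed (192.168.1.3 below is just a placeholder for your own PiHole IP):

     # Asking PiHole for a cached domain should come back with the lancache-bundle IP
     dig @192.168.1.3 steamcontent.com +short
     # Asking PiHole for anything else should resolve normally through the chain
     dig @192.168.1.3 google.com +short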
  16. Unfortunately, the "secondary DNS" you added is bypassing the lancache-bundle. Windows doesn't treat the two entries as primary and secondary ("only use the secondary if the primary doesn't answer"); it picks whichever IP it wants. What do you get when you try this in a Windows command prompt?

     C:\> nslookup

     Should look like this:
     Default Server: UnKnown
     Address: 192.168.1.69

     > google.com

     Should look like this:
     Server: UnKnown
     Address: 192.168.1.69
     Non-authoritative answer:
     Name: google.com
     Addresses: 2607:f8b0:400f:801::200e
                172.217.2.14

     Then, try this:

     > steamcontent.com

     Should look like this:
     Server: [192.168.1.69]
     Address: 192.168.1.69
     Non-authoritative answer:
     Name: steam.cache.lancache.net
     Address: 192.168.1.69
     Aliases: steamcontent.com
  17. I'm running a single dedicated IP address on my lancache-bundle at this time, also have a 1Gbps internet connection, and seem to have hit a download limit of around 20Mbps through the lancache-bundle server. With that said, when I test network equipment within a 1Gbps network I look for ~950Mbps throughput, so a 1Gbps network should be getting much better than 117Mbps; that is most likely a limitation of your lancache-bundle hardware. From what I understand, adding the additional IP addresses should improve your download performance. I'd say give it a shot and report back.
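     If you want to rule the LAN itself in or out before blaming the cache box, a hedged point-to-point test with iperf3 (assumes iperf3 is installed on both ends; the IP is a placeholder for your lancache-bundle host):

     # On the lancache-bundle host: run a listener
     iperf3 -s
     # On the client doing the downloads: push traffic at it; a clean gigabit path
     # should report somewhere around 940Mbps
     iperf3 -c 192.168.1.4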
  18. Can't say for sure, but the 404 is a not found error as if the location lancache-bundle is trying to access doesn't have what it's asking for. The 500, 502, 503, and 504 errors are all gateway/server based errors, which would be upstream as well.
  19. Well, if you want to go to your unRaid docker page every time you want to access the web interface of a docker (or memorize a bunch of random ports), knock yourself out. But your browser defaults to port 80 or 443 (https), so every docker sharing your host IP will need a different port for its web interface. Edit: Ah... I get it... you got me... you are just trolling. Seems pretty darn clear to me:
  20. You may need to open a cmd prompt as administrator and run: ipconfig /flushdns Once you get pfSense up, you'll be able to watch the bandwidth usage on the status page so you can see whether you are using the internet or the cache server.
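     A quick hedged way to confirm the client is back on the cache server after the flush (same elevated prompt; the hostname is just one of the cached domains):

     REM Flush the local resolver cache, then check which answer steamcontent.com returns;
     REM it should come back as your lancache-bundle IP rather than a public Steam address
     ipconfig /flushdns
     nslookup steamcontent.com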
  21. Try this command from your Unraid terminal:

     docker exec -it lancache-bundle tail -f /var/log/nginx/access.log

     What you are looking for are the 200 and 206 status codes. Unfortunately, I can't tell you which is a HIT and which is a MISS, but if you download something and it shows one of those numbers, and the second time you download it shows the other, it is definitely pulling from the cache server. As for bridge vs br0 vs host, I run all my dockers as br0 so they get their own IP address. This makes it so they can all have a web UI on port 80 instead of goofy port numbers all over the place because they are all trying to run on the host IP address. Lastly, on the machines you are troubleshooting with (the ones that have their ONLY DNS set to your lancache-bundle IP address), disable IPv6 so we don't have to deal with that crap in the logs.
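     A hedged one-liner to tally those status codes instead of eyeballing the tail output, assuming the status code sits in the usual ninth whitespace-separated field of the access log (as in stock nginx combined format; adjust the field number if this image logs differently):

     # Count responses per status code; the 200 vs 206 split shifting between the first
     # and second download is the tell that the repeat came from cache
     docker exec lancache-bundle awk '{print $9}' /var/log/nginx/access.log | sort | uniq -c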
  22. All of your dockers need to have a different IP address than the server.
  23. I would remove 8.8.8.8, as Windows likes to use whatever DNS it wants and using 8.8.8.8 may cause the computer to get the REAL IP address. If Windows caches the correct IP, it'll bypass lancache-bundle until the DNS entry expires and it has to ask again, in which case it may get your cache server or 8.8.8.8 again.
     1) Only traffic to the places listed when configuring the docker will be cached. If you don't want to cache something on the list, set it to FALSE.
     2) I don't think so.
     3) Depends on how fast you really want it to be. If you have the extra cash and want it to be faster, get a dedicated SSD. Even with standard mechanical drives, I've gotten 50MB/s (bytes, not bits) from my cache server.
     4) For sure. Change your DHCP to hand out the cache server IP. Manually doing it is good for testing, but not if you have a bunch of computers you want to cache, and not at all if you have people bringing their computers over for a LAN party.
     5) Sounds like you have it. Normally I would tell you to look at the cache logs and see the "HIT" messages, but this docker's logs are all jacked up.
  24. Ok, with that kind of response, you get this kind of response: if you want it to cache something, do you think you should set it to false? Normally, the word false means you DON'T want it to do something. Therefore, you DO NOT need to change any of those fields. By default, it will cache everything. The only time you would set it to false is if you DON'T want it to cache something.