cybrnook

Members
  • Content Count

    387
  • Joined

  • Last visited

  • Days Won

    1

cybrnook last won the day on May 19

cybrnook had the most liked content!

Community Reputation

35 Good

About cybrnook

  • Gender
    Male
  • Location
    United States


  1. I just don't think it's the card, especially if you have gone through three so far. Statistically speaking, you are out of bounds for thinking it's the card at this point 🙂 Unless, of course, they are all using the same ix driver and you are triggering a bug somewhere (I have seen things like tcpdump trigger port flapping in LACP bonds before, but that was a Cisco switch and a Broadcom SR-IOV card). With that said, I have Intel-based NICs and have never had an issue..... Do you have any other 10Gb switches besides the Mikrotik? What's the opposite 10Gb device on your network? If I read your topology correctly, you're gigabit everywhere else except for the link between the Mikrotik and your server? 10Gb won't help you if the uplinks from your UniFi switches (which appear to be all 1GbE) are 1GbE. Unless you are using SFP1 and SFP2 on your 16-port PoE as a bonded pair into your Mikrotik (that needs to be set up properly in your UCK as well, and the Mikrotik would also need to support it)? But even then, the rest of your devices are 1GbE, so even if all your clients were saturating the bond, you would max out at 2Gb since those are not SFP+ ports. Sorry, I know I am looking more at your layout than answering your question, but I am curious how you have it all hooked up....
  2. Are you sure it's not your switch that's flapping? I have seen particular settings trigger bugs on switches. One test you could do when the flapping starts is to log into unRAID and see if you can ping the adapter locally. If it's flapping, I would expect you to see, even locally, a response, then no response, and so on. Another thing: are you running in an LACP bond or anything? What if you run just one of the ports; do they flap when running by themselves (both ports tested)? Trying to think of ways to rule out the adapter/OS as the cause.....
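A quick sketch of that local ping test (the target address and loop bounds are mine; substitute your adapter's IP and let it run while the flapping happens):

```shell
#!/bin/bash
# Poll a local adapter address and log whether it answers, so drops
# can be correlated with switch-side flapping. 127.0.0.1 is only a
# placeholder default -- pass your NIC's IP as the first argument.
TARGET="${1:-127.0.0.1}"

check_link() {
    # One ping with a one-second timeout; prints UP or DOWN.
    if ping -c 1 -W 1 "$1" > /dev/null 2>&1; then
        echo "UP"
    else
        echo "DOWN"
    fi
}

# Log a timestamped sample every second (bounded here for illustration;
# in practice you would loop until the flapping event occurs).
for _ in 1 2 3; do
    printf '%s %s\n' "$(date '+%H:%M:%S')" "$(check_link "$TARGET")"
    sleep 1
done
```

If the adapter itself is healthy, you would expect a solid run of UP even while the switch reports the port bouncing.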
  3. I run them in both my unRAID servers as well. Chinese 10GTek ones from Amazon; I haven't had any issues. Let me know if I can pull any info for you. UniFi 10Gb backbones via SFP+ (over copper). Brain fart..... I run SuperMicro cards now, based on the Intel 82599. I USED to run the 10GTek cards....
  4. Correct, maybe I worded it wrong, but I wrote: meaning that for now, nospectre_v1 will work to disable this for our current kernel; then, in the future, all we will need is mitigations=off for newer kernels. Sorry if it reads oddly. In the end, as long as we are using nospectre_v1 we are good, as this will also be disabled by that flag, since it's a Spectre v1 variant.
  5. New Spectre V1 Intel vulnerability out (SWAPGS): https://www.phoronix.com/scan.php?page=news_item&px=CVE-2019-1125-SWAPGS Looking at the commit: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=a2059825986a1c8143fd6698774fa9d83733bb11 We should be okay as far as disablement goes, as it's going to be lumped under "nospectre_v1", or "mitigations" (for newer kernels): "The mitigations may be disabled with 'nospectre_v1' or 'mitigations=off'". As has mostly been the case, AMD seems unaffected.
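For anyone wanting to verify their boot flags, here is a minimal sketch; the helper function name is mine, and the commented paths show where a live system would read the real values:

```shell
#!/bin/bash
# Check whether a kernel command line disables the Spectre v1 family,
# i.e. contains nospectre_v1 or mitigations=off.
# cmdline_disables_spectre_v1 is a hypothetical helper name.
cmdline_disables_spectre_v1() {
    case " $1 " in
        *" nospectre_v1 "*|*" mitigations=off "*) return 0 ;;
        *) return 1 ;;
    esac
}

# On a live box you would feed it the real command line:
#   cmdline_disables_spectre_v1 "$(cat /proc/cmdline)"
# and read /sys/devices/system/cpu/vulnerabilities/ for current status.
if cmdline_disables_spectre_v1 "BOOT_IMAGE=/bzimage nospectre_v1 quiet"; then
    echo "spectre_v1 mitigation disabled at boot"
else
    echo "spectre_v1 mitigation enabled"
fi
```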
  6. Just set this up with a WinTV-QuadHD using the LibreELEC image, and it worked a treat with my Plex docker. Thank you for this image.
  7. The question was asked whether rsync could handle changes in file names instead of duplicating, and I answered: yes, it can. Seems relevant to me.
  8. Well, I use the wonderful recycle bin plugin for accidental deletes, and to be honest, I have been using unRAID for around 10 years and have never once accidentally deleted anything. For corruption, I have redundant backup servers that take backups at different times, and I have yet to run into corruption due to anything like bit rot. The key is that I write my own scripts, so I understand what they are doing and set my own acceptable risk factors, trading things like versioned backups for smaller backup footprints.
  9. I use a switch in my rsync backup script called "--delete-before", which drops/deletes any file or folder in the target location that was either changed or no longer exists, prior to rsync making a new backup from the source. So if a file was renamed from file1.txt to File1.txt since the last backup, file1.txt on my backup target will be deleted and a new copy of File1.txt will take its place.
  10. Hmm, then going back to the original idea of supporting "remote-random" would be the better approach. The ovpn file would look something like this:

      #2703 UDP
      remote 96.9.245.120 1194
      #3410 UDP
      remote 172.93.153.187 1194
      #4231 UDP
      remote 198.23.164.115 1194
      remote-random
      client
      dev tun
      proto udp
      resolv-retry infinite
      nobind
      tun-mtu 1500
      tun-mtu-extra 32
      mssfix 1450
      persist-key
      ping 15
      ping-restart 0
      ping-timer-rem
      comp-lzo no
      remote-cert-tls server
      auth-user-pass credentials.conf
      verb 3
      pull
      fast-io
      cipher AES-256-CBC
      auth SHA512

      The catch, though, is that we would need to be more crafty with the logic that's currently in there, since your current script just grabs the first occurrence of "remote" and strips the rest out. So the example above would go in fine, but the moment you started the container for the first time, you would only be left with the .120 address, with .187 and .115 now being MIA from the ovpn file. That's the first step. Second, you are (very nicely) adding manual firewall rules during startup. So, if the tunnel were to go down and your watchdog restarted it, this is when openvpn would automatically pick a new random server from the ovpn file. However, if the watchdog only restarts openvpn but ignores the current firewall rules (I haven't looked that far yet), then the rules would be built around an old IP that no longer matches the new tun, so the firewall rules would also need to be repopulated during the tun restart. (Maybe you are already doing this, and if that's the case, then it's as simple as allowing multiple server ("remote") entries to exist in one single ovpn file.) Let me ask, do you have any appetite for this? To me, it would be a great addition! However, it's your container and you maintain it, so any PR I issue will ultimately get checked by you anyway.... P.S. I haven't noticed any issues with Nord.
I have my down cap set at ~15 MB/s and my up set at ~3.5 MB/s and I max out every time.
  11. If you implement it, I can work on submitting a PR to your readme page to write a little summary of how it works. For your points: 1. Correct, a random connection would only happen during a full container restart. Yes, it's a little heavy-handed; however, with the seamless integration you have right now, there is not much front-end interaction for the end user anyway, so I find that just restarting the container is the best/easiest way while keeping mostly everything the same. 2. I don't see this as much of an issue, because even for users who only want to keep one .ovpn file inside the openvpn folder, it will just attempt to choose a random selection of one. So, by default the same .ovpn file will always be used, and the end user will always connect to the same server (unless more than one ovpn exists). I personally use NordVPN, and they provide individual ovpn files per server. The entirety of each file is the same (keys, certs, and options are all identical); only the IP of the endpoint server differs per ovpn.
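The container-side selection described in point 2 could be as simple as the following sketch; the directory and file names are made up, and the real config dir in the container would differ:

```shell
#!/bin/bash
# Pick one .ovpn file at random at container start. With only one file
# present, shuf always returns it, so the current single-server
# behaviour is unchanged. CONFIG_DIR is a throwaway stand-in here.
CONFIG_DIR=$(mktemp -d)
touch "$CONFIG_DIR/us2703.ovpn" "$CONFIG_DIR/us3410.ovpn" "$CONFIG_DIR/us4231.ovpn"

# shuf -n 1 selects one line uniformly at random from the file list.
CHOSEN=$(find "$CONFIG_DIR" -name '*.ovpn' | shuf -n 1)
echo "Using $CHOSEN"
```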
  12. Opened a PR: https://github.com/binhex/arch-qbittorrentvpn/pull/22
  13. Ah, that makes sense seeing as it's using noVNC, thanks for the reply.
  14. @binhex Is it perhaps a leftover that your template for Krusader comes down with a VNC_PASSWORD default key: