cybrnook

Members

  • Content Count: 377
  • Joined
  • Last visited
  • Days Won: 1

cybrnook last won the day on May 19

cybrnook had the most liked content!

Community Reputation: 33 Good

About cybrnook

  • Rank: Advanced Member

Converted

  • Gender: Male
  • Location: United States


  1. Hmm, then going back to the original idea of supporting "remote-random" would be the better option. The ovpn file would look something like this:

#2703 UDP
remote 96.9.245.120 1194
#3410 UDP
remote 172.93.153.187 1194
#4231 UDP
remote 198.23.164.115 1194
remote-random
client
dev tun
proto udp
resolv-retry infinite
nobind
tun-mtu 1500
tun-mtu-extra 32
mssfix 1450
persist-key
ping 15
ping-restart 0
ping-timer-rem
comp-lzo no
remote-cert-tls server
auth-user-pass credentials.conf
verb 3
pull
fast-io
cipher AES-256-CBC
auth SHA512

The catch, though, is that we would need to be more crafty with the logic that's currently in there, since the current script just grabs the first occurrence of "remote" and strips the rest out. So the example above would go in fine, but the moment you started the container for the first time, you would only be left with the .120 address, with .187 and .115 now MIA from the ovpn file.

That's the first step. Second, you are (very nicely) adding manual firewall rules during startup. So if the tunnel goes down and your watchdog restarts it, that is the point where openvpn would automatically pick a new random server from the ovpn file. However, if the watchdog is only restarting openvpn but ignoring the current firewall rules (I haven't looked that far yet), then the rules would still be built around an old IP that no longer matches the new tunnel. So the firewall rules would also need to be repopulated during the tunnel restart. (Maybe you are already doing this, and if that's the case, then it's as simple as allowing multiple "remote" entries to exist in one single ovpn file.)

Let me ask, do you have any taste for this? To me, it would be a great addition! However, it's your container and you maintain it, so any PR I issue will ultimately get checked by you anyways....

P.S. I haven't noticed any issues with Nord.
I have my down cap set at ~15 MB/s and my up set at ~3.5 MB/s and I max out every time.
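To make the "juggling logic" concrete, here is a minimal sketch of a reworked ovpn-preparation step that keeps all remote lines instead of only the first. This is hypothetical (the function name and example path are assumptions, not the container's actual script):

```shell
#!/bin/bash
# Hypothetical rework of the ovpn-preparation step (NOT the container's
# actual script): keep ALL 'remote' lines so 'remote-random' has a full
# pool to pick from, instead of keeping only the first occurrence.
preserve_remotes() {
    local ovpn="$1"
    local -a remote_lines
    local i
    # collect every 'remote <host> <port>' line, in original order
    mapfile -t remote_lines < <(grep '^remote ' "$ovpn")
    # strip all remote lines from the file
    sed -i '/^remote /d' "$ovpn"
    # write them all back starting at line 1, preserving the original order
    for (( i=${#remote_lines[@]}-1; i>=0; i-- )); do
        sed -i "1i ${remote_lines[$i]}" "$ovpn"
    done
}

# example (path assumed):
# preserve_remotes /config/openvpn/client.ovpn
```

With something like this in place, all three remotes in the example above would survive the rewrite, and openvpn's own remote-random would handle server selection on every (re)start.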
  2. If you implement it, I can work on submitting a PR to your readme page to write a little summary on how it works. For your points:

1. Correct, a new random connection would only happen during a full container restart. Yes, it's a little heavier-handed; however, with the seamless integration you have right now, there is not much front-end interaction for the end user anyway. So I find that just restarting the container is the best/easiest way while keeping mostly everything the same.

2. I don't see this as much of an issue, because even for users who only want to keep one .ovpn file inside the openvpn folder, it will just make a random selection from a pool of 1. So by default the same .ovpn file will always be used, and the end user will always connect to the same server (unless more than one ovpn exists). I personally use NordVPN, and they provide individual ovpn files per server. The entirety of each file is the same: keys and certs are all the same, options the same; just the IP of the endpoint server differs per ovpn.
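The "random selection from a pool of 1" behavior can be sketched as a small helper. This is hypothetical (the function name and example path are assumptions, not the container's actual startup code):

```shell
#!/bin/bash
# Hypothetical helper (NOT the container's actual startup code): pick a
# random .ovpn from a directory. With a single file present this always
# returns that file, so default single-config behavior is unchanged.
pick_random_ovpn() {
    local dir="$1"
    local -a configs
    shopt -s nullglob
    configs=( "$dir"/*.ovpn )
    # fail cleanly if no configs exist at all
    (( ${#configs[@]} > 0 )) || return 1
    echo "${configs[RANDOM % ${#configs[@]}]}"
}

# example (path assumed):
# VPN_CONFIG="$(pick_random_ovpn /config/openvpn)"
```

The design point is that randomness only matters when more than one file exists; a lone NordVPN per-server ovpn keeps behaving exactly as today.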
  3. Opened a PR: https://github.com/binhex/arch-qbittorrentvpn/pull/22
  4. Ah, that makes sense seeing as it's using noVNC, thanks for the reply.
  5. @binhex Is it perhaps a leftover that your template for Krusader comes down with a VNC_PASSWORD key set by default?
  6. @binhex Thanks for this container, it seems to work great! One bit of additional functionality I would like to see is the ability to use the "remote-random" option in the ovpn file and specify a handful of "remote" servers for the pool to pick from. I looked at your git page, and in the install script I see that you:

# get first matching 'remote' line in ovpn (ignores any other remote hosts after the first entry)
# remove all remote lines as we cannot cope with multi remote lines (strips out all remote lines, wipes any remaining)
# write the single remote line back to the ovpn file on line 1 (writes the contents of line 1 back into the stripped ovpn file, all other remotes now wiped out)

Outside of coming up with some juggling logic, is there any other reason you would have not to support multiple remote hosts, beyond cleaning the entry prior to using it? I could work on submitting a PR to try and accommodate this, but wanted your take on it first?

EDIT: Or just add the ability to pick a random .ovpn file from those that exist in /config/openvpn when defining VPN_CONFIG, if more than one .ovpn file exists. That would be a pretty clean way, and would give the container the ability to bounce around a few different servers on restart. Plus it would keep most of your logic as is, since we would only be changing the VPN_CONFIG definition line.
  7. And also Ctrl+F5 to force-refresh and clear the cached page (assuming you were just hitting refresh).
  8. I got a couple on the table to test with, but I would be lying if I said I was going to be able to get around to testing them anytime soon.
  9. Upgraded from 6.7.0 to 6.7.1 without issue. I have 2 x NVMe drives as cache in RAID 1. Docker settings are set to the default /mnt/user/appdata. No issues starting my Plex docker (using the Plex docker managed by Plex). So far so good.
  10. Thanks for the input. So, in your case for one, you are on an AMD system, not Intel, so your platform isn't as heavily hit as, say, my 2011-v3 based Intel systems, since Intel is really behind the ball on these patches. As well, I don't want to give the impression that disabling these is a magic +30% performance boost across the board on all benchmark suites; that's absolutely not the case. But what we can see, like from @zoggy 's EXCELLENT pre/post test case on an Intel based system, is that he sees perf boosts across the board, and up to an 80% improvement in context switching (almost at the bottom of the page): https://openbenchmarking.org/result/1906037-HV-190603PTS41,1906033-HV-190603PTS92 So the benefits are real, if your use cases are in alignment and are Intel based. Not to say that disabling the overhead on an AMD system isn't fruitful as well, especially at the OS level; just don't expect an even +30% across the board, all platforms, etc.... With that said, I look forward to maybe bouncing some ideas off you when I get my 2970WX system up and running. It's all here, just no time to actually build it out 🙂 Plus we have been battling SLES scheduling issues on IBM Power at work, issues stemming from incorrect affinity scheduling/assignments to non-optimal NUMA nodes.... I am taking a little time before hopping right back into that 🙂
  11. @Squid is right. It's a nicer two-fer. Since we are in a world of chips right now that are not immune to these attacks at the HW level, we are getting updates through multiple channels: BIOS-level microcode updates, Windows patch-level updates, Linux kernel-level patches and microcode updates, etc.... Okay, so more than two channels 🙂 (it's a mess, is the easy way to put it). With that, only some vulnerabilities are addressed at the BIOS level with microcode; others are being handled by OS patches and updates. To FULLY disable it all would require not only staying on an older unpatched BIOS (for some there may be no option, as MB vendors and Intel are only retrofitting back so far), but also applying these kernel-level mitigation disables. I don't really recommend staying on an old BIOS, as other features come in newer BIOS versions, like AGESA updates and CPU compatibility for newer chips on older chipsets. As noted in the plugin, there is still a good amount of mitigations we can disable at the kernel level, and users are seeing perf gains in the VM space. As new CPUs are patched at the hardware level, this will get even more confusing, since we will have microcode in BIOS updates that applies only to certain CPUs but not others, and then patches at the OS level that will seemingly apply to everyone, since we all pay the price at the OS level.
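For reference, the kernel-level part of this boils down to boot parameters. A minimal sketch of an unRAID /boot/syslinux/syslinux.cfg entry using the unified switch (available on 5.1+ kernels and backported to several stable series; the plugin may apply individual flags instead, so treat this as illustrative rather than the plugin's exact output):

```
label unRAID OS
  menu default
  kernel /bzimage
  append mitigations=off initrd=/bzroot
```

On kernels without the unified `mitigations=off` switch, a similar effect is approximated with individual flags such as `nopti nospectre_v2 nospec_store_bypass_disable`.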