Xaero

Everything posted by Xaero

  1. Needs to go in the client-side peer config. Add the addresses to the AllowedIPs list, e.g. AllowedIPs=1.2.3.4/32, 5.6.7.8/32. Once that's set you do have to stop/start the WireGuard server, and make sure the config on the client is updated as well (changing it on the server doesn't change it on the device(s) that already have that peer config loaded, so you'd have to reload it onto those devices). Once it's loaded onto the devices and the server has been restarted, connect and try pinging the IP you're trying to access. It should at least ping if it's routable.
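     A minimal sketch of what the client side ends up looking like (keys, addresses, and the endpoint are placeholders, not values from this thread):

       [Interface]
       PrivateKey = <client-private-key>
       Address = 10.253.0.2/32

       [Peer]
       PublicKey = <server-public-key>
       Endpoint = <server-address>:51820
       # 1.2.3.4/32 and 5.6.7.8/32 are the LAN hosts you want reachable through the tunnel
       AllowedIPs = 10.253.0.1/32, 1.2.3.4/32, 5.6.7.8/32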
  2. You can circumvent that with the SETUID bit. I've been using an overlayfs for my /root/ folder so that any changes I make are automatically on the flash drive, and support full *nix permissions. The go file is only used to mount that overlay and kick off any scripts if needed. I can share the steps here to recreate the overlayfs. It also enables things like preference persistence for htop, tmux, etc.
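     For anyone curious, a rough sketch of the go-file portion (paths and the loopback image are my assumptions about one way to do it, not the exact script referenced above; the upper/work dirs live inside an ext4 image because the FAT flash drive can't hold overlay whiteouts or POSIX permissions):

       # go-file excerpt: persist /root via an overlayfs backed by an image on the flash drive
       mkdir -p /mnt/rootpersist
       mount -o loop /boot/config/root-persist.img /mnt/rootpersist
       mkdir -p /mnt/rootpersist/upper /mnt/rootpersist/work
       mount -t overlay overlay \
         -o lowerdir=/root,upperdir=/mnt/rootpersist/upper,workdir=/mnt/rootpersist/work \
         /root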
  3. In the [Peer] section of the peer configuration file for the client that should have access to 10.0.2.3, make sure 10.0.2.3/32 is in the AllowedIPs list. If it's not, the tunnel won't send traffic to it.
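     For example (everything other than the 10.0.2.3/32 entry is a placeholder):

       [Peer]
       PublicKey = <server-public-key>
       Endpoint = <server-address>:51820
       AllowedIPs = 10.253.0.1/32, 10.0.2.3/32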
  4. The DNS server on my local LAN, in this case my ISP-provided cable modem gateway. Though eventually that will be replaced with OPNsense, now that I've tested everything works that way. Also note that you need to edit the peer configuration files manually in /etc/wireguard/peers. Afterwards you can regenerate the QR code using my instructions above, so that you can provide users with a QR code or the ZIP.
  5. FYI, I was able to get this working properly by hand with only the following for all profile types: DNS=<Local-IP> in the [Interface] section of the peer config, and <Local-IP>/32 included in the AllowedIPs= of the [Peer] section of the peer config. A single DNS field plus some rudimentary logic to check whether that DNS IP is already covered by the allowed range is all the GUI should need. From there I manually regenerated the QR codes and moved on. Of course I can't touch those peers in the GUI now without ruining everything, but it works as is.
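     Putting the two pieces together, the peer config ends up roughly like this (keys, addresses, and the 192.168.1.1 DNS IP are placeholders):

       [Interface]
       PrivateKey = <peer-private-key>
       Address = 10.253.0.2/32
       DNS = 192.168.1.1

       [Peer]
       PublicKey = <server-public-key>
       Endpoint = <server-address>:51820
       # the DNS server's /32 has to be routed through the tunnel as well
       AllowedIPs = 10.253.0.1/32, 192.168.1.1/32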
  6. Not my feature request; but this covers the same issue.
  7. Would it be possible to add this as an option in the GUI? I'll do it manually for now, but that doesn't help much for QR code users. A slider in the advanced settings for "Force DNS" with an input field for the DNS IP would be sufficient, I think. EDIT: For people who set the DNS manually in the client configs and want the QR code updated as well (where # is the wg profile and * is the peer number):
       cd /etc/wireguard/peers
       qrencode -o peer-<hostname>-wg#-*.png < peer-<hostname>-wg#-*.conf
     This regenerates the PNG manually.
  8. Would it be possible to force a DNS server? Currently it looks like the client's DNS is used no matter what, which means DNS leaks are a problem. It also means that hostname resolution for devices on the VPN doesn't work (for example http://<ServerName>/ does not work, while http://<IP Address>/ does). Other than that it seems pretty excellent so far. Edit: I tried adding DNS=<IP> in the wg0.conf and it didn't like it. Not sure what special sauce is needed.
  9. So on 6.7 I would configure the "pproxy" docker from Docker Hub to run inside the nordvpn container's network, and the nordvpn docker would have the port forwards. The pproxy docker would exist within the nordvpn network and therefore already be tunneled into the VPN. By setting up pproxy in this manner I could use SOCKS etc. to route selected traffic through NordVPN at will. On Unraid 6.8rc1 this results in an error about the docker not being allowed in two networks simultaneously. I'd like to suggest that containers be listed as network options in the dropdown list as a solution, since it's the most direct approach. Also worth noting that just running the docker directly with
       docker run -d --name='pproxy' --net=container:nordvpn -e TZ="America/Los_Angeles" -e HOST_OS="Unraid" 'mosajjal/pproxy'
     works fine.
  10. FWIW, you can use something like sslh coupled with something like udptunnel to carry WireGuard's UDP packets over TCP on the SSL port (443), which is generally not blocked anywhere. This would be pretty manual to set up, since the Unraid implementation of WireGuard doesn't "just have this," but there are dockers for BOTH of these things...
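      As a rough illustration of the UDP-over-TCP half (using socat rather than udptunnel, just because it's more widely available; ports and hostnames are placeholders, and sslh on 443 would sit in front of the server-side TCP listener):

        # Server side: accept the TCP stream and unwrap it to the local WireGuard UDP port
        socat TCP-LISTEN:4443,fork,reuseaddr UDP:127.0.0.1:51820

        # Client side: expose a local UDP port that is carried over TCP to the server,
        # then point the client's WireGuard Endpoint at 127.0.0.1:51821
        socat UDP-LISTEN:51821,fork,reuseaddr TCP:<server>:4443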
  11. I accidentally wrote a raw disk image over the top of my unraid USB drive. I'm an idiot. Don't worry.
  12. I don't get an error - but the edit option just isn't even available on my containers. I'll try recreating a couple. This has been pretty painless so far. EDIT: recreating works. Everything is back to normal again. Thanks everyone.
  13. Yeah, my intent was to finish setting things up (they weren't yet) and then make the backup. Well, you live and you learn.
  14. So, I've been slowly migrating from my old server to my new server. I use my old server as a reverse tunnel entrypoint for remotely managing certain machines via SSH. I image these machines over SSH, using a VM as my SSH client so that I can't make a fatal mistake. In a lapse of judgement I did not use the VM, since I haven't set it back up yet. I helped someone and imaged their drive successfully over SSH... but also managed to write the image to my Unraid USB flash drive. I do not have a backup of this flash drive yet.
      I know that at the very least I need to restore my raid configuration (I have a screenshot of my disk assignments, thankfully). I'll also have to manually reinstall any plugins and do any docker and VM setup manually (unless someone can suggest a way to restore the dockers? I had the docker.img on my cache drive, as well as the appdata and system folders).
      How should I approach restoring the flash drive? How should I approach backing it up once done? I've never made this sort of mistake with Unraid before, so it's a new experience for me.
      EDIT: I should note that I did not notice I had made this mistake until I rebooted the server, so it's not up for me to capture the state or anything.
  15. I just tried installing 6.8rc and I'm getting a kernel panic because the root filesystem is not mountable. Not sure if the image I pulled got corrupted somehow or what. EDIT: This was not your fault. Disregard. I did a stupid.
  16. In my opinion, each docker should be listed inside the "network interfaces" box as a selection, so you can easily pick which network to connect to. Perhaps add a "shared network" option to dockers so that the list doesn't get huge with too many dockers. It just needs to not switch from container name to container ID.
      To make this a bit more clear: in this screenshot we see that Nordvpn is configured for bridge mode networking. DDClient is configured for host mode networking (I start it in host mode for updating DNS records with my real IP, currently; eventually I will change it to container:nordvpn). The third docker, pproxy, is configured manually by going into advanced and putting --net=container:nordvpn. After saving, the --net=container:nordvpn is converted to container:<uuid>. This UUID changes every time the container is modified, so if I change a setting, update the container, etc., everything that depends on its network must be manually updated again.
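      You can watch the conversion happen with something like this (container names are just the ones from my example; the ID shown is illustrative):

        # What actually got saved after editing the template:
        docker inspect -f '{{.HostConfig.NetworkMode}}' pproxy
        # -> container:3f2a9c...   (an ID, not "container:nordvpn")
        # Recreating nordvpn gives it a new ID, so that NetworkMode now points
        # at a container that no longer exists and has to be fixed by hand.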
  17. I too saw this behavior and have become frustrated by it. I would like to second this request; my implementation was for nearly identical purposes. Basically, I set up a NordVPN container, added non-VPN-friendly containers (including my sslh docker) to that docker's network, and was off to the races. sslh multiplexes my HTTPS and SSH traffic so they are both on port 443. I'd rather not expose my public IP directly, so I route all external traffic through the VPN connection leveraging this and nginx. Problem: any time the NordVPN container restarts or updates, everything breaks and has to be manually corrected to point at the new --net=container target.
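      For reference, the sslh side of that is roughly the following (listen address and backend ports are placeholders, not my exact setup):

        # Multiplex SSH and TLS on a single external port
        # (older sslh builds call the last option --ssl instead of --tls)
        sslh --listen 0.0.0.0:443 --ssh 127.0.0.1:22 --tls 127.0.0.1:8443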
  18. You may need to get a bit more verbose here. For one, we don't know what platform you are on. Some CPU sockets can be troublesome with memory errors, and dirty/corroded pins on the CPU have been the culprit for memory errors in the past as well. I see that you have (potentially) two channels with errors: CPU_SrcID#1_Ha#0_Chan#2_DIMM#0 and CPU_SrcID#1_Ha#0_Chan#3_DIMM#0. In both cases you are experiencing the error with DIMM 0 of the respective channel. Typically the first channel is the set of DIMMs closest to the CPU in question, the second is the next set, and the third is the set after that. This is usually denoted on the soldermask and in the system board manual. It's interesting that the memory address for the errors on both channels is very similar, though I don't know whether you've moved the bad DIMM between channels (the log can't tell me that).
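      If it helps narrow things down, the per-DIMM error counters are usually exposed through EDAC sysfs; exact paths vary by kernel and memory controller driver, so treat this as a starting point:

        # Newer kernels: per-slot labels and correctable error counts
        grep -H . /sys/devices/system/edac/mc/mc*/dimm*/dimm_label \
                  /sys/devices/system/edac/mc/mc*/dimm*/dimm_ce_count 2>/dev/null
        # Older kernels expose csrow/channel counters instead
        grep -H . /sys/devices/system/edac/mc/mc*/csrow*/ch*_ce_count 2>/dev/null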
  19. seconded. Pics or it never happened.
  20. Any chance of setting this up in Pterodactyl.io?
  21. While this is fine for most use cases, and probably even "acceptable" for mine, it reaches limitations rapidly when hitting the disk with high network bandwidth. I have a 10GbE uplink to my network. A single SATA hard disk can hardly saturate gigabit, let alone 10GbE. If I'm transferring a large image on my network (500GB+ disk images) and a user wants to stream an episode of a TV show from the same disk I am hitting, it might work. But if that TV show is 4K... we've got a problem. The disk I/O demand is just too high.
      Yes, even spreading the data out, we can run into this problem, but it will be substantially more rare. I can further mitigate these issues by ensuring my large disk images are on a single share, and that those shares don't touch the disks with TV shows and movies. But we still hit disk I/O bottlenecks when multiple users are slamming the same drive. Enter cache, maybe? Sure, if you can reliably predict which data users are going to grab and cram all of that into a (relatively) small cache device; unlikely.
      In the above I highlight several (band-aid) fixes for the "problem" in the form of a lot of constant, manual server administration work. I paid for an Unraid license because I don't want to fiddle with the server constantly. I have my Arch server, and I went down that road; I haven't had to do any manual administration on that Arch server in over a year. This Unraid box, I cannot say the same for, but that's growing pains. Once everything is set up the way I like and works properly, I won't need to fiddle.
      This is probably the single biggest downside to Unraid: the throughput sucks. There are a lot of positives to a system like this, but the big detriment is that I/O performance is abysmal compared to a striped disk system. I keep teetering between whether I should continue to fill this server, or jump ship to a different platform while I can still afford to empty the drives and make the swap. I'm less inclined to, just because I've paid for a license here.
  22. Would it be possible to add support for a true scatter function? All of my disks are currently the same size, and I'd like to keep files distributed among the disks to spread out the bandwidth use. Originally my files were distributed by Unraid's High-Water option. I used unbalance to migrate data to 3 individual disks so I could switch to encrypted disks; migrating data off those existing disks is where I realized the name "scatter" doesn't quite tell the story. Now I have 4 disks more or less permanently stuck at 100% usage, which makes a much larger portion of my data reside on them. I can manually go through and select individual directories to spread the data out, as sketched below, but that's probably going to take several months.
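      The manual version of a scatter looks roughly like this per directory (share and disk names are made up; working against the disk shares directly, not /mnt/user):

        # Copy one directory tree from a full disk to an emptier one, then drop the source copy
        mkdir -p /mnt/disk7/Media/TV
        rsync -avPX /mnt/disk3/Media/TV/SomeShow /mnt/disk7/Media/TV/ \
          && rm -rf /mnt/disk3/Media/TV/SomeShow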
  23. This feature would need to be implemented cautiously. Parity check times for large volumes with many disks are already high enough, and with the current parity/array ratio limitations, anything beyond the current 28+2 is IMHO reckless. I'd imagine we would see multiple arrays and array pooling before we would see a larger configuration. I'd rather see the 28+2 limit made more flexible with additional parity disks, plus multiple arrays and array pooling, as that would be a much more flexible system overall. You could also run simultaneous parity checks on the multiple arrays, and the pooling would make everything still appear as one "big logical volume".
  24. Excellent idea. I may make a script to back up superblock information as well, as that could be used to recover a lot of data in the event of multiple disk failures. Should be fine without it, though.
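      Roughly what I have in mind (device glob and output path are placeholders; assumes XFS array disks, and a consistent dump really wants the filesystems unmounted, e.g. array stopped or in maintenance mode):

        #!/bin/bash
        # Dump filesystem metadata (superblocks included) for each array device
        # so it can be referenced after a multi-disk failure.
        outdir=/boot/config/superblock-backups/$(date +%Y%m%d)
        mkdir -p "$outdir"
        for dev in /dev/md*; do
            name=$(basename "$dev")
            xfs_metadump -g "$dev" "$outdir/$name.metadump"   # -g prints progress
        done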