Strayer

  1. Of course, no rush! I wasn't sure if you'd rather handle contributions here or on Github, so I decided to… uhh, spam, I guess… sorry about that
  2. Hey @ich777, thanks for supporting these plugins! I recently tried out the smartctl exporter and opened a PR for more NVMe data: https://github.com/ich777/unraid-prometheus_smartctl_exporter/pull/1 May I ask why you never released the plugin? It works fine for me. I read the comments about spinning up sleeping disks, which is not a problem for me. Is that the only reason? I saw some code that should avoid this, so I assume it was already fixed? I think it's a fine addition to the plugin list!
  3. So, preface: I know what I'm doing here is unsupported, but I now know what is happening and want to document it in case somebody else runs into this. As I mentioned in my edit, this was 100% related to the ip6tables option of Docker. At first I didn't assume this would actually have any influence on VMs running through the br0 bridge, since that traffic usually shouldn't be affected by iptables at all. But, and I didn't realize this beforehand, Docker loads the br_netfilter kernel module, which causes traffic on bridge interfaces to be handled by iptables too (see the link at the end of this post for technical details). The actual problem: after Docker started with the ip6tables option enabled, it set the default FORWARD policy for IPv6 to DROP. That policy caused all IPv6 traffic to not be passed between the VM and my local network. This is explicitly done by Docker itself (see here for the relevant code in the patch that introduced ip6tables to Docker). I noticed that the default FORWARD policy for IPv4 was ACCEPT, which is why this didn't cause any issues for IPv4 traffic to the VMs. The workaround in my case was to simply add a new ip6tables rule, and everything seems to work fine: ip6tables -A FORWARD -i br0 -o br0 -j ACCEPT (see the sketch after this post list). Now, what I don't understand is why the default FORWARD policy on IPv4 is ACCEPT. Based on Docker's code it should be set to DROP too… I checked the init scripts of Docker and libvirt in Unraid and couldn't find anything… in case anyone from Limetech reads this, I'd love to know if you did anything to make this work for IPv4. A lot more context on Docker, libvirt, bridge interfaces and br_netfilter can be found here: https://serverfault.com/a/964491
  4. So, I think this may be related to me enabling ip6tables in Docker using this Docker daemon.json:

         {
           "experimental": true,
           "ip6tables": true
         }

     Today IPv6 stopped working for VMs again. Disabling my custom daemon.json and rebooting the Unraid server (which didn't help the last time I worked on this) fixed IPv6 in VMs again. Kind of annoying, since I utilize this to have correct IPv6 port forwarding for my PiHole, but at least I now have something concrete to test. Edit: This is definitely the case. To test this I stopped Docker with /etc/rc.d/rc.docker stop, put the above daemon.json at /etc/docker/daemon.json, started Docker with /etc/rc.d/rc.docker start and rebooted my VM (the steps are written out in the sketch after this post list). After the reboot, the VM couldn't get an IPv6 via SLAAC again.
  5. Sadly I again have trouble with this… now none of my Linux VMs get an IPv6 address at all. I recently switched all VMs that were using DHCPv6 to SLAAC, and while everything worked fine for a while, I realized there is no IPv6 connectivity at all now. I can see the router solicitation requests from the VM with tcpdump:

         tcpdump -i vnet1 icmp6
         tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
         listening on vnet1, link-type EN10MB (Ethernet), capture size 262144 bytes
         21:05:33.962275 IP6 :: > ff02::1:fff9:6584: ICMP6, neighbor solicitation, who has fe80::5054:ff:fef9:6584, length 32
         21:05:34.986326 IP6 fe80::5054:ff:fef9:6584 > ip6-allrouters.lan.gru.earth: ICMP6, router solicitation, length 16
         21:05:38.954235 IP6 fe80::5054:ff:fef9:6584 > ip6-allrouters.lan.gru.earth: ICMP6, router solicitation, length 16
         21:05:47.146270 IP6 fe80::5054:ff:fef9:6584 > ip6-allrouters.lan.gru.earth: ICMP6, router solicitation, length 16
         21:06:04.554282 IP6 fe80::5054:ff:fef9:6584 > ip6-allrouters.lan.gru.earth: ICMP6, router solicitation, length 16

     This is also visible on br0. None of those packets are visible when running tcpdump on eth0 or on the router. I don't know how to investigate this further… rebooting the server didn't help this time. All my other devices using SLAAC are getting an IPv6 just fine; this is only happening with VMs on Unraid. Edit: Since I was at a total loss I decided to upgrade to Unraid 6.9.10-rc2 and now the VMs get an IPv6 via SLAAC again. No idea if it was just the reboot or whether the newer kernel/libvirt fixed anything. Ugh. I'll have to keep an eye on this.
  6. I rebooted the Unraid server and now it is working 🤔 Not sure what happened there.
  7. It's a standard Debian 11 VM (tried Ubuntu 20.04 too). As far as I can see, the ICMP6 packets appear on the br0 interface on the Unraid server but don't reach my router, so I assume some multicast packets don't get forwarded for some reason. Any tips on how to further debug this would be appreciated.
  8. I'm trying to get a VM running in my local network with IPv4 and IPv6. The VM is getting an IPv4 from my router, but it seems that neither SLAAC nor DHCPv6 is able to get an IPv6. My network config is fairly simple, and VMs in VMware Fusion on my MacBook have no trouble getting both an IPv4 and an IPv6 from my router via a bridged network. Is this to be expected with Unraid? Do I need to change some settings for this to work? Unraid itself and Docker containers have working IPv6. The VM is in network br0, and both virtio and virtio-net have this problem.
  9. @gcolds thank you for testing this! Since @limetech asked a while ago why NFSv4: I've been planning to build a very simple Kubernetes cluster in my homelab for a while, but a lot of deployments would need a persistent volume. I'd like to use Unraid as my central storage, and the most sensible way would be to use an NFS storage driver in the cluster. This obviously requires a stable NFS solution in Unraid, and before 6.10 all the reports about unstable mounts kept me from actually trying to implement this. Thank you for getting NFSv4 into 6.10!
  10. Warning is gone now, thanks for looking into this!
  11. Thank you! Interesting, thanks for letting me know. I issued a new certificate with the wildcard as its common name and the warning is gone now. Any particular reason why this is unsupported?
  12. I seem to have trouble with the new 'Invalid DNS entry for TLD' warning too… it says: "The DNS entry for 'noatun.lan.gru.earth' resolves to 192.168.178.2, you should ensure that it resolves to Array." Not really sure what to do with this… that is the local IP address of the Unraid server. noatun is its name and lan.gru.earth is the local domain. Everything network related works perfectly and has always worked. My local machine resolves this exactly as expected (.1 is my local OpenWRT, .2 is PiHole on the Unraid server, Unraid uses .1):

         ❯ dig +short @192.168.178.1 noatun.lan.gru.earth
         192.168.178.2
         ❯ dig +short @192.168.178.2 noatun.lan.gru.earth
         192.168.178.2
         ❯ host noatun.lan.gru.earth
         noatun.lan.gru.earth has address 192.168.178.2
         noatun.lan.gru.earth has IPv6 address fd3b:2815:be50:4::25
         noatun.lan.gru.earth has IPv6 address <redacted>

      Edit: I also get an incorrect "Invalid Certificate 1" error: "Your noatun_unraid_bundle.pem certificate is for 'lan.gru.earth' but your system's hostname is 'noatun.lan.gru.earth'. Either adjust the system name and local TLD to match the certificate, or get a certificate that matches your settings. Even if things generally work now, this mismatch could cause issues in future versions of Unraid." The certificate I'm using is a wildcard certificate and definitely valid for the server (see the verification sketch after this post list):

         root@noatun certs ❯ openssl x509 -in /boot/config/ssl/certs/noatun_unraid_bundle.pem -text
         Certificate:
             Data:
                 Version: 3 (0x2)
                 [...]
                 Issuer: C = US, O = Let's Encrypt, CN = R3
                 Validity
                     Not Before: Nov 26 13:29:03 2021 GMT
                     Not After : Feb 24 13:29:02 2022 GMT
                 Subject: CN = lan.gru.earth
                 [...]
                 X509v3 Subject Alternative Name:
                     DNS:*.lan.gru.earth, DNS:lan.gru.earth
                 [...]
  13. Nice! Do you work on these plugins in the open? I couldn't find anything on Github, but I may be blind. I'm a bit wary of using precompiled 3rd-party kernel modules. Sorry if this comes off rude, but I'd rather compile these myself, like with the build script of this topic, where I at least have some kind of oversight over what happens. I wouldn't mind moving less-used stuff to a swap space placed in compressed RAM, yes. This is how most modern operating systems work anyway. There are some nice articles on this; I managed to find these that I read a while ago: https://haydenjames.io/linux-performance-almost-always-add-swap-space/ https://haydenjames.io/linux-performance-almost-always-add-swap-part2-zram/ I pretty much started installing Debian's zram-tools on all (mostly cloud) servers that I manage and so far haven't run into any issues. That package creates swap on compressed RAM disks sized based on the RAM size and CPU count. But my primary use case is to be able to use compressed RAM disks as described with the Docker containers in my previous post. Ha, thanks. I have been using the avatar for more than 10 years now and the people who recognize it are shockingly few. I started moving to a custom commissioned avatar on most of my profiles, though, I just forgot to change it here. Now I'm happy that I did forget :D
  14. Sorry, I was still playing around with this this morning and didn't finish anything yet. What I did for testing zram was essentially this (a slightly more complete sketch follows after this post list):

         modprobe zstd
         modprobe zsmalloc
         modprobe zram
         zramctl -f -s 200MiB -a zstd   # returns device name, e.g. /dev/zram0
         mkfs.ext4 /dev/zram0
         mount /dev/zram0 /tmp/zramtest

      I would probably throw something like this in my go file with a better mount point than /tmp/zramtest. I'd then use this mount point as a bind mount in the docker-compose.yml of the container I'm running, replacing the current bind mount on the array for the log files. Doing this for swap is mostly the same, except using mkswap and swapon instead of mkfs and mount. Most of the popular zram packages (e.g. zram-tools from Debian) do some percentage calculation to determine how much RAM should be used for swap. That's a very different topic.
  15. First of all, thanks for replying! I'm using it as a replacement for tmpfs and because I want to add a small amount of compressed memory for swap (see the various discussions and blog posts about whether the Linux kernel benefits from having at least a bit of swap available). The biggest specific use case for me is putting some very VERY verbose debug log files of a container on a compressed RAM disk. I don't want to thrash my SSDs with them, but I also want to avoid having array disks spun up because of log files. I can use tmpfs for that, but after testing a bit, zram manages to compress the data down to 25% of the initial size. This needs a bit of custom code in a user script or the go file to create the block devices and file systems and mount them, obviously, but I do like the way it works. I definitely see myself using this for more files that don't need to be persisted through reboots. zstd just because it is a bit more efficient at compressing compared to the default lzo. (see e.g.
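
The workaround from post 3, written out as a small shell sketch. Only the final FORWARD rule is taken from the post; the policy checks and the idempotent -C/-A pattern are assumptions about how one might put this into the go file so it survives reboots.

    # Show the current default FORWARD policies to confirm the IPv4/IPv6 mismatch
    iptables  -L FORWARD -n | head -n 1    # post 3 observed "policy ACCEPT" here
    ip6tables -L FORWARD -n | head -n 1    # and "policy DROP" once Docker started with ip6tables enabled

    # Allow bridged traffic between the VMs and the LAN on br0 despite the DROP policy.
    # -C returns non-zero if the rule is missing, so re-running this stays idempotent.
    ip6tables -C FORWARD -i br0 -o br0 -j ACCEPT 2>/dev/null || \
        ip6tables -A FORWARD -i br0 -o br0 -j ACCEPT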
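The test from post 4, written out as the shell steps run on the Unraid host. The daemon.json contents and the rc.docker stop/start commands are quoted from the post; wrapping them in a heredoc-style script is just an illustration.

    # Stop Docker, drop in the custom daemon.json, start Docker again
    /etc/rc.d/rc.docker stop

    cat > /etc/docker/daemon.json <<'EOF'
    {
      "experimental": true,
      "ip6tables": true
    }
    EOF

    /etc/rc.d/rc.docker start
    # ...then reboot a VM on br0 and check whether it still gets an IPv6 address via SLAAC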
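A quick way to check the certificate names against the hostname from post 12, assuming OpenSSL 1.1.1 or newer on the server. The bundle path is the one quoted in the post; the -ext and -checkhost options are standard openssl x509 flags, not anything Unraid-specific.

    CERT=/boot/config/ssl/certs/noatun_unraid_bundle.pem

    # Print only the Subject Alternative Names instead of the full certificate dump
    openssl x509 -in "$CERT" -noout -ext subjectAltName

    # Ask openssl directly whether the certificate covers the host's FQDN
    # (the wildcard *.lan.gru.earth should match noatun.lan.gru.earth)
    openssl x509 -in "$CERT" -noout -checkhost noatun.lan.gru.earth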
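A consolidated sketch of the zram setup from posts 14 and 15, in go-file form. The mount point /mnt/zram-logs and the 200 MiB size are placeholders; capturing the device name printed by zramctl instead of hard-coding /dev/zram0 is an assumption about how one might make this a bit more robust.

    #!/bin/bash
    # Load the compression backend and the zram module
    modprobe zstd
    modprobe zsmalloc
    modprobe zram

    # Create a 200 MiB zram device with zstd compression; with -f, zramctl prints the device it picked
    ZDEV=$(zramctl -f -s 200MiB -a zstd)    # e.g. /dev/zram0

    # Put an ext4 filesystem on it and mount it where the container's log bind mount points
    mkfs.ext4 -q "$ZDEV"
    mkdir -p /mnt/zram-logs
    mount "$ZDEV" /mnt/zram-logs

    # For swap the flow is the same, just with mkswap/swapon instead of mkfs/mount:
    #   ZSWAP=$(zramctl -f -s 1GiB -a zstd) && mkswap "$ZSWAP" && swapon "$ZSWAP"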