Leseratte10

Members · 23 posts
  1. I mean, they could just offer both options if people want them. Add a toggle somewhere when you buy the license: do you want a license that's bound to a USB drive, or one that's bound to the internet and requires connecting to a server every week or so to verify it's still valid? Best of both worlds, and everyone can choose what they like best.

     One easy way to solve that would be to add some mapping in UnRAID, like "ZFS was introduced on 2024-xx-xx, some-other-new-feature was introduced on 2025-xx-xx", and so on for each new feature that's released. Then they can let everyone update to the newest UnRAID to get security updates and bugfixes (even if the license is expired), but UnRAID can check the date the license file was issued and only allow people to use ZFS or "some-other-new-feature" if the license was still valid at the point in time when said feature was added. That way they wouldn't be putting OS updates and bugfixes behind a subscription, just actual new *features*. And they wouldn't have the extra work of maintaining two branches; all they'd need is a simple feature toggle in the latest UnRAID that blocks a new feature if the license is too old (see the sketch below).

     But I guess that depends on what LimeTech *wants*. Do they actually want to put all updates behind a subscription wall (including security fixes and bugfixes) and are only selling it with the "new features cost money to develop" take - or do they actually *want* to charge only for new features and just figured limiting access to updates is the easiest way to do that? I'd be pissed if, right after my subscription ended, a new point release came out that fixed bugs I reported earlier, or patched important CVEs that had just been published, and I couldn't install it ...
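     Roughly, that gate could look like this (just a sketch - the dates and messages are made up, and this isn't actual UnRAID code):

        #!/bin/sh
        # Hypothetical date-based feature gate: a feature is available if the
        # license was still valid (or bought) when the feature shipped.
        LICENSE_EXPIRES="2025-03-01"      # would come from the license file
        FEATURE_INTRODUCED="2024-06-14"   # made-up release date of the feature

        if [ "$(date -d "$FEATURE_INTRODUCED" +%s)" -le "$(date -d "$LICENSE_EXPIRES" +%s)" ]; then
            echo "feature enabled: license covered this feature's release date"
        else
            echo "feature locked: released after the license lapsed (updates and bugfixes stay available)"
        fi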
  2. Do you have any plans to decouple OS security updates (which every user should get, even with an expired license) from UnRAID feature updates, then, if people with an expired license don't even get minor updates? Otherwise quite a few UnRAID boxes are going to become outdated security risks and will get hacked eventually ...
  3. @limetech If the support / development work for two or more separate branches is too high, have you thought about allowing everyone to update but locking down new features? So, even with the limited license you could update UnRAID forever, but once you add a large new feature to UnRAID - like ZFS, or whatever the last larger addition was - you could add code that says "only allow people to use this feature if they're on Pro or Lifetime, or their key was bought after the feature was introduced".

     That way you'd only have one branch of UnRAID to maintain, everyone (even those with subscription keys) could update to the newest version and receive security updates and fixes, but new features would require either a Pro or Lifetime license or a currently valid subscription license.

     Looking through the last couple of updates, quite a few changes are security-only, or bugfixes for past issues. If I had a temporary license for a year, I'd be pissed if I updated to a new version that introduced some new bug, and then my license expired and I couldn't update to the next release that hopefully fixes it. The number of big new features hasn't been that high recently, so the additional work of locking new features behind a "your key must be up-to-date" check shouldn't be that much.
  4. In case anyone else runs into this issue too: I've opened a bug report and created a small Unraid plugin that works around it: https://github.com/Leseratte10/unraid-plugin-bind-all

     This should make all services listen on [::] (IPv6) and 0.0.0.0 (IPv4), just like in 6.11.5 (the sketch below shows the gist of the approach). Note that this plugin isn't tested all that much and it's my first plugin, so make sure you can still access the server in case it breaks networking somehow. You can install it by going to Plugins -> Install Plugin and entering this URL: https://github.com/Leseratte10/unraid-plugin-bind-all/blob/master/leseratte.patch.listen-on-all-ips.plg
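     Conceptually, the workaround boils down to rewriting the pinned listen addresses back to wildcards and restarting the services - something like this (illustrative only, not the plugin's actual code; the config paths and rc scripts are assumptions about the stock Unraid layout):

        # nginx: replace per-interface listen addresses with wildcard listeners
        sed -i -E 's/listen [0-9.]+:443/listen 443/' /etc/nginx/nginx.conf
        sed -i -E 's/listen \[[0-9A-Fa-f:]+\]:443/listen [::]:443/' /etc/nginx/nginx.conf

        # sshd: comment out the pinned ListenAddress lines (sshd then binds all addresses)
        sed -i -E 's/^ListenAddress .*/#&/' /etc/ssh/sshd_config

        /etc/rc.d/rc.nginx restart
        /etc/rc.d/rc.sshd restart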
  5. Can confirm, something is broken in 6.12.x. In 6.11.5, the nginx webserver was simply configured to listen on "0.0.0.0" for IPv4 and "[::]" for IPv6, which made it listen on all IPs:

        listen 443 ssl http2 default_server;
        listen [::]:443 ssl http2 default_server;

     Since the 6.12.0 update, it's listening on hardcoded addresses:

        listen 10.0.2.140:443 ssl http2 default_server; # eth0
        listen [2001:db8:1234:5678:1234:56ff:fe78:90ab]:443 ssl http2 default_server; # eth0

     Same for the SSH server:

        #Port 22
        AddressFamily any
        ListenAddress 10.0.2.140 # eth0
        ListenAddress 2001:db8:1234:5678:1234:56ff:fe78:90ab # eth0

     This is terrible. My network uses global IPv6 addresses for outgoing internet connections and ULAs for internal access. When I updated from 6.11.5 to 6.12.6, this change made my Unraid server unreachable, since it's no longer listening on the correct addresses. Hardcoding the address is also terrible for dynamic IPv6 prefixes, since it requires some daemon to A) notice the prefix change, B) rewrite the nginx/SSH/SMB/whatever config, and C) reload the server.

     Can this be fixed somehow? I checked the new "Interface extra" section in the Network Settings that was mentioned in the release notes, but that's useless. It looks like Unraid just checks for the current IPv4 and IPv6 address on the interface that's entered there and puts that into the nginx config. So that will probably break when the IPv6 prefix changes, and it *definitely* breaks for ULAs and when there's more than one IPv6 address on an interface. This looks like yet another case where new network "features" weren't properly tested with IPv6 prior to release.

     Can the priority for this bug be increased? In my opinion, no longer being able to access the WebGUI AND the SSH server over IPv6 is more than an "annoyance that doesn't affect functionality". Thank god I did that OS update in person instead of over a remote connection, otherwise I would have been locked out. Can you just give us a toggle to revert to the old behaviour of simply listening on "::"?

     EDIT: Looks like I can enter a ULA into "Include listening interfaces", and thankfully that gets added to the sshd and nginx config files (so I can access the server remotely again). But that'll still break when the ULA or GUA prefix changes, as I'd then need to alter the config manually again. How does something like this end up in a release?
  6. I'm trying to use UnRAID's built-in syslog server to receive logs from a bunch of other machines in my network. The logging itself is working fine, but I really don't like that UnRAID uses the client's IP address for the file name. Some clients connect over IPv4, some over IPv6. Some even switch between IPv4 and IPv6 (maybe one of them is down sometimes), and their logs end up in different files. Some are clients with a dynamic IPv6 (privacy extensions). I tried to have my remote backup UnRAID server send its logs to the "main" UnRAID server, too - but since that server is behind a dynamic public IP, UnRAID creates a new syslog file every single time the IP changes. That makes the logs pretty useless, since you can't really find anything.

     Given that the syslog protocol is a standard format and each log entry starts with the date/time followed by the hostname of the sending machine, why does UnRAID use an IP, which can change all the time, for the file name? Can a setting be added to UnRAID's syslog server so the file name for received syslog messages contains the server name sent inside the syslog message (which stays constant) instead of the source IP address (which changes all the time)?

     EDIT: Looks like the relevant config is in /etc/rsyslog.conf:

        # ######### Receiving Messages from Remote Hosts ##########
        # TCP Syslog Server:
        # provides TCP syslog reception and GSS-API (if compiled to support it)
        $ModLoad imtcp # load module
        #$InputTCPServerRun 514 # start up TCP listener at port 514

        # UDP Syslog Server:
        $ModLoad imudp # provides UDP syslog reception
        #$UDPServerRun 514 # start a UDP syslog server at standard port 514

        $template remote,"/mnt/user/syslog/syslog-%FROMHOST-IP%.log"

     I could probably just edit this and modify the path, but I'm not sure when/if it'll be overwritten again by UnRAID. Basically, I'd just like an option in the UnRAID GUI to replace that stupid "%FROMHOST-IP%" with "%HOSTNAME%" (see the template below). The corresponding code in the webgui repo that would need to be changed is here.

     EDIT 2: Also, while we're at it, the HTML input field for the remote syslog server hostname is limited to 23 chars, which is pretty short for a server name. When I edit the HTML and remove that restriction, I can enter a longer server name and it works fine. Can that limit be removed / increased?

     EDIT 3: PR: https://github.com/unraid/webgui/pull/1563
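     For reference, the requested change would be a one-line swap in /etc/rsyslog.conf - rsyslog's %HOSTNAME% property carries the hostname from the received message header (this is the proposed template, not what UnRAID currently ships):

        # name received logs after the hostname inside the syslog message,
        # instead of the (possibly changing) source IP
        $template remote,"/mnt/user/syslog/syslog-%HOSTNAME%.log"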
  7. Great plugin, I've started using it to automatically create backups of my Docker container data (which lives on the cache drive) to the main array. When I started using it, I added the "noParity" flag because I didn't want backups or other intensive tasks running on the array during a parity check or data rebuild - and then a month later wondered why it hadn't run the backup for two days in a row ...

     Would it be possible to add another flag that auto-pauses a parity check or data rebuild if one is currently running? So, a flag like "pauseParity" that checks whether a parity operation is running before starting the userscript, and if so, pauses the operation, runs the userscript, then resumes it (roughly like the sketch below)? Or would that cause issues, for example when running multiple scripts in parallel?
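     Conceptually something like this (just a sketch - the var.ini path is how I understand Unraid exposes array state, and the "mdcmd check PAUSE/RESUME" commands are an assumption based on what the Parity Check Tuning plugin appears to do):

        #!/bin/sh
        # pause a running parity operation, run the script, then resume
        resync=$(awk -F'"' '/^mdResync=/{print $2}' /var/local/emhttp/var.ini)

        paused=0
        if [ -n "$resync" ] && [ "$resync" -gt 0 ]; then
            mdcmd check PAUSE     # assumption: pauses the running check/rebuild
            paused=1
        fi

        /boot/config/plugins/user.scripts/scripts/my_backup/script   # hypothetical script path

        [ "$paused" -eq 1 ] && mdcmd check RESUME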
  8. I'm also looking for a fix for this issue. I spent hours debugging my own code until I figured out that the UnRAID host (6.11.5) is not reachable over IPv6 from within a container ... Any way to change this stupid behaviour? I tried manually setting routes, but they seem to be ignored ...

     EDIT: Solved the issue by throwing a second NIC into the system. The 1st NIC is for UnRAID and all the VMs, the 2nd NIC is just for Docker containers. That way traffic is forced to go out NIC2, through the switch, and into NIC1, which makes it work. Still hoping there can be a proper solution in the future - it's kind of a waste of a NIC (and power). (A software-only alternative is sketched below.)
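     If the containers sit on a macvlan network (which, as far as I know, is what custom br0 Docker networks use by default), this is the well-known macvlan parent/host isolation: the host can't talk directly to its own macvlan children. The usual software-only workaround is a macvlan "shim" interface on the host - a sketch, with every name and address made up:

        # host-side macvlan shim on the same parent interface as the Docker network
        ip link add mac0 link br0 type macvlan mode bridge
        ip addr add 10.0.2.250/32 dev mac0             # hypothetical host shim IPv4
        ip -6 addr add fd00:1234::250/128 dev mac0     # hypothetical host shim IPv6
        ip link set mac0 up

        # route the container subnet via the shim instead of the parent interface
        ip route add 10.0.3.0/24 dev mac0              # hypothetical container subnet
        ip -6 route add fd00:1234:5678::/64 dev mac0   # hypothetical container prefix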
  9. I'm running an UnRAID host with 6.11.5 and a bunch of VMs (network source br0, network model virtio-net), all running Debian 11. All VMs have "net.ipv6.conf.*.accept_ra = 2" set. One of the VMs is running radvd, announcing a route to the network:

        interface enp1s0 {
            AdvSendAdvert on;
            AdvDefaultLifetime 0;
            AdvDefaultPreference low;
            route 2001:db8:1234::/48 {
            };
        };

     Other devices on the network, Linux and Windows, pick up this route just fine. "ip -6 r" on Linux, or "route -6 print" on Windows, will display the route and use it. The UnRAID host itself and the other VM running on UnRAID, however, don't. Running a tcpdump or something like radvdump confirms that the route advertisements do reach the UnRAID host and the other VMs:

        #
        # radvd configuration generated by radvdump 2.18
        # based on Router Advertisement from fe80::xxxx:xxff:fexx:xxxx
        # received by interface enp1s0
        #
        interface enp1s0
        {
            AdvSendAdvert on;
            # Note: {Min,Max}RtrAdvInterval cannot be obtained with radvdump
            AdvManagedFlag off;
            AdvOtherConfigFlag off;
            AdvReachableTime 0;
            AdvRetransTimer 0;
            AdvCurHopLimit 64;
            AdvDefaultLifetime 0;
            AdvHomeAgentFlag off;
            AdvDefaultPreference low;
            AdvSourceLLAddress on;
            route 2001:db8:1234::/48
            {
                AdvRoutePreference medium;
                AdvRouteLifetime 0;
            }; # End of route definition
        }; # End of interface definition

     However, the advertised route does not seem to get added to the routing table. Is there some kind of setting inside UnRAID that could be blocking these (one guess is sketched below)? Both the UnRAID host and all the VMs correctly receive and use routes advertised by another physical machine on the same network; only RAs from the same physical machine seem to be ignored. Or could this be a configuration issue in the Linux VMs, in that Linux somehow still figures out that the RA actually comes from the same physical device? The link-local address the RA comes from (fe80::xxxx:xxff:fexx:xxxx) is the link-local address of the VM running radvd. I have attached the configuration of the radvd VM, extracted from the Diagnostics. vmconfig.txt
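     One thing I'd double-check on the host first (just a guess, not a confirmed diagnosis): Linux silently ignores RAs on interfaces that have IPv6 forwarding enabled unless accept_ra is set to 2, and a host that bridges/routes for VMs may well have forwarding on:

        # on the UnRAID host; br0 as the interface name is an assumption
        sysctl net.ipv6.conf.br0.forwarding net.ipv6.conf.br0.accept_ra

        # if forwarding=1 and accept_ra=1, RAs are dropped; 2 accepts them anyway
        sysctl -w net.ipv6.conf.br0.accept_ra=2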
  10. Noticed a bug in 6.11.5 that apparently prevents Wireguard tunnels from working in IPv6-only mode. How to reproduce: go to Settings -> VPN Manager, enter a local name, generate keys, enter the endpoint, and click "apply". Toggle "Advanced" mode, set "Network protocol" to "IPv6 only", click "apply". Click "add peer", enter a name, generate a keypair, click "apply". Then try to set the "inactive" toggle to "active" and notice it jumps right back to "inactive". The log file at /var/log/wg-quick.log will contain the following error:

        # cat /var/log/wg-quick.log
        [#] logger -t wireguard 'Tunnel WireGuard-wg0 started'
        [#] ip6tables -t nat -A POSTROUTING -s fc00:253:0:0::/64 -o br0 -j MASQUERADE
        [#] ip -4 route flush table 200
        [#] ip -4 route add default via dev wg0 table 200
        Error: inet address is expected rather than "dev".
        [#] ip link delete dev wg0
        wg-quick down wg0
        wg-quick: `wg0' is not a WireGuard interface

      That's because of the generated Wireguard config:

        # cat /etc/wireguard/wg0.conf
        [Interface]
        #random
        PrivateKey=x
        Address=fc00:253:0:0::1
        ListenPort=51820
        PostUp=logger -t wireguard 'Tunnel WireGuard-wg0 started'
        PostUp=ip6tables -t nat -A POSTROUTING -s fc00:253:0:0::/64 -o br0 -j MASQUERADE
        PostDown=logger -t wireguard 'Tunnel WireGuard-wg0 stopped'
        PostDown=ip6tables -t nat -D POSTROUTING -s fc00:253:0:0::/64 -o br0 -j MASQUERADE
        PostUp=ip -4 route flush table 200
        PostUp=ip -4 route add default via dev wg0 table 200
        PostUp=ip -4 route add 10.0.0.0/16 via 10.0.1.1 dev br0 table 200
        PostDown=ip -4 route flush table 200
        PostDown=ip -4 route add unreachable default table 200
        PostDown=ip -4 route add 10.0.0.0/16 via 10.0.1.1 dev br0 table 200

        [Peer]
        #random
        PublicKey=y
        AllowedIPs=fc00:253:0:0::2

      I censored the keys, obviously. 10.0.0.0/16 is my local IPv4 network and 10.0.1.1 is my gateway. Notice the "PostUp=ip -4 route add default via dev wg0 table 200" line: that's not a valid route - there's supposed to be an IPv4 address between "via" and "dev", but there isn't, so Wireguard fails to start. If I switch from "IPv6 only" to "IPv4 + IPv6", that line turns into a valid route: "PostUp=ip -4 route add default via 10.253.0.1 dev wg0 table 200".

      EDIT: The code that adds these PostUp lines to the config file explicitly mentions that it only works for IPv4, so why is that UnRAID code executed when I select "IPv6 only" for Wireguard?

      Also, the logging is somewhat wrong. The PostUp commands are executed in order, so it first says "Tunnel started" even though the tunnel isn't actually up yet. It'd be better for the 1st PostUp to be something like "Starting tunnel ..." and the last PostUp to be "Tunnel started.". That way, if the tunnel fails to start due to bugs like this one, the log won't claim it was successful.

      Is there a possibility, in a future release, to add something like an "ultra-advanced" mode where people can just upload their own wg0.conf and use that, instead of having to click a tunnel together through the UI? EDIT: Looks like that already exists with the "Import Tunnel" button; I haven't tested yet whether my use-case works with that, though. A hand-written config for this case might look like the sketch below.

      I did search the forum for existing bug reports but didn't find any - probably nobody is using an IPv6-only Wireguard tunnel. I can post diagnostics if necessary, but this doesn't look like a bug that's unique to my setup.
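      For reference, a hand-written IPv6-only config for this setup would just drop the broken IPv4 route lines entirely - essentially the generated config minus the "ip -4 route" PostUp/PostDown commands (untested with the Import Tunnel button; keys elided as before):

        [Interface]
        PrivateKey=x
        Address=fc00:253:0:0::1
        ListenPort=51820
        PostUp=ip6tables -t nat -A POSTROUTING -s fc00:253:0:0::/64 -o br0 -j MASQUERADE
        PostDown=ip6tables -t nat -D POSTROUTING -s fc00:253:0:0::/64 -o br0 -j MASQUERADE

        [Peer]
        PublicKey=y
        AllowedIPs=fc00:253:0:0::2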
  11. The data rebuild is now finished, all drives are green / healthy again and all the data is still there. The slow rebuild was (most likely) caused by the "Folder Caching" plugin (which has a setting to not run while the mover is running but apparently no setting to not run during a Parity rebuild ...) so that mystery is solved, too. Thanks!
  12. Okay, since my last post I've ordered a ton of additional fans for the machine and a new 18TB drive. I successfully precleared that drive in another unRAID machine and then replaced the broken drive. With the additional fans installed, the drives are all running at 34°C now, and the data rebuild from parity is currently running. Thanks for all the advice I've received so far. I have two additional questions regarding the rebuild, though:

      A) During the first ~2 hours of the rebuild, it was running very, very slowly - like 20-30 MB/s - and it said it would take about a week to rebuild the drive. After 2-3 hours it shot up to more realistic speeds of 200 MB/s. Is this "normal"? Is there anything else happening at the beginning of a rebuild that causes these slow speeds? I have the machine disconnected from the network and all VMs / Dockers stopped, so nothing else is using the disks.

      B) I've noticed that during the rebuild, all the other drives are hit with tons and tons of reads (duh, that's the point of parity: using the parity bits and all the other data bits to recreate one missing drive). I'm wondering, though: since I have dual parity, why does it not leave one disk spun down? With two parity drives it could theoretically recreate two broken drives at the same time, so why not leave one of the disks alone, reducing the risk of an additional drive failing under the load of the rebuild? Why not leave Parity 2 spun down / not access it at all and just rebuild from Parity 1? Or, even better, leave the data disk that is most full (= has the most content at risk) spun down and use both parity disks instead?
  13. I have notifications enabled, and no, that disk hadn't shown SMART warnings before. I got the notification about SMART errors just today. The last time I actively checked the dashboard was a couple of days ago, and everything was fine. And the notifications are definitely working, as I always see the ones about Docker image updates. The only strange thing was a temperature warning for that disk today at 2:03am (49°C), then "returned to normal temp" at 3:23am, then "Array has 1 disk with read errors (49)" at 2:49pm, and then the panic alert "Disk 1 in error state (disk dsbl)" at 2:51pm. Other than that, no warnings. I should probably crank up the fan speeds a bit during the summer. Seagate says the drive can operate at up to 60°C, and it never reached that, so I assumed it was fine. That question was more about the drive replacement process itself, though. I will post again once I have the new drive here and have replaced the old one. Thanks for all the helpful answers so far.
  14. So is there anything special I need to watch out for? Right now the server is running but the array is stopped. The broken disk has a red "X" and the others have their green bubble in the "Main" menu. Once I have the new drive, I just un-assign the broken drive, shut down the server, swap in the new drive, go into the "Main" menu, assign the new drive to the slot that had the failed drive in it, and start the array - and UnRAID will automatically rebuild the contents onto the new drive, as described at https://wiki.unraid.net/Replacing_a_Data_Drive ? Or is there a difference in procedure between replacing a working data drive and replacing a failed one?
  15. Here's the new diagnostics. Looks like a dead disk: 415 reallocated sectors (used to be 0), 2 reported uncorrectable errors (used to be 0), and 3898 pending / offline-uncorrectable sectors. Zero CRC errors, so it's unlikely to be a bad cable - right? Time to replace the disk, I suppose ... tower-diagnostics-20220805-1754.zip