ljm42
Everything posted by ljm42

  1. I'd recommend installing the Fix Common Problems plugin; it will alert you if it finds call traces related to macvlan in your syslog. If you aren't seeing call traces with your setup, then I wouldn't worry about it.
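For reference, you can run a similar check manually from a web terminal. This is a sketch, not the plugin's actual code: the sample lines below are illustrative, and on a real server you would grep the Unraid default log path, /var/log/syslog.

```shell
# Write a few illustrative kernel log lines to a scratch file, then grep
# them the same way you would grep the real /var/log/syslog on your server.
cat > /tmp/sample_syslog <<'EOF'
Aug  1 12:00:01 Tower kernel: ------------[ cut here ]------------
Aug  1 12:00:01 Tower kernel: Call Trace:
Aug  1 12:00:01 Tower kernel:  macvlan_broadcast+0x116/0x144 [macvlan]
EOF
# On a live server: grep -iA2 'call trace' /var/log/syslog
grep -iA2 'call trace' /tmp/sample_syslog
```

If a grep like that turns up macvlan frames under a call trace in your real syslog, the workarounds discussed in this thread apply to you.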
  2. Thanks for testing, I've not seen that. On Settings -> Docker, get back to where there is only a custom network on eth0. If you see the option for macvlan/ipvlan (you may not, depending on other settings), be sure it is set to macvlan, then take a screenshot of the page. Please also take a screenshot of Settings -> Network -> eth0; "Enable bridging" should be no. Then start the array. Go to one of your Docker containers and take a screenshot of the "Network Type" options; I would expect to see "Custom: eth0" as an option, ideally selected by default. Then post all of the screenshots and the full diagnostics.zip (from Tools -> Diagnostics). Thanks!
  3. These links are fixed, thanks for reporting!
  4. We have a 6.12.4 rc release available that includes btrfs-progs 6.3.3, would appreciate your help confirming it resolves this issue:
  5. In this mode it is always set to macvlan, so the setting is hidden.
  6. The unraidbackup_id_ed25519* files are used by Unraid Connect Flash Backup when making outgoing connections to the backup server. Don't delete them, and don't post their contents as they are specific to your server.
  7. Hey everyone! We have a 6.12.4 rc release available that resolves the macvlan issue without needing to use two nics. We'd appreciate your help confirming the fixes before releasing 6.12.4 stable:
  8. Are you sure you are looking at eth1? eth1 should have a "none" option, eth0 will not.
  9. Hey folks! We have a 6.12.4 rc release available that should allow the NUT plugin to shut the server down, would appreciate your help confirming it resolves this issue:
  10. We have a 6.12.4 rc release available with significant changes to networking, would appreciate your help confirming it resolves this issue:
  11. Hey everyone! We have a 6.12.4 rc release available that resolves the macvlan issue and several corner case bugs that folks have reported. We'd appreciate your help confirming the fixes before releasing 6.12.4 stable:
  12. This release has a fix for macvlan call traces(!) along with other bug fixes, security patches, and one new feature. We'd like your feedback before releasing it as 6.12.4. This thread is perfect for quick questions or comments, but if you suspect there will be back and forth for your specific issue, please start a new topic. Be sure to include your diagnostics.zip.

Upgrade steps for this release
As always, prior to upgrading, create a backup of your USB flash device: "Main/Flash/Flash Device Settings" - click "Flash Backup".
Update all of your plugins. This is critical for the NVIDIA and Realtek plugins in particular.
If the system is currently running 6.12.0 - 6.12.2, we suggest that you stop the array at this point. If it gets stuck on "Retry unmounting shares", open a web terminal and type:
umount /var/lib/docker
The array should now stop successfully. (This issue was resolved with 6.12.3.)
Go to Tools -> Update OS and change the Branch to "Next". If the update doesn't show, click "Check for Updates".
Wait for the update to download and install.
If you have any plugins that install drivers (NVIDIA, Realtek, etc.), wait for the notification that the new version of the driver has been downloaded.
Reboot.

Add Tools/System Drivers page
This new page gives you visibility into the drivers available/in use on your system. 3rd party drivers installed by plugins (such as NVIDIA and Realtek) have an icon that links to the support page for that driver. And you can now add/modify/delete the modprobe.d config file for any driver without having to find that file on your flash drive. Thanks to @SimonF for adding this functionality!

Fix for macvlan call traces!
The big news in this test release is that we believe we have resolved the macvlan issues that have been plaguing us recently! We'd appreciate your help confirming the changes. Huge thanks to @bonienl for tracking this down!
The root of the problem is that macvlan used for custom Docker networks is unreliable when the parent interface is a bridge (like br0); it works best on a physical interface (like eth0). We believe this to be a longstanding kernel issue and have posted a bug report.

If you are getting call traces related to macvlan, as a first step we'd recommend navigating to Settings/Docker, switching to advanced view, and changing the "Docker custom network type" from macvlan to ipvlan. This is the default configuration that Unraid has shipped with since version 6.11.5 and should work for most systems.

However, some users have reported issues with port forwarding from certain routers (Fritzbox) and reduced functionality with advanced network management tools (Ubiquiti) when in ipvlan mode. For those users, in this rc we have a new method that reworks networking to avoid this. Simply tweak a few settings and your Docker containers, VMs, and WireGuard tunnels will automatically adjust to use them:
Settings -> Network Settings -> eth0 -> Enable Bridging = No
Settings -> Docker -> Host access to custom networks = Enabled
Note: if you previously used the two-nic method for Docker segregation, you'll also want to revert that:
Settings -> Docker -> custom network on interface eth0 (i.e. make sure eth0 is configured for the custom network, not eth1)
When you start the array, the host, VMs, and Docker containers will all be able to communicate, and there should be no more call traces!

Troubleshooting
If your Docker containers with custom IPs aren't starting, edit them and change the "Network type" to "Custom: eth0". We attempted to do this automatically, but depending on how things were customized you might need to do it manually.
If your VMs are having network issues, edit them and set the Network Source to "vhost0". Also, ensure there is a MAC address assigned.
If your WireGuard tunnels won't start, make a dummy change to each tunnel and save.
If you are having issues port forwarding to Docker containers (particularly on a Fritzbox), delete and recreate the port forward in your router.

To get a little more technical... After upgrading to this release, if bridging remains enabled on eth0 then everything works as it used to. You can attempt to work around the call traces by disabling the custom Docker network, using ipvlan instead of macvlan, or using the two-nic Docker segmentation method with containers on eth1.

Starting with this release, when you disable bridging on eth0 we create a new macvtap network for Docker containers and VMs to use. It has a parent of eth0 instead of br0, which is how we avoid the call traces. A side benefit is that macvtap networks are reported to be faster than bridged networks, so you may see speed improvements when communicating with Docker containers and VMs.

FYI: With bridging disabled for the main interface (eth0), the Docker custom network type will be set to macvlan and hidden, unless there are other interfaces on your system that have bridging enabled, in which case the legacy ipvlan option is available. To use the new fix being discussed here you'll want to keep that set to macvlan.

Other Bug Fixes
This release resolves corner cases in networking, Libvirt, Docker, WireGuard, NTP, NGINX, NFS, and RPC. It includes an improvement to the VM Manager so it retains the VNC password during an update, and a change to the shutdown process to allow the NUT plugin to shut the system down. A small change is that packages in /boot/extra are now treated more like packages installed by plugins, and the installation is logged to syslog rather than to the console.
Known Issues
Please see this page for information on crashes related to the i915 driver: https://docs.unraid.net/unraid-os/release-notes/6.12.0#known-issues
If Docker containers have issues starting after a while, and you are running Plex, go to your Plex Docker container settings, switch to advanced view, and add this to the Extra Params: --no-healthcheck
This thread is perfect for quick questions or comments, but if you suspect there will be back and forth for your specific issue, please start a new topic. Be sure to include your diagnostics.zip.

Rolling Back
Before rolling back to an earlier version, it is important to first change this back to yes:
Settings -> Network Settings -> eth0 -> Enable Bridging = Yes
Then start the array (along with the Docker and VM services) to update your Docker containers, VMs, and WireGuard tunnels back to their previous settings, which should work in older releases. Once in the older version, confirm these settings are correct for your setup:
Settings -> Docker -> Host access to custom networks
Settings -> Docker -> Docker custom network type

Changes vs. 6.12.3

Networking
New vhost network for both containers and VMs.
When bridging enabled:
Create shim interface which is attached to bridge interface
Copy parent address to shim interface with lower metric to allow host access
More specific routes are no longer created
When bridging disabled:
Copy parent address to vhost interface with lower metric to allow host access

Bug fixes and improvements
create_network_ini: fixed dhcp hook; improved IP address collection
diagnostics: add previous Unraid version to diagnostics version txt file; add ntp.conf, sshd.config, and servers.conf (with anonymized URLs); anonymize IP addresses
libvirt, nginx, nfs, rpc: changed running process detection
nfsclient: start negotiation with v4, turn off atime modification
rc.6: leave /usr and /lib mounted during shutdown
rc.docker: create same IPv6 network for containers and services; add more logging when stopping dockerd
rc.inet1: do not use promiscuous mode for bridging; add persistent option to dhcpcd
rc.library: interfaces always listed in the same order; fix show ipv6
rc.libvirt: remove 'itco' watchdog from XML if present
rc.local: annotate auto-generated /etc/modprobe.d/zfs.conf file
rc.services: add logging; exclude WireGuard "VPN tunneled access for docker" tunnels from services; exclude WireGuard tunnels for ntp (code optimization)
webgui: update monitor_nchan
webgui: Feedback: refactor feedback script
webgui: Shares and Pools: show "Minimum free space" as absolute number instead of percentage
webgui: Pools: minimum free space: only enabled when array is stopped
webgui: VM Manager: retain VNC password during update
webgui: VM Manager: remove downloaded '.vv' files
webgui: Dashboard: hide ZFS bar when no ZFS is used
webgui: Network settings: fix DNS settings sometimes disappearing
webgui: Translations: trim key and value in language files
webgui: add System Drivers page

Linux kernel
version 6.1.46 (CVE-2023-20593)
CONFIG_SCSI_MPI3MR: Broadcom MPI3 Storage Controller Device Driver

Base Distro
btrfs-progs: 6.3.3
curl: version 8.2.0 (CVE-2023-32001)
kernel-firmware: version 20230724_59fbffa
krb5: version 1.19.2 (CVE-2023-36054)
openssh: version 9.3p2 (CVE-2023-38408)
openssl: version 1.1.1v (CVE-2023-3817 CVE-2023-3446)
samba: version 4.17.10 (CVE-2023-3496 CVE-2022-2127 CVE-2023-34968 CVE-2023-3496 CVE-2023-3347)
  13. In fact, it is going very well! I expect to announce a prerelease version Soon (TM)
  14. If you assign an IP to a container, it is just for that container, so there is no risk of port overlap. Any containers that don't have a custom IP will share the host's IP, and you need to make sure there is no port overlap among those.
  15. The webgui needs to work on a variety of screen sizes and the search bar takes up a significant amount of space. There are no additional mouse clicks with this method, you click once on the icon and then start typing. Or you can keep your hands off the mouse entirely and press CMD/CTRL-K. I don't see this changing back.
  16. The system will be signed out of Unraid Connect after one week of not connecting, I believe. So if the server was off, the Internet was down, or the api was stopped for one week, it will be signed out. However, since you are using Cloudflare to get a tunnel to the server, Unraid Connect has no bearing on your ability to connect to the server. If the tunnel is up and accessible like you say, then you can use a browser to reach one of these URLs: http://server.dynamic.domainname.com (or http://server.dynamic.domainname.com:HTTPPORT ) https://server.dynamic.domainname.com (or https://server.dynamic.domainname.com:HTTPSPORT ) If that doesn't work, then the Cloudflare tunnel isn't set up properly. It has nothing to do with Unraid Connect.
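A quick way to test reachability from another machine is a one-line curl check. This is a sketch: the hostname is the placeholder from the URLs above (substitute your own), and append :HTTPSPORT if you use a non-standard port.

```shell
HOST=https://server.dynamic.domainname.com  # placeholder hostname from above
if curl -sI --max-time 5 "$HOST" >/dev/null 2>&1; then
    echo "reachable: $HOST"        # something answered an HTTP request
else
    echo "not reachable: $HOST"    # DNS, tunnel, or server problem
fi
```

If it reports not reachable, the problem is in the Cloudflare tunnel or DNS, not in Unraid Connect.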
  17. You would need to contact your ISP to get a new WAN IP. Be sure to set up a custom/random WAN Port for the server: https://docs.unraid.net/connect/remote-access Or disable the port forward if you don't want to use it any more.
  18. I'm not super clear on what the problem is, but if you can ping the server then as long as you know the http or https port you should be able to access the login screen using your browser.
  19. I am glad your issue appears to be resolved! This thread was for a very specific issue, so if your issue comes back, please start a new thread with all the details, as that would mean it is not similar enough to share the solution being discussed in this thread.
  20. If there are errors, they will be logged in /var/log/notify_Discord. You can view the log with: cat /var/log/notify_Discord. If you post the info here, I may be able to help parse it.
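If you prefer a check that also handles the case where the file doesn't exist yet (it may not, if no errors have occurred), something like this works. A sketch using the log path from above:

```shell
LOG=/var/log/notify_Discord
if [ -f "$LOG" ]; then
    tail -n 20 "$LOG"       # show only the most recent errors
else
    echo "no $LOG yet - no errors have been logged"
fi
```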
  21. This sounds like something else is going on; I'd recommend starting a new thread with details of your issue.
  22. Please upload your diagnostics.zip (from Tools -> Diagnostics). Any NIC passed through to a VM would not be visible to FCP, so there must be something else triggering the notification.
  23. Something is holding your zpool open, but it doesn't appear to be the docker.img:
Jul 27 15:15:42 Tower emhttpd: Unmounting disks...
Jul 27 15:15:42 Tower emhttpd: shcmd (154432): /usr/sbin/zpool export cache
Jul 27 15:15:42 Tower root: cannot unmount '/mnt/cache/system': pool or dataset is busy
I'd suggest starting your own thread; I don't think you are hitting the issue this thread is about, and there are other things that are more concerning:
Jul 26 20:06:09 Tower kernel: critical medium error, dev sdh, sector 3317704632 op 0x0:(READ) flags 0x0 phys_seg 72 prio class 2
Jul 26 20:06:09 Tower kernel: md: disk0 read error, sector=3317704568
I'm not an expert on that, but hopefully someone else can lend a hand.
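To see what is holding the dataset open before the export, you can scan /proc for open file handles under the mount point. This is only a sketch, not an official Unraid tool; the default path comes from the "pool or dataset is busy" log line above, and you can pass a different path as the first argument.

```shell
#!/bin/sh
# List PIDs holding open files under a directory (no lsof required).
DIR="${1:-/mnt/cache/system}"   # busy dataset from the log above
for pid in /proc/[0-9]*; do
    for fd in "$pid"/fd/*; do
        # Each fd is a symlink to the open file; skip unreadable entries.
        target=$(readlink "$fd" 2>/dev/null) || continue
        case "$target" in
            "$DIR"|"$DIR"/*) echo "pid ${pid##*/}: $target" ;;
        esac
    done
done
true
```

Any PID it prints can be checked with `ps -p <pid>` to see which service or container needs to be stopped first.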
  24. See if this thread helps: https://forums.unraid.net/topic/79981-solved-unraid-wont-boot-from-usb-after-reboot-restart/#comment-991877
  25. Interesting. I don't see a way to split this conversation into a new bug report, so please create a separate bug report for this. The more info you can provide the better. Such as, what is the make/model of the UPS, what was the last version that shut it down, etc.