• Unraid OS version 6.12.0-rc6 available


    limetech

    Please refer to the 6.12.0-rc1 topic for a general overview.

     

    For exclusive shares we made an implementation change.  We found issues using bind-mounts on ZFS pools that contain nested child datasets.  To overcome this problem, symlinks are created in /mnt/user instead.  For example, for an exclusive share named "myshare" which exists only on "mypool", this symlink is generated:

     

    /mnt/user/myshare -> /mnt/mypool/myshare

     

    This implementation is actually a little cleaner and provides the same benefit of faster throughput.
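
    You can confirm this from a terminal; for example, using the share and pool names above (output illustrative):

    ls -ld /mnt/user/myshare
    # lrwxrwxrwx 1 root root 19 May 17 12:00 /mnt/user/myshare -> /mnt/mypool/myshare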

     


    Version 6.12.0-rc6 2023-05-17

    Changes vs. 6.12.0-rc5

    Use symlinks instead of bind-mounts for exclusive shares.

    Fix share rename when share name contains a space and is located on a zfs pool.

    Share Edit: allow 1 letter names

    Network improvements:

    • rc.docker - suppress ipv6 link-local address for docker0 and shim interfaces when set as ipv4 only
    • rc.avahidaemon - let service listen on regular interfaces only which have an IP address, this includes the primary interface + set ipv4 / ipv6 support
    • rc.samba - let smb, nmb service listen on regular interfaces only which have an IP address, this includes the primary interface + set ipv4 / ipv6 support (also for wsdd2)
    • rc.ssh - listen on regular interfaces only which have an IP address, this includes the primary interface + set ipv4 / ipv6 support
    • rc.inet1 - add iptables processing to bridge interfaces to make them operate similarly to macvlan interfaces
    • create_network_ini - restart smb when network changes are done

    VMs: fixed notification subject

    TRIM: fix operation when ZFS is not active

    Network settings: fix bug in description field

    bash_completion: version 2.11

    docker: version 23.0.6

    Use 'zfs set atime=off' upon root dataset mount; child datasets inherit this setting.

    Continue format if blkdiscard command fails.

    Add Pushbits Agent for Matrix/Synapse integration

    Share Edit: warn when invalid zfs name is used

    Lock / unlock button: switch green / red color

    • Green is normal state (page is locked)
    • Red is attention state (page is unlocked)

    Linux kernel

    • version: 6.1.29

    Version 6.12.0 (Consolidated)

    Upgrade notes

    General

    If you created any zpools using 6.12.0-beta5, please Erase those pools and recreate them.

    If you revert back from 6.12 to 6.11.5 or earlier, you have to force update all your Docker containers and start them manually after downgrading. This is necessary because of the underlying change to cgroup v2 in 6.12.0-rc1.

    Upon boot, if the PCI devices specified in the 'config/vfio-pci.cfg' file do not all bind properly, VM Autostart is prevented. You may still start individual VMs. This is to prevent an Unraid host crash if hardware PCI IDs changed because of a kernel update or physical hardware change. To restore VM autostart, examine '/var/log/vfio-pci-errors', remove offending PCI IDs from the 'config/vfio-pci.cfg' file, and reboot.
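
    For example, the recovery steps from a web terminal might look like this (which PCI IDs to remove depends on what the log reports):

    cat /var/log/vfio-pci-errors
    nano /boot/config/vfio-pci.cfg    # remove the offending PCI IDs
    reboot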

    Linux Multi-Gen LRU is a relatively new feature now included but not enabled by default. You can enable by adding this line to your 'config/go' file:

    echo y > /sys/kernel/mm/lru_gen/enabled

    If you revert back from 6.12 to 6.11.5 or earlier you may need to remove that line.
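
    To check the current state of the feature, read the same sysfs node:

    cat /sys/kernel/mm/lru_gen/enabled
    # 0x0000 = disabled; a non-zero bitmask such as 0x0007 = enabled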

    Obsolete/Broken Plugins

    There are a few plugins which are known to be incompatible with Unraid 6.12, and upon boot will not be installed. You will get a notification for each plugin that is affected, and can review the list by going to Plugins/Plugin File Install Errors.

    • disklocation-master version 2022.06.18 (Disk Location by olehj, breaks the dashboard)
    • plexstreams version 2022.08.31 (Plex Streams by dorgan, breaks the dashboard)
    • corsairpsu version 2021.10.05 (Corsair PSU Statistics by Fma965, breaks the dashboard)
    • gpustat version 2022.11.30a (GPU Statistics by b3rs3rk, breaks the dashboard)
    • ipmi version 2021.01.08 (IPMI Tools by dmacias72, breaks the dashboard)
    • nut version 2022.03.20 (NUT - Network UPS Tools by dmacias72, breaks the dashboard)
    • NerdPack version 2021.08.11 (Nerd Tools by dmacias72)
    • upnp-monitor version 2020.01.04c (UPnP Monitor by ljm42, not PHP 8 compatible)
    • ZFS-companion version 2021.08.24 (ZFS-Companion Monitor by campusantu, breaks the dashboard)

    Some of the affected plugins have been taken over by different developers; we recommend that you go to the Apps page and search for replacements. Please ask plugin-specific questions in the support thread for that plugin.

    Known issues

    • We are aware that some 11th gen Intel Rocket Lake systems are experiencing crashes related to the i915 iGPU. If your Rocket Lake system crashes under Unraid 6.12.0, open a web terminal and run:

      echo "options i915 enable_dc=0" >> /boot/config/modprobe.d/i915.conf

      then reboot.

      Using this parameter will result in higher power use, but it may resolve this issue for these GPUs. When Unraid 6.13 is released it will have a newer Linux kernel with better i915 support; we anticipate that at that point you can revert this tweak with:

      rm /boot/config/modprobe.d/i915.conf

    • If "Docker custom network type" is set to "macvlan" you may get call traces and crashes on 6.12 even if you did not on 6.11. If so, we recommend changing to "ipvlan", or if you have two network cards you can avoid the issue completely: https://forums.unraid.net/topic/137048-guide-how-to-solve-macvlan-and-ipvlan-issues-with-containers-on-a-custom-network/

    ZFS Pools

    For a good overview of ZFS, see https://arstechnica.com/information-technology/2020/05/zfs-101-understanding-zfs-storage-and-performance/

    New in this release is the ability to create a ZFS file system in a user-defined pool. In addition you may format any data device in the unRAID array with a single-device ZFS file system.

    We are splitting full ZFS implementation across two Unraid OS releases. Initial support in this release includes:

    • Support raid0, mirror, raidz1, raidz2 and raidz3 root profiles. Up to 4-way mirror in a mirror vdev. Multiple vdev groups.
    • Support removing a single device: if the device is still present in the server, 'wipefs' is used to clear the partition table.
    • Support replacing single missing device with a new device of same or larger size.
    • Support scheduled trimming of ZFS pools.
    • Support pool rename.
    • Pool names must begin with a lowercase letter and contain only lowercase letters, digits, underscores, and dashes. Pool names must not end with a digit. (A quick validity check is sketched after this list.)
    • Non-root vdevs cannot be configured in this release; however, they can be imported. Note: imported hybrid pools may not be expanded in this release.
    • Pools created on other systems may or may not import depending on how the pool was created. A future update will permit importing pools from any system.
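
    As a rough sanity check of the naming rules mentioned in the list above (a sketch, not the webGUI's actual validation):

    # must start with a lowercase letter, contain only [a-z0-9_-], and not end with a digit
    echo "mypool" | grep -qE '^[a-z]([a-z0-9_-]*[a-z_-])?$' && echo valid || echo invalid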

    A ZFS pool has three variables:

    • profile - the root data organization: raid0, mirror (up to 4-way), raidz1, raidz2, raidz3
    • width - the number of devices per root vdev
    • groups - the number of root vdevs in the pool

    At the time of ZFS pool creation, the webGUI will present all topology options based on the number of devices assigned to the pool.

    Special treatment for root single-vdev mirrors:

    • A single-device ZFS pool can be converted to a multiple-device mirror by adding up to 3 additional devices in one operation.
    • A 2-device mirror can be increased to a 3-device mirror by adding a single device; similarly, a 3-device mirror can be increased to a 4-device mirror by adding a single device.

    To add an additional root vdev, you must assign 'width' number of new devices to the pool at the same time. The new vdev will be created with the same 'profile' as the existing vdevs. Additional flexibility in adding/expanding vdevs will be provided in a future update.

    Pools created with the steini84 plugin can be imported as follows: First create a new pool with the number of slots corresponding to the number of devices in the pool to be imported. Next assign all the devices to the new pool. Upon array Start the pool should be recognized, though certain zpool topologies may not be recognized (please report).

    Mixed topologies are not supported. For example, a pool with both a mirror root vdev and a raidz root vdev is not recognized.

    Autotrim can be configured as on or off (except for single-device ZFS volumes in the unRAID array).

    Compression can be configured as on or off, where on selects lz4. A future update will permit specifying other algorithms/levels.
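
    To confirm the resulting property on a pool (pool name illustrative, output abbreviated):

    zfs get compression mypool
    # NAME    PROPERTY     VALUE  SOURCE
    # mypool  compression  lz4    local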

    When creating a new ZFS pool you may choose zfs - encrypted, which, like other encrypted volumes, applies device-level encryption via LUKS. ZFS native encryption is not supported at this time.

    During system boot, the file /etc/modprobe.d/zfs.conf is auto-generated to limit the ZFS ARC to 1/8 of installed memory. This can be overridden if necessary by creating a custom 'config/modprobe.d/zfs.conf' file. A future update will include the ability to configure the ARC via the webGUI, including auto-adjustment according to memory pressure, e.g., VM start/stop.
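
    A minimal custom override might look like this, e.g. to cap the ARC at 8 GiB; zfs_arc_max is the standard OpenZFS module parameter and takes a value in bytes:

    # contents of config/modprobe.d/zfs.conf on the flash device (hypothetical example)
    options zfs zfs_arc_max=8589934592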

    Top-level user shares in a ZFS pool are created as datasets instead of ordinary directories.

    Share storage conceptual change

    New in this release is a conceptual change in the way storage is assigned to shares. The old concept of main storage being the unRAID array with an optional "Cache" is confusing to many new users, especially since cache has a specific meaning in ZFS.

    As outlined below, we have introduced the concept of an exclusive share. This is simply a share where all the data exists in a single named pool. In this case the FUSE-based User Share file system returns a symlink to the actual share directory in the pool. All operations within the share, including data transfer, therefore bypass FUSE, resulting in greater performance. This feature is primarily aimed at maximizing I/O for large, fast ZFS pools accessed via a fast network.

    This is a front-end change only; existing shares will be presented with this new structure automatically upon upgrading, and will automatically revert to the previous style if you revert to an earlier version.

    Storage options for a share are specified using two inputs:

    • Primary storage
    • Secondary storage

    Primary storage is where new files/folders are created. If Primary storage is below the Minimum Free Space setting then new files and folders will be created in Secondary storage, if configured.

    Each input presents a drop-down which lists "array", "none", and each named pool as a selection according to some configuration rules:

    For the Primary storage drop-down:

    • the "none" option is omitted, ie, Primary storage must be selected
    • any named pool can be selected
    • "Array" can be selected (meaning the unRAID array)

    For the Secondary storage drop-down:

    • the "none" option is included, ie, Secondary storage is optional
    • if Primary storage is a pool name, then the only options are "none" and "Array". In the future other pools will be listed here as well.
    • if Primary storage is "Array", then only "none" appears as an option

    When "Array" is selected for either Primary or Secondary storage, a set of additional settings slide in:

    • Allocation method
    • Included disk(s)
    • Excluded disk(s)
    • Split level

    When a btrfs named pool is selected for either Primary or Secondary storage, an additional setting slides in:

    • Enable Copy-on-write

    When a ZFS named pool is selected for either Primary or Secondary storage, there are no additional settings at this time but there could be some in the future. For example, since a share is created as a ZFS dataset, it could have a different compression setting than the parent pool if we need to implement this.

    Mover action

    When there is Secondary storage configured for a share the "Mover action" setting becomes enabled, letting the user select the transfer direction of the mover:

    • Primary to Secondary (default)
    • Secondary to Primary

    Exclusive shares

    If Primary storage for a share is a pool and Secondary storage is set to "none", then a symlink is returned in /mnt/user/ pointing directly to the pool share directory. (An additional check is made to ensure the share also does not exist on any other volumes.) There is a new status flag, 'Exclusive access' which is set to 'Yes' when a symlink is in place; and, 'No' otherwise. Exclusive shares are also indicated on the Shares page.

    The advantage of setting up symlinks is that I/O bypasses the FUSE-based user share file system (shfs), which can significantly increase performance.

    There are some restrictions:

    • Both the share Min Free Space and pool Min Free Space settings are ignored when creating new files on an exclusive share.
    • If there are any open files, mounted loopback images, or attached VM vdisk images on an exclusive share, no settings for the share can be changed. As a workaround, create a directory for the share on another volume and restart the array to disable exclusive access, then make the necessary changes to the share settings (a sketch follows this list).
    • If the share directory is manually created on another volume, files are not visible in the share until after array restart, upon which the share is no longer exclusive.
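
    A sketch of the workaround above, assuming a share named "myshare" and a second volume mounted at /mnt/disk1 (both names illustrative):

    mkdir /mnt/disk1/myshare    # share now exists on more than one volume
    # stop and restart the array from the webGUI; the share is no longer
    # exclusive and its settings can be changed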

    Clean Up button

    Appearing on the Shares page is a button called CLEAN UP. When enabled, it indicates there are config/share/*.cfg files for shares that no longer exist; clicking this button will remove those files.

    Other Improvements

    btrfs pools

    Autotrim can be configured as on or off when used in a pool.

    Compression can be configured as on or off, where on selects zstd. A future update will permit specifying other algorithms/levels.

    xfs

    Autotrim can be configured as on or off when used as a single-slot pool.

    Docker

    It is possible to configure the Docker data-root to be placed in a directory on a ZFS storage pool. In this case Docker will use the 'zfs' storage driver. This driver creates a separate dataset for each image layer. Because of this, here is our recommendation for setting up Docker using a directory:

    First, create a docker user share configured as follows:

    • Share name: docker
    • Use cache pool: Only
    • Select cache pool: name of your ZFS pool

    Next, on Docker settings page:

    • Enable docker: Yes
    • Docker data-root: directory
    • Docker directory: /mnt/user/docker

    If you ever need to delete the docker persistent state, then bring up the Docker settings page and set Enable docker to No and click Apply. After docker has shut down click the Delete directory checkbox and then click Delete. This will result in deleting not only the various files and directories, but also all layers stored as datasets.

    Before enabling Docker again, be sure to first re-create the docker share as described above.
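
    Once Docker is up and running with this configuration, you can verify that the 'zfs' storage driver is in use and see the per-layer datasets (pool name illustrative; exact dataset names will vary):

    docker info --format '{{.Driver}}'
    # zfs
    zfs list -r -o name mypool/docker | head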

    Other changes:

    • CreateDocker: changed label Docker Hub URL to Registry URL because of GHCR and other new container registries becoming more and more popular.
    • Honor user setting of stop time-out.
    • Accept images in OCI format.
    • Add option to disable readmore-js on container table
    • Fix: Docker Containers console will not use bash if selected

    VM Manager

    If you enable copy/paste for virtual consoles, you need to install additional software on the client in addition to the QEMU guest agent if that has been installed. spice-vdagent is available for both Windows and Linux. Note: the copy/paste function will not work with the web SPICE viewer; you need to use virt-viewer.

    Other changes:

    • Add Serial option to vdisk.
    • Spice bug fix for users with non-standard GUI ports defined.
    • OVMF for QEMU: version stable202302
    • Fix for bus text.
    • Enable copy paste option for virtual consoles
    • Update Memory Backup processing for Virtiofs.
    • Fix lockup when no VMs are present
    • Add support for rtl8139 network model.
    • fix translation omission
    • added lock/unlock for sortable items
    • Fix for Spice Mouse if Copy paste enabled.
    • let page load even when PCI devices appear missing or are misassigned
    • Make remote viewer and web console options selectable.
    • Option to download .vv file and start remote viewer if browser is set to open .vv files when downloaded.
    • Add remote viewer console support
    • Remove <lock posix='on' flock='on'/>
    • fix VM marked as Autostart not starting following manual array Start
    • Fix for Max memory > 1TB

    Dashboard

    The webGUI Dashboard has been redesigned and it is now possible to move elements (tiles) up and down and between columns. This allows the user to organize the tiles in any way they desire. There is a small lock icon on the menu bar which must be clicked to enable this function.

    Note: The lock icon also appears on the Docker and VM pages and must be clicked to rearrange the startup order.

    Release bz file differences

    Unraid OS consists of a set of 5 so-called bz files in the root of the USB Flash boot device:

    • bzimage - the Linux kernel
    • bzroot - the root file system, sans console desktop
    • bzroot-gui - additional files needed for console desktop
    • bzmodules - modules (drivers) associated with the Linux kernel
    • bzfirmware - device firmware required by certain modules

    Starting with 6.12 release, the content of these files has been rearranged:

    • bzimage - the Linux kernel (same as before)
    • bzroot - the root file system excluding the /usr directory tree
    • bzroot-gui - a single file which auto-starts the console desktop (for compatibility)
    • bzmodules - modules (drivers) associated with the Linux kernel and device firmware required by certain modules
    • bzfirmware - the /usr directory and all files contained therein, including console desktop

    The result of this change is a faster boot process and nearly 1G of RAM freed up. It also permits us to add more "stuff" to Unraid OS in the future without requiring more RAM. Finally, when booted in non-GUI mode, the desktop can be started by logging in at the console and typing the 'slim' command.

    The files bzfirmware and bzmodules are squashfs images mounted using overlayfs at /usr and /lib respectively. Since these files are loopback-mounted, care must be taken if you ever want to perform a manual update.
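
    For example, the loopback mounts can be seen on a running system (output illustrative):

    mount | grep -E 'bzfirmware|bzmodules'
    losetup -a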

    What is a manual update? This is a method of updating Unraid OS on your USB flash boot device without using the Tools/Update OS function. Typically one would either:

    • open a Terminal window, wget the release zip file, unzip the release, and then 'cp' the bz files to the root of the boot device.

    or

    • export the 'flash' share on your network and drag the bz files from a PC directly to the flash.

    Starting with 6.12, either method can fail because the bzfirmware file will be overwritten while it is still mounted - not good.

    To get around this, you must first create a temp directory on the flash device and then 'mv' (or drag) all the bz files to this temp directory. Now you can copy the new bz files in place and reboot, as sketched below.
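
    A sketch of that sequence from a Terminal window (the release path is illustrative):

    mkdir /boot/previous_bz
    mv /boot/bz* /boot/previous_bz/     # mv renames in place, so the loopback mounts stay valid
    cp /path/to/extracted-release/bz* /boot/
    reboot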

    Change Log

    Base Distro

    • aaa_glibc-solibs: version 2.37
    • adwaita-icon-theme: version 43
    • at-spi2-core: version 2.46.0
    • bash: version 5.2.015
    • bash_completion: version 2.11
    • bind: version 9.18.12
    • btrfs-progs: version 6.2.1
    • ca-certificates: version 20221205
    • cryptsetup: version 2.6.1
    • curl: version 7.88.1
    • dbus: version 1.14.6
    • diffutils: version 3.9
    • dnsmasq: version 2.89
    • docker: version 23.0.6
    • e2fsprogs: version 1.47.0
    • encodings: version 1.0.7
    • file: version 5.44
    • firefox: version 111.0 (AppImage)
    • freetype: version 2.13.0
    • fuse3: version 3.12.0
    • gawk: version 5.2.1
    • git: version 2.39.2
    • glib2: version 2.74.6
    • glibc: version 2.37
    • glibc-zoneinfo: version 2022g
    • gnutls: version 3.7.9
    • gptfdisk: version 1.0.9
    • gtk+3: version 3.24.37
    • harfbuzz: version 7.1.0
    • htop: version 3.2.2
    • iproute2: version 6.2.0
    • iptables: version 1.8.9
    • iputils: version 20221126
    • less: version 612
    • libICE: version 1.1.1
    • libSM: version 1.2.4
    • libX11: version 1.8.4
    • libXau: version 1.0.11
    • libXcomposite: version 0.4.6
    • libXdamage: version 1.1.6
    • libXdmcp: version 1.1.4
    • libXpm: version 3.5.15
    • libXrandr: version 1.5.3
    • libXres: version 1.2.2
    • libXxf86dga: version 1.1.6
    • libarchive: version 3.6.2
    • libdrm: version 2.4.115
    • libfontenc: version 1.1.7
    • libglvnd: version 1.6.0
    • libjpeg-turbo: version 2.1.5.1
    • libpcap: version 1.10.3
    • libpng: version 1.6.39
    • libpsl: version 0.21.2
    • liburcu: version 0.14.0
    • libwebp: version 1.3.0
    • libxkbcommon: version 1.5.0
    • libxkbfile: version 1.1.2
    • libxshmfence: version 1.3.2
    • lmdb: version 0.9.30
    • logrotate: version 3.21.0
    • lsof: version 4.98.0
    • lz4: version 1.9.4
    • lzlib: version 1.13
    • mc: version 4.8.29
    • mcelog: version 191
    • mpfr: version 4.2.0
    • nano: version 7.2
    • ncurses: version 6.4
    • nginx: version 1.23.3
    • nghttp2: version 1.52.0
    • openssh: version 9.2p1
    • openssl: version 1.1.1t
    • openssl-solibs: version 1.1.1t
    • openzfs: version 2.1.11
    • pango: version 1.50.14
    • pciutils: version 3.9.0
    • pcre2: version 10.42
    • php: version 8.2.4
    • php-libvirt: version 0.5.7
    • php-markdown: version 2.0.0
    • samba: version 4.17.7
    • sqlite: version 3.41.0
    • sudo: version 1.9.13p2
    • sysstat: version 12.7.2
    • tdb: version 1.4.8
    • tevent: version 0.14.1
    • traceroute: version 2.1.2
    • transset: version 1.0.3
    • tree: version 2.1.0
    • usbutils: version 015
    • xcb-util: version 0.4.1
    • xdriinfo: version 1.0.7
    • xf86-video-vesa: version 2.6.0
    • xfsprogs: version 6.1.1
    • xhost: version 1.0.9
    • xinit: version 1.4.2
    • xkbcomp: version 1.4.6
    • xkeyboard-config: version 2.38
    • xorg-server: version 21.1.7
    • xprop: version 1.2.6
    • xrandr: version 1.5.2
    • xset: version 1.2.5
    • xterm: version 379
    • xz: version 5.4.1
    • zstd: version 1.5.4

    Linux kernel

    • version 6.1.29
    • md/unraid: version 2.9.27
    • CONFIG_FS_DAX: File system based Direct Access (DAX) support
    • CONFIG_VIRTIO_FS: Virtio Filesystem
    • CONFIG_ZONE_DEVICE: Device memory (pmem, HMM, etc...) hotplug support
    • CONFIG_USBIP_HOST: Host driver
    • CONFIG_INTEL_MEI: Intel Management Engine Interface
    • CONFIG_INTEL_MEI_ME: ME Enabled Intel Chipsets
    • CONFIG_INTEL_MEI_GSC: Intel MEI GSC embedded device
    • CONFIG_INTEL_MEI_PXP: Intel PXP services of ME Interface
    • CONFIG_INTEL_MEI_HDCP: Intel HDCP2.2 services of ME Interface
    • CONFIG_INTEL_PMC_CORE: Intel PMC Core driver
    • CONFIG_DRM_I915_PXP: Enable Intel PXP support
    • CONFIG_SCSI_FC_ATTRS: FiberChannel Transport Attributes
    • CONFIG_FUSION_SPI: Fusion MPT ScsiHost drivers for SPI
    • CONFIG_FUSION_FC: Fusion MPT ScsiHost drivers for FC
    • CONFIG_FUSION_CTL: Fusion MPT misc device (ioctl) driver
    • CONFIG_FUSION_LOGGING: Fusion MPT logging facility
    • CONFIG_X86_AMD_PSTATE: AMD Processor P-State driver
    • CONFIG_LRU_GEN: Multi-Gen LRU
    • CONFIG_SERIAL_8250_NR_UARTS=32: Maximum number of 8250/16550 serial ports
    • CONFIG_SERIAL_8250_RUNTIME_UARTS=4: Number of 8250/16550 serial ports to register at runtime

    Misc

    • avahi: enable/disable IPv4/IPv6 based on network settings and restrict avahidaemon to primary interface.
    • cgroup2 now the default
    • loopback images no longer mounted using directio
    • newperms script restricted to operate on /mnt/ only.
    • upgradepkg patched to prevent replacing existing package with older version.
    • current PCI bus/device information saved in file '/boot/previous/hardware' upon Unraid OS upgrade.
    • NFS: enable UDP transport
    • emhttp: fix cache pool (null) syslog strings
    • emhttp: fix cache pool display wrong device size for selected replacement device
    • networking: fix nginx recognizing IP address from slow dhcp servers
    • mover: fix: improper handling of symlinks
    • mover: fix: Mover logging syslog entries format different from previous releases
    • plugin: Display Run command retval in error message
    • shfs: ignore top-level hidden directories (names beginning with '.')
    • terminal: OpenTerminal: change termination signal (hard stop)
    • upgrade Unraid OS: check for earlier upgrade without reboot
    • webgui: support PHP8, increase PHP max memory from 128M to 256M
    • webgui: ManagementAccess: Disable Provision/Renew/Upgrade buttons when no IP on eth0
    • webgui: ManagementAccess: Support wireguard local IP addresses in combination with myservers.unraid.net SSL cert
    • webgui: Move "view" icon on Main and Shares page to the left
    • webgui: Dashboard: fix regression error in "select case"
    • webgui: Dashboard: make items moveable between columns
    • webgui: Keep dismissed banners hidden for a month
    • webgui: Dashboard: API for adding custom tiles
    • webgui: Dashboard: rearrange processor information
    • webgui: Dashboard: rearrange UPS info
    • webgui: Dashboard: rearrange memory info
    • webgui: Dashboard: VPN header rearrangement
    • webgui: Dashboard: header rearrangements
    • webgui: Add jqueryUI touch punch for mobile devices
    • webgui: Changed ID to CLASS for elements occurring more than once
    • webgui: Make header in white and black themes scrollable
      • When more items are present than screen space, the user can now scroll through them (previously these items were invisible)
    • webgui: Dashboard and Docker: introduce lock button for sortable items
      • By default sortable items are locked, which allows mobile devices to scroll the page. Upon request items can be made sortable
    • webgui: Users: add icon to title bar
    • webgui: Tools: new function -> PHP Settings
      • View PHP info
      • Configure error reporting
      • Open LOG to see errors in real-time
    • webgui: System info: fix reading inactive ports
    • webgui: Plugin: Include the actual command, being executed
    • webgui: System info: cache enhancement
    • webgui: System info: memory enhancement
    • webgui: DeviceInfo: disable buttons when erase operation is running
    • webgui: Docker: filetree corrections
    • webgui: Fixed: Dashboard: show heat alarm per pool
    • webgui: Notifications: revised operation
      • Autoclose new notifications after 3 seconds
      • Fix notifications reappearing after closure
    • webgui: DeviceList: add FS type in offline state
    • webgui: Add notification agent for Bark
    • webgui: Main: hide browse icon when disk is not mounted
    • webgui: Diagnostics: add additional btrfs and zfs info
    • webgui: Dashboard: add ZFS memory usage
    • webgui: Revised New Permissions
      • Select either disks or shares (not both)
    • webgui: Add testparm to diagnostics
    • webgui: Support new UD reserved mount point of /mnt/addons
    • webgui: fix issue displaying Attributes when temperature display set to Fahrenheit
    • webgui: Dashboard changes:
      • lock the Dashboard completely: Editing/moving only becomes possible when unlocking the page
      • An empty column is refilled when the respective tiles are made visible again, no need to reset everything
      • added a visual "move indicator" on the Docker and VM pages, to make it clearer that rows can now be moved.
      • change cursor shape when moving is enabled
      • use tile title as index
    • webgui: fix: Local Firefox account pop-up postmessages not working
    • webgui: SMART test cannot be run on a UD disk because there is no spin down delay selection
    • webgui: status footer stuck on "Starting services" when applying share config setting changes.
    • webgui: Fix table layout for orphan images
    • webgui: Plugin: Do not show update button if incompatible
    • webgui: OpenTerminal: limit clients
    • webgui: Context menu: automatic triangle placement
    • webgui: Dashboard: fix pool warnings
    • webgui: Allow SMART long test for UD
    • webgui: Read processor type from /proc/cpuinfo
    • webgui: CSS: solve scrollbar issue in firefox
    • webgui: plugin: Make wget percentage detection more robust
    • webgui: Add share: fix hidden share name check
    • webgui: Display settings: add missing defaults
    • webgui: Array Operation: prevent double clicking of Start button
    • webgui: DeviceInfo: show shareFloor with units
    • webgui: DeviceInfo: added automatic floor calculation
    • webgui: Added autosize message
    • webgui: Shares: added info icon
    • webgui: Updated DeviceInfo and Shares page
    • webgui: Fix network display aberration.
    • webgui: Auto fill-in minimum free space for new shares
    • webgui: feat(upc): update to v3 for connect
    • webgui: Share/Pool size calculation: show and allow percentage values
    • wireguard: add SSL support for WG tunnel IP addresses (myunraid.net wildcard certs only)
    • wireguard: fix nginx issue when partial WireGuard config



    User Feedback




    5 minutes ago, jbear said:

    I actually already had a trailing / and did not state that in my post. 

    The trailing "/" is needed for the symlink.  Using /mnt/user/ doesn't help if the docker container doesn't add a trailing "/" to the share folder, like /mnt/user/symlink/, to reference the folder.

    7 minutes ago, dlandon said:

    The trailing "/" is needed for the symlink. ...

     

    I understand. This change has created some issues for me, and I fully understand how to work around them.  I was just stating what happened to me after I installed RC6, in the event others had a similar configuration.  I assume it could be argued that passing /mnt/user/ is not a good idea, but it has worked this way for many years until RC6.  That is all.


    I want to make one thing clear, UNRAID has been an AMAZING product.  There are many times over the last 10 years that I have thought, "UNRAID does everything I want it to do", however,  Lime Technology and the community continue to make it even better.  Pretty unique if you ask me.

     

    Sorry to go OT.

     

    Cheers.

    17 minutes ago, jbear said:

    I want to make one thing clear, UNRAID has been an AMAZING product. ...

    No worries.  This is the kind of feedback we need to understand how people use Unraid and what issues they have.  I'll leave this open until we have some sort of resolution.

     

    Sorry, too many conversations going on about the same subject.

    1 hour ago, bonienl said:

     

    Unraid is not involved in custom networks created by the user; it does, however, manage macvlan / ipvlan custom networks.

     

    When you create your own custom networks using the docker CLI, it is your own responsibility to create such a network correctly.

     

    That I totally understand, and that is fine, but it still doesn't explain why

    "docker network create test --subnet 192.168.2.0/24 --gateway 192.168.2.1" does not work as expected but

    "docker network create test --subnet 172.2.0.0/16 --gateway 172.2.0.1" does work.

    By working I mean I can access the dockers from the LAN (in my case that is 192.168.0.0/24), i.e. the network gets routed (or not routed) correctly. By not setting a driver they both default to bridge, so it should work in both cases.


    I upgraded from RC5 to RC6.

    Neither the docker service nor VMs are starting up.

     

    Docker service complains about /mnt/user/system/docker/docker.img
    VMs complain about /mnt/user/system/libvirt/libvirt.img


    Seems like the symlink is dead

     

    root@server:/mnt/user# ls -lash /mnt/user/system
    0 lrwxrwxrwx 4 nobody users 47 Mar 14  2021 /mnt/user/system -> /mnt/applications/system



    This is the content.
     

    root@TheSilence:/mnt/applications# ls -lash /mnt/applications/
    total 24G
     16K drwxrwxrwx 1 nobody users  98 Jul 17  2022 ./
       0 drwxr-xr-x 9 root   root  180 May 19 14:14 ../
       0 drwxrwxrwx 1 nobody users 442 Apr 23 09:00 appdata/
       0 drwx--x--x 1   1000 users  14 Nov 25  2020 docker/
       0 drwxrwxrwx 1 nobody users  38 Jan 14 19:21 domains/
     24G -rw-r--r-- 1 root   root  24G Mar 14  2021 hassio.qcow2
       0 drwxrwxrwx 1 nobody users 264 Jan 14 19:17 isos/
       0 drwxrwxrwx 1 nobody users 124 Jun 21  2022 sync/


    And it's on a mount:

    root@server:/mnt/applications# mount | grep applications
    /dev/sdd1 on /mnt/applications type btrfs (rw,noatime,ssd,discard=async,space_cache=v2,subvolid=5,subvol=/)

     

     


    I found something over here on /mnt/disk1. I'm a bit confused, since it's a regular XFS drive and not the SSD where this used to be.

    /dev/md1p1 is on /mnt/disk1 type xfs (rw,noatime,nouuid,attr2,inode64,logbufs=8,logbsize=32k,noquota)
     

    root@server:/# ls -lash /mnt/disk1/system/docker/
    total 51G
      0 drwxrwxrwx 2 root   root   24 Mar 14  2021 ./
      0 drwxrwxrwx 4 nobody users  47 Mar 14  2021 ../
    51G -rw-rw-rw- 1 nobody users 50G May 19 14:10 docker.img

    root@TheSilence:/mnt/applications# ls -lash /mnt/disk1/system/libvirt/
    total 1.1G
       0 drwxrwxrwx 2 root   root    33 Mar 14  2021 ./
       0 drwxrwxrwx 4 nobody users   47 Mar 14  2021 ../
    1.1G -rw-rw-rw- 1 nobody users 1.0G May 19 14:09 libvirt.img

     

     

    EDIT:  "cp -R /mnt/disk1/system /mnt/applications/" fixed it.

    I had to manually start the docker and VM service, but they're up and running now.

     

    I am not 100% sure if I had some non standard setup somehow previously, but I do recall trying to move things to SSDs at some point. Maybe that caused the issue.

     

    4 hours ago, dlandon said:

    ...

    I know all that. I'm merely pointing out how this automatic change can and will break existing setups, so be prepared for the multitude of inbound support posts it will generate. Maybe it would be wiser to make this new behavior an explicit opt-in feature.


    RC6 ZFS dataset user share strange behavior

    When creating a dataset on a ZFS pool, a user share is automatically created, but it shows up as if it's assigned to the Array (Primary Storage = Array). (Screenshots attached.)

     

     


    I've been running 12.RC5 and now 12.RC6, upgraded from 11.5, and I'm experiencing a problem with Mover. On 11.5 I had been running a dedicated mover_cache using a single SSD formatted as XFS. With the move to 12.RC5 I decided to reformat the mover_cache as ZFS. So far so good. I also reformatted my array drives from XFS to ZFS.

    Now, using 12.RC5/12.RC6, when I save a file, such as an Excel spreadsheet, it is appropriately saved on the ZFS-formatted mover_cache. When I invoke Mover, the file is then moved to the appropriate share on the ZFS array drive, as it should be. All good so far, and no change in behavior from 11.5 with all-XFS drives. However, if I try to open the file after it has been moved to the array ZFS drive, it is now seen as Read Only. If I open it while it is on the SSD ZFS mover_cache, I can open it without issue, with full r/w privileges. If I move the file to the array, then Stop/Start the array or reboot the server, the file can be opened with full r/w privileges.

    Through a lot of trial and error I seem to have solved the problem by reverting the mover_cache from ZFS to XFS, leaving the array drives as ZFS. In this configuration there is no issue, but it kind of defeats my hope of mirroring the cache drives in the future. I started with a single SSD ZFS for the mover_cache, with the intention to migrate my other cache pools (dockers), but I am holding off for now. I know nothing about ZFS; perhaps this is expected behavior and I may have missed it in the release notes, but it was a surprise to me.

     

    The only other change I made was to increase the memory available to the ZFS filesystem from 1/8 to 1/4. I have 32G of ECC RAM and it mostly goes unused, so I thought doubling the ZFS RAM might be beneficial. I haven't seen this reported elsewhere, perhaps because most have not converted the array to ZFS. Any other thoughts/guidance is appreciated.


    Is there a way to modify how rc.sshd functions?

     

    The sshd_check() function was added to limit incoming SSH to physical devices only. I can see the security benefit, but I feel it goes too far and limits some beneficial incoming connections, such as over the WG tunnels. Can the function be revised to also parse the /boot/config/wireguard dir and include the wg interfaces within the allowed interfaces?

    5 hours ago, DiscoverIt said:

    Is there a way to modify how rc.sshd functions?

     

    A future version will dynamically add and remove WireGuard tunnels when they are made active or inactive.

     

    8 hours ago, jsiemon said:

    I've been running 12.RC5 and now 12.RC6 as upgraded from 11.5 and I'm experiencing a problem with Mover. ...

    I cannot reproduce this. Please create a bug report with a step-by-step description of how to reproduce it, attach your diags, and mention the share name you are using.

    4 hours ago, JorgeB said:

    I cannot reproduce this, ...

    JorgeB - Thanks for taking the time to respond to my issue.  Your response got me to think further about my setup as I was trying to go step by step through what I did.  I realized I had created the ZFS mover_cache using the Erase button on the Disk Settings page.  This morning I reverted my XFS mover_cache back to a ZFS mover_cache using the FORMAT button on the Main page.  Now everything seems to work properly after invoking the MOVE.  No more Read Only errors on my saved files after being moved to the share.  I will continue to monitor this, but for now everything is working as expected.  Chalk this up to user error/stupidity, not a bug.  Thanks again for your help in working through this.


    What's the motivation for these changes?

     

    • rc.samba - let smb, nmb service listen on regular interfaces only which have an IP address, this includes the primary interface + set ipv4 / ipv6 support (also for wsdd2)
    • rc.ssh - listen on regular interfaces only which have an IP address, this includes the primary interface + set ipv4 / ipv6 support

     

    As implemented, this breaks Samba and SSH access for plugins which bring their own VPN connections (specifically, in my case, Tailscale).


    I'm so excited Unraid is bringing native support for ZFS! So frickin' awesome.

     

    With RC6 I decided to give it a shot on the test server. The one thing I'm concerned with is figuring out how to import the existing pool, which contains multiple vdevs. My setup is as follows:

     

    dumpster
      raidz1-0
        ata-WDC_WUH721414ALE6L4_9RJNXNBC
        ata-WDC_WUH721414ALE6L4_9RJR6S5C
        ata-WDC_WUH721414ALE6L4_9RJPW0WC
        ata-WDC_WUH721414ALE6L4_9RJPV64C
      raidz1-1
        ata-WDC_WD121KRYZ-01W0RB0_8DH8S2AH
        ata-WDC_WD121KRYZ-01W0RB0_8DH898VH
        ata-WDC_WD121KRYZ-01W0RB0_8DH91Z1H
        ata-WDC_WD121KRYZ-01W0RB0_8DH4VWMH
      raidz1-2
        ata-WDC_WD181KRYZ-01AGBB0_3FHR6YST
        ata-WDC_WD181KRYZ-01AGBB0_3FHS6EBT
        ata-WDC_WD181KRYZ-01AGBB0_3GKUE1VE
        ata-WDC_WD181KRYZ-01AGBB0_3GKZVZBE

     

    raidz1-0 contains 4x 14TB drives

    raidz1-1 contains 4x 12TB drives

    raidz1-2 contains 4x 18TB drives

     

    I haven't created any special datasets, just created the pool & the 3 vdevs.

     

    In the documentation, it looks like I would create the pool with 12 drives? Will Unraid automatically identify the 3 vdevs, or do I need to import it differently?

     

    All ZFS work was done using the plugins for version 6 of Unraid.

     

    Thank you!


    I'm itching to upgrade, but I am concerned with the possibility of the "exclusive" shares feature breaking my existing structure.

     

    I know it's already been mentioned that options are being considered, but I wanted to add my support for, at the very least, an option to disable the functionality entirely.  I have no need currently to improve share performance.  If it could be toggled at the share level, that would be nice, but the ability to globally disable it is all I would really need for now, just so I could upgrade without worrying about immediately breaking something that's been working for years.

    21 minutes ago, fritzdis said:

    I'm itching to upgrade, but I am concerned with the possibility of the "exclusive" shares feature breaking my existing structure.

     

    Whether there will be an issue really depends upon how you are set up.  It's the passing of /mnt or /mnt/user to a container where there is a problem, and then accessing the exclusive share from it.  (E.g., in the Krusader app, /media gets mapped to /mnt/user.  You can access everything from it except for the exclusive shares.)

     

    99.9% of the time this shouldn't be done, as it's a quick and dirty shortcut, and you are effectively giving the container access to your entire server (e.g., does Plex really need access to your banking information?).

     

    In other words, most people will already have their appdata share being "exclusive", and it's not an issue because it winds up getting passed through as /config -> /mnt/user/appdata.

     

    But, as has been stated already it's being looked at.

    1 hour ago, Squid said:

    Really depends upon how you are set up if there will be an issue. ...

    Thanks!  The vast majority of my mappings are /something -> /mnt/user/something or similar (with or without trailing slash).  I have a single /mnt/user mapping that I can change to separate mappings for the specific shares I need.

     

    So maybe I'll take the plunge on RC6 so I can get started on converting my data disks to ZFS one by one.  As far as I can tell, this was the only known issue holding me back.


    Is 6.12.x going to change macvlan to ipvlan by itself? I had a lot of network issues and changed back to 6.11.5 really quickly. After that the network was messed up. I switched between 6.11.5 and 6.11.4, nothing working. Couldn't upgrade plug-ins, no updates on dockers, even no update on the system.

    Then there was the right hint out of the community: ipvlan behind an AVM Fritzbox (very common in Germany) is not going to work.
    After switching to macvlan everything works like a charm. Switching back was very tricky:

    - disable dockers, apply

    - switch back to macvlan, apply, but Unraid switched back to ipvlan on its own

    - disable dockers, apply

    - now update from 6.11.4 to 6.11.5

    - ipvlan to macvlan

    - working again

    With dockers disabled, there is no hiccup in the Fritzbox because there are not several IP addresses behind 1 MAC address.

    Conclusion: if 6.12 will not work with macvlan, everyone with a Fritzbox will be stuck on 6.11:

    “If "Docker custom network type" is set to "macvlan" you may get call traces and crashes on 6.12 even if you did not on 6.11. If so, we recommend changing to "ipvlan", or if you have two network cards you can avoid the issue completely: https://forums.unraid.net/topic/137048-guide-how-to-solve-macvlan-and-ipvlan-issues-with-containers-on-a-custom-network/“

    16 hours ago, v3life said:

    In the documentation, it looks like I would create the pool with 12 drives? Will Unraid automatically identify the 3 vdevs, or do I need to import it differently?

     

    All ZFS work was done using the plugins for version 6 of unraid.

     

    Unraid should import that pool without any issues: just add a new pool with 12 slots, assign all devices, leave the fs set to auto, and start the array. Also note that if the pool is online it must be exported first; Unraid needs to do the zpool import.

     

    If for some reason it's not imported, the existing pool won't be damaged; create a bug report with the diags.

    2 hours ago, Schicksal said:

    Is 6.12.x going to change macvlan to ipvlan by itself?

    No, only new installs default to ipvlan, but that was already changed in v6.11.5.


    Hi All

    I just did the upgrade from stable to this release candidate (rc6), and after some time the system got really unresponsive, and I got this from the log terminal (never seen this before):

    (screenshot of log output attached)

    I then got the diag. (Took forever, but I got it in the end.) 🙂
    Hope someone can shed some light on this. (Looks like all my dockers and VMs are still running.)

     

    diagnostics-20230521-1109.zip





