• Unraid OS version 6.12.0-rc5 available


    limetech

    Please refer to the 6.12.0-rc1 topic for a general overview.

     

    Anticipating stable release in a couple of days.

     


    Version 6.12.0-rc5 2023-05-01

    Bug fixes/improvements:

    • Auto-mount all nested ZFS datasets upon array Start.
    • Fix bug in bind-mount when share names contain spaces.
    • Restrict avahi daemon to primary interface.
    • Restrict 'newperms' script to operate on /mnt/ only.
    • Fixed typos in help text.

     

    Linux kernel

    • version: 6.1.27

     

    Version 6.12.0-rc4 2023-04-27

    New in this release is a conceptual change in the way storage is assigned to shares. Normally such a change would not happen in an -rc series; however, in this case the under-the-hood coding was minimal. The concept outlined below is something we planned on introducing in the Unraid OS 6.13 cycle. We decided to introduce this change now because of increased interest in Unraid OS now that ZFS is supported.

    The old concept of main storage being the unRAID array with an optional "Cache" is confusing to many new users, especially since cache has a specific meaning in ZFS.

    Also outlined below, we introduced the concept of an exclusive share. This is simply a share where all the data exists in a single named pool. In this case we set up a bind-mount, bypassing the FUSE-based User Share file system. This feature is primarily aimed at maximizing I/O for large, fast ZFS pools accessed via a fast network.

    Share storage conceptual change

    Storage options for a share are configured using two inputs:

    • Primary storage
    • Secondary storage

    Primary storage is where new files/folders are created. If Primary storage is below the Minimum Free Space setting then new files and folders will be created in Secondary storage, if configured.
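    The allocation rule above can be sketched as a small shell function. This is purely illustrative; the actual logic is internal to Unraid:

```shell
# Illustrative sketch of the stated rule: new files go to Primary
# storage unless its free space is below the share's Minimum Free
# Space setting, in which case they go to Secondary storage (if
# one is configured).
choose_storage() {
  primary_free=$1; min_free=$2; secondary=$3
  if [ "$primary_free" -ge "$min_free" ]; then
    echo primary
  elif [ "$secondary" != none ]; then
    echo secondary
  else
    echo primary   # no fallback configured
  fi
}
```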

    Each input presents a drop-down which lists "array", "none", and each named pool as a selection according to some configuration rules:

    For the Primary storage drop-down:

    • the "none" option is omitted, ie, Primary storage must be selected
    • any named pool can be selected
    • "Array" can be selected (meaning the unRAID array)

    For the Secondary storage drop-down:

    • the "none" option is included, ie, Secondary storage is optional
    • if Primary storage is a pool name, then the only options are "none" and "Array"
    • if Primary storage is "Array", then only "none" appears as an option

    When "Array" is selected for either Primary or Secondary storage, a set of additional settings slide in:

    • Allocation method
    • Included disk(s)
    • Excluded disk(s)
    • Split level

    When a btrfs named pool is selected for either Primary or Secondary storage, an additional setting slides in:

    • Enable Copy-on-write

    When a ZFS named pool is selected for either Primary or Secondary storage, there are no additional settings at this time but there could be some in the future. For example, since a share is created as a ZFS dataset, it could have a different compression setting than the parent pool if we need to implement this.

    Mover action

    When there is Secondary storage configured for a share the "Mover action" setting becomes enabled, letting the user select the transfer direction of the mover:

    • Primary to Secondary (default)
    • Secondary to Primary

    Exclusive shares

    If Primary storage for a share is a pool and Secondary storage is set to "none", then we can set up a bind-mount in /mnt/user/ directly to the pool share directory. (An additional check is made to ensure the share also does not exist on any other volumes.) There is a new status flag, 'Exclusive access' which is set to 'Yes' when a bind-mount is in place; and, 'No' otherwise. Exclusive shares are also indicated on the Shares page.

    The advantage of setting up a bind-mount is that I/O bypasses the FUSE-based user share file system (shfs), which can significantly increase performance.

    There are some restrictions:

    • Both the share Min Free Space and pool Min Free Space settings are ignored when creating new files on an exclusive share.
    • If there are any open files, mounted loopback images, or attached VM vdisk images on an exclusive share, no settings for the share can be changed.
    • If the share directory is manually created on another volume, files are not visible in the share until after array restart, upon which the share is no longer exclusive.
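    One way to verify from the console whether a share is exclusive is to check the filesystem type backing its /mnt/user path: 'fuse.shfs' means it is still served through FUSE, while an exclusive (bind-mounted) share reports the pool's own filesystem. The helper below is a sketch, and the 'docker' share name in the usage comment is just an example:

```shell
# A share path served by shfs reports fstype 'fuse.shfs'; an
# exclusive (bind-mounted) share reports the pool's filesystem,
# e.g. 'zfs' or 'btrfs'.
is_exclusive_fstype() {
  [ "$1" != "fuse.shfs" ]
}
# Usage on a live system (illustrative):
#   is_exclusive_fstype "$(findmnt -n -T /mnt/user/docker -o FSTYPE)"
```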

    Change Log vs. 6.12.0-rc3

    Linux kernel

    • version: 6.1.26

    Misc

    • avahi: enable/disable IPv4/IPv6 based on network settings
    • webgui: DeviceInfo: show shareFloor with units
    • webgui: DeviceInfo: added automatic floor calculation
    • webgui: Added autosize message
    • webgui: Shares: added info icon
    • webgui: Updated DeviceInfo and Shares page [explain]
    • webgui: Fix network display aberration.
    • webgui: Auto fill-in minimum free space for new shares
    • webgui: feat(upc): update to v3 for connect
    • webgui: Share/Pool size calculation: show and allow percentage values
    • webgui: VM manager: Make remote viewer and web console options selectable.
    • webgui: VM manager: Option to download .vv file and start remote viewer if the browser is set to open .vv files when downloaded.
    • webgui: VM manager: Add remote viewer console support
    • webgui: VM manager: remove lock-posix='on', flock='on' settings

    Base Distro

    • openzfs: version 2.1.11

    Version 6.12.0-rc3 2023-04-14

    Upgrade notes

    If you created any zpools using 6.12.0-beta5, please erase those pools and recreate them.

    If you revert back from 6.12 to 6.11.5 or earlier, you have to force update all your Docker containers and start them manually after downgrading. This is necessary because of the underlying change to cgroup v2 in 6.12.0-rc1.

    Upon boot, if any PCI device specified in the 'config/vfio-pci.cfg' file fails to bind properly, VM Autostart is prevented. You may still start individual VMs. This is to prevent an Unraid host crash if hardware PCI IDs changed because of a kernel update or physical hardware change. To restore VM autostart, examine '/var/log/vfio-pci-error', remove the offending PCI IDs from the 'config/vfio-pci.cfg' file, and reboot.

    Linux Multi-Gen LRU is a relatively new feature, now included but not enabled by default. You can enable it by adding this line to your 'config/go' file:

    echo y > /sys/kernel/mm/lru_gen/enabled

    If you revert back from 6.12 to 6.11.5 or earlier, you may need to remove that line.

    Obsolete/Broken Plugins

    There are a few plugins which are known to be incompatible with Unraid 6.12, and upon boot will not be installed. You will get a notification for each plugin that is affected, and can review the list by going to Plugins/Plugin File Install Errors.

    • disklocation-master version 2022.06.18 (Disk Location by olehj, breaks the dashboard)
    • plexstreams version 2022.08.31 (Plex Streams by dorgan, breaks the dashboard)
    • corsairpsu version 2021.10.05 (Corsair PSU Statistics by Fma965, breaks the dashboard)
    • gpustat version 2022.11.30a (GPU Statistics by b3rs3rk, breaks the dashboard)
    • ipmi version 2021.01.08 (IPMI Tools by dmacias72, breaks the dashboard)
    • nut version 2022.03.20 (NUT - Network UPS Tools by dmacias72, breaks the dashboard)
    • NerdPack version 2021.08.11 (Nerd Tools by dmacias72)
    • upnp-monitor version 2020.01.04c (UPnP Monitor by ljm42, not PHP 8 compatible)
    • ZFS-companion version 2021.08.24 (ZFS-Companion Monitor by campusantu, breaks the dashboard)

    Some of the affected plugins have been taken over by different developers; we recommend that you go to the Apps page and search for replacements. Please ask plugin-specific questions in the support thread for that plugin.

    ZFS Pools

    New in this release is the ability to create a ZFS file system in a user-defined pool. In addition you may format any data device in the unRAID array with a single-device ZFS file system.

    We are splitting full ZFS implementation across two Unraid OS releases. Initial support in this release includes:

    • Support raid0, mirror, raidz1, raidz2 and raidz3 root profiles. Up to 4-way mirror in a mirror vdev. Multiple vdev groups.
    • Support removing a single device: if the device is still present in the server, 'wipefs' is used to clear the partition table.
    • Support replacing single missing device with a new device of same or larger size.
    • Support scheduled trimming of ZFS pools.
    • Support pool rename.
    • Pool names must begin with a lowercase letter and only contain lowercase letters, digits, the underscore and dash. Pool names must not end with a digit.
    • Non-root vdevs cannot be configured in this release; however, they can be imported. Note: imported hybrid pools may not be expanded in this release.
    • Pools created on other systems may or may not import depending on how the pool was created. A future update will permit importing pools from any system.
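    The pool-name rules above can be expressed as a single pattern. The function below is a sketch of those rules, not the actual check Unraid performs:

```shell
# Valid: starts with a lowercase letter, contains only lowercase
# letters, digits, underscore and dash, and does not end with a
# digit.
valid_pool_name() {
  printf '%s' "$1" | grep -Eq '^[a-z]([a-z0-9_-]*[a-z_-])?$'
}
```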

    A ZFS pool has three variables:

    • profile - the root data organization: raid0, mirror (up to 4-way), raidz1, raidz2, raidz3
    • width - the number of devices per root vdev
    • groups - the number of root vdevs in the pool

    At time of ZFS pool creation, the webGUI will present all topology options based on the number of devices assigned to the pool.
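    The three variables combine straightforwardly. As a worked example (not Unraid code): a raidz2 pool with width=6 and groups=2 uses 12 devices in total, with 4 data devices per vdev:

```shell
# Total devices in a pool = width * groups
pool_total_devices() { echo $(( $1 * $2 )); }
# For raidzN profiles, each vdev has (width - N) data devices,
# where N is the number of parity devices (1, 2, or 3).
raidz_data_per_vdev() { echo $(( $1 - $2 )); }
```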

    Special treatment for root single-vdev mirrors:

    • A single-device ZFS pool can be converted to a multiple-device mirror by adding up to 3 additional devices in one operation.
    • A 2-device mirror can be increased to a 3-device mirror by adding a single device; similarly, a 3-device mirror can be increased to a 4-device mirror by adding a single device.

    To add an additional root vdev, you must assign 'width' number of new devices to the pool at the same time. The new vdev will be created with the same 'profile' as the existing vdevs. Additional flexibility in adding/expanding vdevs will be provided in a future update.

    Pools created with the steini84 plugin can be imported as follows: First create a new pool with the number of slots corresponding to the number of devices in the pool to be imported. Next assign all the devices to the new pool. Upon array Start the pool should be recognized, though certain zpool topologies may not be recognized (please report).

    Mixed topologies are not supported. For example, a pool with both a mirror root vdev and a raidz root vdev is not recognized.

    Autotrim can be configured as on or off (except for single-device ZFS volumes in the unRAID array).

    Compression can be configured as on or off, where on selects lz4. A future update will permit specifying other algorithms/levels.

    When creating a new ZFS pool you may choose zfs - encrypted, which, like other encrypted volumes, applies device-level encryption via LUKS. ZFS native encryption is not supported at this time.

    During system boot, the file /etc/modprobe.d/zfs.conf is auto-generated to limit the ZFS ARC to 1/8 of installed memory. This can be overridden if necessary by creating a custom 'config/modprobe.d/zfs.conf' file. A future update will include the ability to configure the ARC via the webGUI, including auto-adjustment according to memory pressure, e.g., VM start/stop.
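    For example, a custom 'config/modprobe.d/zfs.conf' overriding the default limit might look like this (the 8 GiB value is purely illustrative, not a recommendation; the zfs_arc_max parameter takes bytes):

```
# Cap the ZFS ARC at 8 GiB (8 * 1024^3 bytes)
options zfs zfs_arc_max=8589934592
```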

    Top-level user shares in a ZFS pool are created as datasets instead of ordinary directories.

    btrfs pools

    Autotrim can be configured as on or off when used in a pool.

    Compression can be configured as on or off, where on selects zstd. A future update will permit specifying other algorithms/levels.

    xfs

    Autotrim can be configured as on or off when used as a single-slot pool.

    Docker

    It is possible to configure the Docker data-root to be placed in a directory on a ZFS storage pool. In this case Docker will use the 'zfs' storage driver. This driver creates a separate dataset for each image layer. Because of this, here is our recommendation for setting up Docker using a directory:

    First, create a docker user share configured as follows:

    • Share name: docker
    • Use cache pool: Only
    • Select cache pool: name of your ZFS pool

    Next, on Docker settings page:

    • Enable docker: Yes
    • Docker data-root: directory
    • Docker directory: /mnt/user/docker

    If you ever need to delete the docker persistent state, then bring up the Docker settings page and set Enable docker to No and click Apply. After docker has shut down click the Delete directory checkbox and then click Delete. This will result in deleting not only the various files and directories, but also all layers stored as datasets.

    Before enabling Docker again, be sure to first re-create the docker share as described above.

    Other changes:

    • CreateDocker: changed label Docker Hub URL to Registry URL because of GHCR and other new container registries becoming more and more popular.
    • Honor user setting of stop time-out.
    • Accept images in OCI format.
    • Add option to disable readmore-js on container table
    • Fix: Docker Containers console will not use bash if selected

    VM Manager

    If you enable copy/paste for virtual consoles, you need to install additional software on the client in addition to the QEMU agent, if that has been installed. Here is the location of spice-vdagent for both Windows and Linux. Note: the copy/paste function will not work with the web Spice viewer; you need to use virt-viewer.

    • Add Serial option to vdisk.
    • Spice bug fix for users with non-standard GUI ports defined.
    • OVMF for QEMU: version stable202302
    • Fix for bus text.
    • Enable copy paste option for virtual consoles
    • Update Memory Backup processing for Virtiofs.
    • Fix lockup when no VMs are present
    • Add support for rtl8139 network model.
    • fix translation omission
    • added lock/unlock for sortable items
    • Fix for Spice Mouse if Copy paste enabled.

    Dashboard

    The webGUI Dashboard has been redesigned and it is now possible to move elements (tiles) up and down and between columns. This allows the user to organize the tiles in any way they desire. There is a small lock icon on the menu bar which must be clicked to enable this function.

    Note: The lock icon also appears on the Docker and VM pages and must be clicked to rearrange the startup order.

    Release bz file differences

    Unraid OS consists of a set of 5 so-called bz files in the root of the USB Flash boot device:

    • bzimage - the Linux kernel
    • bzroot - the root file system, sans console desktop
    • bzroot-gui - additional files needed for console desktop
    • bzmodules - modules (drivers) associated with the Linux kernel
    • bzfirmware - device firmware required by certain modules

    Starting with 6.12 release, the content of these files has been rearranged:

    • bzimage - the Linux kernel (same as before)
    • bzroot - the root file system excluding the /usr directory tree
    • bzroot-gui - a single file which auto-starts the console desktop (for compatibility)
    • bzmodules - modules (drivers) associated with the Linux kernel and device firmware required by certain modules
    • bzfirmware - the /usr directory and all files contained therein, including console desktop

    The result of this change is a faster boot process and nearly 1G of freed RAM. It also permits us to add more "stuff" to Unraid OS in the future without requiring more RAM. Finally, when booted in non-GUI mode, the desktop can be started by logging in at the console and typing the 'slim' command.

    The files bzfirmware and bzmodules are squashfs images mounted using overlayfs at /usr and /lib respectively. Since these files are loopback-mounted, care must be taken if ever you want to perform a manual update.

    What is a manual update? This is a method of updating Unraid OS on your USB flash boot device without using the Tools/Update OS function. Typically one would either:

    • open a Terminal window, wget the release zip file, unzip the release, and then 'cp' the bz files to the root of the boot device.

    or

    • export the 'flash' share on your network and drag the bz files from a PC directly to the flash.

    Starting with 6.12, either method can fail because the bzfirmware file will be overwritten while it is still mounted - not good.

    To get around this, you must first create a temp directory on the flash device and then 'mv' (or drag) all the bz files to this temp directory. Now you can copy the new bz files into place and reboot.
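    The safe sequence can be sketched as a function (illustrative only; on a real system the first argument would be /boot and a reboot follows):

```shell
# Move the old bz files aside first -- they stay loopback-mounted
# from the temp directory -- then copy the new ones into place.
# $1 = boot device mount point, $2 = directory with the new bz files
safe_bz_update() {
  mkdir -p "$1/previous-bz"
  mv "$1"/bz* "$1/previous-bz/"
  cp "$2"/bz* "$1/"
}
```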

    Linux kernel

    • version 6.1.23
    • md/unraid: version 2.9.27
    • CONFIG_FS_DAX: File system based Direct Access (DAX) support
    • CONFIG_VIRTIO_FS: Virtio Filesystem
    • CONFIG_ZONE_DEVICE: Device memory (pmem, HMM, etc...) hotplug support
    • CONFIG_USBIP_HOST: Host driver
    • CONFIG_INTEL_MEI: Intel Management Engine Interface
    • CONFIG_INTEL_MEI_ME: ME Enabled Intel Chipsets
    • CONFIG_INTEL_MEI_GSC: Intel MEI GSC embedded device
    • CONFIG_INTEL_MEI_PXP: Intel PXP services of ME Interface
    • CONFIG_INTEL_MEI_HDCP: Intel HDCP2.2 services of ME Interface
    • CONFIG_INTEL_PMC_CORE: Intel PMC Core driver
    • CONFIG_DRM_I915_PXP: Enable Intel PXP support
    • CONFIG_SCSI_FC_ATTRS: FiberChannel Transport Attributes
    • CONFIG_FUSION_SPI: Fusion MPT ScsiHost drivers for SPI
    • CONFIG_FUSION_FC: Fusion MPT ScsiHost drivers for FC
    • CONFIG_FUSION_CTL: Fusion MPT misc device (ioctl) driver
    • CONFIG_FUSION_LOGGING: Fusion MPT logging facility
    • CONFIG_X86_AMD_PSTATE: AMD Processor P-State driver
    • CONFIG_LRU_GEN: Multi-Gen LRU
    • CONFIG_SERIAL_8250_NR_UARTS=32: Maximum number of 8250/16550 serial ports
    • CONFIG_SERIAL_8250_RUNTIME_UARTS=4: Number of 8250/16550 serial ports to register at runtime

    Misc

    • cgroup2 now the default
    • loopback images no longer mounted using directio
    • upgradepkg patched to prevent replacing existing package with older version.
    • current PCI bus/device information saved in file '/boot/previous/hardware' upon Unraid OS upgrade.
    • NFS: enable UDP transport
    • emhttp: fix cache pool (null) syslog strings
    • emhttp: fix cache pool display wrong device size for selected replacement device
    • networking: fix nginx recognizing IP address from slow dhcp servers
    • mover: fix: improper handling of symlinks
    • mover: fix: Mover logging syslog entries format different from previous releases
    • plugin: Display Run command retval in error message
    • shfs: ignore top-level hidden directories (names beginning with '.')
    • terminal: OpenTerminal: change termination signal (hard stop)
    • upgrade Unraid OS: check for earlier upgrade without reboot
    • VM Manager: let page load even when PCI devices appear missing or are misassigned
    • wireguard: add SSL support for WG tunnel IP addresses (myunraid.net wildcard certs only)
    • webgui: support PHP8, increase PHP max memory from 128M to 256M
    • webgui: ManagementAccess: Disable Provision/Renew/Upgrade buttons when no IP on eth0
    • webgui: ManagementAccess: Support wireguard local IP addresses in combination with myservers.unraid.net SSL cert
    • webgui: Move "view" icon on Main and Shares page to the left
    • webgui: Dashboard: fix regression error in "select case"
    • webgui: Dashboard: make items moveable between columns
    • webgui: Keep dismissed banners hidden for a month
    • webgui: Dashboard: API for adding custom tiles
    • webgui: Dashboard: rearrange processor information
    • webgui: Dashboard: rearrange UPS info
    • webgui: Dashboard: rearrange memory info
    • webgui: Dashboard: VPN header rearrangement
    • webgui: Dashboard: header rearrangements
    • webgui: Add jqueryUI touch punch for mobile devices
    • webgui: Changed ID to CLASS for elements occurring more than once
    • webgui: Make header in white and black themes scrollable
      • When more items are present than screen space, the user can now scroll through them (previously these items were invisible)
    • webgui: Dashboard and Docker: introduce lock button for sortable items
      • By default sortable items are locked, which allows mobile devices to scroll the page. Upon request items can be made sortable
    • webgui: Users: add icon to title bar
    • webgui: Tools: new function -> PHP Settings
      • View PHP info
      • Configure error reporting
      • Open LOG to see errors in real-time
    • webgui: System info: fix reading inactive ports
    • webgui: Plugin: Include the actual command, being executed
    • webgui: System info: cache enhancement
    • webgui: System info: memory enhancement
    • webgui: DeviceInfo: disable buttons when erase operation is running
    • webgui: Docker: filetree corrections
    • webgui: Fixed: Dashboard: show heat alarm per pool
    • webgui: Notifications: revised operation
      • Autoclose new notifications after 3 seconds
      • Fix notifications reappearing after closure
    • webgui: DeviceList: add FS type in offline state
    • webgui: Add notification agent for Bark
    • webgui: Main: hide browse icon when disk is not mounted
    • webgui: Diagnostics: add additional btrfs and zfs info
    • webgui: Dashboard: add ZFS memory usage
    • webgui: Revised New Permissions
      • Select either disks or shares (not both)
    • webgui: Add testparm to diagnostics
    • webgui: Support new UD reserved mount point of /mnt/addons
    • webgui: fix issue displaying Attributes when temperature display set to Fahrenheit
    • webgui: Dashboard changes:
      • lock the Dashboard completely: Editing/moving only becomes possible when unlocking the page
      • An empty column is refilled when the respective tiles are made visible again, no need to reset everything
      • added a visual "move indicator" on the Docker and VM page, to make clearer that rows can be moved now.
      • change cursor shape when moving is enabled
      • use tile title as index
    • webgui: fix: Local Firefox account pop-up postmessages not working
    • webgui: VM Manager: fix VM marked as Autostart not starting following manual array Start
    • webgui: SMART test cannot be run on a UD disk because there is no spin down delay selection
    • webgui: status footer stuck on "Starting services" when applying share config setting changes.
    • webgui: Fix table layout for orphan images
    • webgui: Plugin: Do not show update button if incompatible
    • webgui: OpenTerminal: limit clients
    • webgui: Context menu: automatic triangle placement
    • webgui: Dashboard: fix pool warnings
    • webgui: Allow SMART long test for UD
    • webgui: Read processor type from /proc/cpuinfo
    • webgui: CSS: solve scrollbar issue in firefox
    • webgui: plugin: Make wget percentage detection more robust
    • webgui: Add share: fix hidden share name check
    • webgui: Display settings: add missing defaults
    • webgui: Array Operation: prevent double clicking of Start button
    • wireguard: fix nginx issue when partial WireGuard config

    Base Distro

    • aaa_glibc-solibs: version 2.37
    • adwaita-icon-theme: version 43
    • at-spi2-core: version 2.46.0
    • bash: version 5.2.015
    • bind: version 9.18.12
    • btrfs-progs: version 6.2.1
    • ca-certificates: version 20221205
    • cryptsetup: version 2.6.1
    • curl: version 7.88.1
    • dbus: version 1.14.6
    • diffutils: version 3.9
    • dnsmasq: version 2.89
    • docker: version 20.10.23
    • e2fsprogs: version 1.47.0
    • encodings: version 1.0.7
    • file: version 5.44
    • firefox: version 111.0 (AppImage)
    • freetype: version 2.13.0
    • fuse3: version 3.12.0
    • gawk: version 5.2.1
    • git: version 2.39.2
    • glib2: version 2.74.6
    • glibc: version 2.37
    • glibc-zoneinfo: version 2022g
    • gnutls: version 3.7.9
    • gptfdisk: version 1.0.9
    • gtk+3: version 3.24.37
    • harfbuzz: version 7.1.0
    • htop: version 3.2.2
    • iproute2: version 6.2.0
    • iptables: version 1.8.9
    • iputils: version 20221126
    • less: version 612
    • libICE: version 1.1.1
    • libSM: version 1.2.4
    • libX11: version 1.8.4
    • libXau: version 1.0.11
    • libXcomposite: version 0.4.6
    • libXdamage: version 1.1.6
    • libXdmcp: version 1.1.4
    • libXpm: version 3.5.15
    • libXrandr: version 1.5.3
    • libXres: version 1.2.2
    • libXxf86dga: version 1.1.6
    • libarchive: version 3.6.2
    • libdrm: version 2.4.115
    • libfontenc: version 1.1.7
    • libglvnd: version 1.6.0
    • libjpeg-turbo: version 2.1.5.1
    • libpcap: version 1.10.3
    • libpng: version 1.6.39
    • libpsl: version 0.21.2
    • liburcu: version 0.14.0
    • libwebp: version 1.3.0
    • libxkbcommon: version 1.5.0
    • libxkbfile: version 1.1.2
    • libxshmfence: version 1.3.2
    • lmdb: version 0.9.30
    • logrotate: version 3.21.0
    • lsof: version 4.98.0
    • lz4: version 1.9.4
    • lzlib: version 1.13
    • mc: version 4.8.29
    • mcelog: version 191
    • mpfr: version 4.2.0
    • nano: version 7.2
    • ncurses: version 6.4
    • nginx: version 1.23.3
    • nghttp2: version 1.52.0
    • openssh: version 9.2p1
    • openssl: version 1.1.1t
    • openssl-solibs: version 1.1.1t
    • openzfs: version 2.1.9
    • pango: version 1.50.14
    • pciutils: version 3.9.0
    • pcre2: version 10.42
    • php: version 8.2.4
    • php-libvirt: version 0.5.7
    • php-markdown: version 2.0.0
    • samba: version 4.17.7
    • sqlite: version 3.41.0
    • sudo: version 1.9.13p2
    • sysstat: version 12.7.2
    • tdb: version 1.4.8
    • tevent: version 0.14.1
    • traceroute: version 2.1.2
    • transset: version 1.0.3
    • tree: version 2.1.0
    • usbutils: version 015
    • xcb-util: version 0.4.1
    • xdriinfo: version 1.0.7
    • xf86-video-vesa: version 2.6.0
    • xfsprogs: version 6.1.1
    • xhost: version 1.0.9
    • xinit: version 1.4.2
    • xkbcomp: version 1.4.6
    • xkeyboard-config: version 2.38
    • xorg-server: version 21.1.7
    • xprop: version 1.2.6
    • xrandr: version 1.5.2
    • xset: version 1.2.5
    • xterm: version 379
    • xz: version 5.4.1
    • zstd: version 1.5.4

     




    User Feedback




    24 minutes ago, Squid said:

    That won't work.  It's a common misconception of the syslog server.  The share option is for where to send logs that are incoming (eg from another server).  You want the mirror to flash

    You can set up the syslog server to log to itself and logs will go to the syslog share.  You have to set it up on the syslog server itself.  i.e. if the syslog server is 192.168.1.10, you can set the syslog server on 192.168.1.10 to log to itself by setting 192.168.1.10 in the syslog setup.


    Regarding exclusive shares - I currently have my appdata folder set to cache-only and mount my docker configs to /mnt/cache/appdata (rather than /mnt/user/appdata) to bypass FUSE. It sounds like going forward I don't need to do this? Is it fine to leave my current settings or should I switch to /mnt/user/appdata?

    1 hour ago, dlandon said:

    You can set up the syslog server to log to itself and logs will go to the syslog share.  You have to set it up on the syslog server itself.  i.e. if the syslog server is 192.168.1.10, you can set the syslog server on 192.168.1.10 to log to itself by setting 192.168.1.10 in the syslog setup.

    Nobody thinks of it like that, and it's never actually logged anything in my system setting the syslog server to the IP or 127.0.0.1. For remote systems it works no problems

    51 minutes ago, Squid said:

    Nobody thinks of it like that, and it's never actually logged anything in my system setting the syslog server to the IP or 127.0.0.1. For remote systems it works no problems

    Works for me.

    5 hours ago, mattalat said:

    It sounds like going forward I don't need to do this? Is it fine to leave my current settings or should I switch to /mnt/user/appdata?

    If the share is exclusive you can use either one, it will perform the same.

    11 hours ago, Squid said:

    Nobody thinks of it like that, and it's never actually logged anything in my system setting the syslog server to the IP or 127.0.0.1. For remote systems it works no problems

     

    10 hours ago, dlandon said:

    Works for me.

     

    Since I wrote the instructions on 'how to use' the Syslog Server, I just retested the process of setting up the Unraid server with the problem server as the Local syslog server.  It still works.  (The only problem with writing the file to the Unraid server with the problem is the 'write cache' that all modern OS use.  It is possible for the 'crash' to occur before the OS could finish writing the data to a physical device... )

    Edited by Frank1940

    Is there any chance that Unraid 6.12 stable release will include bash autocomplete config for docker?
     

     

     


    I have zpool with single mirror vdev created on TrueNAS Scale and I want to import it to Unraid. Unfortunately I was not able to do it through GUI, so I am assuming that I need to do it through CLI. What is the best way to import zpool and mount it persistently, so it will mount automatically when I start array? As far as I am aware, editing fstab won't work on Unraid. I did import and mount pool once in rc4, but it did not last after reboot. 

     

    Also, are there any downfalls I should be aware of moving appdata and docker.img from btrfs ssd pool to zfs ssd pool? 

    1 minute ago, Volkerball said:

    I have zpool with single mirror vdev created on TrueNAS Scale and I want to import it to Unraid.

    TrueNAS-created zpools cannot be imported for now because they use partition #2 for zfs; support for those pools is expected in the near future.


    Hi everyone, I have been really excited about zfs support for a long time now (and even more excited for 6.13), and have been using all previous release candidates on my test server. I have now deployed rc5 to my production server (while also recreating every zfs pool from scratch) and it is running buttery smooth. Amazing work, guys, really! Big thanks to all of you!

     

    There are a few minor bugs and suggestions from my journey I wanted to share, some specific to this release, some not. Do with it whatever you want 🙂

     

    Bug: Can't configure existing qcow2 vDisk via GUI when creating a new VM (not specific to this release)

    When trying to create a new VM with a vDisk pointing to an existing qcow2 image the vDisk Type setting is removed from the GUI. After examining the XML I noticed that it is always set to raw, even if the vDisk path is pointing to a qcow2 image. Not sure if there is any logic that tries to detect the vDisk type that fails or if this is just a UI bug, happy to provide more information if needed.

     

    Bug: Manual vDisk location drop-down menu slowly moves to the right (haven't noticed this before tbh)

    On the same note, when opening and closing the drop-down menu to select a path, the menu reappears a few inches further to the right each time, until it leaves the screen.

     

    Bug: Resizing dashboard leads to overlapping columns (specific to this release)

    When resizing the browser while the Dashboard tab is open (from three-column view to two-column view), the middle column isn't positioned/sized correctly and overlaps with the first column.

     

    Suggestion: Rudimentary support for Special, ZIL/SLOG, L2ARC

    I know that support for more zfs features is planned for 6.13+, but it would be really helpful, if pools configured manually via console with a Special Metadata Device, a ZFS Intent Log or an L2ARC would be able to import and be used in disk shares, show things like pool utilization and trigger regular scrubs. I am using the Special Metadata Device for my main ZFS pool and tried to fix it by myself, but I wasn't able to figure out where the zfs import command is actually called during array startup (tried searching in the php files and scripts folder). Any hint pointing me in the right direction would be greatly appreciated...

    Got it working using this guide: 

     

     

    Suggestion: Display VLAN names when selecting Network Source during VM creation

    When multiple VLANs are configured in network settings and multiple bridges are created, it would be helpful to have the VLAN name directly within the drop-down menu when selecting "Network Source". This is basically already done when selecting a custom network in docker (br0.40 -- MyVLAN).

     

    Suggestion: Separate button for pool configuration in Main tab

    Maybe this is just me, but I have found myself struggling a couple of times to find the GUI setting for configuring pool properties (e.g., Compression, Encryption, ZFS Pool Layout). I think clicking on the first disk in the array is a bit unintuitive; maybe a separate edit button (e.g. next to the disk spin up / down buttons) would be more straightforward. But again, just my opinion, maybe that's just me getting old...

    Edited by jonpetersathan
    Link to comment
    20 hours ago, JorgeB said:

    TrueNAS created zpools cannot be imported for now, because they use partition #2 for zfs, support for those pools is expected in the near future.

    I see. I am guessing that my only option right now would be to mount the zfs pool temporarily through the CLI, transfer the files to the unraid array, format the disks to zfs through the unraid GUI, recreate the datasets and transfer the files back?
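    In case it helps, the CLI part of that round trip might look something like this ("tank" and all paths are placeholders; the format and recreate steps happen in the Unraid GUI, not on the command line):

    ```shell
    # Hypothetical sketch of the migration round trip.
    # "tank", /mnt/temp and /mnt/user/migration are placeholders.

    # 1. temporarily import the TrueNAS-created pool read-only
    zpool import -o readonly=on -R /mnt/temp tank

    # 2. copy everything onto the unraid array
    rsync -avh --progress /mnt/temp/tank/ /mnt/user/migration/

    # 3. release the pool before reformatting the disks in the GUI
    zpool export tank

    # 4. after the GUI-side format and dataset recreation, copy back
    rsync -avh --progress /mnt/user/migration/ /mnt/tank/
    ```

    Importing read-only is just a precaution so nothing on the source pool can be modified while copying off.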

    Link to comment
    29 minutes ago, jonpetersathan said:

    it would be really helpful if pools configured manually via the console with a Special Metadata Device, a ZFS Intent Log or an L2ARC could be imported and used in disk shares,

    You can already import those pools; if you need help importing one, please create a new thread in the general support forum.

     

     

    Link to comment
    22 minutes ago, Volkerball said:

    I see. I am guessing that my only option right now would be to mount the zfs pool temporarily through the CLI, transfer the files to the unraid array, format the disks to zfs through the unraid GUI, recreate the datasets and transfer the files back?

    That would work.

    • Thanks 1
    Link to comment
    37 minutes ago, JorgeB said:

    You can already import those pools; if you need help importing one, please create a new thread in the general support forum.

     

     

    Yes, I know, but when importing the pool manually, Disk Shares and for example disk usage in the dashboard, as well as automated scrubs don't work, right? Or am I missing something?

    Link to comment
    10 minutes ago, jonpetersathan said:

    Disk Shares and for example disk usage in the dashboard, as well as automated scrubs don't work, right?

    They do if you import it using the GUI, as an Unraid pool.

    Link to comment
    On 5/7/2023 at 10:03 AM, JorgeB said:

    There have been some reports that doing a new config, especially with rc2, could cause spin down issues. For some it helped to go back to 6.11.5 and then upgrade again; for others, to go back to v6.11.5, do a new config, and then upgrade back. zfs won't mount with v6.11.5, but you can still try either of those to see if it helps; of course, don't format the disks when they come up unmountable.

    Hello everyone,

    that's exactly how I tried it. The parity rebuild finished yesterday evening. Unfortunately, the spindown still doesn't work. Maybe it also has something to do with the weird mounts. I am attaching new diagnostics and the output of the "df" command. Maybe someone will think of something.
    Should I perhaps recreate the "Cache" pool?

    Kind regards
    yokes

    brutus-diagnostics-20230509-1755.zip Ausgabe df.txt

    Link to comment
    13 minutes ago, Jochen Kklaus said:

    The parity rebuild finished yesterday evening.

    FYI, there was no need to rebuild parity, just check "parity is already valid" after the new config; but if that didn't help, I'm not sure what could.

    Link to comment
    47 minutes ago, JorgeB said:

    They do if you import it using the GUI, as an Unraid pool.

    Nope, when importing a zfs pool after adding a Special Metadata Device, the GUI reports back "Unmountable: Unsupported or no file system". Everything else is the same; the ZFS pool was created by unRAID, and the only thing I did was add the Special device.

    Link to comment
    7 minutes ago, jonpetersathan said:

    Nope, when importing a zfs pool after adding a Special Metadata Device, the GUI reports back "Unmountable: Unsupported or no file system". Everything else is the same; the ZFS pool was created by unRAID, and the only thing I did was add the Special device.

     

    1 hour ago, JorgeB said:

    please create a new thread in the general support forum.

    And post the diagnostics

    Link to comment
    On 5/7/2023 at 10:07 PM, Squid said:

    Nobody thinks of it like that, and it has never actually logged anything on my system when setting the syslog server to the IP or 127.0.0.1. For remote systems it works with no problems.

     

    If I remember right, 127.* doesn't work; you have to use the LAN IP.

    I've run mine like that for years.

     

    I also have these tweaks in config/go:

    # Put syslogs in system/logs (web ui only allows share root)
    cp /etc/rsyslog.conf /etc/rsyslog.conf.orig
    sed -i -e 's/\/mnt\/user\/system\//\/mnt\/user\/system\/logs\//g' /etc/rsyslog.conf
    # name logs by hostname instead of IP
    sed -i -e 's/FROMHOST-IP/FROMHOST/g' /etc/rsyslog.conf
    # Apply changes
    /etc/rc.d/rc.rsyslogd restart

     

    With the corresponding change to logrotate (along with other tweaks) in a user script set to run At Startup of Array, since for some reason they didn't work in config/go.

    #!/bin/bash
    
    # update logrotate with non-default syslog location
    cp /etc/logrotate.d/rsyslog.local /etc/logrotate.d/rsyslog.local.orig
    sed -i -e 's/\/mnt\/user\/system\//\/mnt\/user\/system\/logs\//g' /etc/logrotate.d/rsyslog.local
    
    # add compression
    sed -i -e 's/missingok/missingok\n  compress\n  delaycompress/g' /etc/logrotate.d/rsyslog.local
    
    # make sure log size doesn't exceed 10MB (a 10M setting unintuitively allows up to 11MB)
    sed -i -e 's/size 10M/size 9M/g' /etc/logrotate.d/rsyslog.local
    
    # restart rsyslog again (not sure this is necessary)
    /etc/rc.d/rc.rsyslogd restart
    
    exit 0

     

     

     

    Link to comment

    After upgrading, the "Minimum free space" for all of my shares was set to 51.2%, both for shares assigned to a cache pool and for those that are not. The pop-up help for this field says:

     

    Quote

    Choose a value which is equal or greater than the biggest single file size you intend to copy to the share. Include units KB, MB, GB and TB as appropriate, e.g. 10MB.

     

    This suggests that the value should be a size, not a percentage. Does the help text need to be changed?

     

    What is the behavior of this setting for shares with no "secondary storage" configured? That's not clear from the release notes or help text. 

     

    Why did it default to something like 51.2%? If I understand the function correctly, that is not a sensible value for the system to choose automatically. 

     

     

    Edited by WalkerJ
    Link to comment

    I wanted to report an issue I experienced with v6.12.0-rc5.

     

    In short, 12+ hours after upgrading I lose access to the Web UI and samba shares. I did not check SSH, and I was not able to check the server locally before rolling back to v6.12.0-rc4.1, which has been stable, as have all the other RCs before 5.

     

    Gracefully powering down the server with a press of the power button brings it back; however, the issue repeats itself 12+ hours later.

     

    If anyone has any suggestions on which log I should monitor, please let me know.

     

    Thank You.

    diagnostics-20230503-0805.zip

    Link to comment
    6 hours ago, chris1259 said:

    If anyone has any suggestions on which log I should monitor, please let me know.

    Diags are from after rebooting, so there's not much to see; we'd need them from before rebooting.

    Link to comment
    14 hours ago, jonpetersathan said:

    Nope, when importing a zfs pool after adding a Special Metadata Device, the GUI reports back "Unmountable: Unsupported or no file system"

    I completely forgot that I've made a FAQ entry on how to create and re-import those pools, so take a look there first, especially at the re-import part; if it still doesn't work, create a new post in the general support forum with the diags.

    • Like 1
    Link to comment
    3 hours ago, JorgeB said:

    Diags are from after rebooting, so there's not much to see; we'd need them from before rebooting.

    If I could reach the server via SSH, is there a command I could run that would generate the diags? Thanks.

    Link to comment




