Unraid OS version 6.12.0-rc1 available


    limetech

    The 6.12 release includes initial ZFS support in addition to the usual set of bug fixes, kernel, and package updates.

     

    Please create new topics here in this board to report Bugs or other Issues.

     

    As always, prior to updating, create a backup of your USB flash device:  "Main/Flash/Flash Device Settings" - click "Flash Backup".

     

    Note regarding plugins:  This release upgrades PHP from v7.4 to v8.2.3, necessary for continued PHP security updates. We recommend you upgrade all your plugins before updating the OS, and be aware that any plugin which hasn't been updated recently (with a comment about PHP 8 in its change log) may not work correctly. You may want to uninstall such a plugin until the author has had time to update it.  Plugin authors are aware of the necessary changes and many plugins have already been updated.  If you encounter an issue with a plugin, please create a nice post in the plugin's support topic.

     

    Special thanks to all our beta testers and especially:

     

    @bonienl for his continued refinement and updating of the Dynamix webGUI.

    @Squid for continued refinement of Community Apps and the associated feed.

    @dlandon for continued refinement of the Unassigned Devices plugin.

    @ich777 for continued support of third-party drivers, recommendations for base OS enhancements and beta testing.

    @JorgeB for rigorous testing of the storage subsystem and helping us understand ZFS nuances, of which there are many.

    @SimonF for curating our VM Manager and adding some very nice enhancements.

     

    Thanks to everyone above and our Plugin Authors for identifying and putting up with all the changes which came about from upgrading PHP from v7 to v8.

     

    Finally a big Thank You! to @steini84 who brought ZFS to Unraid via plugin several years ago.

     


    Version 6.12.0-rc1 2023-03-14

    If you created any zpools using 6.12.0-beta5, please Erase those pools and recreate them.

     

    If you revert from 6.12 to 6.11.5 or earlier, you have to force update all your Docker containers and start them manually after downgrading.  This is necessary because of the underlying change to cgroup v2 in 6.12.0-rc1.

    ZFS Pools

    New in this release is the ability to create a ZFS file system in a user-defined pool.  In addition, you may format any data device in the unRAID array with a single-device ZFS file system.

     

    We are splitting full ZFS implementation across two Unraid OS releases.  Initial support in this release includes:

    • Support raid0, mirror, raidz1, raidz2 and raidz3 root profiles. Up to 4 devices in a mirror vdev. Multiple vdev groups.
    • Support removing a single device: if the device is still present in the server, 'wipefs' is used to clear its partition table.
    • Support replacing a single missing device with a new device of the same or larger size.
    • Support pool rename.
    • Pool names must begin with a lowercase letter and only contain lowercase letters, digits, underscores and dashes. Pool names must not end with a digit (see the sketch after this list).
    • Non-root vdevs cannot be configured in this release; however, they can be imported.
    • Pools created on other systems may or may not import depending on how the pool was created. A future update will permit importing pools from any system.
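
    For reference, the naming rule above can be expressed as a single pattern. A minimal sketch in shell (the helper function and pattern are illustrative only, not the webGUI's actual validator):

        # Illustrative check of the pool-naming rule: begins with a lowercase
        # letter; contains only lowercase letters, digits, underscore, and dash;
        # does not end with a digit.
        is_valid_pool_name() {
            echo "$1" | grep -Eq '^[a-z]([a-z0-9_-]*[a-z_-])?$'
        }

        is_valid_pool_name "tank"  && echo "tank: ok"         # accepted
        is_valid_pool_name "pool1" || echo "pool1: rejected"  # ends with a digit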

     

    A ZFS pool has three variables:

    • profile - the root data organization: raid0, mirror, raidz1, raidz2, raidz3
    • width - the number of devices per root vdev
    • groups - the number of root vdevs in the pool

    At the time of ZFS pool creation, the webGUI presents all topology options based on the number of devices assigned to the pool.
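
    For orientation, these three variables map directly onto a standard zpool layout. A rough sketch of the equivalent CLI (the webGUI performs the creation itself; the pool name 'tank' and the device names are hypothetical). Profile raidz1, width 3, groups 2 gives two 3-device raidz1 vdevs:

        # profile=raidz1, width=3, groups=2 -> two 3-device raidz1 root vdevs
        zpool create tank \
            raidz1 /dev/sdb /dev/sdc /dev/sdd \
            raidz1 /dev/sde /dev/sdf /dev/sdg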

     

    Special treatment for root single-vdev mirrors:

    • A single-device ZFS pool can be converted to a multiple-device mirror by adding up to 3 additional devices in one operation.
    • A 2-device mirror can be increased to a 3-device mirror by adding a single device; similarly, a 3-device mirror can be increased to a 4-device mirror by adding a single device (see the sketch below).
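
    In plain ZFS terms, both cases are attach operations. A hedged sketch of what happens under the hood (Unraid drives this through the webGUI; 'tank' and the device names are placeholders):

        # Convert a single-device pool into a 2-way mirror:
        zpool attach tank /dev/sdb /dev/sdc

        # Grow the 2-way mirror to a 3-way mirror the same way:
        zpool attach tank /dev/sdb /dev/sdd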

     

    To add an additional root vdev, you must assign 'width' number of new devices to the pool at the same time. The new vdev will be created with the same 'profile' as the existing vdevs. Additional flexibility in adding/expanding vdevs will be provided in a future update.
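
    As a rough CLI equivalent (again, the webGUI issues this for you; names are placeholders), adding a second raidz1 vdev of width 3 to an existing pool looks like:

        # Append another raidz1 vdev with the same width as the existing one:
        zpool add tank raidz1 /dev/sdh /dev/sdi /dev/sdj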

     

    Pools created with the 'steini84' plugin can be imported as follows: first create a new pool with the number of slots corresponding to the number of devices in the pool to be imported, then assign all the devices to the new pool. Upon array Start the pool should be recognized, though certain zpool topologies may not be recognized (please report them).
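
    If you want to preview what ZFS detects before assigning the devices, an optional, purely diagnostic check can be run from the console (the pool name will be whatever the plugin originally used; 'tank' below is an example):

        # List pools ZFS can detect on attached devices, without importing them:
        zpool import

        # After array Start, verify the imported pool's topology:
        zpool status tank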

     

    Mixed topologies are not supported. For example, a pool with both a mirror root vdev and a raidz root vdev is not recognized.

     

    Autotrim can be configured as "on" or "off" (except for single-device ZFS volumes in the unRAID array).

     

    Compression can be configured as "on" or "off", where "on" selects "lz4". A future update will permit specifying other algorithms/levels.
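
    Both toggles correspond to standard ZFS properties. A minimal sketch of the CLI equivalents (the webGUI sets these for you; 'tank' is a placeholder pool name):

        # Pool-level trim behaviour:
        zpool set autotrim=on tank

        # "on" for compression selects lz4 at the dataset level:
        zfs set compression=lz4 tank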

     

    When creating a new ZFS pool you may choose "zfs - encrypted", which, like other encrypted volumes, applies device-level encryption via LUKS. ZFS native encryption is not supported at this time.

     

    During system boot, the file /etc/modprobe.d/zfs.conf is auto-generated to limit the ZFS ARC to 1/8 of installed memory. If necessary, this can be overridden by creating a custom 'config/modprobe.d/zfs.conf' file. A future update will include the ability to configure the ARC via the webGUI, including auto-adjustment according to memory pressure, e.g., on VM start/stop.
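
    The override file uses ordinary modprobe syntax. For illustration only (the value below assumes a hypothetical 32GiB machine, i.e., 1/8 of RAM = 4GiB; compute your own):

        # config/modprobe.d/zfs.conf on the USB flash device
        # Cap the ZFS ARC at 4 GiB (zfs_arc_max is specified in bytes):
        options zfs zfs_arc_max=4294967296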

    btrfs pools

    Autotrim can be configured as "on" or "off" when used in a pool.

     

    Compression can be configured as "on" or "off", where "on" selects "zstd". A future update will permit specifying other algorithms/levels.

    xfs

    Autotrim can be configured as "on" or "off" when used as a single-slot pool.

    Docker

    • CreateDocker: changed label "Docker Hub URL" to "Registry URL" because GHCR and other container registries are becoming more and more popular.
    • Honor the user's stop time-out setting.
    • Accept images in OCI format.
    • Add option to disable readmore-js on the container table.

    VM Manager

    If you enable copy/paste for virtual consoles, you need to install additional software on the client in addition to the QEMU agent, if that has been installed. Here is the location of spice-vdagent for both Windows and Linux. Note that the copy/paste function will not work with the web Spice viewer; you need to use virt-viewer.

    • Add Serial option to vdisk.
    • Fix Spice bug for users with non-standard GUI ports defined.
    • OVMF for QEMU: version stable202302.
    • Fix bus text.
    • Enable copy/paste option for virtual consoles.
    • Update memory backup processing for virtiofs.
    • Fix lockup when no VMs are present.
    • Add support for the rtl8139 network model.
    • Fix translation omission.
    • Add lock/unlock for sortable items.
    • Fix Spice mouse when copy/paste is enabled.

    Dashboard

    The webGUI Dashboard has been redesigned, and it is now possible to move elements (tiles) up and down and between columns. This allows the user to organize the tiles in any way they desire.  There is a small "lock" icon on the menu bar which must be clicked to enable this function.

     

    Note: The "lock" icon also appears on the Docker and VM pages and must be clicked to rearrange the startup order.

    Linux kernel

    • version 6.1.19
    • md/unraid: version 2.9.27
    • CONFIG_FS_DAX: File system based Direct Access (DAX) support
    • CONFIG_VIRTIO_FS: Virtio Filesystem
    • CONFIG_ZONE_DEVICE: Device memory (pmem, HMM, etc...) hotplug support
    • CONFIG_USBIP_HOST: Host driver
    • CONFIG_INTEL_MEI: Intel Management Engine Interface
    • CONFIG_INTEL_MEI_ME: ME Enabled Intel Chipsets
    • CONFIG_INTEL_MEI_GSC: Intel MEI GSC embedded device
    • CONFIG_INTEL_MEI_PXP: Intel PXP services of ME Interface
    • CONFIG_INTEL_MEI_HDCP: Intel HDCP2.2 services of ME Interface
    • CONFIG_DRM_I915_PXP: Enable Intel PXP support
    • CONFIG_SCSI_FC_ATTRS: FiberChannel Transport Attributes
    • CONFIG_FUSION_SPI: Fusion MPT ScsiHost drivers for SPI
    • CONFIG_FUSION_FC: Fusion MPT ScsiHost drivers for FC
    • CONFIG_FUSION_CTL: Fusion MPT misc device (ioctl) driver
    • CONFIG_FUSION_LOGGING: Fusion MPT logging facility

    Base Distro

    • aaa_glibc-solibs: version 2.37
    • adwaita-icon-theme: version 43
    • at-spi2-core: version 2.46.0
    • bash: version 5.2.015
    • bind: version 9.18.12
    • btrfs-progs: version 6.2.1
    • ca-certificates: version 20221205
    • cryptsetup: version 2.6.1
    • curl: version 7.88.1
    • dbus: version 1.14.6
    • diffutils: version 3.9
    • dnsmasq: version 2.89
    • docker: version 20.10.23
    • e2fsprogs: version 1.47.0
    • encodings: version 1.0.7
    • file: version 5.44
    • freetype: version 2.13.0
    • fuse3: version 3.12.0
    • gawk: version 5.2.1
    • git: version 2.39.2
    • glib2: version 2.74.6
    • glibc: version 2.37
    • glibc-zoneinfo: version 2022g
    • gnutls: version 3.7.9
    • gptfdisk: version 1.0.9
    • gtk+3: version 3.24.37
    • harfbuzz: version 7.1.0
    • htop: version 3.2.2
    • iproute2: version 6.2.0
    • iptables: version 1.8.9
    • iputils: version 20221126
    • less: version 612
    • libICE: version 1.1.1
    • libSM: version 1.2.4
    • libX11: version 1.8.4
    • libXau: version 1.0.11
    • libXcomposite: version 0.4.6
    • libXdamage: version 1.1.6
    • libXdmcp: version 1.1.4
    • libXpm: version 3.5.15
    • libXrandr: version 1.5.3
    • libXres: version 1.2.2
    • libXxf86dga: version 1.1.6
    • libarchive: version 3.6.2
    • libdrm: version 2.4.115
    • libfontenc: version 1.1.7
    • libglvnd: version 1.6.0
    • libjpeg-turbo: version 2.1.5.1
    • libpcap: version 1.10.3
    • libpng: version 1.6.39
    • libpsl: version 0.21.2
    • libwebp: version 1.3.0
    • libxkbcommon: version 1.5.0
    • libxkbfile: version 1.1.2
    • libxshmfence: version 1.3.2
    • lmdb: version 0.9.30
    • logrotate: version 3.21.0
    • lsof: version 4.98.0
    • lz4: version 1.9.4
    • lzlib: version 1.13
    • mc: version 4.8.29
    • mcelog: version 191
    • mpfr: version 4.2.0
    • nano: version 7.2
    • ncurses: version 6.4
    • nginx: version 1.23.3
    • nghttp2: version 1.52.0
    • openssh: version 9.2p1
    • openssl: version 1.1.1t
    • openssl-solibs: version 1.1.1t
    • openzfs: version 2.1.9
    • pango: version 1.50.14
    • pciutils: version 3.9.0
    • pcre2: version 10.42
    • php: version 8.2.3
    • php-libvirt: version 0.5.7
    • php-markdown: version 2.0.0
    • samba: version 4.17.4
    • sqlite: version 3.41.0
    • sudo: version 1.9.13p2
    • sysstat: version 12.7.2
    • tdb: version 1.4.8
    • tevent: version 0.14.1
    • traceroute: version 2.1.2
    • transset: version 1.0.3
    • tree: version 2.1.0
    • usbutils: version 015
    • xcb-util: version 0.4.1
    • xdriinfo: version 1.0.7
    • xf86-video-vesa: version 2.6.0
    • xfsprogs: version 5.13.0
    • xhost: version 1.0.9
    • xinit: version 1.4.2
    • xkbcomp: version 1.4.6
    • xkeyboard-config: version 2.38
    • xorg-server: version 21.1.7
    • xprop: version 1.2.6
    • xrandr: version 1.5.2
    • xset: version 1.2.5
    • xterm: version 379
    • xz: version 5.4.1
    • zstd: version 1.5.4

    Misc

    • cgroup2 now the default
    • do not mount loopback images using directio
    • Patch upgradepkg to prevent replacing an existing package with an older version.
    • NFS: enable UDP transport
    • emhttp: fix cache pool (null) syslog strings
    • emhttp: fix cache pool display wrong device size for selected replacement device
    • mover: fixed bug: improper handling of symlinks
    • shfs: ignore top-level hidden directories (names beginning with '.')
    • wireguard: add SSL support for WG tunnel IP addresses (myunraid.net wildcard certs only)
    • webgui: support PHP8, increase PHP max memory from 128M to 256M
    • webgui: ManagementAccess: Disable Provision/Renew/Upgrade buttons when no IP on eth0
    • webgui: ManagementAccess: Support wireguard local IP addresses in combination with myservers.unraid.net SSL cert
    • webgui: Move "view" icon on Main and Shares page to the left
    • webgui: Dashboard: fix regression error in "select case"
    • webgui: Dashboard: make items moveable between columns
    • webgui: Keep dismissed banners hidden for a month
    • webgui: Dashboard: API for adding custom tiles
    • webgui: Dashboard: rearrange processor information
    • webgui: Dashboard: rearrange UPS info
    • webgui: Dashboard: rearrange memory info
    • webgui: Dashboard: VPN header rearrangement
    • webgui: Dashboard: header rearrangements
    • webgui: Add jqueryUI touch punch for mobile devices
    • webgui: Changed ID to CLASS for elements occurring more than once
    • webgui: Make header in white and black themes scrollable
      • When more items are present than screen space, the user can now scroll through them (previously these items were invisible)
    • webgui: Dashboard and Docker: introduce lock button for sortable items
      • By default sortable items are locked, which allows mobile devices to scroll the page. Upon request items can be made sortable
    • webgui: Users: add icon to title bar
    • webgui: Tools: new function -> PHP Settings
      • View PHP info
      • Configure error reporting
      • Open LOG to see errors in real-time
    • webgui: System info: fix reading inactive ports
    • webgui: Plugin: Include the actual command being executed
    • webgui: System info: cache enhancement
    • webgui: System info: memory enhancement
    • webgui: DeviceInfo: disable buttons when erase operation is running
    • webgui: Docker: filetree corrections
    • webgui: Fixed: Dashboard: show heat alarm per pool
    • webgui: Notifications: revised operation
      • Autoclose new notifications after 3 seconds
      • Fix notifications reappearing after closure
    • webgui: DeviceList: add FS type in offline state
    • webgui: Add notification agent for Bark
    • webgui: Main: hide browse icon when disk is not mounted
    • webgui: Diagnostics: add additional btrfs and zfs info
    • webgui: Dashboard: add ZFS memory usage
    • webgui: Revised New Permissions
      • Select either disks or shares (not both)
    • webgui: Add testparm to diagnostics
    • webgui: Support new UD reserved mount point of /mnt/addons



    User Feedback

    Recommended Comments



    Found the offending plugin: it is the GPU Statistics plugin. I removed it, and that corrected the display issue; when I reinstalled it, the problem was back. So I have temporarily decided to remove it.

     


    Link to comment
    11 minutes ago, SimonF said:

    it is awaiting merge of the PR by the owner

    Thanks.  I had thought that this had already been handled. Since the current release crashes the dashboard, it is also being marked as incompatible.

     

    Once the PR is merged and released, and I'm notified of this, the incompatibility will be lifted.

    Link to comment

    What are the resource requirements for Unraid ZFS?   Is it unrealistic to expect my old Intel Core 2 Duo with 4GB RAM to run a ZFS pool?

     

    I am assuming a current XFS/btrfs array/pool cannot be converted to a ZFS pool without data loss?  Current drives have to be wiped and reformatted and the data reloaded, right?

    Edited by hendrst1
    Link to comment
    7 hours ago, hendrst1 said:

    I am assuming a current XFS/btrfs array/pool cannot be converted to a ZFS pool without data loss?  Current drives have to be wiped and reformatted and the data reloaded, right?

    Yes, the disks need to be cleared and reformatted to switch filesystems. I'll be moving data off the pools to the array, disabling the pools, and then reconstructing them. For array drives, the best bet is to move data off the disks one by one and replace them, or rebuild the whole array and copy the data back from backup. Not fun, like any major storage change.

    Link to comment

    Right, I think I already knew the answer to that question.  Sorry, I shouldn't have asked it.  I am more interested in the resource requirements, but I suspect I know the answer to that one as well: more resources than my old PC has.   I guess I will not be moving to ZFS until/unless I get a new machine.

     

    For a few years I ran OpenZFS on macOS and came to know it quite well.  It was quite the cat's meow but had issues with macOS.  However, I think it will run quite well under Unraid.  I look forward to trying it out one day.

    Link to comment

    4GB is already the minimum requirement for Unraid in general, so yeah, it's not going to be great if you add ZFS unless you do nothing but storage.

    Link to comment

    Is the button to collapse categories (like the list of threads under the CPU, etc.) entirely gone for everyone else, or is it just me?

    All I see is the settings cog and the "X" to remove.

    This is on the dashboard and all other pages. Perhaps it is a plugin, but I have already removed everything that was incompatible.

    Edited by LordShad0w
    Link to comment
    On 3/17/2023 at 9:10 AM, JorgeB said:

    Note that for now you can only offline one device at a time even if the pool has more redundancy, like a raidz2 pool or striped mirrors.

    Just to be clear (cos I'm just about to try it), can I only take one HDD offline at a time? I'm trying to copy data onto a fresh ZFS array and it's taken 12 hours to copy 200GB, so ~4MB/s. I'm copying directly from another NAS that I normally get 100MB/s read speed from, and both servers are LAG-bonded. I'm thinking of trying to disable the 2x parity drives, but that won't work if I can only take down one.

    Link to comment
    6 minutes ago, infidel said:

    Just to be clear (cos I'm just about to try it), can I only take one HDD offline at a time? I'm trying to copy data onto a fresh ZFS array and it's taken 12 hours to copy 200GB, so ~4MB/s. I'm copying directly from another NAS that I normally get 100MB/s read speed from, and both servers are LAG-bonded. I'm thinking of trying to disable the 2x parity drives, but that won't work if I can only take down one.

    That is for a pool; for ZFS drives on the array you can have as many disabled data drives as there are parity drives, and of course you can also disable parity (one or both). Also note that I've found some write-performance issues with ZFS when used on array data drives; it can vary with the disks used and other things, and still needs more investigation.

    Link to comment
    34 minutes ago, JorgeB said:

    That is for a pool; for ZFS drives on the array you can have as many disabled data drives as there are parity drives, and of course you can also disable parity (one or both). Also note that I've found some write-performance issues with ZFS when used on array data drives; it can vary with the disks used and other things, and still needs more investigation.

    Ah, I thought I HAD created a pool, seeing as "Add pool" apparently creates a cache and not a pool (at least, I don't see any raidz2 options there). Sorry, I've only been using Unraid a couple of days as a potential move from TrueNAS Scale. Native ZFS support is light on documentation, being so new, so can you create ZFS pools from the UI, or is it CLI-only for now?

    Edited by infidel
    Link to comment
    17 minutes ago, infidel said:

    seeing as "Add pool" apparently creates a cache and not a pool

    Add pool creates a pool, but because you were talking about disabling the parity drives I assumed the array.

     

    17 minutes ago, infidel said:

    at least, I don't see any raidz2 options there

    After assigning the pool devices and before starting the array, click on the first one and change the appropriate filesystem options.

     

    Link to comment
    22 minutes ago, JorgeB said:

    Add pool creates a pool, but because you were talking about disabling the parity drives I assumed the array.

    Yeah, I had an array but wanted a pool.

     

    Quote

    After assigning the pool devices and before starting the array, click on the first one and change the appropriate filesystem options

    OK, did that. I unassigned all 8 drives from the array, created an 8-slot pool, and changed the filesystem of the first one to raidz2, 1 group of 8 devices. I don't see anything on that page to init the pool. The Main page now shows 8 missing array drives (I can't select any fewer than 8 slots), and 8 pools (only the first of which is ZFS). If I unassign a drive from the extra pools, the first pool shrinks. The array is stopped with "Invalid configuration" and "Too many wrong and/or missing disks!", with no messages about what I can do to fix that. Falls short of intuitive so far! If you're still willing to help, we can move this to Discord if it's easier.

    Link to comment
    3 minutes ago, infidel said:

    The Main page now shows 8 missing array drives

    You have to do a new config to clear the array (Tools -> New config). Also note that for now Unraid still requires at least one data device assigned to the array; this can be a spare flash drive, for example.

    Link to comment
    1 hour ago, JorgeB said:

    You have to do a new config

    OK, that's got me a bit further! The pool is now up and running. After a bit of playing, creating datasets from the Share page seems to fail silently*, but it works if I use zfs create, at which point I can manage it from Shares. Transfer speed is back up to what I'd expect. Thanks for your help!

     

    *running RC2 now

    Edited by infidel
    Link to comment
    1 minute ago, infidel said:

    creating datasets from the Share page seems to fail silently

    Any new share created with cache=yes or cache=only on that pool should create a new dataset; if it doesn't, please create a bug report and don't forget to post the diagnostics.
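
    For anyone hitting this in the meantime, the manual workaround mentioned above is a plain dataset create from the console (the pool and share names below are placeholders):

        # Manually create the dataset that a share named "media" on pool "tank" would use:
        zfs create tank/media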

    Link to comment

    Is it now possible to use ZFS for an array drive? Can I have multiple filesystems mixed within one array? Let's say... two disks with XFS and three disks with ZFS?

     

    Is there any benefit to using ZFS for an array drive instead of XFS?

     

    Thanks!

    Link to comment
    23 minutes ago, enJOyIT said:

    Is it now possible to use ZFS for an array drive?

    Yes, if you are running Unraid 6.12-rc2.

    24 minutes ago, enJOyIT said:

    Can I have multiple filesystems mixed within one array? Let's say... two disks with XFS and three disks with ZFS?

    Yes.  Each disk is a self-contained file system and can be any of the types supported by Unraid.

    24 minutes ago, enJOyIT said:

    Is there any benefit to using ZFS for an array drive instead of XFS?

    I would think that the main benefit is data corruption being detected in real time. You get similar detection if using btrfs.

    Link to comment
    Just now, itimpi said:

    Yes, if you are running Unraid 6.12-rc2.

    Yes.  Each disk is a self-contained file system and can be any of the types supported by Unraid.

    I would think that the main benefit is data corruption being detected in real time. You get similar detection if using btrfs.

     

    Thanks for replying!

     

    To migrate each of my single disks to ZFS, I have to copy via the CLI from diskXX (XFS) to diskXX (ZFS), right? Would parity recognize these copy actions?

    Link to comment
    35 minutes ago, enJOyIT said:

     

    Thanks for replying!

     

    To migrate each of my single disks to ZFS, I have to copy via the CLI from diskXX (XFS) to diskXX (ZFS), right? Would parity recognize these copy actions?

     

    Yes, if you do it properly.  The proper procedure has been documented and can be found here:

     

    https://wiki.unraid.net/index.php/File_System_Conversion#Mirror_each_disk_with_rsync.2C_preserving_parity

     

    You want to use the "Mirror each disk with rsync, preserving parity" method.   I did both of my servers a few years back.  Read the entire procedure first so you know exactly what you will be doing.  (I made a table with step-by-step instructions and the actual disk identification numbers for each step, and checked off each step as it was completed.  Some steps take many hours to complete.  Be prepared, as it will take two to three hours per TB of data!)
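
    For orientation, the heart of that wiki method is an rsync mirror from the source disk to the freshly formatted disk, done through the parity-protected /mnt/diskN mount points. This is a hedged sketch only; read the wiki procedure for the exact flags and ordering before touching real data (the disk numbers are placeholders):

        # Mirror disk1 (xfs) onto the newly formatted disk2 (zfs), preserving
        # attributes; writing via /mnt/diskN keeps parity in sync throughout:
        rsync -avPX /mnt/disk1/ /mnt/disk2/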

    Link to comment

    Ok, maybe I'll do it like this...

    Add a new drive,

    format it to ZFS,

    copy from the XFS drive to this new drive via mc (/mnt/driveXX to /mnt/driveXY),

    and so on... until the last drive is switched to ZFS.

    Then rebuild the parity (two-drive parity).

    I think this would be faster?

    Link to comment




