• Unraid OS version 6.12.0-rc2 available


    limetech

    Please refer to the 6.12.0-rc1 topic for a general overview.

     


    Version 6.12.0-rc2 2023-03-18

    Bug fixes

• networking: fix nginx not recognizing IP addresses from slow DHCP servers
    • VM Manager: let page load even when PCI devices appear missing or are misassigned
    • webgui: Dashboard fixes:
      • lock the Dashboard completely: Editing/moving only becomes possible when unlocking the page
      • An empty column is refilled when the respective tiles are made visible again, no need to reset everything
• added a visual "move indicator" on the Docker and VM pages, to make it clearer that rows can now be moved.
      • change cursor shape when moving is enabled
      • use tile title as index
    • webgui: fix issue displaying Attributes when temperature display set to Fahrenheit
    • zfs: fixed issue importing ZFS pools with additional top-level sections besides the root section

    ZFS Pools

    • Add scheduled trimming of ZFS pools.
    • Replace Scrub button with Clear button when pool is in error state.

    Base Distro

    • php: version 8.2.4

    Linux kernel

    • version 6.1.20
    • CONFIG_X86_AMD_PSTATE: AMD Processor P-State driver

    Misc

    • save current PCI bus/device information in file '/boot/previous/hardware' upon Unraid OS upgrade.



    User Feedback

    Recommended Comments



    15 hours ago, Bizquick said:

If I delete snapshots, why do I still see the count here? The totals don't seem to match up.

    You'll need to ask that in the existing ZFS master plugin support thread.

    Link to comment

I haven't seen anyone mention the default recordsize being used on creation of a ZFS dataset this release. It's using the default recordsize=128k, but IMO, based on the use cases I suspect the vast majority of users here fall into, a default of recordsize=1M would be much better, especially if we're not going to get a way to specify that option during creation.

     

    Many experts on ZFS including Jim Salter recommend this recordsize for almost all use cases outside of VM and database storage at this point. See a bit of discussion on this from "mercenary_sysadmin" (Jim Salter) here: 

     

    Link to comment
    41 minutes ago, RaidRascal said:

I haven't seen anyone mention the default recordsize being used on creation of a ZFS dataset this release. It's using the default recordsize=128k, but IMO, based on the use cases I suspect the vast majority of users here fall into, a default of recordsize=1M would be much better, especially if we're not going to get a way to specify that option during creation.

Yeah, I'm manually changing mine to 1M. With raidz especially I see better performance, though not with mirrors/single devices. In the future there should be an option to change it using the GUI.
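For reference, a minimal sketch of doing that manually from the console (the pool/dataset names below are only examples, and recordsize only applies to files written after the change):

    # check the current recordsize of a dataset (example names)
    zfs get recordsize cache/appdata

    # set 1M; existing files keep the recordsize they were written with
    zfs set recordsize=1M cache/appdata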

    Link to comment
    On 4/2/2023 at 3:38 AM, Lightarrow said:

Thank you, this worked for me. Now I see the driver being used (I have a 5900X) when running the cpufreq-info command, and I saw drops in temps right away.
     

     

Great, did you notice any change in idle power consumption?

    Following upgrade to 6.12.0-rc2 from 6.11.5 I have a 5W increase with the same setup and configuration (powertop, ...).

    Link to comment

    Hello!

     

I have an array with 2 HDDs and 2 parity drives; the HDDs' file system is encrypted XFS, and I have a ZFS cache pool using raidz1, also encrypted (same key as the HDDs).

When I start the array (providing the key), it stops again immediately with the error message "wrong key". When I reboot in safe mode, everything works as expected.

    Yesterday, I had the same problem, and after some time I found this thread on the forum, saying that this error was due to the MyServer plugin.

     

     

After uninstalling every plugin in normal mode, the array worked again. I added some plugins again (but not the MyServer plugin) and today the error is back...

     

    Do I need to go through the complete cycle "uninstall everything - reboot - install one plugin - reboot - start array" again and again to find the offending one, or does anyone have an idea what's going wrong here?

     

    Any help would be appreciated, thx!

     

    unsershome-diagnostics-20230404-1133.zip

    Link to comment
    19 minutes ago, Woosah said:

After uninstalling every plugin in normal mode, the array worked again. I added some plugins again (but not the MyServer plugin) and today the error is back...

The diags you posted have the pool mounted; post new ones after the problem occurs. Though if it always works in safe mode, it does suggest a plugin problem.

    Link to comment
    1 hour ago, JorgeB said:

The diags you posted have the pool mounted; post new ones after the problem occurs. Though if it always works in safe mode, it does suggest a plugin problem.

     

I have started with no plugins installed and then added them one by one, and I think I have found the culprit(s): the error starts after adding the "unassigned devices" plugins, but I do not know which of the three it is, or whether it depends on a combination of these. Here are the diagnostics directly from after the error's first occurrence.

     

    Thank you!

     

    unsershome-diagnostics-20230404-1309.zip

    Edited by Woosah
    Typo
    Link to comment
    3 minutes ago, Woosah said:

    Here are the diagnostics directly from after the error's first occurrence.

Still not seeing any issues with the pool, but I do see this as the last line:

     

    Apr  4 13:09:21 UnsersHome emhttpd: stale configuration

     

Are you using Firefox? If yes, reboot first, then try a different browser.

    Link to comment
    19 minutes ago, JorgeB said:

Still not seeing any issues with the pool, but I do see this as the last line:

     

    Apr  4 13:09:21 UnsersHome emhttpd: stale configuration

     

Are you using Firefox? If yes, reboot first, then try a different browser.

     

    Okay, that's weird. After reboot, I used Chrome instead of Firefox for the webGUI, added all three unassigned devices plugins, started the array, no error.

     

    Why? How?!?

     

    Link to comment
    1 minute ago, Woosah said:

    Why? How?!?

There are known issues with Firefox and Unraid. It's not clear, at least not to me, whether Firefox or Unraid is the problem, or maybe it's a plugin, but for now it's best to use a different browser.

    Link to comment
    1 minute ago, JorgeB said:

There are known issues with Firefox and Unraid. It's not clear, at least not to me, whether Firefox or Unraid is the problem, or maybe it's a plugin, but for now it's best to use a different browser.

     

    Good to know! Thanks a lot for the help, that was quite unnerving...! 🙂

    Link to comment

I understand that special vdevs are not supported via the GUI; however, I would really like to add a metadata vdev as I work with tons of tiny files. Can I create the zpool using the CLI and use it in Unraid somehow? I just don't want a future update to break my pool.

    Link to comment
    On 3/30/2023 at 3:20 AM, JorgeB said:

    Did you do a new config after updating to rc2? There's currently a known issue affecting spin down after doing that.

    I'm so glad I saw this, I was losing my mind. Do you have any more information on how to fix this? Do I just need to make a new config and re-assign all of my disks?

     

     

Edit: Never mind, I see now that the new config is what caused the issue. I can't go back to 6.11.5 because I have ZFS pools. I guess I'll just wait and hope it's fixed in RC3.

    Edited by Octalbush
    Link to comment
    56 minutes ago, Octalbush said:

I understand that special vdevs are not supported via the GUI; however, I would really like to add a metadata vdev as I work with tons of tiny files. Can I create the zpool using the CLI and use it in Unraid somehow?

Yes you can; you'll need to re-import the pool once that's done.
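As a rough sketch of what that could look like from the console (pool name and device paths are only placeholders, and a special vdev should normally be mirrored since losing it means losing the pool):

    # create a raidz1 pool with a mirrored special (metadata) vdev
    zpool create tank raidz1 /dev/sdb /dev/sdc /dev/sdd special mirror /dev/sde /dev/sdf

    # export it so it can be (re-)imported from the Unraid GUI
    zpool export tank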

     

    51 minutes ago, Octalbush said:

    Do you have any more information on how to fix this?

Most affected users reported that just going back to v6.11.5 makes spin down start working again; you can then upgrade back and it should remain working (as long as you don't do another new config). There was one report that a new config with v6.11.5 was needed to make it work, and then he could upgrade back.

    Link to comment

Are there any plans to address the default SPA slop space ZFS reserves? By default the setting is 5, which equals 1/32 the size of the pool; this affects even single-drive "pools" like those being used in an Unraid array. I verified RC2 is using 5 as the default.

     

In recent releases of ZFS the maximum space allowed to be used by this value is limited to 128GB by default, but that is still a substantial amount of space IMO, especially if it is spread over many drives in an array.
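To put rough numbers on it (using the defaults described above, where the reservation is pool size divided by 2^spa_slop_shift, clamped to 128GB in recent releases):

    4TB single-drive pool, shift=5:  4TB / 32  = 125GB reserved
    4TB single-drive pool, shift=6:  4TB / 64  = ~62GB reserved
    20TB pool, shift=5:              20TB / 32 = 625GB, so the 128GB cap applies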

     

    It can be easily checked/modified temporarily in real time with:

     

    cat /sys/module/zfs/parameters/spa_slop_shift
    
    or
    
    echo "some number" > /sys/module/zfs/parameters/spa_slop_shift
    

     

The change takes effect immediately, and the usable space updates in the webUI nearly instantly.

     

    Link to official docs on this: https://openzfs.github.io/openzfs-docs/Performance and Tuning/Module Parameters.html#spa-slop-shift

     

    Link to a much better explanation of how the setting works: https://unix.stackexchange.com/questions/582005/zpool-list-vs-zfs-list-why-free-space-is-10x-different/582391#582391

     

    I bring this up because searching for this on the forums here brings up literally nothing. This leads me to believe most people who hop on the zfs bandwagon will not know about this if it isn't exposed and explained through the web interface and will be very confused why their drives are suddenly much smaller than before.

     

    Obviously bad things can happen in ZFS land if a filesystem is filled to 100% and there isn't enough slop space left to help out so I know this is a tricky topic.

    Link to comment
    2 hours ago, RaidRascal said:

    I know this is a tricky topic.

     

Probably best to start a dedicated thread for this topic; once a new release comes out, the old release threads are largely ignored by everyone.

    Link to comment
    15 minutes ago, ljm42 said:

     

Probably best to start a dedicated thread for this topic; once a new release comes out, the old release threads are largely ignored by everyone.

    This is the rc2 thread... Is there an rc3 thread I'm not seeing?

    Link to comment
    2 hours ago, Octalbush said:

    This is the rc2 thread... Is there an rc3 thread I'm not seeing?

He means a thread about this specific issue, like:

       [6.12 RC2] ZFS default SPA slop space

     

    Discussing many issues in the release thread is not the best way to have good follow up.

    Link to comment
    10 hours ago, RaidRascal said:

    Obviously bad things can happen in ZFS land if a filesystem is filled to 100% and there isn't enough slop space left to help out so I know this is a tricky topic.

Thanks for bringing this up. I was aware of the reserved space but not that it could be tuned. I think 6* might be a better default for Unraid, though people need to be careful not to completely fill up the filesystem; it's also good that it can be changed at any time.

     

For anyone wanting to do it now, it can be done the same way the ARC size is set; create the file:

    /boot/config/modprobe.d/zfs.conf

    with

    options zfs spa_slop_shift=6

     

    If you are already using zfs.conf for ARC control just add the new setting, e.g.:

    options zfs zfs_arc_max=67060137984 spa_slop_shift=6

    Then reboot.

     

* It's kind of a shame that it's not possible to set this per pool; I think 6 is a good default for large array disks or pools, but possibly 5 would be better for small single-device pools, for example, in case the user lets the pool fill up.

    Link to comment

The Appdata Backup plugin is not supported in 6.12-rc2, and Fix Common Problems asked me to upgrade it. There is no plugin to update, though. I deleted the plugin and looked for it in the Apps store. Nowhere to be found.

    Link to comment
    On 3/30/2023 at 10:42 AM, hot22shot said:

     

    You have to pass amd_pstate=passive at boot to enable it. Just tested it myself.

Me being a Linux noob, how do I do this in Unraid?

    Link to comment

I really like how the ZFS array support is coming along in this release. I just wonder how much we can expect to see in the GUI for ZFS? Like whether we will get snapshot and scrub scheduling, or if what we have right now is all we're going to get.

But currently I would like to see a little bit more, like being able to make the main array a raidz1 or raidz2 pool.

     

    Link to comment
    1 hour ago, Bizquick said:

But currently I would like to see a little bit more, like being able to make the main array a raidz1 or raidz2 pool.

    This is not going to happen in the 6.12 release. 

    Link to comment
    3 hours ago, frodr said:

The Appdata Backup plugin is not supported in 6.12-rc2, and Fix Common Problems asked me to upgrade it. There is no plugin to update, though. I deleted the plugin and looked for it in the Apps store. Nowhere to be found.

It's marked as being incompatible with 6.12 (although I have no trouble with it). There is a manual install link in the support thread for the plugin for a replacement beta version.

     

You can install the deprecated one in 6.12 by enabling "show incompatible and deprecated apps" in CA's settings.

     

    Link to comment
    On 4/4/2023 at 8:21 AM, hot22shot said:

    Following upgrade to 6.12.0-rc2 from 6.11.5 I have a 5W increase with the same setup and configuration (powertop, ...).

     

Got my 5W back. PCIe ACS override was set to disabled; I put it back to "both" so that powertop could do its magic.

     

    2 hours ago, Koenig said:

Me being a Linux noob, how do I do this in Unraid?

     

It has to be added to the default entry in your syslinux.cfg. You can edit it by clicking on your flash drive in the Main dashboard.
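For example, a rough sketch of what the default entry ends up looking like (the stock entry is usually as below, but your existing append line may differ, so just add the parameter to whatever is already there):

    label Unraid OS
      menu default
      kernel /bzimage
      append amd_pstate=passive initrd=/bzroot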

    Edited by hot22shot
    Link to comment




