• Unraid OS version 6.12.0-rc2 available


    limetech

    Please refer to the 6.12.0-rc1 topic for a general overview.

     


    Version 6.12.0-rc2 2023-03-18

    Bug fixes

    • networking: fix nginx not recognizing the IP address assigned by slow DHCP servers
    • VM Manager: let page load even when PCI devices appear missing or are misassigned
    • webgui: Dashboard fixes:
      • lock the Dashboard completely: editing/moving is only possible after unlocking the page
      • an empty column is refilled when its tiles are made visible again; no need to reset everything
      • added a visual "move indicator" on the Docker and VM pages to make it clearer that rows can now be moved
      • change cursor shape when moving is enabled
      • use tile title as index
    • webgui: fix issue displaying Attributes when temperature display set to Fahrenheit
    • zfs: fixed issue importing ZFS pools with additional top-level datasets besides the root dataset

    ZFS Pools

    • Add scheduled trimming of ZFS pools (a manual command sketch follows this list).
    • Replace Scrub button with Clear button when pool is in error state.
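
    Scheduled trimming presumably drives the standard OpenZFS trim facility; a minimal manual sketch (pool name is a placeholder):

    # manually start a TRIM of all eligible devices in a pool
    zpool trim poolname
    # check TRIM progress per device
    zpool status -t poolname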

    Base Distro

    • php: version 8.2.4

    Linux kernel

    • version 6.1.20
    • CONFIG_X86_AMD_PSTATE: AMD Processor P-State driver

    Misc

    • save current PCI bus/device information in file '/boot/previous/hardware' upon Unraid OS upgrade.



    User Feedback

    Recommended Comments



    11 minutes ago, KarlMeyer said:

    Any word on special vdev (Metadata Special Device) support?

    It's supported, but you still can't create one using the GUI. You can import an existing pool that uses one, or manually add one to an existing Unraid pool and then re-import it.
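
    For anyone comfortable with the command line, adding one manually could look like the sketch below (pool name and device paths are placeholders; a special vdev should generally be mirrored, since losing it makes the whole pool unreadable):

    # add a mirrored special (metadata) vdev to an existing pool
    zpool add poolname special mirror /dev/sdX1 /dev/sdY1
    # verify the new vdev is attached
    zpool status poolname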


    I have a ZFS pool (6 x 10TB RAIDZ1) that was created on 6.10.x using the old plugin. It has one dataset. Pool name animzfs, dataset Media. On 6.11.5, the pool mounts fine and the data is all accessible. I've tried upgrading to 6.12 RC2 three times now and my pool won't import properly.

     

    It creates the mountpoint and seems to mount the dataset, but no files/folders are visible when I browse to /mnt/animzfs/Media. There is an error during the mount that reports the pool/dataset is already mounted. What's odd is that the drive usage appears to be OK, but I can't see any of the folders or files.

     

    On 6.11.5 it's seen properly and the data is all visible - just a single 'TV' folder in the root of the dataset, with lots of subfolders and files. Permissions are listed as 99:100 for all folders/files when checked under 6.11.5. I've created a bug report and @JorgeB has thankfully been assisting me. The URL is here:

     

     

    I do have all the data on the pool backed up, so worst case is I do the upgrade to 6.12 RC2 and then destroy the old pool, re-create it and then restore my data from my backups. Alas that's about 35TB of data so it'll take a couple of days. I'd like to try and figure it out rather than destroy and re-create the pool, but I'm not making much headway.

     

    If someone using 6.12 RC2 has successfully imported a pool from the old plugin, could you please share the results of the command:

     

    zfs get all

     

    I'm trying to figure out if there's a way to re-organize the pool under 6.11.5 so that it will import properly under 6.12 RC2 or later. If anyone can supply me with the output of that command, hopefully it may reveal the issue that's preventing me from being able to see the data.
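
    If a full `zfs get all` dump is too long to post, the mount-related properties alone may be enough to spot a difference; a minimal sketch using the pool name from this thread:

    # show mount-related properties for the pool and all its datasets
    zfs get -r mountpoint,canmount,mounted,overlay animzfs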

     

    Thanks in advance!

     

     

    5 minutes ago, AgentXXL said:

    Thanks in advance!

    You didn't reply to my last post in the bug report thread.

    11 minutes ago, JorgeB said:

    You didn't reply to my last post in the bug report thread.

     

    I haven't tried again because it's pretty apparent it will give me the same result. Plus I've had other more important issues to work on, like replacing the air conditioner for the server room.

     

    As both of us expect that manually mounting with the same commands that unRAID uses will fail, I'd like to find out what's set differently so that a single mount command works. That's why I want to compare the output of `zfs get all` with someone who's been able to successfully import a pool created with the old plugin. I tried importing it under Ubuntu a few months ago and it had no problems with seeing the data.

     

    Why does unRAID do two mount commands? When I mounted it under Ubuntu it was a single command, and when I tried the same thing under unRAID on 6.12 with a single mount command, it would also see the TV folder and all sub-folders and files.

     

    Anyhow, I'm now ready to try it again under 6.12 RC2, but I'd first like to do that comparison to see if my pool is organized differently.

    Just now, AgentXXL said:

    Why does unRAID do two mount commands?

    Because Unraid first sets the mountpoint for the root dataset, in case the pool comes with a different mountpoint; then it mounts the root dataset only, and only then are all other datasets mounted. Whatever the problem with your pool is, it's not just because it was created with the plugin, since other users don't have issues. And I would still like to see that output as asked, because if the dataset wasn't mounted before, and Unraid only mounts the root dataset first, it doesn't make sense that it then complains it was already mounted.
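
    A sketch of that sequence with standard OpenZFS commands (pool name is a placeholder; this illustrates the described order, not necessarily Unraid's exact invocation):

    # 1. force the root dataset's mountpoint to the expected location
    zfs set mountpoint=/mnt/poolname poolname
    # 2. mount only the root dataset
    zfs mount poolname
    # 3. mount all remaining datasets
    zfs mount -a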

    1 hour ago, JorgeB said:

    Because Unraid first sets the mountpoint for the root dataset, in case the pool comes with a different mountpoint; then it mounts the root dataset only, and only then are all other datasets mounted. […]

     

    Updated with the results in the bug report thread, and as expected, I got the same result. I would still like to compare my pool structure with someone who was successful, so I'd still appreciate a look at the results of `zfs get all`.

     

    On 4/4/2023 at 2:08 PM, Octalbush said:

    I understand that special vdevs are not supported via GUI, however I would really like to add a metadata vdev as I work with tons of tiny files. Can I create the zpool using the CLI and use it in Unraid somehow? I just don't want a future update to break my pool.

    I have the same problem with rc2, but I didn't make a new config or use ZFS; I just upgraded, and the drives are not spinning down anymore. Even when forced down, within a minute or so they're up again. Went back to 6.11.5 and the problem is gone; drives are spinning down again.

    19 minutes ago, Pepreal said:

    I have the same problem with rc2, but I didn't make a new config or use ZFS; I just upgraded, and the drives are not spinning down anymore. […]

    ZFS is unrelated to this, at least so far; try doing a new config with v6.11.5, then upgrade again.

    8 minutes ago, JorgeB said:

    ZFS is unrelated to this, at least so far; try doing a new config with v6.11.5, then upgrade again.

    I don't use ZFS, and I didn't make a new config; I just upgraded from 6.11.5 to rc2. I can try to make a new config and upgrade to rc2 again to see if that makes a difference.


    Hey all, has anyone experienced OS freezes? Once Unraid freezes, I can't log in to the Unraid GUI or SSH in for a safe shutdown. My CPU is AMD, and I already set Power Supply Idle Control to "typical current idle" as well as disabling C-States globally. No issues running 6.11.5 with these settings.

     

    Wing

    4 hours ago, Pepreal said:

    I have the same problem with rc2, but I didn't make a new config or use ZFS; I just upgraded, and the drives are not spinning down anymore. […]

    Thanks, that seems to have fixed my problem with rc2. All drives are spinning down.

    On 3/21/2023 at 5:44 PM, LordShad0w said:

    Is the ability to expand and collapse items in the various menus on the Dashboard and other areas permanently removed?

    I rather liked being able to collapse things like the processor and memory graphs. It seems that both RC 1 and now 2 have that feature removed.

    I never heard back on this. Anyone?

    4 hours ago, LordShad0w said:

    I never heard back on this. Anyone?

    You can still hide parts of the CPU tile, including the graph, but in 6.12 you can move/remove panels instead.


    What's the procedure for adding a ZIL/SLOG device to a ZFS pool? I ran this command:

     

    zpool add poolname log device

     

    It created partitions, but the pool can't be mounted. I ran the command while the drives were unformatted in Unassigned Devices. I then tried to include the drives in the pool, but that didn't work.

     

    Any tips and comments are much appreciated.

     

     

    Screenshot 2023-04-26 at 19.09.56.png

    39 minutes ago, frodr said:

    What's the procedure for adding a ZIL/SLOG device to a ZFS pool? […]

    With the array started, add the device(s) to the pool using the command line; then stop the array, and you need to re-import the pool. The easiest way: unassign all the pool devices, start the array, stop the array, re-assign all pool devices (now including the new vdev(s)), and start the array; the pool should be imported. Note that ZFS should always be on partition #1.
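
    A minimal sketch of the command-line step, in line with the commands used earlier in this thread (pool name and device paths are placeholders; note the p1 partitions, per the partition #1 remark above):

    # add a mirrored SLOG (log vdev) to an existing pool
    zpool add poolname log mirror /dev/nvme0n1p1 /dev/nvme1n1p1
    # verify the new vdev before stopping the array
    zpool status poolname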

    1 hour ago, JorgeB said:

    With the array started, add the device(s) to the pool using the command line; then stop the array, and you need to re-import the pool. […]

     

    I guess I stumbled somewhere. From status:

     

     

     

    What I did:

    - Command: zpool add poolname log mirror nvme-drive1 nvme-drive2

    - Followed the steps (I think), but I was forced to reformat and ended up with what is in the picture.

     

    Maybe start all over again, or try to fix the current pool? Challenge ChatGPT...

     

    Screenshot 2023-04-26 at 21.43.10.png

    14 hours ago, frodr said:

    Maybe start all over again, or try to fix the current pool?

    You added the NVMe devices as part of the main pool; see here for how to do this.
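
    For reference, a hedged sketch of what the fix could involve, assuming the NVMe devices ended up as top-level data vdevs (the device name is a placeholder matching the earlier command, and must be spelled as shown in `zpool status`; top-level vdev removal requires OpenZFS 0.8+ and is not possible on every pool layout, e.g. when raidz data vdevs are present):

    # check how the NVMe devices were added
    zpool status poolname
    # remove a mistakenly added top-level data vdev
    zpool remove poolname nvme-drive1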

     

     

    On 3/22/2023 at 12:10 AM, JorgeB said:

    I have one of those and it's working for me. Please create a new post in the general support forum and post the diagnostics; I should see it, but you can ping me.

     

    04:00.0 Ethernet controller [0200]: Intel Corporation 82574L Gigabit Network Connection [8086:10d3]
        Subsystem: Super Micro Computer Inc X8SIL [15d9:0605]
        Kernel driver in use: e1000e
        Kernel modules: e1000e
    05:00.0 Ethernet controller [0200]: Intel Corporation 82574L Gigabit Network Connection [8086:10d3]
        Subsystem: Super Micro Computer Inc X8SIL [15d9:0605]
        Kernel driver in use: e1000e
        Kernel modules: e1000e

     

     

    I am having the same issue reported by Andreas Laubert: X8SIL motherboard. Every time I try a stable version past 6.11.5, it fails to find eth0. I tried 6.12.6 today.

    This post seems to be related, but the issue is intermittent for them whereas it is consistent for me.
    Tried warm and cold boots from IPMI and it does not help.

    https://bbs.archlinux.org/viewtopic.php?id=284341

    1 hour ago, cylon said:

    I am having the same issue reported by Andreas Laubert: X8SIL motherboard. Every time I try a stable version past 6.11.5, it fails to find eth0. I tried 6.12.6 today.

    Strange, it still works for me with v6.12.6, just updated that server this weekend.





