• Unraid OS version 6.12.0-rc2 available


    limetech

    Please refer to the 6.12.0-rc1 topic for a general overview.

     


    Version 6.12.0-rc2 2023-03-18

    Bug fixes

    • networking: fix nginx recognizing the IP address from slow DHCP servers
    • VM Manager: let page load even when PCI devices appear missing or are misassigned
    • webgui: Dashboard fixes:
      • lock the Dashboard completely: editing/moving only becomes possible after unlocking the page
      • an empty column is refilled when the respective tiles are made visible again; no need to reset everything
      • added a visual "move indicator" on the Docker and VM pages to make clearer that rows can now be moved
      • change cursor shape when moving is enabled
      • use tile title as index
    • webgui: fix issue displaying Attributes when temperature display is set to Fahrenheit
    • zfs: fix issue importing ZFS pools with additional top-level sections besides the root section

    ZFS Pools

    • Add scheduled trimming of ZFS pools.
    • Replace Scrub button with Clear button when pool is in error state.

    Base Distro

    • php: version 8.2.4

    Linux kernel

    • version 6.1.20
    • CONFIG_X86_AMD_PSTATE: AMD Processor P-State driver
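
    A quick way to confirm which frequency scaling driver is actually in use on a given system (standard cpufreq sysfs path; the driver names shown are typical examples, not Unraid-specific):

    cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_driver   # e.g. amd-pstate or acpi-cpufreq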

    Misc

    • save current PCI bus/device information in file '/boot/previous/hardware' upon Unraid OS upgrade.
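
    One way to make use of that saved file after an upgrade is to diff it against the current layout. The exact format written to '/boot/previous/hardware' isn't documented in these notes, so treat this as a sketch (it assumes roughly one device per line):

    diff <(sort /boot/previous/hardware) <(lspci | sort) || echo "PCI layout changed since the last upgrade"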



    User Feedback

    Recommended Comments



    37 minutes ago, trott said:

    I only have this one, so is it working on ZFS pools now?

    It essentially does `fstrim -a`, so any trimmable filesystem on the system is trimmed.

    13 minutes ago, Kilrah said:

    It essentially does `fstrim -a`, so any trimmable filesystem on the system is trimmed.

    ZFS can only be trimmed with

    zpool trim pool_name

    so it does that for any ZFS pool; it's still being optimized to trim only flash-based pools.
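
    For reference, the equivalent manual commands look like this (the pool name 'cache' is just an example):

    zpool trim cache          # start a manual trim of the pool
    zpool status -t cache     # -t shows trim status/progress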

    7 hours ago, Andreas Laubert said:

    Hello,

    On my aged Supermicro X8SIL-F the onboard network cards (Intel 82574L NICs) are not working. Downgraded to 6.11.5 and they work again.

    Regards

           Andreas

    I have one of those and it's working for me. Please create a new post in the general support forum and post the diagnostics; I should see it, but you can ping me.

     

    04:00.0 Ethernet controller [0200]: Intel Corporation 82574L Gigabit Network Connection [8086:10d3]
        Subsystem: Super Micro Computer Inc X8SIL [15d9:0605]
        Kernel driver in use: e1000e
        Kernel modules: e1000e
    05:00.0 Ethernet controller [0200]: Intel Corporation 82574L Gigabit Network Connection [8086:10d3]
        Subsystem: Super Micro Computer Inc X8SIL [15d9:0605]
        Kernel driver in use: e1000e
        Kernel modules: e1000e
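
    For comparison on your side, the above is standard lspci output; something like this pulls just the Ethernet controllers:

    lspci -nnk | grep -iA3 ethernet   # vendor/device IDs plus kernel driver and module in use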

     

    4 minutes ago, jamesk543 said:

    Not able to import ZFS pool.

    Where was the pool created? That looks like a TrueNAS pool, and if so, those cannot be imported for now.

    35 minutes ago, JorgeB said:

    Where was the pool created? That looks like a TrueNAS pool, and if so, those cannot be imported for now.

    Come to think of it, yes it was! Totally forgot about that since I have been using the ZFS Unraid plugin to manage it lately. Thanks!


    I currently have a 6-drive raidz1 pool which I created in TrueNAS SCALE.

     

    Will I be able to import this pool into Unraid if it is encrypted with native ZFS encryption?

    1 hour ago, greenflash24 said:

    Will I be able to import this pool into Unraid if it is encrypted with native ZFS encryption?

    Not at the moment; neither TrueNAS-created pools nor ZFS native encryption are currently supported. Keep an eye on future release notes.

     

    Note that for now native encryption should work if you manually load the key for the dataset and then mount it after array start, e.g.:

     

    Encrypted datasets won't be mounted at array start:

    Mar 21 16:49:33 Tower emhttpd: shcmd (709): /usr/sbin/zfs mount cache/encrypted
    Mar 21 16:49:33 Tower root: cannot mount 'cache/encrypted': encryption key not loaded

     

    Manually load the key and mount the dataset:

    root@Tower:~# zfs load-key -r cache/encrypted
    Enter passphrase for 'cache/encrypted':
    1 / 1 key(s) successfully loaded
    root@Tower:~# zfs mount cache/encrypted
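
    If you want to automate that after array start (e.g. with the User Scripts plugin), a minimal sketch could look like this; the dataset name and keyfile location are only examples, not something Unraid sets up for you:

    #!/bin/bash
    # Assumption: the passphrase is stored in a keyfile on the flash drive.
    DATASET="cache/encrypted"
    KEYFILE="/boot/config/zfs-encrypted.key"
    zfs load-key -L "file://${KEYFILE}" "$DATASET" && zfs mount "$DATASET"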

     

     


    Does RC2 support snapshots yet? I don't see it in the release notes.

    Would it be possible to create snapshots through the command line?

    1 hour ago, winglam said:

    Does RC2 support snapshots yet? I don't see it in the release notes.

    As far as I can see, the ZFS Master plugin works fine with 6.12 to add snapshot/dataset info and management on the Main page.
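
    As for the command-line part of the question, the regular zfs commands work from the terminal; for example (the dataset name is just an example):

    zfs snapshot cache/appdata@before-rc2   # create a snapshot
    zfs list -t snapshot                    # list existing snapshots
    zfs rollback cache/appdata@before-rc2   # roll the dataset back to it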


    Is the ability to expand and collapse items in the various menus on the dashboard and other areas permanently removed?

    I rather liked being able to collapse things like the processor and memory graphs. It seems that both RC1 and now RC2 have that feature removed.


    I ran into the problem with all VMs gone on the VM tab, but I can still see and edit them on the dashboard, all but one: when I try to edit my macOS VM the edit page is just blank.

    I cannot remove the VM either, which I could with some of the others.

     

    I ran the update check before I updated and it said I had one incompatible plugin, gpu-statistic, which I then removed before updating.

     

    What would be my next step to get my VM tab back?

     

    unraid-diagnostics-20230322-0535.zip

    3 hours ago, Koenig said:

    What would be my next step to get my VM tab back?

    Looks like your Macinabox XML is invalid.

     

    [Quoted VM definition with the XML tags stripped: MacinaboxCatalina, UUID 4e457b8a-256e-45db-acf3-cdbb8783e743, MacOS Catalina, memory 8388608, 4 vCPUs, hvm, OVMF_CODE.fd / OVMF_VARS.fd under /mnt/virtual_machines/domains/MacinaboxCatalina/ovmf/, lifecycle actions destroy/restart/restart, emulator /usr/local/sbin/qemu]

    virsh undefine MacinaboxCatalina will remove the VM Definition. 

    4 hours ago, Koenig said:

    I cannot remove the VM either, which I could with some of the others.

     

    1 hour ago, SimonF said:

    virsh undefine MacinaboxCatalina will remove the VM Definition.

    Macinabox VMs have nvram and virsh refuses to remove them unless you add --nvram to the command.

     

    So

    virsh undefine --nvram MacinaboxCatalina

     

    Maybe unraid could add it to the "remove vm" command. 
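
    If you're not sure whether a given VM has NVRAM defined, you can check its definition first (the VM name is just the example from above):

    virsh dumpxml MacinaboxCatalina | grep -i nvram   # any output means undefine needs --nvram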


    Hello everybody, Amazon is bringing my RMA'd Asus board back and I want to use ZFS in my Unraid data array, so I have a few questions about it. I'm in the process of moving from my mixed array to a unified array so I'll have 4x 12TB of space; right now I'm at 3x 12TB. My array in its current state looks like this (I have backed up all my important data to another drive, of course).

     

    12TB parity (valid as of today)

    12TB disk 1 with all my storage data

    12TB not connected right now, bad SATA ports on the old mainboard.

     

    First: can I resilver my array with my 1x 12TB parity drive after converting the 12TB disk 1 to ZFS?

    Second: if not, will the parity drive still do its thing after converting to ZFS, just starting fresh?

     

    I'm quite sure it's the second.

     

    Did anyone try this? Or am I the first to tell?

     

    46 minutes ago, domrockt said:

    First: can I resilver my array with my 1x 12TB parity drive after converting the 12TB disk 1 to ZFS?

    No, at the moment any filesystem conversion needs to be done manually: copy the data off the disk, re-format it with the new filesystem, then copy the data back.

     

    47 minutes ago, domrockt said:

    Second: if not, will the parity drive still do its thing after converting to ZFS, just starting fresh?

    Not quite sure what you are asking, but parity works the same with any filesystem used for the array; it doesn't matter if it's XFS, btrfs or ZFS.
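
    As a rough sketch of the copy-off / re-format / copy-back procedure above (the destination path is just an example, and the actual re-format is done from the Unraid GUI):

    rsync -avX /mnt/disk1/ /mnt/disks/backup/disk1/   # copy the data off disk1
    # ...change disk1's filesystem to zfs in the GUI and format it...
    rsync -avX /mnt/disks/backup/disk1/ /mnt/disk1/   # copy the data back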

     

    2 minutes ago, JorgeB said:

    No, at the moment any filesystem conversion needs to be done manually: copy the data off the disk, re-format it with the new filesystem, then copy the data back.

     

    Not quite sure what you are asking, but parity works the same with any filesystem used for the array; it doesn't matter if it's XFS, btrfs or ZFS.

     

    1) thought so, thx.

    2) You understood me, perfect thx.

    2 hours ago, Kilrah said:

     

    Macinabox VMs have nvram and virsh refuses to remove them unless you add --nvram to the command.

     

    So

    virsh undefine --nvram MacinaboxCatalina

     

    Maybe unraid could add it to the "remove vm" command. 

    Thank you!

     

    Now I have my VM tab back.

     

    5 hours ago, Kilrah said:

    virsh undefine --nvram MacinaboxCatalina

     

    Maybe unraid could add it to the "remove vm" command. 

    Would there be any harm in using the --nvram switch on a VM that didn't "need" it?


    I am struggling to mount the pool.

    1. The existing part is 4 HDDs with one cache drive.

     

    This alone works fine.

     

    Then I add a ZFS pool; after creation I cannot mount the pool... all devices show as unformatted??


    tower-diagnostics-20230322-1710.zip

    2 minutes ago, Dtrain said:

    Then I add a ZFS pool; after creation I cannot mount the pool... all devices show as unformatted??

    Please create a bug report and post the diagnostics.

    19 minutes ago, Dtrain said:

    Diagnostics added.

    I also asked you to create a bug report, but in any case, the devices are not being decrypted: stop the array, click on each array disk and the cache, change the filesystem from xfs to xfs-encrypted, then start the array.

     

    Also note that you don't need to do a new config to add a new pool, whether ZFS or any other filesystem.





