• Unraid OS version 6.12.0-rc2 available


    limetech

    Please refer to the 6.12.0-rc1 topic for a general overview.

     


    Version 6.12.0-rc2 2023-03-18

    Bug fixes

    • networking: fix nginx so it recognizes the IP address handed out by slow DHCP servers
    • VM Manager: let page load even when PCI devices appear missing or are misassigned
    • webgui: Dashboard fixes:
      • lock the Dashboard completely: editing/moving only becomes possible after unlocking the page
      • an empty column is refilled when the respective tiles are made visible again; no need to reset everything
      • added a visual "move indicator" on the Docker and VM pages, to make it clearer that rows can now be moved
      • change cursor shape when moving is enabled
      • use tile title as index
    • webgui: fix issue displaying Attributes when temperature display is set to Fahrenheit
    • zfs: fix issue importing ZFS pools with additional top-level sections besides the root section

    ZFS Pools

    • Add scheduled trimming of ZFS pools.
    • Replace Scrub button with Clear button when pool is in error state.

    Base Distro

    • php: version 8.2.4

    Linux kernel

    • version 6.1.20
    • CONFIG_X86_AMD_PSTATE: AMD Processor P-State driver

    Misc

    • save current PCI bus/device information in file '/boot/previous/hardware' upon Unraid OS upgrade.



    User Feedback

    Recommended Comments



    20 hours ago, hot22shot said:

     

    Got my 5W back. PCIe ACS override was set to disabled; I put it back to "both" so that powertop could do its magic.

     

     

    it has to be added to the default entry in your syslinux.cfg. You can edit it by clicking on your flash drive in the Main dashboard.
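    For reference, after the edit the default entry would look roughly like this (a sketch only; the stock lines vary between installs, so check your own syslinux.cfg before changing anything):

    label Unraid OS
      menu default
      kernel /bzimage
      append amd_pstate=passive initrd=/bzroot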

    A question on that: does "amd_pstate=passive" not work without ACS override enabled?

     

    I don't have it enabled, and if I were to enable it, it would probably mess up my hardware passthroughs.

     

    I tried to google it but couldn't really find any definitive answer, so I'm going to ask you as you seem to know about this: I have an AMD 3970X, would I benefit at all from adding "amd_pstate=passive" to syslinux.cfg?

    Link to comment

    @limetech Will Unraid 6.12 also update Memtest86+ to the latest version available?
     

    The Memtest86+ version (v5.x) shipped with Unraid 6.11 is not capable of identifying the manufacturer of my RAM modules (Netac).


    The latest version of Memtest86+ (v6.10) already identifies them correctly; in other words, the Memtest86+ 6.x series does more than the 5.x series.


    from https://github.com/memtest86plus/memtest86plus#origins :
     

    Quote

     

    Memtest86+ v6.00 was based on PCMemTest, which was a fork and rewrite of the earlier Memtest86+ v5, which in turn was a fork of MemTest-86. The purpose of the PCMemTest rewrite was to:
     

    • make the code more readable and easier to maintain
    • make the code 64-bit clean and support UEFI boot
    • fix failures seen when building with newer versions of GCC
       

    In the process of creating PCMemTest, a number of features of Memtest86+ v5 that were not strictly required for testing the system memory were dropped. In particular, no attempt was made to measure the cache and main memory speed, or to identify and report the DRAM type. These features were added back and expanded in Memtest86+ v6.0 to create a unified, fully-featured release.
     

     

     

    Link to comment
    58 minutes ago, luzfcb said:

    Will Unraid 6.12 also update Memtest86+ to the latest version available?

    It already includes the latest version that is licensed for 3rd party redistribution.

     

    Newer versions must be directly downloaded from the original website.

    Link to comment
    9 minutes ago, JonathanM said:

    It already includes the latest version that is licensed for 3rd party redistribution.


    Great. I asked the question because there is no information about Memtest86+ in the Release Notes for 6.12.0-rc1 and 6.12.0-rc2

    Link to comment
    9 minutes ago, luzfcb said:


    Great. I asked the question because there is no information about Memtest86+ in the Release Notes for 6.12.0-rc1 and 6.12.0-rc2

    Yeah, probably because there really isn't any updated info, it is what it is.

     

    What I'd LIKE to see is a way to have the end user download the files and copy them to the USB stick, with a custom boot option. A talented developer could probably come up with a plugin to do it, but since you can't run it without rebooting anyway, the extra steps of downloading the new version and making a memtest USB stick aren't really that huge of a deal. Developing and supporting a plugin that alters the Unraid USB stick like that is probably too much work and risk for too little benefit.
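    Purely as an illustration of that "copy the files yourself and add a custom boot option" idea (hypothetical: the binary name depends on the Memtest86+ build you download, and booting it this way from the Unraid flash drive is untested here), the extra syslinux.cfg entry might look something like:

    label memtest6
      menu label Memtest86+ v6 (manually copied)
      linux /memtest64.bin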

    Link to comment
    1 hour ago, JonathanM said:

    It already includes the latest version that is licensed for 3rd party redistribution.

     

    Newer versions must be directly downloaded from the original website.

    Actually, the GitHub repo linked (https://github.com/memtest86plus/memtest86plus) is GPL2.0 and could be included ( @limetech)

     

    Would solve at least one major issue with the current version where it won't work with UEFI booting...

    Link to comment
    On 4/3/2023 at 8:14 PM, JorgeB said:

    Yeah, I'm manually changing mine to 1M. With raidz especially I see better performance, though not with mirrors/single devices; in the future there should be an option to change it using the GUI.

    Short question: How can I do this manually for my raidz1 pool? And when do I have to do this, i.e., can this be adjusted after creating the pool in the GUI, or even after some data is placed on the pool?

    Link to comment
    15 minutes ago, Squid said:

    Actually, the GitHub repo linked (https://github.com/memtest86plus/memtest86plus) is GPL2.0 and could be included ( @limetech)

     

    Would solve at least one major issue with the current version where it won't work with UEFI booting...

    I can see where this would mean that anyone who wanted to use the new version of Memtest86+ would have to rebuild the boot/flash drive. Someone might be able to write a script to do this...

    Link to comment
    10 hours ago, Woosah said:

    Short question: How can I do this manually for my raidz1 pool?

    zfs set recordsize=1M <pool name>

    or

    zfs set recordsize=1M <pool name/dataset name>

    if you want it just for that dataset

     

    10 hours ago, Woosah said:

    And when do I have to do this, i.e., can this be adjusted after creating the pool in the GUI, or even after some data is placed on the pool?

    It can be done at any time, but it will only affect data written (or re-written) after the change.
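    To confirm the current value afterwards (pool/dataset names are placeholders, as above):

    zfs get recordsize <pool name>/<dataset name>

    or zfs get -r recordsize <pool name> to list it for every dataset in the pool.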

    Link to comment
    On 4/5/2023 at 10:06 AM, Bizquick said:

    I really like how the ZFS array is coming along in this release. I just wonder how much we can expect to see in the GUI for ZFS? Like whether we will get snapshot and scrub scheduling, or if we are going to see what we have right now and be done.

    But currently I would like to see a little bit more, like being able to make the main array a raidz1 or raidz2 pool.

     

    That kind of makes things a little bit hard for me. I certainly like how the main array is very adjustable, and what I'm asking for kind of goes against the name of the product. I really like Unraid because it's the easiest approach for setting up Dockers and VMs, and those things are very solid for what I use. One thing that is making me kind of roll back and not use ZFS as the main array is the ZFS memory requirements. And I don't understand why it doesn't flush out the memory much. For example, if I run backups to another array, it will use a bunch of extra RAM, and then the server will sit idle for like 18 hours and the memory doesn't go down. I mean, 18 hours and it still never used anything you copied over; the only thing that worked for me was to reboot after a data backup. Well, anyway, this is giving me some other ideas, so maybe I'll get off this ZFS-native request now.

    Link to comment
    10 hours ago, Bizquick said:

    The ZFS memory requirements. And I don't understand why it doesn't flush out the memory much. For example, if I run backups to another array, it will use a bunch of extra RAM, and then the server will sit idle for like 18 hours and the memory doesn't go down. I mean, 18 hours and it still never used anything you copied over.

    The ZFS memory usage is really just an optimization. If you set the amount of memory it's allowed to use low, it would cache less and still be OK. The simple rule is to give it any memory that you won't need for anything else, but it's perfectly fine to give it less, and it will behave just like any other filesystem that doesn't cache data in RAM.

    https://news.ycombinator.com/item?id=11897571

    Link to comment
    10 hours ago, Bizquick said:

    And I don't understand why it doesn't flush out the memory much.

    That's how the ZFS ARC works; you can set it to a smaller size (see the release notes), though it will release the RAM if needed for something else.

    Link to comment
    9 hours ago, JorgeB said:

    That's how the ZFS ARC works; you can set it to a smaller size (see the release notes), though it will release the RAM if needed for something else.

    Ok, that makes some sense now. I just don't get why they let ZFS take so much by default. I mean, physical RAID cards nowadays are built with 8 GB of NVRAM cache, and depending on how a user sets that up, it might not use all the RAM space or it might. I get that ZFS is a different file system and it's doing a bit more, but it makes me think about this for a minute, because I don't think they are using ECC RAM on RAID cards, and when it writes data to the array it's basically doing the same thing, just with a much smaller and more controlled bucket. Well, yeah, I'll have to force the RAM size so it doesn't slow down something else, just in case.

    Link to comment
    34 minutes ago, Bizquick said:

    I mean, physical RAID cards nowadays are built with 8 GB of NVRAM cache

    ZFS doesn't use the RAM for the same purpose as those RAID cards; it's not just a write cache. It is closer to how Linux uses buffer/cache. Look at the output of free in Linux and you will see it assigning most of the spare RAM to buffers/cache; that doesn't mean the memory is not usable for other things when needed.
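    A quick way to see both of those from the console (just a sketch; the arcstats path below is the standard one for ZFS on Linux):

    free -h                                                      # "buff/cache" is memory Linux uses opportunistically
    awk '$1 == "size" {print $3}' /proc/spl/kstat/zfs/arcstats   # current ARC size in bytes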

    Link to comment
    10 hours ago, Dtrain said:

    when adding an external ZFS source, the internal ZFS pool is shown as unformatted, but the data is there and can be read and written.

    There are known issues with zfs-encrypted pools; try again once rc3 is out.

    Link to comment

    Migrated from a 6.11.5 server which had the ZFS plugin and a zpool mounted at the root, so I could use shares on the ZFS zpool (mount -R /mnt/zfs /mnt/disk1 run at the start of the array).

     

    The upgrade went well; I just had to import the raidz1 zpool (zpool import zfs) and it was all there and happily running.

     

    Had to uninstall the ZFS Companion plugin, as it was preventing the dashboard from showing. Once removed, everything is running as before, with my ZFS pool as my "main" array.

     

    The only downside to that was having to keep VMs and Dockers only on the cache, but that's what I hope to change with 6.12 now that we can have zpools as cache :)

    Link to comment
    22 hours ago, Bizquick said:

    Ok, that makes some sense now. I just don't get why they let ZFS take so much by default. I mean, physical RAID cards nowadays are built with 8 GB of NVRAM cache, and depending on how a user sets that up, it might not use all the RAM space or it might. I get that ZFS is a different file system and it's doing a bit more, but it makes me think about this for a minute, because I don't think they are using ECC RAM on RAID cards, and when it writes data to the array it's basically doing the same thing, just with a much smaller and more controlled bucket. Well, yeah, I'll have to force the RAM size so it doesn't slow down something else, just in case.

     

    You can easily limit ZFS ARC memory usage

     

    This will force the ZFS ARC to "only" use 8 GB of RAM:

     

    echo 8589934592 >> /sys/module/zfs/parameters/zfs_arc_max
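    The value is just bytes (8589934592 = 8 × 1024³, i.e. 8 GiB). You can read the parameter back to confirm it took effect; note that a value written this way does not survive a reboot, so it needs to be re-applied (for example from the go file) if you want it permanent:

    cat /sys/module/zfs/parameters/zfs_arc_max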

     

     

    Link to comment

    Migrating my XFS array to ZFS. I'm seeing datasets created, but it seems random. So my question is: should the array have datasets at the disk level? For example:

     

    /mnt/disk2

    /mnt/disk2/data

     

    /mnt/disk3

     

    Both disks have a data directory, and I do not understand why "/mnt/disk2/data" was created.

    Link to comment
    15 minutes ago, uzos5ixi said:

    I'm seeing datasets created, but it seems random.

    Shares created using the GUI will create a dataset; if you move the data manually with a tool like cp or rsync that creates the folder, or create one manually with mkdir, it will not be a dataset.
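    One way to tell datasets apart from plain folders is to list what ZFS itself knows about (using "disk2" as an example pool name for an array disk formatted as ZFS):

    zfs list -r -o name,mountpoint disk2

    Anything under the mount point that does not show up in that list is just an ordinary directory.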

    Link to comment
    Just now, uzos5ixi said:

    As you add disk drives or create pools, will the datasets be automatically created for the shares?

    If the share needs to start using a new disk and Unraid creates it, it will be a dataset.

    Link to comment




