  • Unraid OS version 6.9.0-beta25 available


    limetech

    6.9.0-beta25 vs. -beta24 Summary:

    • fixed emhttpd crash resulting from having NFS exported disk shares
    • fixed issue where specifying 1 MiB partition alignment was being ignored (see 1 MiB Partition Alignment below)
    • fixed spin-up/down issues
    • ssh improvements (see SSH Improvements below)
    • kernel updated from 5.7.7 to 5.7.8
    • added UI changes to support new docker image file handling - thank you @bonienl.  Refer also to additional information re: docker image folder, provided by @Squid under Docker below.
    • known issue: "Device/SMART Settings/SMART controller type" is ignored, will be fixed in next release

     

    Important: Beta code is not fully tested and not feature-complete.  We recommend running on test servers only!

     

    Multiple Pools

This feature permits you to define up to 35 named pools, each of up to 30 storage devices.  The current "cache pool" is now simply a pool named "cache".  Pools are created and managed via the Main page.

     

    Note: When you upgrade a server which has a cache pool defined, a backup of config/disk.cfg will be saved to config/disk.cfg.bak, and then cache device assignment settings are moved out of disk.cfg and into a new file, config/pools/cache.cfg.  If later you revert back to a pre-6.9 Unraid OS release you will lose your cache device assignments and you will have to manually re-assign devices to cache.  As long as you reassign the correct devices, data should remain intact.

     

    When you create a user share, or edit an existing user share, you can specify which pool should be associated with that share.  The assigned pool functions identically to current cache pool operation.

     

    Something to be aware of: when a directory listing is obtained for a share, the unRAID array disk volumes and all pools which contain that share are merged in this order:

      pool assigned to share

      disk1

      :

      disk28

      all the other pools in strverscmp() order.
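To illustrate that last step: GNU `sort -V` is a close approximation of strverscmp() ordering for pool names (the pool names below are made up for the example):

```shell
# Version-style ordering, as strverscmp() would produce for these
# hypothetical pool names; GNU 'sort -V' approximates it closely.
printf '%s\n' pool10 pool2 backup | sort -V
# backup
# pool2
# pool10
```

Note that "pool2" sorts ahead of "pool10" because the numeric parts are compared as numbers, not character by character.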

     

    As with the current "cache pool", a single-device pool may be formatted with either xfs, btrfs, or reiserfs.  A multiple-device pool may only be formatted with btrfs.  A future release will include support for multiple "unRAID array" pools.  We are also considering zfs support.

     

Something else to be aware of: suppose you have a 2-device btrfs pool. This is what btrfs calls "raid1" and what most people would understand to be "mirrored disks". That is mostly true in that the same data exists on both disks, though not necessarily at the block level.  Now suppose you create another pool, and unassign one of the devices from the existing 2-device btrfs pool and assign it to the new pool - you now have two single-device btrfs pools.  Upon array Start you might understandably assume there are now two pools with exactly the same data.  However, this is not the case. Instead, when Unraid OS sees that a btrfs device has been removed from an existing multi-device pool, upon array Start it will run 'wipefs' on that device so that upon mount it will not be included in the old pool.  This, of course, effectively deletes all the data on the moved device.

     

    1 MiB Partition Alignment

We have added another partition layout where the start of partition 1 is aligned on a 1 MiB boundary. That is, for devices which present 512-byte sectors, partition 1 will start at sector 2048; for devices with 4096-byte sectors, at sector 256.  This partition layout is now used only for non-rotational storage.
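The arithmetic behind those sector numbers is just 1 MiB divided by the logical sector size; a quick sketch:

```shell
# Start sector of a 1 MiB-aligned partition 1 for a given logical sector size.
start_sector() {
  echo $(( 1024 * 1024 / $1 ))  # 1 MiB in bytes / sector size in bytes
}

start_sector 512    # 512-byte sectors  -> 2048
start_sector 4096   # 4096-byte sectors -> 256
```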

     

It is not clear what benefit 1 MiB alignment offers.  For some SSD devices you won't see any difference; for others, perhaps a big performance difference.  LimeTech does not recommend re-partitioning an existing SSD device unless you have a compelling reason to do so (or your OCD just won't let it be).

     

To re-partition an SSD it is necessary to first wipe out any existing partition structure on the device.  Of course this will erase all data on the device.  Probably the easiest way to accomplish this, with the array Stopped, is to identify the device to be erased and use the 'blkdiscard' command:

blkdiscard /dev/xxx  # for example /dev/sdb or /dev/nvme0n1, etc.

            WARNING: be sure you type the correct device identifier because all data will be lost on that device!

     

Upon next array Start the device will appear Unformatted, and since there is now no partition structure, Unraid OS will create one.

     

    Language Translation

A huge amount of work and effort has been put in by @bonienl to provide multiple-language support in the Unraid OS Management Utility, aka the webGUI.  There are several language packs now available, and several more in the works.  Thanks to @Squid, language packs are installed via the Community Applications plugin - look for a new category entitled Language.

     

Note: Community Applications must be up to date to install languages.

     

Each language pack exists in a public repo under the Unraid organization on GitHub.  Interested users are encouraged to clone it and issue Pull Requests to correct translation errors.  Language translations and PR merging are managed by @SpencerJ.

     

    Linux Kernel

Upgraded to 5.7 (5.7.8 in this release).

     

    These out-of-tree drivers are currently included:

    • QLogic QLGE 10Gb Ethernet Driver Support (from staging)
    • RealTek r8125: version 9.003.05 (included for newer r8125)
    • HighPoint rr272x_1x: version v1.10.6-19_12_05 (per user request)

    Note that as we update the Linux kernel, if an out-of-tree driver no longer builds, it will be omitted.

     

    These drivers are currently omitted:

    • Highpoint RocketRaid r750 (does not build)
    • Highpoint RocketRaid rr3740a (does not build)
    • Tehuti Networks tn40xx (does not build)

    If you require one of these drivers, please create a Bug Report and we'll spend some time looking for alternatives.  Better yet, pester the manufacturer of the controller and get them to update their drivers.

     

    Base Packages

    All updated to latest versions.  In addition, Linux PAM has been integrated.  This will permit us to implement 2-factor authentication in a future release.

     

    Docker

    Updated to version 19.03.11

     

It's now possible to select different icons for multiple containers of the same type.  This change necessitates a re-download of the icons for all your installed docker applications.  Expect a delay when first loading either the Dashboard or the Docker tab while this happens, before the containers show up.

     

We also made some changes to add flexibility in assigning storage for the Docker engine.  First, 'rc.docker' will detect the filesystem type of /var/lib/docker.  We now support either btrfs or xfs, and the docker storage driver is set appropriately.
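As a rough sketch of that detection (illustrative only - the real logic lives in 'rc.docker', and the driver names here reflect Docker's usual pairing of btrfs with the btrfs driver and xfs with overlay2):

```shell
# Map the filesystem type backing /var/lib/docker to a docker storage driver.
# Illustrative sketch, not the actual rc.docker code.
driver_for_fs() {
  case "$1" in          # $1 = type as reported by 'stat -f -c %T <path>'
    btrfs) echo btrfs ;;
    xfs)   echo overlay2 ;;
    *)     echo unsupported ;;
  esac
}

# In practice the detected type would be fed in, e.g.:
#   driver_for_fs "$(stat -f -c %T /var/lib/docker)"
```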

     

Next, 'mount_image' is modified to support a loopback formatted with either btrfs or xfs, depending on the suffix of the loopback file name.  If the file name ends with ".img", as in "docker.img", we use mkfs.btrfs; if it ends with "-xfs.img", as in "docker-xfs.img", we use mkfs.xfs.
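The suffix rule can be sketched as follows (illustrative only, not the actual 'mount_image' script):

```shell
# Pick a mkfs based on the loopback file name suffix, per the rule above.
fs_for_image() {
  case "$1" in
    *-xfs.img) echo xfs ;;     # "docker-xfs.img" -> mkfs.xfs
    *.img)     echo btrfs ;;   # "docker.img"     -> mkfs.btrfs
    *)         echo folder ;;  # no ".img" suffix -> bind-mount a directory
  esac
}

fs_for_image docker.img        # -> btrfs
fs_for_image docker-xfs.img    # -> xfs
```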


We also added the ability to bind-mount a directory instead of using a loopback.  If the file name does not end with ".img", the code assumes it is the name of a directory (presumably on a share), which is bind-mounted onto /var/lib/docker.

     

For example, given "/mnt/user/system/docker/docker", we first create, if necessary, the directory "/mnt/user/system/docker/docker".  If this path is on a user share we then "dereference" the path to get the disk path, which is then bind-mounted onto /var/lib/docker.  For example, if "/mnt/user/system/docker/docker" is on "disk1", then we would bind-mount "/mnt/disk1/system/docker/docker".  Caution: the share should be cache-only or cache-no so that 'mover' will not attempt to move the directory, but the script does not check this.
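A sketch of that "dereference" step (the helper below is hypothetical, not Unraid's actual code, and assumes you already know which disk holds the share):

```shell
# Rewrite a /mnt/user/... path to the backing-disk path, given the disk name.
deref_user_path() {
  # $1 = user-share path, $2 = disk holding it (e.g. disk1)
  printf '%s\n' "$1" | sed "s|^/mnt/user/|/mnt/$2/|"
}

deref_user_path /mnt/user/system/docker/docker disk1
# -> /mnt/disk1/system/docker/docker, which would then be bind-mounted:
#    mount --bind /mnt/disk1/system/docker/docker /var/lib/docker
```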

     

    Additional information from user @Squid:

     

    Quote

    Just a few comments on the ability to use a folder / share for docker

     

If you're one of those users who continually has a problem with the docker image filling up, this is the solution, as the "image" will be able to expand (and shrink) to the size of the assigned share.  Just be aware though that this new feature is technically experimental.  (I have however been running this on an XFS-formatted cache drive for a while now, and don't see any problems at all)

     

I would recommend that you use a share that is dedicated to the docker files, and not a folder from another existing share (like system, as shown in the OP).  

     

    My reasoning for this is that:

    1. If you ever have a need to run the New Permissions tool against the share that you've placed the docker folder into, then that tool will cause the entire docker system to not run.  The folder will have to be removed (via the command line), and then recreated.

2. None of the folders contained within the docker folder are compatible with being exported over SMB, and you cannot gain access to them that way.  Using a separate share will also allow you to leave it unexported without impacting the other shares' exporting.  (And there are no "user-modifiable" files in there anyways.  If you do need to modify a file within that folder (ie: a config file for a container where that config isn't available within appdata), you should be doing it via the container's shell)

    You definitely want the share to be cache-only or cache-no (although cache-prefer should probably be ok).  Setting it to cache:yes will undoubtedly cause you problems if mover winds up relocating files to the array for you.

     

I did have some "weirdness" using an Unassigned Device as the drive for the docker folder.  This may, however, have been a glitch in my system.

     

    Fix Common Problems (and the Docker Safe New Permissions Tool) will wind up getting updated to let you know of any problems that it detects with how you've configured the folder.

     

    Virtualization

    libvirt updated to version 6.4.0

    qemu updated to version 5.0.0

     

In addition, we integrated changes to the System Devices page by user @Skitals, with modifications by user @ljm42.  You can now select PCI devices to isolate from Linux upon boot simply by checking some boxes.  This makes it easier to reserve those devices for assignment to VM's.

     

Note: If you had the VFIO-PCI Config plugin installed, you should remove it, as that functionality is now built in to Unraid OS 6.9.  Refer also to @ljm42's excellent guide.

     

In a future release we will include the NVIDIA and AMD GPU drivers natively in Unraid OS.  The primary use case is to facilitate accelerated transcoding in docker containers.  For this we require Linux to detect and auto-install the appropriate driver.  However, in order to reliably pass through an NVIDIA or AMD GPU to a VM, it's necessary to prevent Linux from auto-installing a GPU driver for those devices upon boot, which can now be easily done through the System Devices page.  Users passing GPU's to VM's are encouraged to set this up now.

     

    "unexpected GSO errors"

    If your system log is being flooded with errors such as:

    Jun 20 09:09:21 Tower kernel: tun: unexpected GSO type: 0x0, gso_size 31, hdr_len 66

    You need to edit each VM and change the model type for the Ethernet bridge from "virtio" to "virtio-net".  In most cases this can be accomplished simply by clicking Update in "Form View" on the VM Edit page.  For other network configs it may be necessary to directly edit the xml.  Example:

    <interface type='bridge'>
          <mac address='xx:xx:xx:xx:xx:xx'/>
          <source bridge='br0'/>
          <model type='virtio-net'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>

     

    SSH Improvements

There are changes in /etc/ssh/sshd_config to improve security (thanks to @Mihai and @ljm42 for the suggestions):

    • only root user is permitted to login via ssh (remember: no traditional users in Unraid OS - just 'root')
    • non-null password is now required
    • non-root tunneling is disabled

     

    In addition, upon upgrade we ensure the 'config/ssh/root' directory exists on the USB flash boot device; and, we have set up a symlink: /root/.ssh to this directory.  This means any files you might put into /root/.ssh will be persistent across reboots.

     

Note: if you examine the sshd startup script (/etc/rc.d/rc.sshd), upon boot all files from the 'config/ssh' directory are copied to /etc/ssh (but not subdirs).  The purpose is to restore the host ssh keys; however, this mechanism can be used to define custom ssh_config and sshd_config files (not recommended).
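A minimal sketch of that boot-time restore (the actual logic is in /etc/rc.d/rc.sshd; this helper is hypothetical):

```shell
# Copy plain files (but not subdirectories) from the flash ssh config
# directory to /etc/ssh, as the startup script does on boot.
restore_ssh_files() {
  # $1 = source, e.g. /boot/config/ssh; $2 = destination, e.g. /etc/ssh
  for f in "$1"/*; do
    [ -f "$f" ] && cp "$f" "$2"/   # skips subdirs such as config/ssh/root
  done
  return 0
}
```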

     

    Other

    • AFP support has been removed.
    • Numerous other Unraid OS and webGUI bug fixes and improvements.

     


    Version 6.9.0-beta25 2020-07-12

    Linux kernel:

    • version 5.7.8

    Management:

    • fix emhttpd crash resulting from exporting NFS disk share(s)
    • fix non-rotational device partitions were not actually being 1MiB aligned
    • dhcpcd: ipv6: use slaac hwaddr instead of slaac private
• docker: correct storage-driver assignment logic
    • ssh: allow only root user, require passwords, disable non-root tunneling
    • ssh: add /root/.ssh symlink to /boot/config/ssh/root directory
    • syslog: configure to also listen on localhost udp port 514
    • webgui: Added btrfs info for all pools in diagnostics
    • webgui: Docker: allow BTRFS or XFS vdisk, or folder location
    • webgui: Multi-language: Fixed regression error: missing indicator for required fields
    • webgui: Dashboard: fix stats of missing interface


    User Feedback

    Recommended Comments



To expand on my quoted text in the OP, this beta version brings forth more improvements to using a folder for the docker system instead of an image.  The notable difference is that the GUI now supports setting a folder directly.  The key to using this, however, is that while you can choose the appropriate share via the GUI's dropdown browser, you must enter a unique (and non-existent) subfolder for the system to realize you want to create a folder image (and include a trailing slash).  If you simply pick an already existing folder, the system will automatically assume that you want to create an image.  Hopefully for the next release this behaviour will be modified and/or made clearer within the docker GUI.


    Need some help wrapping my head around the XFS option. Here's what I *think* I know, if someone can correct or affirm, I'd appreciate it 😀

     

    Let's play True or False....

     

• btrfs has traditionally been the cache drive file system of preference if using Docker. T/F ?
   
• btrfs was the preference because it supported Copy on Write (COW), which was helpful for de-duplication of Docker images. T / F ?
   
• XFS with the new-ish (2018?) reflink=1, which is now the default format option for XFS in UnRAID, also enables COW. T / F ?
   
• Moving forward I can now select XFS as my cache drive option where Docker images are stored, or use the new Docker folder @Squid pointed out, and gain all the benefits I had before using btrfs but with the trust and stability that XFS brings. T / F ?

     

    Thanks for playing!

     

     


Good morning. After updating to .25, the following came up in the bottom line:

     

     

     Array Started•Warning: file_get_contents(/sys/block/ROOTFS/queue/rotational): failed to open stream: No such file or directory in /usr/local/emhttp/plugins/unassigned.devices/include/lib.php on line 635

     

    all looking good (also UAD), just this error line came up

     

    logs attached

     

alsserver-syslog-20200713-0429.zip
alsserver-diagnostics-20200713-0631.zip


    Where is the 1MiB offset option located?

I remember seeing an option in the GUI to pick the partition offset in the past but can't see it anymore for some reason.

    4 minutes ago, testdasi said:

    Where is the 1MiB offset option located?

There's no option AFAIK; any unpartitioned non-rotational device will be partitioned starting on the 1 MiB boundary. I see the default partition format setting was removed, which is understandable since no one should currently be formatting disks with unaligned partitions.


I have been having this issue sporadically since upgrading to beta22. My disks are usually between 39 and 42C, even during a parity check, but somehow sometimes they start to get really hot and there's no activity or change in the system to justify it.


The only way to "fix" this is to shut down the machine and let it rest for 15 minutes; after the next boot everything is fine for days. I can't explain it, as the machine is basically idle at the moment and the fans are spinning as usual.


     

    Attached my diagnostic here.

    mammuth-diagnostics-20200713-1002.zip

    17 minutes ago, bubbl3 said:

    but somehow sometimes they start to get really hot and there's no activity or change in the system to justify that.

The temps come from the disks themselves, so they are really getting hot. When it happens again check your cooling; you might have a problem with one or more fans.

    12 minutes ago, johnnie.black said:

The temps come from the disks themselves, so they are really getting hot. When it happens again check your cooling; you might have a problem with one or more fans.

If the issue were cooling that would be an easy fix, but the fans look fine both in software and on inspection. I just can't explain this.

 

Even ramping the fans to 100% changes nothing; they are getting hot, but I just don't know why. Maybe for some reason they are never spinning down?

 

P.S. I mistakenly reported johnnie's reply, sorry!

    31 minutes ago, bubbl3 said:

    The only way to "fix" this is to shutdown the machine and let it rest for 15 minutes,

    Temperatures displayed in the GUI are coming directly from what a disk itself reports.

    It looks like you have some cooling issue.

     


@bonienl this is not a new machine; it has been running for months, no change has been made to it apart from upgrading Unraid, and all fans are spinning at 100%. It will be fine for days running everything, even parity checks, and then suddenly the disks get hot when the machine is basically idling. This is not my first rodeo - I've been working on servers for 25 years and can't explain what's happening; even the disk I/O is low.

    Share this comment


    Link to comment
    Share on other sites
    bonienl


    There isn't much I can do, the GUI does report the correct temperatures.

     

    You can set the tunable "poll attributes" to do faster updates of temperature readings, which may help you in your investigation.

     

     

    5 minutes ago, bonienl said:

    There isn't much I can do, the GUI does report the correct temperatures.

     

    Is it possible that some process is keeping them spinning? If I spin them down they do cool off.

     

This actually never happened on 6.8.3. I may try downgrading - I'd rather avoid it as it's probably gonna be a pain with the VMs, but if there's no other choice...

    7 minutes ago, bubbl3 said:

    Is it possible that some process is keeping them spinning?

    Hmm, beta25 has a revised implementation of SMART monitoring.

     

    I'll have a look at your diagnostics (and perhaps @limetech needs to look too)

     

    27 minutes ago, bubbl3 said:

    Even ramping fans at 100% changes nothing, they are getting hot, but I just don't know why. Maybe for some reason they are never spinning down?

    Even if there's an issue with spin down, and there might be if there were SMART changes, disks should never overheat if left always spun up, that suggests insufficient cooling.


    How is your spin-down delay set?

     

    In the logging I don't see any disks being spun down.

     

    2 minutes ago, johnnie.black said:

    Even if there's an issue with spin down, and there might be if there were SMART changes, disks should never overheat if left always spun up, that suggests insufficient cooling.

I would agree, but then why do they stay at a max of 42C during the parity sync, with all the disks active?

    1 minute ago, bonienl said:

    How is your spin-down delay set?

     

    In the logging I don't see any disks being spun down.

     

    No, it's set to default, should I manually set it?

    1 minute ago, bubbl3 said:

but why do they stay at a max of 42C during the parity sync, with all the disks active?

    That doesn't make much sense, they shouldn't get hotter than that just idling, or even during continued use.


    Try a short time as test, e.g. 15 minutes.

    See Settings -> Disk Settings -> Default spin down delay

    8 minutes ago, bonienl said:

    Try a short time as test, e.g. 15 minutes.

    See Settings -> Disk Settings -> Default spin down delay

    Done, let's see how it goes.


@bonienl temps are going down, but they haven't updated in the UI since boot 25 minutes ago. For example, I can see both SDH and SDI have much lower temps than reported in the UI.


     

EDIT: it took 40 mins for the UI to update, is that normal?


     

    EDIT2: not sure what is going on, they updated again 20 mins later

Not sure how setting spin down to 15 mins made temps lower so fast. Never had temp issues before, as I said; not even during a parity check did they get so hot.

    6 hours ago, Lev said:

    Let's play True or False....

    I would answer FALSE to all of them.

     

    I think you're confusing the format of the file system used on the cache (and now in multiple pools) with the format used by the docker image.

    37 minutes ago, bubbl3 said:

    took 40 mins for the UI to update, is that normal?

    Time is determined by the value of the poll_attributes setting.


     

    8 hours ago, Lev said:

btrfs has traditionally been the cache drive file system of preference if using Docker. T/F ?

False.  BTRFS is the default file system for the cache drive because the system allows you to easily expand from a single cache drive to a multiple-device pool.  If you're only running a single cache drive (and have no immediate plans to upgrade to a multi-device pool), XFS is the filesystem "recommended" by many users (including myself)

    8 hours ago, Lev said:

btrfs was the preference because it supported Copy on Write (COW), which was helpful for de-duplication of Docker images. T / F ?

The docker image required CoW because docker required it.  Think of the image as akin to mounting an ISO image on your Windows box.  The image was always formatted as BTRFS, regardless of the underlying filesystem.  IE: you can store that image file on XFS, BTRFS, ReiserFS, or (via UD) ZFS, NTFS, etc.

     

    8 hours ago, Lev said:

Moving forward I can now select XFS as my cache drive option where Docker images are stored, or use the new Docker folder @Squid pointed out, and gain all the benefits I had before using btrfs but with the trust and stability that XFS brings. T / F ?

    More or less true.  As said, you've always been able to have an XFS cache drive and the image stored on it.

     

     

The reason for the slightly different mounting options for an image is to reduce the unnecessary writes to the docker.img file.  There won't be a big difference (AFAIK) whether you choose a docker image formatted as btrfs or XFS.

     

    But, as I understand it any write to a loopback (ie: image file) is always going to incur extra IO to the underlying filesystem by its very nature.  Using a folder instead of an image completely removes those excess writes.

     

    You can choose to store the folder on either a BTRFS device or an XFS device.  The system will consume the same amount of space on either, because docker via overlay2 will properly handle duplicated layers etc between containers when it's on an XFS device.

     

BTRFS as the docker.img file does have some problems.  If it fills up to 100%, it doesn't recover very gracefully, and usually requires deleting the image and then recreating it and reinstalling your containers (a quick and painless procedure)

     

IMO, choosing a folder for the storage lowers my aggravation level in the forum because, by its nature, there is no real limit to the size that it takes (up to the size of the cache drive), so the recurring issues of "image filling up" for some users will disappear.   (And as a side note, this is how the system was originally designed in the very early 6.0 betas)

     

There are just a couple of caveats with the folder method, which are detailed in the OP (my quoted text).  

1. Cache-only share.  Simply referencing /mnt/cache/someShare/someFolder/ within the GUI isn't good enough.
2. Ideally within its own separate share (not necessary, but decreases the possibility of ever running New Permissions against the share)
3. This first revision of GUI support for folders has limitations that don't make how you do it exactly intuitive.  It will get improved by the next rev though.
4. Get over the fact that you can't view or modify any of the files (not that you ever need to) within the folder via SMB.  Just don't export it so that it doesn't drive your OCD nuts.

     

There are also still some glitches in the GUI when you use the folder method.  Notably, while you can stop the docker service, you cannot re-enable it via the GUI (Settings - Docker).  (You have to edit the docker.cfg file and re-enable the service there, and then stop/start the array)


Just to provide an update: it's been almost 4 hours now and my temps have been back to what I am used to.


The only change made was to set the spin-down time from the default to 15 mins. I guess I could probably go for something less aggressive, but will wait a couple of days.

    Thanks @bonienl for the help.





