  • Unraid OS version 6.9.0-beta22 available


    limetech

    Welcome (again) to 6.9 release development!

     

    This release will hopefully be the last beta before moving to the -rc phase.  The reason we still mark it beta is that we'd like to get wider testing of the new multiple-pool feature, as well as perhaps sneak in a couple more refinements.  With that in mind, the obligatory disclaimer:

     

    Important: Beta code is not fully tested and not feature-complete.  We recommend running on test servers only!

     

    That said, here's what's new in this release...

     

    Multiple Pools

    This feature permits you to define up to 35 named pools, of up to 30 storage devices per pool.  The current "cache pool" is now simply a pool named "cache".  Pools are created and managed via the Main page.

     

    Note: When you upgrade a server which has a cache pool defined, a backup of config/disk.cfg will be saved to config/disk.cfg.bak, and then cache device assignment settings are moved out of disk.cfg and into a new file, config/pools/cache.cfg.  If you later revert to a pre-6.9 Unraid OS release you will lose your cache device assignments and will have to manually re-assign devices to cache.  As long as you reassign the correct devices, data should remain intact.
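    The resulting flash layout can be sketched as follows (scratch paths and file contents below are made up for illustration; the real migration moves only the cache keys out of disk.cfg, not the whole file):

```shell
# Simulate the flash config layout before the upgrade:
mkdir -p /tmp/flash/config
printf 'cacheId="Samsung_SSD_870"\n' > /tmp/flash/config/disk.cfg

# The upgrade first backs up the combined file...
cp /tmp/flash/config/disk.cfg /tmp/flash/config/disk.cfg.bak

# ...then cache device assignments end up in their own pool file:
mkdir -p /tmp/flash/config/pools
cp /tmp/flash/config/disk.cfg /tmp/flash/config/pools/cache.cfg
ls /tmp/flash/config/pools
```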

     

    When you create a user share, or edit an existing user share, you can specify which pool should be associated with that share.  The assigned pool functions identically to current cache pool operation.

     

    Something to be aware of: when a directory listing is obtained for a share, the unRAID array disk volumes and all pools which contain that share are merged in this order:

      pool assigned to share

      disk1

      :

      disk28

      all the other pools in strverscmp() order.
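    The "strverscmp() order" used for the remaining pools is natural version ordering, which GNU `sort -V` approximates. A quick way to predict the merge order (pool names below are made up):

```shell
# Hypothetical pools besides the one assigned to the share.
# strverscmp() compares numeric runs by value, so pool2 sorts before pool10:
printf '%s\n' pool10 pool2 alpha | sort -V
```

    Plain lexicographic `sort` would instead put pool10 before pool2.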

     

    As with the current "cache pool", a single-device pool may be formatted with either xfs, btrfs, or reiserfs.  A multiple-device pool may only be formatted with btrfs.  A future release will include support for multiple "unRAID array" pools.  We are also considering zfs support.

     

    Something else to be aware of: let's say you have a 2-device btrfs pool.  This will be what btrfs calls "raid1", and what most people would understand to be "mirrored disks".  This is mostly true, in that the same data exists on both disks, but not necessarily at the block level.  Now let's say you create another pool, and unassign one of the devices from the existing 2-device btrfs pool and assign it to the new pool.  You now have two 1-device btrfs pools, and upon array Start a user might understandably assume there are now two pools with exactly the same data.  However, this is not the case.  Instead, when Unraid OS sees that a btrfs device has been removed from an existing multi-device pool, upon array Start it will do a 'wipefs' on that device so that upon mount it will not be included in the old pool.  This of course effectively deletes all the data on the moved device.

     

    Language Translation

    A huge amount of work and effort has been implemented by @bonienl to provide multiple-language support in the Unraid OS Management Utility, aka, webGUI.  There are several language packs now available, and several more in the works.  Thanks to @Squid, language packs are installed via the Community Applications plugin - look for a new category entitled Language.

     

    Note: Community Applications HAS to be up to date to install languages.  Versions of CA prior to 2020.05.12 will not even load on this release.  As of this writing, the current version of CA is 2020.06.13a.  See also here.

     

    Each language pack exists in a public Unraid organization github repo.  Interested users are encouraged to clone them and issue Pull Requests to correct translation errors.  Language translations and PR merging are managed by @SpencerJ.

     

    Linux Kernel

    Upgraded to 5.7.

     

    Unfortunately, none of the out-of-tree drivers compile with this kernel.  In particular, these drivers are omitted:

    • Highpoint RocketRaid r750
    • Highpoint RocketRaid rr3740a
    • Tehuti Networks tn40xx

    If you require one of these drivers, please create a Bug Report and we'll spend some time looking for alternatives.  Better yet, pester the manufacturer of the controller and get them to update their drivers.

     

    Base Packages

    All updated to latest versions.  In addition, Linux PAM has been integrated.  This will permit us to install 2-factor authentication packages in a future release.

     

    Docker

    Updated to version 19.03.11

     

    It is also now possible to select different icons for multiple containers of the same type.  This change necessitates a re-download of the icons for all your installed docker applications.  Expect a delay when initially loading either the dashboard or the docker tab while this happens, before the containers show up.

     

    Virtualization

    libvirt updated to version 6.4.0

    qemu updated to version 5.0.0

     

    In addition, integrated changes to System Devices page by user @Skitals with modifications by user @ljm42.  You can now select PCI devices to isolate from Linux upon boot simply by checking some boxes.  This makes it easier to reserve those devices for assignment to VM's.

     

    Note: If you had the VFIO-PCI Config plugin installed, you should remove it, as that functionality is now built into Unraid OS 6.9.  Refer also to @ljm42's excellent guide.

     

    In a future release we will include the NVIDIA and AMD GPU drivers natively in Unraid OS.  The primary use case is to facilitate accelerated transcoding in docker containers.  For this we require Linux to detect and auto-install the appropriate driver.  However, in order to reliably pass through an NVIDIA or AMD GPU to a VM, it's necessary to prevent Linux from auto-installing a GPU driver for those devices upon boot, which can now easily be done through the System Devices page.  Users passing GPU's to VM's are encouraged to set this up now.

     

    "unexpected GSO errors"

     

    If your system log is being flooded with errors such as:

    Jun 20 09:09:21 Tower kernel: tun: unexpected GSO type: 0x0, gso_size 31, hdr_len 66

    You need to edit each VM and change the model type for the Ethernet bridge from "virtio" to "virtio-net".  In most cases this can be accomplished simply by clicking Update in "Form View" on the VM Edit page.  For other network configs it may be necessary to directly edit the xml.  For example:

    <interface type='bridge'>
          <mac address='xx:xx:xx:xx:xx:xx'/>
          <source bridge='br0'/>
          <model type='virtio-net'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
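    For scripted edits, the same model change can be applied with sed to a dumped domain definition (a sketch only; the snippet below works on a throwaway file standing in for XML you would dump with something like `virsh dumpxml <vm-name>`, and you should back up the real file first):

```shell
# Create a sample interface definition standing in for a dumped VM config:
cat > /tmp/iface.xml <<'EOF'
<interface type='bridge'>
  <source bridge='br0'/>
  <model type='virtio'/>
</interface>
EOF

# Switch the NIC model from virtio to virtio-net:
sed -i "s|<model type='virtio'/>|<model type='virtio-net'/>|" /tmp/iface.xml
grep 'model type' /tmp/iface.xml
```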

     

    Other

    • AFP support has been removed.
    • Numerous other Unraid OS and webGUI bug fixes and improvements.

     

    Version 6.9.0-beta22 2020-06-16

     

    Caution! This is beta software; consider using it on test servers only.

     

    Base distro:

    • aaa_base: version 14.2
    • aaa_elflibs: version 15.0 build 23
    • acl: version 2.2.53
    • acpid: version 2.0.32
    • apcupsd: version 3.14.14
    • at: version 3.2.1
    • attr: version 2.4.48
    • avahi: version 0.8
    • bash: version 5.0.017
    • beep: version 1.3
    • bin: version 11.1
    • bluez-firmware: version 1.2
    • bridge-utils: version 1.6
    • brotli: version 1.0.7
    • btrfs-progs: version 5.6.1
    • bzip2: version 1.0.8
    • ca-certificates: version 20191130 build 1
    • celt051: version 0.5.1.3
    • cifs-utils: version 6.10
    • coreutils: version 8.32
    • cpio: version 2.13
    • cpufrequtils: version 008
    • cryptsetup: version 2.3.3
    • curl: version 7.70.0
    • cyrus-sasl: version 2.1.27
    • db48: version 4.8.30
    • dbus: version 1.12.18
    • dcron: version 4.5
    • devs: version 2.3.1 build 25
    • dhcpcd: version 8.1.9
    • diffutils: version 3.7
    • dmidecode: version 3.2
    • dnsmasq: version 2.81
    • docker: version 19.03.11
    • dosfstools: version 4.1
    • e2fsprogs: version 1.45.6
    • ebtables: version 2.0.11
    • eject: version 2.1.5
    • elvis: version 2.2_0
    • etc: version 15.0
    • ethtool: version 5.7
    • eudev: version 3.2.5
    • file: version 5.38
    • findutils: version 4.7.0
    • flex: version 2.6.4
    • floppy: version 5.5
    • fontconfig: version 2.13.92
    • freetype: version 2.10.2
    • fuse3: version 3.9.1
    • gawk: version 4.2.1
    • gd: version 2.2.5
    • gdbm: version 1.18.1
    • genpower: version 1.0.5
    • getty-ps: version 2.1.0b
    • git: version 2.27.0
    • glib2: version 2.64.3
    • glibc-solibs: version 2.30
    • glibc-zoneinfo: version 2020a build 1
    • glibc: version 2.30
    • gmp: version 6.2.0
    • gnutls: version 3.6.14
    • gptfdisk: version 1.0.5
    • grep: version 3.4
    • gtk+3: version 3.24.20
    • gzip: version 1.10
    • harfbuzz: version 2.6.7
    • haveged: version 1.9.8
    • hdparm: version 9.58
    • hostname: version 3.23
    • htop: version 2.2.0
    • icu4c: version 67.1
    • inetd: version 1.79s
    • infozip: version 6.0
    • inotify-tools: version 3.20.2.2
    • intel-microcode: version 20200609
    • iproute2: version 5.7.0
    • iptables: version 1.8.5
    • iputils: version 20190709
    • irqbalance: version 1.6.0
    • jansson: version 2.13.1
    • jemalloc: version 4.5.0
    • jq: version 1.6
    • keyutils: version 1.6.1
    • kmod: version 27
    • lbzip2: version 2.5
    • lcms2: version 2.10
    • less: version 551
    • libaio: version 0.3.112
    • libarchive: version 3.4.3
    • libcap-ng: version 0.7.10
    • libcgroup: version 0.41
    • libdaemon: version 0.14
    • libdrm: version 2.4.102
    • libedit: version 20191231_3.1
    • libestr: version 0.1.11
    • libevent: version 2.1.11
    • libfastjson: version 0.99.8
    • libffi: version 3.3
    • libgcrypt: version 1.8.5
    • libgpg-error: version 1.38
    • libgudev: version 233
    • libidn: version 1.35
    • libjpeg-turbo: version 2.0.4
    • liblogging: version 1.0.6
    • libmnl: version 1.0.4
    • libnetfilter_conntrack: version 1.0.8
    • libnfnetlink: version 1.0.1
    • libnftnl: version 1.1.7
    • libnl3: version 3.5.0
    • libpcap: version 1.9.1
    • libpciaccess: version 0.16
    • libpng: version 1.6.37
    • libpsl: version 0.21.0
    • librsvg: version 2.48.7
    • libseccomp: version 2.4.3
    • libssh2: version 1.9.0
    • libssh: version 0.9.4
    • libtasn1: version 4.16.0
    • libtirpc: version 1.2.6
    • libunistring: version 0.9.10
    • libusb-compat: version 0.1.5
    • libusb: version 1.0.23
    • libuv: version 1.34.0
    • libvirt-php: version 0.5.5
    • libvirt: version 6.4.0
    • libwebp: version 1.1.0
    • libwebsockets: version 3.2.2
    • libx86: version 1.1
    • libxml2: version 2.9.10
    • libxslt: version 1.1.34
    • libzip: version 1.7.0
    • lm_sensors: version 3.6.0
    • logrotate: version 3.16.0
    • lshw: version B.02.17
    • lsof: version 4.93.2
    • lsscsi: version 0.31
    • lvm2: version 2.03.09
    • lz4: version 1.9.1
    • lzip: version 1.21
    • lzo: version 2.10
    • mc: version 4.8.24
    • miniupnpc: version 2.1
    • mpfr: version 4.0.2
    • nano: version 4.9.3
    • ncompress: version 4.2.4.6
    • ncurses: version 6.2
    • net-tools: version 20181103_0eebece
    • nettle: version 3.6
    • network-scripts: version 15.0 build 9
    • nfs-utils: version 2.1.1
    • nghttp2: version 1.41.0
    • nginx: version 1.16.1
    • nodejs: version 13.12.0
    • nss-mdns: version 0.14.1
    • ntfs-3g: version 2017.3.23
    • ntp: version 4.2.8p14
    • numactl: version 2.0.11
    • oniguruma: version 6.9.1
    • openldap-client: version 2.4.49
    • openssh: version 8.3p1
    • openssl-solibs: version 1.1.1g
    • openssl: version 1.1.1g
    • p11-kit: version 0.23.20
    • patch: version 2.7.6
    • pciutils: version 3.7.0
    • pcre2: version 10.35
    • pcre: version 8.44
    • php: version 7.4.7 (CVE-2019-11048)
    • pixman: version 0.40.0
    • pkgtools: version 15.0 build 33
    • pm-utils: version 1.4.1
    • procps-ng: version 3.3.16
    • pv: version 1.6.6
    • qemu: version 5.0.0
    • qrencode: version 4.0.2
    • reiserfsprogs: version 3.6.27
    • rpcbind: version 1.2.5
    • rsync: version 3.1.3
    • rsyslog: version 8.2002.0
    • samba: version 4.12.3 (CVE-2020-10700, CVE-2020-10704)
    • sdparm: version 1.11
    • sed: version 4.8
    • sg3_utils: version 1.45
    • shadow: version 4.8.1
    • shared-mime-info: version 2.0
    • smartmontools: version 7.1
    • spice: version 0.14.1
    • sqlite: version 3.32.2
    • ssmtp: version 2.64
    • sudo: version 1.9.0
    • sysfsutils: version 2.1.0
    • sysvinit-scripts: version 2.1 build 31
    • sysvinit: version 2.96
    • talloc: version 2.3.1
    • tar: version 1.32
    • tcp_wrappers: version 7.6
    • tdb: version 1.4.3
    • telnet: version 0.17
    • tevent: version 0.10.2
    • traceroute: version 2.1.0
    • tree: version 1.8.0
    • ttyd: version 20200606
    • usbredir: version 0.7.1
    • usbutils: version 012
    • utempter: version 1.2.0
    • util-linux: version 2.35.2
    • vbetool: version 1.2.2
    • vsftpd: version 3.0.3
    • wget: version 1.20.3
    • which: version 2.21
    • wireguard-tools: version 1.0.20200513
    • wsdd: version 20180618
    • xfsprogs: version 5.6.0
    • xkeyboard-config: version 2.30
    • xorg-server: version 1.20.8
    • xterm: version 356
    • xz: version 5.2.5
    • yajl: version 2.1.0
    • zlib: version 1.2.11
    • zstd: version 1.4.5

    Linux kernel:

    • version 5.7.2
    • CONFIG_WIREGUARD: WireGuard secure network tunnel
    • CONFIG_IP_SET: IP set support
    • CONFIG_SENSORS_DRIVETEMP: Hard disk drives with temperature sensors
    • enabled additional hwmon native drivers
    • enabled additional hyperv drivers
    • firmware added:
    • BCM20702A1-0b05-180a.hcd
    • out-of-tree driver status:
    • igb: using in-tree version
    • ixgbe: using in-tree version
    • r8125: using in-tree version
    • r750: (removed)
    • rr3740a: (removed)
    • tn40xx: (removed)

    Management:

    • AFP support removed
    • Multiple pool support added
    • Multi-language support added
    • avoid sending spinup/spindown to non-rotational devices
    • get rid of 'system' plugin support (never used)
    • integrate PAM
    • integrate ljm42 vfio-pci script changes
    • webgui: turn off username autocomplete in login form
    • webgui: Added new display setting: show normalized or raw device identifiers
    • webgui: Add 'Portuguese (pt)' key map option for libvirt
    • webgui: Added "safe mode" one-shot safemode reboot option
    • webgui: Tabbed case select window
    • webgui: Updated case icons
    • webgui: Show message when too many files for browsing
    • webgui: Main page: hide Move button when user shares are not enabled
    • webgui: VMs: change default network model to virtio-net
    • webgui: Allow duplicate containers different icons
    • webgui: Allow markdown within container descriptions
    • webgui: Fix Banner Warnings Not Dismissing without reload of page
    • webgui: Network: allow metric value of zero to set no default gateway
    • webgui: Network: fix privacy extensions not set
    • webgui: Network settings: show first DNSv6 server
    • webgui: SysDevs overhaul with vfio-pci.cfg binding
    • webgui: Icon buttons re-arrangement
    • webgui: Add update dialog to docker context menu
    • webgui: Update Feedback.php
    • webgui: Use update image dialog for update entry in docker context menu
    • webgui: Task Plugins: Providing Ability to define Display_Name



    User Feedback

    Recommended Comments



    9 hours ago, alturismo said:

    Hi, has anyone else come across the array hanging when stopping it?

     

    I tried a macinabox install for some purpose, so the docker container was created etc ...

     

    Now, when I hit stop array, the machine stays for a very long time at

     

    [screenshot of the array stop status]

     

    Looks like unmounting a UD disk takes very long, also the UD shares etc.; log attached.

    Attached: alsserver-syslog-20200620-0909.zip

    It looks like the remote server is only accepting SMB1 connections.

     

    Please post your issue in the Unassigned Devices forum and post full diagnostics.


    Today I noticed that my log directory is full. I saw that my log is spammed with GSO errors that look like this:

     

    Jun 21 07:21:33 chipsServer kernel: tun: unexpected GSO type: 0x0, gso_size 31, hdr_len 66
    Jun 21 07:21:33 chipsServer kernel: tun: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
    Jun 21 07:21:33 chipsServer kernel: tun: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
    Jun 21 07:21:33 chipsServer kernel: tun: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
    Jun 21 07:21:33 chipsServer kernel: tun: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
    Jun 21 07:21:33 chipsServer kernel: tun: unexpected GSO type: 0x0, gso_size 31, hdr_len 66
    Jun 21 07:21:33 chipsServer kernel: tun: 40 b7 92 0b 00 ea ff ff 00 10 00 00 00 00 00 00 @...............
    Jun 21 07:21:33 chipsServer kernel: tun: c0 6d 79 17 00 ea ff ff 00 10 00 00 00 00 00 00 .my.............
    Jun 21 07:21:33 chipsServer kernel: tun: 80 43 50 1e 00 ea ff ff 00 10 00 00 00 00 00 00 .CP.............
    Jun 21 07:21:33 chipsServer kernel: tun: 00 e0 44 1b 00 ea ff ff 00 10 00 00 00 00 00 00 ..D.............
    Jun 21 07:21:33 chipsServer kernel: tun: unexpected GSO type: 0x0, gso_size 39, hdr_len 66
    Jun 21 07:21:33 chipsServer kernel: tun: 00 5a ee 01 00 ea ff ff 00 00 00 00 00 80 00 00 .Z..............
    Jun 21 07:21:33 chipsServer kernel: tun: 00 60 0a a8 00 00 00 00 00 80 00 00 00 00 00 00 .`..............
    Jun 21 07:21:33 chipsServer kernel: tun: 80 e7 d0 01 00 ea ff ff 00 00 00 00 00 20 00 00 ............. ..
    Jun 21 07:21:33 chipsServer kernel: tun: 00 e0 0a a8 00 00 00 00 00 20 00 00 00 00 00 00 ......... ......
    Jun 21 07:21:37 chipsServer kernel: tun: unexpected GSO type: 0x0, gso_size 31, hdr_len 66
    Jun 21 07:21:37 chipsServer kernel: tun: 00 08 ed 13 00 ea ff ff 00 10 00 00 00 00 00 00 ................
    Jun 21 07:21:37 chipsServer kernel: tun: c0 77 ab 1c 00 ea ff ff 00 10 00 00 00 00 00 00 .w..............
    Jun 21 07:21:37 chipsServer kernel: tun: c0 e4 d2 09 00 ea ff ff 00 10 00 00 00 00 00 00 ................
    Jun 21 07:21:37 chipsServer kernel: tun: 80 3c c7 12 00 ea ff ff 00 10 00 00 00 00 00 00 .<..............
    Jun 21 07:21:37 chipsServer kernel: tun: unexpected GSO type: 0x0, gso_size 31, hdr_len 66
    Jun 21 07:21:37 chipsServer kernel: tun: c0 d6 4e 0e 00 ea ff ff 00 10 00 00 00 00 00 00 ..N.............
    Jun 21 07:21:37 chipsServer kernel: tun: c0 2e 67 16 00 ea ff ff 00 10 00 00 00 00 00 00 ..g.............
    Jun 21 07:21:37 chipsServer kernel: tun: 40 75 89 1e 00 ea ff ff 00 10 00 00 00 00 00 00 @u..............
    Jun 21 07:21:37 chipsServer kernel: tun: c0 69 39 0f 00 ea ff ff 00 10 00 00 00 00 00 00 .i9.............
    Jun 21 07:21:37 chipsServer kernel: tun: unexpected GSO type: 0x0, gso_size 39, hdr_len 66
    Jun 21 07:21:37 chipsServer kernel: tun: 00 03 18 1c 00 ea ff ff 00 00 00 00 00 10 00 00 ................
    Jun 21 07:21:37 chipsServer kernel: tun: 00 50 c6 a6 00 00 00 00 00 10 00 00 00 00 00 00 .P..............
    Jun 21 07:21:37 chipsServer kernel: tun: 80 d9 87 08 00 ea ff ff 00 00 00 00 00 10 00 00 ................
    Jun 21 07:21:37 chipsServer kernel: tun: 00 60 c6 a6 00 00 00 00 00 10 00 00 00 00 00 00 .`..............
    Jun 21 07:21:55 chipsServer kernel: tun: unexpected GSO type: 0x0, gso_size 39, hdr_len 66
    Jun 21 07:21:55 chipsServer kernel: tun: 65 3a 38 31 20 66 5f 68 61 6e 64 6c 65 3a 64 34 e:81 f_handle:d4
    Jun 21 07:21:55 chipsServer kernel: tun: 39 34 33 39 31 64 30 30 30 30 30 30 30 30 66 62 94391d00000000fb
    Jun 21 07:21:55 chipsServer kernel: tun: 64 31 35 34 37 36 0a 69 6e 6f 74 69 66 79 20 77 d15476.inotify w
    Jun 21 07:21:55 chipsServer kernel: tun: 64 3a 34 61 61 35 20 69 6e 6f 3a 31 61 34 39 63 d:4aa5 ino:1a49c

     

    Can somebody help?

    I'm on the latest Unraid 6.9.0-beta22 and use a Mellanox ConnectX-2 Single Port 10GbE SFP+ MNPA19-XTR card with a 10Gbit SFP+ module.


    You have any VMs using virtio for the network bridge? Change the network from virtio to virtio-net.

    7 minutes ago, david279 said:

    You have any VMs using virtio for the network bridge? Change the network from virtio to virtio-net.

    I use 'br0' for all my VM's and half of my Docker Containers.

    1 minute ago, ich777 said:

    I use 'br0' for all my VM's and half of my Docker Containers.

    I mean the model type for the ethernet bridge

     

    <interface type='bridge'>
          <mac address='52:54:00:36:3e:6d'/>
          <source bridge='br0'/>
          <model type='virtio-net'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>

    It should look like that.


    ich777


    23 minutes ago, david279 said:

    It should look like that.

    Thank you will try that ;)

     

    EDIT: Thank you @david279 that seems to solve the problem ;)



    Just opened a bug report for the missing QLGE driver; it should be in-tree but has been moved to staging since the hardware is EOL. The driver seems to work when I recompile the kernel in Ubuntu 20.04; it would be great to have it included in the 6.9 kernel 🙏


    With the new cache pools being an option, more people will be messing with them, I'm sure.

     

    Is it possible to add or change the raid 5/6 settings to: metadata in RAID1c3/c4 (aka, 3 or 4 copies of metadata) and data in RAID5/6 respectively?

     

    From what I've read, with metadata in c3/c4, raid5/6 has proven pretty stable recently, with the only real risk noted being files actively being written, and virtually no chance of complete file system failure.  Stable enough that I would consider messing with it anyway.
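    For reference, btrfs can convert an existing pool's profiles in place with a balance. A hedged sketch (the mount point is hypothetical, raid1c3 requires kernel 5.5+ and btrfs-progs 5.5+, and nothing runs unless that path is actually a mounted pool):

```shell
POOL=/mnt/pool2   # hypothetical second pool
if mountpoint -q "$POOL"; then
  # Three copies of metadata, single-parity striping for data:
  btrfs balance start -mconvert=raid1c3 -dconvert=raid5 "$POOL"
  btrfs filesystem df "$POOL"   # verify the new profiles
fi
```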



    Beta22 already? Man you guys are working fast. And that's a major release, kudos to the diligence done in testing and fixes!


    Hey guys, been away a while as I wanted ZFS.

    I saw in the release notes it speaks about pools. 

     

    Does this mean that you can pool drives to make 2 or 3 pools so that you get better throughput when transferring data? Or do massive uploads that are larger than the cache?

     

    Just wondering if there is a bit more info on this part?

    Sorry I haven't read all 5 pages.


    Basically, when you specify the Use cache setting for any user share, you get to choose which cache pool that user share uses when it uses cache. It still works the same as cache, just different caches. So you can have separate caches for separate purposes.

    On 6/20/2020 at 4:02 AM, qwijibo said:

    User Shares set to Cache Only are now showing as All Files Protected in the Shares tab.

    I only have the 1 cache drive. Are cache shares now part of parity? 

    No, it's a bug.

    On 6/19/2020 at 1:18 PM, limetech said:

    Took a while but found the patches submitted upstream for kernel 5.8:

     

    https://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci.git/commit/?id=0d14f06cd6657ba3446a5eb780672da487b068e7

    and

    https://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci.git/commit/?id=5727043c73fdfe04597971b5f3f4850d879c1f4f

     

    These patches will not work against 5.7 (or earlier) kernels as-is because the file they patch has several other changes.

     

    However I see what they're doing, and adding this to our 5.7 kernel is pretty easy, BUT how crucial is it to add this now?  Meaning, from your forum link people have added several other patches besides this set; have to say, not much time to analyze all those as well.  If this set of patches by itself will be useful to a lot of folks we can do it, otherwise I'd say wait until the 5.8 kernel is released.

     

    I added those patches for the next release.



    This may also still be open here: exit in the web terminal reopens a new session instead of closing it, 99 times out of 100 at least ...

     

    Some small glitches I "feel": hitting stop on my win10 VM's doesn't stop them anymore; I have to either remote login and shut down, or use virsh shutdown ...

    17 hours ago, trurl said:

    Basically, when you specify the Use cache setting for any user share, you get to choose which cache pool that user share uses when it uses cache. It still works the same as cache, just different caches. So you can have separate caches for separate purposes.

    I'm super curious what the reasoning was behind adding this. What's an appropriate use case? Why would I want to separate which cache goes where? No sarcasm, I'm genuinely curious.


    One use case I will be using right off the bat is having a separate cache for docker and appdata formatted as XFS, to prevent the 10x-100x inflated writes that happen with a BTRFS cache.

     

    It is also a way of adding more than 30 drives if someone needed that.

     

    A second cache pool could be used as a more "classic" NAS with raid and apparently possible ZFS support in the future, really pushing into freeNAS territory there.

     

    Or simply set up cache pools based on usage and speed needs. For example, a scratch drive that doesn't need redundancy, with a raid 0 setup on less trustworthy drives.

     

    Another high-speed cache with NVMe drives for working projects.

     

    Then a high-stability pool for normal writes to the array cache, using raid1 and very good drives with a very low chance of failure.

     

    Just the first things that came to mind. If they make it, people will find uses for it, that is for sure.

     

    For example, this makes a tiered storage system fairly easy to implement in the future; this is a use case I would use for sure.

     

    Tiered storage automatically moves recently and frequently used data to faster storage tiers, and less-used or old data to slower tiers.


    38 minutes ago, TexasUnraid said:

    One use case I will be using it for off the bat would be having a separate cache for docker and appdata formatted as XFS to prevent the 10x - 100x inflated writes that happen with a BTRFS cache.

    Do you have more info on this?  Currently using BTRFS RAID10 for all my caching, including docker/appdata.


    1 hour ago, Dephcon said:

    Do you have more info on this?  Currently using BTRFS RAID10 for all my caching, including docker/appdata.

    Whole lot of information in this thread:

     

    Basically, people are seeing writes many, many times higher than they should be when docker/appdata is stored on a BTRFS drive.

     

    Some are reporting TBs written every day (I think someone said 20TB a day!), SSDs overheating due to all the writes, and SSDs burning through their warranty write endurance in a matter of months.

     

    I didn't have the extreme writes some had, but with docker and appdata on the btrfs drive I was seeing 7GB/hour and climbing over time.

     

    Moving docker and appdata to an XFS drive dropped writes to ~200MB/hour, holding steady or dropping over time.
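    A rough way to measure write rates like those discussed here is the sectors-written counter in /proc/diskstats (field 10, counted in 512-byte sectors); the device-name pattern below is just an example:

```shell
# Print cumulative MiB written per sd/nvme device; run this twice an hour
# apart and subtract to get the hourly write rate.
awk '$3 ~ /^sd[a-z]+$|^nvme[0-9]+n[0-9]+$/ {
  printf "%-10s %10.1f MiB written\n", $3, $10 * 512 / 1048576
}' /proc/diskstats
```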



    Not sure if anyone posted this already, but I installed the beta a couple of days ago and I'm getting strange behavior with dockers.

    The first time, half the running dockers just didn't work anymore out of the blue, giving me read-only file system errors. I restarted the server and everything worked.

    Recently I tried to update Handbrake, and in the middle of the update it gave read-only file system errors.

    The docker is now present in the list with a question mark and "not available" in the version column, and I cannot remove/reinstall it. It gives me a generic server error.  I even tried to remove it from the terminal:

    root@UNRAIDSRV:~# docker rmi cf94ba0a9bd0
    Error response from daemon: open /var/lib/docker/image/btrfs/.tmp-repositories.json462281471: read-only file system
    

    Or reinstall it to no avail

     

    Not sure if this could be the beta build or something else.

     

    One more thing: if I go to Settings -> Docker and do a Scrub, it just doesn't start. Duration 0, status aborted, with or without correct file system errors.
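    When btrfs hits errors it typically remounts the filesystem read-only, which would match these symptoms. A quick hedged check for any read-only mounts (on a stock setup the docker loopback image is mounted at /var/lib/docker, so it appearing here would explain the errors):

```shell
# List mounts currently flagged read-only (mount options are field 4):
awk '$4 ~ /(^|,)ro(,|$)/ {print $2, $3, $4}' /proc/mounts
```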


    30 minutes ago, exico said:

    Not sure if this could be the beta build or something else.

    This is likely a general support issue, please start a thread in the general support forum and don't forget to include the diagnostics: Tools -> Diagnostics


    I don't think it's a general support issue... let me explain:

     

    I had the time to restore the USB backup I made before installing beta 22.

    Now I can delete the docker, which I couldn't with the beta, and the scrub works instead of stopping with 0 seconds and aborted as status.

     

    42 minutes ago, exico said:

    I dont think its a general support issue

    In any case you should post your diagnostics from this release if you want help/want to help.


    Anyone getting Kernel Security Check Failure when trying to boot from Q35 5.0 with this update on a Windows 10 VM?





