• Unraid OS version 6.9.0-beta22 available


    limetech

    Welcome (again) to 6.9 release development!

     

    This release is, hopefully, the last beta before we move to the -rc phase.  The reason we still mark it beta is that we'd like to get wider testing of the new multiple-pool feature, as well as perhaps sneak in a couple more refinements.  With that in mind, the obligatory disclaimer:

     

    Important: Beta code is not fully tested and not feature-complete.  We recommend running on test servers only!

     

    That said, here's what's new in this release...

     

    Multiple Pools

    This feature permits you to define up to 35 named pools, with up to 30 storage devices per pool.  The current "cache pool" is now simply a pool named "cache".  Pools are created and managed via the Main page.

     

    Note: When you upgrade a server which has a cache pool defined, a backup of config/disk.cfg will be saved to config/disk.cfg.bak, and the cache device assignment settings are moved out of disk.cfg and into a new file, config/pools/cache.cfg.  If you later revert to a pre-6.9 Unraid OS release, you will lose your cache device assignments and will have to manually re-assign devices to cache.  As long as you reassign the correct devices, data should remain intact.
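
    If you want extra insurance before upgrading or reverting, a minimal sketch (assuming the flash device is mounted at /boot, as on a stock install) is to copy the relevant config files somewhere safe first:

    # hedged example: snapshot the disk/pool assignment files before changing versions
    mkdir -p /boot/config-backup
    cp /boot/config/disk.cfg /boot/config-backup/
    cp -r /boot/config/pools /boot/config-backup/   # pools/ only exists on 6.9+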

     

    When you create a user share, or edit an existing user share, you can specify which pool should be associated with that share.  The assigned pool functions identically to the current cache pool.

     

    Something to be aware of: when a directory listing is obtained for a share, the unRAID array disk volumes and all pools which contain that share are merged in this order:

      pool assigned to share

      disk1

      ...

      disk28

      all the other pools in strverscmp() order.
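
    For a quick feel of that last rule, GNU 'sort -V' uses a version-sort comparison close to strverscmp(), so it can be used to preview how a set of pool names will be ordered (the names below are made up):

    printf '%s\n' cache pool2 pool10 backup | sort -V
    # backup
    # cache
    # pool2
    # pool10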

     

    As with the current "cache pool", a single-device pool may be formatted with xfs, btrfs, or reiserfs.  A multiple-device pool may only be formatted with btrfs.  A future release will include support for multiple "unRAID array" pools.  We are also considering zfs support.

     

    Something else to be aware of: say you have a 2-device btrfs pool.  This is what btrfs calls "raid1", and what most people would understand to be "mirrored disks".  That is mostly true, in that the same data exists on both disks, though not necessarily at the block level.  Now say you create another pool, and what you do is unassign one of the devices from the existing 2-device btrfs pool and assign it to the new pool.  You now have two 1-device btrfs pools.  Upon array Start, a user might understandably assume there are now two pools with exactly the same data.  However, this is not the case.  Instead, when Unraid OS sees that a btrfs device has been removed from an existing multi-device pool, upon array Start it will run 'wipefs' on that device so that, upon mount, it will not be included in the old pool.  This effectively deletes all data on the moved device.
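
    If you do want to split a pool without losing data, one conservative approach is to copy everything off before unassigning the device.  A rough sketch only, assuming pools mount at the usual /mnt/<poolname> locations (the destination path is a placeholder):

    # with the array started, copy the pool's contents somewhere safe
    rsync -a /mnt/cache/ /mnt/disk1/cache-backup/
    # then stop the array, unassign the device, and start the array again
    # (Unraid will wipefs the removed device); finally copy data onto the new pool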

     

    Language Translation

    A huge amount of work and effort has been put in by @bonienl to provide multiple-language support in the Unraid OS Management Utility, aka the webGUI.  There are several language packs available now, and several more in the works.  Thanks to @Squid, language packs are installed via the Community Applications plugin - look for a new category entitled Language.

     

    Note: Community Applications HAS to be up to date to install languages.  Versions of CA prior to 2020.05.12 will not even load on this release.  As of this writing, the current version of CA is 2020.06.13a.  See also here.

     

    Each language pack exists in a public Unraid organization GitHub repo.  Interested users are encouraged to clone and issue Pull Requests to correct translation errors.  Language translations and PR merging are managed by @SpencerJ.

     

    Linux Kernel

    Upgraded to 5.7.

     

    Unfortunately, none of the out-of-tree drivers compile with this kernel.  In particular, these drivers are omitted:

    • Highpoint RocketRaid r750
    • Highpoint RocketRaid rr3740a
    • Tehuti Networks tn40xx

    If you require one of these drivers, please create a Bug Report and we'll spend some time looking for alternatives.  Better yet, pester the manufacturer of the controller and get them to update their drivers.

     

    Base Packages

    All updated to latest versions.  In addition, Linux PAM has been integrated.  This will permit us to install 2-factor authentication packages in a future release.

     

    Docker

    Updated to version 19.03.11

     

    Also now possible to select different icons for multiple containers of the same type.  This change necessitates a re-download of the icons for all your installed docker applications.  Expect a delay the first time you load the dashboard or the docker tab while this happens, before the containers show up.

     

    Virtualization

    libvirt updated to version 6.4.0

    qemu updated to version 5.0.0

     

    In addition, integrated changes to the System Devices page by user @Skitals with modifications by user @ljm42.  You can now select PCI devices to isolate from Linux upon boot simply by checking some boxes.  This makes it easier to reserve those devices for assignment to VMs.

     

    Note: If you had the VFIO-PCI Config plugin installed, you should remove it, as that functionality is now built into Unraid OS 6.9.  Refer also to @ljm42's excellent guide.

     

    In a future release we will include the NVIDIA and AMD GPU drivers natively in Unraid OS.  The primary use case is to facilitate accelerated transcoding in docker containers.  For this we require Linux to detect and auto-install the appropriate driver.  However, in order to reliably pass through an NVIDIA or AMD GPU to a VM, it's necessary to prevent Linux from auto-installing a GPU driver for those devices upon boot, which can now easily be done through the System Devices page.  Users passing GPUs to VMs are encouraged to set this up now.
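
    If you're unsure which boxes to check, a generic (not Unraid-specific) shell sketch for listing PCI devices by IOMMU group can help identify what must be isolated together:

    # list every PCI device grouped by IOMMU group; devices sharing a
    # group generally have to be passed through together
    for g in /sys/kernel/iommu_groups/*; do
        echo "IOMMU group ${g##*/}:"
        for d in "$g"/devices/*; do
            lspci -nns "${d##*/}"
        done
    done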

     

    "unexpected GSO errors"

     

    If your system log is being flooded with errors such as:

    Jun 20 09:09:21 Tower kernel: tun: unexpected GSO type: 0x0, gso_size 31, hdr_len 66

    You need to edit each VM and change the model type for the Ethernet bridge from "virtio" to "virtio-net".  In most cases this can be accomplished simply by clicking Update in "Form View" on the VM Edit page.  For other network configs it may be necessary to directly edit the XML.  For example:

    <interface type='bridge'>
          <mac address='xx:xx:xx:xx:xx:xx'/>
          <source bridge='br0'/>
          <model type='virtio-net'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
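
    If you prefer the command line, the same XML can be edited with virsh (the VM name below is a placeholder):

    virsh list --all   # find the exact VM name
    virsh edit MyVM    # opens the domain XML in $EDITOR
    # change <model type='virtio'/> to <model type='virtio-net'/>,
    # save, and restart the VM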

     

    Other

    • AFP support has been removed.
    • Numerous other Unraid OS and webGUI bug fixes and improvements.

     

    Version 6.9.0-beta22 2020-06-16

     

    Caution! This is beta software; consider using it on test servers only.

     

    Base distro:

    • aaa_base: version 14.2
    • aaa_elflibs: version 15.0 build 23
    • acl: version 2.2.53
    • acpid: version 2.0.32
    • apcupsd: version 3.14.14
    • at: version 3.2.1
    • attr: version 2.4.48
    • avahi: version 0.8
    • bash: version 5.0.017
    • beep: version 1.3
    • bin: version 11.1
    • bluez-firmware: version 1.2
    • bridge-utils: version 1.6
    • brotli: version 1.0.7
    • btrfs-progs: version 5.6.1
    • bzip2: version 1.0.8
    • ca-certificates: version 20191130 build 1
    • celt051: version 0.5.1.3
    • cifs-utils: version 6.10
    • coreutils: version 8.32
    • cpio: version 2.13
    • cpufrequtils: version 008
    • cryptsetup: version 2.3.3
    • curl: version 7.70.0
    • cyrus-sasl: version 2.1.27
    • db48: version 4.8.30
    • dbus: version 1.12.18
    • dcron: version 4.5
    • devs: version 2.3.1 build 25
    • dhcpcd: version 8.1.9
    • diffutils: version 3.7
    • dmidecode: version 3.2
    • dnsmasq: version 2.81
    • docker: version 19.03.11
    • dosfstools: version 4.1
    • e2fsprogs: version 1.45.6
    • ebtables: version 2.0.11
    • eject: version 2.1.5
    • elvis: version 2.2_0
    • etc: version 15.0
    • ethtool: version 5.7
    • eudev: version 3.2.5
    • file: version 5.38
    • findutils: version 4.7.0
    • flex: version 2.6.4
    • floppy: version 5.5
    • fontconfig: version 2.13.92
    • freetype: version 2.10.2
    • fuse3: version 3.9.1
    • gawk: version 4.2.1
    • gd: version 2.2.5
    • gdbm: version 1.18.1
    • genpower: version 1.0.5
    • getty-ps: version 2.1.0b
    • git: version 2.27.0
    • glib2: version 2.64.3
    • glibc-solibs: version 2.30
    • glibc-zoneinfo: version 2020a build 1
    • glibc: version 2.30
    • gmp: version 6.2.0
    • gnutls: version 3.6.14
    • gptfdisk: version 1.0.5
    • grep: version 3.4
    • gtk+3: version 3.24.20
    • gzip: version 1.10
    • harfbuzz: version 2.6.7
    • haveged: version 1.9.8
    • hdparm: version 9.58
    • hostname: version 3.23
    • htop: version 2.2.0
    • icu4c: version 67.1
    • inetd: version 1.79s
    • infozip: version 6.0
    • inotify-tools: version 3.20.2.2
    • intel-microcode: version 20200609
    • iproute2: version 5.7.0
    • iptables: version 1.8.5
    • iputils: version 20190709
    • irqbalance: version 1.6.0
    • jansson: version 2.13.1
    • jemalloc: version 4.5.0
    • jq: version 1.6
    • keyutils: version 1.6.1
    • kmod: version 27
    • lbzip2: version 2.5
    • lcms2: version 2.10
    • less: version 551
    • libaio: version 0.3.112
    • libarchive: version 3.4.3
    • libcap-ng: version 0.7.10
    • libcgroup: version 0.41
    • libdaemon: version 0.14
    • libdrm: version 2.4.102
    • libedit: version 20191231_3.1
    • libestr: version 0.1.11
    • libevent: version 2.1.11
    • libfastjson: version 0.99.8
    • libffi: version 3.3
    • libgcrypt: version 1.8.5
    • libgpg-error: version 1.38
    • libgudev: version 233
    • libidn: version 1.35
    • libjpeg-turbo: version 2.0.4
    • liblogging: version 1.0.6
    • libmnl: version 1.0.4
    • libnetfilter_conntrack: version 1.0.8
    • libnfnetlink: version 1.0.1
    • libnftnl: version 1.1.7
    • libnl3: version 3.5.0
    • libpcap: version 1.9.1
    • libpciaccess: version 0.16
    • libpng: version 1.6.37
    • libpsl: version 0.21.0
    • librsvg: version 2.48.7
    • libseccomp: version 2.4.3
    • libssh2: version 1.9.0
    • libssh: version 0.9.4
    • libtasn1: version 4.16.0
    • libtirpc: version 1.2.6
    • libunistring: version 0.9.10
    • libusb-compat: version 0.1.5
    • libusb: version 1.0.23
    • libuv: version 1.34.0
    • libvirt-php: version 0.5.5
    • libvirt: version 6.4.0
    • libwebp: version 1.1.0
    • libwebsockets: version 3.2.2
    • libx86: version 1.1
    • libxml2: version 2.9.10
    • libxslt: version 1.1.34
    • libzip: version 1.7.0
    • lm_sensors: version 3.6.0
    • logrotate: version 3.16.0
    • lshw: version B.02.17
    • lsof: version 4.93.2
    • lsscsi: version 0.31
    • lvm2: version 2.03.09
    • lz4: version 1.9.1
    • lzip: version 1.21
    • lzo: version 2.10
    • mc: version 4.8.24
    • miniupnpc: version 2.1
    • mpfr: version 4.0.2
    • nano: version 4.9.3
    • ncompress: version 4.2.4.6
    • ncurses: version 6.2
    • net-tools: version 20181103_0eebece
    • nettle: version 3.6
    • network-scripts: version 15.0 build 9
    • nfs-utils: version 2.1.1
    • nghttp2: version 1.41.0
    • nginx: version 1.16.1
    • nodejs: version 13.12.0
    • nss-mdns: version 0.14.1
    • ntfs-3g: version 2017.3.23
    • ntp: version 4.2.8p14
    • numactl: version 2.0.11
    • oniguruma: version 6.9.1
    • openldap-client: version 2.4.49
    • openssh: version 8.3p1
    • openssl-solibs: version 1.1.1g
    • openssl: version 1.1.1g
    • p11-kit: version 0.23.20
    • patch: version 2.7.6
    • pciutils: version 3.7.0
    • pcre2: version 10.35
    • pcre: version 8.44
    • php: version 7.4.7 (CVE-2019-11048)
    • pixman: version 0.40.0
    • pkgtools: version 15.0 build 33
    • pm-utils: version 1.4.1
    • procps-ng: version 3.3.16
    • pv: version 1.6.6
    • qemu: version 5.0.0
    • qrencode: version 4.0.2
    • reiserfsprogs: version 3.6.27
    • rpcbind: version 1.2.5
    • rsync: version 3.1.3
    • rsyslog: version 8.2002.0
    • samba: version 4.12.3 (CVE-2020-10700, CVE-2020-10704)
    • sdparm: version 1.11
    • sed: version 4.8
    • sg3_utils: version 1.45
    • shadow: version 4.8.1
    • shared-mime-info: version 2.0
    • smartmontools: version 7.1
    • spice: version 0.14.1
    • sqlite: version 3.32.2
    • ssmtp: version 2.64
    • sudo: version 1.9.0
    • sysfsutils: version 2.1.0
    • sysvinit-scripts: version 2.1 build 31
    • sysvinit: version 2.96
    • talloc: version 2.3.1
    • tar: version 1.32
    • tcp_wrappers: version 7.6
    • tdb: version 1.4.3
    • telnet: version 0.17
    • tevent: version 0.10.2
    • traceroute: version 2.1.0
    • tree: version 1.8.0
    • ttyd: version 20200606
    • usbredir: version 0.7.1
    • usbutils: version 012
    • utempter: version 1.2.0
    • util-linux: version 2.35.2
    • vbetool: version 1.2.2
    • vsftpd: version 3.0.3
    • wget: version 1.20.3
    • which: version 2.21
    • wireguard-tools: version 1.0.20200513
    • wsdd: version 20180618
    • xfsprogs: version 5.6.0
    • xkeyboard-config: version 2.30
    • xorg-server: version 1.20.8
    • xterm: version 356
    • xz: version 5.2.5
    • yajl: version 2.1.0
    • zlib: version 1.2.11
    • zstd: version 1.4.5

    Linux kernel:

    • version 5.7.2
    • CONFIG_WIREGUARD: WireGuard secure network tunnel
    • CONFIG_IP_SET: IP set support
    • CONFIG_SENSORS_DRIVETEMP: Hard disk drives with temperature sensors
    • enabled additional hwmon native drivers
    • enabled additional hyperv drivers
    • firmware added:
    • BCM20702A1-0b05-180a.hcd
    • out-of-tree driver status:
    • igb: using in-tree version
    • ixgbe: using in-tree version
    • r8125: using in-tree version
    • r750: (removed)
    • rr3740a: (removed)
    • tn40xx: (removed)

    Management:

    • AFP support removed
    • Multiple pool support added
    • Multi-language support added
    • avoid sending spinup/spindown to non-rotational devices
    • get rid of 'system' plugin support (never used)
    • integrate PAM
    • integrate ljm42 vfio-pci script changes
    • webgui: turn off username autocomplete in login form
    • webgui: Added new display setting: show normalized or raw device identifiers
    • webgui: Add 'Portuguese (pt)' key map option for libvirt
    • webgui: Added "safe mode" one-shot safemode reboot option
    • webgui: Tabbed case select window
    • webgui: Updated case icons
    • webgui: Show message when too many files for browsing
    • webgui: Main page: hide Move button when user shares are not enabled
    • webgui: VMs: change default network model to virtio-net
    • webgui: Allow duplicate containers to have different icons
    • webgui: Allow markdown within container descriptions
    • webgui: Fix Banner Warnings Not Dismissing without reload of page
    • webgui: Network: allow metric value of zero to set no default gateway
    • webgui: Network: fix privacy extensions not set
    • webgui: Network settings: show first DNSv6 server
    • webgui: SysDevs overhaul with vfio-pci.cfg binding
    • webgui: Icon buttons re-arrangement
    • webgui: Add update dialog to docker context menu
    • webgui: Update Feedback.php
    • webgui: Use update image dialog for update entry in docker context menu
    • webgui: Task Plugins: Providing Ability to define Display_Name

    Edited by limetech




    User Feedback

    Recommended Comments



    2 hours ago, 1812 said:

    but the web gui on the main tab shows 26TB free…. 

    It's a known issue; I already made a request for this to be corrected.  For now you need to subtract the parity size from the total free space displayed.  It's why I'm still using UD for my pools, since I have multiple raid 5 pools, most with different disk sizes, and it's not practical to always be doing mental calculations.

     

    2 hours ago, 1812 said:

    Also, clicking spin down underneath the pool doesn't seem to work; this new pool has nothing using it.  Same issue with another single-disk pool: no spin down, even when using the spin down button at the bottom of the web gui.

    That's another known issue; I also reported the same before.

    13 minutes ago, johnnie.black said:

    It's a known issue; I already made a request for this to be corrected.  For now you need to subtract the parity size from the total free space displayed.  It's why I'm still using UD for my pools, since I have multiple raid 5 pools, most with different disk sizes, and it's not practical to always be doing mental calculations.

     

    That's another known issue; I also reported the same before.

     

    It appears I'm just late to the party on everything today… ¯\_(ツ)_/¯ 

    Quote

     A future release will include support for multiple "unRAID array" pools.  We are also considering zfs support.

    If I could have this with full ZFS support on the array, that would be perfect!  I realize it's not really "unraid" at that point, but I have outgrown the JBOD parity unraid has.  Plus, ZFS is looking to become more like unraid with vdev expansion.

     

    As of now I have all my main storage disks in a 3x 6-disk raidz2 pool, as well as a couple of ZFS mirrors and some (20tb) in my unraid array for scratch storage.  If I could have my zfs pool as an array, with another zfs pool as its cache, that would be ideal and would add features for my use case.  I'd love to keep newly added data on the SSD cache and have the mover handle offloading it dynamically to the spinning-rust pool.  That would be so cool!  100% looking forward to the day this becomes possible.

    On 6/28/2020 at 9:37 AM, _rogue said:

    If I could have this with full ZFS support on the array, that would be perfect! 

    ...

    Plus, ZFS is looking to become more like unraid with vdev expansion.

    I mostly agree ... a ZFS RAIDZ array would be almost perfect.  I like everything ZFS has to offer and the tools that support it.  Bundle that with an Unraid-style interface for common array tasks like file versioning, scrubbing, and resilvering ... 🔥!

     

    The #1 place I think ZFS still needs some more time in the oven is, as @_rogue pointed out, vdev expansion.  All indicators point to that being a priority for the project devs, so maybe a ZFS implementation for an Unraid 7.0 release target?  Soon™

     

    One issue I see with incorporating ZFS as the "main Unraid array" is how it handles parity in a ZFS RAIDZ1 implementation; it's just different from how Unraid does it today.  While an Unraid array stores parity information on the parity disk(s), a ZFS RAIDZ stores the necessary parity throughout the array.  Also, the way ZFS caches reads and writes is different and can require a LOT of RAM for big arrays.  I'm obviously oversimplifying here, but the fact remains: the way it works is a fundamental shift from the current Unraid state.  Is this better ... or worse?  I think that's subjective.  However, given the ZFS baked-in features such as snapshots, block checksums to protect from bitrot, and native copy-on-write ... I think I'd deal with the few downsides.

     

    -JesterEE

    On 6/24/2020 at 2:46 PM, Dazog said:

    AFP is Apple's own protocol, originally developed for Classic macOS, and has been widely supported by networked devices, including storage such as NAS.  But its use has now been deprecated for six years, since OS X 10.9 Mavericks.  The last release was AFP 3.4, over seven years ago.

     

    Time to move on from hardware that old.. 

     

    I understand, and agree; most of my hardware is spanking new, and most people should, of course, be running current hardware and software.  But over 50% of my income comes from DVD authoring, and the last version of Mac OS that reliably runs DVD Studio Pro is 10.6.8.  And working locally is a massive headache, with content and resources being processed on new and old machines (which is the entire reason I'm using a server). 

    And while 10.6.8 supports SMB, there's a massive bug: if you "Move" data or do anything similar, the OS completes the task instantly without actually doing the move, and then it deletes the source.  I lost several projects in a single move action.  (Thank heavens for backups, though they were a bit out of date.)  Also, AFP is much more reliable when there's resource-fork data, which again is more common with older apps and MacOS versions. 

    I'm not asking to enable it by default, or even make it visible by default, but keep it around for those of us that need it.  And I guess that begs the question: can I add that kind of low-level driver back in myself if it gets removed?  I'm not familiar with that level of customization.

     

    Edited by jeremyn

    Seems like it's time to get better DVD software; I'm quite confident that is still possible.  Or dual-boot Windows, or run Linux, or just about anything.  Anyway, I'm not trying to argue with your decisions, so here are two solutions I can think of:

     

    1 - I assume you will be able to use NFS as an alternative to AFP if you don't wish to use SMB. 

    2 - If you want to add drivers and things, this is the way to do it.

     

    2 hours ago, JesterEE said:

    The #1 place I think ZFS still needs some more time in the oven is, as @_rogue pointed out, vdev expansion.  All indicators point to that being a priority for the project devs, so maybe a ZFS implementation for an Unraid 7.0 release target?  Soon™

    I wouldn't be holding it up for that.  There's still a ton of use cases.  Cache drive mirrors is one, and the functionality that provides for backups, virtual machines and dockers is immense.  Also, ZFS is better at telling you when there is corrupted data, even in a single-drive implementation, and with a dual-drive or better setup it will repair it for you and let you know.  I'd love to be able to convert my docker.img file to ZFS and to have another option than btrfs for a mirrored cache drive.

     

    2 hours ago, JesterEE said:

    One issue I see with incorporating ZFS as the "main Unraid array" is how it handles parity in a ZFS RAIDZ1 implementation; it's just different from how Unraid does it today.  While an Unraid array stores parity information on the parity disk(s), a ZFS RAIDZ stores the necessary parity throughout the array. 

    Well yes, but again, why hold it up because it doesn't fit into unraid's main array?  I'm running a ZFS mirror for my critical data alongside a standard unraid array and it's amazing.

    2 hours ago, JesterEE said:

    Also, the way ZFS caches reads and writes is different and can require a LOT of RAM for big arrays.

    This is no longer an issue; the whole gigabytes-of-RAM-per-terabyte-of-disk formula is completely incorrect and seems to live on as urban legend.  It's been possible for a long time to run on very small amounts of memory.  The main thing that trips people up is the ZIL, which slows everything down, eats memory if you do it wrong, and should be disabled for most use cases.  That really is ZFS's main adoption problem: a high entry barrier due to complex descriptions of what everything does.  I mean, they could have just called the ZIL a write cache and then explained why it's different and how it works compared to other caches.
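
    (For reference, what's loosely called "disabling the ZIL" is usually done by turning off synchronous writes on a dataset.  A hedged sketch with a made-up pool/dataset name; be aware that a power loss can drop the last few seconds of acknowledged writes:)

    zfs set sync=disabled tank/vms
    zfs get sync tank/vms   # verify the setting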

     

    2 hours ago, JesterEE said:

    I'm obviously oversimplifying here, but the fact remains: the way it works is a fundamental shift from the current Unraid state.  Is this better ... or worse?  I think that's subjective.  However, given the ZFS baked-in features such as snapshots, block checksums to protect from bitrot, and native copy-on-write ... I think I'd deal with the few downsides.

    Yeah true.  Each has primary advantages and a few disadvantages.

     

    Unraid's primary advantages are that it lets you use different-sized disks and it lets you power down inactive disks, since it doesn't write in stripes.  Its primary disadvantage is that it will only read from a single disk, which results in quite a lot of performance degradation compared to a standard raid array.  But for the right use case it's extremely effective, e.g. media storage with a lot of streaming.

     

    ZFS's advantages are its self-healing, the ton of nice features built in for VMs, dockers and backups, and that it's relatively fast due to the way it reads and the differing raid options you can create depending on your needs (like most raid arrays).  Its disadvantages in this case are that it won't spin down single drives, doesn't really let you use different-sized drives, and adding disks (as opposed to increasing disk size) can't easily be done.

     

    Whether unraid allows a single ZFS disk in their unraid array is up to them, but I think the advantages for certain use cases in other areas are huge.

    This is why I have both: Unraid for storage of rarely accessed files, ZFS for critical data, VMs and dockers.

     

    Sorry for the long post, but I didn't want ZFS to be misunderstood in this thread!

    Edited by Marshalleq
    18 minutes ago, Marshalleq said:

    This is why I have both: Unraid for storage of rarely accessed files, ZFS for critical data, VMs and dockers.

     

    This is how I've been running as well: using the ZFS plugin with unassigned drives as a mirrored NVMe ZFS pool just for Docker/VM/ISO.  All the shares and backups are on the main UnRAID array.  Best of both worlds, really, where you don't require high synchronous reads for general data. 


    Personally I think there is a place for the current system + ZFS.

     

    First thing, as an outsider that just wants something that works: the only feature freenas offers that really piques my interest is ZFS and its related features (snapshotting primarily).

     

    Offering multiple arrays with both classic unraid and a ZFS/BTRFS pool side by side could really be handy. Although you could also just use a cache pool to do the same thing in most use cases.

     

    For basic NAS use, unraid's setup is the best option IMHO.  Most basic NAS builds are made out of old mixed hardware, and unraid is perfect for this setup, better than a classic raid setup in many ways as well.

     

    As you upgrade, though, there comes a time when running a more classic BTRFS/ZFS raid setup really starts to make more sense.  For example, if a file error is detected it can be repaired on the fly; the only way unraid can do this, as far as I know, is a complete parity check, which is well over 24 hours to find what could be a small error in many cases.
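
    (For example, on btrfs a scrub verifies checksums and, on redundant profiles, repairs bad copies in place; the mount point below is illustrative:)

    btrfs scrub start -B /mnt/disk1   # run in the foreground
    btrfs scrub status /mnt/disk1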

     

    Having the option to upgrade and expand into more enterprise technology on the same platform offers real value to people like me, it lets me know I will not outgrow this software.

     

    The number one feature I miss in unraid is easily snapshots: both disk and VM snapshots.  Even Windows offers this.  In fact, the only reason I decided to go with unraid is this guy offering a script to get basic snapshots working and the possibility of him or others turning it into a full-fledged plugin (yes, I am using BTRFS for my array):

     

    Without that I would have gone with another option.  I would really like to see the ability to manage snapshots built into unraid.

     

    I actually did end up just installing ESXi as a VM in unraid and will use that for my VM needs, almost entirely due to the lack of VM snapshots.  The primary reason I use VMs is to do or test something stupid, revert to a snapshot, and try it again, etc.

     

    TLDR: I also would love to see ZFS added to unraid along with snapshots in BTRFS.

    Edited by TexasUnraid

    For me, ZFS pools will be a great option, mostly for the cache pools, especially since they can now have up to 30 devices.  I don't see LT making the array also a pool; the Unraid name would stop making sense, and it wouldn't be something I'd want anyway, since being able to fully use different-capacity disks in the array, and not losing the whole array even if you lose more disks than parity can recover, are some of Unraid's biggest selling points for me.  That said, besides being able to use ZFS with cache pools, I would still welcome ZFS as a filesystem option with the way the array currently works, i.e. every data disk as an individual filesystem.

    49 minutes ago, Marshalleq said:

    1 - I assume you will be able to use NFS as an alternative to AFP if you don't wish to use SMB. 

    2 - If you want to add drivers and things, this is the way to do it.

    Thank you for that info.  It's reassuring that there'll be a path forward for us.  I have no desire to scrap my unraid server, and I really don't want to be stuck on an old version years from now. 

    <off topic tangent> Yeah, there's no way in #^& I'm moving away from DVD Studio Pro.  As somebody who's been doing DVD authoring since 1997 and has thousands of titles under my belt, I can say without hesitation there's no better authoring program than DVD Studio Pro, and I've used them all.  Scenarist is the only other "Hollywood grade / replication ready" authoring software, and due to its lack of a useful abstraction layer, it would literally take me 2-4 times longer to author the kind of complex titles we create.  I believe my company is probably the last company in the world offering boutique and unique DVD and Blu-ray features and menus in our authoring, no cookie-cutter crap.  Even Hollywood is cookie-cuttering most of their titles now. </off topic tangent>


    While everyone is mentioning updating the VM network model to virtio-net, I thought I would also mention an issue I was having.

     

    I was having major problems with apps becoming unresponsive in Linux VMs when trying to save to SMB shares mounted from my UnRAID host.  When I checked dmesg, I was seeing errors such as this:

    CIFS VFS: Close unmatched open

     

    With the upgraded Linux kernel, don't forget to change your /etc/fstab mounts on any Linux VMs to cifs vers=3.0.
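
    A sketch of what such an fstab entry might look like (server, share, and paths are placeholders):

    # /etc/fstab -- force SMB 3.0 when mounting an UnRAID share
    //tower/backup  /mnt/backup  cifs  vers=3.0,credentials=/root/.smbcreds,uid=1000,gid=1000  0  0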
    The errors stopped after making the change and remounting the shares. 

     

    Also, for security, remove CIFS SMB 1.0 support from Programs & Features on any Windows machines that access your UnRAID shares. 

     

     


    A bug I am running into: small file transfers over SMB are stupidly slow.  It can take 1-2 seconds per 10kB file just to delete them, vs. doing hundreds of files a second with a Windows SMB share.

     

    I was testing and found that using NFS I got the expected performance (although security seems really weak with NFS), so the issue is not hardware- or filesystem-related and must be samba-related.

    Edited by TexasUnraid

    Upgraded from 6.8.3 successfully and all is working, but I am getting "kernel: Unknown ioctl 1976" in the syslog.  Still tracking down what's causing this.  Reverting to 6.8.3 does not show that error.

     

    Update:

    The "kernel: Unknown ioctl 1976" in the syslog is related to the openvmtools_compiled plugin. The following errors does not occur when the openvm plugin is removed.

    Edited by SCSI

    Has any official documentation been released for pools?

    Lots of experimenting, but no "rules" that I can find.

    Does the unRaid parity drive limit the size of the individual drives in pools at all, etc.?

    Or are pools totally independent from the array?

    49 minutes ago, whwunraid said:

    Or are pools totally independent from the array?

    Same as the current cache pool, they are independent filesystems.

    On 6/25/2020 at 6:21 AM, jeremyn said:

    > AFP support has been removed.

    Please don't do this! I have multiple older Macs that connect (this is required due to the work I do and the need to support OLD apps that only run on older Macs running older MacOS versions), and SMB has serious bugs (data loss level of bugs) on these older machines. AFP support is THE biggest reason I'm super happy to have gone with Unraid. Totally understandable to hide it for the majority of users, but don't remove it completely.

    Maybe you could use a docker container as a bridge? A quick search comes up with this one already created - https://github.com/cptactionhank/docker-netatalk

    Edited by unabletoconnect

    Where do the share configs go in the new unraid version?

    When I upgraded to 6.9.0-beta22, all my shares were gone...

    4 minutes ago, DarkMan83 said:

    Where do the share configs go in the new unraid version?

    When I upgraded to 6.9.0-beta22, all my shares were gone...

    Even without configs (which are where they've always been) you should still have shares, since the shares are simply the top level folders on cache and array, and if there are no cfg files for a share it has default settings.

     

    So you are having some other problem, probably better suited to its own topic in General Support. Post there and be sure to include your diagnostics.

    14 hours ago, trurl said:

    Even without configs (which are where they've always been) you should still have shares, since the shares are simply the top level folders on cache and array, and if there are no cfg files for a share it has default settings.

     

    So you are having some other problem, probably better suited to its own topic in General Support. Post there and be sure to include your diagnostics.

    Don't worry, I fixed it!  I had to rename all the share CFGs on the flash so they got recreated!


    Anyone noticed SMB performance drop when accessing through a Windows VM?

     

    All my disk shares (i.e. bypassing shfs) dropped to 160MB/s or so (from GB/s with NVMe).

    This can be reliably reproduced just by switching between 6.8.3 and 6.9.0-beta22 and changing the VM adapter to virtio-net.

    12 hours ago, DarkMan83 said:

    Don't worry, I fixed it!  I had to rename all the share CFGs on the flash so they got recreated!

    That seems an unlikely fix.  Whether or not the .cfg files for your shares exist, you should still have shares, so it is unclear whether you have actually fixed anything.  It also seems extremely unlikely to have anything at all to do with this release.

     

    Don't be surprised if you still have some problem you haven't correctly diagnosed. So, if you continue to have problems, post about it in 

    On 7/2/2020 at 12:47 PM, trurl said:

    its own topic in General Support. Post there and be sure to include your diagnostics.

     

     


    @limetech is there any way to repeatably trigger the unexpected GSO error for testing purposes?

     

    I found that running <model type='virtio'/> + machine='pc-q35-5.0' also stops the errors for VM use, without the performance penalty of virtio-net.

    I have been dumping a few TB of data from a VM to a share in my backup job, so I reckon I should have received plenty of errors by now, but I haven't.  Hence, I'm wondering if there's any way to trigger the error manually, just to be sure it's not coincidental.



