Unraid OS version 6.9.0-beta22 available


    limetech

    Welcome (again) to 6.9 release development!

     

This release is hopefully the last beta before moving to the -rc phase.  The reason we still mark it beta is that we'd like to get wider testing of the new multiple-pool feature, as well as perhaps sneak in a couple more refinements.  With that in mind, the obligatory disclaimer:

     

    Important: Beta code is not fully tested and not feature-complete.  We recommend running on test servers only!

     

    That said, here's what's new in this release...

     

    Multiple Pools

This feature permits you to define up to 35 named pools, each of up to 30 storage devices.  The current "cache pool" is now simply a pool named "cache".  Pools are created and managed via the Main page.

     

Note: When you upgrade a server which has a cache pool defined, a backup of config/disk.cfg will be saved to config/disk.cfg.bak, and then cache device assignment settings are moved out of disk.cfg and into a new file, config/pools/cache.cfg.  If you later revert to a pre-6.9 Unraid OS release you will lose your cache device assignments and will have to manually re-assign devices to cache.  As long as you reassign the correct devices, data should remain intact.
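A quick way to confirm the migration on your own flash drive is to look for the two files described above (a minimal sketch; /boot is where Unraid mounts the flash device):

    ls -l /boot/config/disk.cfg.bak       # backup of the pre-6.9 disk settings
    ls -l /boot/config/pools/cache.cfg    # new per-pool config for the "cache" pool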

     

When you create a user share, or edit an existing user share, you can specify which pool should be associated with that share.  The assigned pool functions identically to the current cache pool operation.

     

    Something to be aware of: when a directory listing is obtained for a share, the unRAID array disk volumes and all pools which contain that share are merged in this order:

      pool assigned to share

      disk1

      :

      disk28

all the other pools, in strverscmp() order (see the example below).
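As a rough illustration of that ordering, GNU 'sort -V' sorts names in a version-aware order very similar to strverscmp() (a sketch only; the pool names are hypothetical):

    printf '%s\n' pool10 pool2 pool1 | sort -V
    # pool1
    # pool2
    # pool10   (plain alphabetical sort would put pool10 before pool2)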

     

As with the current "cache pool", a single-device pool may be formatted with xfs, btrfs, or reiserfs.  A multiple-device pool may only be formatted with btrfs.  A future release will include support for multiple "unRAID array" pools.  We are also considering zfs support.

     

Something else to be aware of: let's say you have a 2-device btrfs pool.  This is what btrfs calls "raid1", and what most people would understand to be "mirrored disks".  That is mostly true, in that the same data exists on both disks, though not necessarily at the block level.  Now suppose you create another pool, and you unassign one of the devices from the existing 2-device btrfs pool and assign it to the new pool.  You now have two single-device btrfs pools.  Upon array Start, a user might understandably assume there are now two pools with exactly the same data.  However, this is not the case.  Instead, when Unraid OS sees that a btrfs device has been removed from an existing multi-device pool, upon array Start it will run 'wipefs' on that device so that upon mount it will not be included in the old pool.  This, of course, effectively deletes all the data on the moved device.
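For reference, wipefs is the standard util-linux tool; what Unraid OS does here is roughly equivalent to the following (illustration only - the exact flags are an assumption, this is destructive, and /dev/sdX is a placeholder for the moved device):

    # DANGER: erases all filesystem signatures on the device
    wipefs --all /dev/sdX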

     

    Language Translation

A huge amount of work and effort by @bonienl has gone into providing multiple-language support in the Unraid OS Management Utility, aka the webGUI.  Several language packs are now available, and several more are in the works.  Thanks to @Squid, language packs are installed via the Community Applications plugin - look for a new category entitled Language.

     

    Note: Community Applications HAS to be up to date to install languages.  Versions of CA prior to 2020.05.12 will not even load on this release.  As of this writing, the current version of CA is 2020.06.13a.  See also here.

     

Each language pack exists in a public Unraid organization github repo.  Interested users are encouraged to clone them and issue Pull Requests to correct translation errors.  Language translations and PR merging are managed by @SpencerJ.

     

    Linux Kernel

    Upgraded to 5.7.

     

    Unfortunately, none of the out-of-tree drivers compile with this kernel.  In particular, these drivers are omitted:

    • Highpoint RocketRaid r750
    • Highpoint RocketRaid rr3740a
    • Tehuti Networks tn40xx

    If you require one of these drivers, please create a Bug Report and we'll spend some time looking for alternatives.  Better yet, pester the manufacturer of the controller and get them to update their drivers.

     

    Base Packages

    All updated to latest versions.  In addition, Linux PAM has been integrated.  This will permit us to install 2-factor authentication packages in a future release.

     

    Docker

    Updated to version 19.03.11

     

It is also now possible to select different icons for multiple containers of the same type.  This change necessitates a re-download of the icons for all your installed docker applications.  Expect a delay when initially loading either the Dashboard or the Docker tab while this happens, before the containers show up.

     

    Virtualization

    libvirt updated to version 6.4.0

    qemu updated to version 5.0.0

     

In addition, we integrated changes to the System Devices page by user @Skitals, with modifications by user @ljm42.  You can now select PCI devices to isolate from Linux upon boot simply by checking some boxes.  This makes it easier to reserve those devices for assignment to VM's.

     

Note: If you had the VFIO-PCI Config plugin installed, you should remove it, as that functionality is now built into Unraid OS 6.9.  Refer also to @ljm42's excellent guide.

     

In a future release we will include the NVIDIA and AMD GPU drivers natively in Unraid OS.  The primary use case is to facilitate accelerated transcoding in docker containers.  For this we require Linux to detect and auto-install the appropriate driver.  However, in order to reliably pass through an NVIDIA or AMD GPU to a VM, it's necessary to prevent Linux from auto-installing a GPU driver for those devices upon boot, which can now easily be done through the System Devices page.  Users passing GPU's to VM's are encouraged to set this up now.
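After binding a device this way and rebooting, you can verify from the command line that Linux attached the vfio-pci stub instead of a GPU driver (a sketch; the PCI address is a placeholder):

    lspci -nnk -s 0000:0c:00.0
    # ...
    # Kernel driver in use: vfio-pci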

     

    "unexpected GSO errors"

     

    If your system log is being flooded with errors such as:

    Jun 20 09:09:21 Tower kernel: tun: unexpected GSO type: 0x0, gso_size 31, hdr_len 66

    You need to edit each VM and change the model type for the Ethernet bridge from "virtio" to "virtio-net".  In most cases this can be accomplished simply by clicking Update in "Form View" on the VM Edit page.  For other network configs it may be necessary to directly edit the xml.  For example:

    <interface type='bridge'>
          <mac address='xx:xx:xx:xx:xx:xx'/>
          <source bridge='br0'/>
          <model type='virtio-net'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
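If you prefer the command line, the check and edit can also be done with the standard libvirt tools (a sketch; "MyVM" is a placeholder name):

    virsh dumpxml MyVM | grep -A1 'model type'   # confirm the current interface model
    virsh edit MyVM                              # opens the domain XML in $EDITOR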

     

    Other

    • AFP support has been removed.
    • Numerous other Unraid OS and webGUI bug fixes and improvements.

     

    Version 6.9.0-beta22 2020-06-16

     

Caution! This is beta software, consider using on test servers only.

     

    Base distro:

    • aaa_base: version 14.2
    • aaa_elflibs: version 15.0 build 23
    • acl: version 2.2.53
    • acpid: version 2.0.32
    • apcupsd: version 3.14.14
    • at: version 3.2.1
    • attr: version 2.4.48
    • avahi: version 0.8
    • bash: version 5.0.017
    • beep: version 1.3
    • bin: version 11.1
    • bluez-firmware: version 1.2
    • bridge-utils: version 1.6
    • brotli: version 1.0.7
    • btrfs-progs: version 5.6.1
    • bzip2: version 1.0.8
    • ca-certificates: version 20191130 build 1
    • celt051: version 0.5.1.3
    • cifs-utils: version 6.10
    • coreutils: version 8.32
    • cpio: version 2.13
    • cpufrequtils: version 008
    • cryptsetup: version 2.3.3
    • curl: version 7.70.0
    • cyrus-sasl: version 2.1.27
    • db48: version 4.8.30
    • dbus: version 1.12.18
    • dcron: version 4.5
    • devs: version 2.3.1 build 25
    • dhcpcd: version 8.1.9
    • diffutils: version 3.7
    • dmidecode: version 3.2
    • dnsmasq: version 2.81
    • docker: version 19.03.11
    • dosfstools: version 4.1
    • e2fsprogs: version 1.45.6
    • ebtables: version 2.0.11
    • eject: version 2.1.5
    • elvis: version 2.2_0
    • etc: version 15.0
    • ethtool: version 5.7
    • eudev: version 3.2.5
    • file: version 5.38
    • findutils: version 4.7.0
    • flex: version 2.6.4
    • floppy: version 5.5
    • fontconfig: version 2.13.92
    • freetype: version 2.10.2
    • fuse3: version 3.9.1
    • gawk: version 4.2.1
    • gd: version 2.2.5
    • gdbm: version 1.18.1
    • genpower: version 1.0.5
    • getty-ps: version 2.1.0b
    • git: version 2.27.0
    • glib2: version 2.64.3
    • glibc-solibs: version 2.30
    • glibc-zoneinfo: version 2020a build 1
    • glibc: version 2.30
    • gmp: version 6.2.0
    • gnutls: version 3.6.14
    • gptfdisk: version 1.0.5
    • grep: version 3.4
    • gtk+3: version 3.24.20
    • gzip: version 1.10
    • harfbuzz: version 2.6.7
    • haveged: version 1.9.8
    • hdparm: version 9.58
    • hostname: version 3.23
    • htop: version 2.2.0
    • icu4c: version 67.1
    • inetd: version 1.79s
    • infozip: version 6.0
    • inotify-tools: version 3.20.2.2
    • intel-microcode: version 20200609
    • iproute2: version 5.7.0
    • iptables: version 1.8.5
    • iputils: version 20190709
    • irqbalance: version 1.6.0
    • jansson: version 2.13.1
    • jemalloc: version 4.5.0
    • jq: version 1.6
    • keyutils: version 1.6.1
    • kmod: version 27
    • lbzip2: version 2.5
    • lcms2: version 2.10
    • less: version 551
    • libaio: version 0.3.112
    • libarchive: version 3.4.3
    • libcap-ng: version 0.7.10
    • libcgroup: version 0.41
    • libdaemon: version 0.14
    • libdrm: version 2.4.102
    • libedit: version 20191231_3.1
    • libestr: version 0.1.11
    • libevent: version 2.1.11
    • libfastjson: version 0.99.8
    • libffi: version 3.3
    • libgcrypt: version 1.8.5
    • libgpg-error: version 1.38
    • libgudev: version 233
    • libidn: version 1.35
    • libjpeg-turbo: version 2.0.4
    • liblogging: version 1.0.6
    • libmnl: version 1.0.4
    • libnetfilter_conntrack: version 1.0.8
    • libnfnetlink: version 1.0.1
    • libnftnl: version 1.1.7
    • libnl3: version 3.5.0
    • libpcap: version 1.9.1
    • libpciaccess: version 0.16
    • libpng: version 1.6.37
    • libpsl: version 0.21.0
    • librsvg: version 2.48.7
    • libseccomp: version 2.4.3
    • libssh2: version 1.9.0
    • libssh: version 0.9.4
    • libtasn1: version 4.16.0
    • libtirpc: version 1.2.6
    • libunistring: version 0.9.10
    • libusb-compat: version 0.1.5
    • libusb: version 1.0.23
    • libuv: version 1.34.0
    • libvirt-php: version 0.5.5
    • libvirt: version 6.4.0
    • libwebp: version 1.1.0
    • libwebsockets: version 3.2.2
    • libx86: version 1.1
    • libxml2: version 2.9.10
    • libxslt: version 1.1.34
    • libzip: version 1.7.0
    • lm_sensors: version 3.6.0
    • logrotate: version 3.16.0
    • lshw: version B.02.17
    • lsof: version 4.93.2
    • lsscsi: version 0.31
    • lvm2: version 2.03.09
    • lz4: version 1.9.1
    • lzip: version 1.21
    • lzo: version 2.10
    • mc: version 4.8.24
    • miniupnpc: version 2.1
    • mpfr: version 4.0.2
    • nano: version 4.9.3
    • ncompress: version 4.2.4.6
    • ncurses: version 6.2
    • net-tools: version 20181103_0eebece
    • nettle: version 3.6
    • network-scripts: version 15.0 build 9
    • nfs-utils: version 2.1.1
    • nghttp2: version 1.41.0
    • nginx: version 1.16.1
    • nodejs: version 13.12.0
    • nss-mdns: version 0.14.1
    • ntfs-3g: version 2017.3.23
    • ntp: version 4.2.8p14
    • numactl: version 2.0.11
    • oniguruma: version 6.9.1
    • openldap-client: version 2.4.49
    • openssh: version 8.3p1
    • openssl-solibs: version 1.1.1g
    • openssl: version 1.1.1g
    • p11-kit: version 0.23.20
    • patch: version 2.7.6
    • pciutils: version 3.7.0
    • pcre2: version 10.35
    • pcre: version 8.44
    • php: version 7.4.7 (CVE-2019-11048)
    • pixman: version 0.40.0
    • pkgtools: version 15.0 build 33
    • pm-utils: version 1.4.1
    • procps-ng: version 3.3.16
    • pv: version 1.6.6
    • qemu: version 5.0.0
    • qrencode: version 4.0.2
    • reiserfsprogs: version 3.6.27
    • rpcbind: version 1.2.5
    • rsync: version 3.1.3
    • rsyslog: version 8.2002.0
    • samba: version 4.12.3 (CVE-2020-10700, CVE-2020-10704)
    • sdparm: version 1.11
    • sed: version 4.8
    • sg3_utils: version 1.45
    • shadow: version 4.8.1
    • shared-mime-info: version 2.0
    • smartmontools: version 7.1
    • spice: version 0.14.1
    • sqlite: version 3.32.2
    • ssmtp: version 2.64
    • sudo: version 1.9.0
    • sysfsutils: version 2.1.0
    • sysvinit-scripts: version 2.1 build 31
    • sysvinit: version 2.96
    • talloc: version 2.3.1
    • tar: version 1.32
    • tcp_wrappers: version 7.6
    • tdb: version 1.4.3
    • telnet: version 0.17
    • tevent: version 0.10.2
    • traceroute: version 2.1.0
    • tree: version 1.8.0
    • ttyd: version 20200606
    • usbredir: version 0.7.1
    • usbutils: version 012
    • utempter: version 1.2.0
    • util-linux: version 2.35.2
    • vbetool: version 1.2.2
    • vsftpd: version 3.0.3
    • wget: version 1.20.3
    • which: version 2.21
    • wireguard-tools: version 1.0.20200513
    • wsdd: version 20180618
    • xfsprogs: version 5.6.0
    • xkeyboard-config: version 2.30
    • xorg-server: version 1.20.8
    • xterm: version 356
    • xz: version 5.2.5
    • yajl: version 2.1.0
    • zlib: version 1.2.11
    • zstd: version 1.4.5

    Linux kernel:

    • version 5.7.2
    • CONFIG_WIREGUARD: WireGuard secure network tunnel
    • CONFIG_IP_SET: IP set support
    • CONFIG_SENSORS_DRIVETEMP: Hard disk drives with temperature sensors
    • enabled additional hwmon native drivers
    • enabled additional hyperv drivers
    • firmware added:
    • BCM20702A1-0b05-180a.hcd
    • out-of-tree driver status:
    • igb: using in-tree version
    • ixgbe: using in-tree version
    • r8125: using in-tree version
    • r750: (removed)
    • rr3740a: (removed)
    • tn40xx: (removed)

    Management:

    • AFP support removed
    • Multiple pool support added
    • Multi-language support added
    • avoid sending spinup/spindown to non-rotational devices
    • get rid of 'system' plugin support (never used)
    • integrate PAM
    • integrate ljm42 vfio-pci script changes
    • webgui: turn off username autocomplete in login form
    • webgui: Added new display setting: show normalized or raw device identifiers
    • webgui: Add 'Portuguese (pt)' key map option for libvirt
    • webgui: Added "safe mode" one-shot safemode reboot option
    • webgui: Tabbed case select window
    • webgui: Updated case icons
    • webgui: Show message when too many files for browsing
    • webgui: Main page: hide Move button when user shares are not enabled
    • webgui: VMs: change default network model to virtio-net
    • webgui: Allow duplicate containers different icons
    • webgui: Allow markdown within container descriptions
    • webgui: Fix Banner Warnings Not Dismissing without reload of page
    • webgui: Network: allow metric value of zero to set no default gateway
    • webgui: Network: fix privacy extensions not set
    • webgui: Network settings: show first DNSv6 server
    • webgui: SysDevs overhaul with vfio-pci.cfg binding
    • webgui: Icon buttons re-arrangement
    • webgui: Add update dialog to docker context menu
    • webgui: Update Feedback.php
    • webgui: Use update image dialog for update entry in docker context menu
    • webgui: Task Plugins: Providing Ability to define Display_Name




    User Feedback




In this new release could we expect to see full RAID 10 speeds from 4 NVMe drives in a separate cache pool? Or will the SMB overhead still affect it?

     

As it stands, using the original cache pool design, there have been no significant speed differences between the Cache: Yes, No or Prefer settings on 10GbE NVMe-to-NVMe, RAM-disk-to-NVMe, or SSD-to-NVMe transfers. The 10GbE connection was verified using iperf3.

     

If this is as intended, and not an expected feature of Unraid, let me know and I'll move this to a feature request in the correct forum.


Please fix this issue on all Ryzen 3xxx series - it seems to be happening on all chipsets: X370, X470, X570.

     

     

    My small workaround

     

    AMD Starship/Matisse PCIe Dummy Function | Non-Essential Instrumentation (0c:00.0)

    Jul 5 13:02:30 unRAIDTower kernel: vfio-pci 0000:0c:00.0: not ready 1023ms after FLR; waiting
    Jul 5 13:02:32 unRAIDTower kernel: vfio-pci 0000:0c:00.0: not ready 2047ms after FLR; waiting
    Jul 5 13:02:35 unRAIDTower kernel: vfio-pci 0000:0c:00.0: not ready 4095ms after FLR; waiting
    Jul 5 13:02:40 unRAIDTower kernel: vfio-pci 0000:0c:00.0: not ready 8191ms after FLR; waiting
    Jul 5 13:02:50 unRAIDTower kernel: vfio-pci 0000:0c:00.0: not ready 16383ms after FLR; waiting
    Jul 5 13:03:07 unRAIDTower kernel: vfio-pci 0000:0c:00.0: not ready 32767ms after FLR; waiting
    Jul 5 13:03:42 unRAIDTower kernel: vfio-pci 0000:0c:00.0: not ready 65535ms after FLR; giving up
    Jul 5 13:03:43 unRAIDTower kernel: clocksource: timekeeping watchdog on CPU10: Marking clocksource 'tsc' as unstable because the skew is too large:
    Jul 5 13:03:43 unRAIDTower kernel: clocksource: 'hpet' wd_now: b4700ed2 wd_last: b3954a18 mask: ffffffff
    Jul 5 13:03:43 unRAIDTower kernel: clocksource: 'tsc' cs_now: 1d337ecfa60 cs_last: 1d337dd658c mask: ffffffffffffffff
    Jul 5 13:03:43 unRAIDTower kernel: tsc: Marking TSC unstable due to clocksource watchdog
    Jul 5 13:03:43 unRAIDTower kernel: TSC found unstable after boot, most likely due to broken BIOS. Use 'tsc=unstable'.
    Jul 5 13:03:43 unRAIDTower kernel: sched_clock: Marking unstable (510899129422, -8570651)<-(510996221197, -105679272)
    Jul 5 13:03:45 unRAIDTower kernel: clocksource: Switched to clocksource hpet

I'm going to downgrade soon after I collect a diagnostic, but I was wondering if any of you have experienced the mover attempting to write to two drives at the same time? I also noticed they're the only drives in my array that are encrypted, so far.


    I notice that you now

    Quote

    avoid sending spinup/spindown to non-rotational devices

    but mechanical SATA hard drives assigned to pools don't automatically spin down either. I've tried changing the spin down time for each drive individually to 30 minutes (the global setting being 2 hours) but that had no effect.

     

    I can spin them down manually using the down arrow icon for that pool and they then stay spun down, as expected, but the Main and Dashboard pages continue to show them as active (though with no temperatures displayed):

     

[Screenshots: the Main and Dashboard pages showing the pool devices as active, with no temperatures displayed]

     

    The command line confirms that they have indeed spun down:

    root@Lapulapu:~# hdparm -C /dev/sdf
    
    /dev/sdf:
     drive state is:  standby
    root@Lapulapu:~# hdparm -C /dev/sdg
    
    /dev/sdg:
     drive state is:  standby
    root@Lapulapu:~# hdparm -C /dev/sdh
    
    /dev/sdh:
     drive state is:  standby
    root@Lapulapu:~# hdparm -C /dev/sdi
    
    /dev/sdi:
     drive state is:  standby
    root@Lapulapu:~# 

    They spin up when required and the temperatures are displayed again, as expected. I don't believe this is a controller issue because four other mechanical SATA hard drives that are also controlled by it and assigned to the array spin down after two hours idle, as they've always done.

     

    4 hours ago, Jerky_san said:

I'm going to downgrade soon after I collect a diagnostic, but I was wondering if any of you have experienced the mover attempting to write to two drives at the same time? I also noticed they're the only drives in my array that are encrypted, so far.

    This is a 'beta' topic - that means more information is needed than simply "anyone notice ...".

At the least it needs diagnostics.zip, and maybe an explanation of what's happening.

    4 hours ago, John_M said:

    I notice that you now

    but mechanical SATA hard drives assigned to pools don't automatically spin down either. I've tried changing the spin down time for each drive individually to 30 minutes (the global setting being 2 hours) but that had no effect.

     


     

Yes, this is a bug, thanks for reporting.  If you don't mind, please repost here:

    https://forums.unraid.net/bug-reports/prereleases/


    I like the improvements to the cache. I'm wondering if there will be new features coming to the Virtual Machines. ^.^ 

    On 7/5/2020 at 3:07 PM, limetech said:

    This is a 'beta' topic - that means more information is needed than simply "anyone notice ...".

At the least it needs diagnostics.zip, and maybe an explanation of what's happening.

I noticed when I start my mover process it literally writes to two data drives at once. The reason I said I was going to downgrade was to do more research; I was just throwing it out there, wondering if anyone had observed something similar, and as stated I'd collect diagnostics before doing it.

     

All I do is start the array, click mover, and it begins writing to two data disks at once. If I tell the mover to stop, it will eventually. At the very end, as the mover is stopping, it will suddenly write to only one drive again. Running unbalance makes it write to only a single data drive as well. Diagnostics were taken during a safe mode run, with all dockers halted as well. The only correlation between the two drives that I could make is that they are both less full than the others and both have encryption.

     

[Screenshot: disk activity showing simultaneous writes to two data disks]

    4 minutes ago, Jerky_san said:

    I noticed when I start my mover process it literally writes to two data drives at once

    Not clear why you think it shouldn't. In fact, you have a share with allocation set to Most Free, and many of your drives are very full. It shouldn't be surprising that it constantly switches between disks when it is moving that share.

    1 minute ago, trurl said:

    Not clear why you think it shouldn't. In fact, you have a share with allocation set to Most Free, and many of your drives are very full. It shouldn't be surprising that it constantly switches between disks when it is moving that share.

So... you're saying that the mover writing to two data drives at the exact same time as seen in the picture is considered normal? It drags the write speed down to a crawl doing this. I don't remember it doing it in previous versions either?

    Just now, Jerky_san said:

    the exact same time as seen in the picture

    The screen isn't refreshed sufficiently fast for you to claim "exact same time". And there is buffering to consider also. If allocation method is causing it to choose first one disk then another then writing to more than one disk during a brief interval is exactly what I would expect.

     

    4 minutes ago, Jerky_san said:

    I don't remember it doing it in previous versions either?

    Do the test if you don't remember.

    3 hours ago, trurl said:

    The screen isn't refreshed sufficiently fast for you to claim "exact same time". And there is buffering to consider also. If allocation method is causing it to choose first one disk then another then writing to more than one disk during a brief interval is exactly what I would expect.

     

    Do the test if you don't remember.

    I mean this can run for hours and still be the same speed. It isn't a refresh issue. 

     

There was even a thread specifically asking about this, and johnnie.black stated exactly what I'm seeing: slow write speeds.

     

     


Try installing the netdata docker; you can use that to see what is really going on in real time with the individual drives. It really helps figure out issues like this.

    Just now, TexasUnraid said:

Try installing the netdata docker; you can use that to see what is really going on in real time with the individual drives. It really helps figure out issues like this.

It's booted in safe mode with no dockers or plugins running. I specifically did this to make sure nothing was interfering with the tests.

    4 minutes ago, Jerky_san said:

It's booted in safe mode with no dockers or plugins running. I specifically did this to make sure nothing was interfering with the tests.

Good move, although since you now see it reacts the same with or without plugins running, it might be worth getting the netdata information to see whether it is alternating between the drives or writing in parallel.

     

As was said above, turbo write is another good thing to test.

    3 minutes ago, trurl said:

    Have you tried it with turbo write as suggested in that thread?

    Yes I always use reconstruct write.

    48 minutes ago, Jerky_san said:

    All I do is start the array, click mover, and it beings writing to two data disks at once. If I tell the mover to stop it will eventually. At the very end as the mover is stopping it will suddenly only write to one drive again.

Like @trurl mentioned, this is the result of having shares set to the "most free" allocation method: parity writes will overlap, making performance considerably slower. Note also that since v6.8.x, "turbo write" will be disabled once multiple array disk activity is detected, making performance even worse; you'll see the same behavior with v6.8. I never recommend using most free as an allocation method, precisely because of this performance issue. It can be a little better if used with a split level that avoids the constant disk switching for every new file.
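For anyone testing this, turbo write can be toggled under Settings -> Disk Settings (Tunable md_write_method); the command-line equivalent commonly posted on these forums is along the lines of (an assumption, not verified against this beta):

    mdcmd set md_write_method 1      # reconstruct write ("turbo write")
    mdcmd set md_write_method 0      # read/modify/write
    mdcmd set md_write_method auto   # let Unraid decide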

    13 minutes ago, johnnie.black said:

Like @trurl mentioned, this is the result of having shares set to the "most free" allocation method: parity writes will overlap, making performance considerably slower.

I'll set it to fill-up once it gets done rebooting and test. I guess it bases it on free space and not % of free space? Also, thank you for explaining.

    23 minutes ago, Jerky_san said:

I guess it bases it on free space and not % of free space?

    Correct.

    9 minutes ago, johnnie.black said:

    Correct.

It does appear to have fixed it. It's very strange, though. The files were each 10GB or so; I'd have expected it to take time writing each one before switching to the other drive. Anyway, thank you for your explanation and for helping me.

    11 minutes ago, Jerky_san said:

It does appear to have fixed it. It's very strange, though. The files were each 10GB or so; I'd have expected it to take time writing each one before switching to the other drive. Anyway, thank you for your explanation and for helping me.

It is probably happening because of the RAM buffer for writes. As soon as the last of the current file is written to that buffer, the client will start the transfer of the next file.  This starts the file allocation process, which requires disk access (both read and write operations) on the other disk, which (of course) is available because there are no pending operations for it.

     

PS --- I see this all the time when I use ImgBurn to write a Blu-ray .iso to the server.  The data transfer to the server will stop about thirty seconds before ImgBurn receives back the message that the file write to the physical device is completed.
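For context, the server-side buffering described here is the ordinary Linux page cache; its writeback thresholds can be inspected like this (values shown are common kernel defaults, not Unraid-specific):

    sysctl vm.dirty_background_ratio vm.dirty_ratio
    # vm.dirty_background_ratio = 10   <- background writeback starts at 10% of RAM
    # vm.dirty_ratio = 20              <- writers are throttled at 20% of RAM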

    4 hours ago, Frank1940 said:

It is probably happening because of the RAM buffer for writes.

I do have 32 gigs, so it most definitely could read in at least two if not three files. I am going to test one last thing because I think I realize what's going on. The only reason it matters to me is that I've had my array nearly packed before, but I never remember this occurring at any other time. The two drives have almost exactly the same amount of free space: 1.96TB. So I wonder if it is doing what you said, but only because they are equally full, which opens the gate to allow it.

     

    Edit:

     

So I tested it by using unbalance to make the two drives' free space different. It does indeed work like I always remembered, writing to one drive at a time. I guess I had used the mover to move files that left the two drives with exactly the same free space, which makes the mover initiate two moves instead of one move at a time. The files, I guess, were so similar in size that it would bounce between the two drives, since I have enough RAM to easily cache multiple files from the cache drive. Once the drives' free space differed enough (greater than a 10GB difference), it appears it will just write to one until it reaches the same free space again. I am going to do what @johnnie.black recommended and use fill-up instead, but that explains why I never saw it in previous versions. It must have just happened in the past week sometime.

     

    Edit 2:

     

    Thank you @trurl @TexasUnraid @johnnie.black @Frank1940 for your assistance in figuring it out

    15 hours ago, Jerky_san said:

    do fill-up

    When using Fill-up allocation, or even just when you have disks that are very full, it is especially important to understand the Minimum Free setting.

     

    Unraid has no way to know how large a file will become when it chooses a disk for it. If a disk has more than Minimum, the disk can be chosen, even if the file won't fit.

     

    You should set Minimum Free to larger than the largest file you expect to write to the share.
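To make the rule concrete, here is a minimal sketch (not Unraid's actual allocator) of the eligibility test described above, using a hypothetical 50GB Minimum Free:

    MIN_FREE_KB=$((50 * 1024 * 1024))               # hypothetical Minimum Free: 50GB in KB
    for d in /mnt/disk*; do
        avail=$(df --output=avail "$d" | tail -n1)  # free space in KB
        [ "$avail" -gt "$MIN_FREE_KB" ] && echo "$d is a candidate"
    done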




