  • Unraid OS version 6.9.0-beta22 available


    limetech

    Welcome (again) to 6.9 release development!

     

    This release is hopefully the last beta before moving to the -rc phase.  The reason we still mark it beta is that we'd like to get wider testing of the new multiple-pool feature, as well as perhaps sneak in a couple more refinements.  With that in mind, the obligatory disclaimer:

     

    Important: Beta code is not fully tested and not feature-complete.  We recommend running on test servers only!

     

    That said, here's what's new in this release...

     

    Multiple Pools

    This feature permits you to define up to 35 named pools, each with up to 30 storage devices.  The current "cache pool" is now simply a pool named "cache".  Pools are created and managed via the Main page.

     

    Note: When you upgrade a server which has a cache pool defined, a backup of config/disk.cfg will be saved to config/disk.cfg.bak, and then cache device assignment settings are moved out of disk.cfg and into a new file, config/pools/cache.cfg.  If you later revert to a pre-6.9 Unraid OS release you will lose your cache device assignments and will have to manually re-assign devices to cache.  As long as you reassign the correct devices, data should remain intact.
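    For those curious, the shape of that migration can be sketched from the console (an illustration only: the cache* key names below are made up; only the file names come from the note above, and the sketch runs against a throwaway temp dir rather than the real flash share):

    ```shell
    # Sketch of the 6.9 cache-pool config migration, against a mock flash share.
    # On a real server the files would live under /boot/config/.
    cfg=$(mktemp -d)
    printf 'parity="sdb"\ncacheId="Samsung_SSD"\ncacheUUID="1234"\n' > "$cfg/disk.cfg"

    cp "$cfg/disk.cfg" "$cfg/disk.cfg.bak"                      # backup, as described above
    mkdir -p "$cfg/pools"
    grep    '^cache' "$cfg/disk.cfg"     > "$cfg/pools/cache.cfg"  # cache settings move out...
    grep -v '^cache' "$cfg/disk.cfg.bak" > "$cfg/disk.cfg"         # ...and disk.cfg keeps the rest

    cat "$cfg/pools/cache.cfg"
    ```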

     

    When you create a user share, or edit an existing user share, you can specify which pool should be associated with that share.  The assigned pool functions identically to current cache pool operation.

     

    Something to be aware of: when a directory listing is obtained for a share, the unRAID array disk volumes and all pools which contain that share are merged in this order:

      pool assigned to share

      disk1

      :

      disk28

      all the other pools in strverscmp() order.
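    For the "all the other pools" part, GNU `sort -V` uses essentially the same version-style comparison as strverscmp() (they can differ on edge cases such as leading zeros), so you can preview the ordering from the console.  The pool names here are hypothetical:

    ```shell
    # Preview how "all the other pools" would be ordered in the merge:
    # sort -V performs a strverscmp-style version comparison.
    printf '%s\n' pool10 pool2 backup pool1 | sort -V
    # backup sorts first (alphabetical), then pool1, pool2, pool10
    ```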

     

    As with the current "cache pool", a single-device pool may be formatted with either xfs, btrfs, or reiserfs.  A multiple-device pool may only be formatted with btrfs.  A future release will include support for multiple "unRAID array" pools.  We are also considering zfs support.

     

    Something else to be aware of: Let's say you have a 2-device btrfs pool.  This is what btrfs calls "raid1" and what most people would understand to be "mirrored disks".  This is mostly true in that the same data exists on both disks, but not necessarily at the block level.  Now let's say you create another pool, unassign one of the devices from the existing 2-device btrfs pool, and assign it to the new pool.  You now have two 1-device btrfs pools.  Upon array Start, a user might understandably assume there are now two pools with exactly the same data.  However, this is not the case.  Instead, when Unraid OS sees that a btrfs device has been removed from an existing multi-device pool, upon array Start it will do a 'wipefs' on that device so that upon mount it will not be included in the old pool.  This of course effectively deletes all the data on the moved device.
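    To see what a wipefs pass actually does, you can experiment safely on a throwaway file instead of a real device.  This is a sketch only: we plant the btrfs superblock magic string by hand just to give wipefs something to detect, which is not the same as having a real filesystem on the device:

    ```shell
    # Create a 1 MiB scratch file and plant the btrfs magic ("_BHRfS_M"
    # at offset 64 KiB + 64, where the superblock magic lives).
    img=$(mktemp)
    truncate -s 1M "$img"
    printf '_BHRfS_M' | dd of="$img" bs=1 seek=65600 conv=notrunc 2>/dev/null

    wipefs "$img"                 # lists the detected btrfs signature
    wipefs -a "$img" >/dev/null   # erases it -- what Unraid does to a removed pool member
    wipefs "$img"                 # prints nothing: the device can no longer join the old pool
    ```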

     

    Language Translation

    A huge amount of work and effort by @bonienl has gone into providing multiple-language support in the Unraid OS Management Utility, aka webGUI.  There are several language packs now available, and several more in the works.  Thanks to @Squid, language packs are installed via the Community Applications plugin - look for a new category entitled Language.

     

    Note: Community Applications HAS to be up to date to install languages.  Versions of CA prior to 2020.05.12 will not even load on this release.  As of this writing, the current version of CA is 2020.06.13a.

     

    Each language pack exists in a public repo under the Unraid organization on GitHub.  Interested users are encouraged to clone and issue Pull Requests to correct translation errors.  Language translations and PR merging are managed by @SpencerJ.

     

    Linux Kernel

    Upgraded to 5.7.

     

    Unfortunately, none of the out-of-tree drivers compile with this kernel.  In particular, these drivers are omitted:

    • Highpoint RocketRaid r750
    • Highpoint RocketRaid rr3740a
    • Tehuti Networks tn40xx

    If you require one of these drivers, please create a Bug Report and we'll spend some time looking for alternatives.  Better yet, pester the manufacturer of the controller and get them to update their drivers.

     

    Base Packages

    All updated to latest versions.  In addition, Linux PAM has been integrated.  This will permit us to install 2-factor authentication packages in a future release.

     

    Docker

    Updated to version 19.03.11

     

    It is also now possible to select different icons for multiple containers of the same type.  This change necessitates a re-download of the icons for all your installed docker applications.  Expect a delay when first loading the dashboard or the Docker tab while this happens, before the containers show up.

     

    Virtualization

    libvirt updated to version 6.4.0

    qemu updated to version 5.0.0

     

    In addition, integrated changes to System Devices page by user @Skitals with modifications by user @ljm42.  You can now select PCI devices to isolate from Linux upon boot simply by checking some boxes.  This makes it easier to reserve those devices for assignment to VM's.

     

    Note: If you had the VFIO-PCI Config plugin installed, you should remove it, as that functionality is now built into Unraid OS 6.9.  Refer also to @ljm42's excellent guide.

     

    In a future release we will include the NVIDIA and AMD GPU drivers natively into Unraid OS.  The primary use case is to facilitate accelerated transcoding in docker containers.  For this we require Linux to detect and auto-install the appropriate driver.  However, in order to reliably pass through an NVIDIA or AMD GPU to a VM, it's necessary to prevent Linux from auto-installing a GPU driver for those devices upon boot, which can be easily done now through System Devices page.  Users passing GPU's to VM's are encouraged to set this up now.
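    Isolation like this works at the IOMMU-group level, so it helps to see how your devices are grouped.  A small sketch of a grouping lister follows; it takes the groups directory as a parameter so the demo can run against a mock tree anywhere, while on a real server you would point it at /sys/kernel/iommu_groups:

    ```shell
    # Print each PCI device together with its IOMMU group number.
    list_iommu_groups() {
        root=${1:-/sys/kernel/iommu_groups}
        for dev in "$root"/*/devices/*; do
            [ -e "$dev" ] || continue          # skip cleanly if no groups exist
            g=${dev%/devices/*}; g=${g##*/}    # extract group number from the path
            printf 'IOMMU group %s: %s\n' "$g" "${dev##*/}"
        done
    }

    # Demo against a mock tree; on Unraid: list_iommu_groups /sys/kernel/iommu_groups
    mock=$(mktemp -d)
    mkdir -p "$mock/7/devices/0000:01:00.0" "$mock/7/devices/0000:01:00.1"
    list_iommu_groups "$mock"
    # IOMMU group 7: 0000:01:00.0
    # IOMMU group 7: 0000:01:00.1
    ```

    Devices in the same group (a GPU and its HDMI audio function, for example) generally need to be isolated and passed through together.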

     

    "unexpected GSO errors"

     

    If your system log is being flooded with errors such as:

    Jun 20 09:09:21 Tower kernel: tun: unexpected GSO type: 0x0, gso_size 31, hdr_len 66

    You need to edit each VM and change the model type for the Ethernet bridge from "virtio" to "virtio-net".  In most cases this can be accomplished simply by clicking Update in "Form View" on the VM Edit page.  For other network configs it may be necessary to directly edit the xml.  For example:

    <interface type='bridge'>
          <mac address='xx:xx:xx:xx:xx:xx'/>
          <source bridge='br0'/>
          <model type='virtio-net'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
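    If you have many VMs to touch, the same one-line change can be scripted.  The usual route would be `virsh dumpxml` / `virsh define` (the domain name "win10" below is hypothetical); here we demonstrate just the substitution itself on a stand-in snippet:

    ```shell
    # The substitution that clicking Update in "Form View" effectively performs:
    # change the interface model from virtio to virtio-net.  The closing quote
    # in the pattern keeps an already-converted 'virtio-net' from matching again.
    xml="<model type='virtio'/>"
    fixed=$(printf '%s\n' "$xml" | sed "s|<model type='virtio'/>|<model type='virtio-net'/>|")
    echo "$fixed"   # <model type='virtio-net'/>

    # On a live system, something like (hypothetical domain name):
    #   virsh dumpxml win10 | sed "s|model type='virtio'|model type='virtio-net'|" > /tmp/win10.xml
    #   virsh define /tmp/win10.xml
    ```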

     

    Other

    • AFP support has been removed.
    • Numerous other Unraid OS and webGUI bug fixes and improvements.

     

    Version 6.9.0-beta22 2020-06-16

     

    Caution! This is beta software, consider using on test servers only.

     

    Base distro:

    • aaa_base: version 14.2
    • aaa_elflibs: version 15.0 build 23
    • acl: version 2.2.53
    • acpid: version 2.0.32
    • apcupsd: version 3.14.14
    • at: version 3.2.1
    • attr: version 2.4.48
    • avahi: version 0.8
    • bash: version 5.0.017
    • beep: version 1.3
    • bin: version 11.1
    • bluez-firmware: version 1.2
    • bridge-utils: version 1.6
    • brotli: version 1.0.7
    • btrfs-progs: version 5.6.1
    • bzip2: version 1.0.8
    • ca-certificates: version 20191130 build 1
    • celt051: version 0.5.1.3
    • cifs-utils: version 6.10
    • coreutils: version 8.32
    • cpio: version 2.13
    • cpufrequtils: version 008
    • cryptsetup: version 2.3.3
    • curl: version 7.70.0
    • cyrus-sasl: version 2.1.27
    • db48: version 4.8.30
    • dbus: version 1.12.18
    • dcron: version 4.5
    • devs: version 2.3.1 build 25
    • dhcpcd: version 8.1.9
    • diffutils: version 3.7
    • dmidecode: version 3.2
    • dnsmasq: version 2.81
    • docker: version 19.03.11
    • dosfstools: version 4.1
    • e2fsprogs: version 1.45.6
    • ebtables: version 2.0.11
    • eject: version 2.1.5
    • elvis: version 2.2_0
    • etc: version 15.0
    • ethtool: version 5.7
    • eudev: version 3.2.5
    • file: version 5.38
    • findutils: version 4.7.0
    • flex: version 2.6.4
    • floppy: version 5.5
    • fontconfig: version 2.13.92
    • freetype: version 2.10.2
    • fuse3: version 3.9.1
    • gawk: version 4.2.1
    • gd: version 2.2.5
    • gdbm: version 1.18.1
    • genpower: version 1.0.5
    • getty-ps: version 2.1.0b
    • git: version 2.27.0
    • glib2: version 2.64.3
    • glibc-solibs: version 2.30
    • glibc-zoneinfo: version 2020a build 1
    • glibc: version 2.30
    • gmp: version 6.2.0
    • gnutls: version 3.6.14
    • gptfdisk: version 1.0.5
    • grep: version 3.4
    • gtk+3: version 3.24.20
    • gzip: version 1.10
    • harfbuzz: version 2.6.7
    • haveged: version 1.9.8
    • hdparm: version 9.58
    • hostname: version 3.23
    • htop: version 2.2.0
    • icu4c: version 67.1
    • inetd: version 1.79s
    • infozip: version 6.0
    • inotify-tools: version 3.20.2.2
    • intel-microcode: version 20200609
    • iproute2: version 5.7.0
    • iptables: version 1.8.5
    • iputils: version 20190709
    • irqbalance: version 1.6.0
    • jansson: version 2.13.1
    • jemalloc: version 4.5.0
    • jq: version 1.6
    • keyutils: version 1.6.1
    • kmod: version 27
    • lbzip2: version 2.5
    • lcms2: version 2.10
    • less: version 551
    • libaio: version 0.3.112
    • libarchive: version 3.4.3
    • libcap-ng: version 0.7.10
    • libcgroup: version 0.41
    • libdaemon: version 0.14
    • libdrm: version 2.4.102
    • libedit: version 20191231_3.1
    • libestr: version 0.1.11
    • libevent: version 2.1.11
    • libfastjson: version 0.99.8
    • libffi: version 3.3
    • libgcrypt: version 1.8.5
    • libgpg-error: version 1.38
    • libgudev: version 233
    • libidn: version 1.35
    • libjpeg-turbo: version 2.0.4
    • liblogging: version 1.0.6
    • libmnl: version 1.0.4
    • libnetfilter_conntrack: version 1.0.8
    • libnfnetlink: version 1.0.1
    • libnftnl: version 1.1.7
    • libnl3: version 3.5.0
    • libpcap: version 1.9.1
    • libpciaccess: version 0.16
    • libpng: version 1.6.37
    • libpsl: version 0.21.0
    • librsvg: version 2.48.7
    • libseccomp: version 2.4.3
    • libssh2: version 1.9.0
    • libssh: version 0.9.4
    • libtasn1: version 4.16.0
    • libtirpc: version 1.2.6
    • libunistring: version 0.9.10
    • libusb-compat: version 0.1.5
    • libusb: version 1.0.23
    • libuv: version 1.34.0
    • libvirt-php: version 0.5.5
    • libvirt: version 6.4.0
    • libwebp: version 1.1.0
    • libwebsockets: version 3.2.2
    • libx86: version 1.1
    • libxml2: version 2.9.10
    • libxslt: version 1.1.34
    • libzip: version 1.7.0
    • lm_sensors: version 3.6.0
    • logrotate: version 3.16.0
    • lshw: version B.02.17
    • lsof: version 4.93.2
    • lsscsi: version 0.31
    • lvm2: version 2.03.09
    • lz4: version 1.9.1
    • lzip: version 1.21
    • lzo: version 2.10
    • mc: version 4.8.24
    • miniupnpc: version 2.1
    • mpfr: version 4.0.2
    • nano: version 4.9.3
    • ncompress: version 4.2.4.6
    • ncurses: version 6.2
    • net-tools: version 20181103_0eebece
    • nettle: version 3.6
    • network-scripts: version 15.0 build 9
    • nfs-utils: version 2.1.1
    • nghttp2: version 1.41.0
    • nginx: version 1.16.1
    • nodejs: version 13.12.0
    • nss-mdns: version 0.14.1
    • ntfs-3g: version 2017.3.23
    • ntp: version 4.2.8p14
    • numactl: version 2.0.11
    • oniguruma: version 6.9.1
    • openldap-client: version 2.4.49
    • openssh: version 8.3p1
    • openssl-solibs: version 1.1.1g
    • openssl: version 1.1.1g
    • p11-kit: version 0.23.20
    • patch: version 2.7.6
    • pciutils: version 3.7.0
    • pcre2: version 10.35
    • pcre: version 8.44
    • php: version 7.4.7 (CVE-2019-11048)
    • pixman: version 0.40.0
    • pkgtools: version 15.0 build 33
    • pm-utils: version 1.4.1
    • procps-ng: version 3.3.16
    • pv: version 1.6.6
    • qemu: version 5.0.0
    • qrencode: version 4.0.2
    • reiserfsprogs: version 3.6.27
    • rpcbind: version 1.2.5
    • rsync: version 3.1.3
    • rsyslog: version 8.2002.0
    • samba: version 4.12.3 (CVE-2020-10700, CVE-2020-10704)
    • sdparm: version 1.11
    • sed: version 4.8
    • sg3_utils: version 1.45
    • shadow: version 4.8.1
    • shared-mime-info: version 2.0
    • smartmontools: version 7.1
    • spice: version 0.14.1
    • sqlite: version 3.32.2
    • ssmtp: version 2.64
    • sudo: version 1.9.0
    • sysfsutils: version 2.1.0
    • sysvinit-scripts: version 2.1 build 31
    • sysvinit: version 2.96
    • talloc: version 2.3.1
    • tar: version 1.32
    • tcp_wrappers: version 7.6
    • tdb: version 1.4.3
    • telnet: version 0.17
    • tevent: version 0.10.2
    • traceroute: version 2.1.0
    • tree: version 1.8.0
    • ttyd: version 20200606
    • usbredir: version 0.7.1
    • usbutils: version 012
    • utempter: version 1.2.0
    • util-linux: version 2.35.2
    • vbetool: version 1.2.2
    • vsftpd: version 3.0.3
    • wget: version 1.20.3
    • which: version 2.21
    • wireguard-tools: version 1.0.20200513
    • wsdd: version 20180618
    • xfsprogs: version 5.6.0
    • xkeyboard-config: version 2.30
    • xorg-server: version 1.20.8
    • xterm: version 356
    • xz: version 5.2.5
    • yajl: version 2.1.0
    • zlib: version 1.2.11
    • zstd: version 1.4.5

    Linux kernel:

    • version 5.7.2
    • CONFIG_WIREGUARD: WireGuard secure network tunnel
    • CONFIG_IP_SET: IP set support
    • CONFIG_SENSORS_DRIVETEMP: Hard disk drives with temperature sensors
    • enabled additional hwmon native drivers
    • enabled additional hyperv drivers
    • firmware added:
    • BCM20702A1-0b05-180a.hcd
    • out-of-tree driver status:
    • igb: using in-tree version
    • ixgbe: using in-tree version
    • r8125: using in-tree version
    • r750: (removed)
    • rr3740a: (removed)
    • tn40xx: (removed)

    Management:

    • AFP support removed
    • Multiple pool support added
    • Multi-language support added
    • avoid sending spinup/spindown to non-rotational devices
    • get rid of 'system' plugin support (never used)
    • integrate PAM
    • integrate ljm42 vfio-pci script changes
    • webgui: turn off username autocomplete in login form
    • webgui: Added new display setting: show normalized or raw device identifiers
    • webgui: Add 'Portuguese (pt)' key map option for libvirt
    • webgui: Added "safe mode" one-shot safemode reboot option
    • webgui: Tabbed case select window
    • webgui: Updated case icons
    • webgui: Show message when too many files for browsing
    • webgui: Main page: hide Move button when user shares are not enabled
    • webgui: VMs: change default network model to virtio-net
    • webgui: Allow duplicate containers different icons
    • webgui: Allow markdown within container descriptions
    • webgui: Fix Banner Warnings Not Dismissing without reload of page
    • webgui: Network: allow metric value of zero to set no default gateway
    • webgui: Network: fix privacy extensions not set
    • webgui: Network settings: show first DNSv6 server
    • webgui: SysDevs overhaul with vfio-pci.cfg binding
    • webgui: Icon buttons re-arrangement
    • webgui: Add update dialog to docker context menu
    • webgui: Update Feedback.php
    • webgui: Use update image dialog for update entry in docker context menu
    • webgui: Task Plugins: Providing Ability to define Display_Name



    User Feedback




      

    43 minutes ago, xl3b4n0nx said:

    I think that is a GREAT feature and I will absolutely be using it. I can't stand when my cache fills and my dockers go nuts because the drive is out of space. However, that is not what I am referring to. That is a set of lateral cache pools. I am talking about vertical pools.
     

    Ex:  500GB NVMe  →  2 TB SATA SSD  →  UnRAID Array

     

    Is this a possibility with this new multi-pool feature?

     

    As xl3b4n0nx put it, this is what I am hoping for.  I imagine the tiered caching setup as follows:

     

    array 1: JBOD SSD (Hot Pool)

    redundancy unimportant

    Data is moved when idle, on a timer (mover), or policy (last time accessed, ie hot data is never moved)

    For:

    • Appdata
    • Dockers
    • Important VMs

     

    array 2: r10/z2 BTRFS/ZFS HDD (Living Pool)

    redundancy possible

    Data is moved on a timer (mover), or policy (last time accessed, ie hot data is moved up)

    For:

    • Downloads
    • Testing VMs

     

    array 3: Unraid HDD (Cold Pool)

    redundancy paramount

    Data is moved by policy (last time accessed, ie hot data is moved up)

    For:

    • Keeping a redundant copy of everything

     

    In addition, proper SAS support would be a welcome addition. I have never had SATA products last past a few years.

    Edited by rukiftw

    17 hours ago, trurl said:

    I only have one, so I have just named the pool "fast". Since there is no redundancy, I am thinking about putting appdata and system shares there since these are already backed up with the CA Backup plugin.

    Just finished setting this up, appdata and system shares now on my fast pool, dockers and VMs working as before.

     

    It was a real pleasure to see how quickly I was able to move plex appdata from cache pool to fast pool.

    2 minutes ago, rukiftw said:

    As xl3b4n0nx put it, this is what I am hoping for.  I imagine the tiered caching setup as follows:

     

    array 1: JBOD SSD (Hot Pool)

    redundancy unimportant

    Data is moved when idle, on a timer (mover), or policy (last time accessed, ie hot data is never moved)

     

    array 2: r10/z2 BTRFS/ZFS (Living Pool)

    redundancy possible

    Data is moved on a timer (mover), or policy (last time accessed, ie hot data is moved up)

     

    array 3: Unraid (Cold Pool)

    redundancy paramount

    Data is moved by policy (last time accessed, ie hot data is moved up)

    That is exactly what I am after. Because once I get a 10GbE connection setup I want NVMe drives to be able to service that speed and have SATA SSDs as a next tier to still maintain decent speed (4Gb-5Gb) if the NVMe gets filled.

     

    And if I can have a separate cache pool for dockers and system data running xfs for maximum stability, I won't mind running btrfs for my data cache so I can have redundancy.

    13 hours ago, Dazog said:

    @limetech will we see the 6.9 releases move to nginx 1.19

     

    Since 1.16 is now EOL?

    Yes or possibly 1.18.0


    17 hours ago, exdox77 said:

    Not sure if anyone else is having this issue but my 4th core has been 100% since reboot.

     

    [screenshot: dashboard showing one CPU core pegged at 100%]

    Yes, seeing the same issue, 1-2 threads will be pegged about 70-80% of the time since updating to the beta.

     

    My idle CPU usage was around ~5% on the stable version, it is now sitting around 15-20% but will settle down for a second every now and then.

    Edited by TexasUnraid

    3 hours ago, xl3b4n0nx said:

    I think that is a GREAT feature and I will absolutely be using it. I can't stand when my cache fills and my dockers go nuts because the drive is out of space. However, that is not what I am referring to. That is a set of lateral cache pools. I am talking about vertical pools.
     

    Ex:  500GB NVMe  →  2 TB SATA SSD  →  UnRAID Array

     

    Is this a possibility with this new multi-pool feature?

    ditto this x4, would love to see this feature added! This and snapshots are the 2 main features I miss from windows.

    Edited by TexasUnraid


    I am also still seeing the excessive writes mentioned earlier in the beta. Writes are about 3-6x more than they should be anytime docker is put on a btrfs drive, as measured by the LBA-written SMART output from the SSD itself. The worst part is that the amount of writes seems to climb over time.

     

    If it is on an XFS drive, everything works as expected and writes are correct.

    22 minutes ago, TexasUnraid said:

    Yes, seeing the same issue, 1-2 threads will be pegged about 70-80% of the time since updating to the beta.

     

    My idle CPU usage was around ~5% on the stable version, it is now sitting around 15-20% but will settle down for a second every now and then.

    Disable WSD under SMB settings.

     

    Does it fix it?

    15 hours ago, tjb_altf4 said:

    @limetechAny thoughts on expanding current drive cap for pools/array,

     

    Max for unRAID array is 30 devices (2 parity, 28 data), for pools also 30 devices; max number of pools 35 - that ought to be enough for anybody 😁

     

    I can see perhaps wanting more than 35 pools, but seriously, what's the use case for larger arrays/pools?

     

    9 hours ago, Ruato said:

     

    Hi,

    Do you already have an idea or have decided how will the Wireguard functionality differ from the current Wireguard plugin feature? That is, will there be an easy to use configuration view as for the plugin or..?

    Thank you for all your hard work!

     

    That line in the change log is referencing that the wireguard kernel module has been merged into the Linux kernel.

    6 hours ago, johnnie.black said:

    You just need to create a new pool, assign the device and update the local paths, note that you'll need to create a new top level folder to be the share, e.g., if you had it as:

     

    /mnt/disks/downloads it would be shared by UD on SMB as "downloads", now if you create a new pool e.g.:

     

    /mnt/mynewpool, you need to create a new top-level folder called "downloads" and move everything inside that folder. Local maps need to be corrected from "/mnt/disks/downloads" to "/mnt/mynewpool/downloads"; anything/anyone accessing the share with SMB can use the original mapping (//tower/downloads)

     

    This will work; however, it is better to use the UI to create the share.  That is, after creating your pool, go to the Shares page and create a share named 'downloads'.  Set 'use cache pool' to "cache-only" and select your new pool name.

     

    4 hours ago, xl3b4n0nx said:

    I think that is a GREAT feature and I will absolutely be using it. I can't stand when my cache fills and my dockers go nuts because the drive is out of space. However, that is not what I am referring to. That is a set of lateral cache pools. I am talking about vertical pools.
     

    Ex:  500GB NVMe  →  2 TB SATA SSD  →  UnRAID Array

     

    Is this a possibility with this new multi-pool feature?

    No.



    maybe some questions before trying the beta

     

    i currently use

    1 cache drive (nvme 1tb) xfs formatted

    1 UD device where my VM's sit, nvme 500gb xfs formatted

    4 drives in array all xfs formatted

     

    when i now upgrade, does this collide with the current setup ? do cache drives have to be btrfs or am i still good to go ? reason why xfs, my cache drive was btrfs before and crashed 2x completely with complete data loss ... since xfs im good. when i read correctly this should still be fine as single drive pool.

     

    Next would be, change my current UD drive to cache2 (sep single drive cache pool for my VM's), also possible as its a separate pool with single formatted xfs drive ?

    or rather keep it UD

     

    VM's need network change so the kernel error would be gone, when changing this (webui) also probably all manual changes still gone or are they persistent now ? (sample cpu pinning)

     

    VFiO PCIe config should be obsolete then when using the new setting feature, uninstall plugin before update or doesnt matter ?

     

    last for now, new kernel fixes nested virt ? ... win 10 wsl2 subsystem activation lets VM crash badly, any changes done there in webui or still same procedure to activate ?

     

    for some notes thanks ahead


    Cache pools can be any of the supported formats if there is only one drive in the pool.    If you want a multi-drive pool then it has to be a BTRFS variant

    20 hours ago, limetech said:

    227 comments(!) in that topic.  is there a tldr?

    Since we got your attention on this issue, here's two recent reddit threads discussing the issue:

    Unraid is unusable for me because of the docker excessive writes bug destroying my cache ssd. Is nobody bothered by this?

    Btrfs and cache writes...a question for those who switched.

    It appears most people have fixed the issue by switching from btrfs to xfs for the cache drive.

    Using iotop I can still (after switching to xfs recently) see about 1 MB/min of writes to cache on an idle server. That's close to 1.5 GB per day without doing anything.


    maybe another question about VM and virtio network, do i see this correct that they have to be on their own subnet ?

     

    its weird here now due my vm's are on 192.168.122.x while my regular net is 192.168.1.x

     

    i never added or changed anything on virtio, so its default i guess.

     

    is this a must have ? cause i guess i ll run into issues when i try to rdp to my VM from non unraid machines.

     

    ### Update, confirmed, i cant reach any win10 vm from either guac or laptop in LAN Network 192.168.1.x .... so, this is still open when i read correctly that the kernel errors appear when using the same network as dockers do ?

    Edited by alturismo


    ok, seems i was too fast, i guess its another issue why this happens

     

    when i read changelog again ;)

     

    webgui: VMs: change default network model to virtio-net

     

    this virtio-net does not exist here, i only have those

     

    [screenshot: available network source options in the VM edit form]

     

    so i have chosen the virbr0 which is 192.168.122.0/24, whatever this is about cause i never added it.

     

    so, how can i add this virtio-net to unraid ?

     

    as note, when i add a new vm i also dont have the option, its still default to br0 and i can choose from br0, br1 or virbr0 only.

    Edited by alturismo

    2 hours ago, limetech said:

     

    Max for unRAID array is 30 devices (2 parity, 28 data), for pools also 30 devices; max number of pools 35 - that ought to be enough for anybody 😁

     

    I can see perhaps wanting more than 35 pools, but seriously, what's the use case for larger arrays/pools?

     

    No, that's great. I thought we were still working with an overall system cap of 30 devices that we would need to split across the array and multiple pools.

    Thanks for the clarification!

    42 minutes ago, alturismo said:

    ok, seems i was too fast, i guess its another issue why this happens

     

    when i read changelog again ;)

     

    webgui: VMs: change default network model to virtio-net

     

    this virtio-net does not exist here, i only have those

     

    [screenshot: available network source options in the VM edit form]

     

    so i have chosen the virbr0 which is 192.168.122.0/24, whatever this is about cause i never added it.

     

    so, how can i add this virtio-net to unraid ?

     

    as note, when i add a new vm i also dont have the option, its still default to br0 and i can choose from br0, br1 or virbr0 only.

    Just go into the xml for the vm, find the network section, and edit virtio to virtio-net.

    3 hours ago, Dazog said:

    Disable WSD under SMB settings.

     

    Does it fix it?

    Just tried that, doesn't seem to have changed much. If anything it seems a little worse, but could just be from restarting the array.

    23 minutes ago, david279 said:

    Just go into the xml for the vm, find the network section, and edit virtio to virtio-net.

    You should just be able to Edit VM and click Update.


    Did the fonts change? Everything looks really nice or I'm blind... either way great job team. 

    6 hours ago, limetech said:

    You should just be able to Edit VM and click Update.

    if its meant to just update something and then its changed in the background, ok, that seemed to work.


    Great to see this newer kernel finally through, isn't it? Well done everyone!

     

    So I started using the new k10temp module in this kernel, and I'm now being told my CPU / MB temp is at 94 degrees C under load.  From everything I've read, the AMD Threadripper 1950X doesn't really get that hot (I've got a triple-fan water cooler on it, and it runs under load 24x7), so I'm keen to see whether my temps before were wrong, or if the new ones are wrong, or something in between.  Anyone else care to comment on their experience with temps on Threadripper?  I am using the system temp plugin though - I assume that's still required... ?

     

    Thanks.





