• Unraid OS version 6.9.0-beta29 available


    limetech

    Back in the saddle ... Sorry for the long delay in publishing this release.  Aside from including some delicate coding, this release was delayed due to several team members, chiefly myself, having to deal with various non-work-related challenges which greatly slowed the pace of development.  That said, there is quite a bit in this release, LimeTech is growing and we have many exciting features in the pipe - more on that in the weeks to come.  Thanks to everyone for their help and patience during this time.

    Cheers,

    -Tom

     

    IMPORTANT: This is Beta software.  We recommend running on test servers only!

     

    KNOWN ISSUE: with this release we have moved to the latest Linux 5.8 stable kernel.  However, we have discovered that a regression has been introduced in the mpt3sas driver used by many LSI chipsets, e.g., the LSI 9201-16e.  It typically looks like this on the System Devices page:

    Serial Attached SCSI controller: Broadcom / LSI SAS2116 PCI-Express Fusion-MPT SAS-2 [Meteor] (rev 02)

    The problem is that devices are no longer recognized.  There are already bug reports pertaining to this issue:

    https://bugzilla.kernel.org/show_bug.cgi?id=209177

    https://bugzilla.redhat.com/show_bug.cgi?id=1878332

     

    We have reached out to the maintainer to see if a fix can be expedited; however, we feel we can neither revert to the 5.7 kernel nor hold the release because of this issue.  We are monitoring the situation and will publish a release with the fix ASAP.

     

    ANOTHER known issue: we have added additional btrfs balance options:

    • raid1c3
    • raid1c4
    • and modified the raid6 balance operation to set metadata to raid1c3 (previously it was raid1).

     

    However, we have noticed that applying one of these balance filters to a completely empty volume leaves some data extents with the previous profile.  The solution is to simply run the same balance again.  We consider this to be a btrfs bug and if no solution is forthcoming we'll add the second balance to the code by default.  For now, it's left as-is.
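    The raid6 operation above corresponds to a balance like the one sketched below.  This is a dry-run sketch that only prints the commands rather than running them; the /mnt/cache mount point is an assumption, so substitute your own pool:

```shell
# Hypothetical pool mount point -- substitute your own.
POOL=/mnt/cache

# DRY_RUN=1 makes the helper print commands instead of executing them,
# so nothing on the system is modified.
DRY_RUN=1
run() { if [ "$DRY_RUN" = "1" ]; then echo "$@"; else "$@"; fi; }

# Convert data to raid6 and metadata to raid1c3 (what the raid6 balance
# operation now does).
run btrfs balance start -dconvert=raid6 -mconvert=raid1c3 "$POOL"

# Workaround for the known issue above: run the identical balance a second
# time so any extents left with the old profile get converted.
run btrfs balance start -dconvert=raid6 -mconvert=raid1c3 "$POOL"
```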

     

    THE PRIMARY FOCUS of this release is to put tools in place to help users migrate data off SSD-based pools so that those devices may be re-partitioned if necessary, and then migrate the data back.

     

    What are we talking about?  For several years now, storage devices managed by Unraid OS have been formatted with an "Unraid Standard Partition Layout".  This layout has partition 1 starting at offset 32KiB from the start of the device and extending to the end of the device.  (For devices with 512-byte sectors, partition 1 starts at sector 64; for devices with a 4096-byte sector size, partition 1 starts at sector 8.)  This layout achieves maximum storage efficiency and ensures partition 1 starts on a 4096-byte boundary.
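    In other words, the starting sector is just the 32KiB offset divided by the device's logical sector size:

```shell
# Partition 1 starts 32 KiB (32768 bytes) into the device; the starting
# sector is that offset divided by the logical sector size.
offset=32768
echo "512-byte sectors:  partition 1 starts at sector $((offset / 512))"
echo "4096-byte sectors: partition 1 starts at sector $((offset / 4096))"
```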

     

    Through user reports and extensive testing, however, we have noted that many modern SSD devices, in particular the Samsung EVO, do not perform at their best with this partition layout; the devices appear to write far more data than one would expect, and with an SSD one wants to minimize writes as much as possible.

     

    The solution to the "excessive SSD write" issue is to position partition 1 at offset 1MiB from the start of the device instead of at 32KiB.  This will both increase performance and decrease writes on affected devices.  Do you absolutely need to re-partition your SSDs?  Probably not, depending on which devices you have.  Click on a device from Main, scroll down to Attributes, and take a look at Data units written.  If this is increasing very rapidly then you would probably benefit from re-partitioning.
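    To put that attribute in perspective: for NVMe devices, one "data unit" is 1000 × 512 bytes, so a rough conversion of the raw counter to terabytes written looks like this (the counter value below is made up for illustration; substitute your own reading):

```shell
# Hypothetical "Data units written" value copied from the Attributes
# section of the Device Information page -- substitute your own reading.
units=3100000

# NVMe spec: one data unit = 1000 * 512 = 512,000 bytes.
bytes=$((units * 512000))
echo "$((bytes / 1000000000000)) TB written"
```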

     

    Note: if you have already (re)Formatted using a previous 6.9-beta release, for an SSD smaller than 2TiB the proper partition layout will appear like this on the Device Information page:

    Partition format:   MBR: 1MiB-aligned

    For an SSD larger than 2TiB:

    Partition format:   GPT: 1MiB-aligned

     

    Here's what's in this release to help facilitate re-partitioning of SSD devices:

     

    An Erase button which appears in the Device Information page.

     

    The Erase button may be used to erase (delete) content from a volume. A volume is either the content of an unRAID array data disk, or the content of a pool. In the case of an unRAID disk, only that device is erased; in the case of a multiple-device pool ALL devices of the pool are erased.

    The extent of Erase varies depending on whether the array is Stopped, or Started in Maintenance mode (if started in Normal mode, all volume Erase buttons are disabled).

    Started/Maintenance mode: in this case the LUKS header (if any) and any file system within partition 1 are erased. The MBR (master boot record) is not erased.

    Stopped: in this case, unRAID array disk volumes and pool volumes are treated a little differently:

    • unRAID array disk volumes - if Parity and/or Parity2 is valid, the operation proceeds exactly as above, that is, the content of only partition 1 is erased but the MBR (master boot record) is left as-is; if there is no valid parity, the MBR is also erased.
    • Pool volumes - partition 1 of every device within the pool is erased, and then the MBR is also erased.


    The purpose of erasing the MBR is to permit re-partitioning of the device if required.  Upon format, Unraid OS will position partition 1 at 32KiB for HDD devices and at 1MiB for SSD devices.

     

    Note that erase does not overwrite the storage content of a device, it simply clears the LUKS header if present (which effectively makes the device unreadable), and file system and MBR signatures.  A future Unraid OS release may include the option of overwriting the data.
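    Under the hood this is essentially the new wipefs function mentioned in the changelog: signatures are cleared, not data.  A safe way to see the behavior is on a scratch image file rather than a real device (device paths such as /dev/sdX are deliberately avoided here):

```shell
# Create a 1 MiB scratch image and stamp a bare MBR boot signature
# (bytes 0x55 0xAA at offset 510) into it -- no real device is touched.
img=$(mktemp)
truncate -s 1M "$img"
printf '\125\252' | dd of="$img" bs=1 seek=510 conv=notrunc 2>/dev/null

wipefs "$img"                    # lists the detected "dos" (MBR) signature
wipefs --all "$img" >/dev/null   # removes the signature; the data area
                                 # itself is NOT overwritten, as with Erase
wipefs "$img"                    # prints nothing: no signatures remain
rm -f "$img"
```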

     

    Additional "Mover" capabilities.

     

    Since SSD pools are commonly used to store vdisk images, shfs/mover is now aware of:

    • sparse files - when a sparse file is moved from one volume to another, its sparseness is preserved
    • NoCOW attribute - when a file or directory in a btrfs volume has the NoCOW attribute set, the attribute is preserved when the file or directory is moved to another btrfs volume.
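    The sparse-file behavior is easy to see from the shell; this sketch just illustrates the concept on a scratch file (the NoCOW attribute, set with chattr +C, only applies on btrfs volumes, so it is left out here):

```shell
# A 1 GiB sparse file: large apparent size, (almost) no blocks allocated.
f=$(mktemp)
truncate -s 1G "$f"
ls -l "$f"     # apparent size: 1073741824 bytes
du -k "$f"     # allocated size: ~0 KiB

# A sparse-aware copy (which mover now performs) keeps it that way.
cp --sparse=always "$f" "$f.copy"
du -k "$f.copy"   # still ~0 KiB allocated: sparseness preserved
rm -f "$f" "$f.copy"
```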

     

    Note that btrfs subvolumes are not preserved.  A future Unraid OS release may include preservation of btrfs subvolumes.

     

    OK, how do I re-partition my SSD pools?

     

    Outlined here are two basic methods:

    1. "Mover" method - the idea is to use the Mover to copy all data from the pool to a target device in the unRAID array, then erase all devices of the pool and reformat, and finally use the Mover to copy all the data back.
    2. "Unassign/Re-assign" method - the idea here is to, one by one, remove a device from a btrfs pool, balance the pool with the reduced device count, then re-assign the device back to the pool, and balance the pool back to include the device.  This works because Unraid OS will re-partition new devices added to an existing btrfs pool.  This method is not recommended for a pool with more than 2 devices, since the first balance operation may be write-intensive, and writes are what we're trying to minimize.  Also, it can be tricky to determine whether enough free space really exists after removing a device to rebalance the pool.  Finally, this method will introduce a time window where your data is on non-redundant storage.

     

    No matter which method you choose, if you have absolutely critical data in the pool we strongly recommend making an independent backup first (you are already doing this, right?).

     

     

    Mover Method

    This procedure presumes a multi-device btrfs pool containing one or more cache-only or cache-prefer shares.

     

    1. With the array Started, stop any VMs and/or Docker applications which may be accessing the pool you wish to re-partition.  Make sure no other external I/O is targeting this pool.

     

    2. For each share on the pool, go to the Share Settings page and make some adjustments:

    • change from cache-only (or cache-prefer) to cache-yes
    • assign an array disk or disks via the Include mask to receive the data.  If you wish to preserve the NoCOW attribute (Copy-on-write set to No) on files and directories, these disks should be formatted with btrfs.  Of course, ensure there is enough free space to receive the data.

     

    3. Now go back to Main and click the Move button.  This will move the data of each share to the target array disk(s).

     

    4. Verify no data is left on the pool, Stop the array, click on the pool, and then click the Erase button.

     

    5. Start the array and the pool should appear Unformatted - go ahead and Format the pool (this is what will re-write the partition layout).

     

    6. Back to Share Settings page; for each above share:

    • change from cache-yes to cache-prefer

     

    7. On Main page click Move button.  This will move data of each share back to the pool.

     

    8. Finally, back to Share Settings page; for each share:

    • change from cache-prefer back to cache-only if desired

     

    Unassign/Re-assign Method

    1. Stop array and unassign one of the devices from your existing pool; leave device unassigned.
    2. Start array.  A balance will take place on your existing pool.  Let the balance complete.
    3. Stop array.  Re-assign the device, adding it back to your existing pool.
    4. Start array.  The added device will get re-partitioned and a balance will start moving data to the new device.  Let the balance complete.
    5. Repeat steps 1-4 for the other device in your existing pool.

     

    What's happening here is this:

    At the completion of step 2, btrfs will 'delete' the missing device from the volume and wipe the btrfs signature from it.

    At the beginning of step 4, Unraid OS will re-partition the new device being added to an existing pool.
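    Roughly, the equivalent btrfs operations that Unraid drives for you look like this.  Again a dry-run sketch that only prints the commands; /mnt/cache and /dev/sdX1 are placeholders:

```shell
POOL=/mnt/cache   # placeholder pool mount point
DEV=/dev/sdX1     # placeholder partition on the re-assigned device

# DRY_RUN=1 makes the helper print commands instead of executing them.
DRY_RUN=1
run() { if [ "$DRY_RUN" = "1" ]; then echo "$@"; else "$@"; fi; }

# Step 2: the unassigned device is dropped; data is balanced onto the
# remaining devices and btrfs wipes the signature from the removed one.
run btrfs device remove missing "$POOL"

# Step 4: after Unraid OS re-partitions the device, it is added back and
# a balance spreads data onto it again.
run btrfs device add "$DEV" "$POOL"
```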

     

    I don't care about preserving data in the pool.  In this case just Stop the array, click on the pool, and then click Erase.  Start the array and Format the pool - done.  Useful to know: when Linux creates a file system on an SSD device, it first performs a "blkdiscard" on the entire partition.  Similarly, "blkdiscard" is initiated on partition 1 of a new device added to an existing btrfs pool.

     

    What about array devices?  If you have SSD devices in the unRAID array, the only way to safely re-partition those devices is to either remove them from the array, or remove parity devices from the array.  This is because re-partitioning will invalidate parity.  Note also that the volume size will be slightly smaller.

     


     

    Version 6.9.0-beta29 2020-09-27 (vs -beta25)

    Base distro:

    • at-spi2-core: version 2.36.1
    • bash: version 5.0.018
    • bridge-utils: version 1.7
    • brotli: version 1.0.9
    • btrfs-progs: version 5.6.1
    • ca-certificates: version 20200630
    • cifs-utils: version 6.11
    • cryptsetup: version 2.3.4
    • curl: version 7.72.0 (CVE-2020-8231)
    • dbus: version 1.12.20
    • dnsmasq: version 2.82
    • docker: version 19.03.13
    • ethtool: version 5.8
    • fribidi: version 1.0.10
    • fuse3: version 3.9.3
    • git: version 2.28.0
    • glib2: version 2.66.0 build 2
    • gnutls: version 3.6.15
    • gtk+3: version 3.24.23
    • harfbuzz: version 2.7.2
    • haveged: version 1.9.13
    • htop: version 3.0.2
    • iproute2: version 5.8.0
    • iputils: version 20200821
    • jasper: version 2.0.21
    • jemalloc: version 5.2.1
    • libX11: version 1.6.12
    • libcap-ng: version 0.8
    • libevdev: version 1.9.1
    • libevent: version 2.1.12
    • libgcrypt: version 1.8.6
    • libglvnd: version 1.3.2
    • libgpg-error: version 1.39
    • libgudev: version 234
    • libidn: version 1.36
    • libpsl: version 0.21.1 build 2
    • librsvg: version 2.50.0
    • libssh: version 0.9.5
    • libvirt: version 6.6.0 (CVE-2020-14339)
    • libxkbcommon: version 1.0.1
    • libzip: version 1.7.3
    • lmdb: version 0.9.26
    • logrotate: version 3.17.0
    • lvm2: version 2.03.10
    • mc: version 4.8.25
    • mpfr: version 4.1.0
    • nano: version 5.2
    • ncurses: version 6.2_20200801
    • nginx: version 1.19.1
    • ntp: version 4.2.8p15 build 2
    • openssl-solibs: version 1.1.1h
    • openssl: version 1.1.1h
    • p11-kit: version 0.23.21
    • pango: version 1.46.2
    • php: version 7.4.10 (CVE-2020-7068)
    • qemu: version 5.1.0 (CVE-2020-10717, CVE-2020-10761)
    • rsync: version 3.2.3
    • samba: version 4.12.7 (CVE-2020-1472)
    • sqlite: version 3.33.0
    • sudo: version 1.9.3
    • sysvinit-scripts: version 2.1 build 35
    • sysvinit: version 2.97
    • ttyd: version 1.6.1
    • util-linux: version 2.36
    • wireguard-tools: version 1.0.20200827
    • xev: version 1.2.4
    • xf86-video-vesa: version 2.5.0
    • xfsprogs: version 5.8.0
    • xorg-server: version 1.20.9 build 3
    • xterm: version 360
    • xxHash: version 0.8.0

    Linux kernel:

    • version 5.8.12
    • kernel-firmware: version kernel-firmware-20200921_49c4ff5
    • oot: Realtek r8152: version 2.13.0
    • oot: Tehuti tn40xx: version 0.3.6.17.3

    Management:

    • btrfs: include 'discard=async' mount option
    • emhttpd: avoid using remount to set additional mount options
    • emhttpd: added wipefs function (webgui 'Erase' button)
    • shfs: move: support sparse files
    • shfs: move: preserve ioctl_iflags when moving between same file system types
    • smb: remove setting 'aio' options in smb.conf, use samba defaults
    • webgui: Update noVNC to v1.2.0
    • webgui: Docker: more intuitive handling of images
    • webgui: VMs: more intuitive handling of image selection
    • webgui: VMs: Fixed: rare cases vdisk defaults to Auto when it should be Manual
    • webgui: VMs: Fixed: Adding NICs or VirtFS mounts to a VM is limited
    • webgui: VM manager: new setting "Network Model"
    • webgui: Added new setting "Enable user share assignment" to cache pool
    • webgui: Dashboard: style adjustment for server icon
    • webgui: Update jGrowl to version 1.4.7
    • webgui: Fix ' appearing
    • webgui: VM Manager: add 'virtio-win-0.1.189-1' to VirtIO-ISOs list
    • webgui: Prevent bonded nics from being bound to vfio-pci too
    • webgui: better handling of multiple nics with vfio-pci
    • webgui: Suppress WG on Dashboard if no tunnels defined
    • webgui: Suppress Autofan link on Dashboard if plugin not installed
    • webgui: Detect invalid session and logout current tab
    • webgui: Added support for private docker registries with basic auth or no auth, and improvements for token based authentication
    • webgui: Fix notifications continually reappearing
    • webgui: Support links on notifications
    • webgui: Add raid1c3 and raid1c4 btrfs pool balance options.
    • webgui: For raid6 btrfs pool data profile use raid1c3 metadata profile.
    • webgui: Permit file system configuration when array Started for Unmountable volumes.
    • webgui: Fix not able to change parity check schedule if no cache pool present
    • webgui: Disallow "?" in share names
    • webgui: Add customizable timeout when stopping containers

    Edited by limetech




    User Feedback

    Recommended Comments



    But - as far as I read in the changelog - there was no change regarding boot mode.

    I've been running fine in legacy mode for months and - as far as I remember - booting in legacy mode is recommended in some cases, and in the past I needed to change from UEFI to make "something" (gaming VM?!) work.
    So that (hopefully) can't be the solution.

    Link to comment
    2 minutes ago, Maddeen said:

    But - as far as I read in the changelog - there was no change regarding boot mode.

    I've been running fine in legacy mode for months and - as far as I remember - booting in legacy mode is recommended in some cases, and in the past I needed to change from UEFI to make "something" (gaming VM?!) work.
    So that (hopefully) can't be the solution.

    Usually this problem is triggered by a Linux kernel change as happened between beta 25 and 29.

     

    The loading bzroot issue hit me way back on version 6.5.0 of unRAID.  My board had been booting fine in legacy mode for years, then a particular unRAID/Linux kernel version change made it stick on boot at loading bzroot.  Booting UEFI fixed that.  Others have experienced the same.

    Link to comment

    There have been issues for years with SMB failing; ever since I've used unRAID I could not transfer large files over the network - it would crash, and unRAID would crash half the time.  I updated to the latest beta just as a last resort before dumping my server into the ocean, and what do you know?  SMB is fixed for me on the latest beta.  So for anyone else with SMB issues, try the beta and report back; I transferred 150GB at once after installing the beta with no hiccups, at line speed.

    Link to comment
    20 minutes ago, Hoopster said:

    Usually this problem is triggered by a Linux kernel change as happened between beta 25 and 29.

     

    The loading bzroot issue hit me way back on version 6.5.0 of unRAID.  My board had been booting fine in legacy mode for years, then a particular unRAID/Linux kernel version change made it stick on boot at loading bzroot.  Booting UEFI fixed that.  Others have experienced the same.

    Ahhh, OK ... indeed, that could be a cause.

    But just so I understand - what's the difference between booting with/without the GUI with regard to the boot mode?
    Sorry for asking, but that's not clear to me ... In my opinion the GUI option should also cause the same error.. :)

    Thank you for any details.

    Link to comment
    41 minutes ago, Maddeen said:

    But just so I understand - what's the difference between booting with/without the GUI with regard to the boot mode?

    I don't know the details, but obviously the way GUI mode loads is different than regular command line mode.  The very same thing happened to me when I had the problem; GUI mode would boot, other modes would not.

    Link to comment

    Sorry if this isn’t the best place to ask but I have a Kernel question. Kernel 5.8 to my knowledge is supposed to include code for amd_energy to be monitored by hwmon. Do you know if this code is included in the unraid kernel?

     

    I got hwmon working via prom/node-exporter and Prometheus but can’t find anything related to amd_energy. Trying to determine if it’s the kernel lacking or the exporter.

    Link to comment

    Updated from Beta 25 to Beta 29. 0 issues so far.  System continues to be stable.  Four VM's (2 VNC, 2 dedicated GPU).  Maybe a dozen docker containers ranging from Tomcat, SQL Server, Letsencrypt, and a number of others.  The only issue so far impacting me is the lack of daily email. I saw that there was a hot fix for that, but I haven't applied it yet.  Wanted to make sure that things were running well first.

     

     

     

    Link to comment

    Mine is stable too, same conditions: a couple of VMs, some with GPU passthrough, some with VNC, and some twenty Docker containers running.  The only issue now is the noVNC connection not working; all other issues are gone and not a single crash so far.

     

    Edited by btagomes
    Link to comment

    Hi guys,
    after updating from Unraid 6.8.3 to 6.9-beta29 I have problems with my VMs.

    My VMs do not use a virtual disk as file, they use a physical disk. When I try to start a VM after upgrading from Unraid I get the following error:

    Unable to get devmapper targets for /dev/disk/by-id/ata-WDC_WD40EZRX-00SPEB0_WD-WCC4E3TPLV4F: No such file or directory

     

     


     


    ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

    Unraid 6.9-Beta29

    Fractal Define 7 XL • Asrock Taichi Ultimate X470 • AMD Ryzen 9 3900X • 64GB Kingston DDR4 ECC RAM

    AQUANTIA 10 Gigabit LAN • NVIDIA GTX 1660 Super 4GB

    VM & Docker: 2x 1TB Samsung 870 QVO SSD • Cache: 2x 1TB Crucial P1 1TB NVME • Parity: 1x4TB • Data: 4x3TB, 1x4TB (16TB)

    Docker Container (23 running), VMs (2 running) 24h/7.

    Edited by Thorsten
    Link to comment

    Safari can't load VNC on VMs in b29 but was working fine in b25. 

     

    Quote

    noVNC encountered an error:

     

    Script error.

     

    Link to comment
    1 hour ago, Thorsten said:

    Hi guys,
    after updating from Unraid 6.8.3 to 6.9-beta29 I have problems with my VMs.

    My VMs do not use a virtual disk as file, they use a physical disk. When I try to start a VM after upgrading from Unraid I get the following error:

    Unable to get devmapper targets for /dev/disk/by-id/ata-WDC_WD40EZRX-00SPEB0_WD-WCC4E3TPLV4F: No such file or directory

     

     


     


    ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

    Unraid 6.9-Beta29

    Fractal Define 7 XL • Asrock Taichi Ultimate X470 • AMD Ryzen 9 3900X • 64GB Kingston DDR4 ECC RAM

    AQUANTIA 10 Gigabit LAN • NVIDIA GTX 1660 Super 4GB

    VM & Docker: 2x 1TB Samsung 870 QVO SSD • Cache: 2x 1TB Crucial P1 1TB NVME • Parity: 1x4TB • Data: 4x3TB, 1x4TB (16TB)

    Docker Container (23 running), VMs (2 running) 24h/7.

    Yep, same here!  It's a kernel issue from what I know.  Three choices: go back to beta 25, use network-mapped drives, or switch them to vdisks.  My drives only had CCTV footage on them, so I just formatted and created vdisks.

    Link to comment
    On 10/3/2020 at 4:17 PM, Jclendineng said:

    There have been issues for years with SMB failing; ever since I've used unRAID I could not transfer large files over the network - it would crash, and unRAID would crash half the time.  I updated to the latest beta just as a last resort before dumping my server into the ocean, and what do you know?  SMB is fixed for me on the latest beta.  So for anyone else with SMB issues, try the beta and report back; I transferred 150GB at once after installing the beta with no hiccups, at line speed.

    Strange, I never had an issue with SMB file transfers on any version.  Is that one 150GB file or multiple files?

    Edited by turnipisum
    Link to comment

    Maybe I missed something with those Docker problems.

    When I try to get the bubuntux/nordvpn docker running, on first setup all is fine.

    If I try to change some settings (regardless of which setting of the container), it can't recreate the container.

    After that it can't be started; it is marked for deletion.

    But on deletion it gives an error too.

    I don't know if this behaviour is related to this beta.

     

    I will try to restart the docker service...

    This is only FYI.

    Link to comment
    2 hours ago, Thorsten said:

    My VMs do not use a virtual disk as file, they use a physical disk.

    23 minutes ago, turnipisum said:

    Yep, same here!  It's a kernel issue from what I know.  Three choices: go back to beta 25, use network-mapped drives, or switch them to vdisks.

     

     

    Link to comment

    Updated from 6.9.0-BETA1 to 6.9.0-BETA29, no issues with the update process and 1hr of stable uptime.

     

    I've been distracted for 6 months and haven't followed this beta series as closely as I normally would.  Other than the known issues and SSD re-partitioning discussed in the first post, are there any changes/gotchas I need to pay attention to?

    Link to comment

    Running beta29 stable on my main desktop PC with a gaming Windows VM.  The host-passthrough issue with Ryzen is fixed, but I'm still using host-model as I notice more performance (or at least that's what I see with Cinebench and AIDA64).

    Link to comment
    1 hour ago, Denisson said:

    I'm curious why host-passthrough isn't the default given the extra performance?

    What do you mean by "host-passthrough" here?

    Link to comment
    1 hour ago, Denisson said:

    Just to add to my VNC error above. This is the specific error:

     

     


    I saved a link to this post to have someone look at VNC on Safari, but it would be much better to open a bug report.

    Link to comment
    On 9/29/2020 at 11:02 AM, JorgeB said:

    df is still working correctly for me:

     

    
    Filesystem      Size  Used Avail Use% Mounted on
    /dev/sdf1       1.4T  3.6M  930G   1% /mnt/cache

     

    Actually, it's not: 3.6M + 930G does not equal 1.4T.

     

    Up to and including beta29, the code uses 'stat -f', i.e., statvfs(), to fetch Total, Free, and Available.  In all file systems other than btrfs, Free == Available.  What's used by the webGUI is only Total and Free; it gets Used = Total - Free (like any reasonable person would think it should work).

     

    Starting with next release, emhttpd exports

    fsSize - same as 'size' reported by 'df' and 'Total' reported by 'stat -f' (f_blocks)

    fsFree - same as 'avail' reported by 'df' and 'Available' reported by 'stat -f' (f_bavail)

    fsUsed - same as 'used' reported by 'df' and 'Total'-'Free' reported by 'stat -f' (f_blocks - f_bfree) - note this is how 'df' calculates 'used'.

     

    Seems to work except there are cases where "Used" + "Free" displayed on Main do not equal "Size".
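    The distinction can be seen with stat(1) directly; on btrfs the free and available counts differ, which is where the apparent mismatch comes from.  A sketch (using / as the mount point; substitute a btrfs mount to see free != available):

```shell
fs=/
# f_blocks (total), f_bfree (free), f_bavail (available to non-root users),
# all in units of the fundamental block size.
set -- $(stat -f -c '%b %f %a' "$fs")
total=$1; free=$2; avail=$3

# 'df' computes used as total - free; on btrfs, free != avail, so
# used + avail need not add up to total.
echo "size=$total used=$((total - free)) avail=$avail"
```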

    Link to comment
    3 hours ago, limetech said:

    I saved a link to this post to have someone look at VNC on Safari, but it would be much better to open a bug report.

    Can I just add: VNC on Safari (from Unraid specifically) never worked for me until macOS Big Sur (currently in beta).  I used to have to open a separate Firefox browser and copy the link over from Safari, and it would work.

     

    I tested it on multiple computers, installs, and setups and could never get it to work before.

     

    Big Sur is still in beta and its browser does seem to have a few minor issues here and there, but clearly there's a new engine or something behind it - I've never seen a new version of macOS change the browser quite so much before.

     

    But VNC works on Unraid now, so I'm pretty excited about that.  Hope it helps someone.

    Link to comment
    On 9/30/2020 at 11:08 PM, limetech said:

    We don't use raw disk passthrough - we use vdisks for everything.  Certain functions like this are being tested via these beta releases.  Stuff happens, example: the mpt3sas driver issue introduced into Linux 5.8 kernel which has been "stable" for 12 patch releases now.

    So what does this mean?  Is it desired behaviour, or a bug that will get a fix at some point?  I upgraded from beta 25 to 29 and am now facing the problem with two passthrough disks.  Can you recommend a solution to use the disks again?  Do you suggest using a vdisk instead, and if yes, can I convert to a vdisk?

     

    Thanks for your kind help!

    Link to comment
    35 minutes ago, glockmane said:

    So what does this mean?  Is it desired behaviour, or a bug that will get a fix at some point?  I upgraded from beta 25 to 29 and am now facing the problem with two passthrough disks.  Can you recommend a solution to use the disks again?  Do you suggest using a vdisk instead, and if yes, can I convert to a vdisk?

     

    Thanks for your kind help!

    We reverted libvirt back to v6.5.0 for the next release, so that should fix the issue.

    Link to comment


