• 6.8.0 RC1+RC4 corrupted QCOW2 vdisks on XFS! Warning "unraid qcow2_free_clusters failed: Invalid argument", probably due to compressed QCOW2 files


    bastl
    • Solved Minor

    Edit: retested with RC6

     

    Installing VMs on XFS array drives works fine, and the same goes for the BTRFS cache drive. No corruption found on the qcow2 vdisks so far with the same tests as before. Already existing qcow2 images with compression, which got corrupted before in RC1-4, have shown no issues so far. I will keep an eye on it over the next couple of days. Compressing an uncompressed qcow2 also doesn't produce corrupted vdisks. Looks like the patches in qemu 4.1.1 fixed my issues.
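
    For anyone wanting to repeat the compression test, it can be done roughly like this (a sketch with example paths, not necessarily the exact commands I used):

    # write a compressed copy of an existing image (example paths)
    qemu-img convert -O qcow2 -c /mnt/user/VMs/Win7_Outlook/WIN7_OUTLOOK.qcow2 /mnt/user/VMs/Win7_Outlook/WIN7_OUTLOOK.compressed.qcow2
    # check the copy before swapping it in for the original
    qemu-img check /mnt/user/VMs/Win7_Outlook/WIN7_OUTLOOK.compressed.qcow2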

     

    ------------------------------------------------------------------------------

     

    EDIT: Edited the title for a better understanding of the issue. The main issue is that qcow2 vdisks hosted on xfs-formatted drives won't let you install the guest OS without issues. The installation will fail or lead to a corrupted install. Existing images can also be affected. There are some reports about ext4 being affected as well, and the warnings I got using compressed qcow2 files on btrfs might be related to this. The affected qemu version is 4.1. Using RAW images should be fine.
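
    If you're not sure which file system your vdisks actually sit on, checking the underlying mounts should tell you (a sketch; /mnt/user is the FUSE layer, so look at the real disk/cache mount points and adjust to your setup):

    # show the file system type of the mounts that can host vdisks (example mount points)
    df -T /mnt/disk1 /mnt/cache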

     

     

    First of all, I did the update from 6.7.1 to 6.8.0RC1 on Saturday. Everything went fine, I thought, except for some qemu arguments preventing 2 VMs with GPU passthrough from booting up (root-port fix). Nothing else changed and there were no errors in the server logs. As on every weekday morning, an extra Win7 VM started up automatically. Fine so far. On Tuesday, after a software update, I had to restart the VM and it didn't come back online. The VM showed some weird error I had never seen before, and after some searching on the web it was clear the file system had somehow become corrupted. I restored the vdisk from a backup and it booted back up. This time I didn't install any updates or use it as normal for office stuff. It idled for a couple of minutes and I noticed the following errors in the VM logs.

     

     "unraid qcow2_free_clusters failed: Invalid argument"

    [Screenshot: VM log filled with repeated "qcow2_free_clusters failed: Invalid argument" messages]
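
    If you want to watch for these messages yourself, the per-VM qemu log should be the place to look (a sketch; the log file is named after the VM):

    # follow the qemu log for a VM and filter for the cluster errors (example VM name)
    tail -f /var/log/libvirt/qemu/Win7_Outlook.log | grep --line-buffered qcow2_free_clusters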

     

    Restarting this time worked, even if it felt a bit slower than usual, but the errors shown quickly counted up. Inside the VM I didn't notice any performance degradation or errors, so it looked like a false positive. I rebooted the VM again and it wouldn't start up. I took the vdisk, attached it to another VM and fired up chkdsk, and it found hundreds of file system errors. It tried to recover them until either chkdsk finished with unrecoverable errors or it froze completely.

     

    Time to check the other VMs I'm using with a qcow2 vdisk. And what a surprise, a Linux Mint VM also showed this error after running for a couple of minutes. I played around a bit with the xml and removed a couple of tweaks, "discard='unmap'" and "numatune memory mode='strict' nodeset='0'", and tried again. Same error. Everything else in the xml is at default and has been running for almost 2 years now. I tried reverting back to different vdisks going back to September. All of them showed some errors after running for a couple of minutes. The Win7 VM once reported an unreadable file and crashed; on the next try everything was fine on first boot. Some reboots were fine, some froze, some reported filesystem corruption. I tried it with different types of VMs, OVMF, SeaBIOS, Q35-3.0, i440fx-3.0, doesn't matter, always the same issue. The only thing that is the same on all VMs is that they use qcow2 as the disk image format!?

     

    All the VMs are hosted on a single BTRFS NVME cache drive. I even tried the vdisk for the Win7 VM sitting on the array. Same issue, after a couple of minutes the errors popped up. I then tried various different backups going back to March. Only VMs with a directly passed-through ssd/hdd are not affected by this.

     

    Is there anything I can try to prevent the vdisk corruption?

     

    Below are the 2 xml files and the diagnostics from the server, which has been running since Saturday.

     

    Win7 i440fx VM

    <?xml version='1.0' encoding='UTF-8'?>
    <domain type='kvm'>
      <name>Win7_Outlook</name>
      <uuid>0b67611b-12b3-d0fd-c02b-055394dd34dc</uuid>
      <metadata>
        <vmtemplate xmlns="unraid" name="Windows 7" icon="windows7.png" os="windows7"/>
      </metadata>
      <memory unit='KiB'>4194304</memory>
      <currentMemory unit='KiB'>4194304</currentMemory>
      <memoryBacking>
        <nosharepages/>
      </memoryBacking>
      <vcpu placement='static'>4</vcpu>
      <cputune>
        <vcpupin vcpu='0' cpuset='4'/>
        <vcpupin vcpu='1' cpuset='20'/>
        <vcpupin vcpu='2' cpuset='5'/>
        <vcpupin vcpu='3' cpuset='21'/>
      </cputune>
      <numatune>
        <memory mode='strict' nodeset='0'/>
      </numatune>
      <os>
        <type arch='x86_64' machine='pc-i440fx-3.0'>hvm</type>
      </os>
      <features>
        <acpi/>
        <apic/>
      </features>
      <cpu mode='host-passthrough' check='none'>
        <topology sockets='1' cores='4' threads='1'/>
      </cpu>
      <clock offset='localtime'>
        <timer name='rtc' tickpolicy='catchup'/>
        <timer name='pit' tickpolicy='delay'/>
        <timer name='hpet' present='no'/>
      </clock>
      <on_poweroff>destroy</on_poweroff>
      <on_reboot>restart</on_reboot>
      <on_crash>restart</on_crash>
      <devices>
        <emulator>/usr/local/sbin/qemu</emulator>
        <disk type='file' device='disk'>
          <driver name='qemu' type='qcow2' cache='writeback' discard='unmap'/>
          <source file='/mnt/user/VMs/Win7_Outlook/WIN7_OUTLOOK.qcow2'/>
          <target dev='hdc' bus='scsi'/>
          <boot order='1'/>
          <address type='drive' controller='0' bus='0' target='0' unit='2'/>
        </disk>
        <disk type='file' device='cdrom'>
          <driver name='qemu' type='raw'/>
          <source file='/mnt/user/isos/Acronis/AcronisMedia.117iso.iso'/>
          <target dev='hda' bus='ide'/>
          <readonly/>
          <boot order='2'/>
          <address type='drive' controller='0' bus='0' target='0' unit='0'/>
        </disk>
        <controller type='usb' index='0' model='ich9-ehci1'>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
        </controller>
        <controller type='usb' index='0' model='ich9-uhci1'>
          <master startport='0'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
        </controller>
        <controller type='usb' index='0' model='ich9-uhci2'>
          <master startport='2'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
        </controller>
        <controller type='usb' index='0' model='ich9-uhci3'>
          <master startport='4'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
        </controller>
        <controller type='scsi' index='0' model='virtio-scsi'>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
        </controller>
        <controller type='pci' index='0' model='pci-root'/>
        <controller type='ide' index='0'>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
        </controller>
        <controller type='virtio-serial' index='0'>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
        </controller>
        <interface type='bridge'>
          <mac address='52:54:00:64:a8:e2'/>
          <source bridge='br0'/>
          <model type='virtio'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
        </interface>
        <serial type='pty'>
          <target type='isa-serial' port='0'>
            <model name='isa-serial'/>
          </target>
        </serial>
        <console type='pty'>
          <target type='serial' port='0'/>
        </console>
        <channel type='unix'>
          <target type='virtio' name='org.qemu.guest_agent.0'/>
          <address type='virtio-serial' controller='0' bus='0' port='1'/>
        </channel>
        <input type='tablet' bus='usb'>
          <address type='usb' bus='0' port='1'/>
        </input>
        <input type='mouse' bus='ps2'/>
        <input type='keyboard' bus='ps2'/>
        <graphics type='vnc' port='-1' autoport='yes' websocket='-1' listen='0.0.0.0' keymap='de'>
          <listen type='address' address='0.0.0.0'/>
        </graphics>
        <video>
          <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1' primary='yes'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
        </video>
        <memballoon model='virtio'>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
        </memballoon>
      </devices>
    </domain>

    Mint Q35 VM

    <?xml version='1.0' encoding='UTF-8'?>
    <domain type='kvm' id='7'>
      <name>Mint</name>
      <uuid>065a6081-e954-0913-370d-b6001262fb61</uuid>
      <metadata>
        <vmtemplate xmlns="unraid" name="Debian" icon="linux-mint.png" os="debian"/>
      </metadata>
      <memory unit='KiB'>8388608</memory>
      <currentMemory unit='KiB'>8388608</currentMemory>
      <memoryBacking>
        <nosharepages/>
      </memoryBacking>
      <vcpu placement='static'>4</vcpu>
      <cputune>
        <vcpupin vcpu='0' cpuset='6'/>
        <vcpupin vcpu='1' cpuset='22'/>
        <vcpupin vcpu='2' cpuset='7'/>
        <vcpupin vcpu='3' cpuset='23'/>
      </cputune>
      <numatune>
        <memory mode='strict' nodeset='0'/>
      </numatune>
      <resource>
        <partition>/machine</partition>
      </resource>
      <os>
        <type arch='x86_64' machine='pc-q35-3.0'>hvm</type>
        <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
        <nvram>/etc/libvirt/qemu/nvram/065a6081-e954-0913-370d-b6001262fb61_VARS-pure-efi.fd</nvram>
      </os>
      <features>
        <acpi/>
        <apic/>
      </features>
      <cpu mode='host-passthrough' check='none'>
        <topology sockets='1' cores='4' threads='1'/>
      </cpu>
      <clock offset='utc'>
        <timer name='rtc' tickpolicy='catchup'/>
        <timer name='pit' tickpolicy='delay'/>
        <timer name='hpet' present='no'/>
      </clock>
      <on_poweroff>destroy</on_poweroff>
      <on_reboot>restart</on_reboot>
      <on_crash>restart</on_crash>
      <devices>
        <emulator>/usr/local/sbin/qemu</emulator>
        <disk type='file' device='disk'>
          <driver name='qemu' type='qcow2' cache='writeback' discard='unmap'/>
          <source file='/mnt/user/VMs/Mint/vdisk1.img'/>
          <backingStore/>
          <target dev='hdc' bus='scsi'/>
          <boot order='1'/>
          <alias name='scsi0-0-0-2'/>
          <address type='drive' controller='0' bus='0' target='0' unit='2'/>
        </disk>
        <controller type='usb' index='0' model='nec-xhci' ports='15'>
          <alias name='usb'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
        </controller>
        <controller type='scsi' index='0' model='virtio-scsi'>
          <alias name='scsi0'/>
          <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
        </controller>
        <controller type='pci' index='0' model='pcie-root'>
          <alias name='pcie.0'/>
        </controller>
        <controller type='pci' index='1' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='1' port='0x8'/>
          <alias name='pci.1'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0' multifunction='on'/>
        </controller>
        <controller type='pci' index='2' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='2' port='0x9'/>
          <alias name='pci.2'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
        </controller>
        <controller type='pci' index='3' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='3' port='0xa'/>
          <alias name='pci.3'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
        </controller>
        <controller type='pci' index='4' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='4' port='0xb'/>
          <alias name='pci.4'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x3'/>
        </controller>
        <controller type='pci' index='5' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='5' port='0xc'/>
          <alias name='pci.5'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x4'/>
        </controller>
        <controller type='pci' index='6' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='6' port='0xd'/>
          <alias name='pci.6'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x5'/>
        </controller>
        <controller type='pci' index='7' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='7' port='0xe'/>
          <alias name='pci.7'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x6'/>
        </controller>
        <controller type='pci' index='8' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='8' port='0xf'/>
          <alias name='pci.8'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x7'/>
        </controller>
        <controller type='pci' index='9' model='pcie-to-pci-bridge'>
          <model name='pcie-pci-bridge'/>
          <alias name='pci.9'/>
          <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
        </controller>
        <controller type='virtio-serial' index='0'>
          <alias name='virtio-serial0'/>
          <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
        </controller>
        <controller type='sata' index='0'>
          <alias name='ide'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
        </controller>
        <interface type='bridge'>
          <mac address='52:54:00:fd:86:8a'/>
          <source bridge='br0'/>
          <target dev='vnet1'/>
          <model type='virtio'/>
          <alias name='net0'/>
          <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
        </interface>
        <serial type='pty'>
          <source path='/dev/pts/1'/>
          <target type='isa-serial' port='0'>
            <model name='isa-serial'/>
          </target>
          <alias name='serial0'/>
        </serial>
        <console type='pty' tty='/dev/pts/1'>
          <source path='/dev/pts/1'/>
          <target type='serial' port='0'/>
          <alias name='serial0'/>
        </console>
        <channel type='unix'>
          <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/domain-7-Mint/org.qemu.guest_agent.0'/>
          <target type='virtio' name='org.qemu.guest_agent.0' state='disconnected'/>
          <alias name='channel0'/>
          <address type='virtio-serial' controller='0' bus='0' port='1'/>
        </channel>
        <input type='tablet' bus='usb'>
          <alias name='input0'/>
          <address type='usb' bus='0' port='3'/>
        </input>
        <input type='mouse' bus='ps2'>
          <alias name='input1'/>
        </input>
        <input type='keyboard' bus='ps2'>
          <alias name='input2'/>
        </input>
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
          </source>
          <alias name='hostdev0'/>
          <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
        </hostdev>
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x08' slot='0x00' function='0x1'/>
          </source>
          <alias name='hostdev1'/>
          <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
        </hostdev>
        <hostdev mode='subsystem' type='usb' managed='no'>
          <source>
            <vendor id='0x04fc'/>
            <product id='0x0003'/>
            <address bus='3' device='4'/>
          </source>
          <alias name='hostdev2'/>
          <address type='usb' bus='0' port='1'/>
        </hostdev>
        <hostdev mode='subsystem' type='usb' managed='no'>
          <source>
            <vendor id='0x0a12'/>
            <product id='0x0001'/>
            <address bus='3' device='3'/>
          </source>
          <alias name='hostdev3'/>
          <address type='usb' bus='0' port='2'/>
        </hostdev>
        <memballoon model='none'/>
      </devices>
      <seclabel type='dynamic' model='dac' relabel='yes'>
        <label>+0:+100</label>
        <imagelabel>+0:+100</imagelabel>
      </seclabel>
    </domain>

    unraid-diagnostics-20191016-1208.zip




    User Feedback

    Recommended Comments



    2 minutes ago, limetech said:

    We can downgrade qemu from 4.1.x back to 4.0.x - think that will solve it?

    I'm willing to test that, even on vacation checking out wedding venues, by remoting in via Splashtop (if it's released after tomorrow). :)

    Link to comment

    I used the qcow2 option straight from the Unraid dropdown box, and upon upgrading to rc4 both my Linux VMs corrupted within minutes of each other.

     

    I then reinstalled Ubuntu twice after that and on both occasions the installation corrupted. On one of the occasions I wasn't even able to boot the VM once! The other one started, but then hung, and rebooting it was a no-go.

     

    I downgraded back to 6.7.2 and reinstalled Ubuntu, and it's been rock solid with the same configuration parameters.

     

    I'm now very nervous about installing 6.8 again. I don't think I will be trying another RC or the final version until there are enough reports out there showing this isn't something that is affecting lots of other people.

     

    With regards to my system, nothing exotic: it's a Ryzen processor, 64 GB RAM, a Seagate SSD cache, and a 16-port LSI card that my spinning drives are connected to. My VMs are stored on the spinning disks and not on the cache SSDs.

    Link to comment
    2 hours ago, limetech said:

    Can't hold back 6.8 release because of this.  Compressed qcow2 has never been an option in our VM manager.

    I know compressed qcow2 isn't an option via the gui, but the issue also happens without compression with everything at default: the install of an OS will fail, or existing images get corrupted when booting them from xfs storage. That can and will have an impact on users' data integrity. I'm worried about that situation.

     

    The new 1TB drive is up and running fine so far as the new cache drive. Tomorrow I will test how the smaller drive handles VMs as a UD (Unassigned Devices) drive formatted xfs. I've only tested qcow2 vdisks on array drives so far, because that's the only xfs storage I have to play with.

    Link to comment
    1 hour ago, Fizzyade said:

    I used the qcow2 option straight from the Unraid dropdown box, and upon upgrading to rc4 both my Linux VMs corrupted within minutes of each other.

    Please state the file system that hosted the qcow2 drives.

    Link to comment

    From all the bug reports for qemu 4.1 I have read, most reports are about qcow2 on xfs getting corrupted. There is no fix for this for now; the only solution is to use RAW images or revert back to Unraid 6.7.2. There are also reports from people having issues on ext4 file systems. I hope the issue of not being able to install a guest OS onto a qcow2 and the issues I've seen with existing qcow2 files are related and will both be fixed in 4.1.1.
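
    Converting an affected vdisk to RAW can be done along these lines (a sketch with example paths; shut the VM down first and keep a backup):

    # convert the qcow2 vdisk to a raw image (example paths)
    qemu-img convert -p -f qcow2 -O raw /mnt/user/VMs/Mint/vdisk1.img /mnt/user/VMs/Mint/vdisk1.raw.img
    # then edit the VM xml: set type='raw' in the <driver> line and point
    # <source file=.../> at the new image, e.g.
    #   <driver name='qemu' type='raw' cache='writeback'/>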

     

    Until that is released, I hope @limetech will downgrade to qemu 4.0 in the next RC builds, which isn't affected by these bugs.

    • Thanks 1
    Link to comment

    I run qcow2 (uncompressed) on xfs with 6.8.0-rc4 and don't have this issue. Tested with both Mac and Windows installations.

     

    Wonder what the diff is between my VMs and the ones that failed in here.

    Link to comment

    @testdasi Single XFS disk, or on an array disk?

     

    Most of the time the errors will appear during the install, sometimes only at the first boot, and sometimes after a couple of minutes of uptime. It's hard to tell if the vdisk is corrupted. With "qemu-img check vdisk.img" it sometimes shows errors, sometimes not. But in all my cases I noticed it inside the VMs: random crashes, programs not starting or showing weird errors.
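
    For anyone following along, this is the kind of check I mean; run it while the VM is shut down (example path):

    # offline consistency check of the qcow2 metadata (example path)
    qemu-img check /mnt/user/VMs/Mint/vdisk1.img
    # a clean image reports "No errors were found", while leaked or
    # corrupted clusters show up here even if the guest still boots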

    Link to comment
    22 minutes ago, bastl said:

    @testdasi Single XFS disk, or on an array disk?

     

    Most of the time the errors will appear during the install, sometimes only at the first boot, and sometimes after a couple of minutes of uptime. It's hard to tell if the vdisk is corrupted. With "qemu-img check vdisk.img" it sometimes shows errors, sometimes not. But in all my cases I noticed it inside the VMs: random crashes, programs not starting or showing weird errors.

    Single xfs disk (my cache disk to be exact).

     

    The only thing I would imagine to be unusual about my config is that I point my images to /mnt/cache (instead of the default /mnt/user). I can't remember anything else.

     

    I even tried a feature update of an out-of-date Windows VM to 1903 (which used 26 of its 32GB vdisk space, with lots of IO during installation) and still no corruption.

    Link to comment

    @testdasi Do you use some extra commands to generate your initial qcow2 vdisk, or do you create it directly via the ui? From all my testing and reading of the bug reports, in some situations these errors won't show up, for example on small vdisks or on fully preallocated disks. Or, in my case, if the cache option for the vdisk is set to 'none' in the xml, the error also won't happen. The default is cache='writeback'.
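
    As a sketch of those two workarounds (example path and size; they only avoided the errors in my tests, I can't promise they make the bug impossible):

    # create a fully preallocated qcow2 instead of the default sparse one (example path/size)
    qemu-img create -f qcow2 -o preallocation=full /mnt/user/VMs/test/vdisk1.qcow2 40G
    # and/or change the disk's cache mode in the VM xml away from the default, e.g.
    #   <driver name='qemu' type='qcow2' cache='none' discard='unmap'/>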

     

    I tried a couple of different ISOs and OSes. It always prevents me from installing the guest OS if I use a vdisk on my array (xfs). No matter which OS I try, which template I use, or what machine type or RAM I set, it's always the same: the install aborts, or there are boot errors later.

     

    If your array is also xfs formatted, please try to install a small Linux distro on a 20G qcow2 vdisk, for example, with everything at default in Unraid.
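
    A default test vdisk for that can be created like this (a sketch; point the path at one of your xfs array disks):

    # plain sparse 20G qcow2 on an array (xfs) disk (example path)
    qemu-img create -f qcow2 /mnt/disk1/VMs/test/vdisk1.qcow2 20G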

    Link to comment

    I have a long-existing Windows 10 VM with three qcow2 drives on a Samsung 960 EVO NVMe drive formatted with XFS. It had no issue running for several days on RC4, and the qcow2 files do not seem to be corrupted after reverting back to the stable release.

     

    I never tried creating a VM on RC4 with a small vdisk, never smaller than 120G, but the OS install never completed successfully while the qcow2 file was on XFS. I must've tried a couple dozen times using Windows 10 & Ubuntu.

    Link to comment

    @jbartlett Try it on a slower drive if you have the time for it, maybe the array itself. It looks like it is an IO-related issue. Quickly set up a default Linux VM, click through the installer and see what happens. 😉

    Link to comment
    18 minutes ago, bastl said:

    @jbartlett Try it on a slower drive if you have the time for it, maybe the array itself. It looks like it is an IO-related issue. Quickly set up a default Linux VM, click through the installer and see what happens. 😉

    I think I did try that once on an array XFS spinner; I'll do it again to make sure.

     

    I just (apparently) successfully installed an Ubuntu server on a 2 GB (minimum spec) qcow2, and while the install did not display any errors, booting it showed a "no such file or directory" when checking the root file system at boot, and then it segfaulted/hung.

    Edited by jbartlett
    Link to comment

    What I understand is that from 4.0 to 4.1 they added a couple of commits to increase the performance of qcow2 and change how the block data and meta information are passed through to the underlying file system. It looks like on devices with low IOPS some writes are getting dropped. Yesterday I had an install on an nvme hang once. At the same time I had a script running in the background compressing and backing up all my vdisks from that nvme. Lots of IO in the background, and tada, the error occurs. I wasn't able to test more today, but all tests yesterday on XFS array hdds showed the same errors: guest installs aborted, unbootable, or corrupted. Mint, PopOS, Fedora, Windows, none were usable afterwards.
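
    A rough way to put an image under that kind of write pressure and then check it afterwards (just a sketch of the idea, not a proven reproducer, and it doesn't go through a guest like a real install does; example path):

    # create a test qcow2 on a slow xfs disk, hammer it with writes, then check it (example path)
    IMG=/mnt/disk1/VMs/test/stress.qcow2
    qemu-img create -f qcow2 "$IMG" 20G
    for i in $(seq 1 50); do
        qemu-io -c "write -P 0xaa $((i * 64))M 64M" "$IMG"
    done
    qemu-img check "$IMG"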

    Link to comment
    1 hour ago, bastl said:

    @jbartlett Try it on a slower drive if you have the time for, maybe the array itself. It looks like it is an IO related issue. Quickly setup a default Linux vm and click trough the installer and see what happens. 😉

    I don't have an RC4 system running a parity drive, so I dropped in a decade-old 300GB drive that read 62MB/sec at the start and tried that with Ubuntu desktop on a 20GB (min spec) qcow2. The installer crashed.

    Edited by jbartlett
    Link to comment
    3 hours ago, limetech said:

    There is some movement on this issue:

    https://bugs.launchpad.net/qemu/+bug/1847793/comments/11

     

    Thank god I'm not going crazy.

     

    What is the plan? Are you going to roll back from 4.1 in the RC releases, or include the xfs patch?
     

    Right now I'm so glad I didn't boot my Windows VMs. Although everything critical is in a git repo, it takes quite a while to reinstall every piece of software that I use for development.

    Link to comment
    9 minutes ago, limetech said:

    We're going to roll back qemu to version 4.0.1

    Perfect, I think this is the right solution.
     

    Patiently waiting for the next RC now!

    Link to comment
    1 hour ago, limetech said:

    We're going to roll back qemu to version 4.0.1

    Not that this is going to affect me, but it does bring something up. VM machine type q35-4.1 will not be available under 4.0.1, so any VM created as type 4.1 will automatically fail to launch. Would it be possible to add in a machine type of plain q35 (machine type q35 is an alias for 4.1 under qemu 4.1, 4.0 under v4.0, etc.), which would then select the latest version available? (Or does that bring about complications if a VM was created under an earlier version and then the machine type gets upgraded with the next unRaid version?)

    Edited by Squid
    Link to comment
    5 hours ago, Squid said:

    Not that this is going to affect me, but it does bring something up. VM machine type q35-4.1 will not be available under 4.0.1, so any VM created as type 4.1 will automatically fail to launch. Would it be possible to add in a machine type of plain q35 (machine type q35 is an alias for 4.1 under qemu 4.1, 4.0 under v4.0, etc.), which would then select the latest version available? (Or does that bring about complications if a VM was created under an earlier version and then the machine type gets upgraded with the next unRaid version?)

    That might be a possibility for a post-6.8 release; however, we would be concerned that if the machine type changed, a Win VM might think there was a h/w change that perhaps necessitates reactivation? Not sure.

     

    In the next release the -4.1 type goes away and -4.0.1 will appear selected in the dropdown, but it will be necessary to click Apply, unfortunately. I can't imagine there are a whole lot of people who have upgraded their VMs to the -4.1 type.
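
    For anyone unsure which type their VMs use: the emulator should be able to list the machine types it supports, and the type in use is the machine= value in the VM's xml (a sketch; emulator path taken from the xml above):

    # list the q35 machine types the installed qemu build provides
    /usr/local/sbin/qemu -machine help | grep q35
    # compare against the machine= value in the VM xml, e.g. machine='pc-q35-3.0'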

    Link to comment
    9 hours ago, limetech said:

    we would be concerned that if the machine type changed, a Win VM might think there was a h/w change that perhaps necessitates reactivation? Not sure.

    Then don't worry about it.

    Link to comment




