
Configuration Question - Cache Vs Unassigned


TiHKAL


I was using my Win10 gaming VM for 4 hours last night without issue. I downloaded a few larger files to the VM's C:\ drive, which is stored on the cache SSD. I also have a secondary, platter-based drive (Z:\) in the VM that is part of my disk array. (Both are dedicated disks for the VM, not mapped drives.)

 

I copied a large file from C:\ to Z:\ and it worked fine.

I was copying a larger file from C:\ to Z:\ when the *virtual machine locked up and apparently turned off.

 

I am not completely sure how unRAID uses caching, but I am wondering if I created some kind of issue where the file was being copied from cache and cached again during the copy, so it divided by zero (joking) and killed the VM. I have since powered down, so I don't have the log for this one. I am mainly curious whether copying the way I did is a known thing to avoid. I read the FAQs and did not see this answered.

 

Edit: find the *



If the Z:\ drive is part of the array and the VM's C:\ drive (presumably your OS drive) is on cache, how is this related to Unassigned?

 

Your VM is on a vdisk file on cache, so your scenario of copying from cache while being cached shouldn't be a problem; they would be two different files as far as unRAID is concerned.

 

I have tried copying files of up to 8GB each around in various permutations between cache, vdisk, array and unassigned, and have never had anything crash, let alone turn itself off automatically (and I'm on the 6.2.0 beta, which should be the buggier one).

 

In your case, I would suggest having a look at the PSU. A machine locking up is nothing strange.

A machine turning itself off suddenly is an entirely different story.


It's not related to Unassigned. I was just wondering if I should be using my SSD as an unassigned device instead, presuming the current setup was causing the issue.

 

Thanks for the info.

 

Edit:

 

The VM crashed - host was fine.

 

Edit Edit:

I am also on the beta, if you read my sig.


I got home from work today and tried it again.

 

Same result, but this time I have the error log.

Apr 27 21:38:49 fapper kernel: [ 1689] 32 1689 5354 1469 15 3 0 0 rpc.statd
Apr 27 21:38:49 fapper kernel: [ 1699] 0 1699 1621 393 7 3 0 0 inetd
Apr 27 21:38:49 fapper kernel: [ 1708] 0 1708 6122 103 16 3 0 -1000 sshd
Apr 27 21:38:49 fapper kernel: [ 1722] 0 1722 24541 1181 20 3 0 0 ntpd
Apr 27 21:38:49 fapper kernel: [ 1729] 0 1729 1097 27 7 3 0 0 acpid
Apr 27 21:38:49 fapper kernel: [ 1738] 0 1738 1622 377 9 3 0 0 crond
Apr 27 21:38:49 fapper kernel: [ 1740] 0 1740 1620 26 7 3 0 0 atd
Apr 27 21:38:49 fapper kernel: [ 1746] 0 1746 55025 1416 103 3 0 0 nmbd
Apr 27 21:38:49 fapper kernel: [ 1748] 0 1748 74291 3798 143 3 0 0 smbd
Apr 27 21:38:49 fapper kernel: [ 1750] 0 1750 72702 1144 136 3 0 0 smbd-notifyd
Apr 27 21:38:49 fapper kernel: [ 1751] 0 1751 72702 1049 135 3 0 0 cleanupd
Apr 27 21:38:49 fapper kernel: [ 1755] 0 1755 67816 1720 130 4 0 0 winbindd
Apr 27 21:38:49 fapper kernel: [ 1757] 0 1757 67822 1634 131 4 0 0 winbindd
Apr 27 21:38:49 fapper kernel: [ 1764] 0 1764 2358 581 9 3 0 0 cpuload
Apr 27 21:38:49 fapper kernel: [ 1776] 0 1776 22460 940 17 3 0 0 emhttp
Apr 27 21:38:49 fapper kernel: [ 1777] 0 1777 1625 405 8 3 0 0 agetty
Apr 27 21:38:49 fapper kernel: [ 1778] 0 1778 1625 406 8 3 0 0 agetty
Apr 27 21:38:49 fapper kernel: [ 1779] 0 1779 1625 413 8 3 0 0 agetty
Apr 27 21:38:49 fapper kernel: [ 1780] 0 1780 1625 405 8 3 0 0 agetty
Apr 27 21:38:49 fapper kernel: [ 1781] 0 1781 1625 416 8 3 0 0 agetty
Apr 27 21:38:49 fapper kernel: [ 1782] 0 1782 1625 398 8 3 0 0 agetty
Apr 27 21:38:49 fapper kernel: [ 1926] 61 1926 8592 569 22 3 0 0 avahi-daemon
Apr 27 21:38:49 fapper kernel: [ 1927] 61 1927 8559 63 21 3 0 0 avahi-daemon
Apr 27 21:38:49 fapper kernel: [ 1935] 0 1935 3187 27 11 3 0 0 avahi-dnsconfd
Apr 27 21:38:49 fapper kernel: [ 2176] 0 2176 38313 114 16 3 0 0 shfs
Apr 27 21:38:49 fapper kernel: [ 2186] 0 2186 55244 115 20 3 0 0 shfs
Apr 27 21:38:49 fapper kernel: [ 2246] 0 2246 120057 8997 52 6 0 0 docker
Apr 27 21:38:49 fapper kernel: [ 2433] 0 2433 202092 3153 85 4 0 0 libvirtd
Apr 27 21:38:49 fapper kernel: [ 2563] 99 2563 4376 507 14 3 0 0 dnsmasq
Apr 27 21:38:49 fapper kernel: [ 2564] 0 2564 4343 50 13 3 0 0 dnsmasq
Apr 27 21:38:49 fapper kernel: [ 3737] 0 3289 3575644 3475753 7027 17 0 0 qemu-system-x86
Apr 27 21:38:49 fapper kernel: [ 3975] 0 3975 1091 167 6 3 0 0 sleep
Apr 27 21:38:49 fapper kernel: [ 3976] 0 3976 29402 2112 43 3 0 0 php
Apr 27 21:38:49 fapper kernel: Out of memory: Kill process 3737 (qemu-system-x86) score 834 or sacrifice child
Apr 27 21:38:49 fapper kernel: Killed process 3737 (qemu-system-x86) total-vm:14302576kB, anon-rss:13880536kB, file-rss:22476kB
Apr 27 21:39:08 fapper kernel: br0: port 2(vnet0) entered disabled state
Apr 27 21:39:08 fapper kernel: device vnet0 left promiscuous mode
Apr 27 21:39:08 fapper kernel: br0: port 2(vnet0) entered disabled state
Apr 27 21:39:08 fapper kernel: vgaarb: device changed decodes: PCI:0000:01:00.0,olddecodes=io+mem,decodes=io+mem:owns=none

 

Out of memory? I have 12.5GB of RAM dedicated to the VM.

 

Edit: I have over 300GB free on the cache drive and 900GB free on the array.

 

Edit Edit:

VM XML Config:

<domain type='kvm'>
  <name>Windows 10</name>
  <uuid>7c861ab6-edc9-1cfe-9c22-f84d3fd0d8e5</uuid>
  <metadata>
    <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/>
  </metadata>
  <memory unit='KiB'>13107200</memory>
  <currentMemory unit='KiB'>13107200</currentMemory>
  <memoryBacking>
    <nosharepages/>
    <locked/>
  </memoryBacking>
  <vcpu placement='static'>8</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='0'/>
    <vcpupin vcpu='1' cpuset='1'/>
    <vcpupin vcpu='2' cpuset='2'/>
    <vcpupin vcpu='3' cpuset='3'/>
    <vcpupin vcpu='4' cpuset='4'/>
    <vcpupin vcpu='5' cpuset='5'/>
    <vcpupin vcpu='6' cpuset='6'/>
    <vcpupin vcpu='7' cpuset='7'/>
  </cputune>
  <os>
    <type arch='x86_64' machine='pc-i440fx-2.5'>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
    <nvram>/etc/libvirt/qemu/nvram/7c861ab6-edc9-1cfe-9c22-f84d3fd0d8e5_VARS-pure-efi.fd</nvram>
  </os>
  <features>
    <acpi/>
    <apic/>
    <hyperv>
      <relaxed state='on'/>
      <vapic state='on'/>
      <spinlocks state='on' retries='8191'/>
      <vendor id='none'/>
    </hyperv>
  </features>
  <cpu>
    <topology sockets='1' cores='8' threads='1'/>
  </cpu>
  <clock offset='localtime'>
    <timer name='hypervclock' present='yes'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/local/sbin/qemu</emulator>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/ISOz/en_windows_10_multiple_editions_updated_feb_2016_x64.iso'/>
      <target dev='hda' bus='virtio'/>
      <readonly/>
      <boot order='2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/ISOz/virtio-win-0.1.117.iso'/>
      <target dev='hdb' bus='virtio'/>
      <readonly/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/mnt/user/vdisk/Windows 10/vdisk1.img'/>
      <target dev='hdc' bus='virtio'/>
      <boot order='1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/mnt/user/ArrayVdisk/Windows 10/vdisk3.img'/>
      <target dev='hdd' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
    </disk>
    <controller type='usb' index='0' model='nec-xhci'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'/>
    <controller type='virtio-serial' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:80:5b:68'/>
      <source bridge='br0'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </interface>
    <serial type='pty'>
      <target port='0'/>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <channel type='unix'>
      <source mode='connect'/>
      <target type='virtio' name='org.qemu.guest_agent.0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x00' slot='0x1b' function='0x0'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x0a' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='no'>
      <source>
        <vendor id='0x045e'/>
        <product id='0x0719'/>
      </source>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='no'>
      <source>
        <vendor id='0x1532'/>
        <product id='0x0040'/>
      </source>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='no'>
      <source>
        <vendor id='0x1532'/>
        <product id='0x0203'/>
      </source>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='no'>
      <source>
        <vendor id='0x1b1c'/>
        <product id='0x0c04'/>
      </source>
    </hostdev>
    <memballoon model='virtio'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x0b' function='0x0'/>
    </memballoon>
  </devices>
</domain>


That OOM message is from the unRAID (host) level, not the VM level! Perhaps you should reduce the amount of memory allocated to the VM to leave more free for unRAID itself. My general rule with VMs is to allocate the minimum amount of RAM consistent with them being able to do the task I want them for.
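For anyone wanting to confirm this themselves, the kill decision can be verified from the host side. A rough sketch from the unRAID console (command availability may vary by release):

```shell
# How much RAM the host actually has free right now
free -m

# Check whether the kernel's OOM killer has fired
# (look for "Out of memory: Kill process ... qemu-system-x86" lines)
dmesg 2>/dev/null | grep -i "out of memory" || echo "no OOM events found (or dmesg restricted)"
```

If qemu-system-x86 shows up in the kill messages, the host ran out of memory; nothing the guest logs will show you that.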


The unRAID error log is hilarious. It sounds like Skynet is coming. ;D

Apr 27 21:38:49 fapper kernel: Out of memory: Kill process 3737 (qemu-system-x86) score 834 or sacrifice child

 

Anyway, on a serious note: I did notice my VM on the 6.2.0 beta crashing frequently due to an out-of-memory error too.

 

You have 16GB of RAM and dedicated 12.5GB to the VM. That leaves 3.5GB for unRAID.

 

That's a bit high. unRAID is supposed to be a low-footprint kind of thing. Am I missing something?

 


 

You have 16GB of RAM and dedicated 12.5GB to the VM. That leaves 3.5GB for unRAID.

 

That's a bit high. unRAID is supposed to be a low-footprint kind of thing. Am I missing something?

There's also overhead for the VM process itself. I'm not 100% sure how much it is, but the memory left for unRAID will be less than 3.5GB.
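To put a rough number on that overhead, here's a quick back-of-the-envelope check using only figures already posted in this thread (the XML's <memory> value and the anon-rss from the OOM log):

```shell
configured_kib=13107200   # <memory unit='KiB'> from the VM XML (= 12.5 GiB)
rss_kib=13880536          # anon-rss of qemu-system-x86 at kill time, from the log

echo "configured: $(( configured_kib / 1024 )) MiB"              # 12800 MiB
echo "resident:   $(( rss_kib / 1024 )) MiB"                     # 13555 MiB
echo "overhead:   $(( (rss_kib - configured_kib) / 1024 )) MiB"  # 755 MiB over the allocation
```

So qemu was already sitting roughly three-quarters of a GiB above the configured guest RAM when it was killed, before counting anything else the host needs.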

I had to free up almost 6GB of RAM to stop the crashing. It turns out a transfer isn't required to cause the crash; extracting the same file crashes it as well. It seems array disk I/O in the 6.2.21 beta may consume a lot of RAM when the I/O originates from within a VM.

 

The performance really degrades towards the end of the transfer/extraction (it halts momentarily). It seems to be using RAM as disk cache or something.
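That "halt at the end of the transfer" pattern is consistent with the host buffering writes in RAM (the Linux page cache) and flushing them in one burst. A way to watch this from the console, assuming stock kernel paths:

```shell
# Writeback thresholds: dirty pages accumulate in RAM up to these percentages
# of memory before writes are forced out to disk
cat /proc/sys/vm/dirty_ratio /proc/sys/vm/dirty_background_ratio

# Watch dirty data drain while (and after) a large copy finishes
grep -E '^(Dirty|Writeback):' /proc/meminfo
```

If Dirty climbs to gigabytes during the copy and drains slowly at the end, that's the stall you're seeing.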


Archived

This topic is now archived and is closed to further replies.
