JKunraid

Members
Posts: 42

Everything posted by JKunraid

  1. I opened up the fstab file with nano. I've never done this before, so I'm not sure what the exact line I should add looks like.
  2. I just realized I have one more related question: how would I keep the connection to the share persistent across reboots?
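An /etc/fstab entry is the usual way to make a virtiofs mount survive reboots. A minimal sketch, assuming the mount tag is vmShare and the guest mount point is /mnt/vmShare (both placeholders):

```shell
# Hedged sketch: persistent virtiofs mount via /etc/fstab inside the guest.
# Fields: <mount tag> <mount point> <fs type> <options> <dump> <pass>
FSTAB_LINE='vmShare /mnt/vmShare virtiofs defaults 0 0'

# As root in the guest (commented out so this sketch is safe to run anywhere):
# mkdir -p /mnt/vmShare
# echo "$FSTAB_LINE" >> /etc/fstab
# mount -a    # mounts everything in fstab, including the new entry

echo "$FSTAB_LINE"
```

After adding the line, `sudo mount -a` should mount it immediately; if that works, it should also mount at boot.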
  3. Set up the following virtiofs settings when I created a PopOS VM (settings selected from the dropdowns):

Unraid Share Mode: Virtiofs Mode
Unraid Share: user: vmShare
Unraid Source Path: /mnt/user/vmShare
Unraid Mount Tag: vmShare

The VM boots fine, but I can't see the share named "vmShare" anywhere: not in Locations, not in the VM's /mnt folder, and not in the Disks utility. Is there some step I'm missing, or is there a problem?
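When the share doesn't show up anywhere in the guest, one way to narrow it down is to try mounting the tag by hand; if the mount fails, the virtiofs device likely never made it into the VM. A sketch assuming the mount tag vmShare from the template above:

```shell
# Hedged sketch: manually mount a virtiofs share inside the guest by its tag.
TAG='vmShare'
MOUNTPOINT='/mnt/vmShare'
CMD="mount -t virtiofs $TAG $MOUNTPOINT"

# As root in the guest (commented out here):
# mkdir -p "$MOUNTPOINT"
# $CMD

echo "$CMD"
```

If that mount command errors out with something like "special device vmShare does not exist", the problem is on the host/template side rather than inside the guest.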
  4. (Quoting my earlier settings:)

Unraid Share Mode: Virtiofs Mode
Unraid Share: user: vmShare
Unraid Source Path: /mnt/user/vmShare
Unraid Mount Tag: vmShare
  5. Success! My existing Ubuntu VM still doesn't see it, and neither did the multiple new Ubuntu templates I tried, but your suggested PopOS VM recognizes the GPU. Thanks so much for the help.
  6. Threadripper 3970X on an Asus Zenith II Extreme mobo. The BIOS is from 2022; there is a newer version, so I'm going to give it a shot tomorrow. I created the diagnostic file. I have some privacy-related concerns about it since it contains a lot of data. Is there any way I can send you specific files or folders? (I will review before sending.) I'll give Pop OS a try tomorrow, then another Linux distro with an existing template in the VM setup, then Big Brother AI spyware after that... umm, I mean Windows.
  7. Results of running (dmesg | grep -e DMAR -e IOMMU) on the host system:

[ 0.507012] pci 0000:60:00.2: AMD-Vi: IOMMU performance counters supported
[ 0.508237] pci 0000:40:00.2: AMD-Vi: IOMMU performance counters supported
[ 0.513697] pci 0000:20:00.2: AMD-Vi: IOMMU performance counters supported
[ 0.515574] pci 0000:00:00.2: AMD-Vi: IOMMU performance counters supported
[ 0.518364] pci 0000:60:00.2: AMD-Vi: Found IOMMU cap 0x40
[ 0.518532] pci 0000:40:00.2: AMD-Vi: Found IOMMU cap 0x40
[ 0.518699] pci 0000:20:00.2: AMD-Vi: Found IOMMU cap 0x40
[ 0.518867] pci 0000:00:00.2: AMD-Vi: Found IOMMU cap 0x40
[ 0.521408] perf/amd_iommu: Detected AMD IOMMU #0 (2 banks, 4 counters/bank).
[ 0.521522] perf/amd_iommu: Detected AMD IOMMU #1 (2 banks, 4 counters/bank).
[ 0.521637] perf/amd_iommu: Detected AMD IOMMU #2 (2 banks, 4 counters/bank).
[ 0.521747] perf/amd_iommu: Detected AMD IOMMU #3 (2 banks, 4 counters/bank).
[ 1.748930] AMD-Vi: AMD IOMMUv2 loaded and initialized

Results of (dmesg | grep 'remapping') on the host:

[ 0.519033] AMD-Vi: Interrupt remapping enabled

Installed "disable security mitigations" with no luck. Is this enabled or disabled by default? You mention adding the line (options vfio_iommu_type1 allow_unsafe_interrupts=1) to grub. Am I supposed to append it on its own line somewhere specific? (E.g., append it to the line I previously added?)

label GPU passthrough mode
menu default
kernel /bzimage
append initrd=/bzroot video=vesafb:off,efifb:off,simplefb:off,astdrmfb initcall_blacklist=sysfb_init pci=noaer pcie_aspm=off pcie_acs_override=downstream,multifunction options vfio_iommu_type1 allow_unsafe_interrupts=1

btw - thanks for the help. Hopefully we can figure this out, but even if not, I'm learning a lot just trying different things.
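For what it's worth, a line of the form "options <module> <parameter>=<value>" is modprobe configuration syntax rather than a kernel boot parameter, so pasting it onto the append line as-is likely won't take effect. A hedged sketch of the two usual placements (the file path is an assumption about a standard Unraid layout):

```shell
# Sketch: two common ways to set a module option such as
# "vfio_iommu_type1 allow_unsafe_interrupts=1".

# 1) modprobe.d file (Unraid reads /boot/config/modprobe.d at boot):
MODPROBE_LINE='options vfio_iommu_type1 allow_unsafe_interrupts=1'
# echo "$MODPROBE_LINE" > /boot/config/modprobe.d/vfio_iommu_type1.conf

# 2) kernel command-line form (works when the module is built in), added to
#    the end of the existing "append initrd=/bzroot ..." line:
KERNEL_PARAM='vfio_iommu_type1.allow_unsafe_interrupts=1'

echo "$KERNEL_PARAM"
```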
  8. The good news: the system boots fine. The bad news: the VM still doesn't recognize the GPU. Any thoughts on what I should try next?
  9. Thanks. I made the edit (also changing it to the default menu option, I think). Can you quickly review before I try rebooting, in case I messed something up?

default menu.c32
menu title Lime Technology, Inc.
prompt 0
timeout 50
label Unraid OS
  kernel /bzimage
  append initrd=/bzroot
label Unraid OS GUI Mode
  kernel /bzimage
  append initrd=/bzroot,/bzroot-gui
label Unraid OS Safe Mode (no plugins, no GUI)
  kernel /bzimage
  append initrd=/bzroot unraidsafemode
label Unraid OS GUI Safe Mode (no plugins)
  kernel /bzimage
  append initrd=/bzroot,/bzroot-gui unraidsafemode
label Memtest86+
  kernel /memtest
label GPU passthrough mode
  menu default
  kernel /bzimage
  append initrd=/bzroot video=vesafb:off,efifb:off,simplefb:off,astdrmfb initcall_blacklist=sysfb_init pci=noaer pcie_aspm=off pcie_acs_override=downstream,multifunction

(btw, I'm on Unraid 6.12.8)
  10. I tried to add the line... "kernel /bzimage append initrd=/bzroot video=vesafb:off,efifb:off,simplefb:off,astdrmfb initcall_blacklist=sysfb_init" ...but couldn't find the grub file on the flash drive. Where exactly is it located in Unraid, and what is its exact name?
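For context, Unraid boots via syslinux rather than GRUB, so there is no grub file to find; the boot menu lives in syslinux.cfg on the flash drive (assuming a standard Unraid install):

```shell
# Unraid's boot menu is syslinux, not GRUB. On a standard install the config
# sits on the flash drive, which is mounted at /boot on the running server:
CFG='/boot/syslinux/syslinux.cfg'

# View it from the Unraid terminal (commented out here):
# cat "$CFG"

echo "$CFG"
```

It can also be edited from the webGUI under Main > Flash > Syslinux Configuration.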
  11. I ran "lspci -v | grep vfio" (on the host) and it returned the following:

Kernel driver in use: vfio-pci
Kernel driver in use: vfio-pci

I also ran "lspci -v | grep nvidia":

Kernel modules: nvidia_drm, nvidia
  12. "technical yes. but doesn't mater. My cpu has a onboard GPU. i run it in headless mode."

What steps do I need to take to run Unraid in headless mode?

"lspci -v"

Does not see the GPU.

"bind via system devices."

Both the Nvidia GPU and its audio are checked in System Devices (i.e., bound together into an IOMMU group). Here is my Ubuntu VM's XML in full, if it helps:

<?xml version='1.0' encoding='UTF-8'?>
<domain type='kvm' id='60'>
  <name>t-ubuntu</name>
  <uuid>1f83ae0a-6582-6423-7dac-30c1882ceed8</uuid>
  <metadata>
    <vmtemplate xmlns="unraid" name="Ubuntu" icon="ubuntu.png" os="ubuntu"/>
  </metadata>
  <memory unit='KiB'>16777216</memory>
  <currentMemory unit='KiB'>16777216</currentMemory>
  <memoryBacking>
    <nosharepages/>
  </memoryBacking>
  <vcpu placement='static'>16</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='24'/>
    <vcpupin vcpu='1' cpuset='56'/>
    <vcpupin vcpu='2' cpuset='25'/>
    <vcpupin vcpu='3' cpuset='57'/>
    <vcpupin vcpu='4' cpuset='26'/>
    <vcpupin vcpu='5' cpuset='58'/>
    <vcpupin vcpu='6' cpuset='27'/>
    <vcpupin vcpu='7' cpuset='59'/>
    <vcpupin vcpu='8' cpuset='28'/>
    <vcpupin vcpu='9' cpuset='60'/>
    <vcpupin vcpu='10' cpuset='29'/>
    <vcpupin vcpu='11' cpuset='61'/>
    <vcpupin vcpu='12' cpuset='30'/>
    <vcpupin vcpu='13' cpuset='62'/>
    <vcpupin vcpu='14' cpuset='31'/>
    <vcpupin vcpu='15' cpuset='63'/>
  </cputune>
  <resource>
    <partition>/machine</partition>
  </resource>
  <os>
    <type arch='x86_64' machine='pc-q35-7.2'>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
    <nvram>/etc/libvirt/qemu/nvram/1f83ae0a-6582-6423-7dac-30c1882ceed8_VARS-pure-efi.fd</nvram>
  </os>
  <features>
    <acpi/>
    <apic/>
  </features>
  <cpu mode='host-passthrough' check='none' migratable='on'>
    <topology sockets='1' dies='1' cores='8' threads='2'/>
    <cache mode='passthrough'/>
    <feature policy='require' name='topoext'/>
  </cpu>
  <clock offset='utc'>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/local/sbin/qemu</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/mnt/user/domains/t-ubuntu-chia/vdisk1.img' index='1'/>
      <backingStore/>
      <target dev='hdc' bus='virtio'/>
      <serial>vdisk1</serial>
      <boot order='1'/>
      <alias name='virtio-disk2'/>
      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
    </disk>
    <controller type='pci' index='0' model='pcie-root'>
      <alias name='pcie.0'/>
    </controller>
    <controller type='pci' index='1' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='1' port='0x8'/>
      <alias name='pci.1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0' multifunction='on'/>
    </controller>
    <controller type='pci' index='2' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='2' port='0x9'/>
      <alias name='pci.2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <controller type='pci' index='3' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='3' port='0xa'/>
      <alias name='pci.3'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
    </controller>
    <controller type='pci' index='4' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='4' port='0x13'/>
      <alias name='pci.4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
    </controller>
    <controller type='pci' index='5' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='5' port='0x14'/>
      <alias name='pci.5'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <alias name='virtio-serial0'/>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
    </controller>
    <controller type='sata' index='0'>
      <alias name='ide'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
    </controller>
    <controller type='usb' index='0' model='ich9-ehci1'>
      <alias name='usb'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci1'>
      <alias name='usb'/>
      <master startport='0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci2'>
      <alias name='usb'/>
      <master startport='2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci3'>
      <alias name='usb'/>
      <master startport='4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:e6:4f:6e'/>
      <source bridge='br0'/>
      <target dev='vnet57'/>
      <model type='virtio-net'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
    </interface>
    <serial type='pty'>
      <source path='/dev/pts/1'/>
      <target type='isa-serial' port='0'>
        <model name='isa-serial'/>
      </target>
      <alias name='serial0'/>
    </serial>
    <console type='pty' tty='/dev/pts/1'>
      <source path='/dev/pts/1'/>
      <target type='serial' port='0'/>
      <alias name='serial0'/>
    </console>
    <channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/domain-60-t-ubuntu-chia/org.qemu.guest_agent.0'/>
      <target type='virtio' name='org.qemu.guest_agent.0' state='connected'/>
      <alias name='channel0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <input type='tablet' bus='usb'>
      <alias name='input0'/>
      <address type='usb' bus='0' port='1'/>
    </input>
    <input type='mouse' bus='ps2'>
      <alias name='input1'/>
    </input>
    <input type='keyboard' bus='ps2'>
      <alias name='input2'/>
    </input>
    <audio id='1' type='none'/>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
      </source>
      <alias name='hostdev0'/>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0' multifunction='on'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x01' slot='0x00' function='0x1'/>
      </source>
      <alias name='hostdev1'/>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x1'/>
    </hostdev>
    <memballoon model='none'/>
  </devices>
  <seclabel type='dynamic' model='dac' relabel='yes'>
    <label>+0:+100</label>
    <imagelabel>+0:+100</imagelabel>
  </seclabel>
</domain>
  13. I changed the bus setting to "2" as you suggested, but it wouldn't save the XML ("attempted double use of PCI Address" error). I was able to change it to "5", but it didn't make a difference. Out of curiosity, are you on a system with two GPUs? My system has only one GPU, the 3090; it has no iGPU. The first reply (other poster) suggested that when my system boots, the host might be locking the GPU, which is what prevents the passthrough to the VM. I didn't understand his instructions on how to change that, though.
  14. I downloaded the driver directly from Nvidia. During installation it doesn't recognize the existence of the 3090, nor do I see it when I run various hardware utilities (I can see the GPU on the Unraid host but not in the VM). I also connected to the Ubuntu VM with SSH instead of RDP, but it didn't make a difference.
  15. Thanks for the reply. There is a lot to unpack, so I'm trying to make sure I'm doing things right step by step. I made the multifunction edit to my VM's config, but it doesn't work yet. Does this at least look correct (before I move on to something else)?

<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
  </source>
  <alias name='hostdev0'/>
  <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0' multifunction='on'/>
</hostdev>
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x01' slot='0x00' function='0x1'/>
  </source>
  <alias name='hostdev1'/>
  <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x1'/>
</hostdev>
  16. My Ubuntu VM installed with no issues and boots fine. I am also able to RDP into it with no problem (XRDP installed). I'm trying to do a GPU passthrough with an RTX 3090. I've checked the IOMMU group below in Tools > System Devices (and rebooted):

IOMMU group 84:
[10de:2204] 01:00.0 VGA compatible controller: NVIDIA Corporation GA102 [GeForce RTX 3090] (rev a1)
[10de:1aef] 01:00.1 Audio device: NVIDIA Corporation GA102 High Definition Audio Controller (rev a1)

Then I selected "NVIDIA GeForce RTX 3090 (01:00.0)" in the dropdown of the "Graphics Card" field of my VM's configuration. When I try to install the Nvidia Linux driver, it can't find the GPU. I installed inxi to check, then ran "sudo inxi --full". Although the Unraid OS recognizes it, my Ubuntu VM does not see my GPU. Any ideas on what I'm missing?
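One quick way to split a problem like this in half is to check, inside the guest, whether the GPU is visible on the virtual PCI bus at all: if lspci doesn't list it, no driver install can succeed and the problem is host-side. A sketch:

```shell
# Sketch: check for the passed-through NVIDIA device inside the guest.
CHECK='lspci -nn | grep -i nvidia'

# If this prints no device line, the GPU never reached the VM, and the fix
# belongs in the host binding/XML rather than in the guest driver:
sh -c "$CHECK" || echo 'no NVIDIA device visible on this PCI bus'
```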
  17. I have an Unraid server with 12 drives in the main array. Seven of them are 18TB (one of the seven is a parity drive). The remaining five drives are smaller. I'd like to replace the five smaller drives with 18TB drives. I can change the drives one by one and rebuild, but is there a faster and less drive-torturous way to do it without losing my data? For instance, can I pull all five drives at once, add the five replacements, then manually add back the data from the smaller drives I pulled (e.g., using a USB dock or temporarily using spare SATA ports)?
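If the pulled drives end up mounted somewhere readable (for example via a USB dock and the Unassigned Devices plugin), the manual copy-back step could look roughly like this. Both paths are hypothetical placeholders, and whether this approach is safe for a given parity setup is worth confirming before pulling drives:

```shell
# Sketch: copy data from an old, externally mounted drive onto the array.
# Paths below are placeholders for illustration only.
SRC='/mnt/disks/old_disk1/'   # old drive, e.g. mounted by Unassigned Devices
DST='/mnt/user/restored/'     # destination share on the new array drives

# -a preserves permissions/timestamps, -v lists files; the trailing slash on
# SRC copies the contents rather than the directory itself. Echoed here so
# the sketch is safe to run without real drives attached:
echo rsync -av "$SRC" "$DST"
```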
  18. When I click the "Add remote SMB/NFS Share" button on the Main tab, a pop-up appears that allows me to select Linux. I then click "Next" and it lets me search for servers. It can find the server, but I can't view or add the share because there is no obvious way to enter a user ID and password so I can connect to the remote share.
  19. The problem is still occurring. Here is Unraid's log of the crashed VM when I try to restart it without rebooting Unraid:

-uuid c2e6aa27-27d4-3b09-bf73-6cfe5722ef4c \
-no-user-config \
-nodefaults \
-chardev socket,id=charmonitor,fd=31,server,nowait \
-mon chardev=charmonitor,id=monitor,mode=control \
-rtc base=utc,driftfix=slew \
-global kvm-pit.lost_tick_policy=delay \
-no-hpet \
-no-shutdown \
-boot strict=on \
-device pcie-root-port,port=0x10,chassis=1,id=pci.1,bus=pcie.0,multifunction=on,addr=0x2 \
-device pcie-root-port,port=0x11,chassis=2,id=pci.2,bus=pcie.0,addr=0x2.0x1 \
-device pcie-root-port,port=0x12,chassis=3,id=pci.3,bus=pcie.0,addr=0x2.0x2 \
-device pcie-root-port,port=0x13,chassis=4,id=pci.4,bus=pcie.0,addr=0x2.0x3 \
-device pcie-root-port,port=0x14,chassis=5,id=pci.5,bus=pcie.0,addr=0x2.0x4 \
-device pcie-root-port,port=0x15,chassis=6,id=pci.6,bus=pcie.0,addr=0x2.0x5 \
-device qemu-xhci,p2=15,p3=15,id=usb,bus=pcie.0,addr=0x7 \
-device virtio-serial-pci,id=virtio-serial0,bus=pci.2,addr=0x0 \
-blockdev '{"driver":"file","filename":"/mnt/user/domains/Ubuntu20.04LTS-Desktop/vdisk1.img","node-name":"libvirt-1-storage","cache":{"direct":false,"no-flush":false},"auto-read-only":true,"discard":"unmap"}' \
-blockdev '{"node-name":"libvirt-1-format","read-only":false,"cache":{"direct":false,"no-flush":false},"driver":"qcow2","file":"libvirt-1-storage","backing":null}' \
-device virtio-blk-pci,bus=pci.4,addr=0x0,drive=libvirt-1-format,id=virtio-disk2,bootindex=1,write-cache=on \
-fsdev local,security_model=passthrough,id=fsdev-fs0,path=/mnt/user/ChiaPlots/ \
-device virtio-9p-pci,id=fs0,fsdev=fsdev-fs0,mount_tag=/mnt/ChiaPlots,bus=pci.1,addr=0x0 \
-netdev tap,fd=33,id=hostnet0 \
-device virtio-net,netdev=hostnet0,id=net0,mac=52:54:00:64:27:e5,bus=pci.3,addr=0x0 \
-chardev pty,id=charserial0 \
-device isa-serial,chardev=charserial0,id=serial0 \
-chardev socket,id=charchannel0,fd=34,server,nowait \
-device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0 \
-device usb-tablet,id=input0,bus=usb.0,port=1 \
-vnc 0.0.0.0:0,websocket=5700 \
-k en-us \
-device qxl-vga,id=video0,ram_size=67108864,vram_size=67108864,vram64_size_mb=0,vgamem_mb=16,max_outputs=1,bus=pcie.0,addr=0x1 \
-device vfio-pci,host=0000:23:00.0,id=hostdev0,bus=pci.5,addr=0x0 \
-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
-msg timestamp=on

2021-09-27 12:04:31.013+0000: Domain id=3 is tainted: high-privileges
2021-09-27 12:04:31.013+0000: Domain id=3 is tainted: host-cpu
char device redirected to /dev/pts/0 (label charserial0)
2021-09-27 12:04:57.956+0000: shutting down, reason=failed
  20. I created an Ubuntu Desktop 20.04 (x64) VM. I use it mostly for Chia farming and the occasional odd task; it's supposed to be on 24/7. It worked fine for weeks, but in the last week it has been crashing several times a day. The weird part is that after the VM crashes I can't even start it up again unless I reboot the Unraid server entirely (which suggests an Unraid problem rather than a VM OS problem). Unless I reboot the Unraid server entirely, when I try to restart the crashed VM I get the following message: "Execution error internal error: qemu unexpectedly closed the monitor". Any suggestions on how to tackle this problem?
  21. I have a share set up in Unraid. If I start my Ubuntu VM (in Unraid) and go to "Other Locations" in the file manager, I can see my Unraid server. When I click it, it prompts for a user ID/password (which I input) and then displays all my Unraid shares. I click on my share in the file manager and it appears on the left as a mounted drive. I can even add folders and files to it through the file manager. The problems arise when I then switch to a terminal window in the same VM. I can see the mounted share in the folder /run/user/1000/gvfs, but when I try to write a folder or file to it using the command line, it fails. (The folder shows permissions of drwx------.)
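GVFS mounts belong to the desktop session and are often awkward to use from scripts; a kernel CIFS mount with explicit ownership options usually behaves better from the terminal. A hedged sketch, with the server, share, and user names as hypothetical placeholders:

```shell
# Sketch: mount an SMB share with mount.cifs so the terminal user (uid 1000)
# owns the files. Server/share/user names below are placeholders.
SHARE='//tower/vmShare'
MOUNTPOINT='/mnt/vmShare'
OPTS='username=myuser,uid=1000,gid=1000,file_mode=0664,dir_mode=0775'

# As root in the guest (commented out here; needs the cifs-utils package):
# mkdir -p "$MOUNTPOINT"
# mount -t cifs -o "$OPTS" "$SHARE" "$MOUNTPOINT"

echo "mount -t cifs -o $OPTS $SHARE $MOUNTPOINT"
```

The uid/gid/file_mode/dir_mode options are what make the mount writable for the regular user, which is exactly what the gvfs mount's drwx------ permissions were blocking.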
  22. Found the solution. Speaking as an experienced dev, this was way, way too complicated to set up. I can only imagine how many non-techies give up trying to figure this out. Hopefully future versions of Unraid make this far more intuitive. For anyone else experiencing a similar problem, here is a brief walkthrough of the major steps I took to get an EVGA GTX 1070 SC running on an AMD Threadripper board on Unraid 6.9.2.

1. Enable IOMMU, CSM, and Legacy mode in the BIOS (note: for some GPUs, UEFI rather than Legacy might work better).
2.1 In Unraid, go to Tools > System Devices.
2.2 Scroll down until you find the IOMMU group with your GPU.
2.3 Click the checkbox for all the devices in the GPU's IOMMU group and save.
3. Go to the TechPowerUp website and download the VBIOS version for your GPU: https://www.techpowerup.com/vgabios/
4.1 (NOTE: this step might only apply to systems with a single NVIDIA GPU.) Download/install a hex editor.
4.2 Use the hex editor to modify the above VBIOS file, deleting the leading section (google to find the exact section that needs to be deleted for your card).
4.3 Save, then upload the modified VBIOS file to some location on your Unraid server.
5.1 Go to the VM manager screen and create a new Win10 VM.
5.2 For the initial installation, select VNC for the "Graphics Card" field.
5.3 Install Win10 and enable Remote Desktop.
5.4 Confirm you can connect with RDP.
6.1 Go back to the VM template you created to edit it.
6.2 For the "Graphics Card" field, select your GPU from the dropdown (in my case an EVGA GTX 1070 SC).
6.3 For "Graphics ROM BIOS", select your modified VBIOS.
6.4 For "Sound Card", you must select the sound card that is in the same IOMMU group as the GPU (you can add a second card if needed).
6.6 Select "UPDATE" to save the file.
7.1 Reopen your VM template from above. (This step is needed because of a bug that resets the settings below if you try to use "form view" for the following steps.)
7.2 Click on "XML View".
7.3 Scroll down to the fields that list the slots for your GPU and sound device. (In my case it looks like below. Yours may differ and may need additional edits if there are more devices in the same IOMMU group.)

<rom file='/mnt/transfer_pool/Transfer/Unraid/VBIOS/EVGA.GTX1070.8192.161103_1.rom'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</hostdev>
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x50' slot='0x00' function='0x1'/>
  </source>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>

7.4 My GPU also has a sound device built in. When you plug into a physical mobo, both devices are in the same PCIe slot. Unfortunately, when Unraid maps them to the virtual mobo, it places the devices in different slots, which confuses the NVIDIA driver. To fix this issue, you have to put all the devices in the same IOMMU group into the same slot. For my particular GPU, these were the two lines I modified from above:

<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
(changed to below)
<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0' multifunction='on'/>

<address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
(changed to below)
<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x1'/>

7.5 Click "Update" to save the XML file. Note: it's essential not to go back to the form view to save the VM template, as it will discard these manual edits. If you ever change the template again in form view, you will need to save it and then manually edit it again in XML view.
8.1 Start the VM, and try to connect with RDP after a minute (it does some sort of update, so it takes a bit longer than normal to connect).
8.2 If you can't connect with RDP, do a force shutdown of the VM and then restart it again (for some reason I had to do it twice for it to work).
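As a sanity check after a setup like the one above, the host should show the GPU and its audio function bound to vfio-pci and sharing an IOMMU group. A sketch of the checks (the PCI address 0000:01:00.x is an example; substitute your own from Tools > System Devices):

```shell
# Sketch: host-side checks that a GPU is ready for passthrough.
GPU='0000:01:00.0'     # example PCI address of the GPU
AUDIO='0000:01:00.1'   # its built-in audio function

# Driver binding: both devices should report "Kernel driver in use: vfio-pci".
echo "lspci -nnk -s ${GPU#0000:}"

# IOMMU grouping: both addresses should appear under the same group directory.
echo "ls -d /sys/kernel/iommu_groups/*/devices/$GPU"
echo "ls -d /sys/kernel/iommu_groups/*/devices/$AUDIO"
```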
  23. UPDATE 2: Now I'm getting a message from the Fix Common Problems plugin that the log is 100% full. I checked my syslog in /var/log/ and it's over 125MB. When I tail it, the last 200 records show up with this message:

Aug 9 09:11:16 Threadripper kernel: vfio-pci 0000:50:00.0: BAR 1: can't reserve [mem 0xc0000000-0xcfffffff 64bit pref]