
About alexciurea


  1. Facing a similar issue. My Linux VMs with a passthrough GPU fail to start, getting stuck at the TianoCore screen. The Windows VMs seemed OK, no issue encountered. Rolled back and problem solved.
  2. One way to solve this annoyance is to use the Queue Manager: press F2 - Queue when performing long-duration actions.
  3. Thanks for sharing this, it solved my problem when moving VMs from one unRAID box to another.
  4. Hi coppit, a similar issue just happened to me. The VM is an Ubuntu Server 16.04 LTS and I am passing through to it a TEMPer USB device, plugged directly into the onboard USB (motherboard's back I/O), so no hub involved. Whenever I try to access/read the TEMPer device from inside the VM, the device gets reset; in the unRAID log I get messages like this:

Feb 24 17:42:02 Towerx48 kernel: hid-generic 0003:0C45:7401.0003: input,hidraw0: USB HID v1.10 Keyboard [RDing TEMPerV1.4] on usb-0000:00:1d.2-1/input0
Feb 24 17:42:02 Towerx48 kernel: hid-generic 0003:0C45:7401.0004: hiddev96,hidraw1: USB HID v1.10 Device [RDing TEMPerV1.4] on usb-0000:00:1d.2-1/input1

The device then becomes unavailable/inaccessible in the VM and no longer shows up in lsusb; a restart of the VM is required to see it again.

I hope I solved it: I changed the VM's USB controller definition from ehci to xhci, even though the motherboard does not have USB 3.0 (Rampage Formula X48). Where before I was always getting the USB reset, now, after changing to xhci, I can read the device consistently. I will update if I hit issues in the following days, but it's worth a try in case you haven't already. Good luck, alex

Later edit: I see your device recognized under xhci:

Jan 31 21:32:41 storage kernel: usb 5-4: reset full-speed USB device number 4 using xhci_hcd

but I'm not sure if this is because of the VM definition or how unRAID sees it (most probably the latter). Anyhow, it's worth toggling this setting in the VM definition.
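For reference, the ehci-to-xhci switch amounts to changing the USB controller's model attribute in the libvirt domain XML. A minimal sketch of the before/after; the ich9-ehci1 model name, index, and PCI address shown here are illustrative, not copied from my actual definition:

```xml
<!-- Before: an EHCI (USB 2.0) controller model (illustrative) -->
<controller type='usb' index='0' model='ich9-ehci1'>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
</controller>

<!-- After: the XHCI controller model that stopped the resets for me -->
<controller type='usb' index='0' model='nec-xhci'>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
</controller>
```

The VM needs a full shutdown and start (not just a reboot) for the controller change to take effect.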
  5. Hi, I tried to search whether this bug was already reported, but I could not find it. I admit I did not browse through the dozens of result pages, so if this is a duplicate, please feel free to remove my post. This looks like a small issue with a workaround; still, maybe it's worth sharing.

Description: The user ends up with an orphaned VM vdisk file if, following a failed initial start of the VM (with "Start VM after creation" enabled), the user decides to cancel the VM definition.

How to reproduce:
1) Create a VM with the "Start VM after creation" checkbox enabled and allocate to the VM more RAM than is currently free in the system (this is an easy way to make the VM fail on first initialization, but there are surely other ways to fail it).
2) Save.
3) The VM fails to start due to not enough RAM, with an error like:
VM creation error
internal error: process exited while connecting to monitor: 2018-02-22T21:30:35.398973Z qemu-system-x86_64: -chardev pty,id=charserial0: char device redirected to /dev/pts/2 (label charserial0) 2018-02-22T21:30:35.401988Z qemu-system-x86_64: cannot set up guest memory 'pc.ram': Cannot allocate memory
4) The vdisk file is created.
5) After dismissing the popup with the RAM allocation error, if the user presses Cancel (instead of unchecking "Start VM after creation" and pressing Done), the VM definition is lost but the vdisk is not deleted.

Expected results: The VM definition is saved even if the user presses Cancel after the failure, since the user already pressed Create.

Actual results: The VM definition is lost and the VM does not appear in the list.

Other information: Workaround: the user needs to log in to the unRAID terminal, identify the orphaned vdisk file under /mnt/user/domains/, and reclaim the space by manually removing it. I am not aware of any other entities being left orphaned besides the vdisk file.
  6. Hello, it seems that the Fix Common Problems plugin reports the gitlab-ce ports as non-standard. I'm getting errors in FCP like:

Docker Application GitLab-CE, Container Port 22 not found or changed on installed application
Docker Application GitLab-CE, Container Port 80 not found or changed on installed application
Docker Application GitLab-CE, Container Port 443 not found or changed on installed application

Somebody else reported this in the GitLab-CE docker support thread. I guess we can ignore it, but probably the plugin reporting such errors should be corrected?
  7. Same issue here. I don't remember if I changed these to other values. I tried changing them to 22, 80, and 443, but the container does not start anymore. I tried removing the container and reinstalling, but the same default values are used (9022, 9080, 9443). I will also post in the Fix Common Problems plugin thread.
  8. +1 for compose / Kubernetes in unRAID out of the box (and also for using multiple unRAID nodes).
  9. Looks like it's solved. I did a few things; hope it helps somebody:
1) I manually checked for updates on the Plugins page, then installed the updates. Some hiccups along the way: a) the check for updates was very slow; b) the process got stuck 2-3 times, and I had to navigate away, go back to the Plugins page, and restart the check for updates; c) one plugin failed to update on the first try, so I had to restart the update, after which all was OK.
2) I also disabled autostart for the VM, started the VM from the CLI, then re-enabled autostart for that VM. Since then, it starts properly and quickly, a couple of seconds after unRAID startup.
  10. I always thought it was not possible to pass through integrated wireless cards...
  11. Hi, I need your help with an issue that is getting bigger and bigger...

My setup: X99-M WS, 5820K, 32 GB RAM; 1 disk in the array, 1 parity; 1 cache SSD (Crucial MX300); 1 passed-through SSD 850 EVO (daily driver running Windows 10 with a 1060 passthrough and ROM dump); 1 unassigned SSD (750 EVO).

I noticed towards the end of the year that, upon a fresh start of the unRAID box, the Windows 10 VM (autostart) would take more and more time to initialize. unRAID loads, reaching the Tower login prompt, but it takes longer and longer (sometimes up to 2-3 minutes) for the Windows 10 VM to autostart. Now, after the new year, the VM does not start anymore under normal conditions: it gets stuck at the Tower login prompt, and the keyboard seems unresponsive; I cannot even type a username/password (I remember I could do that in the past if I was quick enough).

My current workaround is to select GUI mode from the unRAID boot menu; then, some 3-5 minutes after loading the GUI, it autostarts the Windows 10 VM (with the usual artifacts on screen). One interesting thing to note: while waiting for the VM to autostart, I cannot access the admin console (emhttp); Firefox remains stuck at "waiting for IP...".

I am not sure if this is because of the flash USB drive, some other issue with my hardware, or something else. I do get a message just before the Tower login prompt, related to Unassigned Devices, as if a certain file does not exist. I attach diagnostics from when I am able to successfully autostart the VM (unRAID GUI boot); I cannot extract diagnostics for the situation when the VM does not autostart (unRAID default boot).

Kindly suggest what I should try. Thanks, alex
towerx99-diagnostics-20180110-0035.zip
  12. Really not sure about that difference between slot and bus. Are you also using the Ubuntu VM template?
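For anyone else puzzled by the same thing: in a libvirt PCI <address> element, bus identifies which (emulated) PCI bus or bridge the device hangs off, while slot is the device's position on that bus; function further subdivides a slot for multi-function devices. A sketch with purely illustrative values:

```xml
<!-- Illustrative only: two devices share bus 0x02 and differ by slot;
     the third device sits on a different bus (the root bus 0x00). -->
<address type='pci' domain='0x0000' bus='0x02' slot='0x04' function='0x0'/>
<address type='pci' domain='0x0000' bus='0x02' slot='0x05' function='0x0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
```

On a Q35 machine type the bus number matters because devices must sit behind the right bridge, which is likely why the templates differ.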
  13. I'm using the latest stable, 6.3.5. Not sure about your question: is that a setting in the UEFI BIOS or some configuration in unRAID? If BIOS, I think I'm using Legacy OS (non-UEFI); honestly I'm a bit confused on this and not sure if/how it matters. Here's the XML. Just note it's passing through a secondary 1060 GPU (so no ROM file); also I'm passing through an on-board ASMedia USB controller (bus 0x07).

<domain type='kvm'>
  <name>MintCin</name>
  <uuid>xx</uuid>
  <description></description>
  <metadata>
    <vmtemplate xmlns="unraid" name="Ubuntu" icon="ubuntu.png" os="ubuntu"/>
  </metadata>
  <memory unit='KiB'>4194304</memory>
  <currentMemory unit='KiB'>4194304</currentMemory>
  <memoryBacking>
    <nosharepages/>
  </memoryBacking>
  <vcpu placement='static'>4</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='4'/>
    <vcpupin vcpu='1' cpuset='5'/>
    <vcpupin vcpu='2' cpuset='10'/>
    <vcpupin vcpu='3' cpuset='11'/>
    <emulatorpin cpuset='0,6'/>
  </cputune>
  <os>
    <type arch='x86_64' machine='pc-q35-2.7'>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
    <nvram>/etc/libvirt/qemu/nvram/xx_VARS-pure-efi.fd</nvram>
  </os>
  <features>
    <acpi/>
    <apic/>
  </features>
  <cpu mode='host-passthrough'>
    <topology sockets='1' cores='2' threads='2'/>
  </cpu>
  <clock offset='utc'>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/local/sbin/qemu</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/mnt/disks/Samsung_SSD_750_EVO_500GB_xx/MintCin/vdisk1.img'/>
      <target dev='hdc' bus='virtio'/>
      <boot order='1'/>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x04' function='0x0'/>
    </disk>
    <controller type='usb' index='0' model='nec-xhci'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </controller>
    <controller type='sata' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pcie-root'/>
    <controller type='pci' index='1' model='dmi-to-pci-bridge'>
      <model name='i82801b11-bridge'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1e' function='0x0'/>
    </controller>
    <controller type='pci' index='2' model='pci-bridge'>
      <model name='pci-bridge'/>
      <target chassisNr='2'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x03' function='0x0'/>
    </controller>
    <filesystem type='mount' accessmode='passthrough'>
      <source dir='/mnt/user/lindrive'/>
      <target dir='lindrive'/>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x01' function='0x0'/>
    </filesystem>
    <interface type='bridge'>
      <mac address='xx'/>
      <source bridge='br0'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x02' function='0x0'/>
    </interface>
    <serial type='pty'>
      <target port='0'/>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <channel type='unix'>
      <target type='virtio' name='org.qemu.guest_agent.0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x05' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x02' slot='0x00' function='0x1'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x06' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x07' function='0x0'/>
    </hostdev>
    <memballoon model='virtio'>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x08' function='0x0'/>
    </memballoon>
  </devices>
</domain>
  14. Totally understand the situation. The D15S is asymmetrical and might allow for better compatibility (but it might run into the top fan of the case, etc.). With the D15, is there no option to take off the outer fan? It would only increase temps by 2-3 °C.

I checked just now: I also don't see the OVMF TianoCore splash logo, nor the GRUB menu, when booting Linux Mint 18.2 with a 1060. I just get the dots, then the Mint logo with the loading dots, then a blank screen for 1-2 seconds, and finally the login screen. But this does not bother me currently.

Also try supplying the ROM explicitly to the GPU, even if it's not required; maybe a downloaded one will serve you better than "who knows what" customized BIOS ASUS might have put in there. Good luck...
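In libvirt domain XML, explicitly supplying a ROM to a passed-through GPU is done with a <rom> element inside the hostdev. A sketch; the source PCI address and the ROM file path here are placeholders, substitute your own:

```xml
<!-- Illustrative hostdev: address and ROM path are placeholders -->
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
  </source>
  <rom file='/mnt/user/isos/gtx1060.rom'/>
</hostdev>
```

The file should be a plain VBIOS dump for the exact card model; a mismatched ROM can prevent the guest from initializing the GPU at all.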