alexciurea

Everything posted by alexciurea

  1. Hi @AinzOolGown In my case I faced a problem with my gitlab-ce docker installation during updates because: 1) no auto-updates, and 2) I did not follow the update path recommended by gitlab. When I manually updated, it jumped from a very old version straight to latest. The container was not starting because the db was failing; I think the db upgrade scripts failed. So I updated manually, step by step, pinning the respective tags in the docker template, based on the info here: https://docs.gitlab.com/ee/update/#upgrade-paths This resolved the issue (rough sketch below). Maybe this is already common knowledge for many, but I hope it helps someone failing like me... -a
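     For reference, this is roughly what one step of that looked like from the unraid console. The container name GitLab-CE and the intermediate tag are only examples - the upgrade-path page above tells you which versions you actually have to stop at:
       docker stop GitLab-CE
       # pull the next required intermediate version instead of :latest
       docker pull gitlab/gitlab-ce:13.12.15-ce.0
       # re-create the container from that tag (on unraid: edit the docker template,
       # change the Repository field to the pinned tag, apply), then wait for the
       # db migrations to finish before moving on:
       docker logs -f GitLab-CE
       # repeat for each required intermediate tag, then finally go back to :latest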
  2. facing a similar issue. my linux VMs with a passthrough GPU fail to start, getting stuck at the tianocore screen. but the windows VMs seemed ok, no issue encountered. rolled back and the problem was solved.
  3. one way to solve this annoyance is to use the Queue Manager: use F2 - Queue when performing long-duration actions.
  4. thanks for sharing this, it solved my problem when moving VMs from one unraid box to another.
  5. hello, seems that the fix common problems plugin reports the gitlab-ce ports as non-standard. Getting FCP errors like:
     Docker Application GitLab-CE, Container Port 22 not found or changed on installed application
     Docker Application GitLab-CE, Container Port 80 not found or changed on installed application
     Docker Application GitLab-CE, Container Port 443 not found or changed on installed application
     somebody else is reporting this in the GL-CE docker support thread. i guess we can ignore it, but probably the plugin reporting such an error should be corrected?
  6. same issue here. I don't remember if i changed these to other values. tried to change them to 22, 80 and 443, but the container does not start anymore. tried to remove the container and reinstall, but the same values are used by default (9022, 9080, 9443). will also post in the fix common problems plugin thread. (quick port check below)
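     for what it's worth, i suspect the container won't start with 22/80/443 because the unraid host itself already listens on those (web UI on 80, ssh on 22), which is probably why the template defaults to 9022/9080/9443. a quick way to check from the console (netstat should be available on unraid; otherwise try ss -tlnp):
       netstat -tlnp | grep -E ':(22|80|443)\b'   # shows which host process holds each port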
  7. +1 for compose / kubernetes support in unraid out of the box (and also for using multiple unraid nodes)
  8. looks like it's solved, i did a few things - hope it helps somebody: 1) i manually checked for updates on the Plugins page, then installed the updates. Some hiccups along the way: a) the check for updates was very slow b) the process got stuck 2-3 times, i had to navigate away, back to the Plugins page, and restart the check for updates c) one plugin failed to update on the first try, i had to restart the update and then all was ok... 2) Also, i disabled the autostart of the VM, then started the VM from the CLI (sketch below). Then i re-enabled autostart for that VM and since then it starts properly and quickly, a couple of seconds after unraid startup.
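     in case it helps, the CLI part was basically this (the VM name is just an example - use whatever shows up in virsh list):
       virsh list --all            # check the VM is defined and currently shut off
       virsh start "Windows 10"    # start it manually instead of waiting for autostart
     once it booted cleanly i re-enabled autostart from the VMs tab in the web UI.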
  9. i always thought that it's not possible to pass through integrated wireless cards...
  10. hi, i need your help with an issue that is getting bigger and bigger...
      my setup: x99-m ws, 5820k, 32gb ram
      1 disk in array, 1 parity
      1 cache ssd (crucial mx300)
      1 passthrough ssd 850 EVO (daily driver with windows 10 and a 1060 passed through with rom dump)
      1 unassigned ssd (750 EVO)
      I noticed towards the end of the year that upon a fresh start of the unraid box, the windows 10 VM (set to autostart) would take more time to initialize. unraid loads and reaches the tower login prompt, but it takes longer and longer (sometimes up to 2-3 minutes) for the win 10 vm to autostart... now, after the new year, the VM does not start anymore under normal conditions. it gets stuck at the tower login prompt... the keyboard seems unresponsive, i cannot even type a username/password (i remember i could do that in the past if i was quick enough)...
      Current workaround is to select gui mode from the unraid boot menu; then at some point, 3-5 minutes i believe after loading the gui, it will autostart the windows 10 vm (with the usual artifacts on screen).
      One interesting thing to notice is that while waiting for the VM to autostart, i cannot access the admin console (emhttp) - the firefox browser remains at "waiting for IP..."
      I am not sure if this is because of the flash usb drive, some other issue with my hardware, or something else...? I do get a message just before the tower login prompt, related to unassigned devices, as if a certain file does not exist.
      I attach diagnostics for when i am successfully able to autostart the VM (unraid gui boot) - but i cannot extract the diagnostics for the situation when the VM does not autostart (unraid default boot); a possible way to still grab them is sketched below.
      kindly please suggest what i should try... thanks, alex towerx99-diagnostics-20180110-0035.zip
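      (side note: even when the web UI is stuck at "waiting for IP...", i believe the diagnostics zip can still be generated from the local console or ssh - assuming the keyboard responds at that point - and it should land on the flash drive:)
        diagnostics        # i think this writes towerx99-diagnostics-<date>.zip
        ls /boot/logs/     # ...into the logs folder on the flash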
  11. really not sure about that difference - slot vs bus. are you also using the ubuntu vm template?
  12. i'm using the latest stable 6.3.5. Not sure about your question - is that a setting in the uefi bios or some configuration in unraid? If bios, i think i'm using Legacy OS (non-uefi) - really i am a bit confused on this and not sure if/how it matters. Here's the xml. Just note it's passing through a secondary 1060 gpu (so no rom file). also i'm passing through an on-board ASMedia usb controller (bus 0x07).
      <domain type='kvm'>
        <name>MintCin</name>
        <uuid>xx</uuid>
        <description></description>
        <metadata>
          <vmtemplate xmlns="unraid" name="Ubuntu" icon="ubuntu.png" os="ubuntu"/>
        </metadata>
        <memory unit='KiB'>4194304</memory>
        <currentMemory unit='KiB'>4194304</currentMemory>
        <memoryBacking>
          <nosharepages/>
        </memoryBacking>
        <vcpu placement='static'>4</vcpu>
        <cputune>
          <vcpupin vcpu='0' cpuset='4'/>
          <vcpupin vcpu='1' cpuset='5'/>
          <vcpupin vcpu='2' cpuset='10'/>
          <vcpupin vcpu='3' cpuset='11'/>
          <emulatorpin cpuset='0,6'/>
        </cputune>
        <os>
          <type arch='x86_64' machine='pc-q35-2.7'>hvm</type>
          <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
          <nvram>/etc/libvirt/qemu/nvram/xx_VARS-pure-efi.fd</nvram>
        </os>
        <features>
          <acpi/>
          <apic/>
        </features>
        <cpu mode='host-passthrough'>
          <topology sockets='1' cores='2' threads='2'/>
        </cpu>
        <clock offset='utc'>
          <timer name='rtc' tickpolicy='catchup'/>
          <timer name='pit' tickpolicy='delay'/>
          <timer name='hpet' present='no'/>
        </clock>
        <on_poweroff>destroy</on_poweroff>
        <on_reboot>restart</on_reboot>
        <on_crash>restart</on_crash>
        <devices>
          <emulator>/usr/local/sbin/qemu</emulator>
          <disk type='file' device='disk'>
            <driver name='qemu' type='raw' cache='writeback'/>
            <source file='/mnt/disks/Samsung_SSD_750_EVO_500GB_xx/MintCin/vdisk1.img'/>
            <target dev='hdc' bus='virtio'/>
            <boot order='1'/>
            <address type='pci' domain='0x0000' bus='0x02' slot='0x04' function='0x0'/>
          </disk>
          <controller type='usb' index='0' model='nec-xhci'>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
          </controller>
          <controller type='sata' index='0'>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
          </controller>
          <controller type='pci' index='0' model='pcie-root'/>
          <controller type='pci' index='1' model='dmi-to-pci-bridge'>
            <model name='i82801b11-bridge'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x1e' function='0x0'/>
          </controller>
          <controller type='pci' index='2' model='pci-bridge'>
            <model name='pci-bridge'/>
            <target chassisNr='2'/>
            <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
          </controller>
          <controller type='virtio-serial' index='0'>
            <address type='pci' domain='0x0000' bus='0x02' slot='0x03' function='0x0'/>
          </controller>
          <filesystem type='mount' accessmode='passthrough'>
            <source dir='/mnt/user/lindrive'/>
            <target dir='lindrive'/>
            <address type='pci' domain='0x0000' bus='0x02' slot='0x01' function='0x0'/>
          </filesystem>
          <interface type='bridge'>
            <mac address='xx'/>
            <source bridge='br0'/>
            <model type='virtio'/>
            <address type='pci' domain='0x0000' bus='0x02' slot='0x02' function='0x0'/>
          </interface>
          <serial type='pty'>
            <target port='0'/>
          </serial>
          <console type='pty'>
            <target type='serial' port='0'/>
          </console>
          <channel type='unix'>
            <target type='virtio' name='org.qemu.guest_agent.0'/>
            <address type='virtio-serial' controller='0' bus='0' port='1'/>
          </channel>
          <input type='mouse' bus='ps2'/>
          <input type='keyboard' bus='ps2'/>
          <hostdev mode='subsystem' type='pci' managed='yes'>
            <driver name='vfio'/>
            <source>
              <address domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
            </source>
            <address type='pci' domain='0x0000' bus='0x02' slot='0x05' function='0x0'/>
          </hostdev>
          <hostdev mode='subsystem' type='pci' managed='yes'>
            <driver name='vfio'/>
            <source>
              <address domain='0x0000' bus='0x02' slot='0x00' function='0x1'/>
            </source>
            <address type='pci' domain='0x0000' bus='0x02' slot='0x06' function='0x0'/>
          </hostdev>
          <hostdev mode='subsystem' type='pci' managed='yes'>
            <driver name='vfio'/>
            <source>
              <address domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
            </source>
            <address type='pci' domain='0x0000' bus='0x02' slot='0x07' function='0x0'/>
          </hostdev>
          <memballoon model='virtio'>
            <address type='pci' domain='0x0000' bus='0x02' slot='0x08' function='0x0'/>
          </memballoon>
        </devices>
      </domain>
  13. totally understand the situation. the d15s is asymmetrical and might allow for better compatibility (but might run into the top fan of the case, etc...) but the d15... then no option to take away the external fan? it will only increase temps by 2-3 C. i checked now - i also don't see the ovmf tianocore splash logo nor the grub menu when booting linux mint 18.2 - i just get the dots... then the mint logo with the loading dots, then a blank screen for 1-2 seconds, and finally the login screen - with a 1060. but this does not bother me currently. give it a try also to pass the rom explicitly to the gpu, even if it's not required - maybe a downloaded one will help you better than the "who knows what" customized bios asus might have put in there... good luck...
  14. i did not encounter stability issues with the plugged devices. also i am switching the unifying receiver from one controller to another (the controllers are passed through to different VMs) with no issue. USB sticks, HDDs, speedlink wireless controllers - all went ok... The only issue i faced sometimes, as mentioned earlier, was with a rii mini keyboard usb dongle, when plugged into an asmedia 3.1 internal controller - from time to time it makes the controller freeze or something, and i need to reset the whole unraid box...
  15. another alternative - do you really need that usb card? have you tried to pass through one of the onboard controllers (if there is one that can reset, is not internal - i.e. you can plug devices into it - and is not hosting the unraid flash) - maybe the asmedia one? (a quick way to list them is below)
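      quick way to see which onboard usb controllers you have (same info as Tools -> System Devices in the web UI):
        lspci -nn | grep -i usb
      just make sure whichever one you pass through is not the one the unraid flash drive hangs off.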
  16. any chance to unblock that 1st pcie slot by rotating the cpu cooler? another option is to use a pcie riser extension cable for that blocked 1x slot - but it depends on the case layout, whether you have a spare slot at the bottom to route that usb card outside the mobo area... Maybe have it hanging in the case, and plug in devices that you are not swapping frequently (e.g. keyboard, controller, etc). All this adds to the rabbit hole you're in... but i suggested these because the GPUs should stay in the 2 designated GPU PCIe slots - they are connected to the cpu pcie lanes (and not the lanes from the chipset). Try without the USB card, to get your GPU cards & VMs configured as you want. If that works, then clearly you have to find a way to plug that USB card into that first 1x slot... (just note that device ids might change, so a reconfiguration of the VM xmls might be required after removing/adding the usb card...) I will check what is configured for my mint vm. It might be that i'm also not seeing a grub menu, just the green dots loading...
  17. it might be that the only solution is with the rom file, for that 780ti card. I have my doubts anyway whether this is a long-term solution - it could be that the VM will start once, then subsequent restarts will not boot anymore... and a reboot (or shutdown+start) of the unraid box will be required. Try first with a 10xx card from a friend. or a 750ti - that one is Maxwell i believe, which will probably work better. good luck
  18. I forgot to ask you - is the GPU isolated in its own IOMMU group? (quick check below) The fact that the Windows VM works is a good thing. But Windows will probably behave differently than Linux Mint... yes, this is one of the reasons i am not in favor of liquid cooling, specifically hard tubing - too hard to play around...
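      you can check the grouping from the unraid console with something like this (Tools -> System Devices shows the same information):
        for g in /sys/kernel/iommu_groups/*; do
          echo "IOMMU group ${g##*/}:"
          for d in "$g"/devices/*; do
            lspci -nns "${d##*/}"
          done
        done
      the gpu and its hdmi audio function should sit alone in their group (or be split out with the acs override setting).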
  19. dear community, I currently have an unraid server with multiple VMs in place. The motherboard has 2 x 1Gb NICs - and i observed recently that bonding is enabled... I currently use only one of the interfaces (only 1 cable connected). No managed switch. Probably the only benefit of having bonding enabled in my case is that i can plug the cable into either of the 2 LAN ports...? Maybe in the future i will need to pass through the other LAN controller to one of the VMs - and my current impression is that bonding must be disabled for that. Any recommended steps to disable bonding and still be able to access the unraid box and run the VMs? (current config sketched below) thanks, alex
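      for context, i think the bonding part of /boot/config/network.cfg currently looks roughly like this (key names from memory, so treat it only as a sketch - the proper place to change it is Settings -> Network Settings with the array stopped):
        # /boot/config/network.cfg (excerpt)
        BONDING="yes"
        BONDNICS="eth0,eth1"
        BRIDGING="yes"
        BRNICS="bond0"
      so presumably disabling bonding means BONDING="no" and the br0 bridge sitting directly on eth0 - but i'd like confirmation before touching it.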
  20. I am not sure, but i believe this is the kepler architecture? Maybe because it's older, it doesn't really support all the states needed for a proper passthrough on some OSes? I would suggest extracting the rom for the card, or trying to get one from techpowerup, and then specifying the rom file in the xml of the VM - you might have better success... (rough sketch below) Just to clarify, i am able to pass through 10xx cards to a mint 17 or 18 VM without the need for that nomodeset parameter... I tried both: second PCIe slot (no rom required) and first PCIe slot (with the rom extracted and the file specified in the xml). but i don't have an integrated GPU - i'm on the x99 platform.
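      a rough sketch of dumping the vbios yourself from the unraid console - the pci address 02:00.0 and the output path are just examples (check yours with lspci, and make sure the target folder exists). it usually only works when the card is not currently in use by a VM; for the primary gpu you may still need the techpowerup route:
        cd /sys/bus/pci/devices/0000:02:00.0/
        echo 1 > rom                               # allow reading the rom
        cat rom > /mnt/user/isos/vbios/gpu.rom     # dump it somewhere on the array
        echo 0 > rom                               # lock it again
      then point the VM at it with <rom file='/mnt/user/isos/vbios/gpu.rom'/> inside the gpu's hostdev section of the xml.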
  21. welcome. my next suggestion would have been to try a fresh restart of the entire unraid server. I got a similar issue when trying to install ubuntu 14.04. After a restart of unraid (so all usb ports were "fresh", without any leftovers from previous connections) i was able to boot and install ubuntu 14.04. good luck!
  22. i always get a similar message - but now i'm not sure if it's when passing through a similar card (inateck fresco) or the onboard asmedia 3.1 controller - will check and update... But the VM is ok, it boots without issues and i can work with the USB ports, plug and play... If you have any devices plugged in, try to remove them. Sometimes i got issues like this with a rii mini wireless keyboard, but not with logitech (unifying dongle). Also try various other VM guest OSes - linux mint for example, or ubuntu gnome, or fedora 25/26...
  23. i faced issues with a solus budgie VM, but i was passing through GPU cards (so no vnc)
      1) Login to desktop not working - it asks for the password, it seems it is accepting it, but then the screen does not go past the login form (Solus Mate was ok though). This seems to be caused by the fact that i was putting my second GPU in a pcie slot whose lanes are allocated from the chipset, and not from the CPU... Solved when I moved the GPU... So Solus Budgie was ok for a while...
      2) An update of the distribution destroyed the VM, it did not boot anymore - not sure if this was caused by the distro or by Unraid - i had to recreate the VM again
      3) An update of the MB bios re-arranged the device ids for my GPUs and other devices - i changed the XML to accommodate the reshuffle but came back to issue 1 - cannot login... although this time the gpus are in the correct PCIe slots
      I will try in the next days to recreate the Budgie VM to see if all is ok... Overall, indeed more issues with Solus Budgie - cinnamon mint is more resilient to changes and has fewer issues... But i do agree Solus is amazing... boots super fast and the interface is one of the best!
  24. so... what nvidia card do you have? in which slot is it located? if the first slot, are you specifying the GPU ROM in the xml? - not sure if there are other methods recently...