
Posts posted by alexciurea

  1. Hi @AinzOolGown

    In my case I faced a problem with my gitlab-ce docker installation during updates because:

    1) there were no auto-updates, and

    2) I did not follow the update path recommended by GitLab.

     

    When I manually updated, it jumped from a very old version to the latest. The container was not starting because the DB was failing; I think the DB upgrade scripts failed.

     

    So I manually updated step by step, specifying each intermediate version's tag for the docker image, based on the info below:

    https://docs.gitlab.com/ee/update/#upgrade-paths

     

    This resolved the issue.
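
    Roughly, it went like the sketch below (only a sketch; the tags are examples, so take the actual stops from the upgrade-path page above):

    # pull an intermediate version's tag and recreate the container from it
    docker pull gitlab/gitlab-ce:11.11.8-ce.0
    # wait for the DB migrations to finish (e.g. watch "docker logs -f GitLab-CE")
    # before repeating with the next stop on the path:
    docker pull gitlab/gitlab-ce:12.0.12-ce.0
    # ...and so on, up to the latest tag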

    Maybe this is already common knowledge for many, but I hope it helps someone failing like me...

     

    -a

     

     

     

  2. On 5/3/2018 at 11:23 AM, PROJECTBLUE said:

    After updating to 6.5.1 from 6.5.0 all of my VM's started to experience stuttering and freezing especially under high load such as gaming. Rolled back to 6.5.0 and everything is fine again.

     

    Facing a similar issue: my Linux VMs with GPU passthrough fail to start, getting stuck at the TianoCore screen.

    But Windows VMs seemed OK; no issues encountered.

    Rolled back and problem solved.

  3. On 4/9/2017 at 6:10 PM, jonathanm said:

    1. What @Squid said. When you close the file progress child window, there is no way to monitor the progress of the transfer. It will continue unless you stop the docker.

     

    One way to solve this annoyance is to use the Queue Manager: press F2 (Queue) when performing long-duration actions.

  4. Hi coppit,

     

    A similar issue just happened to me.

     

    The VM is an Ubuntu Server 16.04 LTS guest, and I am passing a TEMPer USB device to it, plugged directly into the onboard USB on the motherboard's back I/O, so no hub is involved.

     

    Whenever I try to access/read the TEMPer device from inside the VM, the device gets reset; in the Unraid log I get messages like this:

    Feb 24 17:42:02 Towerx48 kernel: hid-generic 0003:0C45:7401.0003: input,hidraw0: USB HID v1.10 Keyboard [RDing TEMPerV1.4] on usb-0000:00:1d.2-1/input0
    Feb 24 17:42:02 Towerx48 kernel: hid-generic 0003:0C45:7401.0004: hiddev96,hidraw1: USB HID v1.10 Device [RDing TEMPerV1.4] on usb-0000:00:1d.2-1/input1

     

    The device becomes unavailable/inaccessible in the VM; it does not show up in lsusb anymore.

    A restart of the VM is required to see it again.

     

    I hope I have solved it: I changed the VM USB controller definition from EHCI to xHCI, even though the motherboard (Rampage Formula X48) does not have USB 3.0.

    Where in the past I was always getting the USB reset, after changing to xHCI I can consistently read it.

    I will update if I get issues in the following days...

    But it's worth a try, in case you haven't already...
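
    For reference, a minimal sketch of the change (the VM name and the original controller model are examples; yours may differ):

    # edit the VM definition (or use the XML view in the unraid VM editor)
    virsh edit "UbuntuServer"
    # then change the USB controller model, e.g. from:
    #   <controller type='usb' index='0' model='ich9-ehci1'>
    # to:
    #   <controller type='usb' index='0' model='nec-xhci'>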

     

    good luck

    alex

     

    LE: I see your device recognized under xhci_hcd:

    Jan 31 21:32:41 storage kernel: usb 5-4: reset full-speed USB device number 4 using xhci_hcd

    but I am not sure if this is because of the VM definition, or because of how Unraid sees it (most probably the latter).

    Anyhow, it's worth toggling this definition in the VM...

  5. Hi

     

    I tried to search whether this bug was already reported, but I could not find it.

    I admit that I did not browse through the dozens of result pages :) so if this is a duplicate, please feel free to remove my post.

    This looks like a small issue, and there is a workaround (W/A)... still, maybe it's worth sharing.

     

     

    Description:

    Users will end up with an orphaned VM vdisk file if, following a failed attempt to initially start the VM (with "Start VM after creation" enabled), they decide to cancel the VM definition.

     

     

    How to reproduce:

    1) Create a VM with the "Start VM after creation" checkbox enabled, and allocate more RAM to the VM than is free in the system at that moment (this is an easy way to make the VM fail on first initialization, but there are surely other ways to fail it).

    2) Save.

    3) The VM will fail to start due to insufficient RAM, with an error like:

    VM creation error

    internal error: process exited while connecting to monitor: 2018-02-22T21:30:35.398973Z qemu-system-x86_64: -chardev pty,id=charserial0: char device redirected to /dev/pts/2 (label charserial0)
    2018-02-22T21:30:35.401988Z qemu-system-x86_64: cannot set up guest memory 'pc.ram': Cannot allocate memory

     

    4) The vdisk file is created.

    5) After dismissing the popup with the RAM allocation error, if the user presses Cancel (instead of unchecking "Start VM after creation" and pressing Done), the VM definition is lost, but the vdisk is not deleted.

     

    Expected results:

    The VM definition is saved even if the user presses Cancel after the failure, since the user already pressed Create.

     

    Actual results:

    The VM definition is lost; the VM does not appear in the list.

     

    Other information:

    W/A: the user needs to log in to the Unraid terminal, identify the orphaned vdisk file under /mnt/user/domains/, and reclaim the space by manually removing it, as sketched below.

    I am not aware whether other entities are left orphaned besides the vdisk file.
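
    Roughly, the W/A looks like this (the VM folder name is just an example):

    # list the VM folders and spot the one with no matching VM definition
    ls -lh /mnt/user/domains/
    # inspect the orphaned folder, then remove the vdisk to reclaim the space
    ls -lh /mnt/user/domains/MyFailedVM/
    rm -i /mnt/user/domains/MyFailedVM/vdisk1.img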

  6. Hello,

     

    It seems that the Fix Common Problems plugin reports the gitlab-ce ports as non-standard.

    I am getting FCP errors like:

    Docker Application GitLab-CE, Container Port 22 not found or changed on installed application

    Docker Application GitLab-CE, Container Port 80 not found or changed on installed application

    Docker Application GitLab-CE, Container Port 443 not found or changed on installed application

    Somebody else reported this in the GitLab-CE docker support thread.

    I guess we can ignore it, but probably the plugin reporting such errors should be corrected?

     

     

  7. On 1/25/2018 at 2:49 AM, johnchomp said:

    Fix Common Problems plugin keeps throwing this error for this

    Docker Application GitLab-CE, Container Port 22 not found or changed on installed application

    Docker Application GitLab-CE, Container Port 80 not found or changed on installed application

    Docker Application GitLab-CE, Container Port 443 not found or changed on installed application

     

    Same issue here. I don't remember if I changed these to other values. I tried changing them to 22, 80 and 443, but the container does not start anymore.

    I tried removing the container and reinstalling, but the same values are used by default (9022, 9080, 9443). I will also post in the Fix Common Problems plugin thread.
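
    For context, a sketch of what the default mappings amount to (this is just my reading of the template; FCP seems to flag them because the host ports differ from the container ports):

    # default GitLab-CE template mappings on my install:
    #   host 9022 -> container 22  (SSH)
    #   host 9080 -> container 80  (HTTP)
    #   host 9443 -> container 443 (HTTPS)
    # the equivalent plain docker run flags would be something like:
    docker run -d --name GitLab-CE \
      -p 9022:22 -p 9080:80 -p 9443:443 \
      gitlab/gitlab-ce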

  8. Looks like it's solved. I did a few things; hope this helps somebody:

     

    1) I manually checked for updates on the Plugins page, then installed the updates.

    Some hiccups along the way:

    a) the check for updates was very slow;

    b) the process got stuck 2-3 times; I had to navigate away, go back to the Plugins page, and restart the check for updates;

    c) one plugin failed to update on the first try; I had to restart the update and then all was OK...

     

    2) Also, I disabled the autostart of the VM, then started the VM from the CLI (see the sketch below).

    Then I re-enabled autostart for that VM, and since then it starts properly and quickly, a couple of seconds after Unraid startup.
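
    Starting from the CLI was along these lines (the VM name here is an example):

    # list the defined VMs and their state
    virsh list --all
    # start the VM by name
    virsh start "Windows 10"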

     

  9. Hi,

     

    I need your help with an issue that is getting bigger and bigger...

     

    My setup:

    X99-M WS, 5820K, 32 GB RAM

    1 disk in array, 1 parity

    1 cache SSD (Crucial MX300)

    1 passed-through SSD 850 EVO (daily driver: Windows 10 with 1060 passthrough using a ROM dump)

    1 unassigned SSD (750 EVO)

     

    I noticed towards the end of the year that upon a fresh start of the Unraid box, the Windows 10 VM (autostart) would take more and more time to initialize.

    Unraid loads, reaching the tower login prompt, but then it waits longer and longer (sometimes up to 2-3 minutes) before the Win 10 VM autostarts...

     

    Now, after the new year, the VM does not start anymore under normal conditions.

    It gets stuck at the tower login prompt... The keyboard seems unresponsive; I cannot even type a username/password (I remember I could do that in the past if I was quick enough)...

     

    The current workaround is to select GUI mode from the Unraid boot menu; then at a certain point (some 3-5 minutes after loading the GUI, I believe) it will autostart the Windows 10 VM (with the usual artifacts on screen).

    One interesting thing to notice is that while waiting for the VM to autostart, I cannot access the admin console (emhttp); the Firefox browser remains at "waiting for IP..."

     

    I am not sure if this is because of the USB flash drive, some other issue with my hardware, or something else...?

     

    I do get a message just before the tower login prompt, related to Unassigned Devices, as if a certain file does not exist.

    I attach diagnostics from when I am successfully able to autostart the VM (Unraid GUI boot); I cannot extract the diagnostics for the situation when the VM does not autostart (Unraid default boot).

     

    Please kindly suggest what I should try...

     

    thanks,

    alex

    towerx99-diagnostics-20180110-0035.zip

  10. I'm using the latest stable, 6.3.5.

     

    Not sure about your question: do you mean a setting in the UEFI BIOS, or some configuration in Unraid?

    If BIOS, I think I'm using Legacy OS (non-UEFI); honestly, I am a bit confused about this and not sure if/how it matters.

     

    Here's the XML. Just note it's passing through a secondary 1060 GPU (so no ROM file); I'm also passing through an onboard ASMedia USB controller (bus 0x07).

     

    <domain type='kvm'>
      <name>MintCin</name>
      <uuid>xx</uuid>
      <description></description>
      <metadata>
        <vmtemplate xmlns="unraid" name="Ubuntu" icon="ubuntu.png" os="ubuntu"/>
      </metadata>
      <memory unit='KiB'>4194304</memory>
      <currentMemory unit='KiB'>4194304</currentMemory>
      <memoryBacking>
        <nosharepages/>
      </memoryBacking>
      <vcpu placement='static'>4</vcpu>
      <cputune>
        <vcpupin vcpu='0' cpuset='4'/>
        <vcpupin vcpu='1' cpuset='5'/>
        <vcpupin vcpu='2' cpuset='10'/>
        <vcpupin vcpu='3' cpuset='11'/>
        <emulatorpin cpuset='0,6'/>
      </cputune>
      <os>
        <type arch='x86_64' machine='pc-q35-2.7'>hvm</type>
        <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
        <nvram>/etc/libvirt/qemu/nvram/xx_VARS-pure-efi.fd</nvram>
      </os>
      <features>
        <acpi/>
        <apic/>
      </features>
      <cpu mode='host-passthrough'>
        <topology sockets='1' cores='2' threads='2'/>
      </cpu>
      <clock offset='utc'>
        <timer name='rtc' tickpolicy='catchup'/>
        <timer name='pit' tickpolicy='delay'/>
        <timer name='hpet' present='no'/>
      </clock>
      <on_poweroff>destroy</on_poweroff>
      <on_reboot>restart</on_reboot>
      <on_crash>restart</on_crash>
      <devices>
        <emulator>/usr/local/sbin/qemu</emulator>
        <disk type='file' device='disk'>
          <driver name='qemu' type='raw' cache='writeback'/>
          <source file='/mnt/disks/Samsung_SSD_750_EVO_500GB_xx/MintCin/vdisk1.img'/>
          <target dev='hdc' bus='virtio'/>
          <boot order='1'/>
          <address type='pci' domain='0x0000' bus='0x02' slot='0x04' function='0x0'/>
        </disk>
        <controller type='usb' index='0' model='nec-xhci'>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
        </controller>
        <controller type='sata' index='0'>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
        </controller>
        <controller type='pci' index='0' model='pcie-root'/>
        <controller type='pci' index='1' model='dmi-to-pci-bridge'>
          <model name='i82801b11-bridge'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x1e' function='0x0'/>
        </controller>
        <controller type='pci' index='2' model='pci-bridge'>
          <model name='pci-bridge'/>
          <target chassisNr='2'/>
          <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
        </controller>
        <controller type='virtio-serial' index='0'>
          <address type='pci' domain='0x0000' bus='0x02' slot='0x03' function='0x0'/>
        </controller>
        <filesystem type='mount' accessmode='passthrough'>
          <source dir='/mnt/user/lindrive'/>
          <target dir='lindrive'/>
          <address type='pci' domain='0x0000' bus='0x02' slot='0x01' function='0x0'/>
        </filesystem>
        <interface type='bridge'>
          <mac address='xx'/>
          <source bridge='br0'/>
          <model type='virtio'/>
          <address type='pci' domain='0x0000' bus='0x02' slot='0x02' function='0x0'/>
        </interface>
        <serial type='pty'>
          <target port='0'/>
        </serial>
        <console type='pty'>
          <target type='serial' port='0'/>
        </console>
        <channel type='unix'>
          <target type='virtio' name='org.qemu.guest_agent.0'/>
          <address type='virtio-serial' controller='0' bus='0' port='1'/>
        </channel>
        <input type='mouse' bus='ps2'/>
        <input type='keyboard' bus='ps2'/>
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
          </source>
          <address type='pci' domain='0x0000' bus='0x02' slot='0x05' function='0x0'/>
        </hostdev>
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x02' slot='0x00' function='0x1'/>
          </source>
          <address type='pci' domain='0x0000' bus='0x02' slot='0x06' function='0x0'/>
        </hostdev>
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
          </source>
          <address type='pci' domain='0x0000' bus='0x02' slot='0x07' function='0x0'/>
        </hostdev>
        <memballoon model='virtio'>
          <address type='pci' domain='0x0000' bus='0x02' slot='0x08' function='0x0'/>
        </memballoon>
      </devices>
    </domain>


     

  11. Totally understand the situation.

    The D15S is asymmetrical and might allow for better compatibility (but it might run into the top fan of the case, etc...).

     

    But with the D15... is there no option to remove the outer fan? It would only increase temps by 2-3 °C.

     

     

    I checked now: I also don't see the OVMF TianoCore splash logo, nor the GRUB menu, when booting Linux Mint 18.2; I just get the dots...

    Then the Mint logo with the loading dots, then a blank screen for 1-2 seconds, and finally the login screen, with a 1060. But this does not bother me currently.

     

    Also give it a try with the ROM explicitly specified for the GPU, even if it's not required; maybe a downloaded one will serve you better than the "who knows what" customized BIOS ASUS might have put in there...

     

    good luck...

  12. I did not encounter stability issues with the plugged-in devices. I am also switching the Unifying receiver from one controller to another (the controllers are passed through to different VMs) with no issue. USB sticks, HDDs, Speedlink wireless controllers: all went OK...

     

    The only issue I faced sometimes, as mentioned earlier, was with a Rii mini keyboard USB dongle when plugged into an ASMedia 3.1 internal controller: from time to time it makes the controller freeze or something, and I need to reset the whole Unraid box...

  13. Any chance to unblock that 1st PCIe slot by rotating the CPU cooler?

    Another option is to use a PCIe riser extension cable for that blocked 1x slot, but it depends on the case layout and on whether you have a spare opening to route that USB card out at the bottom of the motherboard tray... Maybe have it hanging in the case, and plug in devices that you are not swapping frequently (e.g. keyboard, controller, etc.).

     

    All this adds to the rabbit hole you're in... :)

    But I suggested these because the GPUs should stay in the 2 designated GPU PCIe slots; those are wired to the CPU's PCIe lanes (and not to the lanes from the chipset).

     

    Try without the USB card, to get your GPU cards and VMs configured the way you want. If that works, then clearly you have to find a way to plug that USB card into that first 1x slot... (Just note that device IDs might change, so a reconfiguration of the VMs' XMLs might be required after removing/adding the USB card...)

     

    I will check what is configured on my Mint VM. It might be that I'm also not seeing a GRUB menu, just the green dots loading...

     

     

  14. It might be that the only solution for that 780 Ti card is the ROM file. I have my doubts anyway that this is a long-term solution: it could be that the VM will start once, then subsequent restarts will not boot anymore... and a reboot (or shutdown + start) of the Unraid box will be required.

     

    Try first with a 10xx card from a friend, or a 750 Ti; that one is Maxwell, I believe, which will probably work better.

     

    good luck

  15. Dear community,

     

    I currently have an Unraid server with multiple VMs in place.

    The motherboard has 2 x 1 Gb NICs, and I recently observed that bonding is enabled...

    I currently use only one of the interfaces (only 1 cable connected). No managed switch.

     

    Probably the only benefit of enabling bonding in my case is that I can plug the cable into either of the 2 LAN ports...?

     

     

    Maybe in the future I will need to pass through the other LAN controller to one of the VMs, and my current impression is that bonding must be disabled for that.

     

    Any recommended steps to disable bonding while still being able to access the Unraid box and run the VMs?

     

    thanks,

    alex

  16. I am not sure, but I believe this is the Kepler architecture?

    Maybe because it's older, it doesn't really support all the states needed for a proper passthrough on some OSes?

    I would suggest extracting the ROM for the card, or trying to get one from TechPowerUp, and then specifying the ROM file in the XML of the VM, as sketched below.

    You might have better success...
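
    A rough sketch of dumping the ROM on the host and wiring it into the XML (the PCI address and paths here are examples, not from your system):

    # dump the ROM via sysfs while no VM is using the card
    cd /sys/bus/pci/devices/0000:02:00.0/
    echo 1 > rom
    cat rom > /mnt/user/isos/gpu.rom
    echo 0 > rom
    # then, in the VM XML, inside the GPU <hostdev>, add:
    #   <rom file='/mnt/user/isos/gpu.rom'/>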

     

    Just to clarify, I am able to pass through 10xx cards to Mint 17/18 VMs without needing that nomodeset parameter...

    I tried both: the second PCIe slot (no ROM required) and the first PCIe slot (with the ROM extracted and the file specified in the XML).

    But I don't have an integrated GPU; I'm on the X99 platform.

     

  17. I always get a similar message, but right now I'm not sure whether it's when passing through a similar card (Inateck Fresco) or the onboard ASMedia 3.1 controller; I will check and update...

     

    But the VM is OK: it boots without issues and I can work with the USB ports, plug and play...

     

    If you have any devices plugged in, try removing them. I sometimes got issues like this with a Rii mini wireless keyboard, but not with a Logitech (Unifying dongle).

    Also try various other VM guest OSes: Linux Mint, for example, or Ubuntu GNOME, or Fedora 25/26...

     
