Wuast94

Members
  • Posts: 66
  • Joined
  • Last visited

Posts posted by Wuast94

  1. I have set up a VM with a passed-through Nvidia GPU. Normally, when I shut down the VM the monitors turn off too. After updating Unraid to 6.11 the VM shuts down but the monitors stay on, showing the last displayed screen. It seems the GPU doesn't get shut down correctly?

    Hope someone can help.

    Here is my XML:
     

    <?xml version='1.0' encoding='UTF-8'?>
    <domain type='kvm' id='1'>
      <name>Manjaro</name>
      <uuid>1b2675e7-7e91-4aa5-b332-2edb20f44b0e</uuid>
      <metadata>
        <vmtemplate xmlns="unraid" name="Arch" icon="arch.png" os="arch"/>
      </metadata>
      <memory unit='KiB'>16777216</memory>
      <currentMemory unit='KiB'>16777216</currentMemory>
      <memoryBacking>
        <nosharepages/>
      </memoryBacking>
      <vcpu placement='static'>16</vcpu>
      <cputune>
        <vcpupin vcpu='0' cpuset='16'/>
        <vcpupin vcpu='1' cpuset='40'/>
        <vcpupin vcpu='2' cpuset='17'/>
        <vcpupin vcpu='3' cpuset='41'/>
        <vcpupin vcpu='4' cpuset='18'/>
        <vcpupin vcpu='5' cpuset='42'/>
        <vcpupin vcpu='6' cpuset='19'/>
        <vcpupin vcpu='7' cpuset='43'/>
        <vcpupin vcpu='8' cpuset='20'/>
        <vcpupin vcpu='9' cpuset='44'/>
        <vcpupin vcpu='10' cpuset='21'/>
        <vcpupin vcpu='11' cpuset='45'/>
        <vcpupin vcpu='12' cpuset='22'/>
        <vcpupin vcpu='13' cpuset='46'/>
        <vcpupin vcpu='14' cpuset='23'/>
        <vcpupin vcpu='15' cpuset='47'/>
      </cputune>
      <resource>
        <partition>/machine</partition>
      </resource>
      <os>
        <type arch='x86_64' machine='pc-q35-6.2'>hvm</type>
        <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
        <nvram>/etc/libvirt/qemu/nvram/1b2675e7-7e91-4aa5-b332-2edb20f44b0e_VARS-pure-efi.fd</nvram>
      </os>
      <features>
        <acpi/>
        <apic/>
      </features>
      <cpu mode='host-passthrough' check='none' migratable='on'>
        <topology sockets='1' dies='1' cores='8' threads='2'/>
        <cache mode='passthrough'/>
        <feature policy='require' name='topoext'/>
      </cpu>
      <clock offset='utc'>
        <timer name='rtc' tickpolicy='catchup'/>
        <timer name='pit' tickpolicy='delay'/>
        <timer name='hpet' present='no'/>
      </clock>
      <on_poweroff>destroy</on_poweroff>
      <on_reboot>restart</on_reboot>
      <on_crash>restart</on_crash>
      <devices>
        <emulator>/usr/local/sbin/qemu</emulator>
        <disk type='file' device='disk'>
          <driver name='qemu' type='raw' cache='writeback'/>
          <source file='/mnt/user/domains/Manjaro/vdisk1.img' index='2'/>
          <backingStore/>
          <target dev='hdc' bus='virtio'/>
          <boot order='1'/>
          <alias name='virtio-disk2'/>
          <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
        </disk>
        <disk type='file' device='cdrom'>
          <driver name='qemu' type='raw'/>
          <source file='/mnt/user/isos/manjaro-kde-21.2.5-minimal-220314-linux515.iso' index='1'/>
          <backingStore/>
          <target dev='hda' bus='sata'/>
          <readonly/>
          <boot order='2'/>
          <alias name='sata0-0-0'/>
          <address type='drive' controller='0' bus='0' target='0' unit='0'/>
        </disk>
        <controller type='usb' index='0' model='qemu-xhci' ports='15'>
          <alias name='usb'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
        </controller>
        <controller type='pci' index='0' model='pcie-root'>
          <alias name='pcie.0'/>
        </controller>
        <controller type='pci' index='1' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='1' port='0x8'/>
          <alias name='pci.1'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0' multifunction='on'/>
        </controller>
        <controller type='pci' index='2' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='2' port='0x9'/>
          <alias name='pci.2'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
        </controller>
        <controller type='pci' index='3' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='3' port='0xa'/>
          <alias name='pci.3'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
        </controller>
        <controller type='pci' index='4' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='4' port='0xb'/>
          <alias name='pci.4'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x3'/>
        </controller>
        <controller type='pci' index='5' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='5' port='0xc'/>
          <alias name='pci.5'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x4'/>
        </controller>
        <controller type='pci' index='6' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='6' port='0xd'/>
          <alias name='pci.6'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x5'/>
        </controller>
        <controller type='pci' index='7' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='7' port='0xe'/>
          <alias name='pci.7'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x6'/>
        </controller>
        <controller type='pci' index='8' model='pcie-root-port'>
          <model name='pcie-root-port'/>
          <target chassis='8' port='0xf'/>
          <alias name='pci.8'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x7'/>
        </controller>
        <controller type='virtio-serial' index='0'>
          <alias name='virtio-serial0'/>
          <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
        </controller>
        <controller type='sata' index='0'>
          <alias name='ide'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
        </controller>
        <filesystem type='mount' accessmode='passthrough'>
          <source dir='/mnt/user/'/>
          <target dir='unraid'/>
          <alias name='fs0'/>
          <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
        </filesystem>
        <interface type='bridge'>
          <mac address='52:54:00:7c:e3:9a'/>
          <source bridge='br0'/>
          <target dev='vnet0'/>
          <model type='virtio-net'/>
          <alias name='net0'/>
          <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
        </interface>
        <serial type='pty'>
          <source path='/dev/pts/1'/>
          <target type='isa-serial' port='0'>
            <model name='isa-serial'/>
          </target>
          <alias name='serial0'/>
        </serial>
        <console type='pty' tty='/dev/pts/1'>
          <source path='/dev/pts/1'/>
          <target type='serial' port='0'/>
          <alias name='serial0'/>
        </console>
        <channel type='unix'>
          <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/domain-1-Manjaro/org.qemu.guest_agent.0'/>
          <target type='virtio' name='org.qemu.guest_agent.0' state='disconnected'/>
          <alias name='channel0'/>
          <address type='virtio-serial' controller='0' bus='0' port='1'/>
        </channel>
        <input type='mouse' bus='ps2'>
          <alias name='input0'/>
        </input>
        <input type='keyboard' bus='ps2'>
          <alias name='input1'/>
        </input>
        <audio id='1' type='none'/>
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0xc1' slot='0x00' function='0x0'/>
          </source>
          <alias name='hostdev0'/>
          <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
        </hostdev>
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0xc1' slot='0x00' function='0x1'/>
          </source>
          <alias name='hostdev1'/>
          <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
        </hostdev>
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0xc2' slot='0x00' function='0x0'/>
          </source>
          <alias name='hostdev2'/>
          <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
        </hostdev>
        <hostdev mode='subsystem' type='usb' managed='no'>
          <source>
            <vendor id='0x046d'/>
            <product id='0xc31c'/>
            <address bus='3' device='3'/>
          </source>
          <alias name='hostdev3'/>
          <address type='usb' bus='0' port='1'/>
        </hostdev>
        <hostdev mode='subsystem' type='usb' managed='no'>
          <source>
            <vendor id='0x1e7d'/>
            <product id='0x2e22'/>
            <address bus='3' device='4'/>
          </source>
          <alias name='hostdev4'/>
          <address type='usb' bus='0' port='2'/>
        </hostdev>
        <memballoon model='none'/>
      </devices>
      <seclabel type='dynamic' model='dac' relabel='yes'>
        <label>+0:+100</label>
        <imagelabel>+0:+100</imagelabel>
      </seclabel>
    </domain>

     

     
    If you need more information, feel free to ask :)
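A hedged sketch of a possible workaround, not a confirmed fix: removing the GPU's PCI functions via sysfs and rescanning the bus after the VM stops can force the card to reset and blank its outputs. The addresses below match the `<hostdev>` entries in the XML above; the helper function and the whole approach are assumptions.

```shell
#!/bin/bash
# Sketch: force a PCI remove/rescan of the passed-through GPU after the VM
# stops. Addresses 0000:c1:00.0/.1 match the <hostdev> entries above.

# Build the sysfs "remove" path for a PCI address like 0000:c1:00.0.
sysfs_remove_path() {
    echo "/sys/bus/pci/devices/$1/remove"
}

reset_gpu() {
    for dev in 0000:c1:00.0 0000:c1:00.1; do
        echo 1 > "$(sysfs_remove_path "$dev")"
    done
    # Rescanning re-enumerates the removed functions.
    echo 1 > /sys/bus/pci/rescan
}

# reset_gpu   # uncomment to run on the Unraid host after VM shutdown
```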

  2. I have a similar issue.

    I have a VM where I pass through a mouse, keyboard, GPU and sound card; it worked fine for months.
    Today I added 2 more USB devices, which work fine, but now my mouse stops working after some time. With the USB hotplug plugin I can detach and attach the mouse again, but after a few minutes the mouse stops working once more.

    I then followed SpaceInvader's video to pass through the whole USB controller; nothing changed. The only difference is that I can now detach and attach the USB device physically instead of doing it over the plugin.

    Does someone know what the problem is here?

    EDIT: Same for the keyboard; the other USB devices don't have this issue.

  3. Here are a few more stats and infos; all the lsof tasks seem a bit off to me.

     

    (two screenshots attached)

     

    After stopping the Docker service the webUI gets responsive again, but CPU usage stays high, every core at about 10-30%. I pinned my cores, so when I stop the Docker service Unraid has 16/32 cores/threads to itself.

    My log was at 100%, mainly nginx with a lot of "worker connections are not enough" lines.

    Another strange thing: I want to stop the array, but the buttons (Mover and Stop Array) keep blinking, so some array process is spawning again and again?

    After a restart and starting everything up again it looks good so far; I will see if it breaks after some time.
     

  4. 2 minutes ago, Squid said:

    It looks like you've got multiple browser sessions and/or devices still open to the webUI which doesn't help things out.

     

    Also, one quirk of the UI is that basically until everything is started, the webUI won't respond to login attempts. I.e., only after all containers are started (including whatever delays) and the VMs are started will the UI begin to respond.

    I had made a backup at the time of the diag, and because of the long loading times I had multiple instances open. It makes no difference whether I use one tab or more; my containers have been up for quite some time, and even that makes no difference.

  5. 14 minutes ago, TheNyan said:

    Can't help you with the network.

    But if you set the USB Type to ModBus, you also get power readings.

    You have to enable ModBus on the UPS itself too; it's in some sub-menu.

    I enabled ModBus, but when I set the type to ModBus (no matter whether I use USB or Ether) I get nothing :/

  6. I have installed an APC Smart-UPS with a UPS Network Management Card 2.

    How can I get the data? When I use the USB cable I don't get any power readings, but it works for battery % and static data such as the serial number.

    Now I want to get it up over the network with SNMP. I select "Ether" as the UPS cable and "SNMP" as the UPS type and enter the IP address, but I don't get any information. I tried something like "192.168.178.5;apc;apc" too, but no luck. Does anyone have more information on how I can get this up and running?
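    For reference, a minimal sketch of what apcupsd's SNMP setup usually looks like in apcupsd.conf; the DEVICE line uses the `hostname:port:vendor:community` format from the apcupsd manual, while the IP and community string here are just placeholders matching the post:

    ```
    UPSCABLE ether
    UPSTYPE snmp
    DEVICE 192.168.178.5:161:APC:private
    ```

    Note the separator is a colon, not a semicolon, which may be why the "192.168.178.5;apc;apc" attempt returned nothing.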

     

  7. I'm stuck in a boot loop with 7 Days to Die. I created the server and it seemed to work fine; I then imported an existing save, changed the config (server name and so on) and started the server again.

     

    Connecting anonymously to Steam Public...Logged in OK
    Waiting for user info...OK
    Connecting anonymously to Steam Public...Logged in OK
    Waiting for user info...OK
    ---Update Server---
    Redirecting stderr to '/serverdata/Steam/logs/stderr.txt'
    [ 0%] Checking for available updates...
    [----] Verifying installation...
    Steam Console Client (c) Valve Corporation
    -- type 'quit' to exit --
    Loading Steam API...OK.
    
    Connecting anonymously to Steam Public...Logged in OK
    Waiting for user info...OK
    Success! App '294420' already up to date.
    ---Prepare Server---
    ---SaveGameFolder location correct---
    ---Savegame location found---
    ---UserDataFolder location correct---
    ---UserDataFolder location found---
    chmod: changing permissions of '/serverdata/serverfiles/serverconfig.xml': Operation not permitted
    ---Server ready---
    ---Start Server---
    Found path: /serverdata/serverfiles/7DaysToDieServer.x86_64
    ---Checking if UID: 99 matches user---
    usermod: no changes
    ---Checking if GID: 100 matches user---
    usermod: no changes
    ---Setting umask to 000---
    ---Checking for optional scripts---
    ---No optional script found, continuing---
    ---Starting...---
    ---Update SteamCMD---
    Redirecting stderr to '/serverdata/Steam/logs/stderr.txt'
    [ 0%] Checking for available updates...
    [----] Verifying installation...
    Steam Console Client (c) Valve Corporation
    -- type 'quit' to exit --
    Loading Steam API...OK.
    
    Connecting anonymously to Steam Public...Logged in OK
    Connecting anonymously to Steam Public...Logged in OK
    Waiting for user info...OK
    ---Update Server---
    Redirecting stderr to '/serverdata/Steam/logs/stderr.txt'
    [ 0%] Checking for available updates...
    [----] Verifying installation...
    Steam Console Client (c) Valve Corporation
    -- type 'quit' to exit --
    Loading Steam API...OK.
    
    Connecting anonymously to Steam Public...Logged in OK
    Waiting for user info...OK
    Success! App '294420' already up to date.
    ---Prepare Server---
    ---SaveGameFolder location correct---
    ---Savegame location found---
    ---UserDataFolder location correct---
    ---UserDataFolder location found---
    chmod: changing permissions of '/serverdata/serverfiles/serverconfig.xml': Operation not permitted
    ---Server ready---
    ---Start Server---
    Found path: /serverdata/serverfiles/7DaysToDieServer.x86_64

    I fixed the permissions in an earlier attempt; that wasn't the error, so we can ignore it for now :)

  8. On 10/7/2020 at 12:13 PM, ich777 said:

    No, I don't want a screenshot. :D

    This was only to show you that it works OOB. ;)

     

    Hmm really strange since the last entry says: 'Ready to go! Please point your browser to: http://0.0.0.0:8080' that should indicate that it runs.

    Can you please try to download a new copy with a different name (eg: 'MagicMirror2Test' or something like that) and install your plugins/addons there?

    I think there must be something wrong with a plugin/addon.

    OK, I finally found the issue...
    A module, https://github.com/NolanKingdon/MMM-DailyPokemon , has been crashing since MM2 v2.13.

  9. My Magic Mirror went black a few days ago. After a while of tweaking I confirmed there is no problem with my hardware; my server just serves a black screen.

    I didn't change anything and it was working before. I checked the logs and there are no problems there, but I just get a black screen when browsing to the URL.

     

  10. I have some problems with the OnlyOffice container. I first ran it stock from the template and was getting MySQL errors; I then changed to an external MariaDB instance and that error is gone. But now I am getting errors related to Elasticsearch. I would use an external instance there too, but there are no variables for that.

    More infos are here: https://github.com/ONLYOFFICE/Docker-CommunityServer/issues/100 . I created a GitHub issue because I think it's more related to the Docker image than to this thread, but I hope that someone has the same problem or has a fix for it :)

  11. 10 minutes ago, johnnie.black said:

    The log snippet you posted is perfectly normal, if the FAQ suggestions are being followed try this, it might catch something.

    I have two tails open, one on my PC and one piping the log into a txt file on the cache drive, to see if there is anything useful at the time of the crash.

     

    2 minutes ago, testdasi said:

    Of course, if you recently made changes to the system (even software, e.g. a new Docker container / Docker settings / VM etc.) then they do stand out as potential causes, so you should provide details of whatever changed in the last 3 days.

    I haven't changed anything on the system in the last month; I also haven't installed any plugins or anything else.

  12. The BIOS is up to date, but I will check the setting :)

    But why was my system running for months and is now getting stuck daily for the last 3 days?

     

    EDIT: RAM speeds are good, no overclock at all. The CPU is stock too.

  13. My Unraid server is shutting down randomly. First I thought it was the SSD; I checked and reformatted the whole SSD without errors.

    Then I ran a Memtest, with no errors at all.

    SMART is OK too.

     

    The server runs for several hours and then it goes offline.

    The logs are a bit weird; this is an example, the logs are full of entries like this:
     

    Apr 29 13:28:32 Server kernel: vethd123a7d: renamed from eth0
    Apr 29 13:28:32 Server kernel: docker0: port 32(vethe08ef94) entered disabled state
    Apr 29 13:28:32 Server kernel: docker0: port 32(vethe08ef94) entered disabled state
    Apr 29 13:28:32 Server kernel: device vethe08ef94 left promiscuous mode
    Apr 29 13:28:32 Server kernel: docker0: port 32(vethe08ef94) entered disabled state
    Apr 29 13:29:05 Server kernel: docker0: port 32(vethfb022db) entered blocking state
    Apr 29 13:29:05 Server kernel: docker0: port 32(vethfb022db) entered disabled state
    Apr 29 13:29:05 Server kernel: device vethfb022db entered promiscuous mode
    Apr 29 13:29:05 Server kernel: IPv6: ADDRCONF(NETDEV_UP): vethfb022db: link is not ready
    Apr 29 13:29:06 Server kernel: eth0: renamed from veth5eba356
    Apr 29 13:29:06 Server kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethfb022db: link becomes ready
    Apr 29 13:29:06 Server kernel: docker0: port 32(vethfb022db) entered blocking state
    Apr 29 13:29:06 Server kernel: docker0: port 32(vethfb022db) entered forwarding state
    Apr 29 13:29:32 Server kernel: docker0: port 34(veth3c88d1b) entered blocking state
    Apr 29 13:29:32 Server kernel: docker0: port 34(veth3c88d1b) entered disabled state
    Apr 29 13:29:32 Server kernel: device veth3c88d1b entered promiscuous mode
    Apr 29 13:29:32 Server kernel: IPv6: ADDRCONF(NETDEV_UP): veth3c88d1b: link is not ready
    Apr 29 13:29:32 Server kernel: docker0: port 34(veth3c88d1b) entered blocking state
    Apr 29 13:29:32 Server kernel: docker0: port 34(veth3c88d1b) entered forwarding state
    Apr 29 13:29:32 Server kernel: docker0: port 34(veth3c88d1b) entered disabled state
    Apr 29 13:29:32 Server kernel: eth0: renamed from veth3e71acd
    Apr 29 13:29:32 Server kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth3c88d1b: link becomes ready
    Apr 29 13:29:32 Server kernel: docker0: port 34(veth3c88d1b) entered blocking state
    Apr 29 13:29:32 Server kernel: docker0: port 34(veth3c88d1b) entered forwarding state
    Apr 29 13:29:32 Server kernel: veth3e71acd: renamed from eth0
    Apr 29 13:29:32 Server kernel: docker0: port 34(veth3c88d1b) entered disabled state
    Apr 29 13:29:32 Server kernel: docker0: port 34(veth3c88d1b) entered disabled state
    Apr 29 13:29:32 Server kernel: device veth3c88d1b left promiscuous mode
    Apr 29 13:29:32 Server kernel: docker0: port 34(veth3c88d1b) entered disabled state
    Apr 29 13:29:40 Server kernel: veth5eba356: renamed from eth0
    Apr 29 13:29:40 Server kernel: docker0: port 32(vethfb022db) entered disabled state
    Apr 29 13:29:40 Server kernel: docker0: port 32(vethfb022db) entered disabled state
    Apr 29 13:29:40 Server kernel: device vethfb022db left promiscuous mode
    Apr 29 13:29:40 Server kernel: docker0: port 32(vethfb022db) entered disabled state
    Apr 29 13:30:32 Server kernel: docker0: port 32(vethda94d6d) entered blocking state
    Apr 29 13:30:32 Server kernel: docker0: port 32(vethda94d6d) entered disabled state
    Apr 29 13:30:32 Server kernel: device vethda94d6d entered promiscuous mode
    Apr 29 13:30:32 Server kernel: IPv6: ADDRCONF(NETDEV_UP): vethda94d6d: link is not ready
    Apr 29 13:30:32 Server kernel: docker0: port 32(vethda94d6d) entered blocking state
    Apr 29 13:30:32 Server kernel: docker0: port 32(vethda94d6d) entered forwarding state
    Apr 29 13:30:32 Server kernel: docker0: port 32(vethda94d6d) entered disabled state
    Apr 29 13:30:33 Server kernel: eth0: renamed from vethbfaccc9
    Apr 29 13:30:33 Server kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethda94d6d: link becomes ready
    Apr 29 13:30:33 Server kernel: docker0: port 32(vethda94d6d) entered blocking state
    Apr 29 13:30:33 Server kernel: docker0: port 32(vethda94d6d) entered forwarding state
    Apr 29 13:30:33 Server kernel: vethbfaccc9: renamed from eth0
    Apr 29 13:30:33 Server kernel: docker0: port 32(vethda94d6d) entered disabled state
    Apr 29 13:30:33 Server kernel: docker0: port 32(vethda94d6d) entered disabled state
    Apr 29 13:30:33 Server kernel: device vethda94d6d left promiscuous mode
    Apr 29 13:30:33 Server kernel: docker0: port 32(vethda94d6d) entered disabled state
    Apr 29 13:31:33 Server kernel: docker0: port 32(veth7c1c504) entered blocking state
    Apr 29 13:31:33 Server kernel: docker0: port 32(veth7c1c504) entered disabled state
    Apr 29 13:31:33 Server kernel: device veth7c1c504 entered promiscuous mode
    Apr 29 13:31:33 Server kernel: IPv6: ADDRCONF(NETDEV_UP): veth7c1c504: link is not ready
    Apr 29 13:31:33 Server kernel: docker0: port 32(veth7c1c504) entered blocking state
    Apr 29 13:31:33 Server kernel: docker0: port 32(veth7c1c504) entered forwarding state
    Apr 29 13:31:33 Server kernel: docker0: port 32(veth7c1c504) entered disabled state
    Apr 29 13:31:33 Server kernel: eth0: renamed from veth6728934
    Apr 29 13:31:33 Server kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth7c1c504: link becomes ready
    Apr 29 13:31:33 Server kernel: docker0: port 32(veth7c1c504) entered blocking state
    Apr 29 13:31:33 Server kernel: docker0: port 32(veth7c1c504) entered forwarding state
    Apr 29 13:31:33 Server kernel: veth6728934: renamed from eth0
    Apr 29 13:31:33 Server kernel: docker0: port 32(veth7c1c504) entered disabled state
    Apr 29 13:31:33 Server kernel: docker0: port 32(veth7c1c504) entered disabled state
    Apr 29 13:31:33 Server kernel: device veth7c1c504 left promiscuous mode
    Apr 29 13:31:33 Server kernel: docker0: port 32(veth7c1c504) entered disabled state


    Diag: 

    server-diagnostics-20200429-1337.zip

  14. Is there a way to have all my Docker containers that are currently shown in the Docker tab added back automatically after recreating the docker image file?

    I have more than 30 containers running, and adding them one by one takes time and is not the best I can think of.

    At least it would be nice to have a multi-selection for adding containers back; that way I could select all the containers I want to recreate after recreating the docker image file.
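As a hedged sketch of why this should be possible at all: to my understanding, Unraid's dockerMan keeps one XML template per container you ever added, so listing those templates shows everything that could be re-added after the image file is recreated. The directory path is an assumption.

```shell
#!/bin/bash
# Sketch: list the per-container templates Unraid keeps on the flash drive.
# The path /boot/config/plugins/dockerMan/templates-user is an assumption
# about where dockerMan stores them.

# List template names (one per container) found in a directory.
list_templates() {
    local dir="$1"
    local t
    for t in "$dir"/*.xml; do
        [ -e "$t" ] || continue   # no matches: skip the literal glob
        basename "$t" .xml
    done
}

list_templates /boot/config/plugins/dockerMan/templates-user
```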

  15. My Unraid server stops running after an uptime of about a few days. I wasn't doing anything with the server at that moment.
    Unraid crashes and boots up; after I enter the encryption key and hit Start, the server crashes again.

    So I tailed the log with "tail -f /var/log/syslog".
    It runs to the point where the Docker containers boot up:

    Dec 19 04:00:09 Server kernel: docker0: port 19(vetha153c45) entered blocking state
    Dec 19 04:00:09 Server kernel: docker0: port 19(vetha153c45) entered disabled state
    Dec 19 04:00:09 Server kernel: device vetha153c45 entered promiscuous mode
    Dec 19 04:00:09 Server kernel: IPv6: ADDRCONF(NETDEV_UP): vetha153c45: link is not ready
    Dec 19 04:00:09 Server kernel: docker0: port 19(vetha153c45) entered blocking state
    Dec 19 04:00:09 Server kernel: docker0: port 19(vetha153c45) entered forwarding state
    Dec 19 04:00:09 Server kernel: docker0: port 19(vetha153c45) entered disabled state
    Dec 19 04:00:10 Server kernel: eth0: renamed from veth3e07bc1
    Dec 19 04:00:10 Server kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth352a4a0: link becomes ready
    Dec 19 04:00:10 Server kernel: docker0: port 16(veth352a4a0) entered blocking state
    Dec 19 04:00:10 Server kernel: docker0: port 16(veth352a4a0) entered forwarding state
    Dec 19 04:00:11 Server kernel: docker0: port 20(vethce2356b) entered blocking state
    Dec 19 04:00:11 Server kernel: docker0: port 20(vethce2356b) entered disabled state
    Dec 19 04:00:11 Server kernel: device vethce2356b entered promiscuous mode
    Dec 19 04:00:11 Server kernel: IPv6: ADDRCONF(NETDEV_UP): vethce2356b: link is not ready
    Dec 19 04:00:11 Server kernel: docker0: port 20(vethce2356b) entered blocking state
    Dec 19 04:00:11 Server kernel: docker0: port 20(vethce2356b) entered forwarding state
    Dec 19 04:00:11 Server kernel: docker0: port 20(vethce2356b) entered disabled state
    Dec 19 04:00:11 Server kernel: eth0: renamed from veth5b5b5ea
    Dec 19 04:00:11 Server kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethe941427: link becomes ready
    Dec 19 04:00:11 Server kernel: docker0: port 17(vethe941427) entered blocking state
    Dec 19 04:00:11 Server kernel: docker0: port 17(vethe941427) entered forwarding state
    Dec 19 04:00:13 Server kernel: docker0: port 21(vethf34c5ce) entered blocking state
    Dec 19 04:00:13 Server kernel: docker0: port 21(vethf34c5ce) entered disabled state
    Dec 19 04:00:13 Server kernel: device vethf34c5ce entered promiscuous mode
    Dec 19 04:00:13 Server kernel: IPv6: ADDRCONF(NETDEV_UP): vethf34c5ce: link is not ready
    Dec 19 04:00:13 Server kernel: docker0: port 21(vethf34c5ce) entered blocking state
    Dec 19 04:00:13 Server kernel: docker0: port 21(vethf34c5ce) entered forwarding state
    Dec 19 04:00:13 Server kernel: docker0: port 21(vethf34c5ce) entered disabled state
    Dec 19 04:00:13 Server kernel: eth0: renamed from veth0662c75
    Dec 19 04:00:13 Server kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth83fcb3f: link becomes ready
    Dec 19 04:00:13 Server kernel: docker0: port 18(veth83fcb3f) entered blocking state
    Dec 19 04:00:13 Server kernel: docker0: port 18(veth83fcb3f) entered forwarding state

    and there it stops. I think it happens somewhere between the Docker containers starting; I have more containers booting up at start than I can count in this log file.

    Is there any Docker log I can look at, or anything else where I can find this problem?
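One hedged way to narrow down which container is involved (`docker ps` and `docker logs` are standard Docker CLI; the cache path is an assumption): dump the tail of every container's log to persistent storage so the evidence survives the crash.

```shell
#!/bin/bash
# Sketch: save the tail of every container's log to the cache drive so it
# survives a crash. /mnt/cache/docker-logs is an assumed destination.

# Build the output path for a container name.
log_path() {
    echo "/mnt/cache/docker-logs/$1.log"
}

dump_logs() {
    mkdir -p /mnt/cache/docker-logs
    docker ps -a --format '{{.Names}}' | while read -r name; do
        docker logs --tail 100 "$name" > "$(log_path "$name")" 2>&1
    done
}

# dump_logs   # run on the Unraid host while Docker is still responsive
```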