L0rdRaiden

Members
  • Posts

    568
  • Joined

  • Last visited

Posts posted by L0rdRaiden

  1. On 3/28/2019 at 12:11 AM, ken-ji said:

    You just need to create (and persist) a bridge device for your VMs to use.

    create a xml file (ie /tmp/lab-network.xml)

    <network ipv6='yes'>
      <name>lab-network</name>
      <bridge name="virbr1" stp="on" delay="0"/>
    </network>

    Then you enable the network with

    virsh net-define /tmp/lab-network.xml

    virsh net-start lab-network

     

    This will create a bridge virbr1, which you can assign to your VMs.

    There will be a host interface virbr1-nic (but will not be assigned an IP or any such automatically)

     

    refer to https://libvirt.org/formatnetwork.html for more details on the xml file format

    I read the entire thread but just to be sure before breaking everything.

    I have a FW running in a VM, I have VMs and I have docker. All in Unraid

     

    I want to create a "virtual NIC", attach it as an interface to the FW VM, and have it available to Docker and the other VMs. As far as I understand, this is possible with the explanations given. Right?

    One concern is the speed of the interface. Currently I'm using 10 Gb NICs, but this would free some of them up for other purposes. I guess the virtual NIC will consume CPU, which is not a problem, but is there a way to know what the expected performance would be? How will the new virtual adapters appear in the VMs: as 1000 Mbps full duplex? 10000? Is there a way to configure this? Is there any other configuration with better performance? I don't see anything related to this at https://libvirt.org/formatnetwork.html
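    For what it's worth, the link speed a guest reports for a virtual NIC is usually just a nominal figure; with the virtio model the traffic is memory-backed and typically runs much faster than whatever speed the guest displays. A hedged sketch of attaching the bridge with the virtio model via virsh (the domain name `FW-VM` is an assumption, and `virbr1` is the bridge from the quoted post):

    ```shell
    # Attach a virtio NIC on bridge virbr1 to the VM (assumed domain name: FW-VM).
    # --config persists the change in the domain XML; add --live to also apply it
    # to a running guest.
    virsh attach-interface FW-VM bridge virbr1 --model virtio --config

    # Verify the interface was added to the domain definition.
    virsh domiflist FW-VM
    ```

    The equivalent domain XML is an `<interface type='bridge'>` element with `<model type='virtio'/>`, which you can also edit directly with `virsh edit FW-VM`.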

     

    Thanks

  2. Does it make sense for Unraid, a server OS, to be based on Slackware?

    Slackware is mainly maintained by one person, Patrick Volkerding. When he was ill, the future of Slackware wasn't clear; the truth is that no one knows what will happen to Slackware if he stops working on it.

    Other distros, based on Red Hat or Debian, are server grade / enterprise ready: they should be more stable and better tested, you get security updates instantly, there is SELinux support, administrative tasks would be easier for Unraid users, etc.

    Currently in Unraid we lack any way to monitor security via audit logs; with Red Hat or Debian based distros this could easily be solved, even by the users themselves.

     

    I understand that Slackware allows Unraid to use very little disk space and RAM, but I don't see how it would be a problem if Unraid needed 200 MB of RAM instead of 100 to work.

    If a more flexible distro is a must, then Arch Linux could be another option.

     

    The same goes for dockerman: I understand why dockerman was developed, but now that Docker Compose exists, dockerman should be deprecated, or at least Unraid should provide full support for Docker Compose.

     

    I think the Unraid core needs a modernization before adding new features, and the sooner the better.

  3. 9 hours ago, L0rdRaiden said:

    When I create a new Docker Compose stack and use the same name for the container that I was using in dockerman, for some reason the links to information related to the original dockerman container (donate, more info, support) appear as well; it also broke the web UI setting configured via the labels.

    I have tried deleting the dockerman templates, but does anyone know which Unraid config file I have to delete or edit so that dockerman menu information won't be associated with a container created with Compose?

     

    [screenshot]

     

    If someone else has this problem, the fix is to delete the template from Unraid and restart the server.
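    A sketch of where that cleanup happens, assuming a stock Unraid install (the template directory and the container name `Watchtower` are assumptions; adapt to your setup):

    ```shell
    # User templates created by dockerman live on the flash drive
    # (path is an assumption based on a stock Unraid install).
    ls /boot/config/plugins/dockerMan/templates-user/

    # Remove the template for the old container (name is an example),
    # then restart so the web UI stops associating it with the Compose container.
    rm /boot/config/plugins/dockerMan/templates-user/my-Watchtower.xml
    ```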

  4. 14 minutes ago, L0rdRaiden said:

    After enabling the new setting, it is still problematic.

    [screenshot]

     

    I disabled and re-enabled Docker, not a normal reboot, in case it matters.

     

    The result

    [screenshot]

     

    Examples

    [screenshot]

     

    [screenshot]

     

    But if I manually do compose down, then compose up (the buttons), it works... any idea?

    Does it only work on reboot and not if Docker is disabled via settings?

    I tried a normal reboot and it worked. Would it be easy to make it also work when Docker is disabled/enabled via the web UI?

  5. After enabling the new setting, it is still problematic.

    [screenshot]

     

    I disabled and re-enabled Docker, not a normal reboot, in case it matters.

     

    The result

    [screenshot]

     

    Examples

    [screenshot]

     

    [screenshot]

     

    But if I manually do compose down, then compose up (the buttons), it works... any idea?

    Does it only work on reboot and not if Docker is disabled via settings?

  6. When I create a new Docker Compose stack and use the same name for the container that I was using in dockerman, for some reason the links to information related to the original dockerman container (donate, more info, support) appear as well; it also broke the web UI setting configured via the labels.

    I have tried deleting the dockerman templates, but does anyone know which Unraid config file I have to delete or edit so that dockerman menu information won't be associated with a container created with Compose?

     

    [screenshot]

     

  7. Just this

     

    1) Full Docker Compose compatibility (networks, shares, etc.) within the GUI.

    2) Decoupling VMs and containers from the array when they don't use it

    3) Native backup (for array, cache, VM and container data)

    4) Incorporation of several plugins into the Unraid core; some of them are just too important not to have

    5) Support for auditd (for security purposes): https://slackbuilds.org/repository/15.0/system/audit/

     

    https://github.com/linux-audit

    https://github.com/Neo23x0/auditd

  8. 16 hours ago, primeval_god said:

    The containers from any stack that was running will still exist when docker is restarted but will be in the stopped state. For those stacks that are not set to autostart, the containers will stay stopped and the red square is the expected symbol to mark this condition. As for your stacks that are set to autostart, due to the other issue you have, the containers fail to start when compose up is run, leaving the containers in the stopped state and the red square displayed.

     

    Please clarify this statement. From your previous posts I was under the impression that clicking "Compose Up" would not result in the stack starting. My understanding was that you first had to click "Compose Down" then "Compose Up", which in effect removes and recreates the entire stack.

    The commands executed by auto-start and the compose up button are the same.

     

    My suggestion would be to move the VMs and other containers on to the custom created networks as well. Once created they will be available for use in the rest of the unRAID GUI. Essentially just replace all the unRAID created networks with a manually created equivalent. 

     

     

    I'm now on 6.12.4 and I have applied the fix for the macvlan problems, so now my networks are ethX instead of brX. I am using physical NICs for every network except the Docker internal networks.

    Now the behaviour is a little weird: sometimes if I run the stacks after the red square they just run OK, but sometimes they complain about the missing network. I have to do some more tests; I think it works when the stack was stopped before reboot and autostart is disabled, in which case I can run the stack manually after reboot and it works.

     

    I have done a little test and it looks like it works properly if I create the networks manually, so this is one possible solution, but I wouldn't like to mess too much with the Unraid network "standards".
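    For reference, a hedged sketch of the manual creation primeval_god suggested; a user-created macvlan network survives Docker restarts once "Preserve user defined networks" is enabled. The parent interface, subnet, gateway, and network name below are assumptions that need adapting to your LAN:

    ```shell
    # Create a macvlan network equivalent to the Unraid-created eth1 network.
    # Subnet/gateway/parent are examples only; match your own network.
    docker network create -d macvlan \
      --subnet=10.10.40.0/24 \
      --gateway=10.10.40.1 \
      -o parent=eth1 \
      eth1-custom

    # Verify it exists; compose files can then reference it as an
    # external network named eth1-custom.
    docker network ls
    ```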

     

    I know it's too much to ask, but could you add an (optional) setting so that on boot or Docker service restart, Compose Manager does a compose down and then a compose up, not only compose up? I guess this will fix the problems for the "noobs" without messing with the creation of custom networks in Docker.

    A setting to delay the startup, like Unraid has, would be nice as well. I have a lot of containers pending migration, but I don't want to run them all at the same time on restart.

     

    In any case, you are doing an excellent job. I don't understand why Compose doesn't get official support from Unraid; that doesn't mean it has to replace dockerman, but Compose could be provided as a fully supported alternative.

  9. 12 hours ago, primeval_god said:

    The compose stacks without autostart do not 'fail'; they are working as intended. The Red Stop Icon indicates that the compose stack is stopped, i.e. its containers exist but none of them are running. This is expected behavior for a stack with autostart off when docker restarts. It is the same behavior as for non-compose containers on unRAID that are not configured with autostart.

     

    As for your compose stacks that are set to auto start, the problem and recommendation remain the same as last time it was brought up. Compose stacks do not seem to work with the docker networks created by unRAID (via the gui). It appears that those networks are removed and recreated by unRAID when docker is restarted. Compose does not like this as it references networks not by name but by id in its internal live state. The solution is to not use those networks with compose stacks, instead you need to create custom macvlan networks manually on the command line using a docker network command. Once done the "Preserve user defined networks" setting will ensure that the custom networks are not removed between docker restarts. Compose containers should start correctly when attached to those networks. 

     

    As for why non-compose containers work fine with the networks unRAID creates, I don't really know. I assume it has something to do with how dockerman works, though it could just be something compose is picky about.

     

    Thanks for your detailed answer, but regarding the red square: all the stacks always start with a red square, no matter whether they have autostart enabled or not.

     

    I don't want to create a different network because the stacks need to communicate with services on the existing networks that are used by other containers and VMs. I know you can allow connectivity between networks, but it would unnecessarily complicate things.


    So, considering this "restriction" (and maybe this question is very stupid): for some unknown reason Compose Manager is not able to properly autostart stacks when existing networks (created by Unraid) are used, BUT if I manually click the "Compose Up" button, the stack launches properly and works. So my question is: why does autostart not work, when it works perfectly if I manually press "Compose Up"?

    Is the command different?

    Is it because autostart tries to start the compose stacks too soon, before the networks are ready? If that is the case, can we add a delay option, like waiting 30 seconds after Docker starts before starting the compose stacks?
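    If timing really is the issue, a crude interim workaround (just a sketch; the compose file path is a placeholder for wherever your stack lives) would be to bring the stack up from a delayed startup script instead of relying on the plugin's autostart:

    ```shell
    # Give Docker time to finish recreating its networks, then start the stack.
    # Replace the path with the location of your stack's compose file.
    sleep 30
    docker compose -f /path/to/your/stack/docker-compose.yml up -d
    ```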

     

    I mean, I cannot understand why doing it manually works but autostart does not, if they are doing basically the same thing: starting a compose stack.

  10. @primeval_god

    I'm using the latest version released today (thanks), but the same happens with the older ones.

    I still have the problem where the Docker Compose stacks fail to start on reboot.

    For example, this compose file:

     

    ###############################################################
    # Watchtower 
    ###############################################################
    
    version: '3.8'
    
    # Services ####################################################
    
    services:
    
      watchtower:
        image: containrrr/watchtower
        container_name: Watchtower
        restart: unless-stopped
        networks:
          eth1:
            ipv4_address: 10.10.40.7
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock
          #- /mnt/user/Docker/Watchtower:/config
        environment:
          - TZ
          - WATCHTOWER_CLEANUP=true
          - WATCHTOWER_INCLUDE_RESTARTING=true
          - WATCHTOWER_INCLUDE_STOPPED=true
          - WATCHTOWER_REVIVE_STOPPED=false
          - WATCHTOWER_TIMEOUT=60s
          - WATCHTOWER_LABEL_ENABLE=true
          - WATCHTOWER_LIFECYCLE_HOOKS=true
          - WATCHTOWER_NOTIFICATIONS=shoutrrr
          - WATCHTOWER_NOTIFICATION_URL
         #- WATCHTOWER_DEBUG=true
          - WATCHTOWER_LOG_LEVEL=info
          - WATCHTOWER_SCHEDULE=0 0 1 * * *
        secrets:
          - WATCHTOWER_NOTIFICATION_URL
        labels:
          - "com.centurylinklabs.watchtower.enable=true"
    
    # Networks ####################################################
    
    networks:
      eth1:
        driver: macvlan
        external: true
    
    # Secrets ##############################################
    
    secrets:
      # WATCHTOWER_NOTIFICATION_URL
      WATCHTOWER_NOTIFICATION_URL:
        file: $DOCKERDIR/WATCHTOWER_NOTIFICATION_URL
        

     

     

    It is using eth1 with macvlan.

    The network is created by Unraid and used by other containers that I run without Compose, without any issue; they autostart fine with Unraid.

    [screenshot]

     

     

    Watchtower has autostart enabled.

    [screenshot]

     

    I stop Docker in Unraid:

    [screenshot]

    And I enable it again.

    Watchtower appears like this:

    [screenshot]

    All the compose stacks fail, even if they don't have autostart enabled.

    [screenshot]

     

    I have to start them manually to make them work.

     

    This is the log in debug mode during the whole process.

     

    I think I have a very common setup, so why don't my compose stacks autostart properly?

     

    Sep 13 17:49:27 Unraid ool www[13549]: /usr/local/emhttp/plugins/dynamix/scripts/emcmd 'cmdStatus=Apply'
    Sep 13 17:49:27 Unraid emhttpd: Starting services...
    Sep 13 17:49:27 Unraid emhttpd: shcmd (232738): /etc/rc.d/rc.samba restart
    Sep 13 17:49:27 Unraid winbindd[6185]: [2023/09/13 17:49:27.391735,  0] ../../source3/winbindd/winbindd_dual.c:1950(winbindd_sig_term_handler)
    Sep 13 17:49:27 Unraid winbindd[6185]:   Got sig[15] terminate (is_parent=1)
    Sep 13 17:49:27 Unraid winbindd[6187]: [2023/09/13 17:49:27.391756,  0] ../../source3/winbindd/winbindd_dual.c:1950(winbindd_sig_term_handler)
    Sep 13 17:49:27 Unraid winbindd[6187]:   Got sig[15] terminate (is_parent=0)
    Sep 13 17:49:27 Unraid winbindd[6489]: [2023/09/13 17:49:27.391808,  0] ../../source3/winbindd/winbindd_dual.c:1950(winbindd_sig_term_handler)
    Sep 13 17:49:27 Unraid winbindd[6489]:   Got sig[15] terminate (is_parent=0)
    Sep 13 17:49:29 Unraid root: Starting Samba:  /usr/sbin/smbd -D
    Sep 13 17:49:29 Unraid smbd[14080]: [2023/09/13 17:49:29.564446,  0] ../../source3/smbd/server.c:1741(main)
    Sep 13 17:49:29 Unraid smbd[14080]:   smbd version 4.17.10 started.
    Sep 13 17:49:29 Unraid smbd[14080]:   Copyright Andrew Tridgell and the Samba Team 1992-2022
    Sep 13 17:49:29 Unraid root:                  /usr/sbin/winbindd -D
    Sep 13 17:49:29 Unraid winbindd[14082]: [2023/09/13 17:49:29.588195,  0] ../../source3/winbindd/winbindd.c:1440(main)
    Sep 13 17:49:29 Unraid winbindd[14082]:   winbindd version 4.17.10 started.
    Sep 13 17:49:29 Unraid winbindd[14082]:   Copyright Andrew Tridgell and the Samba Team 1992-2022
    Sep 13 17:49:29 Unraid winbindd[14087]: [2023/09/13 17:49:29.593508,  0] ../../source3/winbindd/winbindd_cache.c:3117(initialize_winbindd_cache)
    Sep 13 17:49:29 Unraid winbindd[14087]:   initialize_winbindd_cache: clearing cache and re-creating with version number 2
    Sep 13 17:49:29 Unraid emhttpd: shcmd (232742): /etc/rc.d/rc.avahidaemon restart
    Sep 13 17:49:29 Unraid root: Stopping Avahi mDNS/DNS-SD Daemon: stopped
    Sep 13 17:49:29 Unraid avahi-daemon[6227]: Got SIGTERM, quitting.
    Sep 13 17:49:29 Unraid avahi-dnsconfd[6236]: read(): EOF
    Sep 13 17:49:29 Unraid avahi-daemon[6227]: Leaving mDNS multicast group on interface eth0.IPv4 with address 10.10.10.5.
    Sep 13 17:49:29 Unraid avahi-daemon[6227]: avahi-daemon 0.8 exiting.
    Sep 13 17:49:29 Unraid root: Starting Avahi mDNS/DNS-SD Daemon: /usr/sbin/avahi-daemon -D
    Sep 13 17:49:29 Unraid avahi-daemon[14140]: Found user 'avahi' (UID 61) and group 'avahi' (GID 214).
    Sep 13 17:49:29 Unraid avahi-daemon[14140]: Successfully dropped root privileges.
    Sep 13 17:49:29 Unraid avahi-daemon[14140]: avahi-daemon 0.8 starting up.
    Sep 13 17:49:29 Unraid avahi-daemon[14140]: Successfully called chroot().
    Sep 13 17:49:29 Unraid avahi-daemon[14140]: Successfully dropped remaining capabilities.
    Sep 13 17:49:29 Unraid avahi-daemon[14140]: Loading service file /services/sftp-ssh.service.
    Sep 13 17:49:29 Unraid avahi-daemon[14140]: Loading service file /services/smb.service.
    Sep 13 17:49:29 Unraid avahi-daemon[14140]: Loading service file /services/ssh.service.
    Sep 13 17:49:29 Unraid avahi-daemon[14140]: Joining mDNS multicast group on interface eth0.IPv4 with address 10.10.10.5.
    Sep 13 17:49:29 Unraid avahi-daemon[14140]: New relevant interface eth0.IPv4 for mDNS.
    Sep 13 17:49:29 Unraid avahi-daemon[14140]: Network interface enumeration completed.
    Sep 13 17:49:29 Unraid avahi-daemon[14140]: Registering new address record for 10.10.10.5 on eth0.IPv4.
    Sep 13 17:49:29 Unraid emhttpd: shcmd (232743): /etc/rc.d/rc.avahidnsconfd restart
    Sep 13 17:49:29 Unraid root: Stopping Avahi mDNS/DNS-SD DNS Server Configuration Daemon: stopped
    Sep 13 17:49:29 Unraid root: Starting Avahi mDNS/DNS-SD DNS Server Configuration Daemon:  /usr/sbin/avahi-dnsconfd -D
    Sep 13 17:49:29 Unraid avahi-dnsconfd[14153]: Successfully connected to Avahi daemon.
    Sep 13 17:49:30 Unraid root: Stopping compose stack: Graylog
    Sep 13 17:49:30 Unraid root: Stopping compose stack: Immich
    Sep 13 17:49:30 Unraid root: Stopping compose stack: Monitoring
    Sep 13 17:49:30 Unraid root: Stopping compose stack: Watchtower
    Sep 13 17:49:30 Unraid root: Stopping compose stack: WebProxyDMZ
    Sep 13 17:49:30 Unraid emhttpd: shcmd (232749): /etc/rc.d/rc.docker stop
    Sep 13 17:49:30 Unraid kernel: veth606a3b1: renamed from eth0
    Sep 13 17:49:30 Unraid kernel: vethc72eb47: renamed from eth0
    Sep 13 17:49:30 Unraid avahi-daemon[14140]: Server startup complete. Host name is Unraid.local. Local service cookie is 1752185444.
    Sep 13 17:49:30 Unraid kernel: veth00276db: renamed from eth0
    Sep 13 17:49:30 Unraid kernel: veth9fb0042: renamed from eth0
    Sep 13 17:49:30 Unraid kernel: veth34695c8: renamed from eth0
    Sep 13 17:49:30 Unraid kernel: veth2f4dbac: renamed from eth0
    Sep 13 17:49:30 Unraid kernel: veth12fb531: renamed from eth0
    Sep 13 17:49:30 Unraid kernel: veth3d116ec: renamed from eth0
    Sep 13 17:49:30 Unraid kernel: veth4ea5315: renamed from eth0
    Sep 13 17:49:30 Unraid kernel: vethfd45763: renamed from eth0
    Sep 13 17:49:31 Unraid kernel: vethe580030: renamed from eth0
    Sep 13 17:49:31 Unraid kernel: veth445a552: renamed from eth0
    Sep 13 17:49:31 Unraid kernel: veth9209006: renamed from eth0
    Sep 13 17:49:31 Unraid kernel: vethba6a2a1: renamed from eth0
    Sep 13 17:49:31 Unraid kernel: veth92f1c22: renamed from eth0
    Sep 13 17:49:31 Unraid kernel: vethdd15901: renamed from eth0
    Sep 13 17:49:31 Unraid kernel: veth38db138: renamed from eth0
    Sep 13 17:49:31 Unraid kernel: vetha1e9fc3: renamed from eth0
    Sep 13 17:49:31 Unraid kernel: vethc9eaaf6: renamed from eth0
    Sep 13 17:49:31 Unraid kernel: br-33965e1a718e: port 2(veth96e1bd4) entered disabled state
    Sep 13 17:49:31 Unraid kernel: br-3f5c27e5ceca: port 6(vethc4a8398) entered disabled state
    Sep 13 17:49:31 Unraid kernel: br-3f5c27e5ceca: port 1(veth245e18d) entered disabled state
    Sep 13 17:49:31 Unraid kernel: br-3f5c27e5ceca: port 5(vethd6a72e4) entered disabled state
    Sep 13 17:49:31 Unraid kernel: br-3f5c27e5ceca: port 8(veth21f9d76) entered disabled state
    Sep 13 17:49:31 Unraid kernel: br-98e0e83abfb0: port 1(veth3541939) entered disabled state
    Sep 13 17:49:31 Unraid kernel: br-1b3b1767229f: port 2(veth4c09404) entered disabled state
    Sep 13 17:49:31 Unraid kernel: br-3f5c27e5ceca: port 4(vetha14d5aa) entered disabled state
    Sep 13 17:49:31 Unraid kernel: veth804b255: renamed from eth0
    Sep 13 17:49:31 Unraid kernel: veth3bf140b: renamed from eth0
    Sep 13 17:49:31 Unraid kernel: vethd5f8ceb: renamed from eth0
    Sep 13 17:49:31 Unraid kernel: veth6182234: renamed from eth0
    Sep 13 17:49:31 Unraid kernel: br-3f5c27e5ceca: port 6(vethc4a8398) entered disabled state
    Sep 13 17:49:31 Unraid kernel: device vethc4a8398 left promiscuous mode
    Sep 13 17:49:31 Unraid kernel: br-3f5c27e5ceca: port 6(vethc4a8398) entered disabled state
    Sep 13 17:49:31 Unraid kernel: br-3f5c27e5ceca: port 1(veth245e18d) entered disabled state
    Sep 13 17:49:31 Unraid kernel: device veth245e18d left promiscuous mode
    Sep 13 17:49:31 Unraid kernel: br-3f5c27e5ceca: port 1(veth245e18d) entered disabled state
    Sep 13 17:49:31 Unraid kernel: vetha609c78: renamed from eth0
    Sep 13 17:49:31 Unraid kernel: br-33965e1a718e: port 1(vethcbe2c9e) entered disabled state
    Sep 13 17:49:31 Unraid kernel: br-3f5c27e5ceca: port 5(vethd6a72e4) entered disabled state
    Sep 13 17:49:31 Unraid kernel: device vethd6a72e4 left promiscuous mode
    Sep 13 17:49:31 Unraid kernel: br-3f5c27e5ceca: port 5(vethd6a72e4) entered disabled state
    Sep 13 17:49:31 Unraid kernel: br-3f5c27e5ceca: port 8(veth21f9d76) entered disabled state
    Sep 13 17:49:31 Unraid kernel: device veth21f9d76 left promiscuous mode
    Sep 13 17:49:31 Unraid kernel: br-3f5c27e5ceca: port 8(veth21f9d76) entered disabled state
    Sep 13 17:49:31 Unraid kernel: br-1b3b1767229f: port 2(veth4c09404) entered disabled state
    Sep 13 17:49:31 Unraid kernel: device veth4c09404 left promiscuous mode
    Sep 13 17:49:31 Unraid kernel: br-1b3b1767229f: port 2(veth4c09404) entered disabled state
    Sep 13 17:49:31 Unraid kernel: br-3f5c27e5ceca: port 4(vetha14d5aa) entered disabled state
    Sep 13 17:49:31 Unraid kernel: device vetha14d5aa left promiscuous mode
    Sep 13 17:49:31 Unraid kernel: br-3f5c27e5ceca: port 4(vetha14d5aa) entered disabled state
    Sep 13 17:49:31 Unraid avahi-daemon[14140]: Service "Unraid" (/services/ssh.service) successfully established.
    Sep 13 17:49:31 Unraid avahi-daemon[14140]: Service "Unraid" (/services/smb.service) successfully established.
    Sep 13 17:49:31 Unraid avahi-daemon[14140]: Service "Unraid" (/services/sftp-ssh.service) successfully established.
    Sep 13 17:49:31 Unraid kernel: br-33965e1a718e: port 2(veth96e1bd4) entered disabled state
    Sep 13 17:49:31 Unraid kernel: device veth96e1bd4 left promiscuous mode
    Sep 13 17:49:31 Unraid kernel: br-33965e1a718e: port 2(veth96e1bd4) entered disabled state
    Sep 13 17:49:31 Unraid kernel: br-98e0e83abfb0: port 1(veth3541939) entered disabled state
    Sep 13 17:49:31 Unraid kernel: device veth3541939 left promiscuous mode
    Sep 13 17:49:31 Unraid kernel: br-98e0e83abfb0: port 1(veth3541939) entered disabled state
    Sep 13 17:49:31 Unraid kernel: br-33965e1a718e: port 1(vethcbe2c9e) entered disabled state
    Sep 13 17:49:31 Unraid kernel: device vethcbe2c9e left promiscuous mode
    Sep 13 17:49:31 Unraid kernel: br-33965e1a718e: port 1(vethcbe2c9e) entered disabled state
    Sep 13 17:49:32 Unraid kernel: br-3f5c27e5ceca: port 3(vethff5141d) entered disabled state
    Sep 13 17:49:32 Unraid kernel: vethaa4e234: renamed from eth0
    Sep 13 17:49:32 Unraid kernel: br-3f5c27e5ceca: port 3(vethff5141d) entered disabled state
    Sep 13 17:49:32 Unraid kernel: device vethff5141d left promiscuous mode
    Sep 13 17:49:32 Unraid kernel: br-3f5c27e5ceca: port 3(vethff5141d) entered disabled state
    Sep 13 17:49:32 Unraid kernel: br-3f5c27e5ceca: port 9(vethcb1dde7) entered disabled state
    Sep 13 17:49:32 Unraid kernel: vethfbf9b1b: renamed from eth1
    Sep 13 17:49:32 Unraid kernel: vethfa9fc66: renamed from eth1
    Sep 13 17:49:32 Unraid kernel: br-3f5c27e5ceca: port 2(veth5202ead) entered disabled state
    Sep 13 17:49:32 Unraid kernel: br-3f5c27e5ceca: port 7(vethaa50a1c) entered disabled state
    Sep 13 17:49:32 Unraid kernel: veth76f98a4: renamed from eth1
    Sep 13 17:49:32 Unraid kernel: vethaaaafab: renamed from eth1
    Sep 13 17:49:32 Unraid kernel: br-1b3b1767229f: port 4(vethf8a69ef) entered disabled state
    Sep 13 17:49:33 Unraid kernel: br-1b3b1767229f: port 4(vethf8a69ef) entered disabled state
    Sep 13 17:49:33 Unraid kernel: device vethf8a69ef left promiscuous mode
    Sep 13 17:49:33 Unraid kernel: br-1b3b1767229f: port 4(vethf8a69ef) entered disabled state
    Sep 13 17:49:33 Unraid kernel: br-3f5c27e5ceca: port 7(vethaa50a1c) entered disabled state
    Sep 13 17:49:33 Unraid kernel: device vethaa50a1c left promiscuous mode
    Sep 13 17:49:33 Unraid kernel: br-3f5c27e5ceca: port 7(vethaa50a1c) entered disabled state
    Sep 13 17:49:33 Unraid kernel: br-3f5c27e5ceca: port 2(veth5202ead) entered disabled state
    Sep 13 17:49:33 Unraid kernel: device veth5202ead left promiscuous mode
    Sep 13 17:49:33 Unraid kernel: br-3f5c27e5ceca: port 2(veth5202ead) entered disabled state
    Sep 13 17:49:33 Unraid kernel: br-3f5c27e5ceca: port 9(vethcb1dde7) entered disabled state
    Sep 13 17:49:33 Unraid kernel: device vethcb1dde7 left promiscuous mode
    Sep 13 17:49:33 Unraid kernel: br-3f5c27e5ceca: port 9(vethcb1dde7) entered disabled state
    Sep 13 17:49:33 Unraid kernel: veth7b5aca8: renamed from eth0
    Sep 13 17:49:34 Unraid kernel: vethb9bbc69: renamed from eth0
    Sep 13 17:49:34 Unraid kernel: veth5b6752a: renamed from eth0
    Sep 13 17:49:34 Unraid kernel: veth36c48ec: renamed from eth0
    Sep 13 17:49:34 Unraid kernel: veth64b4423: renamed from eth0
    Sep 13 17:49:34 Unraid kernel: veth978c1b2: renamed from eth0
    Sep 13 17:49:34 Unraid kernel: veth360a765: renamed from eth0
    Sep 13 17:49:34 Unraid kernel: veth9d06992: renamed from eth0
    Sep 13 17:49:34 Unraid kernel: veth9e5a030: renamed from eth0
    Sep 13 17:49:34 Unraid kernel: veth5153ab4: renamed from eth0
    Sep 13 17:49:35 Unraid kernel: veth2897b1c: renamed from eth0
    Sep 13 17:49:37 Unraid kernel: veth5e7be6e: renamed from eth0
    Sep 13 17:49:40 Unraid kernel: vethaafb507: renamed from eth0
    Sep 13 17:49:40 Unraid kernel: veth95ca8ee: renamed from eth0
    Sep 13 17:49:40 Unraid kernel: veth86f1e3d: renamed from eth0
    Sep 13 17:49:40 Unraid kernel: br-96c50f23b528: port 1(vethe206911) entered disabled state
    Sep 13 17:49:40 Unraid kernel: veth4b47089: renamed from eth0
    Sep 13 17:49:40 Unraid kernel: veth5e4ab54: renamed from eth0
    Sep 13 17:49:40 Unraid kernel: br-96c50f23b528: port 1(vethe206911) entered disabled state
    Sep 13 17:49:40 Unraid kernel: device vethe206911 left promiscuous mode
    Sep 13 17:49:40 Unraid kernel: br-96c50f23b528: port 1(vethe206911) entered disabled state
    Sep 13 17:49:40 Unraid kernel: veth9e58280: renamed from eth0
    Sep 13 17:49:40 Unraid kernel: br-1b3b1767229f: port 1(veth5d2671c) entered disabled state
    Sep 13 17:49:40 Unraid kernel: br-1b3b1767229f: port 1(veth5d2671c) entered disabled state
    Sep 13 17:49:40 Unraid kernel: device veth5d2671c left promiscuous mode
    Sep 13 17:49:40 Unraid kernel: br-1b3b1767229f: port 1(veth5d2671c) entered disabled state
    Sep 13 17:49:40 Unraid kernel: veth392206b: renamed from eth1
    Sep 13 17:49:40 Unraid kernel: br-33965e1a718e: port 3(veth4c13fa6) entered disabled state
    Sep 13 17:49:41 Unraid kernel: br-33965e1a718e: port 3(veth4c13fa6) entered disabled state
    Sep 13 17:49:41 Unraid kernel: device veth4c13fa6 left promiscuous mode
    Sep 13 17:49:41 Unraid kernel: br-33965e1a718e: port 3(veth4c13fa6) entered disabled state
    Sep 13 17:49:41 Unraid kernel: vethbb7b1be: renamed from eth2
    Sep 13 17:49:41 Unraid kernel: br-98e0e83abfb0: port 2(veth61ab720) entered disabled state
    Sep 13 17:49:41 Unraid kernel: br-1b3b1767229f: port 3(veth5cfbbde) entered disabled state
    Sep 13 17:49:41 Unraid kernel: vetha625cdb: renamed from eth1
    Sep 13 17:49:41 Unraid kernel: br-98e0e83abfb0: port 2(veth61ab720) entered disabled state
    Sep 13 17:49:41 Unraid kernel: device veth61ab720 left promiscuous mode
    Sep 13 17:49:41 Unraid kernel: br-98e0e83abfb0: port 2(veth61ab720) entered disabled state
    Sep 13 17:49:41 Unraid kernel: br-1b3b1767229f: port 3(veth5cfbbde) entered disabled state
    Sep 13 17:49:41 Unraid kernel: device veth5cfbbde left promiscuous mode
    Sep 13 17:49:41 Unraid kernel: br-1b3b1767229f: port 3(veth5cfbbde) entered disabled state
    Sep 13 17:49:41 Unraid kernel: veth049bcbc: renamed from eth3
    Sep 13 17:49:41 Unraid kernel: br-96c50f23b528: port 2(veth192da1c) entered disabled state
    Sep 13 17:49:41 Unraid kernel: br-96c50f23b528: port 2(veth192da1c) entered disabled state
    Sep 13 17:49:41 Unraid kernel: device veth192da1c left promiscuous mode
    Sep 13 17:49:41 Unraid kernel: br-96c50f23b528: port 2(veth192da1c) entered disabled state
    Sep 13 17:49:41 Unraid root: stopping dockerd ...
    Sep 13 17:49:42 Unraid emhttpd: shcmd (232750): umount /var/lib/docker
    Sep 13 17:49:43 Unraid kernel: XFS (loop2): Unmounting Filesystem
    Sep 13 17:50:27 Unraid ool www[18498]: /usr/local/emhttp/plugins/dynamix/scripts/emcmd 'cmdStatus=Apply'
    Sep 13 17:50:27 Unraid emhttpd: Starting services...
    Sep 13 17:50:27 Unraid emhttpd: shcmd (232886): /etc/rc.d/rc.samba restart
    Sep 13 17:50:27 Unraid winbindd[14088]: [2023/09/13 17:50:27.191589,  0] ../../source3/winbindd/winbindd_dual.c:1950(winbindd_sig_term_handler)
    Sep 13 17:50:27 Unraid winbindd[14087]: [2023/09/13 17:50:27.191593,  0] ../../source3/winbindd/winbindd_dual.c:1950(winbindd_sig_term_handler)
    Sep 13 17:50:27 Unraid winbindd[14088]:   Got sig[15] terminate (is_parent=0)
    Sep 13 17:50:27 Unraid winbindd[14087]:   Got sig[15] terminate (is_parent=1)
    Sep 13 17:50:27 Unraid winbindd[18010]: [2023/09/13 17:50:27.192030,  0] ../../source3/winbindd/winbindd_dual.c:1950(winbindd_sig_term_handler)
    Sep 13 17:50:27 Unraid winbindd[18010]:   Got sig[15] terminate (is_parent=0)
    Sep 13 17:50:29 Unraid root: Starting Samba:  /usr/sbin/smbd -D
    Sep 13 17:50:29 Unraid smbd[18705]: [2023/09/13 17:50:29.363840,  0] ../../source3/smbd/server.c:1741(main)
    Sep 13 17:50:29 Unraid smbd[18705]:   smbd version 4.17.10 started.
    Sep 13 17:50:29 Unraid smbd[18705]:   Copyright Andrew Tridgell and the Samba Team 1992-2022
    Sep 13 17:50:29 Unraid root:                  /usr/sbin/winbindd -D
    Sep 13 17:50:29 Unraid winbindd[18707]: [2023/09/13 17:50:29.385071,  0] ../../source3/winbindd/winbindd.c:1440(main)
    Sep 13 17:50:29 Unraid winbindd[18707]:   winbindd version 4.17.10 started.
    Sep 13 17:50:29 Unraid winbindd[18707]:   Copyright Andrew Tridgell and the Samba Team 1992-2022
    Sep 13 17:50:29 Unraid winbindd[18712]: [2023/09/13 17:50:29.390296,  0] ../../source3/winbindd/winbindd_cache.c:3117(initialize_winbindd_cache)
    Sep 13 17:50:29 Unraid winbindd[18712]:   initialize_winbindd_cache: clearing cache and re-creating with version number 2
    Sep 13 17:50:29 Unraid emhttpd: shcmd (232890): /etc/rc.d/rc.avahidaemon restart
    Sep 13 17:50:29 Unraid root: Stopping Avahi mDNS/DNS-SD Daemon: stopped
    Sep 13 17:50:29 Unraid avahi-daemon[14140]: Got SIGTERM, quitting.
    Sep 13 17:50:29 Unraid avahi-dnsconfd[14153]: read(): EOF
    Sep 13 17:50:29 Unraid avahi-daemon[14140]: Leaving mDNS multicast group on interface eth0.IPv4 with address 10.10.10.5.
    Sep 13 17:50:29 Unraid avahi-daemon[14140]: avahi-daemon 0.8 exiting.
    Sep 13 17:50:29 Unraid root: Starting Avahi mDNS/DNS-SD Daemon: /usr/sbin/avahi-daemon -D
    Sep 13 17:50:29 Unraid avahi-daemon[18751]: Found user 'avahi' (UID 61) and group 'avahi' (GID 214).
    Sep 13 17:50:29 Unraid avahi-daemon[18751]: Successfully dropped root privileges.
    Sep 13 17:50:29 Unraid avahi-daemon[18751]: avahi-daemon 0.8 starting up.
    Sep 13 17:50:29 Unraid avahi-daemon[18751]: Successfully called chroot().
    Sep 13 17:50:29 Unraid avahi-daemon[18751]: Successfully dropped remaining capabilities.
    Sep 13 17:50:29 Unraid avahi-daemon[18751]: Loading service file /services/sftp-ssh.service.
    Sep 13 17:50:29 Unraid avahi-daemon[18751]: Loading service file /services/smb.service.
    Sep 13 17:50:29 Unraid avahi-daemon[18751]: Loading service file /services/ssh.service.
    Sep 13 17:50:29 Unraid avahi-daemon[18751]: Joining mDNS multicast group on interface eth0.IPv4 with address 10.10.10.5.
    Sep 13 17:50:29 Unraid avahi-daemon[18751]: New relevant interface eth0.IPv4 for mDNS.
    Sep 13 17:50:29 Unraid avahi-daemon[18751]: Network interface enumeration completed.
    Sep 13 17:50:29 Unraid avahi-daemon[18751]: Registering new address record for 10.10.10.5 on eth0.IPv4.
    Sep 13 17:50:29 Unraid emhttpd: shcmd (232891): /etc/rc.d/rc.avahidnsconfd restart
    Sep 13 17:50:29 Unraid root: Stopping Avahi mDNS/DNS-SD DNS Server Configuration Daemon: stopped
    Sep 13 17:50:29 Unraid root: Starting Avahi mDNS/DNS-SD DNS Server Configuration Daemon:  /usr/sbin/avahi-dnsconfd -D
    Sep 13 17:50:29 Unraid avahi-dnsconfd[18760]: Successfully connected to Avahi daemon.
    Sep 13 17:50:29 Unraid emhttpd: shcmd (232903): /usr/local/sbin/mount_image '/mnt/user/Docker/docker-xfs.img' /var/lib/docker 40
    Sep 13 17:50:29 Unraid kernel: loop2: detected capacity change from 0 to 83886080
    Sep 13 17:50:29 Unraid kernel: XFS (loop2): Mounting V5 Filesystem
    Sep 13 17:50:29 Unraid kernel: XFS (loop2): Ending clean mount
    Sep 13 17:50:29 Unraid root: meta-data=/dev/loop2             isize=512    agcount=4, agsize=2621440 blks
    Sep 13 17:50:29 Unraid root:          =                       sectsz=512   attr=2, projid32bit=1
    Sep 13 17:50:29 Unraid root:          =                       crc=1        finobt=1, sparse=1, rmapbt=0
    Sep 13 17:50:29 Unraid root:          =                       reflink=1    bigtime=1 inobtcount=1 nrext64=0
    Sep 13 17:50:29 Unraid root: data     =                       bsize=4096   blocks=10485760, imaxpct=25
    Sep 13 17:50:29 Unraid root:          =                       sunit=0      swidth=0 blks
    Sep 13 17:50:29 Unraid root: naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
    Sep 13 17:50:29 Unraid root: log      =internal log           bsize=4096   blocks=16384, version=2
    Sep 13 17:50:29 Unraid root:          =                       sectsz=512   sunit=0 blks, lazy-count=1
    Sep 13 17:50:29 Unraid root: realtime =none                   extsz=4096   blocks=0, rtextents=0
    Sep 13 17:50:29 Unraid emhttpd: shcmd (232905): /etc/rc.d/rc.docker start
    Sep 13 17:50:29 Unraid root: starting dockerd ...
    Sep 13 17:50:30 Unraid avahi-daemon[18751]: Server startup complete. Host name is Unraid.local. Local service cookie is 1354496256.
    Sep 13 17:50:31 Unraid avahi-daemon[18751]: Service "Unraid" (/services/ssh.service) successfully established.
    Sep 13 17:50:31 Unraid avahi-daemon[18751]: Service "Unraid" (/services/smb.service) successfully established.
    Sep 13 17:50:31 Unraid avahi-daemon[18751]: Service "Unraid" (/services/sftp-ssh.service) successfully established.
    Sep 13 17:50:32 Unraid rc.docker: created network macvlan eth0 with subnets: 10.10.10.0/24; 
    Sep 13 17:50:32 Unraid rc.docker: connecting Frigate to network eth0
    Sep 13 17:50:32 Unraid rc.docker: connecting Mosquitto to network eth0
    Sep 13 17:50:32 Unraid rc.docker: connecting Zigbee2MQTT to network eth0
    Sep 13 17:50:32 Unraid rc.docker: connecting HomeAssistant to network eth0
    Sep 13 17:50:32 Unraid rc.docker: prepared network vhost0 for host access
    Sep 13 17:50:32 Unraid rc.docker: created network macvlan eth1 with subnets: 10.10.40.0/24; 
    Sep 13 17:50:32 Unraid rc.docker: connecting KMSserver to network eth1
    Sep 13 17:50:32 Unraid rc.docker: connecting Unpackerr to network eth1
    Sep 13 17:50:32 Unraid rc.docker: connecting MariaDBHA to network eth1
    Sep 13 17:50:32 Unraid rc.docker: connecting Firefox to network eth1
    Sep 13 17:50:32 Unraid rc.docker: connecting Sonarr to network eth1
    Sep 13 17:50:32 Unraid rc.docker: connecting Dozzle to network eth1
    Sep 13 17:50:32 Unraid rc.docker: connecting Code-server to network eth1
    Sep 13 17:50:32 Unraid rc.docker: connecting Duplicati to network eth1
    Sep 13 17:50:32 Unraid rc.docker: connecting UptimeKuma to network eth1
    Sep 13 17:50:32 Unraid rc.docker: connecting FlareSolverr to network eth1
    Sep 13 17:50:32 Unraid rc.docker: connecting Radarr to network eth1
    Sep 13 17:50:32 Unraid rc.docker: connecting Tautulli to network eth1
    Sep 13 17:50:32 Unraid rc.docker: connecting AdGuardHome to network eth1
    Sep 13 17:50:32 Unraid rc.docker: connecting Jackett to network eth1
    Sep 13 17:50:32 Unraid rc.docker: connecting Scrutiny to network eth1
    Sep 13 17:50:32 Unraid rc.docker: created network macvlan eth2 with subnets: 10.10.50.0/24; 
    Sep 13 17:50:32 Unraid rc.docker: connecting qbittorrent to network eth2
    Sep 13 17:50:32 Unraid rc.docker: connecting AdGuardHomeDMZ to network eth2
    Sep 13 17:50:32 Unraid rc.docker: connecting Plex to network eth2
    Sep 13 17:50:33 Unraid rc.docker: created network macvlan eth2.60 with subnets: 10.10.60.0/24; 
    Sep 13 17:50:33 Unraid kernel: eth0: renamed from veth22f2ca7
    Sep 13 17:50:33 Unraid kernel: 8021q: adding VLAN 0 to HW filter on device eth0
    Sep 13 17:50:33 Unraid rc.docker: AdGuardHome: started succesfully!
    Sep 13 17:50:33 Unraid kernel: eth0: renamed from vethd35f440
    Sep 13 17:50:33 Unraid kernel: 8021q: adding VLAN 0 to HW filter on device eth0
    Sep 13 17:50:34 Unraid rc.docker: AdGuardHomeDMZ: started succesfully!
    Sep 13 17:50:34 Unraid kernel: eth0: renamed from veth8247be8
    Sep 13 17:50:34 Unraid kernel: 8021q: adding VLAN 0 to HW filter on device eth0
    Sep 13 17:50:34 Unraid rc.docker: MariaDBHA: wait 15 seconds
    Sep 13 17:50:34 Unraid rc.docker: MariaDBHA: started succesfully!
    Sep 13 17:50:49 Unraid kernel: eth0: renamed from vethe1031f2
    Sep 13 17:50:49 Unraid kernel: 8021q: adding VLAN 0 to HW filter on device eth0
    Sep 13 17:50:49 Unraid rc.docker: HomeAssistant: started succesfully!
    Sep 13 17:50:50 Unraid kernel: eth0: renamed from veth9abd61c
    Sep 13 17:50:50 Unraid kernel: 8021q: adding VLAN 0 to HW filter on device eth0
    Sep 13 17:50:50 Unraid rc.docker: Mosquitto: started succesfully!
    Sep 13 17:50:50 Unraid kernel: eth0: renamed from veth2c84adf
    Sep 13 17:50:50 Unraid kernel: 8021q: adding VLAN 0 to HW filter on device eth0
    Sep 13 17:50:50 Unraid rc.docker: Zigbee2MQTT: started succesfully!
    Sep 13 17:50:51 Unraid kernel: eth0: renamed from veth9f8ab68
    Sep 13 17:50:51 Unraid kernel: 8021q: adding VLAN 0 to HW filter on device eth0
    Sep 13 17:50:53 Unraid rc.docker: Frigate: started succesfully!
    Sep 13 17:50:53 Unraid kernel: eth0: renamed from veth075a479
    Sep 13 17:50:53 Unraid kernel: 8021q: adding VLAN 0 to HW filter on device eth0
    Sep 13 17:50:53 Unraid rc.docker: Tautulli: started succesfully!
    Sep 13 17:50:54 Unraid kernel: eth0: renamed from veth37d6726
    Sep 13 17:50:54 Unraid kernel: 8021q: adding VLAN 0 to HW filter on device eth0
    Sep 13 17:50:54 Unraid rc.docker: Jackett: started succesfully!
    Sep 13 17:50:54 Unraid kernel: eth0: renamed from veth5fd4845
    Sep 13 17:50:54 Unraid kernel: 8021q: adding VLAN 0 to HW filter on device eth0
    Sep 13 17:50:54 Unraid rc.docker: Radarr: started succesfully!
    Sep 13 17:50:55 Unraid kernel: eth0: renamed from vethd27c04c
    Sep 13 17:50:55 Unraid kernel: 8021q: adding VLAN 0 to HW filter on device eth0
    Sep 13 17:50:55 Unraid rc.docker: Sonarr: started succesfully!
    Sep 13 17:50:55 Unraid kernel: eth0: renamed from veth46bcd56
    Sep 13 17:50:55 Unraid kernel: 8021q: adding VLAN 0 to HW filter on device eth0
    Sep 13 17:50:55 Unraid rc.docker: Unpackerr: started succesfully!
    Sep 13 17:50:56 Unraid kernel: eth0: renamed from veth89b5b39
    Sep 13 17:50:56 Unraid kernel: 8021q: adding VLAN 0 to HW filter on device eth0
    Sep 13 17:50:56 Unraid rc.docker: FlareSolverr: started succesfully!
    Sep 13 17:50:56 Unraid kernel: eth0: renamed from veth336e2a4
    Sep 13 17:50:56 Unraid kernel: 8021q: adding VLAN 0 to HW filter on device eth0
    Sep 13 17:50:56 Unraid rc.docker: Duplicati: started succesfully!
    Sep 13 17:50:57 Unraid kernel: eth0: renamed from veth8e20be4
    Sep 13 17:50:57 Unraid kernel: 8021q: adding VLAN 0 to HW filter on device eth0
    Sep 13 17:50:57 Unraid rc.docker: Dozzle: started succesfully!
    Sep 13 17:50:57 Unraid rc.docker: Dozzle: wait 30 seconds
    Sep 13 17:51:01 Unraid emhttpd: read SMART /dev/sde
    Sep 13 17:51:17 Unraid root: ttyd -R -o -i '/var/tmp/compose_manager_action.sock' '/usr/local/emhttp/plugins/compose.manager/scripts/compose.sh' '-cup' '-pwatchtower' '-d/mnt/user/Docker/docker-compose/Watchtower' '-f/boot/config/plugins/compose.manager/projects/Watchtower/docker-compose.override.yml' '--debug' > /dev/null &
    Sep 13 17:51:17 Unraid root: '/usr/local/emhttp/plugins/compose.manager/scripts/compose.sh' '-cup' '-pwatchtower' '-d/mnt/user/Docker/docker-compose/Watchtower' '-f/boot/config/plugins/compose.manager/projects/Watchtower/docker-compose.override.yml' '--debug'
    Sep 13 17:51:17 Unraid root: /plugins/compose.manager/php/show_ttyd.php
    Sep 13 17:51:19 Unraid root: docker compose  -f '/mnt/user/Docker/docker-compose/Watchtower/docker-compose.yml' -f '/boot/config/plugins/compose.manager/projects/Watchtower/docker-compose.override.yml' -p watchtower up -d
    Sep 13 17:51:19 Unraid kernel: eth0: renamed from vethcd6fb17
    Sep 13 17:51:19 Unraid kernel: 8021q: adding VLAN 0 to HW filter on device eth0
    Sep 13 17:51:27 Unraid kernel: eth0: renamed from vethb168e81
    Sep 13 17:51:27 Unraid kernel: 8021q: adding VLAN 0 to HW filter on device eth0
    Sep 13 17:51:27 Unraid rc.docker: UptimeKuma: started succesfully!
    Sep 13 17:51:28 Unraid kernel: eth0: renamed from vethdd81f4c
    Sep 13 17:51:28 Unraid kernel: 8021q: adding VLAN 0 to HW filter on device eth0
    Sep 13 17:51:28 Unraid rc.docker: Code-server: started succesfully!
    Sep 13 17:51:28 Unraid kernel: eth0: renamed from vethde06897
    Sep 13 17:51:28 Unraid kernel: 8021q: adding VLAN 0 to HW filter on device eth0
    Sep 13 17:51:28 Unraid rc.docker: KMSserver: started succesfully!
    Sep 13 17:51:29 Unraid kernel: eth0: renamed from veth6245715
    Sep 13 17:51:29 Unraid kernel: 8021q: adding VLAN 0 to HW filter on device eth0
    Sep 13 17:51:30 Unraid root: ttyd -R -o -i '/var/tmp/compose_manager_action.sock' '/usr/local/emhttp/plugins/compose.manager/scripts/compose.sh' '-cdown' '-pwatchtower' '-d/mnt/user/Docker/docker-compose/Watchtower' '-f/boot/config/plugins/compose.manager/projects/Watchtower/docker-compose.override.yml' '--debug' > /dev/null &
    Sep 13 17:51:30 Unraid root: '/usr/local/emhttp/plugins/compose.manager/scripts/compose.sh' '-cdown' '-pwatchtower' '-d/mnt/user/Docker/docker-compose/Watchtower' '-f/boot/config/plugins/compose.manager/projects/Watchtower/docker-compose.override.yml' '--debug'
    Sep 13 17:51:30 Unraid root: /plugins/compose.manager/php/show_ttyd.php
    Sep 13 17:51:31 Unraid rc.docker: Plex: started succesfully!
    Sep 13 17:51:31 Unraid kernel: eth0: renamed from veth7f28c4d
    Sep 13 17:51:31 Unraid kernel: 8021q: adding VLAN 0 to HW filter on device eth0
    Sep 13 17:51:31 Unraid root: docker compose  -f '/mnt/user/Docker/docker-compose/Watchtower/docker-compose.yml' -f '/boot/config/plugins/compose.manager/projects/Watchtower/docker-compose.override.yml' -p watchtower down
    Sep 13 17:51:31 Unraid rc.docker: qbittorrent: started succesfully!
    Sep 13 17:51:31 Unraid kernel: vethcd6fb17: renamed from eth0
    Sep 13 17:51:37 Unraid root: ttyd -R -o -i '/var/tmp/compose_manager_action.sock' '/usr/local/emhttp/plugins/compose.manager/scripts/compose.sh' '-cup' '-pwatchtower' '-d/mnt/user/Docker/docker-compose/Watchtower' '-f/boot/config/plugins/compose.manager/projects/Watchtower/docker-compose.override.yml' '--debug' > /dev/null &
    Sep 13 17:51:37 Unraid root: '/usr/local/emhttp/plugins/compose.manager/scripts/compose.sh' '-cup' '-pwatchtower' '-d/mnt/user/Docker/docker-compose/Watchtower' '-f/boot/config/plugins/compose.manager/projects/Watchtower/docker-compose.override.yml' '--debug'
    Sep 13 17:51:37 Unraid root: /plugins/compose.manager/php/show_ttyd.php
    Sep 13 17:51:38 Unraid root: docker compose  -f '/mnt/user/Docker/docker-compose/Watchtower/docker-compose.yml' -f '/boot/config/plugins/compose.manager/projects/Watchtower/docker-compose.override.yml' -p watchtower up -d
    Sep 13 17:51:39 Unraid kernel: eth0: renamed from vethf015c6b
    Sep 13 17:51:39 Unraid kernel: 8021q: adding VLAN 0 to HW filter on device eth0
    Sep 13 17:52:25 Unraid root: ttyd -R -o -i '/var/tmp/compose_manager_action.sock' '/usr/local/emhttp/plugins/compose.manager/scripts/compose.sh' '-cup' '-pgraylog' '-d/mnt/user/Docker/docker-compose/Graylog' '-f/boot/config/plugins/compose.manager/projects/Graylog/docker-compose.override.yml' '--debug' > /dev/null &
    Sep 13 17:52:25 Unraid root: '/usr/local/emhttp/plugins/compose.manager/scripts/compose.sh' '-cup' '-pgraylog' '-d/mnt/user/Docker/docker-compose/Graylog' '-f/boot/config/plugins/compose.manager/projects/Graylog/docker-compose.override.yml' '--debug'
    Sep 13 17:52:25 Unraid root: /plugins/compose.manager/php/show_ttyd.php
    Sep 13 17:52:27 Unraid root: docker compose  -f '/mnt/user/Docker/docker-compose/Graylog/docker-compose.yml' -f '/boot/config/plugins/compose.manager/projects/Graylog/docker-compose.override.yml' -p graylog up -d
    Sep 13 17:52:27 Unraid kernel: br-1b3b1767229f: port 1(vethe4b3c32) entered blocking state
    Sep 13 17:52:27 Unraid kernel: br-1b3b1767229f: port 1(vethe4b3c32) entered disabled state
    Sep 13 17:52:27 Unraid kernel: device vethe4b3c32 entered promiscuous mode
    Sep 13 17:52:27 Unraid kernel: br-1b3b1767229f: port 2(vethf8b05df) entered blocking state
    Sep 13 17:52:27 Unraid kernel: br-1b3b1767229f: port 2(vethf8b05df) entered disabled state
    Sep 13 17:52:27 Unraid kernel: device vethf8b05df entered promiscuous mode
    Sep 13 17:52:27 Unraid kernel: br-1b3b1767229f: port 2(vethf8b05df) entered blocking state
    Sep 13 17:52:27 Unraid kernel: br-1b3b1767229f: port 2(vethf8b05df) entered forwarding state
    Sep 13 17:52:27 Unraid kernel: br-1b3b1767229f: port 2(vethf8b05df) entered disabled state
    Sep 13 17:52:28 Unraid kernel: eth0: renamed from veth49f02e6
    Sep 13 17:52:28 Unraid kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethe4b3c32: link becomes ready
    Sep 13 17:52:28 Unraid kernel: br-1b3b1767229f: port 1(vethe4b3c32) entered blocking state
    Sep 13 17:52:28 Unraid kernel: br-1b3b1767229f: port 1(vethe4b3c32) entered forwarding state
    Sep 13 17:52:28 Unraid kernel: eth0: renamed from veth1d5db67
    Sep 13 17:52:28 Unraid kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethf8b05df: link becomes ready
    Sep 13 17:52:28 Unraid kernel: br-1b3b1767229f: port 2(vethf8b05df) entered blocking state
    Sep 13 17:52:28 Unraid kernel: br-1b3b1767229f: port 2(vethf8b05df) entered forwarding state
    Sep 13 17:52:28 Unraid kernel: br-1b3b1767229f: port 3(vethdcee40c) entered blocking state
    Sep 13 17:52:28 Unraid kernel: br-1b3b1767229f: port 3(vethdcee40c) entered disabled state
    Sep 13 17:52:28 Unraid kernel: device vethdcee40c entered promiscuous mode
    Sep 13 17:52:28 Unraid kernel: br-1b3b1767229f: port 3(vethdcee40c) entered blocking state
    Sep 13 17:52:28 Unraid kernel: br-1b3b1767229f: port 3(vethdcee40c) entered forwarding state
    Sep 13 17:52:28 Unraid kernel: eth0: renamed from veth6aa6af0
    Sep 13 17:52:28 Unraid kernel: 8021q: adding VLAN 0 to HW filter on device eth0
    Sep 13 17:52:28 Unraid kernel: eth1: renamed from veth79aefcf
    Sep 13 17:52:28 Unraid kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethdcee40c: link becomes ready
    Sep 13 17:52:28 Unraid kernel: br-1b3b1767229f: port 4(vethe4fbe7f) entered blocking state
    Sep 13 17:52:28 Unraid kernel: br-1b3b1767229f: port 4(vethe4fbe7f) entered disabled state
    Sep 13 17:52:28 Unraid kernel: device vethe4fbe7f entered promiscuous mode
    Sep 13 17:52:28 Unraid kernel: br-1b3b1767229f: port 4(vethe4fbe7f) entered blocking state
    Sep 13 17:52:28 Unraid kernel: br-1b3b1767229f: port 4(vethe4fbe7f) entered forwarding state
    Sep 13 17:52:28 Unraid kernel: br-1b3b1767229f: port 4(vethe4fbe7f) entered disabled state
    Sep 13 17:52:29 Unraid kernel: eth0: renamed from vethfe4c684
    Sep 13 17:52:29 Unraid kernel: 8021q: adding VLAN 0 to HW filter on device eth0
    Sep 13 17:52:29 Unraid kernel: eth1: renamed from veth22f3a7a
    Sep 13 17:52:29 Unraid kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethe4fbe7f: link becomes ready
    Sep 13 17:52:29 Unraid kernel: br-1b3b1767229f: port 4(vethe4fbe7f) entered blocking state
    Sep 13 17:52:29 Unraid kernel: br-1b3b1767229f: port 4(vethe4fbe7f) entered forwarding state

     

[screenshot]

  11. 19 minutes ago, Jase said:

@ich777 Hello! Long time no speak. I hope you are well, and thank you so much for all your wonderful efforts for the Unraid community.

     

    I'm chiming into this thread to see if there is a way to downgrade to Nvidia driver 525.60.13 as this is the only version that works with the current version of Plex on Linux/Unraid. Please have a look at this thread over here. My comments start at the bottom '2 months later' (Jase) with some logs.

    https://forums.plex.tv/t/nvidia-hardware-acceleration-inconsistently-working-with-web-streaming/828463/149

     

    The issue is that transcoding does not work with browsers when playing content from Plex. There are no problems however with Hardware client devices like the Apple TV, iOS etc. Any ideas on how to downgrade would be greatly appreciated.

     

    Thank you!

     

     

That and the info you reported sound similar to what I posted.

I have the impression that when you restart or shut down the server, the docker compose stacks are not stopped cleanly, the way they are when you manually click "compose down". Sometimes, even with auto start off (it happens with it on as well), some containers from a compose stack are launched anyway, as if the "restart: always" parameter kicks in because the stack was never cleanly stopped.

     

I think that if Unraid ran the same command the "compose down" button executes, it would work fine.

     

In my case I can only launch a compose stack properly if I do it manually: first stop it, then start it. Everything else fails.

Also, a configurable delay for the auto start, if it worked, could be useful.
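In the meantime, this is a minimal sketch of a workaround I'm considering: a script that prints one `docker compose down` per project, to be run from a stop hook such as `/boot/config/stop` (the hook path and the compose.manager projects directory are assumptions from my setup; review the output and pipe it to `sh` to actually run it):

```shell
#!/bin/bash
# Dry-run sketch: emit a "docker compose ... down" command for every
# compose.manager project directory, mirroring the "Compose Down" button.
# Compose v2 can usually operate on a running project by name alone;
# add -f <file> arguments if your setup needs them.
emit_down_commands() {
    local projects_dir="$1"   # e.g. /boot/config/plugins/compose.manager/projects (assumed)
    local dir name
    for dir in "$projects_dir"/*/; do
        [ -d "$dir" ] || continue            # skip if the glob matched nothing
        name=$(basename "$dir")
        echo "docker compose -p '$name' down --timeout 30"
    done
}

# Usage from the stop hook: review first, then pipe to sh:
#   emit_down_commands /boot/config/plugins/compose.manager/projects | sh
emit_down_commands "${1:-/boot/config/plugins/compose.manager/projects}"
```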

Same issue; I think this is the first time I've seen it. I'm on 6.12.3 and monitor my logs with Graylog.

     

    Jul 20 01:00:28 Unraid kernel: traps: IPMI[sel][17163] general protection fault ip:7feafda266a5 sp:7feafd4747e8 error:0 in ld-musl-x86_64.so.1[7feafda15000+4c000]

     

    Jul 20 02:01:25 Unraid kernel: traps: IPMI[sel][29259] general protection fault ip:7f7b7f3716a5 sp:7f7b7edbf7d8 error:0 in ld-musl-x86_64.so.1[7f7b7f360000+4c000]

     

@Adriano Frare I think it would be a good idea if you uploaded your diagnostics file.

    unraid-diagnostics-20230720-1906.zip

I got these errors, in case they mean something:

     

    Jul 18 20:20:28 Unraid kernel: NVRM: GPU at PCI:0000:04:00: GPU-f1c0f52c-e491-64c7-428c-e10038734368
    Jul 18 20:20:28 Unraid kernel: NVRM: GPU Board Serial Number: 1425320036134
    Jul 18 20:20:28 Unraid kernel: NVRM: Xid (PCI:0000:04:00): 13, pid='<unknown>', name=<unknown>, Graphics SM Warp Exception on (GPC 1, TPC 1): Stack Error
    Jul 18 20:20:28 Unraid kernel: NVRM: Xid (PCI:0000:04:00): 13, pid='<unknown>', name=<unknown>, Graphics Exception: ESR 0x50ce48=0x170001 0x50ce50=0x0 0x50ce44=0xd3eff2 0x50ce4c=0x17f

  15. @alturismo @ich777

    I have good news and bad news... xD

The "good" news is that I can transcode full HD content with the GPU and it works, although Plex might become a little unresponsive.

Even 5 to 7 minutes after I stop the transcoding, GPU and/or CPU usage stays high for no reason.

[screenshot]

     

[screenshot]

     

With 4K content I get the error reported before... it's very strange. I'm now using 6.12.3.

I will try a clean Plex install or revert to a previous version of the driver.

     

    I know you can't help me much more but any ideas are welcome

I got another macvlan call trace error:

     

    Jul 17 18:03:04 Unraid kernel: ------------[ cut here ]------------
    Jul 17 18:03:04 Unraid kernel: WARNING: CPU: 9 PID: 245 at net/netfilter/nf_conntrack_core.c:1210 __nf_conntrack_confirm+0xa4/0x2b0 [nf_conntrack]
    Jul 17 18:03:04 Unraid kernel: Modules linked in: nvidia_uvm(PO) xt_nat xt_CHECKSUM ipt_REJECT nf_reject_ipv4 xt_tcpudp ip6table_mangle ip6table_nat iptable_mangle vhost_net tun vhost vhost_iotlb tap macvlan xt_conntrack xt_MASQUERADE nf_conntrack_netlink nfnetlink xfrm_user iptable_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 xt_addrtype br_netfilter xfs md_mod tcp_diag inet_diag ip6table_filter ip6_tables iptable_filter ip_tables x_tables bridge 8021q garp mrp stp llc ixgbe xfrm_algo mdio igb i2c_algo_bit nvidia_drm(PO) nvidia_modeset(PO) zfs(PO) zunicode(PO) zzstd(O) zlua(O) zavl(PO) icp(PO) edac_mce_amd intel_rapl_msr edac_core intel_rapl_common iosf_mbi zcommon(PO) znvpair(PO) spl(O) kvm_amd nvidia(PO) kvm video drm_kms_helper crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel sha512_ssse3 aesni_intel crypto_simd cryptd wmi_bmof mxm_wmi asus_wmi_sensors drm rapl k10temp i2c_piix4 nvme ccp backlight nvme_core i2c_core ahci syscopyarea cdc_acm sysfillrect sysimgblt libahci
    Jul 17 18:03:04 Unraid kernel: fb_sys_fops tpm_crb tpm_tis tpm_tis_core tpm wmi button acpi_cpufreq unix [last unloaded: xfrm_algo]
    Jul 17 18:03:04 Unraid kernel: CPU: 9 PID: 245 Comm: kworker/u64:5 Tainted: P           O       6.1.38-Unraid #2
    Jul 17 18:03:04 Unraid kernel: Hardware name: ASUS System Product Name/ROG CROSSHAIR VII HERO, BIOS 4603 09/13/2021
    Jul 17 18:03:04 Unraid kernel: Workqueue: events_unbound macvlan_process_broadcast [macvlan]
    Jul 17 18:03:04 Unraid kernel: RIP: 0010:__nf_conntrack_confirm+0xa4/0x2b0 [nf_conntrack]
    Jul 17 18:03:04 Unraid kernel: Code: 44 24 10 e8 e2 e1 ff ff 8b 7c 24 04 89 ea 89 c6 89 04 24 e8 7e e6 ff ff 84 c0 75 a2 48 89 df e8 9b e2 ff ff 85 c0 89 c5 74 18 <0f> 0b 8b 34 24 8b 7c 24 04 e8 18 dd ff ff e8 93 e3 ff ff e9 72 01
    Jul 17 18:03:04 Unraid kernel: RSP: 0018:ffffc90000438d98 EFLAGS: 00010202
    Jul 17 18:03:04 Unraid kernel: RAX: 0000000000000001 RBX: ffff8884be48b300 RCX: a343d541328389c7
    Jul 17 18:03:04 Unraid kernel: RDX: 0000000000000000 RSI: 0000000000000001 RDI: ffff8884be48b300
    Jul 17 18:03:04 Unraid kernel: RBP: 0000000000000001 R08: f738cf72635c1332 R09: a602caa3a0dd9a76
    Jul 17 18:03:04 Unraid kernel: R10: 11d3e2b4abc2d99c R11: ffffc90000438d60 R12: ffffffff82a11d00
    Jul 17 18:03:04 Unraid kernel: R13: 0000000000034284 R14: ffff8881086d1a00 R15: 0000000000000000
    Jul 17 18:03:04 Unraid kernel: FS:  0000000000000000(0000) GS:ffff888ffea40000(0000) knlGS:0000000000000000
    Jul 17 18:03:04 Unraid kernel: CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
    Jul 17 18:03:04 Unraid kernel: CR2: 000000c0004f9000 CR3: 0000000174692000 CR4: 0000000000350ee0
    Jul 17 18:03:04 Unraid kernel: Call Trace:
    Jul 17 18:03:04 Unraid kernel: <IRQ>

     

I have 3 physical NICs for Docker; br1 and br2 are exclusive to Docker.

br0 is shared with the Unraid OS. Could this be a problem too? Can I not even share a NIC with the Unraid OS?
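For reference, here is a quick helper to see which of the Docker networks actually use the macvlan driver (the function name is mine; the `--format` template string is the standard one for `docker network ls`):

```shell
#!/bin/bash
# Print the names of docker networks that use the macvlan driver.
# Feed it "name driver" pairs, e.g.:
#   docker network ls --format '{{.Name}} {{.Driver}}' | list_macvlan_networks
list_macvlan_networks() {
    awk '$2 == "macvlan" { print $1 }'
}
```

Any network it lists is one that would be affected by switching the Docker custom network type from macvlan to ipvlan.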

     

  17. 1 hour ago, alturismo said:

is not relevant while you use a tmpfs ramdisk, which i assume you are using as transcoding path

     

I removed the path and also tried this path, which is on an SSD, but the same error appears.

     

[screenshot]

     

Anyway, if I do CPU transcoding it works with /tmp and the ramdisk.

I still have the same problem.

     

If I manually stop the compose stacks before a reboot, I get a clean start afterwards.

If I don't, and let Unraid handle the reboot on its own, after the restart not a single compose stack starts properly and I have to manually stop and start each one again.

     

In addition, I'm getting these errors with Watchtower, which I only use to update compose stacks:

[screenshot]
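Since Watchtower should only touch the compose stacks, one thing I'm considering is scoping it with its label-enable mode, so it ignores everything that doesn't opt in. A sketch (service names are placeholders, the env var and label are Watchtower's standard ones):

```yaml
# Watchtower's own compose file: only watch containers that opt in via label
services:
  watchtower:
    image: containrrr/watchtower
    environment:
      - WATCHTOWER_LABEL_ENABLE=true

# ...and in each compose stack that SHOULD be auto-updated:
#   services:
#     someservice:            # placeholder name
#       labels:
#         - "com.centurylinklabs.watchtower.enable=true"
```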

  19. 20 hours ago, alturismo said:

    your logs looks a little weird while plex is trying to use vaapi (intel /amd) before nvenc ...

     

    may try to delete the codecs dir from plex and restart the docker, also may try another browser to test playback (or a native client and force transcoding like mentioned from @ich777)

     

[screenshot]

     

    also may try without your ramdisk as transcoding path for testing, i know some DTS streams needs some insane high ramdisk free space (whyever)

I have 64 GB of RAM, with only 20 GB in use.

I stopped Plex, deleted the Codecs folder, started Plex again, and still have the same issue.

  20. 21 hours ago, ich777 said:

Wait, are you using the WebClient from Plex to transcode a movie? Can you try to use a native app like for Android or iOS and see if it is working there if you force a transcode?

The Plex WebClient is notorious

     

    What happens when you stop the transcode? Can you post a screenshot from nvidia-smi if nothing is using the GPU please?

     

    This is nvidia-smi without transcoding

[screenshot]

     

With Android it doesn't work either: I can direct stream, but if I try to transcode I get an endless black screen with no errors or messages in Android.

     

    This is with plex trying to transcode.

    Nvidia smi while transcoding

[screenshot]

     

After I get the error, Plex becomes unstable:

     

[screenshot]

     

     

[screenshot]

  21. 22 hours ago, ich777 said:

    This call trace is not caused by the Nvidia Driver plugin, please switch from MACVLAN to IPVLAN in your Docker settings and reboot your server.

     

    Can you post the transcoding logs too?

     

    Do you have another PC where you can test the card to just make sure that it is working as expected?

I don't have another PC to try it in, but I did remove the card and plug it back in (didn't work).

Then I cleaned Plex's logs folder, started Plex, reproduced the issue, and here are the logs along with screenshots of nvidia-smi while trying to transcode.

     

[screenshot]

     

    Maybe the card died but apparently it's working...

    Logs.zip

I have rebooted the server several times but nothing changes.

     

I have been reproducing the error with Plex.

     

This is what I get in the Unraid log:

     

    Jul 10 20:44:49 Unraid kernel: WARNING: CPU: 14 PID: 0 at net/netfilter/nf_conntrack_core.c:1210 __nf_conntrack_confirm+0xa4/0x2b0 [nf_conntrack]
    Jul 10 20:44:49 Unraid kernel: Modules linked in: veth wireguard curve25519_x86_64 libcurve25519_generic libchacha20poly1305 chacha_x86_64 poly1305_x86_64 ip6_udp_tunnel udp_tunnel libchacha af_packet nvidia_uvm(PO) xt_nat macvlan xt_CHECKSUM ipt_REJECT nf_reject_ipv4 xt_tcpudp ip6table_mangle ip6table_nat iptable_mangle vhost_net tun vhost vhost_iotlb tap xt_conntrack xt_MASQUERADE nf_conntrack_netlink nfnetlink xfrm_user iptable_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 xt_addrtype br_netfilter xfs md_mod tcp_diag inet_diag ip6table_filter ip6_tables iptable_filter ip_tables x_tables efivarfs bridge 8021q garp mrp stp llc ixgbe xfrm_algo mdio igb i2c_algo_bit nvidia_drm(PO) nvidia_modeset(PO) zfs(PO) zunicode(PO) zzstd(O) zlua(O) zavl(PO) icp(PO) nvidia(PO) edac_mce_amd zcommon(PO) edac_core znvpair(PO) spl(O) kvm_amd video drm_kms_helper kvm drm crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel sha512_ssse3 backlight aesni_intel crypto_simd syscopyarea tpm_crb cryptd wmi_bmof
    Jul 10 20:44:49 Unraid kernel: mxm_wmi asus_wmi_sensors tpm_tis sysfillrect i2c_piix4 k10temp nvme rapl tpm_tis_core input_leds ccp ahci sysimgblt led_class cdc_acm nvme_core i2c_core libahci fb_sys_fops tpm wmi button acpi_cpufreq unix [last unloaded: xfrm_algo]
    Jul 10 20:44:49 Unraid kernel: CPU: 14 PID: 0 Comm: swapper/14 Tainted: P           O       6.1.36-Unraid #1
    Jul 10 20:44:49 Unraid kernel: Hardware name: ASUS System Product Name/ROG CROSSHAIR VII HERO, BIOS 4603 09/13/2021
    Jul 10 20:44:49 Unraid kernel: RIP: 0010:__nf_conntrack_confirm+0xa4/0x2b0 [nf_conntrack]
    Jul 10 20:44:49 Unraid kernel: Code: 44 24 10 e8 e2 e1 ff ff 8b 7c 24 04 89 ea 89 c6 89 04 24 e8 7e e6 ff ff 84 c0 75 a2 48 89 df e8 9b e2 ff ff 85 c0 89 c5 74 18 <0f> 0b 8b 34 24 8b 7c 24 04 e8 18 dd ff ff e8 93 e3 ff ff e9 72 01
    Jul 10 20:44:49 Unraid kernel: RSP: 0018:ffffc900004c8838 EFLAGS: 00010202
    Jul 10 20:44:49 Unraid kernel: RAX: 0000000000000001 RBX: ffff8885c2e81f00 RCX: 7aecd0b99ace0591
    Jul 10 20:44:49 Unraid kernel: RDX: 0000000000000000 RSI: 0000000000000001 RDI: ffff8885c2e81f00
    Jul 10 20:44:49 Unraid kernel: RBP: 0000000000000001 R08: fed2146f5781fd9e R09: d403ee2a01cdc41c
    Jul 10 20:44:49 Unraid kernel: R10: 13c56616bc33d4cc R11: ffffc900004c8800 R12: ffffffff82a11440
    Jul 10 20:44:49 Unraid kernel: R13: 00000000000254b3 R14: ffff88892d6dbe00 R15: 0000000000000000
    Jul 10 20:44:49 Unraid kernel: FS:  0000000000000000(0000) GS:ffff888ffeb80000(0000) knlGS:0000000000000000
    Jul 10 20:44:49 Unraid kernel: CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
    Jul 10 20:44:49 Unraid kernel: CR2: 000000c000107010 CR3: 00000001c7cee000 CR4: 0000000000350ee0
    Jul 10 20:44:49 Unraid kernel: Call Trace:
    Jul 10 20:44:49 Unraid kernel: <IRQ>
    Jul 10 20:44:49 Unraid kernel: ? __warn+0xab/0x122
    Jul 10 20:44:49 Unraid kernel: ? report_bug+0x109/0x17e
    Jul 10 20:44:49 Unraid kernel: ? __nf_conntrack_confirm+0xa4/0x2b0 [nf_conntrack]
    Jul 10 20:44:49 Unraid kernel: ? handle_bug+0x41/0x6f
    Jul 10 20:44:49 Unraid kernel: ? exc_invalid_op+0x13/0x60
    Jul 10 20:44:49 Unraid kernel: ? asm_exc_invalid_op+0x16/0x20
    Jul 10 20:44:49 Unraid kernel: ? __nf_conntrack_confirm+0xa4/0x2b0 [nf_conntrack]
    Jul 10 20:44:49 Unraid kernel: ? __nf_conntrack_confirm+0x9e/0x2b0 [nf_conntrack]
    Jul 10 20:44:49 Unraid kernel: ? nf_nat_inet_fn+0xc0/0x1a8 [nf_nat]
    Jul 10 20:44:49 Unraid kernel: nf_conntrack_confirm+0x25/0x54 [nf_conntrack]
    Jul 10 20:44:49 Unraid kernel: nf_hook_slow+0x3d/0x96
    Jul 10 20:44:49 Unraid kernel: ? ip_protocol_deliver_rcu+0x164/0x164
    Jul 10 20:44:49 Unraid kernel: NF_HOOK.constprop.0+0x79/0xd9
    Jul 10 20:44:49 Unraid kernel: ? ip_protocol_deliver_rcu+0x164/0x164
    Jul 10 20:44:49 Unraid kernel: ip_sabotage_in+0x52/0x60 [br_netfilter]
    Jul 10 20:44:49 Unraid kernel: nf_hook_slow+0x3d/0x96
    Jul 10 20:44:49 Unraid kernel: ? ip_rcv_finish_core.constprop.0+0x3e8/0x3e8
    Jul 10 20:44:49 Unraid kernel: NF_HOOK.constprop.0+0x79/0xd9
    Jul 10 20:44:49 Unraid kernel: ? ip_rcv_finish_core.constprop.0+0x3e8/0x3e8
    Jul 10 20:44:49 Unraid kernel: __netif_receive_skb_one_core+0x77/0x9c
    Jul 10 20:44:49 Unraid kernel: netif_receive_skb+0xbf/0x127
    Jul 10 20:44:49 Unraid kernel: br_handle_frame_finish+0x438/0x472 [bridge]
    Jul 10 20:44:49 Unraid kernel: ? br_pass_frame_up+0xdd/0xdd [bridge]
    Jul 10 20:44:49 Unraid kernel: br_nf_hook_thresh+0xe5/0x109 [br_netfilter]
    Jul 10 20:44:49 Unraid kernel: ? br_pass_frame_up+0xdd/0xdd [bridge]
    Jul 10 20:44:49 Unraid kernel: br_nf_pre_routing_finish+0x2c1/0x2ec [br_netfilter]
    Jul 10 20:44:49 Unraid kernel: ? br_pass_frame_up+0xdd/0xdd [bridge]
    Jul 10 20:44:49 Unraid kernel: ? NF_HOOK.isra.0+0xe4/0x140 [br_netfilter]
    Jul 10 20:44:49 Unraid kernel: ? br_nf_hook_thresh+0x109/0x109 [br_netfilter]
    Jul 10 20:44:49 Unraid kernel: br_nf_pre_routing+0x236/0x24a [br_netfilter]
    Jul 10 20:44:49 Unraid kernel: ? br_nf_hook_thresh+0x109/0x109 [br_netfilter]
    Jul 10 20:44:49 Unraid kernel: br_handle_frame+0x27a/0x2e0 [bridge]
    Jul 10 20:44:49 Unraid kernel: ? br_pass_frame_up+0xdd/0xdd [bridge]
    Jul 10 20:44:49 Unraid kernel: __netif_receive_skb_core.constprop.0+0x4fd/0x6e9
    Jul 10 20:44:49 Unraid kernel: __netif_receive_skb_list_core+0x8a/0x11e
    Jul 10 20:44:49 Unraid kernel: netif_receive_skb_list_internal+0x1d2/0x20b
    Jul 10 20:44:49 Unraid kernel: gro_normal_list+0x1d/0x3f
    Jul 10 20:44:49 Unraid kernel: napi_complete_done+0x7b/0x11a
    Jul 10 20:44:49 Unraid kernel: igb_poll+0xd88/0xf8e [igb]
    Jul 10 20:44:49 Unraid kernel: ? run_cmd+0x13/0x51
    Jul 10 20:44:49 Unraid kernel: ? update_overutilized_status+0x33/0x6e
    Jul 10 20:44:49 Unraid kernel: ? hrtick_update+0x17/0x4f
    Jul 10 20:44:49 Unraid kernel: __napi_poll.constprop.0+0x2b/0x124
    Jul 10 20:44:49 Unraid kernel: net_rx_action+0x159/0x24f
    Jul 10 20:44:49 Unraid kernel: __do_softirq+0x129/0x288
    Jul 10 20:44:49 Unraid kernel: __irq_exit_rcu+0x5e/0xb8
    Jul 10 20:44:49 Unraid kernel: common_interrupt+0x9b/0xc1
    Jul 10 20:44:49 Unraid kernel: </IRQ>
    Jul 10 20:44:49 Unraid kernel: <TASK>
    Jul 10 20:44:49 Unraid kernel: asm_common_interrupt+0x22/0x40
    Jul 10 20:44:49 Unraid kernel: RIP: 0010:cpuidle_enter_state+0x11d/0x202
    Jul 10 20:44:49 Unraid kernel: Code: 16 37 a0 ff 45 84 ff 74 1b 9c 58 0f 1f 40 00 0f ba e0 09 73 08 0f 0b fa 0f 1f 44 00 00 31 ff e8 24 f6 a4 ff fb 0f 1f 44 00 00 <45> 85 e4 0f 88 ba 00 00 00 48 8b 04 24 49 63 cc 48 6b d1 68 49 29
    Jul 10 20:44:49 Unraid kernel: RSP: 0018:ffffc900001c7e98 EFLAGS: 00000246
    Jul 10 20:44:49 Unraid kernel: RAX: ffff888ffeb80000 RBX: ffff888108c8cc00 RCX: 0000000000000000
    Jul 10 20:44:49 Unraid kernel: RDX: 0000096113d8cad6 RSI: ffffffff820909fc RDI: ffffffff82090f05
    Jul 10 20:44:49 Unraid kernel: RBP: 0000000000000002 R08: 0000000000000002 R09: 0000000000000002
    Jul 10 20:44:49 Unraid kernel: R10: 0000000000000020 R11: 0000000000004bc6 R12: 0000000000000002
    Jul 10 20:44:49 Unraid kernel: R13: ffffffff823235a0 R14: 0000096113d8cad6 R15: 0000000000000000
    Jul 10 20:44:49 Unraid kernel: ? cpuidle_enter_state+0xf7/0x202
    Jul 10 20:44:49 Unraid kernel: cpuidle_enter+0x2a/0x38
    Jul 10 20:44:49 Unraid kernel: do_idle+0x18d/0x1fb
    Jul 10 20:44:49 Unraid kernel: cpu_startup_entry+0x1d/0x1f
    Jul 10 20:44:49 Unraid kernel: start_secondary+0xeb/0xeb
    Jul 10 20:44:49 Unraid kernel: secondary_startup_64_no_verify+0xce/0xdb
    Jul 10 20:44:49 Unraid kernel: </TASK>
    Jul 10 20:44:49 Unraid kernel: ---[ end trace 0000000000000000 ]---
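For reference, these warnings repeat in the syslog. A quick way to pull out just the netfilter-related lines from a saved copy (a rough sketch; `./syslog.txt` is a hypothetical local export of the Unraid syslog, not a path from my setup) is:

```shell
# Filter a saved syslog copy for the conntrack/bridge-netfilter lines seen in the trace,
# then count the distinct lines so the most frequent offenders float to the top.
# ./syslog.txt is a hypothetical local copy exported via Tools > Syslog or scp.
grep -E 'nf_conntrack|br_netfilter|macvlan' syslog.txt | sort | uniq -c | sort -rn
```

That makes it easier to see whether every trace goes through the same `br_netfilter`/`nf_conntrack` path, which is the pattern usually associated with macvlan call traces on Unraid.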

     

    These are the Plex app logs from reproducing the issue; the error below is the popup I get every time I try to transcode.

     

    [two attached screenshots: Plex transcode error popup]

     

     

    Regarding your comments about Frigate: the network is fine, because everything works as soon as I disable GPU decoding. Even while GPU decoding is enabled, I can still access the cameras with other tools and they work. Anyway, I'm going to make the changes you proposed to see if anything improves, considering that Plex is affected as well; I have the same problem with Plex even when Frigate is stopped...

     

    Thanks for your help. Maybe it's something in my config... but I don't even know where to start troubleshooting it, and the logs don't say much.

    All I know is that it only happens when a container tries to use the GPU for something.
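
    One basic check I can still run is verifying that the host actually exposes a GPU render node before blaming any container (a rough sketch; this assumes an Intel/AMD GPU exposed via /dev/dri, an NVIDIA card would need nvidia-smi instead):

```shell
# Sanity check: does the host expose a GPU render node at all?
# /dev/dri is where Intel/AMD render devices appear; if it's missing,
# no container can be given hardware decoding in the first place.
if ls /dev/dri/render* >/dev/null 2>&1; then
    echo "render node present:"
    ls -l /dev/dri
else
    echo "no /dev/dri render node on this host"
fi
```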