L0rdRaiden

Posts posted by L0rdRaiden

  1. 39 minutes ago, ich777 said:

    This is already solved with Plugin version 2023.07.06

     

    Can you please be a bit more specific? Where do you get errors and where do you get crashes? What does crash exactly?

     

    Do you have a way to test the card other than in your server? I see nothing obvious in your syslogs why it shouldn't work.

    Have you yet tried to reboot your system?

     

    For Plex there is nothing in the logs, just the error in the screenshot above when I try to use transcoding.

     

    [migrations] started
    [migrations] no migrations found
    usermod: no changes
    ───────────────────────────────────────
    
          ██╗     ███████╗██╗ ██████╗ 
          ██║     ██╔════╝██║██╔═══██╗
          ██║     ███████╗██║██║   ██║
          ██║     ╚════██║██║██║   ██║
          ███████╗███████║██║╚██████╔╝
          ╚══════╝╚══════╝╚═╝ ╚═════╝ 
    
       Brought to you by linuxserver.io
    ───────────────────────────────────────
    
    To support LSIO projects visit:
    https://www.linuxserver.io/donate/
    
    ───────────────────────────────────────
    GID/UID
    ───────────────────────────────────────
    
    User UID:    99
    User GID:    100
    ───────────────────────────────────────
    
    **** Server already claimed ****
    **** permissions for /dev/dri/renderD128 are good ****
    **** permissions for /dev/dri/card0 are good ****
    Docker is used for versioning skip update check
    [custom-init] No custom files found, skipping...
    Starting Plex Media Server. . . (you can ignore the libusb_init error)
    [ls.io-init] done.
    Critical: libusb_init failed

     

    Plex docker run

     

    docker run
      -d
      --name='Plex'
      --net='br2'
      --ip='10.10.50.20'
      --cpuset-cpus='4,5,6,7,8,9,16,17,18,19,20,21'
      -e TZ="Europe/Paris"
      -e HOST_OS="Unraid"
      -e HOST_HOSTNAME="Unraid"
      -e HOST_CONTAINERNAME="Plex"
      -e 'VERSION'='docker'
      -e 'NVIDIA_VISIBLE_DEVICES'='GPU-f1c0f52c-e491-64c7-428c-e10038734368'
      -e 'NVIDIA_DRIVER_CAPABILITIES'='all'
      -e 'PUID'='99'
      -e 'PGID'='100'
      -e 'TCP_PORT_32400'='32400'
      -e 'TCP_PORT_3005'='3005'
      -e 'TCP_PORT_8324'='8324'
      -e 'TCP_PORT_32469'='32469'
      -e 'UDP_PORT_1900'='1900'
      -e 'UDP_PORT_32410'='32410'
      -e 'UDP_PORT_32412'='32412'
      -e 'UDP_PORT_32413'='32413'
      -e 'UDP_PORT_32414'='32414'
      -e '022'='022'
      -l net.unraid.docker.managed=dockerman
      -l net.unraid.docker.webui='http://[IP]:[PORT:32400]/web'
      -l net.unraid.docker.icon='https://raw.githubusercontent.com/linuxserver/docker-templates/master/linuxserver.io/img/plex-icon.png'
      -v '/mnt/user/Video/Películas/':'/media/Películas':'rw'
      -v '/mnt/user/Video/Movies/':'/media/Movies':'rw'
      -v '/mnt/user/Video/Series/':'/media/Series':'rw'
      -v '':'/movies':'rw'
      -v '':'/tv':'rw'
      -v '':'/music':'rw'
      -v '/mnt/user/Docker/Plex/':'/config':'rw'
      --dns=10.10.50.5
      --no-healthcheck
      --runtime=nvidia
      --mount type=tmpfs,destination=/tmp,tmpfs-size=4000000000 'lscr.io/linuxserver/plex'
    
    d52fd6937b48a59636659eacbee1624de26fe7ba3f718fff524eafdd4e205cba
    
    The command finished successfully!

     

    For Frigate I opened this bug when I thought the problem was with Frigate, but it is actually with the GPU; you can see the logs in the last 3 or 4 posts:

    https://github.com/blakeblackshear/frigate/issues/7051

     

    I don't know what else to do to troubleshoot this.
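
    (For reference, a basic sanity check would be to compare what the host and the container see — a sketch only; "Plex" is the container name from my docker run above, and it assumes the NVIDIA driver plugin exposes nvidia-smi on the host and that the nvidia runtime injects it into the container.)

    # on the Unraid host: confirm the driver still sees the card
    nvidia-smi

    # inside the running Plex container: confirm the GPU is passed through
    docker exec Plex nvidia-smi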

  2. Since I installed 6.12.2, NVIDIA stopped working in my containers like Plex or Frigate; when I try to use it I get errors and crashes.

     

    [error screenshots attached]

     

    Plugin version 2023.07.06.

     

    In Frigate I get ffmpeg errors from the H264 decoder when using the NVIDIA GPU.

    Error in Plex when transcoding:

    [screenshot of the Plex transcoding error]

     

     

    All this was working fine before. I haven't changed anything, neither in the docker run command nor in the app configs. Any idea what the problem could be?

     

    @SpencerJ could this be related to the issue you mention here?

    Although I'm using the latest version of everything.

     

    unraid-diagnostics-20230710-1727.zip

  3. I got these logs; any idea about the root cause, or how to do more troubleshooting?

     

     

    ===== START =====
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  25409]     0 25409       52        7    24576        0             0 s6-svscan
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  25389]     0 25389   180192     1006   118784        0             1 containerd-shim
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  25226]     0 25226   234860    19547  3559424        0             0 node
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  25088]     0 25088      536       15    40960        0             0 dumb-init
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  25067]     0 25067   180256      999   118784        0             1 containerd-shim
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  24676]     0 24676   179791     1679   106496        0             0 dozzle
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  24653]     0 24653   180192      890   114688        0             1 containerd-shim
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  24643]     0 24643   715402     9959   557056        0             0 mono-sgen
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  24471]   201 24471     5140      264    77824        0             0 netdata
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  24296]     0 24296    35484     4973   176128        0             0 mono
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  24217]   201 24217    62608    39133   581632        0             0 netdata
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  24180]     0 24180       47        1    24576        0             0 s6-ipcserverd
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  24172]     0 24172       53        5    24576        0             0 s6-supervise
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  24171]     0 24171       53        5    24576        0             0 s6-supervise
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  24170]     0 24170       53        4    24576        0             0 s6-supervise
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  24150]     0 24150   180192      812   114688        0             1 containerd-shim
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  24123]     0 24123       50        1    24576        0             0 s6-linux-init-s
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  24122]     0 24122       53        5    24576        0             0 s6-supervise
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  24069]    99 24069   100958    28553   622592        0             0 python3
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  23957]     0 23957       52        8    24576        0             0 s6-svscan
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  23931]     0 23931   180256     1036   118784        0             1 containerd-shim
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  23777]    99 23777     4665     3186    73728        0             0 python3
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  23775]     0 23775      114       10    36864        0             0 s6-ftrigrd
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  23774]     0 23774       56        7    36864        0             0 s6-svlisten1
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  23633]     0 23633       47        1    24576        0             0 s6-ipcserverd
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  23629]     0 23629       54        6    24576        0             0 s6-rc
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  23620]     0 23620       53        5    24576        0             0 s6-supervise
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  23619]     0 23619       53        6    24576        0             0 s6-supervise
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  23617]     0 23617       53        6    24576        0             0 s6-supervise
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  23608]     0 23608   180131      968   126976        0             0 unpackerr
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  23579]     0 23579   180256     1103   118784        0             1 containerd-shim
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  23563]     0 23563       50        1    24576        0             0 s6-linux-init-s
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  23561]     0 23561       53        6    24576        0             0 s6-supervise
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  23560]     0 23560      406       16    40960        0             0 rc.init
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  23554]    99 23554    24228    14171   237568        0             0 python3
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  23329]     0 23329       47        1    24576        0             0 s6-ipcserverd
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  23307]     0 23307       53        5    24576        0             0 s6-supervise
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  23306]     0 23306       53        6    24576        0             0 s6-supervise
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  23305]     0 23305       53        5    24576        0             0 s6-supervise
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  23281]     0 23281       52        6    24576        0             0 s6-svscan
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  23250]     0 23250   180192     1046   114688        0             1 containerd-shim
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  23235]     0 23235       50        1    24576        0             0 s6-linux-init-s
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  23233]     0 23233       53        5    24576        0             0 s6-supervise
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  22973]     0 22973       52        7    24576        0             0 s6-svscan
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  22953]     0 22953   180256     1067   114688        0             1 containerd-shim
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  22704]  1883 22704      956      186    40960        0             0 mosquitto
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  22684]     0 22684   180256     1031   118784        0             1 containerd-shim
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  20792]     0 20792   676449   241599  2670592        0             0 qemu-system-x86
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  20786]     0 20786     2340     1021    53248        0             0 swtpm
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  20771]     0 20771    19125     1319   126976        0             0 winbindd
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  20658]     0 20658    19042     1531   126976        0             0 winbindd
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  20653]     0 20653    18993     1409   126976        0             0 winbindd
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  20651]     0 20651    19534     1049   131072        0             0 cleanupd
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  20650]     0 20650    19534     1201   135168        0             0 smbd-notifyd
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  20647]     0 20647    19881     1669   139264        0             0 smbd
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  20552]     0 20552    37503     2450   233472        0             0 ffmpeg
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  20497]     0 20497  1802527  1640742 14467072        0             0 qemu-system-x86
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  20475]     0 20475   441940   100915  1548288        0             0 python3
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  20473]     0 20473       53        5    24576        0             0 s6-supervise
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  20440]     0 20440       47        1    24576        0             0 s6-ipcserverd
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  20432]     0 20432       53        6    24576        0             0 s6-supervise
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  20431]     0 20431       53        6    24576        0             0 s6-supervise
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  20422]     0 20422       50        1    24576        0             0 s6-linux-init-s
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  20420]     0 20420       53        6    24576        0             0 s6-supervise
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  20413]     0 20413    31705     1763   167936        0             0 ffmpeg
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  20411]     0 20411    31705     1753   167936        0             0 ffmpeg
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  20398]     0 20398  2421639    53602   741376        0             0 ffmpeg
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  20394]     0 20394   349098    21812   495616        0             0 frigate.capture
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  20391]     0 20391   360790    22700   495616        0             0 frigate.capture
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  20390]     0 20390   434827    24150   593920        0             0 frigate.process
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  20389]     0 20389    31705     1763   163840        0             0 ffmpeg
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  20388]     0 20388   432779    22059   569344        0             0 frigate.process
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  20385]     0 20385   333083    21109   532480        0             0 frigate.output
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  20383]     0 20383   350647    22117   495616        0             0 frigate.detecto
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  20382]     0 20382     3564     1319    65536        0             0 python3
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  20370]     0 20370   247659    19180   425984        0             0 frigate.logger
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  20231]     0 20231       52        7    24576        0             0 s6-svscan
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  20205]     0 20205   180256      907   114688        0             1 containerd-shim
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  20113]     0 20113     1857       84    49152        0             0 dnsmasq
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  20112]    99 20112     1890      500    49152        0             0 dnsmasq
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  19614]     0 19614   364802     2705   286720        0             0 libvirtd
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  19570]     0 19570     8099     1476    98304        0             0 virtlogd
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  19548]     0 19548     8082     1484    98304        0             0 virtlockd
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  17867]     0 17867   355200     2727   266240        0             0 scrutiny
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  16881]     0 16881   161947     1216   208896        0             0 nginx
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  16837]     0 16837   161947     1216   208896        0             0 nginx
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  16800]     0 16800   161947     1216   208896        0             0 nginx
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  16762]     0 16762   161947     1216   208896        0             0 nginx
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  16721]     0 16721   161947     1216   208896        0             0 nginx
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  16681]     0 16681   161947     1216   208896        0             0 nginx
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  16654]     0 16654   161947     1216   208896        0             0 nginx
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  16630]     0 16630   161947     1216   208896        0             0 nginx
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  16607]     0 16607   161947     1216   208896        0             0 nginx
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  16562]     0 16562   161947     1216   208896        0             0 nginx
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  16504]     0 16504   161947     1216   208896        0             0 nginx
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  16456]     0 16456   161947     1216   208896        0             0 nginx
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  16414]     0 16414   161947     1216   208896        0             0 nginx
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  16376]     0 16376   161947     1216   208896        0             0 nginx
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  16355]     0 16355   161947     1216   208896        0             0 nginx
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  16332]     0 16332   161947     1216   208896        0             0 nginx
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  16313]     0 16313   161947     1216   208896        0             0 nginx
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  16291]     0 16291   161947     1216   208896        0             0 nginx
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  16261]     0 16261   161947     1216   208896        0             0 nginx
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  16232]     0 16232   161947     1216   208896        0             0 nginx
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  16212]     0 16212   161947     1216   208896        0             0 nginx
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  16204]     0 16204   161947     1216   208896        0             0 nginx
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  16203]     0 16203   161947     1220   208896        0             0 nginx
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  16202]     0 16202   161947     1216   208896        0             0 nginx
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  16190]  1000 16190     5750     1500    81920        0             0 Xvfb
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  16112]     0 16112   145420     1818    94208        0             0 nginx
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  16104]     0 16104      973       70    49152        0             0 bash
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  16101]     0 16101   697818    24103   823296        0             0 python3
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  16095]     0 16095   181921     4465   143360        0             0 go2rtc
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  16091] 65534 16091       69        5    36864        0             0 s6-log
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  16090] 65534 16090       69        7    36864        0             0 s6-log
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  16087] 65534 16087       69        4    36864        0             0 s6-log
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  16041]     0 16041      131       18    28672        0             0 s6-fdholderd
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  16040]     0 16040       47        1    24576        0             0 s6-ipcserverd
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  16030]     0 16030       53        4    24576        0             0 s6-supervise
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  16029]     0 16029       53        5    24576        0             0 s6-supervise
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  16028]     0 16028       53        6    24576        0             0 s6-supervise
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  16027]     0 16027       53        4    24576        0             0 s6-supervise
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  16026]     0 16026       53        4    24576        0             0 s6-supervise
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  16025]     0 16025       53        5    24576        0             0 s6-supervise
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  16024]     0 16024       53        6    24576        0             0 s6-supervise
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  16023]     0 16023       53        5    28672        0             0 s6-supervise
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  16022]     0 16022       53        5    24576        0             0 s6-supervise
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  15984]    99 15984   166116    36237   585728        0             0 mariadbd
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  15807]     0 15807       50        1    24576        0             0 s6-linux-init-s
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  15806]     0 15806       53        6    24576        0             0 s6-supervise
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  15759]    99 15759   493526    43842   790528        0             0 mono
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  15712]     0 15712    79225    35284   647168        0             0 vector
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  15706]    99 15706      428       37    45056        0             0 mariadbd-safe
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  15700]     0 15700      571       60    36864        0             0 bash
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  15652]     0 15652   180256     2103   114688        0             1 containerd-shim
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  15561]   999 15561    54773      785   135168        0             0 postgres
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  15560]   999 15560    18345      580   114688        0             0 postgres
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  15558]   999 15558    54800      878   147456        0             0 postgres
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  15557]   999 15557    54666     1659   139264        0             0 postgres
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  15556]   999 15556    54666      939   147456        0             0 postgres
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  15555]   999 15555    54699     1007   155648        0             0 postgres
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  15513]     0 15513      210       15    36864        0             0 vlmcsd
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: CPU: 1 PID: 32346 Comm: node Tainted: P           O       6.1.36-Unraid #1
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  15479]     0 15479   180256     2777   114688        0             1 containerd-shim
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  15405]  1100 15405  5744396   381475  4644864        0             0 java
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  15349]     0 15349       47        1    24576        0             0 s6-ipcserverd
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  15346]     0 15346       47        1    24576        0             0 s6-ipcserverd
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  15332]     0 15332       53        5    24576        0             0 s6-supervise
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  15331]     0 15331       53        6    24576        0             0 s6-supervise
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  15330]     0 15330       53        6    24576        0             0 s6-supervise
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  15329]     0 15329       53        5    24576        0             0 s6-supervise
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  15328]     0 15328       53        5    24576        0             0 s6-supervise
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  15327]     0 15327       53        5    24576        0             0 s6-supervise
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  15246]     0 15246  1911122   331930  4112384        0             0 python
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  15224]     0 15224   180256     2643   114688        0             1 containerd-shim
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  15205]     0 15205   223057    30886   532480        0             0 AdGuardHome
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  15086]    99 15086   651503    27495   634880        0             0 Radarr
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  15038]  1100 15038      696       23    45056        0             0 tini
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  14976]     0 14976   180192     1942   118784        0             1 containerd-shim
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  14909]     0 14909       50        1    24576        0             0 s6-linux-init-s
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  14907]     0 14907       53        4    24576        0             0 s6-supervise
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  14878]     0 14878      204        9    36864        0             0 tini
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  14850]     0 14850   180320     2004   122880        0             1 containerd-shim
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  14841]     0 14841       50        1    24576        0             0 s6-linux-init-s
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  14838]     0 14838       53        4    24576        0             0 s6-supervise
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  14618]   101 14618     2279      414    49152        0             0 nginx
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  14617]   101 14617     2279      414    49152        0             0 nginx
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  14616]   101 14616     2279      414    49152        0             0 nginx
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  14615]   101 14615     2279      414    49152        0             0 nginx
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  14614]   101 14614     2279      414    49152        0             0 nginx
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  14613]   101 14613     2279      414    49152        0             0 nginx
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  14612]   101 14612     2279      414    49152        0             0 nginx
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  14611]   101 14611     2279      414    49152        0             0 nginx
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  14610]   101 14610     2279      414    49152        0             0 nginx
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  14609]   101 14609     2279      414    49152        0             0 nginx
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  14608]   101 14608     2279      414    49152        0             0 nginx
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  14607]   101 14607     2279      414    49152        0             0 nginx
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  14606]   101 14606     2279      414    49152        0             0 nginx
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  14605]   101 14605     2279      414    49152        0             0 nginx
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  14604]   101 14604     2279      414    49152        0             0 nginx
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  14603]   101 14603     2279      414    49152        0             0 nginx
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  14602]   101 14602     2279      414    49152        0             0 nginx
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  14601]   101 14601     2279      431    49152        0             0 nginx
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  14600]   101 14600     2279      431    49152        0             0 nginx
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  14599]   101 14599     2279      431    49152        0             0 nginx
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  14598]   101 14598     2285      433    49152        0             0 nginx
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  14280]     0 14280       52        7    28672        0             0 s6-svscan
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  14085]   101 14085     2161      314    53248        0             0 nginx
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  13955]     0 13955      937       63    45056        0             0 cron
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  13938]     0 13938       52        1    24576        0             0 s6-supervise
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  13700]     0 13700       53        5    24576        0             0 s6-supervise
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  13619]     0 13619       53        5    24576        0             0 s6-supervise
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  13438]     0 13438       50        1    24576        0             0 s6-linux-init-s
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  13286]   999 13286    54666     3258   155648        0             0 postgres
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  12988]     0 12988       52        8    24576        0             0 s6-svscan
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  12849]     0 12849       52        7    24576        0             0 s6-svscan
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  12378]     0 12378   180256     2292   114688        0             1 containerd-shim
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  12137]  1000 12137     2892      202    65536        0             0 opensearch-dock
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  11393]  1000 11393      558       16    40960        0             0 dumb-init
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  10921]   999 10921   312530     3142   167936        0             0 go-cron
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  10557]   999 10557   612412    32020   700416        0             0 mongod
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [  10209]     0 10209      204       10    36864        0             0 tini
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [   9876]     0  9876      204       10    36864        0             0 tini
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [   7276]     0  7276   612083     9836   446464        0             0 containerd
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [   7180]    61  7180     1428      629    45056        0             0 avahi-daemon
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [   7038]     0  7038      995      756    45056        0             0 sh
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [   6929]     0  6929   172862     3575   139264        0             0 shfs
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [   6156]     0  6156    23181     2064   163840        0             0 php-fpm
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [   5976]     0  5976      652      235    40960        0             0 agetty
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [   5972]     0  5972      652      233    40960        0             0 agetty
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [   4178]     0  4178     1281      785    45056        0             0 atd
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [   1820]     0  1820      650       24    40960        0             0 acpid
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: [   1433]     0  1433    52847     1079    69632        0             0 rsyslogd
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: 0 pages cma reserved
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: Total swap = 0kB
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: Node 0 DMA: 0*4kB 0*8kB 0*16kB 0*32kB 0*64kB 0*128kB 0*256kB 0*512kB 1*1024kB (U) 1*2048kB (M) 3*4096kB (M) = 15360kB
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: Node 0 DMA32 free:122372kB boost:0kB min:6020kB low:8936kB high:11852kB reserved_highatomic:0KB active_anon:1900kB inactive_anon:2748980kB active_file:0kB inactive_file:372kB unevictable:0kB writepending:4kB present:3037444kB managed:2945280kB mlocked:0kB bounce:0kB free_pcp:248kB local_pcp:0kB free_cma:0kB
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: free:61397 free_pcp:62 free_cma:0
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: slab_reclaimable:52862 slab_unreclaimable:63682
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: Mem-Info:
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: RBP: 00007ffc5b195250 R08: 0000000000000000 R09: 00001e0be983fd41
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: Code: Unable to access opcode bytes at 0x55b3dd0c63e6.
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: do_user_addr_fault+0x36a/0x530
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: filemap_fault+0x317/0x52f
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: __alloc_pages+0x132/0x1e8
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: dump_header+0x4a/0x211
    2023-07-04T11:03:00.000+02:00 source=Unraid Unraid kernel: Hardware name: ASUS System Product Name/ROG CROSSHAIR VII HERO, BIOS 4603 09/13/2021
    2023-07-04T11:03:11.000+02:00 source=Unraid Unraid emhttpd: read SMART /dev/sde
    2023-07-04T11:18:12.000+02:00 source=Unraid Unraid emhttpd: spinning down /dev/sde
    2023-07-04T13:56:22.000+02:00 source=Unraid Unraid kernel: hrtimer: interrupt took 7515 ns
    2023-07-04T17:19:45.000+02:00 source=Unraid Unraid webGUI: Successful login user root from 10.10.10.21
    2023-07-04T17:20:58.000+02:00 source=Unraid Unraid emhttpd: read SMART /dev/sdd
    2023-07-04T17:20:58.000+02:00 source=Unraid Unraid emhttpd: read SMART /dev/sdf
    ===== END =====
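
    (For what it's worth, the dump above looks like kernel OOM-killer output — a process table followed by Mem-Info. A quick way to check for more of these events, assuming the default Unraid syslog location:)

    # list out-of-memory / OOM-killer events in the current syslog
    grep -iE "out of memory|oom-killer" /var/log/syslog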

     

    unraid-diagnostics-20230704-1730.zip

  4. On 6/22/2023 at 3:24 AM, primeval_god said:

    The composeman autostart runs a compose up on each stack. There should be no difference between the Compose Up button and autostart. Regardless, I would bet that your diagnosis of the overall problem is correct.

     

    My guess is that @Kilrah is correct: the issue is that the br1 network is recreated, so when compose tries to start the existing containers it fails, regardless of whether it is via the button or autostart. If I understand your posts above, you are doing compose down first and then compose up to restart the stack. This makes sense, as compose down followed by compose up removes all containers and then recreates them, whereas compose up alone would just start the existing containers.

     

    Therefore it is a problem with the plugin, because compose manager should do a clean stop of the compose stacks before a restart or shutdown, so that next time they will autostart without issues.

     

    I need brX for security reasons: I need the containers to be in different VLANs/networks and to set up different firewall rules between hosts. This wouldn't be possible if everything were behind one IP with a layer 3-4 firewall.

  5. 43 minutes ago, Kilrah said:

    It won't, the problem is that the brX networks are recreated from scratch on each boot. The stack references the network by ID, but when it's recreated it'll get a new random ID so the network referenced in the stack will never exist. Clicking Compose Up recreates the whole stack so it now references the new network.

     

    If you need macvlan for compose stacks you have to make your own network and enable "preserve custom networks" in docker settings. There should be a few posts in this thread about it.

     

     

    But br1 is a network created by Unraid:

    [screenshot attached]

     

    The internal ones are docker compose only. Do you mean that those networks are the problematic ones? In any case it might be something that can be solved on the plugin side; it doesn't make sense that the stack doesn't work when the plugin launches the compose file but does work if I launch it manually by pressing the button.
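
    (For reference, creating my own macvlan network as suggested would look roughly like this — a sketch only; the subnet, gateway, parent interface and network name are placeholders, not verified against my setup.)

    # create a user-defined macvlan network once, outside of compose
    docker network create -d macvlan \
      --subnet=10.10.40.0/24 \
      --gateway=10.10.40.1 \
      -o parent=eth1 \
      br1custom

    # then reference it in the stack:
    # networks:
    #   br1custom:
    #     external: true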

  6. This is how it looks after restarting the array:

     

    [screenshot attached]

     

    And if I try to start some of them:

     

    [error screenshots attached]

     

    I have to stop them and start them again for them to work.

     

    Maybe a delay won't fix it and the issue is something else, but in any case delay and ordering features would be nice.

  7. 41 minutes ago, Kilrah said:

    What network are they using?

     

    This configuration or similar (different names for the internal network):

     

    networks:
      br1:
        driver: macvlan
        external: true
      graylog-network:
        internal: true

     

    On a clean restart some of the compose stacks fail to start, so I have to stop them and start them again.

  8. Right now there is no clean way to monitor security on Unraid OS. I think this is critical, since many people are publishing Docker containers to the internet.

     

    It's not compatible with auditd, Wazuh, Elastic agents, or similar solutions.

     

    Right now even simple projects like CrowdSec are compatible with auditd, so people can easily implement some monitoring, or go more advanced and make use of Sigma rules with Wazuh, Security Onion, QRadar Community Edition, etc.

     

    So the request, at least, is official auditd support, which is the standard way to monitor a Linux OS; Wazuh support would be awesome as well.

     

    https://slackbuilds.org/repository/15.0/system/audit/

     

  9. Some of my compose containers fail to start at boot. I'm under the impression that this is because some network components or other parts of the OS are not ready that early. Is anyone else experiencing the same thing? Any way to fix it or add a delay? (See the sketch below.)

     

    I'm only using the autostart feature that comes with compose manager.

     

    Thanks in advance.
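
    (Sketch of a possible workaround, untested: wait for the docker network the stack needs before bringing it up, e.g. from a user script. The network and stack names below are placeholders.)

    #!/bin/bash
    # wait until the br1 docker network exists before starting the stack
    until docker network inspect br1 >/dev/null 2>&1; do
      sleep 5
    done
    cd /boot/config/plugins/compose.manager/projects/MyStack && docker compose up -d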

  10. Can we get support for auditd?

     

    https://unix.stackexchange.com/questions/502878/slackware-14-2-turn-on-the-auditd-daemon

    https://slackbuilds.org/repository/14.2/system/audit/ (a newer version would be better)

    https://slackbuilds.org/repository/15.0/system/audit/

    https://github.com/linux-audit

     

    I think it's crucial to be able to monitor Unraid security.

     

    Or the Wazuh agent? Not sure if it's compatible, but this is the only reference to Slackware I have found:

    https://github.com/wazuh/wazuh/blob/master/src/init/init.sh
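
    (In case anyone wants to try the audit SlackBuild manually, the usual SlackBuilds workflow would look roughly like this — a sketch only, untested on Unraid; filenames and paths are placeholders, and packages installed this way would not persist across reboots without extra steps.)

    # unpack the audit SlackBuild downloaded from the link above
    tar xzf audit.tar.gz && cd audit
    # fetch the source tarball listed in audit.info into this directory, then build as root
    ./audit.SlackBuild
    # install the resulting package (exact filename depends on the built version)
    installpkg /tmp/audit-*.tgz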

     

  11. Has anyone managed to install/enable auditd in Unraid?

     

    Is there any way to get security logs from Unraid/Slackware? auditd doesn't work, and neither does Wazuh... security is always forgotten in Unraid.

     

    @limetech can we get "official" auditd support in Unraid?

     

    Would these instructions work with the latest version of Unraid?

    https://unix.stackexchange.com/questions/502878/slackware-14-2-turn-on-the-auditd-daemon

    https://slackbuilds.org/repository/15.0/system/audit/

    https://github.com/linux-audit

  12. It's possible with the Docker version by mounting the Docker socket:

          - /var/run/docker.sock:/var/run/docker.sock

     

     

    server:
      http_listen_port: 9080
      grpc_listen_port: 0
    
    positions:
      filename: /positions/positions.yaml
    
    clients:
      - url: http://10.10.40.251:3100/loki/api/v1/push
    
    scrape_configs:
    - job_name: system
      static_configs:
      - targets:
          - localhost
        labels:
          job: varlogs
          __path__: /host/log/*log
    
    - job_name: docker
      # use docker.sock to filter containers
      docker_sd_configs:
        - host: unix:///var/run/docker.sock
          refresh_interval: 15s
          #filters:
          #  - name: label
          #    values: ["logging=promtail"]    # use container name to create a loki label
      relabel_configs:
        - source_labels: ['__meta_docker_container_name']
          regex: '/(.*)'
          target_label: 'container'
        - source_labels: ['__meta_docker_container_log_stream']
          target_label: 'logstream'
        - source_labels: ['__meta_docker_container_label_logging_jobname']
          target_label: 'job'

     

      promtail:
        # run as root, update to rootless mode later
        user: "0:0"
        container_name: Mon-Promtail
        image: grafana/promtail:main
        command: -config.file=/etc/promtail/docker-config.yaml
        depends_on:
          - loki
        restart: unless-stopped
        networks:
          mon-netsocketproxy:
          mon-netgrafana:
          br1:
            ipv4_address: 10.10.40.252
        dns: 10.10.50.5
        ports:
          - 9800:9800
        volumes:
          # logs for linux host only
          - /var/log:/host/log
          #- /var/lib/docker/containers:/var/lib/docker/containers:ro
          - /mnt/user/Docker/Monitoring/Promtail/promtail-config.yaml:/etc/promtail/docker-config.yaml
          - /mnt/user/Docker/Monitoring/Promtail/positions:/positions
          - /var/run/docker.sock:/var/run/docker.sock
        labels:
          - "com.centurylinklabs.watchtower.enable=true"

     

  13. On 5/21/2023 at 4:14 AM, evan326 said:
    plugin: installing: compose.manager.plg
    Executing hook script: pre_plugin_checks
    plugin: downloading: compose.manager.plg ... done
    
    Executing hook script: pre_plugin_checks
    
    +==============================================================================
    | Skipping package compose.manager-package-2023.04.27 (already installed)
    +==============================================================================
    
    
    ----------------------------------------------------
    Applying WebUI Patches...
    ----------------------------------------------------
    
    plugin: run failed: '/bin/bash' returned 2
    Executing hook script: post_plugin_checks

    Hi, I've been using this plugin for months and it's great. After updating to rc2 I'm having issues. Every few times I visit the docker page, the compose manager is gone. It seems sporadic; it also happens after most, but not all, reboots. I have to go to the app page; it isn't listed under installed apps. After reinstalling it, it works for a while. During installation I get these messages.

    I was going to try and uninstall and reinstall, but it doesn't ever say it's installed in the app page.

     

    I have the same problem: I upgraded to the RC, the plugin disappeared from the plugin list, I installed it again, it gives that error, and it "works" but appears as not installed everywhere.

  14. 9 hours ago, primeval_god said:

    I am not sure what you are asking here. Do you mean the path to the compose.manager project folder where the .yml file is? If so, it depends on whether or not you specified a non-default directory when creating the stack. If it is the default directory, it's on the boot drive, but I recommend against placing custom files there. Better to have all your app-specific files under your appdata folder and use absolute paths to resources in your compose file.
    If you are asking about some other path you will have to be more specific.

     

    I'm trying to use secrets with docker compose in Unraid.

     

    Right now I'm using this

     

    Compose file:

    ###############################################################
    # Nextcloud
    ###############################################################
    
    version: "3.9"
    
    # Services ####################################################
    
    services:
    
      db:
        image: postgres:alpine
        container_name: Nextcloud_Postgres
        restart: unless-stopped
        healthcheck:
          test: ["CMD-SHELL", "pg_isready -d $${POSTGRES_DB} -U $${POSTGRES_USER}"]
          interval: 10s
          timeout: 5s
          retries: 10
        networks:
          - nextcloud_network
        environment:
          - TZ
          - POSTGRES_PASSWORD
          - POSTGRES_USER
          - POSTGRES_DB
          #- POSTGRES_INITDB_ARGS
          #- POSTGRES_INITDB_WALDIR
          #- POSTGRES_HOST_AUTH_METHOD
        secrets:
          - POSTGRES_PASSWORD
          - POSTGRES_USER
          - POSTGRES_DB
        volumes:
          - /mnt/user/Docker/Nextcloud/postgres/data:/var/lib/postgresql/data:z
        labels:
          - "com.centurylinklabs.watchtower.enable=true"
    
      pgbackups:
        image: prodrigestivill/postgres-backup-local
        container_name: NextCloud_pgbackups  
        restart: unless-stopped
        user: postgres:postgres # Optional: see below
        networks:
          - nextcloud_network    
        volumes:
          - /mnt/user/Docker/Nextcloud/pgbackups:/backups
        links:
          - db
        depends_on:
          db:
            condition: service_healthy
        environment:
          - TZ
          - POSTGRES_HOST
          - POSTGRES_DB
          - POSTGRES_USER
          - POSTGRES_PASSWORD
         #- POSTGRES_PASSWORD_FILE=/run/secrets/db_password <-- alternative for POSTGRES_PASSWORD (to use with docker secrets)
          - POSTGRES_EXTRA_OPTS=-Z6
          - SCHEDULE=0 1 */3 * * #At 01:00 AM, every 3 days
          - BACKUP_KEEP_DAYS=6
         #- BACKUP_KEEP_WEEKS=4
         #- BACKUP_KEEP_MONTHS=6
          - HEALTHCHECK_PORT=5432
        secrets:
          - POSTGRES_DB
          - POSTGRES_USER
          - POSTGRES_PASSWORD
        labels:
          - "com.centurylinklabs.watchtower.enable=true"
    
      redis:
        image: redis:alpine
        container_name: NextCloud_Redis
        restart: unless-stopped
        command: redis-server --requirepass $REDIS_HOST_PASSWORD
        volumes:
          - /mnt/user/Docker/Nextcloud/redis:/data
        environment:
          - TZ
        networks:
          - nextcloud_network
        secrets:
          - REDIS_HOST_PASSWORD    
        labels:
          - "com.centurylinklabs.watchtower.enable=true"
    
      app:
        image: nextcloud:fpm-alpine
        container_name: Nextcloud
        restart: unless-stopped
        depends_on:
          db:
            condition: service_healthy
        networks:
          nextcloud_network:
          br1:
            ipv4_address: 10.10.40.161
        dns:
          - 10.10.50.5
        volumes:
          - /mnt/user/Docker/Nextcloud/nextcloud:/var/www/html:z
          - /mnt/user/Docker/Nextcloud/nextcloud/custom_apps:/var/www/html/custom_apps:z
          - /mnt/user/Docker/Nextcloud/nextcloud/config:/var/www/html/config:z
          - /mnt/user/Docker/Nextcloud/nextcloud/data:/var/www/html/data:z
          - /mnt/user/Personal/Nextcloud:/var/www/html/data
          - /mnt/user/Personal/Photos:/Albums
        environment:
          - TZ
          - POSTGRES_DB
          - POSTGRES_USER
          - POSTGRES_PASSWORD
          - POSTGRES_HOST
          - REDIS_HOST
          - REDIS_HOST_PASSWORD
        secrets:
          - POSTGRES_PASSWORD
          - POSTGRES_DB
          - POSTGRES_USER
          - REDIS_HOST_PASSWORD
        labels:
          - "com.centurylinklabs.watchtower.enable=true"
    
      web:
        build: ./web
        container_name: NextCloud_Nginx-fpm
        restart: unless-stopped
        networks:
          nextcloud_network:
          br1:
            ipv4_address: 10.10.40.160
        ports:
          - 8080:80
        dns:
          - 10.10.50.5
        volumes:
          - /mnt/user/Docker/Nextcloud/nextcloud:/var/www/html:z,ro
        environment:
          - TZ
        depends_on:
          - app
    
      cron:
        image: nextcloud:fpm-alpine
        container_name: NextCloud_Cron
        restart: unless-stopped
        depends_on:
          - db
          - redis
        networks:
          - nextcloud_network
        volumes:
          - /mnt/user/Docker/Nextcloud/nextcloud:/var/www/html:z
          - /mnt/user/Docker/Nextcloud/nextcloud/custom_apps:/var/www/html/custom_apps:z
          - /mnt/user/Docker/Nextcloud/nextcloud/config:/var/www/html/config:z
          - /mnt/user/Docker/Nextcloud/nextcloud/data:/var/www/html/data:z
          - /mnt/user/Personal/Nextcloud:/var/www/html/data:z
        environment:
          - TZ
        entrypoint: /cron.sh
        labels:
          - "com.centurylinklabs.watchtower.enable=true"
    
    # Networks ####################################################
    
    networks:
      br1:
        driver: macvlan
        external: true
      nextcloud_network:
        internal: true
    
    # Docker Secrets ##############################################
    
    secrets:
      # POSTGRES_PASSWORD
      POSTGRES_PASSWORD:
        file: $DOCKERDIR/secrets/POSTGRES_PASSWORD.txt
      # POSTGRES_USER
      POSTGRES_USER:
        file: $DOCKERDIR/secrets/POSTGRES_USER.txt
      # POSTGRES_DB
      POSTGRES_DB:
        file: $DOCKERDIR/secrets/POSTGRES_DB.txt
      # REDIS_HOST_PASSWORD
      REDIS_HOST_PASSWORD:
        file: $DOCKERDIR/secrets/REDIS_HOST_PASSWORD.txt

     

    With this env file:

     

     

    ###############################################################
    # Nextcloud
    ###############################################################
    
    DOCKERDIR=/boot/config/plugins/compose.manager/projects/Nextcloud
    TZ=Europe/Madrid
    PUID=99
    PGID=100
    
    # Redis
    
    #REDIS_HOST_PASSWORD=/run/secrets/REDIS_HOST_PASSWORD.txt
    REDIS_HOST_PASSWORD=password
    
    REDIS_HOST=redis
    
    # Postgres
    
    POSTGRES_PASSWORD=/run/secrets/POSTGRES_PASSWORD.txt
    POSTGRES_USER=/run/secrets/POSTGRES_USER.txt
    POSTGRES_DB=/run/secrets/POSTGRES_DB.txt
    
    POSTGRES_HOST=db

     

    But as you can see below, the secrets are not being loaded correctly:

    [screenshot attached]

     

    If I put the passwords in the env file it works; I was trying to learn how to use secrets.

     

    I have the feeling that Docker is not loading the secrets correctly from the path

    /boot/config/plugins/compose.manager/projects/Nextcloud/secrets

     

    Maybe it doesn't have access, or it's not the right path...

    I think I'm doing everything right, but I have spent a few hours reading and trying to fix it without success.

    https://docs.docker.com/compose/use-secrets/
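
    (For comparison, the pattern in those docs is to point the *_FILE variables at the mounted secret — compose mounts each secret at /run/secrets/<secret_name>, without the .txt extension — instead of putting the /run/secrets path into the plain variables via the env file. A minimal sketch using the same names as above:)

    services:
      db:
        image: postgres:alpine
        environment:
          # the postgres image reads the password from this file at startup
          - POSTGRES_PASSWORD_FILE=/run/secrets/POSTGRES_PASSWORD
        secrets:
          - POSTGRES_PASSWORD

    secrets:
      POSTGRES_PASSWORD:
        # resolved relative to the compose file, i.e. the stack's project folder
        file: ./secrets/POSTGRES_PASSWORD.txt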

  15. But there is a benefit to using secrets with files vs. including the secrets in environment variables, right? @neecapp

    https://gist.github.com/bvis/b78c1e0841cfd2437f03e20c1ee059fe

     

    I have tried to implement it as explained here (I am also using docker compose manager):

    https://github.com/brokenscripts/authentik_traefik

     

    But I'm not sure what the real path is where I have to store the secrets; in the example it is

    "/ssd/compose/secrets/authelia_notifier_smtp_password"

     

    What would be the path for docker compose in Unraid? @primeval_god

     

    Any help or guidance will be welcome.
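
    (If it helps: in compose, a relative file: path in the top-level secrets section is resolved against the directory of the compose file, so with compose manager's default layout that would be the stack's project folder under /boot/config/plugins/compose.manager/projects/<stack>. A minimal sketch, with the secret name taken from the example above:)

    secrets:
      authelia_notifier_smtp_password:
        # relative to the compose file, i.e. <stack>/secrets/...
        file: ./secrets/authelia_notifier_smtp_password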

  16. 11 hours ago, primeval_god said:

    If you enable the debug option the plugin should print the commands it uses to the system log when a button is clicked.

     

    This is expected behavior. At this time dockerman does not correctly display the update status of containers created by other plugins such as compose, and the update button it provides cannot be used to update such containers.

     

    Instead of the script I'm trying to write, could you add a feature to schedule auto-updates to Docker Compose Manager?

    CPU pinning support would be awesome as well.
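
    (For the CPU pinning part, it looks like it can already be done per service in the compose file itself; a minimal sketch, with the service name, image and CPU list as placeholders:)

    services:
      someservice:
        image: example/image:latest
        # equivalent of --cpuset-cpus in docker run
        cpuset: "4-7"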

  17. @primeval_god Could you please share the commands you use in the plugin to run compose up/down and update?

     

    On the other hand, I have this bug where these containers are all detected as having an update available, which is not true,

    and when I click on apply update I get this message:

     

    "Configuration not found. Was this container created using this plugin?"

     

    [screenshot attached]

  18. @Presjar

    I have tried to fix it; not sure if it works yet.

     

    I have also been trying to make Watchtower work, with no success.

     

    #!/bin/bash
    
    # Define the root directory to search for projects
    projects_dir="/boot/config/plugins/compose.manager/projects"
    
    # Find all subdirectories containing docker-compose files
    compose_files=$(find "$projects_dir" -maxdepth 2 -name "docker-compose.y*ml")
    
    # Loop over each file found
    for file in $compose_files; do
      # Get the directory containing the file
      dir="$(dirname "$file")"
    
      # Get the last folder name in the directory path
      project_name="$(basename "$dir")"
    
      # Change the current directory to the project directory
      pushd "$dir" > /dev/null
    
      # Get the image IDs before pulling
      before_pull=$(docker-compose images --quiet)
    
      # Pull the latest version of the Docker images
      docker-compose pull
    
      # Get the image IDs after pulling
      after_pull=$(docker-compose images --quiet)
    
      # Compare the image IDs to check if any images have been updated
      if [[ "$before_pull" != "$after_pull" ]]; then
        # Get the names of the updated Docker images
        updated_images=$(docker-compose images --quiet | xargs docker inspect --format '{{.RepoTags}}' | tr -d '[] ' | sed 's/,/\n/g')
    
        # Stop any running containers associated with the project
        docker-compose down
    
        # Start the containers in detached mode
        docker-compose up -d --remove-orphans
    
        # Clean up any unused Docker images
        docker image prune -f
    
        # Output a notification with the names of the updated Docker images
        echo "Updates available for project $project_name"
        echo "Updated Docker images:"
        echo "$updated_images"
        /usr/local/emhttp/webGui/scripts/notify -e "Unraid Server Notice" -s "Docker Compose Updates" -d "$project_name updated the images $updated_images" -i "normal"
      else
        echo "No updates available for project $project_name"
        /usr/local/emhttp/webGui/scripts/notify -e "Unraid Server Notice" -s "Docker Compose Updates" -d "$project_name no updates available" -i "normal"
      fi
    
      # Change back to the original directory
      popd > /dev/null
    done
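
    (In case it's useful, this is roughly how the script could be scheduled — a sketch only; the script path and times are placeholders, and the User Scripts plugin's built-in schedules would do the same job.)

    # cron entry: run the compose update script every Monday at 04:00
    0 4 * * 1 /bin/bash /boot/config/scripts/compose_update.sh >> /var/log/compose_update.log 2>&1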

     

  19. I get these errors with Unraid 6.12 rc5:

     

    May  2 23:53:53 Unraid nginx: 2023/05/02 23:53:53 [error] 5229#5229: *215414 connect() to unix:/var/tmp/compose_manager_action.sock failed (111: Connection refused) while connecting to upstream, client: 10.10.10.21, server: 10-10-11-5.e0eacb8sdasdasd0846d2ab8ec813.myunraid.net, request: "GET /logterminal/compose_manager_action/ HTTP/2.0", upstream: "http://unix:/var/tmp/compose_manager_action.sock:/", host: "10-10-10-5.e0eacb8df486sdfsdfb8048csdasd68d2ab8ec813.myunraid.net", referrer: "https://10-10-10-5.e0efsdfsdfsb8048c0asdas8d2ab8ec813.myunraid.net/webterminal/syslog/"