[Support] ich777 - Application Dockers


ich777

Recommended Posts

1 hour ago, PhiKirax said:

Ok, I thought it would connect as a classic client through which I can access these web pages and other services.

The container is designed to actually connect to a VPN and let other containers use its VPN connection, so that you can route traffic from other containers through it; of course you could then access the WebGUI or web interface of those other containers.

 

May I ask why you want to put a whole server behind a VPN? In my opinion a server needs direct access to the Internet, and you can route selected portions of its traffic through a VPN.
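To illustrate that routing idea: with Docker you attach a second container to the VPN container's network namespace, so its traffic can only leave through the tunnel. A minimal sketch (container and image names here are examples, not the actual templates from this thread):

```shell
# Hypothetical names throughout: 'vpn-client' holds the tunnel, 'behind-vpn'
# joins its network namespace and therefore routes through the VPN.
docker run -d --name vpn-client --cap-add=NET_ADMIN \
  --device /dev/net/tun example/openvpn-client 2>/dev/null \
  || echo "vpn-client not started (docker or image unavailable here)"
# '--network container:NAME' shares the first container's network stack
docker run -d --name behind-vpn --network container:vpn-client \
  alpine sleep 86400 2>/dev/null \
  || echo "behind-vpn not started (docker unavailable here)"
status="sketch done"; echo "$status"
```

Note that ports of 'behind-vpn' would then have to be published on 'vpn-client', since the two share one network stack.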

Link to comment

Hello 

I have installed the "DebianBuster-Nvidia" Docker.
But when I try to change the Nvidia fan speed settings I always get the error "Failed to set new Fan Speed!".

Coolbits in the "Device" section of xorg.conf is set to 12.

Running nvidia-settings with sudo didn't work either.

The GPU is an Nvidia Quadro P2200.

Any idea how to fix it?

Link to comment
7 minutes ago, fk_muck1 said:

Running nvidia-settings with sudo didn't work either.

sudo isn't installed in the container; instead type 'su', enter the root password, and then run 'nvidia-settings'. I don't think this will work, though, because strictly speaking it is a container that doesn't have control over every function of the card.

 

9 minutes ago, fk_muck1 said:

GPU is a nvidia Quadro P2200

I think they have pretty loud fans from what I've heard; is that why you want to change the fan speed? Have you tried Nvidia's persistence mode to see if that changes anything?
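For reference, a manual fan-speed attempt from inside the container typically looks like the following; this is a sketch assuming display :0 and GPU/fan index 0, and it only works when the Coolbits option (bit 2 set, e.g. "12") is in xorg.conf and an X server is running:

```shell
# Run as root inside the container (use 'su'; sudo is not installed there).
export DISPLAY=:0
# GPUFanControlState=1 enables manual control, GPUTargetFanSpeed is percent;
# a vBIOS fan floor can still reject values below its minimum.
nvidia-settings -a "[gpu:0]/GPUFanControlState=1" \
                -a "[fan:0]/GPUTargetFanSpeed=30" 2>/dev/null \
  || echo "fan control rejected (no X server, missing Coolbits, or vBIOS floor)"
probe="fan probe done"; echo "$probe"
```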

Link to comment

You are right, it also didn't work as user root.
The container brings the fan down from 70% to 46%, but I would have liked to lower it even further.

Yep, the card is a little bit noisy; it's much better with these settings. But it's unnecessary for the fan to keep spinning so fast when the card is not in use... ;) 20-30% would have been my goal, because it probably doesn't have a zero-fan mode.

Such a nice card but unfortunately so unruly :D

 

Link to comment
18 minutes ago, fk_muck1 said:

The container brings the fan down from 70% to 46%. But I would have liked to lower it even further.

Maybe that's the default idle value for this card and it can't go lower.

 

18 minutes ago, fk_muck1 said:

Such a nice card but unfortunately so unruly :D

Try to shutdown the container and open up a terminal from unRAID and type in the following:

nvidia-persistenced

 

This should also bring the fan speed down; I'm using this with my Nvidia T400.
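To confirm the daemon actually took effect, the standard nvidia-smi query flags can be used; a sketch (on a machine without the NVIDIA tools this simply reports that):

```shell
if command -v nvidia-smi >/dev/null 2>&1; then
  # start the persistence daemon (needs root), then read back the mode
  nvidia-persistenced 2>/dev/null
  nvidia-smi --query-gpu=persistence_mode,fan.speed --format=csv,noheader
else
  echo "nvidia-smi not found on this machine"
fi
check="persistence check done"; echo "$check"
```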

  • Thanks 1
Link to comment
14 hours ago, ich777 said:

The container is designed to actually connect to a VPN and let other containers use its VPN connection, so that you can route traffic from other containers through it; of course you could then access the WebGUI or web interface of those other containers.

 

May I ask why you want to put a whole server behind a VPN? In my opinion a server needs direct access to the Internet, and you can route selected portions of its traffic through a VPN.

Hello :),

I do not redirect all traffic; I just establish a connection to access, for example, my motioneye (at 192.168.10.3:8765). My server is on another machine (a personal VPS).

I use the

push "block-outside-dns"

setting in my configuration.
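For reference, a split-tunnel server config along those lines might contain only these push lines (subnet taken from the motioneye example above; this is a sketch, not the actual config):

```
# push only the LAN route, not a default gateway, so clients reach
# 192.168.10.0/24 through the tunnel and everything else stays on their uplink
push "route 192.168.10.0 255.255.255.0"
push "block-outside-dns"
# note: no 'push "redirect-gateway def1"' line
```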

Edited by PhiKirax
  • Thanks 1
Link to comment

@ich777

Since I didn't find a separate thread, I have a question / potential problem with your firefox docker:

 

I had my regular download/temp user share mounted to the Downloads folder inside the firefox appdata; this morning the whole share was emptied (nothing I couldn't recover, but still...). I noticed the time of the last change was pretty much exactly when the firefox container got started this morning (on a schedule via a userscript). I also noticed that the automatic updater must have run, since the VNC password I set was reset.

 

Any chance the automatic update just deleted all of my share's content? I only had a quick look at the GitHub repo and didn't see anything about the container updating itself.

 

Like I said, I didn't lose any critical data, but imagine any other docker container acting like that, e.g. your regular plex container just deleting all your media...

 

Link to comment
MightyT said:

I had my regular download/temp user share mounted to the Downloads folder inside the firefox appdata; this morning the whole share was emptied. [...]
To what directory have you mounted your temporary folder inside the container?

My recommendation is always to mount it to something like /mnt/YOURFOLDERNAME, or, if you really want to mount it inside the data directory in the container, make it a hidden folder with a dot in front of it, e.g. /firefox/.YOURFOLDERNAME.

Sent from my C64

Link to comment
4 minutes ago, MightyT said:

Yeah, I directly mounted it to /firefox/Downloads. So I'm guessing it just got overwritten on the update, which usually doesn't happen when the container is updated offline because the folders are not "mounted"?

Most of my containers work a little differently: they check for updates when they are started, and pull and install the update when a new version is found.

That's the reason why your Downloads folder was wiped; the wipe of the directory is also done on purpose.

As said above, I would recommend mounting the Downloads folder to /mnt/Downloads to avoid the wipe on an update, or at least making it hidden if you really want to keep it in the main folder, e.g. /firefox/.Downloads.
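In Unraid template terms, the safe mapping would look something like this (the host path is only an example):

```
Host Path:      /mnt/user/downloads     <- your Unraid share
Container Path: /mnt/Downloads          <- outside /firefox, survives the update wipe
```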

Link to comment

Hi @ich777 thanks for your hard work💪

I have a problem with owncast and NVENC encoding. I know the card works because I have it working in a jellyfin container, but in the owncast container I can't make it work.

First of all here is the docker run command:

[screenshot: docker run command]

 

And here the error in owncast:

[screenshot: owncast error]

 

And transcoder.log:

ffmpeg started on 2021-11-03 at 18:47:40
Report written to "data/logs/transcoder.log"
Log level: 32
Command line:
/usr/bin/ffmpeg -hide_banner -loglevel warning -hwaccel cuda -fflags +genpts -i pipe:0 -map v:0 -c:v:0 h264_nvenc -b:v:0 2700k -maxrate:v:0 2862k -g:v:0 96 -keyint_min:v:0 96 -r:v:0 24 -tune:v:0 ll -map "a:0?" -c:a:0 copy -sws_flags bilinear -filter:v:0 "scale=1600:900" -preset p4 -var_stream_map "v:0,a:0 " -f hls -hls_time 4 -hls_list_size 3 -segment_format_options "mpegts_flags=+initial_discontinuity:mpegts_copyts=1" -pix_fmt yuv420p -sc_threshold 0 -master_pl_name stream.m3u8 -strftime 1 -hls_segment_filename "http://127.0.0.1:41175/%v/stream-mtmr4EK7R%s.ts" -max_muxing_queue_size 400 -method PUT -http_persistent 0 "http://127.0.0.1:41175/%v/stream.m3u8"
[flv @ 0x55f4c6fc07c0] decoding for stream 0 failed
Input #0, flv, from 'pipe:0':
  Metadata:
    encoder         : Lavf57.56.100
  Duration: N/A, start: 62188.689000, bitrate: 2864 kb/s
  Stream #0:0: Video: h264 (High), yuvj420p(pc, bt709, progressive), 1920x1080, 2800 kb/s, 0.0000 fps, 30 tbr, 1k tbn, 60 tbc
  Stream #0:1: Audio: aac (LC), 48000 Hz, stereo, fltp, 64 kb/s
Stream mapping:
  Stream #0:0 -> #0:0 (h264 (native) -> h264 (h264_nvenc))
  Stream #0:1 -> #0:1 (copy)
[h264_nvenc @ 0x55f4c6fc8ec0] Lossless encoding not supported
[h264_nvenc @ 0x55f4c6fc8ec0] Provided device doesn't support required NVENC features
Error initializing output stream 0:0 -- Error while opening encoder for output stream #0:0 - maybe incorrect parameters such as bit_rate, rate, width or height
Conversion failed!

 

I thought it could be because of the ffmpeg version, or maybe a bad build with some dependencies missing. The jellyfin docker I'm using has this version of ffmpeg:

ffmpeg version 4.3.2-Jellyfin Copyright (c) 2000-2021 the FFmpeg developers
  built with gcc 8 (Debian 8.3.0-6)
  configuration: --prefix=/usr/lib/jellyfin-ffmpeg --target-os=linux --extra-version=Jellyfin --disable-doc --disable-ffplay --disable-shared --disable-libxcb --disable-sdl2 --disable-xlib --enable-gpl --enable-version3 --enable-static --enable-libfontconfig --enable-fontconfig --enable-gmp --enable-gnutls --enable-libass --enable-libbluray --enable-libdrm --enable-libfreetype --enable-libfribidi --enable-libmp3lame --enable-libopus --enable-libtheora --enable-libvorbis --enable-libdav1d --enable-libwebp --enable-libvpx --enable-libx264 --enable-libx265 --enable-libzvbi --enable-libzimg --arch=amd64 --enable-opencl --enable-vaapi --enable-amf --enable-libmfx --enable-vdpau --enable-cuda --enable-cuda-llvm --enable-cuvid --enable-nvenc --enable-nvdec --enable-ffnvcodec
  libavutil      56. 51.100 / 56. 51.100
  libavcodec     58. 91.100 / 58. 91.100
  libavformat    58. 45.100 / 58. 45.100
  libavdevice    58. 10.100 / 58. 10.100
  libavfilter     7. 85.100 /  7. 85.100
  libswscale      5.  7.100 /  5.  7.100
  libswresample   3.  7.100 /  3.  7.100
  libpostproc    55.  7.100 / 55.  7.100
Hyper fast Audio and Video encoder

And owncast this:

ffmpeg version n4.4 Copyright (c) 2000-2021 the FFmpeg developers
  built with gcc 10 (Debian 10.2.1-6)
  configuration: --prefix=/usr --libdir=/usr/lib/x86_64-linux-gnu --enable-shared --enable-nonfree --enable-nvenc --enable-libx264 --enable-libx265 --enable-gpl --enable-cuda --enable-cuvid
  libavutil      56. 70.100 / 56. 70.100
  libavcodec     58.134.100 / 58.134.100
  libavformat    58. 76.100 / 58. 76.100
  libavdevice    58. 13.100 / 58. 13.100
  libavfilter     7.110.100 /  7.110.100
  libswscale      5.  9.100 /  5.  9.100
  libswresample   3.  9.100 /  3.  9.100
  libpostproc    55.  9.100 / 55.  9.100
Hyper fast Audio and Video encoder

 

Hope you can help me figure it out.

Link to comment
13 minutes ago, joroga22 said:

nvenc encoding

The last time I tried it, it worked just fine, because I made a custom ffmpeg version that supports NVENC for this container.

What card are you using? Are you transcoding any streams currently?

I will try it a little later; please see this post where I added NVENC support for the container:

 

  • Thanks 1
Link to comment

I have your debianbuster-nvidia container installed. This worked perfectly fine before I swapped out my motherboard, cpu & flash device.

 

After the upgrade it keeps complaining about not being able to find display ":0".

I double-checked the Nvidia GUID and also checked my plex docker container, which still works with hardware acceleration during transcoding, so passing through the GPU still seems to work.

 

At first I thought it might be because I had no monitor connected to the GPU, but after adding an HDMI dummy (which I also used before the upgrade) and rebooting the server, it is still throwing the same error.

 

Any ideas? The CPU virtualization features are only used for VMs, right? I don't have VMs enabled in my Unraid installation.

 

This is a bit from the container log (please note I tried different values for the display. ie: ":0", ":0.0", ":1"):

WebSocket server settings:
- Listen on :8080
- Flash security policy server
- Web server. Web root: /usr/share/novnc
- No SSL/TLS support (no cert file)
- Backgrounding (daemon)
---Starting Pulseaudio server---
E: [pulseaudio] client-conf-x11.c: xcb_connection_has_error() returned true
Can't open display :0.0
----------------------------------------------------------------------------------------------------
Listing possible outputs and screen modes:

''
----------------------------------------------------------------------------------------------------
Can't open display :0.0
Can't open display :0.0



---Looks like your highest possible output on: '' is: ''---

 

Edited by xorinzor
Link to comment
2 hours ago, ich777 said:

The last time I tried it, it worked just fine, because I made a custom ffmpeg version that supports NVENC for this container.

What card are you using? Are you transcoding any streams currently?

 

I will try it a little later; please see this post where I added NVENC support for the container:

 

 

Yeah, I've already seen that post while searching the thread.

I'm not encoding, so the card was free at that moment.

I have an old GT710 and driver 470.74. Maybe it doesn't support your ffmpeg build, but I wonder why it works with the jellyfin ffmpeg.

Thanks @ich777

Link to comment
6 minutes ago, xorinzor said:

Double checked the Nvidia GUID, also checked my plex docker container, which still works with Hardware acceleration during transcoding, so passing through the GPU still seems to work.

This has nothing to do with the GUID; I think it's a different issue.

 

May I ask what Motherboard and CPU you've had before and what Motherboard and CPU you are running now?

 

7 minutes ago, xorinzor said:

Any ideas? The CPU virtualization features are only used for VM's right?

Exactly, this is not needed for this container.

 

Have you tried changing the Display in the template to 1 yet?

I would recommend rebooting the host whenever you change this value.

Link to comment
2 minutes ago, joroga22 said:

I have an old GT710 and driver 470.74. Maybe it doesn't support your ffmpeg build, but I wonder why it works with the jellyfin ffmpeg.

That depends on the codec they are using... My suspicion is that they use fairly new codec options to transcode the media, and those old cards are not very capable in terms of codecs; even H.265 is not supported.

I will report back once I've tested this on my Nvidia T400.
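One way to separate "the encoder is missing" from "the encoder rejects these options" is to probe h264_nvenc directly with a synthetic source, outside owncast; a sketch (the preset matches the one in the transcoder log above):

```shell
# Push one second of generated video through h264_nvenc and discard it; if
# this fails with "Provided device doesn't support required NVENC features",
# the card/driver is the limit, not owncast's command line.
if command -v ffmpeg >/dev/null 2>&1; then
  ffmpeg -hide_banner -loglevel error \
    -f lavfi -i testsrc2=duration=1:size=640x360:rate=30 \
    -c:v h264_nvenc -preset p4 -f null - 2>&1 \
    || echo "h264_nvenc probe failed on this machine"
else
  echo "ffmpeg not found on this machine"
fi
nvenc_probe="probe done"; echo "$nvenc_probe"
```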

  • Thanks 1
Link to comment
11 minutes ago, ich777 said:

May I ask what Motherboard and CPU you've had before and what Motherboard and CPU you are running now?

 

Before: H55M-E33 with a Xeon L3426

Now: Asus ProArt X570-Creator Wifi with an AMD 5900X

 

11 minutes ago, ich777 said:

Have you yet tried to change the Display in the template to 1?

I would recommend that you reboot the host if you change this value from one to another.

 

I tried multiple values, but none seem to work. Is there any command to figure out what displays (if any) are available?

Link to comment
31 minutes ago, joroga22 said:

I have an old gt710

@joroga22 I think the GT710 is not compatible; I tried it just now with my Nvidia T400 and it works just fine.

 

Docker Log:

---Preparing Server---
---Starting Server---
time="2021-11-03T22:09:11+01:00" level=info msg="Owncast v0.0.10-linux-64bit (737dbf9b1a444b0b7d415da819c16d74d3d7812e)"
time="2021-11-03T22:09:11+01:00" level=info msg="Video transcoder started using x264 with 1 stream variants."
time="2021-11-03T22:09:12+01:00" level=info msg="RTMP is accepting inbound streams on port 1935."
time="2021-11-03T22:09:12+01:00" level=info msg="Web server is listening on IP 0.0.0.0 port 8080."
time="2021-11-03T22:09:12+01:00" level=info msg="The web admin interface is available at /admin."
time="2021-11-03T22:11:08+01:00" level=info msg="Inbound stream connected."
time="2021-11-03T22:11:08+01:00" level=info msg="Video transcoder started using nvidia nvenc with 1 stream variants."
time="2021-11-03T22:11:13+01:00" level=info msg="Inbound stream disconnected."
time="2021-11-03T22:11:27+01:00" level=info msg="Inbound stream connected."
time="2021-11-03T22:11:27+01:00" level=info msg="Video transcoder started using nvidia nvenc with 1 stream variants."
time="2021-11-03T22:11:27+01:00" level=info msg="Inbound stream connected."
time="2021-11-03T22:11:27+01:00" level=info msg="Video transcoder started using nvidia nvenc with 1 stream variants."

  

Owncast Log:

[screenshot: Owncast log]

 

nvidia-smi output:

[screenshot: nvidia-smi output]

  • Thanks 1
Link to comment
13 minutes ago, xorinzor said:

I tried multiple values, but none seem to work.

You have to reboot in between, since otherwise it won't work properly.

Is this the only graphics card in your system?

 

13 minutes ago, xorinzor said:

Is there any command to figure out what displays (if any) are available?

No; usually 0 is the default and should always work, unless the container crashed or something else unusual happened.

I think you have also tried different values for DFP_NR, or am I wrong?

As said above, I would play around with the DFP_NR variable and reboot every time you change the value. I know this is a tedious process, but I can't think of another way to do it.

 

Maybe something about AMD is preventing this from working correctly; that's just a wild guess on my part. In terms of virtualization they are a little behind Intel from what I've read here on the forums; that should have nothing to do with this container, but maybe it affects it in some way.
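To narrow this down from inside the container, the X server can be probed directly; a sketch (every probe is allowed to fail, it only reports displays that answer):

```shell
# Try the display numbers from the template one by one
for d in :0 :0.0 :1; do
  if DISPLAY=$d xrandr --query >/dev/null 2>&1; then
    echo "X server answering on display $d"
  fi
done
# the X log usually names the output port the driver actually detected,
# which is what the DFP_NR variable has to match
[ -r /var/log/Xorg.0.log ] && grep -i "connected" /var/log/Xorg.0.log
xprobe="display probe done"; echo "$xprobe"
```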

Link to comment
2 minutes ago, ich777 said:

You have to reboot in between, since otherwise it won't work properly.

Is this the only graphics card in your system? [...]


Hm okay, will have to do some more digging then.

It's indeed the only GPU in the system.

 

I have not yet tried different DFP_NR values, since the error really seems to be specifically about which display it's trying to use (i.e., the DISPLAY variable).

 

AMD could very well be a bit behind in virtualization, but the old Xeon CPU is from 2009 whereas the 5900X is from 2020; I sincerely hope they managed to close that gap over all those years 😅

 

I'll just do some more digging, also with other docker containers, to check whether it's limited to this container or whether others are affected too.

 

Thanks

Link to comment




xorinzor said:

Hm okay, will have to do some more digging then. It's indeed the only GPU in the system. [...]


I think the best bet would be DFP_NR, since these are actually the output ports, and that corresponds to the display output. If the container doesn't find the DFP_NR, or better said the output port, the X server will also fail and complain that display 0 is not available, because the output port is not found. Very oversimplified, but I hope that makes a little sense to you... :/

Sent from my C64

Link to comment
1 hour ago, ich777 said:

I think the best bet would be DFP_NR since these are actually the output ports [...]

 


I've been trying a lot of values, with reboots in between. Unfortunately that didn't seem to fix it.

I did notice that no unlock script exists yet for the latest Nvidia driver, but after switching back to the latest available one, that still didn't fix it.

Really at a loss here. I'll have a look at the BIOS tomorrow and see if anything else stands out.

 

1 hour ago, ich777 said:

Sent from my C64

 


Pics or it didn't happen ;)

Link to comment
