[Support] ich777 - Application Dockers


ich777


57 minutes ago, Paddeh said:

I have this problem when I change the video codec to use my iGPU. When I use the default I don't have this issue, but I would like to try and use my iGPU.

Please give me a bit more detail about what hardware you are using, and can you maybe upload your Diagnostics?

Link to comment

Hello, I'm using luckyBackup and my jobs include snapshots.

 

The problem is:

If I run the tasks manually, luckyBackup creates the snapshots and I can see them in .luckybackup-snapshots and in the GUI, but if I put them on a schedule no snapshots are created, or at least I don't see them: no extra folders are created in .luckybackup-snapshots and nothing is listed under manage snapshots in the GUI. I know that the schedule itself works because I placed some files on the source and checked the destination after the scheduled time.

 

Are there some logs that I can provide?

Link to comment
6 hours ago, exico said:

I know that the schedule works because i placed some files on the source and checked the destination after the scheduled time.

Did you restart the container after taking a snapshot?

 

The main reason they don't show up after a scheduled task is that the task runs in console mode. luckyBackup was never intended to be used in a Docker container, which is why you have to restart the container once so that it picks up all the new backups.
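
If you want to automate that, a minimal sketch (assuming the container is actually named luckyBackup - adjust the name to your template) is a tiny host-side script, e.g. run from the User Scripts plugin shortly after the scheduled backup time:

#!/bin/bash
# Restart the luckyBackup container so the GUI picks up the snapshots
# created by the scheduled (console mode) run.
docker restart luckyBackup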

Link to comment
18 minutes ago, ich777 said:

Is HW transcoding working in other applications?

 

I assume it's because you are using a 13th gen CPU.

 

I just tried it and on 10th gen everything is working so far.

For Plex it's working, but for some other services like youtube-dl I can't seem to get it working.

Link to comment
1 hour ago, Paddeh said:

For Plex it's working, but for some other services like youtube-dl I can't seem to get it working.

Then I assume it's the same for this container; maybe some dependencies are outdated or the kernel module (driver) is not 100% compatible with the container.

 

Please wait for a newer Unraid version so that we can test this a bit more in depth; for the time being, please don't use VA-API.

Link to comment
11 hours ago, ich777 said:

Did you restart the container after taking a snapshot?

 

The main reason they don't show up after a scheduled task is that the task runs in console mode. luckyBackup was never intended to be used in a Docker container, which is why you have to restart the container once so that it picks up all the new backups.

 

I checked, and I'm starting to understand how the snapshots work. From what I can tell, the snapshot folders are created only when I delete a file/folder from the source, and the folder corresponds to that file's date. For example, if I deleted today a file created Jan 13 2024, I will find it in the folder 20240113somethingsomething.

With that, forget what I said about missing snapshots.

 

With that said, it would be nice to have a timestamp in the LastCronLog and maybe an option to cycle the logs.

Link to comment
1 minute ago, exico said:

With that said, it would be nice to have a timestamp in the LastCronLog and maybe an option to cycle the logs.

This is something that you have to request from the creator of luckyBackup on SourceForge, since I just use it in the container.

Link to comment

Just FYI, someone reached out to sab about your container with unrar being broken, as it just wouldn't extract files, whether during direct unpack or not:

2024-01-25 14:51:03,438::DEBUG::[directunpacker:347] DirectUnpack Unrar output: 
UNRAR 7.00 beta 3 freeware      Copyright (c) 1993-2023 Alexander Roshal

...

2024-01-25 14:51:31,550::DEBUG::[newsunpack:860] UNRAR output:
UNRAR 7.00 beta 3 freeware      Copyright (c) 1993-2023 Alexander Roshal
2024-01-25 14:51:31,550::INFO::[newsunpack:863] Unpacked 0 files/folders in 0 seconds


I had them switch to the linuxserver Docker to test and everything worked fine.

I sent them your way to share relevant logs and info (dunno if they have done that just yet) - just sharing here in case they never make it.


BTW, I personally run the unrar 7 betas without any issues.
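
If anyone wants to confirm which unrar build their container actually ships, a quick check (the container name here is just a placeholder) is:

# Print the unrar banner with its version line from inside the running container
docker exec sabnzbd unrar | head -n 2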

Link to comment
1 minute ago, zoggy said:

I sent them your way to share relevant logs and info (dunno if they have done that just yet) - just sharing here in case they never make it.

Nope, they haven't made a post here or on GitHub.

 

However I don't have any issues whatsoever with direct unpack.

Link to comment

Pulling my hair out a bit trying to get the debian-mirror working.

 

I keep getting apt update errors on the theme of:

Ign:1 http://192.168.1.21:980/debian bookworm InRelease
Ign:2 http://192.168.1.21:980/debian bookworm-updates InRelease
Ign:3 http://192.168.1.21:980 bookworm-security InRelease
Err:4 http://192.168.1.21:980/debian bookworm Release
  404  Not Found [IP: 192.168.1.21 980]
Err:5 http://192.168.1.21:980/debian bookworm-updates Release
  404  Not Found [IP: 192.168.1.21 980]
Err:6 http://192.168.1.21:980 bookworm-security Release
  404  Not Found [IP: 192.168.1.21 980]
E: The repository 'http://192.168.1.21:980/debian bookworm Release' does not have a Release file.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.

 

Source file:

deb http://ftp.us.debian.org/debian bookworm main contrib
deb http://ftp.us.debian.org/debian bookworm-updates main contrib

#local mirror
#deb http://192.168.1.21:980/debian bookworm main contrib non-free
#deb http://192.168.1.21:980/debian bookworm-updates main contrib non-free

# security updates
deb http://security.debian.org bookworm-security main contrib

# local mirror
#deb http://192.168.1.21:980 bookworm-security main contrib

 

That works as-is, as it should, on a bookworm VM. If I comment out the debian.org lines and uncomment the local mirror pieces, I receive the errors above. Ping, curl, etc. to that IP/port work.

 

What I have done:

mirror.list edits - uncommented the amd64 and armel architectures, then changed armel to arm64. (I have some RPis running around that I also want to update; verified they're using arm64 with dpkg --print-architecture. They show similar errors on apt update.)
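
For reference, a minimal sketch of the relevant mirror.list lines for mirroring both architectures (the exact components and non-free are assumptions - adjust to your setup):

set defaultarch amd64
# mirror amd64 (default) and arm64 packages
deb        http://ftp.us.debian.org/debian bookworm main contrib non-free
deb-arm64  http://ftp.us.debian.org/debian bookworm main contrib non-free
deb        http://security.debian.org bookworm-security main contrib non-free
deb-arm64  http://security.debian.org bookworm-security main contrib non-free
clean http://ftp.us.debian.org/debian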

 

Blew away the Docker container, config, image and repo download and started from scratch. No joy.

 

I also have the Ubuntu mirror Docker set up separately (I read a comment in this thread about that being best practice) and it works flawlessly.

 

The only other item of note is that I've read in this thread and others about cnf issues with the Ubuntu Docker. Upon reinstall/fetch of packages in the Debian Docker, I noticed this in the Docker log:

Processing cnf indexes: [CCC]

Downloading 0 cnf files using 0 threads...
Begin time: Sat Jan 27 02:01:47 2024
[0]... 
End time: Sat Jan 27 02:01:47 2024

Again, the Ubuntu mirror Docker works flawlessly. And from what I can tell, the apt-mirror version in the Debian Docker is 5.4.2, which contained the fix for the cnf download issues.

 

Any ideas where to look for a solution?

Link to comment
1 hour ago, Glycerine said:
E: The repository 'http://192.168.1.21:980/debian bookworm Release' does not have a Release file.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.

Please read these lines; you can see that this repository is untrusted. You can try this in your sources.list:

...
deb [trusted=yes] http://192.168.1.21:980/debian bookworm main contrib non-free
...

 

You should also be able to do it like so:

apt-get update --allow-insecure-repositories

 

But what I would recommend is that you route it through a reverse proxy. This happens because you are using plain http without any signed certificate, and I heavily recommend reverse proxying it since it is easier to implement on your machines.
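
As a rough illustration of that reverse-proxy route (the hostname and certificate paths are assumptions, the upstream is your mirror container), something like this in front of the mirror should do:

server {
    listen 443 ssl;
    server_name mirror.example.lan;                 # assumed hostname
    ssl_certificate     /etc/ssl/certs/mirror.crt;  # assumed certificate paths
    ssl_certificate_key /etc/ssl/private/mirror.key;

    location / {
        # pass everything straight through to the apt-mirror container
        proxy_pass http://192.168.1.21:980;
        proxy_set_header Host $host;
    }
}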

Link to comment
42 minutes ago, ich777 said:

Please read these lines; you can see that this repository is untrusted. You can try this in your sources.list:

...
deb [trusted=yes] http://192.168.1.21:980/debian bookworm main contrib non-free
...

Thank you. That didn't occur to me since the debian.org sources were http too; I figured it was a key that wasn't getting downloaded. And the Ubuntu Docker works without that.

 

However, while that "can't be done securely" error no longer exists, I keep getting 404 errors for the particular additional repos I've enabled: amd64 for the VM and arm64 for the Pis. This seems like an apt/apt-mirror configuration error rather than something with your container, though. I'll go hunting through the docs and report back for others if I find the issue.

 

Thanks for your help @ich777

 

Link to comment

Currently I have just one Nvidia GPU in my Unraid server and it's passed through to my Windows VM. Tomorrow I'll be getting a second Nvidia GPU to add to the server, which will be used for dockers that need a GPU for acceleration (such as ML, transcoding, etc.).

 

I see that the Nvidia-Driver plugin tells you

 

"ATTENTION: If you plan to pass through your card to a VM don't install this plugin!"

 

How do I handle this situation where I want one GPU passed through to a VM and one managed by the Nvidia-Driver plugin for my dockers?

Link to comment
17 minutes ago, Kilrah said:

You bind the one you want to pass to the VM to VFIO on the Tools->System Devices page. Then the driver won't even know it's there.

Okay cool, my current VM GPU is already bound to VFIO, just wanted to make sure that would be okay.
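
If you want to double-check from the console which driver has claimed each card, a quick way is:

# List NVIDIA devices with the kernel driver currently bound to them;
# the VM card should report "Kernel driver in use: vfio-pci"
lspci -nnk | grep -iA3 nvidia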

Link to comment

Has anyone set up the ich777/torbrowser container behind an nginx reverse proxy?

Would be great if someone could share the config.

At the moment mine looks like this:

location /torbrowser {
      include /config/nginx/proxy.conf;
      include /config/nginx/resolver.conf;
      include /config/nginx/authelia-location.conf;
      proxy_pass http://192.168.178.229:8080/;
      #proxy_http_version 1.1;
      proxy_set_header Upgrade $http_upgrade;
      proxy_set_header Connection "Upgrade";
      proxy_set_header Host $host;
}

 

VNC itself works but I get "Failed to connect to server".

 

In the Chrome Dev-Console I can see the following:

websock.js:231 WebSocket connection to 'wss://mydomain.tld/websockify' failed: 
open @ websock.js:231
rfb.js:686 WebSocket on-error event
_socketError @ rfb.js:686
rfb.js:941 Failed when connecting: Connection closed (code: 1006)
_fail @ rfb.js:941
6The resource <URL> was preloaded using link preload but not used within a few seconds from the window's load event. Please make sure it has an appropriate `as` value and it is preloaded intentionally.
websock.js:231 WebSocket connection to 'wss://mydomain.tld/websockify' failed: 
open @ websock.js:231
_connect @ rfb.js:551
_updateConnectionState @ rfb.js:903
RFB @ rfb.js:279
connect @ ui.js:1044
rfb.js:686 WebSocket on-error event
_socketError @ rfb.js:686
_websocket.onerror @ websock.js:268
error (async)
attach @ websock.js:266
open @ websock.js:231
_connect @ rfb.js:551
_updateConnectionState @ rfb.js:903
RFB @ rfb.js:279
connect @ ui.js:1044
rfb.js:941 Failed when connecting: Connection closed (code: 1006)

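One hedged guess based on that log (not a confirmed fix): the noVNC client also connects to /websockify, so that path probably needs its own proxied location with the WebSocket upgrade headers, roughly like:

location /websockify {
      include /config/nginx/proxy.conf;
      include /config/nginx/resolver.conf;
      # websockify needs an explicit HTTP/1.1 WebSocket upgrade
      proxy_http_version 1.1;
      proxy_pass http://192.168.178.229:8080/websockify;
      proxy_set_header Upgrade $http_upgrade;
      proxy_set_header Connection "Upgrade";
      proxy_set_header Host $host;
      proxy_read_timeout 86400;
}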
 

THX in advance! :-)

Link to comment

Trying to get Rustdesk-AIO running from my Cloudflare domain (have tried proxied and DNS only). I know that part seems to be working. But as I understand it, RustDesk shouldn't use a reverse proxy. I have SWAG running, and every time I try to access my RustDesk domain I get the SWAG welcome page. I have a router running OPNsense and believe I have the port forwarding correct. However it's still not working. What am I missing?

Link to comment
42 minutes ago, diehardbattery said:

However it's still not working.

If you want to use a reverse proxy you have to use the proxy stream module, but I haven't tried that yet and it could be very, very difficult.

 

However, you can just forward the ports and that's it. Usually no proxy is used because RustDesk is not an http/https application; instead it uses its own protocol, which makes use of not only TCP but also requires UDP.
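
As a rough sketch of the plain port-forwarding approach (these are RustDesk's documented default ports, so double-check them against the AIO container's template):

# Forward these straight from OPNsense to the container, no proxy in between:
#   TCP 21115-21119   (ID server, relay and web client ports)
#   UDP 21116         (ID server / hole punching)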

Link to comment
