[Support] ich777 - Application Dockers


ich777

Recommended Posts

Anyone else having issues getting torrents to complete with Sonarr?

 

I have paths mapped properly, and have tried qbittorrent, rtorrent, deluge, etc. without any luck :(
Sonarr sends torrents perfectly, and the torrent clients download them perfectly.

Then the completion just doesn't happen, and there are no logs.

Link to comment
1 hour ago, ich777 said:

What is failing exactly?

 

Is the file in the download folder and Sonarr doesn't move it to the destination?

 

I made things as simple as possible and I'm just using /torrents as my path mapping, so I don't think that's the issue.

Yes, the file gets downloaded, I can see it sitting in the /torrents directory, and it starts seeding.

 

I've also run DockerSafeNewPerms a few times, but that hasn't changed anything.

I can get around it by using nzbToMedia scripts for the time being, but it's weird I can't get it working natively :(

Edited by zer0zer0
Link to comment
1 minute ago, zer0zer0 said:

/torrents

I think this is the problem: my container is designed to use the /mnt/downloads folder. That by itself should not be a problem as long as you mount both containers to the same path; your download client and Sonarr have to share the same path inside the container.

If you set Sonarr to /mnt/downloads and your download client to /torrents, this will not work. Maybe try changing the path of the download client to /mnt/downloads, just for testing purposes.
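Just to illustrate what I mean by sharing the same path (the host path and the image names below are only placeholders; in Unraid you would set this through the Host Path/Container Path fields of each template):

# example only - the important part is that BOTH containers map the
# SAME host folder to the SAME path inside the container
docker run -d --name downloadclient \
  -v /mnt/user/downloads:/mnt/downloads \
  your-download-client-image   # placeholder image name

docker run -d --name sonarr \
  -v /mnt/user/downloads:/mnt/downloads \
  your-sonarr-image            # placeholder image name

With that, a torrent that finishes in /mnt/downloads is visible to Sonarr under exactly the same path that the download client reports.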

Link to comment

Greetings,

 

I'm using the DirSyncPro docker and it works now. It shows a 'Warning: I/O error: could not copy' for every file, but it does copy them nevertheless, so I don't mind. However, when the container is restarted or similar, the schedule engine is not running by default, so I have to remember to open it and activate it by hand. Is it possible to enable it by default?

Link to comment
8 hours ago, derferd1 said:

It shows a 'Warning: I/O error: could not copy' for every file, but it does copy them nevertheless, so I don't mind.

Are these files from the boot device? Keep in mind that the container runs as the user 'nobody' and in the group 'users'; maybe the files that you want to copy are not accessible to this user/group.

 

8 hours ago, derferd1 said:

However, when the container is restarted or similar, the schedule engine is not running by default.

Maybe try the 'CMD Mode' in the 'Show more settings' section:

  1. Start up the container
  2. Create a synchronisation file in the main directory of DirSyncPro and save it, for example as 'Backup.dsc'
  3. Go to the template and click on 'Show more settings'
  4. Set 'CMD Mode' to 'true'
  5. Enter the filename of your synchronisation at 'CMD File', in this example 'Backup' (without the '.dsc' ending)
  6. Then click on 'Apply'

If you do it that way, the GUI isn't started and you only get the log output.

 

You can then create a task in the 'User Scripts' plugin (if you have that installed) that restarts the container with 'docker restart DirSyncPro'; every time the container is restarted it will execute the CMD Mode and start the synchronisation.
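For example, the User Scripts task could be as simple as this (scheduled however often you want the synchronisation to run):

#!/bin/bash
# restart the container so that the CMD Mode synchronisation runs again
docker restart DirSyncPro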

 

This would only be a workaround, but it works too. ;)

 

8 hours ago, derferd1 said:

Is it possible to enable it by default?

I will also look into this ;)

Link to comment
2 hours ago, ich777 said:

Are these files from the boot device? Keep in mind that the container runs as the user 'nobody' and in the group 'users'; maybe the files that you want to copy are not accessible to this user/group.

Thank you very much for your fast response!

It happens with every single file it copies. Most are from normal user shares. But it works.

 

Also thank you for your workaround! If there is no other solution, I'll have to use the CMD mode although I'd prefer the GUI.

Link to comment
23 hours ago, derferd1 said:

Also thank you for your workaround! If there is no other solution, I'll have to use the CMD mode although I'd prefer the GUI.

Please do a force update on the Docker page of the container. I've implemented a new variable and also updated the template.

 

In your case (because you already have the template downloaded) you have to click on 'Add another Path, Port, Variable, Label or Device' and create the variable like in the screenshot. After you have created it, click on 'Add' and then 'Apply'; the schedule engine then starts automatically:

[screenshot: the new variable in the template]

Link to comment

Hi there! 

 

New to the Unraid world, so be patient with me... ;)


I would like to use "DebianBuster" as a "jump server" to reach my Tower remotely. The installation of the container was flawless, thanks! But the problem is that I can't reach the tower (tower.local, tower or the direct IP) from inside DebianBuster... I always get a "can't connect". All other websites work. What am I doing wrong?
 

thanks!!

 

Link to comment
27 minutes ago, PsykoB said:

I would like to use "DebianBuster" as a "jump server" to reach my Tower remotely. The installation of the container was flawless, thanks!

Appreciated?

 

27 minutes ago, PsykoB said:

But the problem is that I can't reach the tower (tower.local, tower or the direct IP) from inside DebianBuster... I always get a "can't connect".

Where are you accessing it from? From your local LAN, or do you want to access it from outside (over the internet)?

 

If you want to access it from the LAN, type in: YOURSERVERIP:PORTOFTHECONTAINER

Or simply click on the icon on the Docker page and then on WebUI.

 

If you want to access it from outside, you have to install SWAG or any other reverse proxy and configure it so that you can access it from outside (I've put an example for something like SWAG in the Readme.md on GitHub).

 

Btw you can also use my Debian Bullseye container, since I'm only pushing minor fixes to Debian Buster and no longer maintaining it actively.

Bullseye is really stable; it's only marked as Beta because it's not officially released yet.

Link to comment

Hello, I have a fundamental question regarding the Debian docker. I want to find out if it could work for my use case. I want to run some Python scripts regularly, and some people have told me that I should run those in a Docker container and not in Unraid itself.

 

If I use Debian as a Docker container, install some dependencies like a few Python libraries, and the container gets updated, will the things I installed stay? As I understand it, this whole Docker thing normally works by storing the config files somewhere else, so that when the container gets updated it just needs to read those config files and off we go. But I haven't yet understood how this works with a whole OS as the container. Should I rather use a VM?

Link to comment
1 hour ago, shrippen said:

I want to run some Python scripts regularly.

The first question is: what do these scripts do? Do they need access to Unraid, or are they random scripts that do "stuff", are not related to Unraid and only need access to the internet? :D

 

1 hour ago, shrippen said:

If I use Debian as a Docker container, install some dependencies like a few Python libraries, and the container gets updated.

If the container gets updated, all your installed dependencies will be wiped, but nothing that is saved in the '/debian' folder (Desktop, Documents, Settings, ...) will be wiped. You can simulate an update of the container by clicking the force update button on the Docker page to see what will be wiped and what not.

 

You can circumvent that all by simply creating a script (user.sh) with the contents (for example):

#!/bin/bash
# this script runs on every start of the container:
# refresh the package lists and (re)install the needed packages
apt-get update
apt-get -y install python3 python3-pip

 

Mount that to '/opt/scripts/user.sh' like this in the Docker template:

[screenshot: the user.sh path mapping in the Docker template]

 

 

If you do it like this, the container will check on every start whether the packages are installed (so after an update of the container itself, the packages will be installed again; keep in mind that the start will take a bit longer).
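Just as an illustration for your 'run some Python scripts regularly' use case, the user.sh could be extended along these lines (the 'requests' library and the script path are placeholders, not part of my template):

#!/bin/bash
# runs on every container start, so the dependencies survive updates
apt-get update
apt-get -y install python3 python3-pip
# placeholder: install whatever Python libraries your scripts need
pip3 install requests

The scripts themselves can live in the persistent '/debian' folder so they survive updates too, and a scheduled task in the 'User Scripts' plugin could run them from the host with something like 'docker exec ContainerName python3 /debian/scripts/myscript.py' (again, the names are placeholders).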

 

Hope that makes sense to you, feel free to ask if something is unclear. :)

Link to comment
19 hours ago, ich777 said:

Where are you accessing it from? From your local LAN, or do you want to access it from outside (over the internet)?

 

I think I explained it wrong... even locally, if I type "tower" or the IP of the Unraid dashboard into Firefox inside the container, I get an error. However, all other websites work. Is there something preventing the container from communicating with the Unraid dashboard?

Link to comment
17 minutes ago, PsykoB said:

I think I explained it wrong... even locally, if I type "tower" or the IP of the Unraid dashboard into Firefox inside the container, I get an error. However, all other websites work. Is there something preventing the container from communicating with the Unraid dashboard?

Is the container reachable when you click on the icon on the Docker page and then on WebUI? If not, can you attach the log?

Link to comment
7 minutes ago, PsykoB said:

Yes.. I can reach the container through the WebUI.. but inside the container itself I can't reach the dashboard.

Uh yes, the other way around...

 

You have two options. The first is to enable this option in the Docker settings (you have to stop the Docker service first and turn on Advanced View to see it):

[screenshot: the relevant option in the Docker settings]

 

Or you assign the container a Custom IP (on the template page of the container):

[screenshot: assigning a custom IP on the container's template page]
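For reference, the custom IP option is roughly equivalent to this on the command line (the network name 'br0', the IP, the container name and the image are assumptions for a default Unraid setup; the template page does all of this for you):

# rough command-line equivalent of assigning a custom IP to the container
docker run -d --name DebianBullseye \
  --network br0 --ip 192.168.1.50 \
  your-debian-image   # placeholder image name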

  • Like 1
Link to comment
24 minutes ago, ich777 said:

You have two options. The first is to enable this option in the Docker settings [...] Or you assign the container a Custom IP (on the template page of the container).

 

Oh! Thanks, it works!! :) But from a security perspective.. not sure it's a good idea to do so... :)

 

Thank you!!

Link to comment
3 minutes ago, PsykoB said:

not sure it's a good idea to do so... :) 

Most things on Unraid are not a good idea from a security perspective... The best way would be to set up a whole new subnet on a separate NIC and expose the services that you want to reach from outside to that subnet/NIC, but for most people that's not practicable...

  • Like 1
Link to comment
1 minute ago, ich777 said:

Most things on Unraid are not a good idea from a security perspective... The best way would be to set up a whole new subnet on a separate NIC and expose the services that you want to reach from outside to that subnet/NIC, but for most people that's not practicable...

 

Yeah, like I said, I'm new to the Unraid world... for 15 days now :) The problem is that my server will be far away from home (at my parents' home with a 1 Gbps internet link). So, I need to be able to access the server remotely. One solution is to use your Debian docker as a "jump" server to reach the dashboard; other solutions are ZeroTier or Tailscale. I don't want to expose the server directly to the internet without a VPN.

 

 

Link to comment
1 minute ago, PsykoB said:

So, I need to be able to access the server remotely.

Yes, this is the way I do it. I also secured it with strong HTTP auth and a reverse proxy through SWAG.

 

Just a recommendation: I would also recommend that you install a Raspberry Pi with a backup OpenVPN server, so that you can connect to the network if something fails. :)

  • Like 1
Link to comment
12 minutes ago, ich777 said:

Just a recommendation: I would also recommend that you install a Raspberry Pi with a backup OpenVPN server, so that you can connect to the network if something fails. :)

 

Great Idea.. it's like 3 hours from my home.. so.. a backup solution is a great idea :)

 

thanks for all!

Link to comment
7 minutes ago, PsykoB said:

Great Idea.. it's like 3 hours from my home.. so.. a backup solution is a great idea :)

Yep, and maybe also think about an IP-based power socket like: Click

(Based on where you live the name can be different; I think Blitzwolf SHP2 is also a common name for them.)

 

That way you can turn off, or rather power cycle, the server if it hard locks or any other problem occurs.

 

I actually flash those power sockets with a custom firmware named "espurna", but I think "tasmota" is more widespread (pretty straightforward).

Link to comment
Just now, ich777 said:

Yep, and maybe also think about an IP-based power socket like: Click

 

I actually use something like that to "reboot" the server remotely (https://www.tp-link.com/ca/home-networking/smart-plug/hs103p2/). It works really well.. :)

Link to comment
