[ARCHIVE] binhex docker repository



Hey everyone,

 

Just got my unraid setup today and I can't believe it's gone so smoothly. Binhex, your docker images are amazing.

 

My only issue so far has been that when I restart the arch-delugevpn docker instance, the plugins I enabled are all disabled.

 

Has anybody had any success with keeping plugins enabled after a restart? One other thing: the settings of the plugin are retained; it's just the enabled state that is not persisted.

 

Edit: I've worked it out. In the config directory, manually edit core.conf to include the plugin you want, e.g. "enabled_plugins": ["Scheduler"],

Also, if you want the Scheduler plugin you will need to use an old version. I have attached the 1.3.11 version I compiled; then you should be smooth sailing.
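For anyone who wants to script that core.conf edit, here is a minimal sketch. It assumes the file is either a plain JSON object or, as in Deluge 1.x, a short version-header object immediately followed by the config object; the function name and paths are hypothetical:

```python
import json

def enable_plugin(conf_path, plugin):
    """Add `plugin` to "enabled_plugins" in a Deluge core.conf.

    Sketch only: handles both a single JSON object and the Deluge 1.x
    layout of a version-header object followed by the config object.
    """
    with open(conf_path) as f:
        text = f.read()
    decoder = json.JSONDecoder()
    first, end = decoder.raw_decode(text)
    rest = text[end:].strip()
    if rest:
        # header + config layout (Deluge 1.x)
        header, conf = first, json.loads(rest)
    else:
        # single-object layout
        header, conf = None, first
    plugins = set(conf.get("enabled_plugins", []))
    plugins.add(plugin)
    conf["enabled_plugins"] = sorted(plugins)
    with open(conf_path, "w") as f:
        if header is not None:
            json.dump(header, f)
        json.dump(conf, f, indent=2)
```

Stop the container first so the daemon doesn't overwrite the file on shutdown, then restart it after the edit.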

Scheduler-0.2-py2.7.egg.zip

Link to comment

LOL, thanks guyz! Glad my system is back up; I feel dumb that the root issue was a tick mark.

 

Htpcnewbie, I spotted your issue: if you look at your first screenshot you can see you must have accidentally unticked the privileged flag for sabnzbdvpn. That's why you were getting the permission denied messages; obviously going with a new template sets it to the correct config :-)

 

I didn't spot that....  But nuke and pave for the win! 

 

Too late binhex I already had the big hugz!  ;D

 

It's great to understand the reason behind it though, I'd not spotted that myself.

Link to comment

I have been in the same situation and the solution is straightforward but takes some effort to implement. It is not recommended to store your seeds directly on user shares: those disks will run continuously, which shortens their life and defeats the benefit of disk spin-down. Here is what I do:

 

1. Have a cache drive and have torrents download to this cache drive.

2. In Deluge you have the option of creating labels and setting a location for each label. When you add a torrent and set a label, Deluge will move the torrent from the incomplete directory to complete/labelname after the download completes.

3. For every label I create, I have a symbolic link to the corresponding user share directory. E.g. for the tv label, I have a symbolic link tv_share pointing to /mnt/user/tv.

4. Run a custom cron script a few times a day to rsync the directories complete/tv and complete/tv_share.

 

This way, your array drives do not run all the time. Instead of a cache drive to house your torrents, you can use a spare drive with the Unassigned Devices plugin. I have been using this approach for a few years and it works quite well.
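The cron step in that workflow is just a one-way copy of anything new. A rough stand-in for the rsync call, with hypothetical function and directory names, might look like:

```python
import shutil
from pathlib import Path

def sync_new_files(src, dst):
    """One-way sync: copy files present under src but missing from dst.

    A sketch of the cron step above (rsync itself also handles partial
    transfers and deletions, which this does not).
    """
    src, dst = Path(src), Path(dst)
    copied = []
    for f in src.rglob("*"):
        if f.is_file():
            target = dst / f.relative_to(src)
            if not target.exists():
                target.parent.mkdir(parents=True, exist_ok=True)
                shutil.copy2(f, target)  # preserves mtime, like rsync -t
                copied.append(target)
    return copied
```

Run against e.g. complete/tv and the tv_share symlink a few times a day, the array disks only spin up while new files are being copied over.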

 

Maybe others will have an elegant solution, would love to hear about other ideas.

 

Is it possible (and if so, how) to use Deluge to download directly to, and seed directly from, my disks or shares? I have /config and /data both located on the cache, but I cannot find where Deluge is downloading the files to (/homes/nobody/Downloads ??), and when I attempt to seed an already completed torrent by pointing to either the user share or the specific disk on the array, Deluge can't seem to follow the path, so it says it's 0% finished.

 

Is there a more efficient and unraid-friendly way to do what I'm asking that maybe I'm unaware of?

Link to comment

I seem to be having odd issues with Madsonic docker. It randomly dies and the webgui is not accessible. A restart of the docker doesn't seem to bring it back.

 

2016-01-10 12:20:23,535 DEBG fd 8 closed, stopped monitoring (stderr)>

2016-01-10 12:20:23,535 DEBG fd 6 closed, stopped monitoring (stdout)>

2016-01-10 12:20:23,535 INFO exited: start (exit status 0; not expected)

2016-01-10 12:20:23,535 DEBG received SIGCLD indicating a child quit

2016-01-10 12:20:26,541 INFO spawned: 'start' with pid 89

2016-01-10 12:20:26,608 DEBG 'start' stdout output:

Disabling SSL for Madsonic

 

2016-01-10 12:20:27,383 DEBG 'start' stdout output:

Started Madsonic [PID , /config/madsonic_sh.log]

 

2016-01-10 12:20:27,384 DEBG fd 8 closed, stopped monitoring (stderr)>

2016-01-10 12:20:27,384 DEBG fd 6 closed, stopped monitoring (stdout)>

2016-01-10 12:20:27,384 INFO exited: start (exit status 0; not expected)

2016-01-10 12:20:27,384 DEBG received SIGCLD indicating a child quit

2016-01-10 12:20:28,385 INFO gave up: start entered FATAL state, too many start retries too quickly

 

Attached madsonic_sh.log

 

I also notice that it reports "No Transcoder Found", I guess that's necessary to play flac files via the web browser interface?

madsonic_sh.txt

Link to comment

ok.. I investigated a bit further, and I believe the issue is actually on your end Binhex.  Here's a sample screenshot from GitHub for your moviegrabber-icon.png

 

Untitled_zpsc4imfqaa.png

 

So, for whatever reason, it appears that the icons you've got stored on GitHub for jenkins, jenkins-slave, minidlna, moviegrabber, and sonarr are corrupted or something.

 

hi squid, thanks a ton for looking into this! i didn't realise this was the issue. you're quite right, they are showing as corrupt. the weird thing is the date stamp for the commit hasn't changed, so the file should be the same; the other odd thing is that in the local copy of the repo i have, i can view all the images with no problems. how very strange!!

 

so i did a quick bit of good old googling and found this rather useful article which shows a more robust way of getting images uploaded to GitHub without using the nasty raw format. thought i would share as i have just done this and it works a treat. now all i gotta do is change all my templates  :o

 

http://solutionoptimist.com/2013/12/28/awesome-github-tricks/

 

and the end result:-

 

https://github.com/binhex/docker-templates/issues/22

 

pretty sweet huh, and as you can see the images are ok, so not sure why GitHub had an issue, but there ya go, i've learnt a new trick in the process :-).

 

Unfortunately this didn't work for me, but I did find this little snippet about .gitattributes:

 

http://stackoverflow.com/questions/19411981/images-corrupt-after-git-push

 

I just added the following from that and it seems to work.

# Binary
*.png binary
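For context, that line works because Git's text auto-detection can rewrite line endings inside files it misidentifies as text, corrupting binaries on push; the built-in `binary` macro disables diffing, merging, and text conversion for matching paths. An illustrative, slightly fuller .gitattributes along the same lines might be:

```
# Treat image formats as binary so Git never touches their bytes
*.png binary
*.jpg binary
*.gif binary
*.ico binary
```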

 

btw, Great work with the images, I'm adapting your Jenkins for my own use and it's a great starting point.

Link to comment


hi codechimp, firstly many thanks for posting. it led me to do a bit more reading, and i found this post very useful:-

 

https://help.github.com/articles/dealing-with-line-endings/

 

pretty basic stuff but something i never really spent much time looking into; shows how green i still am with GitHub :-). in any case i put the changes into my .gitattributes file, and after a few stabs i got it correctly defined and the images are now up on GitHub with no corruption, yay!

 

out of curiosity, you mentioned that "Unfortunately this didn't work for me". i'm assuming you're talking about the idea of creating an issue, uploading images and then linking to those? it certainly seems to work for me, and squid's forum post now correctly shows the icons, so i'm pretty sure it's working correctly. but this is a workaround, and i much prefer the idea of defining binary artifacts.

 

glad you're enjoying Jenkins. i use this at work for CI quite heavily now with Docker, having lots of fun running multiple dynamic Docker Jenkins nodes :-). the image could do with the inclusion of some tooling, such as git, python, ruby, mono etc, so i might make that change at some point.

Link to comment


hmm very odd. i'm running madsonic 24/7 here and have been for some weeks with no interruption in service. the error log isn't really telling me much, so not sure what's going on. have you tried deleting the container and image and re-pulling? if not, that might be a good starting point. the transcoders are included with madsonic, so this should just work for flac content; i will double-check this later tonight if i get the chance.

Link to comment

Going to ask this again cause I think it got lost in the fray during the vpn issue.

 

I have a Mylar docker that I am trying to have communicate with sab, and it looks like Mylar is able to send info to sab, but when sab then tries to retrieve the nzb from Mylar, it's never able to connect. I was wondering if, like the delugevpn container, it needs a LAN range option to communicate, and if that is the case, can it be added?

 

first question: are you using sabnzbd or sabnzbdvpn? if you're using sabnzbdvpn, then pulling down the latest image includes the ability to do docker-to-docker communication now, so it should work ok. if it's just sabnzbd then this should work out of the box, as there are no firewall rules (iptables) defined.

Link to comment

DELUGE:

 

I must be an idiot.

 

For the life of me I cannot get Deluge to download to the right folder. It keeps defaulting to /home/nobody/Downloads

 

I have mapped the /data directory to my /mnt/user/deluge share

 

I even setup a new mapping to /Downloads

 

I have setup the path in preferences to all point to /data, /data/finished/, etc.

 

I've blown away the config, the template, the .xml file, the directory, started from scratch.

 

What is preventing Deluge from mapping to the right directory? Permissions are all accurate, as I can change the password and preferences without a problem.

 

I cheated and mapped /home/nobody/Downloads to my /mnt/user/deluge directory and I was able to download files to the directory. Deluge would not move completed files though.

 

Bizarre

 

Update 1: Now the container is downloading to the correct folder based on my /Downloads schema. It is still moving completed folders off to who knows where. I can manually move the folder to the right location for now as I continue to troubleshoot.

 

 

Link to comment

I've read all over the forum and can't seem to figure out how to get DelugeVPN set up with the vpnsecure.me service.  I put the .ovpn file in the directory as instructed but I'm getting no love.  I'm new to Unraid and docker images but I just can't seem to figure it out.  Any help is much appreciated.

 

Thanks!

Link to comment


I use images within my repos without corruption issues, so I knew it was possible; it was just a matter of tracking down the differences between my repo and yours.  I'm hardly a GitHub expert either!

 

Yes, I was talking about the images via the issue.  I may have grabbed it at the point where you were still experimenting, so it might have worked if I had tried it now.  I just got into unraid/your docker images this weekend, so probably bad timing.

 

I'm going to be using Jenkins for CI for my Android development.  It's totally new to me and I did need some additional tooling, hence why I forked your repo and started extending it.  I'm a total noob to docker/jenkins, so I'd not want to suggest pull requests until I know what I'm talking about.

Link to comment


i would recommend, if you need specific tooling for certain projects such as android dev, that you consider creating a jenkins slave (node) and then installing what you want on the node. installing everything on the master and running jobs on the master is not really the way to use jenkins.

Link to comment


That's my plan long term.  From what I've read, slave nodes are the way to go, though most tutorials use the master, so I'll experiment like that to start with, then re-fork and do things properly.

Link to comment


Thanks binhex, I deleted and re-pulled; hopefully that will fix my issue.

 

As for the transcoders, madsonic reports no transcoder found.

 

http://i.imgur.com/EuB8Ahh.png

 

And I don't seem to see an entry for it here:

 

http://i.imgur.com/8LUNghu.png

Link to comment

Hi,

 

This docker itself is working great for me; however, I am having one crippling issue that I cannot seem to figure out. When I set up the docker and link my media content, I am linking it to folders that are NFS shares from another server on my network. However, for the life of me, I cannot get the media files and folders to appear when I try to add libraries from the WebUI. When I am adding the shares during docker setup, I can see the media content show up inside the folder, but when I get to the WebUI they are missing.

 

I have tried all kinds of things, mostly permissions related, as I thought maybe it was an NFS permissions issue between client and host, but I have not been able to isolate the issue.

 

Any help would be greatly appreciated.

 

**Edit: It appears that this issue is true for any docker that I install, so maybe this is a limitation of the docker service itself and not necessarily specific to any given docker application. In any case, it will be unfortunate if I am forced to move all of my storage to the unRaid server in order to take advantage of the unRaid docker service.

 

Bzq8GY.png

 

Ia6e6R.png

 

tpzw8j.png

Link to comment


hmm interesting, ok the webui reports this:-

 

"The actual transcoding is done by third-party command line programs which must be installed in /config/transcode."

 

so if you take a look at your volume mapping for /config on the host, you should see there is a subdir called transcode that contains the transcoders, so i'm not sure why madsonic is reporting transcoders not found. how did you get to that screen btw? just want to confirm i see the same here. just double-check the /config/transcode directory does exist and that you can see the transcoders in there (ffmpeg, lame etc)

 

one question for you: what player are you using for playback? i'm using the android app mainly, but i know there are about 3 or 4 different built-in web players you can use. i think transcoding options differ per player, so make sure transcoding is enabled.

Link to comment


I do see the transcoders in /config:

 

-rw-rw-rw- 1 nobody users 12882420 Jan 13 11:40 Audioffmpeg
-rw-rw-rw- 1 nobody users 28601120 Jan 13 11:40 ffmpeg
-rw-rw-rw- 1 nobody users     2262 Jan 13 11:40 ffmpeg.txt
-rw-rw-rw- 1 nobody users   372787 Jan 13 11:40 lame
-rw-rw-rw- 1 nobody users   523904 Jan 13 11:40 xmp

 

I just click the Madsonic logo in the top left to get to the info screen. You'd probably see the same.

 

I think you're right concerning the players/transcoding options. I don't seem to have a problem playing flac files in the DSub android app, only in a browser using the Web Player.

 

http://i.imgur.com/oScNkuH.png

 

Perhaps it's just a limitation of the integrated player. It's just odd I don't see an "audio->flac" option in active transcodings in the above screenshot.

 

No biggie, thanks again for the help.

Link to comment


ok, i've never tried using a mount point to another server as my source for media, but i can't see why it wouldn't work. i think your issue is most probably permissions based; all the docker images i have produced run as user "nobody", so you will need to make sure user nobody can read your media.

Link to comment

I just click the Madsonic logo in the top left to get to the info screen. You'd probably see the same.

 

interestingly i dont see the same as you, output shown:-

 


Version	MADSONIC 5.1.5250.20150813.1155
MADSONIC REST API v1.12.0
License	Madsonic Free Edition, for personal use only as described below.
Server	jetty/8.y.z-SNAPSHOT, java 1.7.0_91, Linux (138.9 MB / 282.0 MB)
Size: Database: 121MB, Thumbs-Cache: 36MB, LastFM-Cache: 0MB, Lucene-Cache: 3MB
Spring.Framework v3.2.14, Spring.Security v3.2.8
HyperSQL DataBase v2.3.2
[b]ffmpeg version N-41404-g0e406ab- http://johnvansickle.com/ffmpeg/ Copyright (c) 2000-2014 the FFmpeg developers (18.12.2015)[/b]

 

so you can see some info shown regarding ffmpeg, but info isn't picking this up for you. two thoughts: firstly, you posted saying the transcoders are in /config; i think that's just a typo, right? you mean they are in /config/transcode/, yes? if not, then they should be in there. secondly, can you check permissions are correct for the /config/transcode/ folder? all files in that folder should obviously be executable and should be owned by user nobody; if you're not sure how to do this then let me know.
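a quick way to sanity-check/fix the execute bits is something like the following sketch (the function name is made up; ownership still has to be fixed separately with chown as root, which plain chmod-style calls cannot do):

```python
import os
import stat

def make_executable(dirpath):
    """Add execute bits (user/group/other) to every file under dirpath.

    A rough stand-in for `chmod -R +x` over /config/transcode; run it
    as a user with permission to change the files' modes.
    """
    changed = []
    for root, _dirs, files in os.walk(dirpath):
        for name in files:
            path = os.path.join(root, name)
            mode = os.stat(path).st_mode
            os.chmod(path, mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)
            changed.append(path)
    return changed
```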

Link to comment

Try stopping and restarting docker and see if it makes a difference.  I do this with SMB shares with no problem, but as a general rule, the shares have to be mounted prior to the docker service starting (which is why I use manual mount commands instead of Unassigned Devices).
Link to comment


Thanks to both of you for the replies. For some reason I was not able to resolve this issue, though I believe I covered both of your suggestions. However, in running a bunch of tests on the server, I noted that the server I was hosting some content on was a big choke point, so I have decided to move all of my media from that other server to the unRaid server. In doing this I have solved the issue  ;D.

 

Maybe it was all for the best!  8)

Link to comment


I gave 777 perms to all the files in the /config/transcode directory and now I see the same as you; all is well. I may have inadvertently run the newperms script some time ago in the docker/config directory, thereby setting perms to 666 for all files. Stupid me. Thanks for the help.

Link to comment
Guest
This topic is now closed to further replies.