[Support] binhex - MiniDLNA


4 minutes ago, PeterB said:

Is it possible to approach this from another direction?  I know that after doing the network mode switch, minidlna will start up in a 'working' condition - I know that doing this causes the xml file to be changed, but is there anything else that gets changed/modified by doing this?

I'm not sure. I guess you could docker exec in and then issue a find to show all files changed within x minutes. Do you know what changes in the xml file?
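In case it's useful, a sketch of that check (the path and the 10-minute window are just examples):

```shell
# List files under a directory modified in the last N minutes.
# Inside the container you'd wrap it with docker exec, e.g.
#   docker exec -it binhex-minidlna find /config -type f -mmin -10
recently_changed() {
  find "$1" -type f -mmin "-$2"
}
```

Running it against /config right after the network-mode switch should show everything the edit touched.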


Okay, I've proven that there's nothing magical about changing network mode. Any edit to the container settings, resulting in the xml file being updated, will cause the container/app to be restarted in a functional mode. E.g., typing a space into the 'Extra Parameters' field, then deleting it, followed by clicking the 'Apply' button, will restart the container in a functional mode. This makes me wonder whether the container restart after 'Applying' a parameter change is somehow different from any other start/restart. It's a little frustrating that clicking the 'Apply' button will always start the container, even if it was stopped and 'autostart' is not enabled.

Edited by PeterB

17 hours ago, binhex said:

do you know what changes in the xml file?

That's a little difficult to determine, since it appears to be impossible to edit the parameters without the container restarting.

 

However, I'm beginning to convince myself that the problem is being caused by the way in which the container is started/restarted.

When the container is 'updated', either by editing the parameters or by 'force update', the command which is used to launch the container is displayed in the web GUI, and I can use the same command to launch the container from a console window. This requires that the old container is removed, or else the name has to be changed. In this case, the application always starts up in a functional mode. This is the command:

Quote

/usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name="binhex-minidlna" --net="host" -e TZ="Asia/Singapore" -e HOST_OS="unRAID" -e "UDP_PORT_1900"="1900" -e "TCP_PORT_8200"="8200" -e "SCAN_ON_BOOT"="no" -v "/mnt/docker/appdata/minidlna/":"/config":rw -v "/mnt/user/Photos/":"/media":ro --cpuset-cpus=6,7 binhex/arch-minidlna:latest

 

If the container is already running and I issue "docker restart binhex-minidlna", then the application is non-functional even though the container appears to be active.

Clearly, for me/my system, docker restart does not leave the container (or its app) in the same condition as it is when the container is first run.  Now, I'm guessing that the autostart procedure doesn't use docker run - perhaps it does a docker load followed by docker start?
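A general Docker point that may be relevant here: `docker run` creates a brand-new container with a fresh writable filesystem layer, while `docker restart`/`docker start` reuse the existing container and its filesystem, so anything written under a non-tmpfs path like /run survives a restart but not a re-run. A throwaway demonstration (the "demo" container name is illustrative, and the function only does anything if a Docker daemon is available):

```shell
# Show that a file in /run survives `docker restart` but not a fresh
# `docker run` (uses a disposable busybox container named "demo").
demo_layer_difference() {
  docker run -d --name demo busybox sh -c 'touch /run/stale.pid; sleep 600'
  docker restart demo
  docker exec demo ls /run/stale.pid   # present: same container filesystem
  docker rm -f demo
  docker run -d --name demo busybox sleep 600
  docker exec demo ls /run/stale.pid 2>/dev/null || echo "gone"  # fresh layer
  docker rm -f demo
}
if docker info >/dev/null 2>&1; then demo_layer_difference || true; fi
```

If unRAID's autostart does a plain `docker start` of the existing container, that alone would explain why a stale pid file in /run can survive into the next boot of the app.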

 

The big question, of course, is: Why does this problem only affect me and, perhaps, one other person?  What is the difference between my system and yours?

On 2017-5-6 at 11:13 AM, PeterB said:

The big question, of course, is: Why does this problem only affect me and, perhaps, one other person?

 

OK, I'm sure you've tried this, but just in case you haven't: try deleting the "my template" from unRAID and re-creating the unRAID template configuration from scratch. There aren't many settings for this docker, so it shouldn't take too long. It definitely looks like the unRAID template xml is the issue; possibly corruption crept in when you did the upgrade to 6.3.3?

 

edit - before you delete it, you could always post your "my template" file here and I can compare it to mine.

Edited by binhex

On 5/9/2017 at 5:56 PM, binhex said:

try deleting the "my template" from unraid and re-create the unraid template configuration from scratch

It seems to have nothing to do with the template or my settings. I have completely removed the container, including deleting the image and deleting the appdata directory. I then re-installed and didn't change any settings whatsoever.

 

For a minute or so, it is possible to restart the container and the DLNA service will still be visible on the network. After this time, however, restarting the container results in the service no longer being visible on the network.

 

From this, I conclude that something that minidlna does at 'initial' start (perhaps the initial scan/creating the files.db database?) prevents subsequent 'clean' restarts. After a forced update, I note that files.db starts small and grows, and also that the art_cache directory is created (but only if I don't restrict /media to point to photos only). Could it be some resource issue?

 

However, if I restrict the number of photos pointed to by the /media mapping, thereby restricting the size of files.db, the problem still persists.

Edited by PeterB
Add information


I'm not sure whether I'd noted this before (and if not, why not):

 

After Force Update:

root@Tower:/mnt/docker/appdata# docker exec -it binhex-minidlna ps -eaf | grep dlna
nobody      62     1  0 12:34 ?        00:00:00 /usr/bin/minidlnad -f /config/mi
nobody      64    62  6 12:34 ?        00:00:01 /usr/bin/minidlnad -f /config/mi
root@Tower:/mnt/docker/appdata# 

About three minutes later:

root@Tower:/mnt/docker/appdata# docker exec -it binhex-minidlna ps -eaf | grep dlna
nobody      62     1  0 12:34 ?        00:00:00 /usr/bin/minidlnad -f /config/mi
root@Tower:/mnt/docker/appdata# 

After Restart:

root@Tower:/mnt/docker/appdata# docker exec -it binhex-minidlna ps -eaf | grep dlna
root@Tower:/mnt/docker/appdata#

 

So, after a forced update, a minidlna scan is spawned as a child process, and it completes after about three minutes.  This suggests that after a restart, either the minidlna process never launches or, if it does, it terminates very quickly.

 

After a restart, one entry is added to the minidlna.log:

[2017/05/21 13:12:10] minidlna.c:935: error: MiniDLNA is already running. EXITING.

at the same time, supervisord.log has these lines added:

Created by...
___.   .__       .__                   
\_ |__ |__| ____ |  |__   ____ ___  ___
 | __ \|  |/    \|  |  \_/ __ \\  \/  /
 | \_\ \  |   |  \   Y  \  ___/ >    < 
 |___  /__|___|  /___|  /\___  >__/\_ \
     \/        \/     \/     \/      \/
   https://hub.docker.com/u/binhex/

2017-05-21 13:12:07.891345 [info] Host is running unRAID
2017-05-21 13:12:07.924476 [info] System information Linux Tower 4.9.28-unRAID #1 SMP PREEMPT Sun May 14 11:03:53 PDT 2017 x86_64 GNU/Linux
2017-05-21 13:12:07.980942 [info] PUID defined as '99'
2017-05-21 13:12:08.054813 [info] PGID defined as '100'
2017-05-21 13:12:08.341885 [info] UMASK defined as '000'
2017-05-21 13:12:08.371429 [info] Permissions already set for volume mappings
2017-05-21 13:12:08.399995 [info] SCAN_ON_BOOT defined as 'no'
2017-05-21 13:12:08.428914 [info] SCHEDULE_SCAN_DAYS defined as '06'
2017-05-21 13:12:08.460774 [info] SCHEDULE_SCAN_HOURS defined as '02'
2017-05-21 13:12:09,120 CRIT Set uid to user 0
2017-05-21 13:12:09,120 INFO Included extra file "/etc/supervisor/conf.d/minidlna.conf" during parsing
2017-05-21 13:12:09,131 INFO supervisord started with pid 7
2017-05-21 13:12:10,134 INFO spawned: 'start' with pid 56
2017-05-21 13:12:10,135 INFO spawned: 'crond' with pid 57
2017-05-21 13:12:10,593 DEBG fd 8 closed, stopped monitoring <POutputDispatcher at 47968944308664 for <Subprocess at 47968944310896 with name start in state STARTING> (stdout)>
2017-05-21 13:12:10,593 DEBG fd 10 closed, stopped monitoring <POutputDispatcher at 47968944310824 for <Subprocess at 47968944310896 with name start in state STARTING> (stderr)>
2017-05-21 13:12:10,593 INFO success: start entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
2017-05-21 13:12:10,593 INFO exited: start (exit status 0; expected)
2017-05-21 13:12:10,593 DEBG received SIGCLD indicating a child quit
2017-05-21 13:12:11,594 INFO success: crond entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)

 

Edited by PeterB
Add information


Ha, it seems that if I delete the minidlna.pid file before I do the restart:

root@Tower:/mnt/docker/appdata# docker exec -it binhex-minidlna rm /run/minidlna/minidlna.pid

then the minidlna process starts up perfectly on the restart!

root@Tower:/mnt/docker/appdata# docker exec -it binhex-minidlna ps -eaf | grep dlna
nobody      62     1  0 13:29 ?        00:00:00 /usr/bin/minidlnad -f /config/mi
root@Tower:/mnt/docker/appdata#

The problem appears to be that, for me at least, the pid file persists over a restart and/or a system reboot. As I understand it, the last update you issued was dealing with pid creation. I'm not sure why rolling back to the previous release [1.1.6-1-04] didn't fix the problem.

 

Is this the complete answer?  Why is it a problem for me, but not a problem for you?  Confused!

Edited by PeterB

On 2017-5-21 at 6:35 AM, PeterB said:

Is this the complete answer?  Why is it a problem for me, but not a problem for you?  Confused!


I dunno why it only affects you, but I can certainly put code in to clean up the pid file on start/restart. Let's hope this finally fixes it for you!
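For what it's worth, a stale-pid cleanup along those lines might look something like this (a sketch, not the actual init script; the function name is illustrative):

```shell
# Remove the pid file at startup if the recorded process is not running.
cleanup_stale_pid() {
  pidfile="$1"
  if [ -f "$pidfile" ]; then
    pid=$(cat "$pidfile")
    if ! kill -0 "$pid" 2>/dev/null; then
      rm -f "$pidfile"
    fi
  fi
}
cleanup_stale_pid /run/minidlna/minidlna.pid
```

Checking the pid with `kill -0` rather than unconditionally deleting the file guards against removing the pid file of a genuinely running daemon.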

3 hours ago, binhex said:

i can certainly put code in to clean up the pid file on start/restart

Err... but shouldn't the pid file be removed when the application quits?  Perhaps minidlna isn't closing down cleanly?

File permissions look okay:

-rw-rw-rw- 1 nobody users 3 May 21 13:31 /run/minidlna/minidlna.pid

... and directory permissions:

drwxrwxr-x 1 nobody users 24 May 21 13:31 minidlna

If I kill the minidlnad process, then the pid file does get removed and the container will restart in a fully functional condition.

 

If I restart the container without killing the minidlnad process then, on restart, the pid file date/time is unchanged from the earlier start, proving that the file has survived the restart.

 

By what mechanism is/should the minidlnad process be terminated when the container is stopped or restarted?

40 minutes ago, PeterB said:

By what mechanism is/should the minidlnad process be terminated when the container is stopped or restarted?

 

Firstly, it will definitely be terminated on restart, I can guarantee that. If you perform a stop/restart of the container then Tini (the init manager) will gracefully shut down the processes; if they fail to shut down gracefully then Docker will kill them. It's hard to say which scenario is happening here. Maybe minidlna isn't shutting down gracefully due to it being busy, so it's being forcefully killed off with no cleanup of the pid file, which then means on startup minidlna sees the pid file and assumes the process is already running? - theory
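That theory is easy to model outside Docker. The toy daemon below removes its pid file on SIGTERM but necessarily leaves it behind on SIGKILL, which is exactly the state minidlna would be in if the stop timeout expired and it was killed hard (all names here are illustrative):

```shell
# Start a toy daemon that writes a pid file and cleans it up on SIGTERM.
# Prints the daemon's pid; a SIGKILL leaves the pid file behind.
run_daemon() {
  bash -c 'trap "rm -f $0; exit 0" TERM; echo $$ > "$0"; while :; do sleep 0.2; done' "$1" &
  echo $!
}
```

After `kill -TERM` the pid file disappears; after `kill -KILL` it is still there, and a subsequent start that trusts the pid file would wrongly conclude the daemon is already running.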

 

 

Edited by binhex

1 hour ago, binhex said:

maybe minidlna isn't gracefully shutting down due to it being busy, so it's being forcefully killed off with no cleanup of the pid file

 

 

Well, I guess if docker is killing the process with a signal 9, then the pid file wouldn't get removed.

 

My photo frame is the only client of minidlna, so I turned it off, 'pkill'ed the minidlnad process (which removed the pid file), and restarted the container.  I waited more than ten minutes before doing a restart on the container.  When it started up again, the old pid file was still present and, of course, the minidlnad process was not running.  With no client, what could be keeping the process busy?

 

I have not seen a situation where (p)kill has failed to stop the minidlnad process cleanly, with deletion of the pid file. This suggests to me that either:

1) the system isn't waiting long enough for the Tini-initiated shutdown to complete 

or, more likely in my view:

2) Tini isn't actually doing what it's meant to do

 

I see that tini is running with pid 1.

[root@Tower /]# ps -eaf
root         1     0  0 00:12 ?        00:00:00 /usr/bin/tini -- /bin/bash /root/init.sh
root         7     1  1 00:12 ?        00:00:00 /usr/bin/python2 /usr/bin/supervisord -c /etc/supervisor.conf -n
root        57     7  0 00:12 ?        00:00:00 /bin/bash /root/crond.sh
root        61    57  0 00:12 ?        00:00:00 crond -n
nobody      62     1  0 00:12 ?        00:00:00 /usr/bin/minidlnad -f /config/minidlna.conf
root        63     0  0 00:12 ?        00:00:00 bash
root        69    63  0 00:12 ?        00:00:00 ps -eaf
[root@Tower /]#

 

How could we prove whether tini is working correctly, or not?
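One way to test the signal path directly might be to bypass the stop sequence and deliver SIGTERM yourself, then check whether minidlna logs its "good-bye" line (hypothetical commands; the log path is the host-side mapping of /config from earlier in the thread, and this obviously needs the container running):

```shell
# Send SIGTERM into the container (tini is pid 1 and should forward it)
# and then inspect the tail of minidlna.log for "received signal 15".
check_sigterm_delivery() {
  docker kill --signal=TERM binhex-minidlna
  sleep 2
  tail -n 5 /mnt/docker/appdata/minidlna/minidlna.log
}
```

If the "received signal 15, good-bye" line appears here but not on a normal `docker stop`, that would point at the stop path rather than at minidlna itself.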

 

I can see evidence, in supervisord.log, of crond being stopped and of waiting for it to die, but I see no evidence of minidlnad being stopped - though perhaps that is not logged in the same way, as a child process, for instance?:

Created by...
___.   .__       .__                   
\_ |__ |__| ____ |  |__   ____ ___  ___
 | __ \|  |/    \|  |  \_/ __ \\  \/  /
 | \_\ \  |   |  \   Y  \  ___/ >    < 
 |___  /__|___|  /___|  /\___  >__/\_ \
     \/        \/     \/     \/      \/
   https://hub.docker.com/u/binhex/

2017-05-23 00:12:22.714187 [info] Host is running unRAID
2017-05-23 00:12:22.741692 [info] System information Linux Tower 4.9.28-unRAID #1 SMP PREEMPT Sun May 14 11:03:53 PDT 2017 x86_64 GNU/Linux
2017-05-23 00:12:22.800289 [info] PUID defined as '99'
2017-05-23 00:12:22.882640 [info] PGID defined as '100'
2017-05-23 00:12:23.168672 [info] UMASK defined as '000'
2017-05-23 00:12:23.218772 [info] Permissions already set for volume mappings
2017-05-23 00:12:23.256912 [info] SCAN_ON_BOOT defined as 'no'
2017-05-23 00:12:23.297819 [info] SCHEDULE_SCAN_DAYS defined as '06'
2017-05-23 00:12:23.327048 [info] SCHEDULE_SCAN_HOURS defined as '02'
2017-05-23 00:12:24,359 CRIT Set uid to user 0
2017-05-23 00:12:24,359 INFO Included extra file "/etc/supervisor/conf.d/minidlna.conf" during parsing
2017-05-23 00:12:24,376 INFO supervisord started with pid 7
2017-05-23 00:12:25,378 INFO spawned: 'start' with pid 56
2017-05-23 00:12:25,379 INFO spawned: 'crond' with pid 57
2017-05-23 00:12:26,003 DEBG fd 8 closed, stopped monitoring <POutputDispatcher at 47810346303928 for <Subprocess at 47810346306160 with name start in state STARTING> (stdout)>
2017-05-23 00:12:26,003 DEBG fd 10 closed, stopped monitoring <POutputDispatcher at 47810346306088 for <Subprocess at 47810346306160 with name start in state STARTING> (stderr)>
2017-05-23 00:12:26,003 INFO success: start entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
2017-05-23 00:12:26,003 INFO exited: start (exit status 0; expected)
2017-05-23 00:12:26,003 DEBG received SIGCLD indicating a child quit
2017-05-23 00:12:27,004 INFO success: crond entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2017-05-23 00:22:11,142 WARN received SIGTERM indicating exit request
2017-05-23 00:22:11,155 DEBG killing crond (pid 57) with signal SIGTERM
2017-05-23 00:22:11,166 INFO waiting for crond to die
2017-05-23 00:22:12,167 DEBG fd 16 closed, stopped monitoring <POutputDispatcher at 47810345808600 for <Subprocess at 47810346306016 with name crond in state STOPPING> (stderr)>
2017-05-23 00:22:12,167 DEBG fd 11 closed, stopped monitoring <POutputDispatcher at 47810346305008 for <Subprocess at 47810346306016 with name crond in state STOPPING> (stdout)>
2017-05-23 00:22:12,167 INFO stopped: crond (terminated by SIGTERM)
2017-05-23 00:22:12,167 DEBG received SIGCLD indicating a child quit

 

I note that if I (p)kill minidlna, then the following line appears at the end of the minidlna.log file:

[2017/05/23 00:34:56] minidlna.c:154: warn: received signal 15, good-bye

However, that line never appears when the container is stopped or restarted.

Edited by PeterB
Add information about minidlna.log

59 minutes ago, PeterB said:

However, that line never appears when the container is stopped or restarted.

 

It's possible the process gets shut down before it gets a chance to echo back. It could be Tini, but I've not seen any issues with it to date, and it's the de facto init manager now recommended by Docker. I think for now I'm just going to delete that pid file on startup and see how we go.

3 hours ago, binhex said:

i think for now im just going to del that pid file on startup and see how we go.

Okay, that appears to have resolved the issue - many thanks.


Hi,

 

Excuse my stupid question, but where do I find the description of the Keys?

 

Br,

Johannes

Excuse my stupid question, but where do I find the description of the Keys?

Description of what keys?

Sent from my SM-G900F using Tapatalk

6 hours ago, ebnerjoh said:

Container Variable: SCHEDULE_SCAN_DAYS

Container Variable: SCHEDULE_SCAN_HOURS

 

 

Ahh, you mean the env var keys and example values. OK, well, they are both to do with how often you want minidlna to rescan your library: the first allows you to scan on certain day(s) of the week, and the second defines the time(s) on those days that each scan will occur. Example:

SCHEDULE_SCAN_DAYS=0,1,2 # run a scheduled scan on sun, mon, and tues
SCHEDULE_SCAN_HOURS=1,2,5 # run a scheduled scan on the days defined above at 1am, 2am, and 5am
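If (and this is a guess about the container's internals) those two variables simply feed a cron entry, the example above would correspond to something like the following crontab line, using minidlnad's rescan flag mentioned later in this thread as the job:

```
# Illustrative only - days 0,1,2 (Sun-Tue) at 01:00, 02:00 and 05:00
0 1,2,5 * * 0,1,2 /usr/bin/minidlnad -R
```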

 

 


Is this Scan really needed? Shouldn't the "inotify" config-setting find any new published files?

 

Br,

Johannes

On 2017-6-5 at 3:29 PM, ebnerjoh said:

Is this Scan really needed? Shouldn't the "inotify" config-setting find any new published files?

 

Br,

Johannes

 

It SHOULD, yes, but on a new install, if the user has scan on startup switched off, then they will be left with an empty library. I'm not sure at this point whether inotify would pick up the media and begin the scan or not; at least with the scheduled task in place, it will then scan their media at a later date. I can look into that and do some testing around inotify. My gut feeling, though, is that inotify will only trigger if it sees a new file created whilst minidlna is running, so it won't retrospectively pick up changes and trigger the update.

55 minutes ago, binhex said:

my gut feeling though is that inotify will only trigger if it sees a new file created whilst minidlna is running

 

Ah, now I understand. Not the case for me as I am running 24/7.

 

Thanks for the clarification!

 

Br,

Johannes 


Apart from ensuring that inotify is set to yes in /config/minidlna.conf, what else is necessary to make sure that the database is updated after adding a file to the media directory?

I've always relied on using minidlnad -R, but would like to get inotify working.

max_user_watches is already set to 524288

[root@Tower /]# cat /proc/sys/fs/inotify/max_user_watches
524288
[root@Tower /]# 

I have fewer than 524288 files in the /media directory:

[root@Tower /]# find /media -type f | wc -l
15411
[root@Tower /]# 

Adding a new file to the /media directory doesn't appear to get it added to the database.
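A possible next diagnostic (this assumes inotify-tools is available inside the container, which may well not be the case): watch /media for events while creating a file from the host. It's also worth noting that on unRAID the /mnt/user shares are served through the shfs FUSE layer, and FUSE mounts don't reliably generate inotify events, which would explain new files never being noticed even with inotify=yes:

```shell
# Watch the container-side media path for create/close_write events;
# then, from the host, copy a file into /mnt/user/Photos and see
# whether an event line is printed here.
watch_media_events() {
  docker exec binhex-minidlna inotifywait -m -r -e create -e close_write /media
}
```

If no event appears when a file lands in the share, the problem is below minidlna, and only a rescan (`minidlnad -R` or the schedule) will pick the file up.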


Hello forum.. 

 

Just a note for anyone using SiliconDust HDHomeRun TV tuners. I was able to install the HDHR Record Engine on unRAID using the miniDLNA container and posted the instructions here: HDHR Engine on unRAID with MiniDLNA. I'm not sure if this is the only DLNA service that will work for installing the HDHR DVR Engine, but miniDLNA seems to work for me. The current HDHR DVR software requires the DVR Engine to be installed on a local PC or on the NAS server directly. You cannot currently use networked share locations for DVR recordings.

 

 


I need some help with getting minidlna to work with my client. There are no files listed by my DLNA app, and I'm thinking I don't have my path properly described. I am not at all knowledgeable about the path syntax Linux uses. My user share (Movies) is a collection of the contents of my disk shares. The path I entered for Host Path on the setup page is /mnt/user/Movies/. I also edited the config file to match this entry. This path description is obviously just a guess.

 

Note: I don't know if this information is needed, but the name of my unraid box is "Tower."

 

I have tried various other path descriptions, but all have been unsuccessful. I would appreciate any help with this.

Edited by Two Wire


@Two Wire

 

This configuration is working for me; you will see your CPU/memory usage shoot up when the initial scan takes place. Include the full path to the dir/share you want scanned in both the container path and the host path, and also in minidlna.conf.
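As a concrete illustration (container-side paths, matching the /media mapping used earlier in this thread), the relevant minidlna.conf lines would look something like:

```
# point minidlna at the container-side path of the share
media_dir=/media
# pick up changes automatically while minidlnad is running
inotify=yes
```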

 

 

Untitled.png

Untitled2.png

Edited by raidserver

