[Deprecated] tobbenb's Docker Template Repository - WebGrab+Plus



saarg...

 

Does your container support the use of picons?

 

s8l14Fd.jpg

 

John

I haven't tried, but I don't see why not. Since you can set the path, you can choose to use an existing volume like data, recordings, or config.

I would just create a picon folder in the appdata folder for tvheadend. Then in tvheadend config it would be /config/picon.
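
For example, the setup could look something like this (the host path is just an example, and the exact name of the icon/picon setting depends on your tvheadend version):

Host path (Unraid):    /mnt/user/appdata/tvheadend/picon
Container path:        /config/picon
tvheadend picon path:  file:///config/picon

Then you just drop your picon files into that folder.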

 

Link to comment

I'll say one thing... channel switching is certainly faster with TVH than with Myth.

 

Oh...and there is a TVH plugin for Emby Server.  I may need to make TVH my new default PVR.

 

Still no love from the WGP+ guys yet regarding my SchedulesDirect issue.

I signed up for a trial account for Schedules Direct and got it working  ;)

What you are probably doing wrong is that you didn't remove the * in front of this line in schedulesdirect.org.ini

url_index.headers {credentials=ENTER_USERNAME,ENTER_PASSWORD}

 

The default is like this:

*url_index.headers {credentials=ENTER_USERNAME,ENTER_PASSWORD}

 

So they just forgot to mention in the howto that you need to remove the * in front of the credentials line.

 

This is the log from my test grab after the channels.xml was created and copied to the config:

 

processing /data/guide.xml ..............
update requested for - 9 - out of - 9 - channels for 2 day(s)
update mode - set per individual channel


i=index .=same c=change g=gab r=replace n=new 

ESPN updating, using site SCHEDULESDIRECT.ORG, mode incremental
innnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnn
0.00 sec/update

WCBS updating, using site SCHEDULESDIRECT.ORG, mode incremental
iinnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnn
0.00 sec/update

WNBC updating, using site SCHEDULESDIRECT.ORG, mode incremental
iinnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnn
0.00 sec/update

ESPNEWS updating, using site SCHEDULESDIRECT.ORG, mode incremental
iinnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnn
0.00 sec/update

WCBSDT (WCBS-DT) updating, using site SCHEDULESDIRECT.ORG, mode incremental
iinnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnn
0.00 sec/update

ESPNHD updating, using site SCHEDULESDIRECT.ORG, mode incremental
iinnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnn
0.00 sec/update

ESPNEWS HD updating, using site SCHEDULESDIRECT.ORG, mode incremental
iinnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnn
0.00 sec/update

ESPN University HD updating, using site SCHEDULESDIRECT.ORG, mode incremental
iinnnnnnnnnnnnnnnnnnnnnnnnnn
0.00 sec/update

WINS updating, using site SCHEDULESDIRECT.ORG, mode incremental
iinnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnnn
0.00 sec/update



job finished .. done in 76 seconds
*** Running /etc/rc.local...
*** Booting runit daemon...
*** Runit started as PID 20

Link to comment

Thanks for verifying, but that's definitely not the issue.  My .ini file is below (I replaced the username and password).

 

What system did you use to generate your channels.xml file?  I am using Win 8.1 and wonder if that could be the problem.  You need to first generate the channels and copy/paste them into the webgrab config XML.  You did have to generate your own channels XML file first, as outlined here, right?  This is the part that is failing for me.

 

C.
To generate your own .channels.xml file (extra info can be found at http://webgrabplus.com/node/289)
1. in your siteini:
    for all the lines between @auto_xml_channel_start & @auto_xml_channel_end, remove the FIRST * at the beginning of the line (= uncomment)
Save schedulesdirect.org.ini
2. in your .config.xml:
    Add only one dummy channel in the WebGrab++.config.xml file
      <channel update="f" site="schedulesdirect.org" site_id="" xmltv_id="dummy">dummy</channel>
    Only grab for 1 day
      <timespan>0</timespan>
Save WebGrab++.config.xml

 

Can you outline a little more what you actually did?

 

**------------------------------------------------------------------------------------------------
* @header_start
* WebGrab+Plus ini for grabbing EPG data from TvGuide websites
* @Site: schedulesdirect.org
* @MinSWversion: V1.1.1/54
* @Revision 1 - [23/10/2014] Jan van Straaten
*   - adapted site changes
* @Revision 0 - [31/08/2013] Jan van Straaten / Francis De Paemeleere
*   - creation
* @Remarks: You need a login and password for this site
* @header_end
**------------------------------------------------------------------------------------------------

site {url=schedulesdirect.org|timezone=UTC|maxdays=16.1|cultureinfo=en-GB|charset=UTF-8|titlematchfactor=90|keepindexpage|firstshow=1}
site {ratingsystem=MPAA|subtitlestype=teletext|episodesystem=onscreen}
url_index {url|http://dd.schedulesdirect.org/schedulesdirect/tvlistings/xtvdService}
url_index.headers {method=SOAP}
url_index.headers {customheader=SOAPAction=urn:tvDataDelivery#download}
url_index.headers {customheader=Accept-Encoding=gzip,deflate}
*url_index.headers {host=dd.schedulesdirect.org}
*
url_index.headers {credentials=111111,222222}
*
url_index.headers {accept=text/xml|contenttype=text/xml;charset="utf-8"}
url_index.headers {postdata='index_variable_element'}

scope.range {(urlindex)|end}
** timespan calculation to enable to add the requested timespan from the config
index_variable_element.modify {calculate(format=F1)|'config_timespan_days' 1 +} * add 1 day because config_timespan_days is 0 based
index_variable_element.modify {calculate(format=timespan,hours)} * convert to the proper timespan string required for index_temp_2
index_temp_1.modify {calculate(format=date,yyyy-MM-dd)|'urldate'}
index_temp_2.modify {calculate(format=date,yyyy-MM-dd)|'urldate' 'index_variable_element' +}
index_variable_element.modify {clear} * clear the timespan value
index_variable_element.modify {addstart|<soapenv:Envelope xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:urn="urn:tvDataDelivery"><soapenv:Header/><soapenv:Body><urn:download soapenv:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/"><startTime xsi:type="xsd:string">'index_temp_1'T00:00:00Z</startTime><endTime xsi:type="xsd:string">'index_temp_2'T00:00:00Z</endTime></urn:download></soapenv:Body></soapenv:Envelope>}
end_scope

index_showsplit.scrub {multi||||} * copies the whole index page
scope.range {(splitindex)|end}
index_temp_1.modify {addstart|'index_showsplit'}  * contains the whole xml file
index_variable_element.modify {clear}
index_variable_element.modify {addstart|'config_site_id'}
index_showsplit.modify {substring(type=regex)|(<schedule [^>]* station=\''index_variable_element'\' [^>]*>)}
index_showsplit.modify {cleanup(removeduplicates=equal,100)}
index_variable_element.modify {clear}
index_variable_element.modify {addstart|'index_temp_1'}  * contains the whole xml file
end_scope
**
index_start.scrub {single|time='|T|Z'|Z'}
index_duration.scrub {single|duration='|PT|M'|M'}
index_temp_5.scrub {single|program='||'|'}
index_videoquality.scrub {single|hdtv='||'|'}
index_videoquality.modify {replace(not="")|'index_videoquality'|HDTV}

index_subtitles.scrub {single|closeCaptioned='||'|'}
index_subtitles.modify {replace(not="")|'index_subtitles'|true}

scope.range {(indexshowdetails)|end}
index_start.modify {calculate(format=utctime)}
index_duration.modify {replace|H|:}

** get the programs part
index_temp_4.modify {substring(type=regex)|'index_variable_element' "^.*<program id=\''index_temp_5'\'>(.*?)</program>"}
index_title.modify          {substring(type=regex)|'index_temp_4' "<title>([^<]*)"}
index_subtitle.modify       {substring(type=regex)|'index_temp_4' "<subtitle>([^<]*)"}
index_description.modify    {substring(type=regex)|'index_temp_4' "<description>([^<]*)"}
index_rating.modify         {substring(type=regex)|'index_temp_4' "<mpaaRating>([^<]*)"}
index_temp_2.modify         {substring(type=regex)|'index_temp_4' "<advisory>([^<]*)</advisory>"}
index_rating.modify         {addend|\|'index_temp_2'} *advisory added to rating
index_productiondate.modify {substring(type=regex)|'index_temp_4' "<year>([^<]*)"}
index_episode.modify        {substring(type=regex)|'index_temp_4' "<syndicatedEpisodeNumber>([^<]*)"}
index_starrating.modify     {substring(type=regex)|'index_temp_4' "<starRating>(\**)[\+]*</starRating>"}  * full stars
index_temp_1.modify         {substring(type=regex)|'index_temp_4' "<starRating>.*(\+)</starRating>"}      * half star
index_starrating.modify     {calculate(not="" type=char format=F0)|#}
index_starrating.modify     {addend('index_temp_1' not="")|.5}
index_starrating.modify     {addend(not="")| / 4}
index_category.modify       {substring(type=regex)|'index_temp_4' "<showType>([^<]*)"}

* get the productionCrew part
index_temp_4.modify {substring(type=regex)|'index_variable_element' "^.*<crew program=\''index_temp_5'\'>(.*?)</crew>"}
index_actor.modify          {substring(type=regex)|'index_temp_4' "<member>[^<]*<role>Actor</role>(.*?)</member>"}
index_actor.modify          {cleanup(tags="<"">")}
index_temp_1.modify         {substring(type=regex)|'index_temp_4' "<member>[^<]*<role>Guest Star</role>(.*?)</member>"}
index_temp_1.modify         {cleanup(tags="<"">")}
index_temp_1.modify         {addend(not "")| (Guest Star)}
index_actor.modify          {addend('index_temp_1' not "")|'index_temp_1'\|}
index_actor.modify          {cleanup(removeduplicates)}
index_presenter.modify      {substring(type=regex)|'index_temp_4' "<member>[^<]*<role>Host</role>(.*?)</member>"}
index_presenter.modify      {cleanup(tags="<"">")}
index_director.modify       {substring(type=regex)|'index_temp_4' "<member>[^<]*<role>Director</role>(.*?)</member>"}
index_director.modify       {cleanup(tags="<"">")}
index_producer.modify       {substring(type=regex)|'index_temp_4' "<member>[^<]*<role>Producer</role>(.*?)</member>"}
index_producer.modify       {cleanup(tags="<"">")}
index_temp_1.modify         {substring(type=regex)|'index_temp_4' "<member>[^<]*<role>Executive Producer</role>(.*?)</member>"}
index_temp_1.modify         {cleanup(tags="<"">")}
index_temp_1.modify         {addend(not "")| (Executive Producer)}
index_producer.modify       {addstart('index_temp_1' not="")|'index_temp_1'\|}
index_producer.modify       {cleanup(removeduplicates)}
index_writer.modify         {substring(type=regex)|'index_temp_4' "<member>[^<]*<role>Writer</role>(.*?)</member>"}
index_writer.modify         {cleanup(tags="<"">")}
* get the genres part
index_temp_4.modify {substring(type=regex)|'index_variable_element' "^.*<programGenre program=\''index_temp_5'\'>(.*?)</programGenre>"}
index_temp_1.modify       {substring(type=regex)|'index_temp_4' "<genre>.*?<class>(.*?)</class>"}
index_category.modify     {addstart('index_temp_1' not= "")|'index_temp_1'\|}
end_scope

**  _  _  _  _  _  _  _  _  _  _  _  _  _  _  _  _  _  _  _  _  _  _  _  _  _  _  _  _  _  _  _  _
**      #####  CHANNEL FILE CREATION (only to create the xxx-channel.xml file)
**
** @auto_xml_channel_start
index_site_id.scrub  {regex||(<station .*?</station>)||}
scope.range {(channellist)|end}
index_site_channel.modify {substring(type=regex)|'index_site_id' "<name>(.*?)</name>"}
index_site_id.modify {substring(type=regex)|'index_site_id' "<station id=\'([0-9]*)\'>"}
index_site_id.modify {cleanup(removeduplicates=equal,100 link="index_site_channel")}
end_scope
** @auto_xml_channel_end

 

 

These are the instructions that I followed:

 

----------------------------------------------------------------------
SchedulesDirect.com                               revised October 2014
----------------------------------------------------------------------

Before you can use this siteini, you must:
A. Get a membership and register at schedulesdirect.org/account, get a login name and a password, and create one or more lineups
B. Add your login and password
C. Generate your own SchedulesDirect.channels.xml file (because this is different for every unique login)

A. 
Follow the instructions at schedulesdirect.org
!! Important when choosing a lineup: keep the number of channels low !! Only add channels that you are really going to use! Keep in mind that data is downloaded even for channels you do not grab an EPG for; that takes time and slows down the process.

B.
Open the SchedulesDirect.com.ini file, look for the following line, and change the credentials to yours.
url_index.headers {credentials=ENTER_USERNAME,ENTER_PASSWORD}

C.
To generate your own .channels.xml file (extra info can be found at http://webgrabplus.com/node/289)
1. in your siteini:
    for all the lines between @auto_xml_channel_start & @auto_xml_channel_end, remove the FIRST * at the beginning of the line (= uncomment)
Save schedulesdirect.org.ini
2. in your .config.xml:
    Add only one dummy channel in the WebGrab++.config.xml file
      <channel update="f" site="schedulesdirect.org" site_id="" xmltv_id="dummy">dummy</channel>
    Only grab for 1 day
      <timespan>0</timespan>
Save WebGrab++.config.xml
3. Now just run WG++ and your .channels.xml file should be generated, if all goes well.
4. You now have your .channels.xml file. The channel lines inside it can be used to configure the WebGrab++.config.xml file.
5. Revert the changes made in C.1.
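
For reference, while generating the channel file the relevant part of WebGrab++.config.xml would look something like this (the filename path is just an example; everything else stays as it is in your config):

<filename>/data/guide.xml</filename>
<timespan>0</timespan>
<channel update="f" site="schedulesdirect.org" site_id="" xmltv_id="dummy">dummy</channel>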

Link to comment

UPDATE:  I DID IT!!!

 

:)

 

So the issue I was experiencing must have been a Windows or Win 8 issue.  I performed the same tasks in saarg's WGP+ container and got the full channel list in schedulesdirect.org.channels.xml.  I can now copy/paste those into the webgrab config XML and let 'er fly.
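
For anyone following along, the channel lines you copy out of schedulesdirect.org.channels.xml and paste into WebGrab++.config.xml end up looking something like this (the site_id below is made up; yours come from the generated file):

<channel update="i" site="schedulesdirect.org" site_id="10098" xmltv_id="WCBS">WCBS</channel>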

 

Thanks for the help saarg!

 

John

Link to comment


Great  :D

I guess we'll blame Win 8 then :)

Link to comment

;D ;D ;D

 

WebGrab+Plus/w MDB & REX Postprocess -- version 1.54.6/0.01 -- Jan van Straaten
-------------------------------------------------------------------------------
job started at 06/05/2015 08:39:10


reading config file: /config/WebGrab++.config.xml
loading timezone data
embedded timezones source: Webgrab_Plus.TimezonesData.txt
found: /config/schedulesdirect.org.ini -- Revision 1

running on Unix 4.0.4.0

input file /data/guide.xml not found   ... created a new one ... 
update requested for - 84 - out of - 84 - channels and for 8 day(s)
update mode set per individual channel

channel WMCNDT (WMCN-DT) site -- SCHEDULESDIRECT.ORG -- update mode incremental

   Summary for update of WMCNDT (WMCN-DT)
     missing shows added       0
     changed shows updated     0
     new shows added           326
     unchanged shows inspected 0
     total after update        326
     elapstime / updated show  0.00 seconds


channel KYWDT (KYW-DT) site -- SCHEDULESDIRECT.ORG -- update mode incremental

   Summary for update of KYWDT (KYW-DT)
     missing shows added       0
     changed shows updated     0
     new shows added           245
     unchanged shows inspected 0
     total after update        245
     elapstime / updated show  0.00 seconds


channel WPVIDT (WPVI-DT) site -- SCHEDULESDIRECT.ORG -- update mode incremental

   Summary for update of WPVIDT (WPVI-DT)
     missing shows added       0
     changed shows updated     0
     new shows added           244
     unchanged shows inspected 0
     total after update        244
     elapstime / updated show  0.01 seconds

Link to comment

It's time to open the champagne  ;D

 

OK, looks like I may have to give this a try.  So we're clear on which instructions to use, could you post a step-by-step?

 

Thank you for figuring this out!

 

JM

There is a guide already that comes with the INI sitepack. You can find it in this post by johnodon http://lime-technology.com/forum/index.php?topic=37671.msg380532#msg380532

The only thing that is not in the guide is to remove the * in front of the credentials line.

Link to comment


Agreed.  They are not very clear about the * on the user/password line.

 

JM...if you run into any snags, just PM me.

 

John

Link to comment


How long does it take to update the guide.xml the first time?

Link to comment

Yeah... I'm trying to pull 14 days of data for 70 channels.  I'm a day and a half into chugging through it and probably only 3/4 finished.  Myth does this in seconds.  :S

This does not sound right. Are there any error messages in the container log?

Did Myth pull the data from Schedules Direct, or did you use tv_grab before?

 

Edit: I pulled 9 channels and 15 days in 52 minutes.

Link to comment


I just tried to grab with tv_grab_na_dd and it is way faster, but it downloads 0.5MB less data than wg++ does. From a quick scroll through the xml files, it looks like there is more info on each show in the xml file created by wg++.

But wg++ looks to be a turtle regarding grabbing from SD. 

Link to comment

Thanks so much for this docker BTW, nice work!  :D I've been dying to find a way to migrate my separate MythTV box into unRAID, as it will eliminate an entire computer for me.

 

So wg++ is always going to be extremely slow when grabbing from schedulesdirect? I haven't gotten that far yet, but I use schedulesdirect, and still have another 11 months left in my subscription so I'd like to use it if I can. That's kind of a bummer if it doesn't work, but as long as it's done in the middle of the night or something I guess it's fine. I have around 150 channels. Mythtv seemed to do this in no time at all.

 

Also, I have a question for you, saarg. My plan is to add a 1TB HDD outside of the array to use for recordings (and transcoding, if possible and it makes sense), as I don't want them stored on the array for a few different reasons, the primary one being performance and the possibility of spinning up multiple drives, etc. So if I add another 1TB HDD but do not add it to the array, what would be the best filesystem to format this drive with? Should I just use XFS? Or should I try to use some type of ext format that the docker uses inside? Then also, in the parameters for the docker paths to mount, should I just add /recordings as /mnt/disk# ?

Link to comment


I do not know why WG++ is so slow grabbing from Schedules Direct, or if it's docker related. I just grabbed from Schedules Direct in my tvheadend VM on my other Unraid server and it took about 20 minutes to grab the same amount as the container did in 52 minutes.

I will try to see if there is anything I can do to speed things up, or whether it's a WG++ problem. I might add tv_grab_na_dd to the tvheadend container, but that means you have to configure it on another system or exec into the docker to make the config.
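
If I do add it, making the config from the host would probably be something along these lines (the container name is just an example):

# "tvheadend" is whatever you named the container
docker exec -it tvheadend tv_grab_na_dd --configure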

 

I do not think it matters much which file system you choose.

I haven't tried to mount a disk outside the array as I'm not running a parity disk. In the template for tvheadend you should choose the path where the disk is mounted in Unraid for the /recordings volume.
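
So the mapping would be something like this (the host path is only an example; point it at wherever the 1TB disk actually gets mounted):

Container path:  /recordings
Host path:       /mnt/disks/recordings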

Link to comment

Hi, Saarg.

 

Thanks for the tvheadend docker template. It's working great for me. (tvheadend unstable, extra parameters added separately)

 

CHBMB asked me to try out his RC4 OE MediaBuild kernel drivers to see if they work with my Hauppauge HVR-2250. Though your MediaTreeCheck plugin detects the Media Tree and DVB adapter, tvheadend won't start. I get the following error in the log:

tvheadend: client.c:626: avahi_client_free: Assertion `client' failed.
2015-06-07 17:57:59.646 [ ALERT] CRASH: Signal: 6 in PRG: /usr/local/bin/tvheadend (4.1-41~gef5b43a) [c727ae071e0c42d3c46fd6c57ca337a401bbf0f0] CWD: /etc/service/tvheadend 
2015-06-07 17:57:59.646 [ ALERT] CRASH: Fault address 0x630000000e (N/A)
2015-06-07 17:57:59.646 [ ALERT] CRASH: Loaded libraries: /usr/lib/libhdhomerun.so.1 /lib/x86_64-linux-gnu/libssl.so.1.0.0 /lib/x86_64-linux-gnu/libcrypto.so.1.0.0 /lib/x86_64-linux-gnu/libz.so.1 /usr/lib/liburiparser.so.1 /usr/lib/x86_64-linux-gnu/libavahi-common.so.3 /usr/lib/x86_64-linux-gnu/libavahi-client.so.3 /lib/x86_64-linux-gnu/libdbus-1.so.3 /lib/x86_64-linux-gnu/libdl.so.2 /lib/x86_64-linux-gnu/libpthread.so.0 /lib/x86_64-linux-gnu/libm.so.6 /lib/x86_64-linux-gnu/librt.so.1 /lib/x86_64-linux-gnu/libc.so.6 /lib64/ld-linux-x86-64.so.2 /lib/x86_64-linux-gnu/libnss_compat.so.2 /lib/x86_64-linux-gnu/libnsl.so.1 /lib/x86_64-linux-gnu/libnss_nis.so.2 /lib/x86_64-linux-gnu/libnss_files.so.2 
2015-06-07 17:57:59.646 [ ALERT] CRASH: Register dump [23]: fefefefefefefe0000002ba510000de00000000000000008000000000000020200002ba4ea50891900002ba4ea508fe000002ba4ef7a39c000002ba4ef7a3700000000000000000e000000000000002f00002ba4eb40383000002ba4e94750000000000000000006000000000000000000002ba4eb2b9cc900002ba4ef7a274800002ba4eb2b9cc90000000000000202000000000000003300000000000000000000000000000000fffffffe7ffbba130000000000000000
2015-06-07 17:57:59.646 [ ALERT] CRASH: STACKTRACE
2015-06-07 17:57:59.818 [ ALERT] CRASH: /tmp/tvheadend/src/trap.c:148 0x462e59
2015-06-07 17:57:59.841 [ ALERT] CRASH: ??:0 0x2ba4eab67340
2015-06-07 17:57:59.841 [ ALERT] CRASH: gsignal+0x39 (/lib/x86_64-linux-gnu/libc.so.6)
2015-06-07 17:57:59.841 [ ALERT] CRASH: abort+0x148 (/lib/x86_64-linux-gnu/libc.so.6)
2015-06-07 17:57:59.866 [ ALERT] CRASH: ??:0 0x2ba4eb2b2b86
2015-06-07 17:57:59.889 [ ALERT] CRASH: ??:0 0x2ba4eb2b2c32
2015-06-07 17:57:59.913 [ ALERT] CRASH: ??:0 0x2ba4ea50106b
2015-06-07 17:57:59.971 [ ALERT] CRASH: /tmp/tvheadend/src/avahi.c:270 0x4edb3d
2015-06-07 17:58:00.020 [ ALERT] CRASH: /tmp/tvheadend/src/wrappers.c:145 0x436d9e
2015-06-07 17:58:00.042 [ ALERT] CRASH: ??:0 0x2ba4eab5f182
2015-06-07 17:58:00.042 [ ALERT] CRASH: clone+0x6d (/lib/x86_64-linux-gnu/libc.so.6)
*** Killing all processes...

 

He suggested I post this here as well in case it was something with the plugin. I'll test RC5 in the next day or two to see what happens with it.

 

Like I said, I'm currently using your docker template myself by adding my adapter firmware into bzroot, and that is all that is required... I don't need a media tree build.

 

Thanks again.

Doug

Link to comment

Doug, I've been giving this some thought and I'm going to try to do something a little different with the mediabuild and see if that works.  I'll get to it tonight hopefully, Mrs CHBMB permitting. (I'm in the UK, so tonight is within the next 8 hours; still at work at the moment.) I'll PM you when I have something.

Link to comment
