[Support] Linuxserver.io - TVHeadend



4 minutes ago, plantsandbinary said:

You aren't the only one. It's not much to go on though.

Maybe supply more information if you want help. You managed to post information in the binhex thread.

Post the log from the container, the run command, and the address you used to access it.
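
For reference, those can usually be grabbed like this (a rough sketch, assuming the container is named tvheadend; adjust to your actual container name):

docker logs tvheadend       # container log
docker inspect tvheadend    # shows the effective configuration if you no longer have the original run command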

Link to comment
  • 1 month later...

I can't get this Docker container to open in host mode, even on a fresh install. In bridge mode it works fine, but in host mode I get this error in the web UI:

 

No server or forwarder data received

Your request for http://192.168.0.198:9981/extjs.html could not be fulfilled, because the connection to 192.168.0.198 (192.168.0.198) has been closed before Privoxy received any data for this request.

 

This is often a temporary failure, so you might just try again.

If you get this message very often, consider disabling connection-sharing (which should be off by default). If that doesn't help, you may have to additionally disable support for connection keep-alive by setting keep-alive-timeout to 0.

Link to comment
11 minutes ago, hgelpke said:

I can't get this Docker container to open in host mode, even on a fresh install. In bridge mode it works fine, but in host mode I get this error in the web UI:

 

No server or forwarder data received

Your request for http://192.168.0.198:9981/extjs.html could not be fulfilled, because the connection to 192.168.0.198 (192.168.0.198) has been closed before Privoxy received any data for this request.

 

This is often a temporary failure, so you might just try again.

If you get this message very often, consider disabling connection-sharing (which should be off by default). If that doesn't help, you may have to additionally disable support for connection keep-alive by setting keep-alive-timeout to 0.

Try without using Privoxy.
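
That error page is generated by Privoxy itself, so a quick sanity check is to hit TVHeadend directly, bypassing any proxy (a sketch using the address from the error above):

curl -v http://192.168.0.198:9981/extjs.html    # talks to the container directly, not through the browser's proxy

If that returns the TVHeadend page, the container is fine and the problem is the proxy in between.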

Link to comment

Hi... having a struggle with figuring out how to get the script I have for updating the EPG to run as a cron job. The script successfully downloads an XML file which TVH successfully reads and turns into the EPG. What my searching has not revealed is what I should call the script, which folder I should place it in, and how to add a cron job to call it a couple of times a day. Somehow I am totally failing to understand what I should be doing to automate this!

My apologies if this is a stupid request, but I am banging my head off the desk here!

Link to comment
On 4/23/2019 at 1:08 AM, Richie12a said:

Hi... having a struggle with figuring out how to get the script I have for updating the EPG to run as a cron job. The script successfully downloads an XML file which TVH successfully reads and turns into the EPG. What my searching has not revealed is what I should call the script, which folder I should place it in, and how to add a cron job to call it a couple of times a day. Somehow I am totally failing to understand what I should be doing to automate this!

My apologies if this is a stupid request, but I am banging my head off the desk here!

If this is a script run on Unraid, you can use the User Scripts plugin (or something similar) to run it.

In TVHeadend you can set when the grabber fetches the XML file; the default is twice a day.
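
As a rough sketch of the scheduling side (the script name and path here are hypothetical, e.g. /mnt/user/scripts/update_epg.sh; TVHeadend doesn't care what the script is called, only that the XML ends up where the grabber reads it), a crontab entry that runs it twice a day would look like:

# min hour dom mon dow  command
0     6,18 *   *   *    /bin/bash /mnt/user/scripts/update_epg.sh

The User Scripts plugin accepts the same cron expression under its "Custom" schedule option, which is usually the easier route on Unraid.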

Link to comment
  • 3 weeks later...

SchedulesDirect issues.

 

I've had to rebuild my server after some flooding in my house, and it's been a few years since I set up TVHeadend, so I'm not sure what I've done wrong here, but I can't get the EPG to work. I SSH'd in and ran the config script:

Quote

docker exec -it -u abc tvheadend /usr/bin/tv_grab_zz_sdjson --configure
Cache file for lineups, schedules and programs.
Cache file: [/config/.xmltv/tv_grab_zz_sdjson.cache]
If you are migrating from a different grabber selecting an alternate channel ID format can make the migration easier.
Select channel ID format:
0: Default Format (eg: I12345.json.schedulesdirect.org)
1: tv_grab_na_dd Format (eg: I12345.labs.zap2it.com)
2: MythTV Internal DD Grabber Format (eg: 12345)
Select one: [0,1,2 (default=0)]
As the JSON data only includes the previously shown date normally the XML output should only have the date. However some programs such as older versions of MythTV also need a time.
Select previously shown format:
0: Date Only
1: Date And Time
Select one: [0,1 (default=0)]
Schedules Direct username.
Username: *****
Schedules Direct password.
Password:
** POST https://json.schedulesdirect.org/20141201/token ==> 200 OK (1s)
** GET https://json.schedulesdirect.org/20141201/status ==> 200 OK
** GET https://json.schedulesdirect.org/20141201/lineups ==> 200 OK
This step configures the lineups enabled for your Schedules Direct account. It impacts all other configurations and programs using the JSON API with your account. A maximum of 4 lineups can by added to your account. In a later step you will choose which lineups or channels to actually use for this configuration.
Current lineups enabled for your Schedules Direct account:
#. Lineup ID | Name | Location | Transport
1. USA-OTA-78749 | Local Over the Air Broadcast  | 78749 | Antenna
Edit account lineups: [continue,add,delete (default=continue)]
Choose whether you want to include complete lineups or individual channels for this configuration.
Select mode: [lineups,channels (default=lineups)]
** GET https://json.schedulesdirect.org/20141201/lineups ==> 200 OK
Choose lineups to use for this configuration.
USA-OTA-78749 [yes,no,all,none (default=no)] all

I checked my appdata folder; it created a file, and these are the contents:

Quote

cache=/config/.xmltv/tv_grab_zz_sdjson.cache
channel-id-format=default
previously-shown-format=date
username=********
password=*******
mode=lineup
lineup=USA-OTA-78749

Username and password have been edited for posting.

 

I then went back into the TVHeadend EPG Grabber Modules section and enabled "internal: XMLTV.....Schedules Direct JSON...."

 

I then re-ran the internal EPG grabbers and I get this every time:

 

Quote

2019-05-08 20:43:15.494 xmltv: /usr/bin/tv_grab_zz_sdjson_sqlite: grab /usr/bin/tv_grab_zz_sdjson_sqlite

2019-05-08 20:43:15.496 spawn: Executing "/usr/bin/tv_grab_zz_sdjson_sqlite"

2019-05-08 20:43:18.509 spawn: Database not defined in config file /config/.xmltv/tv_grab_zz_sdjson_sqlite.conf.

2019-05-08 20:43:18.509 spawn: Please run 'tv_grab_zz_sdjson_sqlite --configure'

2019-05-08 20:43:18.553 xmltv: /usr/bin/tv_grab_zz_sdjson_sqlite: no output detected

2019-05-08 20:43:18.553 xmltv: /usr/bin/tv_grab_zz_sdjson_sqlite: grab returned no data

What did I miss here?

Link to comment
17 minutes ago, Japes said:

SchedulesDirect issues.

 

I've had to rebuild my server after some flooding in my house, and it's been a few years since I set up TVHeadend, so I'm not sure what I've done wrong here, but I can't get the EPG to work. I SSH'd in and ran the config script.

I checked my appdata folder; it created a file, and these are the contents.

Username and password have been edited for posting.

I then went back into the TVHeadend EPG Grabber Modules section and enabled "internal: XMLTV.....Schedules Direct JSON...."

I then re-ran the internal EPG grabbers and I get this every time.

What did I miss here?

 

Did you activate the wrong grabber? Do you see that there is "sqlite" at the end of the grabber name in the log?
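
For anyone hitting the same thing, this is quick to check from the host (a sketch, assuming the container is named tvheadend as in the configure command quoted earlier):

docker exec -it tvheadend sh -c 'ls /usr/bin/tv_grab_zz_sdjson*'
# tv_grab_zz_sdjson          <- the grabber configured above
# tv_grab_zz_sdjson_sqlite   <- a separate grabber with its own --configure run and its own config file

Each grabber only reads the config written by its own --configure, which is why enabling the _sqlite module after configuring the plain one produces the "Database not defined in config file" error in the log.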

Link to comment
23 hours ago, saarg said:

 

Did you activate the wrong grabber? Do you see that there is "sqlite" at the end of the grabber name in the log?

Geez... now I feel like an idiot. Thanks a ton. I didn't realize there were two grabbers listed for Schedules Direct, and I activated the first one I saw. I changed that, and the log shows it's running now.

 

EDIT: Just posting to follow up and say I'm back up and running.  Thanks again.

Edited by Japes
Link to comment

Hello,

A small problem I'm facing with the linuxserver TVHeadend container.

I'm trying to export the channel list as an m3u file.

 

I'm using: https://user:password@myserver.com:443/playlist

This used to work when I used the TVHeadend Unraid plugin instead of the Docker container. The plugin would generate URLs with a ticket parameter, and that was used for authentication.

 

Using the Docker container, the URLs in the m3u file look like:

https://myserver.com:443/stream/channelid/1047947111?profile=pass

And this does not work.

 

However, when I'm in the Services section of the configuration and click the play icon, the generated URLs look like:

https://myserver.com:443/stream/service/7dfsdfdsb61fe99ea?ticket=5c0ec8bc13cab1d84f476e9428886dcbd67b748a

And this works.

 

So to keep it short: the server works, but I'm unable to generate an m3u list with correct URLs for each channel.

 

Maybe somebody has a suggestion.

Thanks !
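
For what it's worth, the playlist can also be pulled from a shell with explicit credentials to see exactly what URLs TVHeadend writes into it (a sketch, reusing the playlist URL above):

curl -u user:password "https://myserver.com:443/playlist"

As the quoted example shows, the per-channel URLs in that file carry only the ?profile= parameter, not the credentials, so whatever plays the list has to supply authentication itself unless the matching access entry allows anonymous streaming.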

 

 

Link to comment
7 hours ago, suyac said:

Thanks for your answer.

 

It is for personal usage, but adding user:password to the generated URL doesn't work either,

e.g. https://user:password@myserver.com:443/stream/channelid/42343218?profile=pass

 

 

I just tried it here as well, with and without a reverse proxy.

Using the external links directly as described works; through the reverse proxy it doesn't work here either, but I guess that's a "stream" location that needs adding to the reverse proxy config...

In case you figure it out, let me know ;)
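
One way to confirm it is the proxy rather than TVHeadend is to hit the same stream URL both ways (a sketch; 192.168.x.x:9981 is a placeholder for wherever TVHeadend actually listens internally):

# through the reverse proxy
curl -u user:password -o /dev/null -w '%{http_code}\n' "https://myserver.com:443/stream/channelid/42343218?profile=pass"
# directly against TVHeadend, bypassing the proxy
curl -u user:password -o /dev/null -w '%{http_code}\n' "http://192.168.x.x:9981/stream/channelid/42343218?profile=pass"

If only the direct request returns 200, the /stream path (including its authorization headers) still needs to be passed through in the reverse proxy configuration.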

Link to comment
3 minutes ago, Glassed Silver said:

Anyone got this working with MagentaTV in Germany?


When I feed it the m3u for MagentaTV from this list, it finds the muxes fine, but the scan FAILs and I cannot create services from the muxes.

 

The same m3u works just fine in VLC for example.

Most likely you are running the container with bridge networking. If you use a multicast source, you need to use host networking.
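
For reference, host networking on a plain Docker host looks something like this (a sketch; the config path and PUID/PGID values are placeholders to adapt):

docker run -d --name=tvheadend \
  --net=host \
  -e PUID=99 -e PGID=100 \
  -v /path/to/tvheadend/config:/config \
  linuxserver/tvheadend

On Unraid the equivalent is setting Network Type to Host in the container template; with host networking any port mappings in the template are ignored.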

 

Link to comment
40 minutes ago, saarg said:

Most likely you are running the container with bridge networking. If you use a multicast source, you need to use host networking.

 

Nope, didn't help. :(

 

Edit: attached screenshot of VLC's info dialogue about a sample stream.

 

Maybe it's a codec issue? The problem is going from mux to service; I don't even see a button to map the mux to a service. (But maybe that's just the old, outdated UI in the video tutorial I tried to partially adapt.)

[Attached screenshot: Bildschirmfoto 2019-05-13 um 8.25.28 pm.png]

Edited by Glassed Silver
Link to comment
2 hours ago, saarg said:

Then I don't know. Probably best to ask on the tvheadend forum.

The m3u file (with pipes) expects the ffmpeg binary in /usr/bin/ffmpeg. I guess that's not where it is, right?

The other playlist is RTP-based.

 

I did find this thread https://tvheadend.org/boards/4/topics/37025?r=37053#message-37053.

 

What I don't get, however, is that when I add a new port in the Docker settings, it does nothing.

 

[Attached screenshot: Bildschirmfoto 2019-05-13 um 10.53.56 pm.png]

Link to comment
7 minutes ago, Glassed Silver said:

The m3u file (with pipes) expects the ffmpeg binary in /usr/bin/ffmpeg. I guess that's not where it is, right?

The other playlist is RTP-based.

I did find this thread https://tvheadend.org/boards/4/topics/37025?r=37053#message-37053.

What I don't get, however, is that when I add a new port in the Docker settings, it does nothing.

[Attached screenshot: Bildschirmfoto 2019-05-13 um 10.53.56 pm.png]

If you still run it in host mode, you don't need to add a port. It's only valid for bridge mode.

ffmpeg is in /usr/bin/
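
For anyone rewriting such a playlist by hand, a TVHeadend IPTV pipe mux that uses the container's ffmpeg generally looks like this (a sketch; the source URL is a placeholder and the ffmpeg options are just one common choice):

pipe:///usr/bin/ffmpeg -loglevel fatal -i http://example.com/source/stream.m3u8 -c copy -f mpegts pipe:1

TVHeadend executes whatever follows pipe:// and reads the MPEG-TS written to stdout, so the binary path only has to be valid inside the container, which /usr/bin/ffmpeg is here.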

Link to comment
