[Support] alturismo - Repos


Recommended Posts

11 minutes ago, alturismo said:

@ethan9482 here it's just


myusernamehere
mypasswordhere

like in the sample ...

 

it's not

username=myusername

OR

username='myusername'

 

so when I take your sample from up there


email@address
password

I don't know how you arrive at username=email@address ...

Cheers - nothing other than my own stupidity. I'll go and try this now and hopefully it then loads up!

Link to comment

@mihcox could you please post your docker run command

 

and maybe use this as guidance:

 

guide2go -configure /guide2go/"your_epg_name".yaml

you're missing the /guide2go/ in your command; your sample should look like this:

guide2go -configure /guide2go/Frontier.yaml

 

Edited by alturismo
Link to comment
15 hours ago, alturismo said:

@mihcox could you please post your docker run command

 

and maybe use this as guidance:

 

guide2go -configure /guide2go/"your_epg_name".yaml

you're missing the /guide2go/ in your command; your sample should look like this:


guide2go -configure /guide2go/Frontier.yaml

 

Solved my issue, thank you so much! 

 

I do see a lot of lines returning the following; is this something I should be concerned about:

 

2020/05/27 14:51:58 [ERROR] json: cannot unmarshal object into Go struct field SDMetadata.data of type []struct { Aspect string "json:\"aspect\""; Height string "json:\"height\""; Size string "json:\"size\""; URI string "json:\"uri\""; Width string "json:\"width\"" }

 

2020/05/27 14:54:34 [G2G  ] Deleted Program Informations: 0
{
  "status": true
}{
  "status": true
}<html><head><title>Unauthorized</title></head><body><h1>401 Unauthorized</h1></body></html>

Edited by mihcox
Link to comment
  • 2 weeks later...

Some serious problems with the updated plugin here. I thought it was working and now I've run out of EPG. 😯

 

I've tried recreating this with a new lineup and it's repeatable. I just used guide2go (with Schedules Direct) to create the yaml for a new xml:

guide2go -configure z.yaml

 

Then added all channels, no edits to the YAML. Ran:

guide2go -config z.yaml

 

This output is produced:

2020/06/06 11:39:37 [G2G  ] Version: 1.1.1
2020/06/06 11:39:37 [URL  ] https://json.schedulesdirect.org/20141201/token
2020/06/06 11:39:37 [SD   ] Login...OK

2020/06/06 11:39:37 [URL  ] https://json.schedulesdirect.org/20141201/status
2020/06/06 11:39:37 [SD   ] Account Expires: 2020-09-24 16:52:44 +0000 UTC
2020/06/06 11:39:37 [SD   ] Lineups: 2 / 4
2020/06/06 11:39:37 [SD   ] System Status: Online [No known issues.]
2020/06/06 11:39:37 [G2G  ] Channels: 163
2020/06/06 11:39:37 [URL  ] https://json.schedulesdirect.org/20141201/lineups/GBR-1000080-DEFAULT
2020/06/06 11:39:38 [URL  ] https://json.schedulesdirect.org/20141201/lineups/GBR-1000203-DEFAULT
2020/06/06 11:39:38 [G2G  ] Download Schedule: 7 Day(s)
2020/06/06 11:39:38 [URL  ] https://json.schedulesdirect.org/20141201/schedules
2020/06/06 11:39:38 [ERROR] invalid character '<' looking for beginning of value
2020/06/06 11:39:38 [G2G  ] Download Program Informations: New: 0 / Cached: 0
2020/06/06 11:39:38 [G2G  ] Download missing Metadata: 0 
2020/06/06 11:39:38 [G2G  ] Create XMLTV File [z.xml]
2020/06/06 11:39:38 [G2G  ] Clean up Cache [z_cache.json]
2020/06/06 11:39:38 [G2G  ] Deleted Program Informations: 0

Clearly the line:

2020/06/06 11:39:38 [ERROR] invalid character '<' looking for beginning of value

is a problem. There is no < in the YAML. If I pick a single channel it seems to work, so it looks like the plugin is not properly protecting itself from weird characters in the SD data.

 

Edit: the XML output is produced, but it's just the channel list.

 

Any ideas? This is quite urgent (wife complaining!) so I'll need to sort something ASAP. Thanks! Redacted YAML is attached.

z.txt

Edited by Rick Gillyon
Link to comment
  • 2 weeks later...
On 6/6/2020 at 7:01 AM, Rick Gillyon said:

Some serious problems with the updated plugin here. I thought it was working and now I've run out of EPG. 😯

 

I've tried recreating this with a new lineup and it's repeatable. I just used guide2go (with Schedules Direct) to create the yaml for a new xml:

guide2go -configure z.yaml

 

Then added all channels, no edits to the YAML. Ran:

guide2go -config z.yaml

 

This output is produced:


2020/06/06 11:39:37 [G2G  ] Version: 1.1.1
2020/06/06 11:39:37 [URL  ] https://json.schedulesdirect.org/20141201/token
2020/06/06 11:39:37 [SD   ] Login...OK

2020/06/06 11:39:37 [URL  ] https://json.schedulesdirect.org/20141201/status
2020/06/06 11:39:37 [SD   ] Account Expires: 2020-09-24 16:52:44 +0000 UTC
2020/06/06 11:39:37 [SD   ] Lineups: 2 / 4
2020/06/06 11:39:37 [SD   ] System Status: Online [No known issues.]
2020/06/06 11:39:37 [G2G  ] Channels: 163
2020/06/06 11:39:37 [URL  ] https://json.schedulesdirect.org/20141201/lineups/GBR-1000080-DEFAULT
2020/06/06 11:39:38 [URL  ] https://json.schedulesdirect.org/20141201/lineups/GBR-1000203-DEFAULT
2020/06/06 11:39:38 [G2G  ] Download Schedule: 7 Day(s)
2020/06/06 11:39:38 [URL  ] https://json.schedulesdirect.org/20141201/schedules
2020/06/06 11:39:38 [ERROR] invalid character '<' looking for beginning of value
2020/06/06 11:39:38 [G2G  ] Download Program Informations: New: 0 / Cached: 0
2020/06/06 11:39:38 [G2G  ] Download missing Metadata: 0 
2020/06/06 11:39:38 [G2G  ] Create XMLTV File [z.xml]
2020/06/06 11:39:38 [G2G  ] Clean up Cache [z_cache.json]
2020/06/06 11:39:38 [G2G  ] Deleted Program Informations: 0

Clearly the line:

2020/06/06 11:39:38 [ERROR] invalid character '<' looking for beginning of value

is a problem. There is no < in the YAML. If I pick a single channel it seems to work, so it looks like the plugin is not properly protecting itself from weird characters in the SD data.

 

Edit: the XML output is produced, but it's just the channel list.

 

Any ideas? This is quite urgent (wife complaining!) so I'll need to sort something ASAP. Thanks! Redacted YAML is attached.

z.txt 11.54 kB · 0 downloads

I am having this exact same issue. I actually came here to post this almost word for word.

 

When I did everything last night it seemed ok, I woke up today and most EPG data is gone and I noticed the exact same error.

Link to comment
4 minutes ago, relink said:

I am having this exact same issue. I actually came here to post this almost word for word.

When I did everything last night it seemed ok, I woke up today and most EPG data is gone and I noticed the exact same error.

SD is borked again. Raise a lineup issue on their website and hope for a quick fix.

Link to comment
19 minutes ago, relink said:

I am having this exact same issue. I actually came here to post this almost word for word.

 

When I did everything last night it seemed ok, I woke up today and most EPG data is gone and I noticed the exact same error.

and you also followed the correct way to re-add ...

 

[screenshot attached]

 

so, a simple update would be:

 

go to the config folder, rename cronjob.sh, force an update (a new cronjob.sh is created), fill out the fields again to match your settings (the old cronjob.sh can serve as a source),

start a new guide2go config as stated: guide2go -configure /guide2go/whatevername.yaml

 

if you then want to force new xml data, start cronjob.sh manually as stated: /config/cronjob.sh

 

all done ... also, all mappings etc. should still be there if you kept the same lineups
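
To put the above in one place, here is a rough shell sketch of those steps (just an illustration, not the official procedure); it assumes the container is named xteve and its config folder is mapped to /mnt/user/appdata/xteve on the host, so adjust names and paths to your setup:

# keep the old cronjob.sh as a reference for your settings
mv /mnt/user/appdata/xteve/cronjob.sh /mnt/user/appdata/xteve/cronjob.sh.old

# force-update the container from the unRAID UI so a fresh cronjob.sh is created,
# then fill in the new cronjob.sh using cronjob.sh.old as the source

# create a new guide2go config inside the container (interactive prompts follow)
docker exec -it xteve guide2go -configure /guide2go/whatevername.yaml

# force fresh xml data right away instead of waiting for the cron schedule
docker exec -it xteve /config/cronjob.sh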

Link to comment

I did make sure I was in the correct directory when running the command. I have 3 lineups and 2 of them seem to be working.

 

Like I said, when I first did the new setup yesterday I edited my new yaml files and created the xml files, and everything seemed to work just fine. It wasn't until this morning that all the data for one of my lineups was gone, and I can't get it to download again.

Link to comment
1 minute ago, relink said:

I've submitted a ticket with SD. Hopefully they will be quick.

A bit OT here, but I've changed my userscript that downloads xml from SD, so that it downloads to intermediate files and only overwrites the in-use xml if it's bigger than a minimum size:

# "$fil" is the freshly downloaded xml (set earlier in the script, not shown here)
# only promote the new downloads if that file is bigger than ~20 MB (20000000 bytes)
if [ -n "$(find "$fil" -prune -size +20000000c)" ]; then
  mv freenew.xml free.xml   # replace the in-use xml files with the new downloads
  mv skynew.xml sky.xml
  outMsg="Success"
else
  outMsg="SD BORKED"        # download looks broken/empty, keep the old xml in place
fi

This way I don't lose my current data when SD breaks. This has happened twice in 10 days.

Link to comment
20 minutes ago, Rick Gillyon said:

A bit OT here, but I've changed my userscript that downloads xml from SD, so that it downloads to intermediate files and only overwrites the in-use xml if it's bigger than a minimum size:


if [ -n "$(find "$fil" -prune -size +20000000c)" ]; then
  mv freenew.xml free.xml
  mv skynew.xml sky.xml
  outMsg="Success"
else
  outMsg="SD BORKED"
fi

This way I don't lose my current data when SD breaks. This has happened twice in 10 days.

ok, that surprises me, because when you use xteve, xteve keeps its data (cached) and won't empty the current data ... if it does empty the data, you may want to open a topic on the xteve GitHub, because that would be a bug ;)

Link to comment
1 minute ago, alturismo said:

ok, that surprises me, because when you use xteve, xteve keeps its data (cached) and won't empty the current data ... if it does empty the data, you may want to open a topic on the xteve GitHub, because that would be a bug ;)

The problem is that when downloading the xml, it downloads channels but no programme data, so it is a valid non-empty xml file, just with nothing useful. Do you think that's something I should raise with xteve?

Link to comment

it shouldn't affect the current data, because xteve caches its data (you can see the cache files), so the current EPG state shouldn't be affected; new programmes just aren't added (since the download failed). But wiping existing data is not the way it should work.

 

sadly I can't force it, because my lineups seem pretty stable and I've never experienced a complete lineup loss from one failed update, only when it fails for something like 14 days (which is how my setup is configured).

 

definitely something to at least point out ... and maybe next time you encounter this issue, samples would be nice to look at. But you would have to build a routine into your small script so it keeps the failed download for inspection.

Link to comment

Just wanted to update: I heard back from SD and it does appear the issue was on their end. I re-ran the cron job, and everything loaded just fine!

 

I also wanted to chime in on the issue being discussed of nearly empty data taking the place of the correct cached data when it's not supposed to. I have had the issue a couple of times as well, where SD has an issue and pulls down a mostly empty file, and xteve loads it in anyway, erasing whatever correct data it had cached.

 

The last time this happened was yesterday: I hadn't noticed the update to the new yaml files, so I went to watch TV and had no data. I came here, saw there was a change, immediately generated the new yaml files I needed, ran the cron job, and everything was fine. But overnight guide2go pulled in new data from SD, which was having issues with one of my lineups, so every channel using that lineup now had no data. That's why I posted here this morning.

 

The time prior to that, all of my channels lost their icons because SD was having an issue.

Link to comment
2 hours ago, BzzLghtyr said:

How can I make xTeVe utilize my Nvidia GPU for buffering/transcoding and ffmpeg the same way I have Plex doing so?

I think you misunderstand something about buffering vs. no buffering and remuxing vs. transcoding.

 

xteve does not transcode at all; without the buffer it just passes a link to the client (e.g. the Plex server), which does the transcode if needed.

 

with the buffer ON (either xteve, ffmpeg or vlc) the stream is only remuxed to a standard .ts stream, no transcode takes place ...

this remuxed stream is then passed to the client (e.g. the Plex server), which does the transcode if needed.

 

so the transcode, if needed, is done in your media server (Plex, Emby, ...) and not in xteve, which would be pointless anyway.

 

if you want to take a different approach by manually editing the ffmpeg or vlc command in xteve to transcode, then you could gain something with an Nvidia build ... but only then, and this is not supported in the current builds.

a sample use case would be an external vlc client connected to xteve where you want to transcode to lower the bandwidth, but that is not supported with this docker.

 

when using it at home only, you should aim for direct streaming anyway, for best quality and zero transcoding.
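
To make the remux point concrete, here is a minimal sketch of what a remux-only ffmpeg call looks like (the exact options xteve passes to its ffmpeg buffer are configurable and may differ); the URL is just a placeholder:

# copy the audio/video streams as-is (no transcode) and rewrap them into a standard .ts stream on stdout
ffmpeg -i "http://example.com/stream.m3u8" -c copy -f mpegts pipe:1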

Link to comment
4 minutes ago, alturismo said:

I think you misunderstand something about buffering vs. no buffering and remuxing vs. transcoding.

 

xteve does not transcode at all; without the buffer it just passes a link to the client (e.g. the Plex server), which does the transcode if needed.

 

with the buffer ON (either xteve, ffmpeg or vlc) the stream is only remuxed to a standard .ts stream, no transcode takes place ...

this remuxed stream is then passed to the client (e.g. the Plex server), which does the transcode if needed.

 

so the transcode, if needed, is done in your media server (Plex, Emby, ...) and not in xteve, which would be pointless anyway.

 

if you want to take a different approach by manually editing the ffmpeg or vlc command in xteve to transcode, then you could gain something with an Nvidia build ... but only then, and this is not supported in the current builds.

a sample use case would be an external vlc client connected to xteve where you want to transcode to lower the bandwidth, but that is not supported with this docker.

 

when using it at home only, you should aim for direct streaming anyway, for best quality and zero transcoding.

Thank you so much for the response.

 

I think I believed it was performing a transcode based on a couple of different factors. They may not pertain here, since they're not your plugins (Plex, Nvidia Unraid, gpustat), but the information on what I was seeing has been added to the bottom of this reply.

 

I will switch to seeing if an Ethernet connection helps, because my main issue was that Roku and Chromecast were buffering endlessly, but I have no problems at all when watching on an Ethernet-connected device through Chrome. My Netgear R9000 is usually plenty capable, but it's possible there's some issue in my network.

 

 

 

_________________

Notes on weird stuff I'm seeing:

When I watch in Windows/Chrome, the GPU statistics plugin (gpustat) on the Unraid dashboard shows 1 session and I have no issues watching any stream.

When I watch with Chromecast, I can't keep a stream from buffering repeatedly.

On Roku, the stream tries to buffer a few times and then just dies while gpustat shows 0 sessions.

Also, any xTeVe stream I use shows in Tautulli as a 10.0 Gbps stream (and it occasionally thinks I'm watching a movie from my library instead of the DVR stream).

Link to comment
