Huxy

Community Developer
Everything posted by Huxy

  1. For clarification, the docker image installs the scraper, and it's possible the XML format has changed or the SD grabber needs updating. If I remember correctly the SD grabber was actually integrated into the XMLTV project, so the GitHub repo may not be required. I don't have an active SD subscription at the moment, so guide2go is sound advice at this point. That said, if I get the chance I'll have a look and see if I can troubleshoot the issue. I also have a young family and a full-time job, so I can't guarantee this will be the highest of priorities.
  2. <previously-shown start="20200121" /> The listing you provided above is a repeated show. You can see the previously-shown tag is present and is dated the day before this episode is being aired, so I would not expect it to be flagged as new.
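The distinction can be checked mechanically. As a minimal sketch (the listing file below is made up, not real Schedules Direct data), a programme element that carries a <previously-shown> tag is a repeat:

```shell
# Minimal sketch: a <programme> containing a <previously-shown> element is a
# repeat, so a consumer should not flag it as new. Dummy data, made-up channel.
cat > listing.xml <<'EOF'
<programme start="20200122060000 +0000" channel="example.uk">
  <title>Example Show</title>
  <previously-shown start="20200121" />
</programme>
EOF

if grep -q '<previously-shown' listing.xml; then
  echo "repeat"
else
  echo "new"
fi
```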
  3. Actually, the command runs at the Unraid prompt. The docker container cannot start without the config, and it is not possible (or at least not easy) to offer interactive configuration at this time. Just launch the Unraid terminal from within the UI (top right, the icon that looks like >_) and then run the docker command above. It assumes that your data path is "/mnt/cache/appdata/XMLTVSchedulesDirect", which is the default unless you changed it.
  4. My understanding is the plug-in still works fine. That said, I no longer watch any live TV, so I don't actively use the grabber. XMLTV requires a pre-configured file in order to operate. In this instance you can either drop an existing working config file into the appdata directory or generate one using an interactive prompt. Instructions on how to configure and use the plug-in can be found on the GitHub and Docker Hub pages: https://github.com/HuxyUK/docker-xmltv-sd-json/ **hint** sudo docker run -ti --rm -v /mnt/cache/appdata/XMLTVSchedulesDirect:/config huxy/xmltv-sd-json /usr/local/bin/tv_grab_sd_json --configure
  5. Does it work if you just include the UK listings? I've not tried grabbing multiple line-ups before; I've always just scraped a single one. I think it's important to establish whether the problem is related to two line-ups being processed or to the line-up itself.
  6. I think this would just need an upgrade to a newer version of Debian, as XMLTV is packaged with it. I'll have a look at upgrading. I originally wanted to move to Alpine anyway, but that needs updated scripts and proper testing, so it won't be completed anytime soon. What makes you think it will work better for European users?
  7. Hi, I've tested the docker and it works fine. The grabber script hasn't been updated in a couple of years, so it's not as if a new bug would have been introduced. Looking at your screenshot, there are a significant number of issues with the line-up data being retrieved. It's possible that the data is corrupt or not in a format Schedules Direct is expected to return. I would use the client and delete all your line-ups, then add one and retry the process.
  8. Hi, I'm a bit busy with work at the moment, but is this still an issue? If it is, I should be able to free some time at the weekend to have a look at it. As far as I'm aware the docker is still working fine though. Cheers. Huxy
  9. There are two ways of dealing with this. You either drop a pre-configured XMLTV file into the config mount, or you configure one using the command line:
     sudo docker run -ti -v 'your config dir':/config --rm huxy/xmltv-sd-json /usr/local/bin/tv_grab_sd_json --configure
     This will launch a container based on the XMLTV grabber image and run the configuration for tv_grab_sd_json. Once this is complete you can launch the docker via the GUI. If you want to use a different grabber, see the GitHub link for all the supported ones. Cheers. Huxy
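The first option is just a file copy into the directory that gets bind-mounted as /config. A sketch using a throwaway directory — the config contents are dummy placeholders, and on Unraid the directory would typically be /mnt/cache/appdata/XMLTVSchedulesDirect:

```shell
# Option 1 sketched: place an already-working grabber config in the directory
# that will be mounted as /config inside the container.
# CONFIG_DIR and the file contents below are placeholders, not real values.
CONFIG_DIR="$(mktemp -d)"
cat > "$CONFIG_DIR/tv_grab_sd_json.conf" <<'EOF'
username=example@example.com
lineup=GBR-1000001-DEFAULT
EOF

# The container would then be started with: -v "$CONFIG_DIR":/config
ls "$CONFIG_DIR"
```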
  10. Which is what I noticed hence the questions about the docker container.
  11. Which docker are you using for TVH? I'm using Tvheadend-Unstable-DVB-Tuners and I don't have any issues. If you're not using docker for TVH and the install isn't on the same machine as Unraid, it won't work, because Unix domain sockets are local-access only. If that's the case you'll have to use the XMLTV file grabber instead.
  12. As saarg said, make sure the module is actually enabled; I can see the external XMLTV option is unticked in your screenshot. When using sockets it's important to realise that TVH doesn't grab the listings on a schedule; instead you push data to the socket, which in turn gets processed by TVH. I would approach this as follows:
      1. Stop the XMLTV schedules docker.
      2. Enable the XMLTV socket option in TVH.
      3. Check to make sure the socket file has been created.
      4. Launch the XMLTV docker and watch the output. You should see it identify the socket.
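Step 3 can be verified from the shell. A sketch, assuming TVHeadend's epggrab directory sits under a typical Unraid appdata path — adjust the default to your setup:

```shell
# Step 3 as a small helper: succeed only if the XMLTV socket exists.
# The default path is an assumption based on a typical Unraid layout.
xmltv_socket_ready() {
  [ -S "${1:-/mnt/cache/appdata/tvheadend/epggrab/xmltv.sock}" ]
}

if xmltv_socket_ready; then
  echo "socket present - safe to start the XMLTV docker"
else
  echo "socket missing - enable the external XMLTV grabber in TVH first"
fi
```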
  13. I'm still seeing similar issues. Everything was fine and then I started getting a number of containers that state there's always an update available. I too had an unclean shutdown around the time this started happening, but I don't know if this is coincidence.
  14. Just a heads-up: the Schedules Direct grabber was updated around 7 days ago (DateTime change). This change broke the script (on reboot) in the docker environment. I've now updated the image to include support for the latest version of the grabber. All other XMLTV scripts will continue to work as normal. https://github.com/kgroeneveld/tv_grab_sd_json/commit/32aebd4e82e93b995b67780629ab16f860e7c915
  15. Hi, this is actually an easy fix. You just need to include the bind mount as part of the command line, using the -v parameter; this allows you to store persistent data:
      sudo docker run -ti --rm -v /mnt/cache/appdata/XMLTVSchedulesDirect:/config huxy/xmltv-sd-json /usr/local/bin/tv_grab_sd_json --configure
      If you want to use TVHeadend and my docker, you can enable the XMLTV socket, point the data mount point to TVHeadend's epggrab directory and set the output file to xmltv.sock. My docker will then detect the socket and push data directly into TVHeadend. Cheers. Huxy
  16. [glow=green,2,300]Thanks to Squid this is now available for installation.[/glow]
      Change Log
      2016.06.17
      - Added DateTime::Format::DateParse.
      - Support for updated tv_grab_sd_json grabber.
      2016.06.01
      - Unix socket support for grabber output.
      2016.05.30
      - Lots of refactoring behind the scenes.
      - JSON grabber now updated on every start.
      2016.05.29
      - Added crontab template generation.
      - Added check to see if grabber was successful.
      - Added timestamp to logs.
      - Used a temp file for downloads to prevent existing data being erased if the grabber fails.
      - Various bug fixes and script improvements.
      2016.05.27
      - Initial release based on Debian Jessie.
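The temp-file change in 2016.05.29 follows a common pattern; a generic sketch of it, where run_grabber is a dummy stand-in for the real grabber invocation:

```shell
# Write the new grab to a temporary file first, and only replace the live
# xmltv.xml when the grabber exits successfully - so a failed run can't
# wipe out existing data. run_grabber is a dummy stand-in that "succeeds".
run_grabber() { echo '<tv></tv>'; }

OUT="$(mktemp -d)/xmltv.xml"
echo '<tv>old data</tv>' > "$OUT"   # pretend there is an earlier grab

TMP="$(mktemp)"
if run_grabber > "$TMP"; then
  mv "$TMP" "$OUT"    # success: replace the old listing
else
  rm -f "$TMP"        # failure: keep the old listing intact
fi
cat "$OUT"
```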
  17. This is an XMLTV install with a JSON script added for Schedules Direct. N.B.: you may change the grabber used and the number of days to be grabbed using environment variables; please see the README on GitHub for a more detailed explanation.
      Application Name: XMLTV Schedules Direct
      Application Site: https://sourceforge.net/projects/xmltv/
      JSON Grabber: https://github.com/kgroeneveld/tv_grab_sd_json/
      Docker Hub: https://hub.docker.com/r/huxy/xmltv-sd-json/
      GitHub: https://github.com/HuxyUK/docker-xmltv-sd-json/
      This project has stemmed from my own personal requirements and is inspired by tobbenb's work on WebGrabPlus+ and TVHeadend. Please post any questions regarding the docker here and I'll do my best to answer them.
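To illustrate how such a container might consume those environment variables, here is a sketch of an env-driven wrapper with fallbacks. The variable names (GRABBER, DAYS) are assumptions for illustration — the README on GitHub documents the actual ones:

```shell
# Illustrative only: pick the grabber and day count from environment
# variables, falling back to defaults when unset. GRABBER/DAYS are assumed
# names; check the project README for the real variables.
GRABBER="${GRABBER:-tv_grab_sd_json}"
DAYS="${DAYS:-7}"
echo "would run: /usr/bin/$GRABBER --days $DAYS --output /config/xmltv.xml"
```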
  18. Just wanted to say that, since I moved to automatic builds on a public repo, updates are being detected correctly and it no longer tells me an update is available when it isn't.
  19. I've not had the issue reoccur yet either. I've disabled the bond as well, as I did have an instance where br0 disappeared and eth0 was present, causing my VMs to have difficulty connecting to my LAN. I'm going to do some large-scale copies between my VM (passthrough drive) and the Unraid share, so it will be interesting to see if it all works! I'll report back with my findings. Interesting! I'll not change any stripe settings yet (I don't want to make too many changes before testing) and will see how things go!
  20. Thanks for the response. I'm running 6.2 and the only docker exhibiting this issue is my own, but as it seems to be a known bug I'll not spend any more time trying to work out what's going on. Out of interest, do you know if the version number matters in the XML file? I wasn't able to find a clear posting detailing the full schema and the meaning of each tag. I think this would be of benefit to someone starting out, as I was having to read through lengthy threads trying to glean the information needed. If one does exist, I apologise, as my search skills must suck!
  21. Hi, I'm not sure where the best place to post this question is, but I'm hoping this is the correct sub-forum. I'm new to docker, but I've created an image which I can deploy using the Docker section in Unraid. It installs and runs fine, but it constantly says there's an upgrade ready even though I'm using the latest version from my Docker Hub repo. I'm not sure what's causing this; I've tried adding a version and date tag (as detailed in the XML schema thread), but it hasn't helped. Whilst it's not a huge issue, it would be good to ascertain the cause, so any guidance would be appreciated! Cheers. Huxy
  22. Yes I am. I'm moving from a Xen-virtualised Dom0 and there seemed to be some notable improvements in the beta regarding virtualisation. To be honest, I did read the known issues before deciding whether to install, and I don't remember reading about the hanging array. If that's the case, at least it's known. Once it's fixed I'll post my findings on stability again for direct passthrough. Thanks for the heads-up!
  23. Interesting! I'm just testing out the trial version before I part with my wonga, and I had the same problem yesterday evening. All network activity died and the UI stopped responding. The weird thing was that I could still telnet in. I've since changed the NIC and uninstalled the Unassigned plugin, thinking they were at fault. However, after reading this, I realised that I also had a VM open and was directly passing an unassigned drive through (it's old data in software RAID). I'm going to have to start the VM again to try and transfer the data off my drive and will keep an eye on things. It will be a real shame if this does cause issues, as I prefer to pass through devices rather than use shares.