[Support] Linuxserver.io - Mylar



Mylar stopped working properly for me sometime in April. Prior to that it worked like a champ; since then I've had maybe 5 downloads total. Logs attached, any ideas? Sab seems fine, Mylar has just stopped finding stuff, even though I can find it manually on my indexers with no problem. Sonarr etc. are still working fine with Sab. Thanks.

 

UPDATE: For others hitting this same issue, I've found a solution. Edit Mylar's config.ini and remove the provider_order line completely; sometimes it gets screwed up. So far I'm back to working again.
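One way to do that from the Unraid console looks roughly like this (the config path assumes the usual appdata mapping for this container, the container name may differ on your box, and you should back the file up first):

# stop the container so Mylar doesn't rewrite config.ini while you edit it
docker stop mylar
# keep a backup, then delete the provider_order line; Mylar should rebuild it on the next start
cp /mnt/user/appdata/mylar/mylar/config.ini /mnt/user/appdata/mylar/mylar/config.ini.bak
sed -i '/^provider_order/d' /mnt/user/appdata/mylar/mylar/config.ini
docker start mylar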

 

Funny that @CHBMB replied to this just as I was typing out the update... xD

 

Mylar stopped working properly for me sometime in April. Prior to that it worked like a champ; since then I've had maybe 5 downloads total. Logs attached, any ideas? Sab seems fine, Mylar has just stopped finding stuff, even though I can find it manually on my indexers with no problem. Sonarr etc. are still working fine with Sab. Thanks.
logs.zip
Sounds like something you should be approaching the Mylar team with.


  • 2 weeks later...

Hey, I'm fairly new to Mylar and Unraid, so I'll say all this with the caveat that it's probably user error. I just haven't been able to figure it out, so I'm hoping someone may be able to point me in the right direction.

 

I got Mylar set up, and it's working mostly fine, using my own Newznab indexer and the experimental indexer. But certain issues get marked as snatched and then just sit in my download folder while the rest of the series gets processed and moved to my comics folder. For example, I recently put the 1996 DC series Hitman on my Wanted list: a 62-issue run, including the annual. The indexers found everything but the annual and processed 60 of those issues, but issue No. 1 is just chillin' in the downloads folder.

 

If there are relevant log lines with an error, I'm having trouble ferreting them out. I just have entries from when it couldn't find issue 1, then one saying Mylar found the issue, but nothing about post-processing one way or the other.

 

Any thoughts?

  • 3 weeks later...
5 hours ago, eurlin said:

My docker has Python 2.7.16; is there a way to upgrade to 2.7.9?

You mean to downgrade Python?

If the version you want was used by an earlier build of the container, you can look at Docker Hub and try the earlier tags. But all the other packages will be downgraded as well.
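Roughly, from the Unraid command line (the tag below is only a placeholder; check the linuxserver/mylar tags page on Docker Hub for the ones that actually exist):

# pull an older build of the image; <older-tag> is a placeholder, not a real tag
docker pull linuxserver/mylar:<older-tag>
# or change the Repository field in the Unraid template to linuxserver/mylar:<older-tag>
# remember everything baked into that image (python included) goes back to that point in time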

3 hours ago, saarg said:

You mean to downgrade Python?

If the version you want was used by an earlier build of the container, you can look at Docker Hub and try the earlier tags. But all the other packages will be downgraded as well.

I'm an idiot; the Auth mode wasn't working for me, so I just assumed 2.7.16 was lower, lol. Thanks for clearing up the version.

  • 2 weeks later...
On 7/21/2019 at 9:35 PM, SicSemper said:

Hey, I'm fairly new to Mylar and Unraid, so I'll say all this with the caveat that it's probably user error. I just haven't been able to figure it out, so I'm hoping someone may be able to point me in the right direction.

 

I got Mylar set up, and it's working mostly fine, using my own Newznab indexer and the experimental indexer. But certain issues get marked as snatched and then just sit in my download folder while the rest of the series gets processed and moved to my comics folder. For example, I recently put the 1996 DC series Hitman on my Wanted list: a 62-issue run, including the annual. The indexers found everything but the annual and processed 60 of those issues, but issue No. 1 is just chillin' in the downloads folder.

 

If there are relevant log lines with an error, I'm having trouble ferreting them out. I just have entries from when it couldn't find issue 1, then one saying Mylar found the issue, but nothing about post-processing one way or the other.

 

Any thoughts?

Did you by chance figure this out? I'm having the same issue.

  • 1 month later...

I'm running into issues where Mylar regularly disables my search providers. After digging into the logs, it looks like this happens after the daily API limit is hit ("WARNING DAILY API limit reached. Disabling provider usage until 12:01am"), but the provider usage is never automatically re-enabled; I always have to re-enable the provider manually. Is this something I should take to the Mylar team, or something that can be managed at the docker level (force re-enabling search providers)?

  • 5 weeks later...

Is there a way to kill the mylar process and re-run it with the -v flag from within the container's console? I'm trying to pull the verbose logs to troubleshoot a DB problem with the developer.

 

I can't seem to get a kill to stick within the console so far, and I'd really rather not create an entirely new image; that's beyond my skills.

 

Thanks in advance.

 

Edit: I never did figure out how to kill the process; the s6 supervisor gets in the way, I think. Instead I was able to copy the run file to my cache drive, make the edit, then copy it back successfully, and I have the verbose logging now (in case anyone else ever runs into this). I imagine that'll go away the next time the container is updated, but it persists through container restarts for now.
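For anyone wanting to do the same, the steps look roughly like this (the run file path is the usual spot in the LSIO s6 layout and the container name is whatever yours is called, so treat this as a sketch):

# copy the service run file out of the container to the cache drive
docker cp mylar:/etc/services.d/mylar/run /mnt/cache/run
# edit /mnt/cache/run and add -v to the line that launches Mylar.py, then put it back
docker cp /mnt/cache/run mylar:/etc/services.d/mylar/run
docker restart mylar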

  • 1 month later...

I'm sure this must be me being unobservant, but I can't seem to find what I've set up incorrectly.

 

Here is the error in Mylar:

[screenshot of the error]

 

"It's a permissions issue" You say.  I agree!  Here is the setup of Mylar:

[screenshot of the Mylar container setup]

 

"But where is comics pointing to?" you ask.  Great question:

 

[screenshot of the comics path mapping]

 

I've also changed permissions to 777 and owner to nobody.
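For reference, those changes were made on the Unraid host, roughly like this (substitute your own comics share path; nobody:users is the usual Unraid owner/group combination):

# recursively hand the share to nobody:users and open up the permissions
chown -R nobody:users /mnt/user/<your-comics-share>
chmod -R 777 /mnt/user/<your-comics-share>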

 

Clearly I'm being an idiot.  Can someone point me to where I'm going wrong?

 

 

  • 1 month later...

Hello! I keep running into the following error when attempting to connect Mylar to Deluge:

[Deluge] Could not establish connection to 192.168.0.1:58846. Error returned: Password does not match, localclient
Traceback (most recent call last):
  File "/usr/lib/python3.8/site-packages/deluge/core/rpcserver.py", line 267, in dispatch
    ret = component.get('AuthManager').authorize(*args, **kwargs)
  File "/usr/lib/python3.8/site-packages/deluge/core/authmanager.py", line 125, in authorize
    raise BadLoginError('Password does not match', username)
deluge.error.BadLoginError: Password does not match
  • I attempted to add a Deluge user by appending "mylar:mylar:5" to /mnt/user/appdata/deluge -> auth, but the username is not recognized as existing (exact steps sketched below).
  • I attempted to use the username "localclient" as it appears in the auth file, and it is recognized as valid, but I'm unable to get the password right: I tried the docker's default, tried copy-pasting the one shown in the auth file, and tried replacing the one in the auth file, all with no luck.
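Concretely, the first attempt amounted to something like this (paths assume the standard appdata mapping); I'm also not sure whether the deluged daemon re-reads the auth file without a restart, so I may be missing a step:

# Deluge's auth file takes one entry per line in the form username:password:level
echo "mylar:mylar:5" >> /mnt/user/appdata/deluge/auth
# restart the Deluge container in case the daemon only reads the auth file at startup
docker restart deluge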

Thanks for your time.

  • 1 month later...
On 12/10/2019 at 6:22 PM, igeekus said:

I'm sure this must be me being unobservant, but I can't seem to find what I've set up incorrectly.

Here is the error in Mylar:

[screenshot of the error]

"It's a permissions issue," you say. I agree! Here is the setup of Mylar:

[screenshot of the Mylar container setup]

"But where is comics pointing to?" you ask. Great question:

[screenshot of the comics path mapping]

I've also changed permissions to 777 and owner to nobody.

Clearly I'm being an idiot. Can someone point me to where I'm going wrong?

Having the same issue.

On 10/8/2019 at 11:00 PM, jamesp469 said:

I'm running into issues where Mylar regularly disables my search providers. After digging into the logs, it looks like this happens after the daily API limit is hit ("WARNING DAILY API limit reached. Disabling provider usage until 12:01am"), but the provider usage is never automatically re-enabled; I always have to re-enable the provider manually. Is this something I should take to the Mylar team, or something that can be managed at the docker level (force re-enabling search providers)?

Anyone else having this issue?

  • 4 weeks later...
  • 2 weeks later...

Curious to know if this LS docker version has moved over to Mylar3. I can't tell from the version listed in the application itself, because all I see for the version looks like a hash. The application site link above, the Community Apps installer for the LS docker, and the application itself all still point to the original Mylar GitHub page, which says development has stopped and moved to Mylar3. Anyway, I have been planning on wiping my appdata and rebuilding to see if I can fix an issue I've been having, and I wanted the latest version being developed/supported, so I figured I'd ask. Thanks.

18 hours ago, Ookami313 said:

Curious to know if this LS docker version has moved over to Mylar3. I can't tell from the version listed in the application itself, because all I see for the version looks like a hash. The application site link above, the Community Apps installer for the LS docker, and the application itself all still point to the original Mylar GitHub page, which says development has stopped and moved to Mylar3. Anyway, I have been planning on wiping my appdata and rebuilding to see if I can fix an issue I've been having, and I wanted the latest version being developed/supported, so I figured I'd ask. Thanks.

Mylar 3 looks like it's still in the early stages of development. No idea if or when we will make a container for Mylar 3.

On 4/15/2020 at 2:42 AM, saarg said:

Mylar 3 looks like it's still in the early stages of development. No idea if or when we will make a container for Mylar 3.

Fully understood. I'd also welcome a Mylar 3 docker (or seeing this one updated from 2 to 3), but of course understand time constraints and potentially limited interest.

  • 1 month later...

Wondering if anyone can help me with the ComicRN script. Whenever the script runs in SABnzbd it just hangs indefinitely and sab doesn't continue any other downloads. The only thing I can do to stop it is to stop the SABnzbd docker and restart it, and the download then shows as failed. It's been like this for a few years now (see my unanswered post on page 3 of this thread). As a workaround I've just been manually moving comics from the completed folder to their media folder, but I'd like to get this sorted out.

 

Per my previous post: if I set the host in the cfg file to something wrong (127.0.0.1), the script fails but doesn't hang up sab. If I set the host to my server's IP address (192.168.1.114), then the script hangs.
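If it helps with diagnosis, I can run a reachability test from inside the sab container along these lines (8090 is Mylar's default web port on my box, the container names match my setup, and this assumes curl exists in the image):

# check whether sab's container can actually reach Mylar's web interface at that address
docker exec -it sabnzbd curl -sS -o /dev/null -w "%{http_code}\n" http://192.168.1.114:8090/
# a timeout here would point at networking rather than at the script itself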

  • 1 month later...
  • 1 month later...

Hello everyone, I am not new to Mylar but I am new to using it in Unraid. I haven't touched Mylar in years and recently stood up Ubooquity, so naturally I needed to get Mylar back up and running, in Unraid of course... LOL. Any help or suggestions would be greatly appreciated.

 

Mylar is not pulling in my comics. I don't know if it's my file structure or something I'm doing wrong in Mylar.

Physically they are stored in a share within Unraid, which is attached to the Mylar container at the container path /comics.

When I console into the Mylar container I see /comics and all the comics under it.
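To be explicit about the mapping (the share name below is a placeholder for mine), the template boils down to the docker equivalent of this, and listing the path from the Unraid console confirms the container sees the files:

# the Unraid template maps the share to /comics, i.e. the equivalent of
#   -v /mnt/user/<comics-share>:/comics
# and from the host this lists the same folders Mylar should be scanning:
docker exec -it mylar ls /comics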

My file structure is:

/comics/publisher/series/

 

Example, continued:

/comics/DC Comics

/comics/Marvel

/comics/Dark Horse Comics

/comics/IDW Publishing

/comics/Vertigo

/comics/Zenescope Entertainment

 

Farther down, the screenshots show the structure continued.

 

 

 


[screenshots: folder structure, continued]

  • 2 weeks later...

After some more exploring, the container for Mylar3 cannot reach the internet. This is why it's not working:

ylar.locg.58 : ThreadPoolExecutor-0_0 : HTTPSConnectionPool(host='walksoftly.itsaninja.party', port=443): Max retries exceeded with url: /newcomics.php?week=39&year=2020 (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x1481dab4bdc0>: Failed to establish a new connection: [Errno 110] Operation timed out'))

It is my only container that cannot reach the net. Not sure why.
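The sort of check that shows it, run from the Unraid console (the container name is how it appears on my box; swap in whatever tools the image actually has):

# try the host from the error message, then a bare IP to separate DNS problems from routing problems
docker exec -it mylar3 ping -c 3 walksoftly.itsaninja.party
docker exec -it mylar3 ping -c 3 8.8.8.8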


So is this container supported at all anymore?

 

Looks like there are issues with running the included scripts with sab because of mismatched Python versions. I tried copying them over from Mylar 3, but it still doesn't work.

 

Update: copying over the code actually seems to have fixed the processing issue. It still errors out, but that doesn't seem to matter. It would still be nice if this container were updated so that at least the scripts work with Python 3.
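Roughly what the copy-over looked like (the repo is the Mylar3 project on GitHub; the location of ComicRN.py inside the tree and my sab scripts folder are assumptions, so locate both on your own setup first):

# grab the python3 versions of the post-processing scripts from the mylar3 repo
git clone https://github.com/mylar3/mylar3 /tmp/mylar3
find /tmp/mylar3 -name "ComicRN.py"
# copy ComicRN.py (plus the autoProcessComics files it imports) into sab's scripts folder
cp "$(find /tmp/mylar3 -name ComicRN.py | head -n 1)" /mnt/user/appdata/sabnzbd/scripts/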

  • 1 month later...

So I just switched over to the mylar3 docker by hotio, and it's awesome. Much has been improved in mylar3, and it seems to be working with no issues. :)

 

Some tips from my experience (a rough console sketch follows the list):

1. I added a path for my comics.

2. I added the PUID=99 and PGID=100 variables.

3. I had to move the old Mylar files from appdata/mylar/mylar to appdata/mylar/app; apparently hotio has that "app" subfolder hard-coded.

4. Started up Mylar 3 and all was well.
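The appdata move from step 3, done from the Unraid console, was roughly this (it assumes /config in the new container maps to appdata/mylar, as it does on my setup):

# create the folder hotio's image expects and move the old Mylar data into it
mkdir -p /mnt/user/appdata/mylar/app
mv /mnt/user/appdata/mylar/mylar/* /mnt/user/appdata/mylar/app/
# PUID=99 and PGID=100 go in as variables on the container template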

 

Shout out and thanks to hotio!!

