Newznab



I noticed that the script out of the box only post-processes 100 NZBs; one of the mods on that page shows where to increase that, and I think I'm going to on my install. So far my system seems to be doing well overall and I think I'm actually okay on disk usage :-O
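If it helps anyone, one way to find where that cap lives is to just grep for it; the exact query/variable differs between newznab versions, so treat this as a starting point rather than the exact spot:

    # Look for the hard-coded post-processing batch size (path is for a standard install)
    grep -in "limit" www/lib/postprocess.php | grep 100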

 

I have a justpostprocess script running from the site I posted. I really like it, as it makes the normal script go faster since it only processes one release while the postprocess script does the rest. So far it's been working really well.

Link to comment

This is a short summary of my findings:

  • Update binaries searches usenet for headers.
  • Update releases consolidates the headers into releases.
  • Post processing looks up the releases (NFOs) on the internet and fills in the information on the web interface.

By default, newznab works sequentially (assuming you use the screen or init scripts):

  • Updating binaries
  • Updating releases
  • Post processing 100 releases, regardless of how many you actually have
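In script terms, the stock loop boils down to something like this (script paths assume a standard install, and exactly how the post-processing step is invoked varies a bit between versions):

    # 1) fetch new headers from your usenet provider
    php misc/update_scripts/update_binaries.php
    # 2) consolidate the headers into releases
    php misc/update_scripts/update_releases.php
    # 3) post-process up to 100 of the resulting releases, then the loop starts over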

 

However, if you follow these instructions, post processing will run continuously, a kind of multitasking: it will post-process old releases while newznab is grabbing headers or sorting releases.

 

Benefits:

  • Post processing will run continuously and will, potentially, post process more releases per day
  • You will not spam Amazon's API, assuming of course you change the postprocess.php file in www/lib to process just one release at a time

 

However, as the postprocess.php script will also be called during standard operations, there is a chance (I don't know how big) that the two will clash and get killed, so you'll have to restart it.
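For anyone who wants the shape of it, the separate post-processing session is basically just a loop like the one below, left running in its own screen/tmux window. The script name/path is an assumption here; use whatever stand-alone post-process script you have (e.g. the justpostprocess one mentioned earlier):

    # Continuous post-processing in its own screen/tmux session
    while true; do
        php misc/update_scripts/postprocess.php
        sleep 60   # short pause so a failed or killed run doesn't spin
    done

Wrapping it in a loop also means a run that clashes with the main loop and dies is simply retried on the next pass.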

 

I hope it's clear  :)!

Link to comment

I recommend adding:

 

update_parsing

And removespecial to your script as well, at the end.

 

I'll try to take a look at it tonight, thanks! I'd be interested in setting my system up to background-process continuously as well. Mostly I'm just concerned that I process all new incoming releases and don't pound external APIs if possible :-)
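If I'm reading the suggestion right, that just means tacking two extra calls onto the end of the update script, roughly like this (both paths are assumptions, adjust to wherever these scripts live in your install):

    # Appended to the end of the update loop
    php misc/update_scripts/update_parsing.php
    php misc/testing/removespecial.php   # path varies by version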

Link to comment

I recommend adding:

 

update_parsing

And removespecial to your script as well, at the end.

 

I would be careful about running the removespecial script if you have anime, as release groups usually put their names in brackets at the beginning of the name. I believe this would remove them.
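I don't know exactly how removespecial matches things, but the worry is along these lines: anything that strips a leading bracketed tag takes the group name with it. A quick illustration (group name made up):

    echo "[SomeGroup] Some Show - 01 [720p]" | sed 's/^\[[^]]*\] *//'
    # prints: Some Show - 01 [720p]   (the group tag is gone)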

 

Still trying to find a way to get rid of foreign releases. I'm going to be massively updating my blacklist.
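For the foreign stuff, the blacklist entries are just regexes, so one low-effort way to sanity-check a candidate pattern before pasting it into the admin page is to throw sample subjects at grep (the language list below is only an example):

    echo "Some.Show.S01E01.FRENCH.720p.HDTV-GRP" | \
        grep -Eiq 'french|german|spanish|italian|dutch|swedish|norwegian|danish' && echo "would be blacklisted"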

Link to comment

Here are the rules I am using, 0 through 3.

 

However, I included some more that are useful for testing. I will also soon publish a script I have that scans your entire database for matches to new blacklist rules; it's useful for testing blacklist/regex rules and then blacklisting the matches if you find it appropriate.  ;)

 

A feature I often hear people ask about is "will the blacklist go back and remove matches", and the short answer is no, not in its current state... but this script will do it for you.
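Not that script, obviously, but for the impatient, the core of such a scan is a single query against the releases table; something along these lines (database name, credentials and the test pattern are placeholders, and the column names assume a stock newznab schema):

    # List existing releases that a candidate blacklist regex would have caught
    mysql -u newznab -p newznab -e \
        "SELECT ID, searchname FROM releases WHERE searchname REGEXP 'french|german' LIMIT 50;"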

 

I'll post soon.

 

Again (plugging myself), if anyone wants access to my index, read the details in the lounge nzb private index post and I'll get you hooked up. I will add a more formal post on the subject soon.

Link to comment
  • 1 month later...

shat, or anyone else

 

How do I stop database bloat or remove old stuff that is being left behind? I had trouble with a release that wouldn't process and ended up applying some settings that removed all the NZB files and headers, but the database size didn't change by much. There's really no reason for the database to be 150 GB, give or take, but it still is.
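One thing worth knowing here: deleting rows (or NZBs/headers) doesn't shrink the underlying MySQL data files by itself, so the on-disk footprint stays put until the tables are rebuilt. Something like the line below reclaims the space; the table list is just the usual big offenders (assumed, adjust to your schema), and OPTIMIZE needs enough free disk to rebuild each table, so run it with care:

    # Rebuild the big tables so MySQL actually gives the disk space back
    mysql -u newznab -p newznab -e "OPTIMIZE TABLE parts, binaries, releases;"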

Link to comment
  • 2 weeks later...

Finally got this update script working on my unRAID box and it's well worth the effort.

 

https://github.com/jonnyboy/newznab-tmux

 

I'm running 10 post-processing threads at once, which really helps churn through the backfilled releases that need to be processed.

 

As long as you have the CPU to support it. It also uses one NNTP connection per post-processing thread, in addition to the default 10 for backfill_threaded and 10 for update_binaries_threaded, so make sure you have more than 30 connections from your USP to avoid errors.

 

Another great way to increase performance is to limit your parts table. I keep my parts table down to just enough to capture 2 days of binary retention, plus 10-12 million rows for binaries while backfilling (2 million for each indexed USP).
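If you want to check whether the parts/binaries tables are the thing to tune, their row counts and on-disk size are easy to pull from information_schema (schema name assumed to be newznab; InnoDB row counts are estimates):

    mysql -u newznab -p -e \
        "SELECT table_name, table_rows,
                ROUND((data_length + index_length)/1024/1024/1024, 1) AS size_gb
         FROM information_schema.tables
         WHERE table_schema = 'newznab' AND table_name IN ('parts', 'binaries');"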

 

As far as specifying more than one location goes, you cannot. I also would not recommend trying unless you understand all the potential benefits and weigh them against the seemingly endless cons.

 

My index has been modified severely and no longer runs the release version of newznab. It has been entirely rewritten, excluding the method by which nn generates releases/NZB files. Newznab acts only as a small piece of a much larger and more robust platform now. It uses Node.js and Socket.IO as an API, and JSON is fed back through AngularJS to build the front end.

 

 

Link to comment
