CrashnBrn Posted December 27, 2012

I noticed that the script out of the box only post-processes 100 NZBs; one of the mods on that page shows where to increase that, and I think I'm going to on my install. So far my system seems to be doing well overall, and I think I'm actually okay on disk usage :-O

I have a justpostprocess script running from the site I posted. I really like it, as it makes the normal script go faster: it only processes one release, and the post-process script does the rest. So far it's been working really well.
dheg Posted December 28, 2012

This is a short summary of my findings:

- Update binaries searches Usenet for headers.
- Update releases consolidates the headers into releases.
- Post processing looks up the releases (NFOs) on the internet and fills in the information on the web interface.

By default, newznab works sequentially (assuming you use the screen or init scripts):

- Update binaries
- Update releases
- Post process 100 releases, regardless of how many you actually have

However, if you follow these instructions, post processing will run continuously, kind of multitasking: it will post-process old releases while it's grabbing headers or sorting releases.

Benefits:

- Post processing will run continuously and will, potentially, post-process more releases per day.
- You will not spam Amazon's API, assuming of course you change the postprocess.php file in www/lib to process just one release at a time.

However, as the postprocess.php script will also be called during standard operations, there is a chance (I don't know how big) that the two will clash and get killed, so you'll have to restart it again.

I hope that's clear!
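The clash dheg describes can be sidestepped with a lock. The sketch below is not from the guide he linked; paths, the lock file, and the sleep interval are all assumptions. It wraps postprocess.php in flock so a continuous loop and a cron-fired copy can never run the script at the same time.

```shell
#!/bin/sh
# Continuous post-processing loop -- a minimal sketch. Assumptions: newznab
# lives under /var/www/newznab and flock (util-linux) is available.
NN_DIR="${NN_DIR:-/var/www/newznab/misc/update_scripts}"
LOCK="/tmp/nn_postprocess.lock"

# flock -n fails immediately if another holder has the lock, so two copies
# of postprocess.php can never run at once and "clash".
run_batch() {
    flock -n "$LOCK" php "$NN_DIR/postprocess.php" \
        || echo "post-processing already running, skipping this pass"
}

# Loop only when the install actually exists; otherwise this does nothing.
while [ -d "$NN_DIR" ]; do
    run_batch
    sleep 60
done
```

Whether newznab's own scripts tolerate a concurrent run is exactly the open question in this thread; the lock simply guarantees the collision cannot happen at the shell level.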
shat Posted December 28, 2012

Blkmgk: I have a script that I cron to keep post processing running 24/7 in 100-release batches. I have the script somewhere; I'll link it once I'm at a PC.
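shat's actual script wasn't posted, but a minimal stand-in is easy to sketch. Everything below (paths, the schedule, even the pgrep guard) is a hypothetical reconstruction, not his script:

```shell
#!/bin/sh
# Hypothetical cron wrapper for 100-release post-processing batches.
# Example crontab entry, firing every 15 minutes:
#   */15 * * * * /usr/local/bin/nn_postproc.sh >> /var/log/nn_postproc.log 2>&1
NN_DIR="${NN_DIR:-/var/www/newznab/misc/update_scripts}"

# Run a batch only if no previous batch is still going and the install exists.
if ! pgrep -f postprocess.php >/dev/null 2>&1 && [ -d "$NN_DIR" ]; then
    ( cd "$NN_DIR" && php postprocess.php )
fi
```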
shat Posted December 28, 2012

Didn't read dheg's posted link. I've never had any "clashes" myself in two-plus years.
shat Posted December 28, 2012

I recommend adding update_parsing and removespecial to the end of your script as well.
dheg Posted December 28, 2012

> Didn't read dheg's posted link. I've never had any "clashes" myself in two plus years.

Fair enough. The mod on the newznab IRC couldn't confirm how often this might happen; he only said, "Sometimes it may clash, especially if using MP3 samples."
shat Posted December 28, 2012

I'll bring it up in the dev chat and let you know what's up.
CrashnBrn Posted December 28, 2012

I took all of the testing scripts and put all the descriptions in one place, though a couple of them did not say what they do: http://pastebin.com/WK6AFNNg
dheg Posted December 28, 2012

> I took all of the testing scripts and put all the descriptions in one place. Though a couple of them did not say what they did. http://pastebin.com/WK6AFNNg

Thanks! Great work! Are you on the newznab forums? Would you mind if I post this there?
CrashnBrn Posted December 28, 2012

> Thanks! Great work! Are you on the newznab forums? Would you mind if I post this there?

I am! I'll post it there and save you the work.
BLKMGK (thread author) Posted December 28, 2012

> I recommend adding update_parsing and removespecial to the end of your script as well.

I'll try to take a look at it tonight, thanks! I'd be interested in setting my system up to background process continuously as well. Mostly I'm just concerned that I process all new incoming and don't pound external APIs if possible :-)
CrashnBrn Posted December 29, 2012

> I recommend adding update_parsing and removespecial to the end of your script as well.

I would be careful about running the removespecial script if you have anime, as release groups usually put their names in brackets at the beginning of the release name; I believe this would remove them. Still trying to find a way to get rid of foreign releases. I'm going to be massively updating my blacklist.
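To illustrate the worry (the release name and the cleanup pattern here are made-up examples, not removespecial's actual code): anything that strips a leading bracketed tag takes the anime group's name with it.

```shell
#!/bin/sh
# A typical anime release name carries the group tag in leading brackets.
name='[HorribleSubs] Some Show - 01 [720p]'

# A removespecial-style cleanup that drops a leading [...] tag:
cleaned=$(printf '%s\n' "$name" | sed 's/^\[[^]]*\] *//')

echo "$cleaned"    # prints: Some Show - 01 [720p]  -- the group name is gone
```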
shat Posted December 29, 2012

I have some regex entries that I will publish on my indexer's blog. You will find them most useful.
CrashnBrn Posted December 29, 2012

> I have some regex entries that I will publish on my indexer's blog. You will find them most useful.

Shat, you're awesome! I might have missed it; can you please post a link to your blog?
shat Posted December 29, 2012

Here are the rules I am using, 0 through 3; I included some more that are useful for testing as well. I will also soon publish a script I have that scans your entire database for matches to new blacklist rules — useful for testing blacklist/regex rules and then blacklisting the matches if you find it appropriate. A feature I often hear people ask about is "will the blacklist go back and remove matches?" The short answer is no, not in its current state, but this script will do it for you. I'll post it soon.

Again (plugging myself): if anyone wants access to my index, read the details in the lounge NZB private index post and I'll get you hooked up. I will add a more formal post on the subject soon.
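A cheap way to vet a candidate blacklist regex before it ever touches the database is to dry-run it over a handful of sample release names; the same pattern can then be tried server-side with MySQL's REGEXP against the releases table. The pattern and names below are invented examples, not shat's rules:

```shell
#!/bin/sh
# Dry-run a candidate blacklist pattern against sample release names.
pattern='[._ -](german|french|spanish|nordic)[._ -]'

# grep -Ei: extended regex, case-insensitive; prints the names that the
# blacklist rule would catch.
printf '%s\n' \
    'Some.Movie.2012.GERMAN.DVDRip.XviD' \
    'Another.Show.S01E01.720p.HDTV.x264' \
    'Film.2011.FRENCH.BDRip' \
    | grep -Ei "$pattern"
```

Here the first and third names match (and would be blacklisted); the plain English HDTV release sails through, which is exactly what you want to confirm before enabling a rule.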
shat Posted January 1, 2013

Link to the blog: http://usenetnews.tumblr.com
jbrodriguez Posted February 2, 2013

shat, are you running the private indexer? If that's the case, do you still have invites available?
lionelhutz Posted February 3, 2013

> Again (plugging myself) if anyone wants access to my index, read the details in the lounge nzb private index post and I'll get you hooked up.

The thread is gone.
shat Posted February 4, 2013

Yes, the index is still available. Email me for an invite: [email protected]
lionelhutz Posted February 4, 2013

shat, or anyone else: how do I stop database bloat, or remove old stuff that is being left behind? I had trouble with a release that wouldn't process and ended up applying settings that removed all the NZB files and headers, but the database size didn't change by much. There's really no reason for the database to be 150 GB, give or take, but it still is.
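A first step for a question like this is finding out where the 150 GB actually lives. The sketch below builds an information_schema query listing each table's size and reclaimable free space (the database name "newznab" and the credentials are assumptions). Note that MySQL does not shrink files when rows are deleted — large data_free values usually mean an OPTIMIZE TABLE is due, and with InnoDB's shared ibdata1 even that won't return space to the filesystem unless innodb_file_per_table was enabled.

```shell
#!/bin/sh
# Emit a per-table size report query for the given schema (default: newznab).
size_query() {
    printf 'SELECT table_name,
       ROUND((data_length + index_length) / 1024 / 1024 / 1024, 2) AS size_gb,
       ROUND(data_free / 1024 / 1024 / 1024, 2) AS reclaimable_gb
FROM information_schema.tables
WHERE table_schema = "%s"
ORDER BY (data_length + index_length) DESC;\n' "${1:-newznab}"
}

# Print the query; pipe it into the mysql client to actually run it, e.g.:
#   size_query newznab | mysql -u root -p
size_query newznab
```

On a newznab install the parts and binaries tables are usually the heavy ones, so they're the first candidates for trimming and optimizing.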
PsiCzar Posted February 4, 2013

Shat, you mention on your blog that you use multiple news providers for your newznab server. How can you specify more than one?
PsiCzar Posted February 4, 2013

Finally got this update script working on my unRAID box, and it's well worth the effort: https://github.com/jonnyboy/newznab-tmux

I'm running 10 post-processing threads at once, which really helps churn through the backfilled releases that need to be processed.
shat Posted February 13, 2013

> Finally got this update script working on my unraid box and its well worth the effort. https://github.com/jonnyboy/newznab-tmux

As long as you have the CPU to support it. It also uses one NNTP connection per post-processing thread, in addition to the default 10 for backfill_threaded and 10 for update_binaries_threaded. Make sure you have more than 30 connections from your USP to avoid errors.

Another great way to increase performance is to limit your parts table. I keep my parts tables down to just enough to capture 2 days of retention on binaries, plus 10-12 million rows while backfilling (2 million for each indexed USP).

As far as specifying more than one provider: you cannot, and I would not recommend trying unless you understand all the potential benefits and weigh them against the seemingly endless cons.

My index has been modified severely and no longer runs the release version of newznab. It has been entirely rewritten, excluding the method in which nn generates releases/NZB files. Newznab acts only as a small piece of a much larger and more robust platform now: it utilizes Node.js and Socket.IO as an API, and JSON is fed back through AngularJS to build the front end.