[DOCKER] Pynab NZB Indexer



How do you add groups?

 

 

There is a JSON file in the mapped folder:

 

/config/groups.json

 

Edit that file to add groups, following the layout of the existing entries, and pay attention to the commas: adding a group means adding a comma after the closing } of the previous group, while the last group in the list must have no comma after its closing }.
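As a purely illustrative sketch of that comma rule (the group names and the field name below are placeholders; copy the key layout from the entries already in your own groups.json):

[
    {
        "group": "alt.binaries.example.one"
    },
    {
        "group": "alt.binaries.example.two"
    }
]

Note the comma after the first group's closing } and the absence of one after the last.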

 

 

stop/start the container.
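For example, from a host terminal (assuming the container is named pynab; adjust to whatever you called yours):

docker restart pynab

The stop/start buttons on the unRAID Docker tab do the same job.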


Just to add, I am having exactly the same issues!

 

Tony

 

Here is my log...

 

*** Running /etc/my_init.d/001-fix-the-time.sh...

Current default time zone: 'America/Los_Angeles'
Local time is now: Wed Jun 3 01:45:59 PDT 2015.
Universal Time is now: Wed Jun 3 08:45:59 UTC 2015.

*** Running /etc/my_init.d/002-set-the-config.sh...
config.js exists in /config, may require editing
config.py exists in /config, may require editing
groups.json exists in /config, may require editing
*** Running /etc/my_init.d/003-postgres-initialise.sh...
initialising empty databases in /data
completed initialisation
2015-06-03 01:46:06,085 CRIT Supervisor running as root (no user in config file)
2015-06-03 01:46:06,088 INFO supervisord started with pid 55
2015-06-03 01:46:07,091 INFO spawned: 'postgres' with pid 59
2015-06-03 01:46:07,103 INFO exited: postgres (exit status 2; not expected)
2015-06-03 01:46:08,105 INFO spawned: 'postgres' with pid 60
2015-06-03 01:46:08,117 INFO exited: postgres (exit status 2; not expected)
2015-06-03 01:46:10,121 INFO spawned: 'postgres' with pid 61
2015-06-03 01:46:10,133 INFO exited: postgres (exit status 2; not expected)
setting up pynab user and database
2015-06-03 01:46:13,138 INFO spawned: 'postgres' with pid 87
2015-06-03 01:46:13,150 INFO exited: postgres (exit status 2; not expected)
2015-06-03 01:46:14,151 INFO gave up: postgres entered FATAL state, too many start retries too quickly
pynab user and database created
building initial nzb import
THIS WILL TAKE SOME TIME, DO NOT STOP THE DOCKER
IMPORT COMPLETED
*** Running /etc/my_init.d/004-set-the-groups.sh...
Testing whether database is ready
database appears ready, proceeding
Traceback (most recent call last):
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/pool.py", line 1033, in _do_get
    return self._pool.get(wait, self._timeout)
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/util/queue.py", line 145, in get
    raise Empty
sqlalchemy.util.queue.Empty

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/engine/base.py", line 2069, in _wrap_pool_connect
    return fn()
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/pool.py", line 376, in connect
    return _ConnectionFairy._checkout(self)
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/pool.py", line 708, in _checkout
    fairy = _ConnectionRecord.checkout(pool)
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/pool.py", line 480, in checkout
    rec = pool._do_get()
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/pool.py", line 1049, in _do_get
    self._dec_overflow()
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/util/langhelpers.py", line 60, in __exit__
    compat.reraise(exc_type, exc_value, exc_tb)
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/util/compat.py", line 182, in reraise
    raise value
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/pool.py", line 1046, in _do_get
    return self._create_connection()
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/pool.py", line 323, in _create_connection
    return _ConnectionRecord(self)
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/pool.py", line 449, in __init__
    self.connection = self.__connect()
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/pool.py", line 602, in __connect
    connection = self.__pool._invoke_creator(self)
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/engine/strategies.py", line 97, in connect
    return dialect.connect(*cargs, **cparams)
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/engine/default.py", line 377, in connect
    return self.dbapi.connect(*cargs, **cparams)
  File "/usr/local/lib/python3.4/dist-packages/psycopg2/__init__.py", line 164, in connect
    conn = _connect(dsn, connection_factory=connection_factory, async=async)
psycopg2.OperationalError: could not connect to server: No such file or directory
    Is the server running locally and accepting
    connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/opt/pynab/pynab.py", line 258, in <module>
    group_list()
  File "/opt/pynab/pynab.py", line 177, in group_list
    groups = pynab.groupctl.group_list()
  File "/opt/pynab/pynab/groupctl.py", line 72, in group_list
    for group in groups:
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/orm/query.py", line 2515, in __iter__
    return self._execute_and_instances(context)
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/orm/query.py", line 2528, in _execute_and_instances
    close_with_result=True)
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/orm/query.py", line 2519, in _connection_from_session
    **kw)
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/orm/session.py", line 882, in connection
    execution_options=execution_options)
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/orm/session.py", line 887, in _connection_for_bind
    engine, execution_options)
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/orm/session.py", line 334, in _connection_for_bind
    conn = bind.contextual_connect()
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/engine/base.py", line 2034, in contextual_connect
    self._wrap_pool_connect(self.pool.connect, None),
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/engine/base.py", line 2073, in _wrap_pool_connect
    e, dialect, self)
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/engine/base.py", line 1403, in _handle_dbapi_exception_noconnection
    exc_info
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/util/compat.py", line 188, in raise_from_cause
    reraise(type(exception), exception, tb=exc_tb, cause=exc_value)
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/util/compat.py", line 181, in reraise
    raise value.with_traceback(tb)
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/engine/base.py", line 2069, in _wrap_pool_connect
    return fn()
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/pool.py", line 376, in connect
    return _ConnectionFairy._checkout(self)
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/pool.py", line 708, in _checkout
    fairy = _ConnectionRecord.checkout(pool)
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/pool.py", line 480, in checkout
    rec = pool._do_get()
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/pool.py", line 1049, in _do_get
    self._dec_overflow()
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/util/langhelpers.py", line 60, in __exit__
    compat.reraise(exc_type, exc_value, exc_tb)
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/util/compat.py", line 182, in reraise
    raise value
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/pool.py", line 1046, in _do_get
    return self._create_connection()
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/pool.py", line 323, in _create_connection
    return _ConnectionRecord(self)
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/pool.py", line 449, in __init__
    self.connection = self.__connect()
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/pool.py", line 602, in __connect
    connection = self.__pool._invoke_creator(self)
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/engine/strategies.py", line 97, in connect
    return dialect.connect(*cargs, **cparams)
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/engine/default.py", line 377, in connect
    return self.dbapi.connect(*cargs, **cparams)
  File "/usr/local/lib/python3.4/dist-packages/psycopg2/__init__.py", line 164, in connect
    conn = _connect(dsn, connection_factory=connection_factory, async=async)
sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) could not connect to server: No such file or directory
    Is the server running locally and accepting
    connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
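The lines that seem to matter are the repeated "exited: postgres (exit status 2; not expected)" entries and the missing socket at /var/run/postgresql/.s.PGSQL.5432: postgres never actually comes up, so everything after it fails. A first diagnostic step (a sketch only; the container name here is an assumption, and the log path is the one used further down this thread) would be to pull postgres's own log from the still-running container:

docker exec -it pynab cat /var/log/postgresql/postgresql-9.4-main.log
docker exec -it pynab ls -l /var/run/postgresql/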


It depends on the drive storing the data. I guess I was just trying to figure out if it was 0-20GB, 20-100GB, or something absurd. I have no idea how much space it needs, so I thought to ask.

 

did find this: http://www.reddit.com/r/usenet/comments/2ruizv/pynab_a_fast_python_postgres_usenet_indexer_with/

 

"I really don't know" is the answer; beyond testing it out, I don't personally use it and made it in return for a favour from someone who helped me out with another docker.


Understood. But it does appear that it comes down simply to what you are indexing. It could be 20GB, it could be way more, so maybe your first response is the answer: if you have a lot of disk space, index everything; if not, be selective.

 

Ultimately, let's call it 0GB to unlimited, determined by what you index.
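A quick way to keep an eye on how big it actually gets on your own box (the path below is only a guess at a typical appdata mapping for /data, so adjust it to whatever you used):

du -sh /mnt/user/appdata/pynab/data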


I'm having a bit of trouble with this docker at the moment.

 

When I try to start it, whether it's a fresh docker or one that's been running for a few hours, I get the following:

 

*** Running /etc/my_init.d/003-postgres-initialise.sh...
initialising empty databases in /data
completed initialisation
2015-07-24 13:40:25,818 CRIT Supervisor running as root (no user in config file)
2015-07-24 13:40:25,821 INFO supervisord started with pid 44
2015-07-24 13:40:26,822 INFO spawned: 'postgres' with pid 48
2015-07-24 13:40:26,832 INFO exited: postgres (exit status 2; not expected)
2015-07-24 13:40:27,834 INFO spawned: 'postgres' with pid 49
2015-07-24 13:40:27,843 INFO exited: postgres (exit status 2; not expected)
2015-07-24 13:40:29,847 INFO spawned: 'postgres' with pid 50
2015-07-24 13:40:29,856 INFO exited: postgres (exit status 2; not expected)
setting up pynab user and database
2015-07-24 13:40:32,859 INFO spawned: 'postgres' with pid 76
2015-07-24 13:40:32,869 INFO exited: postgres (exit status 2; not expected)
2015-07-24 13:40:33,870 INFO gave up: postgres entered FATAL state, too many start retries too quickly
pynab user and database created
building initial nzb import
THIS WILL TAKE SOME TIME, DO NOT STOP THE DOCKER
IMPORT COMPLETED
*** Running /etc/my_init.d/004-set-the-groups.sh...
Testing whether database is ready
database appears ready, proceeding
Traceback (most recent call last):
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/pool.py", line 1033, in _do_get
    return self._pool.get(wait, self._timeout)
  File "/usr/local/lib/python3.4/dist-packages/sqlalchemy/util/queue.py", line 145, in get
    raise Empty
sqlalchemy.util.queue.Empty

 

Checking the postgresql log after manually starting the service...

 

cat /var/log/postgresql/postgresql-9.4-main.log
2015-07-24 12:42:32 UTC [1979-1] LOG:  could not create IPv6 socket: Address family not supported by protocol
2015-07-24 12:42:32 UTC [1980-1] LOG:  database system was shut down at 2015-06-02 16:42:26 UTC
2015-07-24 12:42:32 UTC [1979-2] LOG:  database system is ready to accept connections
2015-07-24 12:42:32 UTC [1984-1] LOG:  autovacuum launcher started
2015-07-24 12:42:32 UTC [1986-1] [unknown]@[unknown] LOG:  incomplete startup packet
2015-07-24 12:42:42 UTC [2001-1] pynab@pynab LOG:  provided user name (pynab) and authenticated user name (www-data) do not match
2015-07-24 12:42:42 UTC [2001-2] pynab@pynab FATAL:  Peer authentication failed for user "pynab"
2015-07-24 12:42:42 UTC [2001-3] pynab@pynab DETAIL:  Connection matched pg_hba.conf line 90: "local   all             all                                     peer"

 

 

I'm not entirely sure where to take this now....
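The only lead I can see is that DETAIL line: local connections fall through to the "peer" rule on line 90 of pg_hba.conf, so the connection gets authenticated as the www-data system user rather than pynab. If I were going to experiment, it would be something along these lines inside the container (a sketch only; the path assumes the stock Debian 9.4 layout, and edits made inside the container may not survive an image update):

docker exec -it <container-name> bash
nano /etc/postgresql/9.4/main/pg_hba.conf
# change the matched line from:
#   local   all             all                                     peer
# to:
#   local   all             all                                     md5
# (or "trust" if the pynab role has no password), then exit and restart the container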


I just tried to install this and nothing happens when I click the Create button. A friend of mine tried on his server as well and he gets the same issue. I filled in the Host paths for /config and /data and set the Host port, but when I click Create nothing happens in IE. I tried Firefox and I do see a popup that says "Please fill out this field", but it's pointing to the top left corner of my browser, not any specific field in the form.

 

Unraid 6.1.3



Click Advanced and fill out the variables.


Just as an example, I've been running it since the day he put it on the beta repo, and currently have an uptime of about 53 days (not sure if it clears the database on startup or continues with the same database; I've never paid any attention). I'm currently using about 53GB of space for the database.

 

Current stats:

 

[screenshot of pynab stats]

 

I do have a NN+ account as well.

 

 


Has anyone set this up recently? I just tried to get it going and it's not working at all; given the basic setup, I am at a loss as to how it is broken.

 

Seems to be working here for me (unRAID 6.1.4); the docker creation was clean, and upon startup it created the database and is now in the process of backfilling the default list of groups (14 days). The {appdata}/pynab/data/main/ folder is currently 1.5GB...


Thanks :)

 

What happens if you use two providers like I do? A main and a backup.

 

Also, what does this "backfill_days :- number of days you want to backfill" mean?

No idea on using two providers..

 

Backfill days, I would imagine, means how many days you want to go back and index...

 

But I don't use Pynab.

 

Once it's up and running, you can look for generic Pynab config info on the www as it's not Unraid specific.  That's my general strategy with containers.
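As a pointer only: the startup log earlier in this thread shows the container dropping a config.py into /config, and that is where a setting like this would normally live. A purely illustrative excerpt (the section name and default value are assumptions, so go by what your own file actually contains):

scan = {
    # assumed section/key layout; check your own /config/config.py
    'backfill_days': 14,
}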


 

Do you run similar software on your unRAID?
