[Support] Linuxserver.io - Musicbrainz


Recommended Posts

I got impatient and made edits to the Dockerfile to grab PostgreSQL 9.5 and then updated the DBDefs.pm file to the new schema number.  Started a fresh container with the changes and everything seemed to download and set up OK.  It's running fine at the moment interacting with Headphones.
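For anyone wanting to try the same thing, here is a rough sketch of that kind of change. The package names, file path and schema number are assumptions on my part, so check the actual Dockerfile and the MusicBrainz release notes before copying anything:

# 1) point the Dockerfile at the PostgreSQL 9.5 packages instead of the old ones
sed -i 's/postgresql-9\.[0-4]/postgresql-9.5/g' Dockerfile

# 2) bump the schema sequence in the musicbrainz-server checkout the image builds against
#    (22 -> 23 was the 2016 change, if memory serves; the exact formatting in the file may differ)
sed -i 's/DB_SCHEMA_SEQUENCE { 22 }/DB_SCHEMA_SEQUENCE { 23 }/' lib/DBDefs.pm

# 3) rebuild the image and start a completely fresh container from it
docker build -t musicbrainz-local .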

 

Do you mind sharing your method?

 

Best to wait until the linuxserver guys get things corrected. They are working on it.  It's more than just a couple of edits to keep everything working correctly.

Link to comment

The container has been updated by Sparklyballs and should be working again now.... BUT it requires a completely fresh install.  This comes from the Musicbrainz app and not our container, so it's completely out of our control.

 

Also, you must use /mnt/cache/ or /mnt/diskX/ for your mappings; /mnt/user/ or /mnt/user0/ will not work.
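If you are recreating the container by hand rather than through the unRAID template, the mappings end up looking roughly like this. The image name, container-side paths and port are the template defaults as far as I remember them, so adjust to your own setup:

# host paths must be /mnt/cache/... or /mnt/diskX/..., never /mnt/user/... or /mnt/user0/...
docker run -d --name=musicbrainz \
  -p 5000:5000 \
  -v /mnt/cache/appdata/musicbrainz/config:/config \
  -v /mnt/cache/appdata/musicbrainz/data:/data \
  linuxserver/musicbrainz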

 

Read about it here.

 

Due to unforeseen problems with the Live Data Feed (AKA replication), users with slave databases will be required to first import a fresh data dump into their new 9.5 installation. We apologize that this is the case, but even had this stream not been broken, doing a clean import is faster and easier than doing the migration. For details on what happened during this rather lengthy schema change release, stay tuned for a post mortem blog post that covers the details.
Link to comment

I have upgraded, and I get the message below in the docker log window:

 

[348] 15 Jun 17:26:15.558 # Server started, Redis version 2.8.4
[348] 15 Jun 17:26:15.558 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.

 

I can try to do this manually...?

 

OK, I tried what the message says and had no luck. I have 64GB of memory in my system.
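For what it's worth, that setting has to be applied on the unRAID host rather than inside the container, and /etc/sysctl.conf does not survive a reboot on unRAID. A sketch of the usual way to make it stick (whether it actually helps with the error below is another matter):

# apply immediately on the host
sysctl vm.overcommit_memory=1

# persist across reboots by adding the same command to the go file
echo 'sysctl vm.overcommit_memory=1' >> /boot/config/go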

 

When I go to the Web Interface, this is the error message at the top:

 

08006 DBI connect('dbname=musicbrainz_db','abc',...) failed: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
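That error just means the web app cannot reach a running PostgreSQL server on its local socket, so postgres has probably not started at all. A quick way to check from the unRAID console (the container name here is an assumption; use whatever yours is called):

# does the socket exist inside the container?
docker exec -it musicbrainz ls -l /var/run/postgresql/

# is the server accepting connections? (pg_isready ships with the postgres client tools)
docker exec -it musicbrainz pg_isready -h /var/run/postgresql -p 5432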

 

Any help is greatly appreciated.

EDIT

OK.... Trying with clean folders for the config and the data... both on my cache drive. It is now "fetching the dump data..." I will report back later.

 

SECOND EDIT

OK, I got the same result as before.... Now I need help, please.

 

Thanks,

 

H.

Link to comment

I am having exactly the same issue. Did you find a solution to the problem?

 

Can you post your docker mappings... And we'll take a look at this...

 

It's

/mnt/cache/appdat/config and

/mnt/cache/appdat/data

 

Could it be somehow related to unRAID version? I am running v6.1.9, not the 6.2 series...

 

Edit: I had just removed the previous version, which had installed flawlessly and was working perfectly (other than that it was no longer updating itself due to the schema changes) - so I have seen this docker working before :)

Link to comment

Seems like a Redis issue from what little I know; the musicbrainz site has had some problems recently, but they appear to be fixed.  We're looking at it...

 

There's no Redis "issue"; it's a consequence of running it in a docker.

The older version never showed the error because I piped the output to null.
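In other words, the old startup script discarded the output, something along these lines (the config path is purely illustrative):

# redis's stdout/stderr went to /dev/null, so the overcommit warning never reached the docker log
redis-server /etc/redis/redis.conf > /dev/null 2>&1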

Link to comment

Any news on this working?

 

Hernandito, I just tried a fresh pull and it's working for me. As the problem is with the musicbrainz site, there's nothing we can do, so don't necessarily expect us to keep tabs on it or update this thread.  ;)

Link to comment
  • 2 weeks later...
  • 4 weeks later...
  • 5 weeks later...
  • 2 weeks later...

I've just set this up because without it, Headphones is painfully slow... anyway, it completed its stuff (from what I can tell) within a couple of hours and I can access the server's web interface just fine. But Headphones is still taking absolutely ages to actually get anything out of it.

 

Just a few questions...

 

When the web interface is available, databases searchable etc., is it done? Or is it still doing stuff in the background?

Do I have to manually enter any commands to get the database to optimise or to build a search index?

Is there any way to easily test API calls into it?

...or is it just that Headphones is insanely slow and there's nothing anyone can do about it?
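On the API question above: the container serves the standard MusicBrainz web service, which is the interface Headphones uses, so you can time a call against it directly with curl. The IP and port below are placeholders for wherever you mapped the web UI:

# roughly the kind of artist lookup Headphones performs
curl -s -o /dev/null -w 'HTTP %{http_code} in %{time_total}s\n' \
  "http://192.168.1.10:5000/ws/2/artist/?query=artist:radiohead&fmt=json"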

 

These are the last few lines I see in the log, and there hasn't been anything since:

 

Mon Sep 12 13:53:41 2016 : Creating search indexes ... (CreateSearchIndexes.sql)
Mon Sep 12 14:09:33 2016 : Setting up replication ... (ReplicationSetup.sql)
Mon Sep 12 14:09:33 2016 : Optimizing database ...
VACUUM
Mon Sep 12 14:11:09 2016 : Initialized and imported data into the database.
Mon Sep 12 14:11:09 2016 : InitDb.pl succeeded
INITIAL IMPORT IS COMPLETE, MOVING TO NEXT PHASE
LOG: received fast shutdown request
waiting for server to shut down...LOG: aborting any active transactions
.LOG: shutting down
.........LOG: database system is shut down
done
server stopped
[cont-init.d] 30-initialise-database: exited 0.
[cont-init.d] 40-config-redis: executing... 
[cont-init.d] 40-config-redis: exited 0.
[cont-init.d] done.
[services.d] starting services
[services.d] done.
[614] 12 Sep 14:11:20.992 # Server started, Redis version 2.8.4
[614] 12 Sep 14:11:20.993 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.

 

Thanks in advance. All of your plugins for unRAID are awesome by the way!

Link to comment
  • 2 weeks later...

Similar to WonderfulSlipperyThing, I just set this up, and querying via the web page is painfully slow.  Is anyone else seeing the same thing?  I should note that I'm running this on a Core i3-based system.

 

In my tests, querying the db directly took 265ms-380ms, while an artist search via the web took 10s-17s, plus another 18s-45s to load an artist's page.  Even if the db query triggered by the web is different, this doesn't seem right to me.

 

1 - Direct Query

 

musicbrainz_db=# explain analyze select * from artist where lower(name) like '%pearl jam%';
                                                QUERY PLAN
----------------------------------------------------------------------------------------------------------
Seq Scan on artist  (cost=0.00..33420.71 rows=111 width=98) (actual time=93.738..264.023 rows=3 loops=1)
   Filter: (lower((name)::text) ~~ '%pearl jam%'::text)
   Rows Removed by Filter: 1121111
Planning time: 0.095 ms
Execution time: 264.041 ms
(5 rows)

 

2 - Chrome 'Inspect->Network' timings:

 


- Search:

search?query=pearl+jam&type=artist&method=indexed	200	document	Other	77.5 KB	10.70 s
search?query=sublime&type=artist&method=indexed	200	document	Other	167 KB	10.93 s

- Artist Page:

83b9cbe7-9857-49e2-ab8e-b57b01038103	200	document	Other	227 KB	6.18 s
95f5b748-d370-47fe-85bd-0af2dc450bc0	200	document	Other	147 KB	11.39 s

- Wikipedia Extract and Image (requested after the initial page):

wikipedia-extract	200	xhr	jquery.js:9659	2.4 KB	5.98 s
commons-image	200	xhr	jquery.js:9659	703 B	11.96 s

wikipedia-extract	200	xhr	jquery.js:9659	2.7 KB	16.73 s
commons-image	200	xhr	jquery.js:9659	388 B	33.29 s

 

Rebuilding the indexes via the following command in the docker did not improve anything (side note: I hit a db deadlock and had to disable the update crontab):

 

root@1f8dca2ee516:/usr/bin# ./reindexdb -U abc -a

 

Figured I'd post this in case anyone has any ideas.  If/when I have some time, I'd like to try to figure out which layer is introducing the delay.  Most of this is new to me, so I'm not sure how much time I'll spend on it.
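One cheap way to narrow down which layer is slow is to hit the same search URL the browser uses with curl's timing variables (swap in your own IP and port). If the time to first byte is also in the 10s range, the delay is server-side rather than in the browser or the Wikipedia/image lookups:

curl -s -o /dev/null \
  -w 'connect: %{time_connect}s  first byte: %{time_starttransfer}s  total: %{time_total}s\n' \
  "http://192.168.1.10:5000/search?query=pearl+jam&type=artist&method=indexed"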

 

Link to comment
  • 1 month later...
Guest
This topic is now closed to further replies.