[Support] Linuxserver.io - Musicbrainz



 

 


 

Application Name: Musicbrainz

Application Site: https://musicbrainz.org/

Docker Hub: https://hub.docker.com/r/linuxserver/musicbrainz/

Github: https://github.com/linuxserver/docker-musicbrainz

Metabrainz Account Sign Up: https://metabrainz.org/supporters/account-type

 

Please post any questions or issues you have relating to this docker in this thread.

 

If you are not using Unraid (and you should be!) then please do not post here; instead, head to linuxserver.io to see how to get support.

 

*Please make sure you use /mnt/cache/ or /mnt/diskx/ for your mappings; /mnt/user/ or /mnt/user0/ will not work.*

 

EDIT: 04/03/2017

AFTER initialisation is complete you will need to edit the line sub WEB_SERVER { "localhost:5000" } in the file /config/DBDefs.pm, changing localhost to the IP of your host. This is to allow the CSS to display properly.
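A one-liner sketch of that edit (192.168.1.100 is an example IP; the edit is shown here against a throwaway copy rather than your real /config/DBDefs.pm):

```shell
# Stand-in for the relevant line of /config/DBDefs.pm.
printf 'sub WEB_SERVER { "localhost:5000" }\n' > /tmp/DBDefs.pm

# Replace localhost with the host's IP so the CSS URLs resolve correctly.
sed -i 's/localhost:5000/192.168.1.100:5000/' /tmp/DBDefs.pm

grep 'WEB_SERVER' /tmp/DBDefs.pm
# sub WEB_SERVER { "192.168.1.100:5000" }
```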

EDIT: 18/05/2017
With the update to schema 24, please remove the contents of your /config and /data folders, pull the latest image, and reinitialise the database.

Edited by linuxserver.io

This docker requires a code from musicbrainz so that the database can be updated from their servers.

 

BEFORE ADDING THIS DOCKER

 

Go to the MetaBrainz site and click non-commercial.

 

[screenshot]

 

Then create an account like so.

 

[screenshot]

 

Go through the pages, filling them in as appropriate.

 

[screenshot]

 

Then generate your access token like so...

 

[screenshot]

 

Select and copy this access token; you'll need it for the next bit.

 

NOW ADD THE DOCKER

 

You'll need to click the advanced tab and paste your access token into the BRAINZCODE field under Environment Variables, like so.

 

[screenshot]

 

Map your two folders to appropriate locations as normal and then click create.
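For anyone running this outside the Unraid template, the equivalent docker run might look something like this sketch (the token, port, and host paths are placeholders; keep the /mnt/cache-or-/mnt/diskx rule from the first post in mind):

```shell
# BRAINZCODE is the access token copied from MetaBrainz; paths are examples.
docker run -d \
  --name=musicbrainz \
  -e BRAINZCODE=<your-access-token> \
  -e PUID=99 -e PGID=100 \
  -p 5000:5000 \
  -v /mnt/cache/appdata/musicbrainz/config:/config \
  -v /mnt/cache/appdata/musicbrainz/data:/data \
  linuxserver/musicbrainz
```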

 

The docker will then generate a blank postgres data structure, download the latest data dump from musicbrainz and then import the data.

 

THE DOWNLOAD MAY BE UP TO 5GB AND THE PROCESS OF DOWNLOADING AND SUBSEQUENT IMPORT CAN TAKE A LONG TIME.

 

The docker may appear unresponsive and the logs can look stuck on

 

BEGINNING INITIAL DATABASE IMPORT ROUTINE, THIS COULD TAKE SEVERAL HOURS AND THE DOCKER MAY LOOK UNRESPONSIVE
DO NOT STOP DOCKER UNTIL IT IS COMPLETED

 

This is normal, just go off and start another civilisation on a small uninhabited island, or drink a metric ton of coffee and be patient.  ;D

 

Every hour or so check the log, and eventually it should show something like this:

 

IMPORT IS COMPLETE, MOVING TO NEXT PHASE
*** Running /etc/my_init.d/004-import-databases--and-or-run-everything.sh...
*** Running /etc/my_init.d/00_regen_ssh_host_keys.sh...
*** Running /etc/rc.local...
*** Booting runit daemon...
*** Runit started as PID 609
May 23 12:34:49 1cd29f5762de syslog-ng[614]: syslog-ng starting up; version='3.5.3'
May 23 12:59:01 1cd29f5762de /USR/SBIN/CRON[695]: (root) CMD (/bin/bash /root/update-script.sh)
May 23 13:00:01 1cd29f5762de /USR/SBIN/CRON[718]: (root) CMD (/bin/bash /root/update-script.sh)
May 23 13:17:01 1cd29f5762de /USR/SBIN/CRON[873]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
May 23 13:59:01 1cd29f5762de /USR/SBIN/CRON[1266]: (root) CMD (/bin/bash /root/update-script.sh)
May 23 14:00:01 1cd29f5762de /USR/SBIN/CRON[1283]: (root) CMD (/bin/bash /root/update-script.sh)
May 23 14:17:01 1cd29f5762de /USR/SBIN/CRON[1444]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)
May 23 14:59:01 1cd29f5762de /USR/SBIN/CRON[1622]: (root) CMD (/bin/bash /root/update-script.sh)
May 23 15:00:01 1cd29f5762de /USR/SBIN/CRON[1634]: (root) CMD (/bin/bash /root/update-script.sh)
May 23 15:17:01 1cd29f5762de /USR/SBIN/CRON[1694]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly)

 

It's done...


I installed this today and here is my logfile. Once it started to sync the database, I left it alone. When I came back to it, the docker was off. I restarted it and got the message about memory. Before installing this version I deleted the old container, image, and config folder.

 

-----------------------------------
          _     _ _
         | |___| (_) ___
         | / __| | |/ _ \
         | \__ \ | | (_) |
         |_|___/ |_|\___/
               |_|

Brought to you by linuxserver.io
-----------------------------------
GID/UID
-----------------------------------
User uid: 99
User gid: 100
-----------------------------------

We are now refreshing packages from apt repositorys, this *may* take a while
initialising empty databases
completed postgres initialise
BEGINNING INITIAL DATABASE IMPORT ROUTINE, THIS COULD TAKE SEVERAL HOURS AND THE DOCKER MAY LOOK UNRESPONSIVE
DO NOT STOP DOCKER UNTIL IT IS COMPLETED

-----------------------------------
          _     _ _
         | |___| (_) ___
         | / __| | |/ _ \
         | \__ \ | | (_) |
         |_|___/ |_|\___/
               |_|

Brought to you by linuxserver.io
-----------------------------------
GID/UID
-----------------------------------
User uid: 99
User gid: 100
-----------------------------------

We are now refreshing packages from apt repositorys, this *may* take a while
(Reading database ... 27044 files and directories currently installed.)
Preparing to unpack .../git-core_1%3a1.9.1-1ubuntu0.2_all.deb ...
Unpacking git-core (1:1.9.1-1ubuntu0.2) over (1:1.9.1-1ubuntu0.1) ...
Setting up git-core (1:1.9.1-1ubuntu0.2) ...
[114] 22 Dec 21:06:47.910 # Server started, Redis version 2.8.4
[114] 22 Dec 21:06:47.910 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
Dec 22 21:06:47 ff1e49e9239a syslog-ng[113]: syslog-ng starting up; version='3.5.3'


Attached are my mappings etc as requested from the other thread.

 

This drive is mounted with Unassigned Devices and works for the other dockers I have; it was just this one docker that fails after grabbing.

 

I even tried mapping the files to the cache drive, with the below log messages. Refreshing the dashboard page shows the docker as off.

 

-----------------------------------
          _     _ _
         | |___| (_) ___
         | / __| | |/ _ \
         | \__ \ | | (_) |
         |_|___/ |_|\___/
               |_|

Brought to you by linuxserver.io
-----------------------------------
GID/UID
-----------------------------------
User uid: 99
User gid: 100
-----------------------------------

We are now refreshing packages from apt repositorys, this *may* take a while
(Reading database ... 27044 files and directories currently installed.)
Preparing to unpack .../git-core_1%3a1.9.1-1ubuntu0.2_all.deb ...
Unpacking git-core (1:1.9.1-1ubuntu0.2) over (1:1.9.1-1ubuntu0.1) ...
Setting up git-core (1:1.9.1-1ubuntu0.2) ...
initialising empty databases
completed postgres initialise
BEGINNING INITIAL DATABASE IMPORT ROUTINE, THIS COULD TAKE SEVERAL HOURS AND THE DOCKER MAY LOOK UNRESPONSIVE
DO NOT STOP DOCKER UNTIL IT IS COMPLETED

[screenshot]


I woke up and the docker was off. Here is the latest log from the webgui. Can you point me to the other log location?

 

Webgui

-----------------------------------
          _     _ _
         | |___| (_) ___
         | / __| | |/ _ \
         | \__ \ | | (_) |
         |_|___/ |_|\___/
               |_|

Brought to you by linuxserver.io
-----------------------------------
GID/UID
-----------------------------------
User uid: 99
User gid: 100
-----------------------------------

We are now refreshing packages from apt repositorys, this *may* take a while
(Reading database ... 27044 files and directories currently installed.)
Preparing to unpack .../git-core_1%3a1.9.1-1ubuntu0.2_all.deb ...
Unpacking git-core (1:1.9.1-1ubuntu0.2) over (1:1.9.1-1ubuntu0.1) ...
Setting up git-core (1:1.9.1-1ubuntu0.2) ...
initialising empty databases
completed postgres initialise
BEGINNING INITIAL DATABASE IMPORT ROUTINE, THIS COULD TAKE SEVERAL HOURS AND THE DOCKER MAY LOOK UNRESPONSIVE
DO NOT STOP DOCKER UNTIL IT IS COMPLETED
Tue Dec 22 21:48:35 2015 : InitDb.pl starting
Tue Dec 22 21:48:35 2015 : Creating database 'musicbrainz_db'

Failed to create language plpgsql -- it's likely to be already installed, continuing.
CREATE SCHEMA
CREATE SCHEMA
CREATE SCHEMA
CREATE SCHEMA
CREATE SCHEMA
CREATE SCHEMA
CREATE SCHEMA
Tue Dec 22 21:48:38 2015 : Installing extensions (Extensions.sql)
Tue Dec 22 21:48:38 2015 : Creating tables ... (CreateTables.sql)
Tue Dec 22 21:48:40 2015 : Creating tables ... (caa/CreateTables.sql)
Tue Dec 22 21:48:40 2015 : Creating documentation tables ... (documentation/CreateTables.sql)
Tue Dec 22 21:48:41 2015 : Creating tables ... (report/CreateTables.sql)
Tue Dec 22 21:48:41 2015 : Creating sitemaps tables ... (sitemaps/CreateTables.sql)
Tue Dec 22 21:48:41 2015 : Creating statistics tables ... (statistics/CreateTables.sql)
Tue Dec 22 21:48:41 2015 : Creating wikidocs tables ... (wikidocs/CreateTables.sql)
Tue Dec 22 21:48:42 2015 : Pre-checking /data/import/20151219-005423/mbdump-cdstubs.tar.bz2
Tue Dec 22 21:48:42 2015 : Pre-checking /data/import/20151219-005423/mbdump-cover-art-archive.tar.bz2
Tue Dec 22 21:48:42 2015 : Pre-checking /data/import/20151219-005423/mbdump-derived.tar.bz2
Tue Dec 22 21:48:42 2015 : Pre-checking /data/import/20151219-005423/mbdump-editor.tar.bz2
Tue Dec 22 21:48:42 2015 : Pre-checking /data/import/20151219-005423/mbdump-sitemaps.tar.bz2
Tue Dec 22 21:48:42 2015 : Pre-checking /data/import/20151219-005423/mbdump-stats.tar.bz2
Tue Dec 22 21:48:42 2015 : Pre-checking /data/import/20151219-005423/mbdump.tar.bz2
Tue Dec 22 21:48:42 2015 : Pre-checking /data/import/20151219-005423/mbdump-wikidocs.tar.bz2
Tue Dec 22 21:48:42 2015 : tar -C /data/import/20151219-005423/MBImport-kNldDTaH --bzip2 -xvf /data/import/20151219-005423/mbdump-cdstubs.tar.bz2
TIMESTAMP
COPYING
README
REPLICATION_SEQUENCE
SCHEMA_SEQUENCE
mbdump/cdtoc_raw
mbdump/release_raw
mbdump/track_raw
Tue Dec 22 21:48:50 2015 : tar -C /data/import/20151219-005423/MBImport-BXN0k9ak --bzip2 -xvf /data/import/20151219-005423/mbdump-cover-art-archive.tar.bz2
TIMESTAMP
COPYING
README
REPLICATION_SEQUENCE
SCHEMA_SEQUENCE
mbdump/cover_art_archive.art_type
mbdump/cover_art_archive.image_type
mbdump/cover_art_archive.cover_art
mbdump/cover_art_archive.cover_art_type
mbdump/cover_art_archive.release_group_cover_art
Tue Dec 22 21:48:53 2015 : tar -C /data/import/20151219-005423/MBImport-OoUEdP3W --bzip2 -xvf /data/import/20151219-005423/mbdump-derived.tar.bz2
TIMESTAMP
COPYING
README
REPLICATION_SEQUENCE
SCHEMA_SEQUENCE
mbdump/annotation
mbdump/area_annotation
mbdump/area_tag
mbdump/artist_annotation
mbdump/artist_meta
mbdump/artist_tag
mbdump/event_annotation
mbdump/event_meta
mbdump/event_tag
mbdump/instrument_tag
mbdump/label_annotation
mbdump/label_meta
mbdump/label_tag
mbdump/place_annotation
mbdump/place_tag
mbdump/recording_annotation
mbdump/recording_meta
mbdump/recording_tag
mbdump/release_annotation
mbdump/release_meta
mbdump/release_group_annotation
mbdump/release_group_meta
mbdump/release_group_tag
mbdump/release_tag
mbdump/series_annotation
mbdump/series_tag
mbdump/tag
mbdump/tag_relation
mbdump/medium_index
mbdump/work_annotation
mbdump/work_meta
mbdump/work_tag
Tue Dec 22 21:49:11 2015 : tar -C /data/import/20151219-005423/MBImport-UQ8DeGir --bzip2 -xvf /data/import/20151219-005423/mbdump-editor.tar.bz2
TIMESTAMP
COPYING
README
REPLICATION_SEQUENCE
SCHEMA_SEQUENCE
mbdump/editor_sanitised
Tue Dec 22 21:49:14 2015 : InitDb.pl failed

 

 


docker logs musicbrainz as requested.

 

*** Running /etc/my_init.d/005_set_time.sh...

Current default time zone: 'America/New_York'
Local time is now:      Wed Dec 23 22:17:03 EST 2015.
Universal Time is now:  Thu Dec 24 03:17:03 UTC 2015.

*** Running /etc/my_init.d/00_regen_ssh_host_keys.sh...
*** Running /etc/my_init.d/10_add_user_abc.sh...

-----------------------------------
          _     _ _
         | |___| (_) ___
         | / __| | |/ _ \
         | \__ \ | | (_) |
         |_|___/ |_|\___/
               |_|

Brought to you by linuxserver.io
-----------------------------------
GID/UID
-----------------------------------
User uid:    99
User gid:    100
-----------------------------------

*** Running /etc/my_init.d/20_apt_update.sh...
We are now refreshing packages from apt repositorys, this *may* take a while
(Reading database ... 27044 files and directories currently installed.)
Preparing to unpack .../git-core_1%3a1.9.1-1ubuntu0.2_all.deb ...
Unpacking git-core (1:1.9.1-1ubuntu0.2) over (1:1.9.1-1ubuntu0.1) ...
Setting up git-core (1:1.9.1-1ubuntu0.2) ...
*** Running /etc/my_init.d/30_set_config.sh...
*** Running /etc/my_init.d/40_initialise.sh...
initialising empty databases
*** /etc/my_init.d/40_initialise.sh failed with status 1

*** Killing all processes...
*** Running /etc/my_init.d/005_set_time.sh...
*** Running /etc/my_init.d/00_regen_ssh_host_keys.sh...
*** Running /etc/my_init.d/10_add_user_abc.sh...

-----------------------------------
          _     _ _
         | |___| (_) ___
         | / __| | |/ _ \
         | \__ \ | | (_) |
         |_|___/ |_|\___/
               |_|

Brought to you by linuxserver.io
-----------------------------------
GID/UID
-----------------------------------
User uid:    99
User gid:    100
-----------------------------------

*** Running /etc/my_init.d/20_apt_update.sh...
We are now refreshing packages from apt repositorys, this *may* take a while
*** Running /etc/my_init.d/30_set_config.sh...
*** Running /etc/my_init.d/40_initialise.sh...
initialising empty databases
*** /etc/my_init.d/40_initialise.sh failed with status 1

*** Killing all processes...
root@Tower:~#

 

@Splnut

 

This was found by SSHing into the Unraid tower and typing 'docker logs musicbrainz' without the quotes.

 


You need something like 40-50GB free for the /data mapping; if you don't have that, it will most likely fail.

 

If it does fail, no amount of restarting will help unless you delete the /data/dbase folder first.

 

Try not nesting the two volumes (/config and /data) under one subfolder, and not using Unassigned Devices for appdata.
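That recovery step as a minimal script (a sketch, demonstrated on a throwaway directory created with mktemp; on a real install DATA would be your /data mapping, e.g. /mnt/cache/appdata/musicbrainz/data):

```shell
# Stand-in for the /data mapping; substitute your real host path.
DATA=$(mktemp -d)
mkdir -p "$DATA/dbase"   # simulate the partially imported database

# The import wants roughly 40-50GB free; check before retrying.
AVAIL_KB=$(df -Pk "$DATA" | awk 'NR==2 {print $4}')
echo "free space on data volume: ${AVAIL_KB} KB"

# A failed import leaves a half-built database that blocks every retry,
# so remove it before restarting the container.
rm -rf "$DATA/dbase"
```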


So I tried the root of the cache drive and the root of the SSD outside the array that's mounted with Unassigned Devices, both with the same results. Finally I dropped it directly onto one of the data disks in my array and at last it's pulling data as described.

 

Both of the drives above had 100+GB free, but both were SSDs (one in the array, the other outside). Any idea why the docker can't be properly mapped to either device?

 

Update: spoke too soon. The docker stopped with the details below, specifically:

 

curl: (78) RETR response: 550 *** /etc/my_init.d/40_initialise.sh failed with status 78

 

*** Running /etc/my_init.d/005_set_time.sh...

Current default time zone: 'America/New_York'
Local time is now:      Fri Dec 25 23:48:43 EST 2015.
Universal Time is now:  Sat Dec 26 04:48:43 UTC 2015.

*** Running /etc/my_init.d/00_regen_ssh_host_keys.sh...
*** Running /etc/my_init.d/10_add_user_abc.sh...

-----------------------------------
          _     _ _
         | |___| (_) ___
         | / __| | |/ _ \
         | \__ \ | | (_) |
         |_|___/ |_|\___/
               |_|

Brought to you by linuxserver.io
-----------------------------------
GID/UID
-----------------------------------
User uid:    99
User gid:    100
-----------------------------------

*** Running /etc/my_init.d/20_apt_update.sh...
We are now refreshing packages from apt repositorys, this *may* take a while
(Reading database ... 27044 files and directories currently installed.)
Preparing to unpack .../git-core_1%3a1.9.1-1ubuntu0.2_all.deb ...
Unpacking git-core (1:1.9.1-1ubuntu0.2) over (1:1.9.1-1ubuntu0.1) ...
Setting up git-core (1:1.9.1-1ubuntu0.2) ...
*** Running /etc/my_init.d/30_set_config.sh...
*** Running /etc/my_init.d/40_initialise.sh...
initialising empty databases
completed postgres initialise
BEGINNING INITIAL DATABASE IMPORT ROUTINE, THIS COULD TAKE SEVERAL HOURS AND THE DOCKER MAY LOOK UNRESPONSIVE
DO NOT STOP DOCKER UNTIL IT IS COMPLETED
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 64.5M  100 64.5M    0     0  4738k      0  0:00:13  0:00:13 --:--:-- 7804k
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 18.0M  100 18.0M    0     0  2407k      0  0:00:07  0:00:07 --:--:-- 3906k
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  115M  100  115M    0     0  5910k      0  0:00:20  0:00:20 --:--:-- 7791k
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:--  0:00:01 --:--:--     0
curl: (78) RETR response: 550
*** /etc/my_init.d/40_initialise.sh failed with status 78

*** Killing all processes...


Does this docker sort, tag, and identify music files in its database? Is this a web front end for Picard by MusicBrainz?

 

I'm looking for something to sort through all my mp3s and retag them and was hoping this works for what I want to do.

This docker is used to create a local mirror of the MusicBrainz database, that's all. It can be used with Headphones, beets, etc.
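As an example, beets can be pointed at a local mirror via its musicbrainz host option; a sketch, assuming the container's web interface is on port 5000 (the IP is a placeholder, and you should check the beets docs for the options your version supports):

```yaml
# Fragment of beets' config.yaml - point lookups at the local mirror
# instead of musicbrainz.org.
musicbrainz:
    host: 192.168.1.100:5000
```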


I just tried installing the docker but it keeps getting to the same spot and failing, and by failing I mean when the page is refreshed it shows the docker as stopped.

 

-----------------------------------
_ _ _
| |___| (_) ___
| / __| | |/ _ \ 
| \__ \ | | (_) |
|_|___/ |_|\___/
|_|

Brought to you by linuxserver.io
-----------------------------------
GID/UID
-----------------------------------
User uid: 99
User gid: 100
-----------------------------------

We are now refreshing packages from apt repositorys, this *may* take a while
(Reading database ... 27044 files and directories currently installed.)
Preparing to unpack .../git-core_1%3a1.9.1-1ubuntu0.2_all.deb ...
Unpacking git-core (1:1.9.1-1ubuntu0.2) over (1:1.9.1-1ubuntu0.1) ...
Setting up git-core (1:1.9.1-1ubuntu0.2) ...
initialising empty databases

 

Screenshot of the container settings attached.

[screenshot]


My logs looked exactly like yours, so I tried directly on a disk and it is working for me as well. But does anyone have a solution for this? I don't really want two drives on my server on at all times; I was hoping it could just be on the cache like everything else.

 

Mine is on the cache; it always has been.

 

 


It appears that the latest version of the MusicBrainz DB dump is missing the mbdump.tar.bz2 file; it was there in the previous release from the 26th Dec.

 

The secondary mirror gives permission denied, which I guess explains the FTP 550 error when pulling the files.

 

Is there any way to download the previous version or copy the files manually?


I'm still having issues getting this going. I have tried:

 

  • Nested and non-nested folders
  • Cache, external, array drive
  • Deleted/Recreated Docker
  • Deleted/Recreated folders
  • Created new MetaBrainz key

 

root@Tower:/mnt/cache/appdata# docker logs musicbrainz
*** Running /etc/my_init.d/005_set_time.sh...

Current default time zone: 'America/New_York'
Local time is now:      Wed Dec 30 12:16:41 EST 2015.
Universal Time is now:  Wed Dec 30 17:16:41 UTC 2015.

*** Running /etc/my_init.d/00_regen_ssh_host_keys.sh...
*** Running /etc/my_init.d/10_add_user_abc.sh...

-----------------------------------
          _     _ _
         | |___| (_) ___
         | / __| | |/ _ \
         | \__ \ | | (_) |
         |_|___/ |_|\___/
               |_|

Brought to you by linuxserver.io
-----------------------------------
GID/UID
-----------------------------------
User uid:    99
User gid:    100
-----------------------------------

*** Running /etc/my_init.d/20_apt_update.sh...
We are now refreshing packages from apt repositorys, this *may* take a while
*** Running /etc/my_init.d/30_set_config.sh...
*** Running /etc/my_init.d/40_initialise.sh...
initialising empty databases
completed postgres initialise
BEGINNING INITIAL DATABASE IMPORT ROUTINE, THIS COULD TAKE SEVERAL HOURS AND THE DOCKER MAY LOOK UNRESPONSIVE
DO NOT STOP DOCKER UNTIL IT IS COMPLETED
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 64.5M  100 64.5M    0     0  4996k      0  0:00:13  0:00:13 --:--:-- 7325k
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 18.1M  100 18.1M    0     0  1528k      0  0:00:12  0:00:12 --:--:-- 2832k
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  116M  100  116M    0     0  6350k      0  0:00:18  0:00:18 --:--:-- 7342k
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 61.1M  100 61.1M    0     0  2987k      0  0:00:20  0:00:20 --:--:-- 4356k
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 25.2M  100 25.2M    0     0  3792k      0  0:00:06  0:00:06 --:--:-- 5348k
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  7110  100  7110    0     0   3444      0  0:00:02  0:00:02 --:--:--  3448
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:--  0:00:01 --:--:--     0
curl: (78) RETR response: 550
*** /etc/my_init.d/40_initialise.sh failed with status 78

*** Killing all processes...

 

Link to comment


 

At the moment the latest dump isn't fully uploaded to the MusicBrainz FTP site.

 

To retry after a partial pull you have to delete /data/dbase first.


Once this is done and all the DBs are complete, I will try moving the files back to the cache drive, bringing up the docker, and seeing if there are issues.

If it works for you, let me know and I'll try the same, but back to the drive mounted outside my array using Unassigned Devices.

 

It did indeed appear to work. Now I'm not sure if it's going to update as it should, but I'm not getting errors and I am able to load the webgui after the change.

This topic is now closed to further replies.