[Support] Crocs - Tube Archivist



On 5/14/2023 at 7:27 PM, mathomas3 said:

I was able to apply the update by forcing it... seems like this issue isn't caused by the Docker container... hoping that others will read this and follow suit... I'm sure there are others

There was a recent bug in Unraid's Docker implementation that made containers regularly report "unavailable" and fail to update automatically, while still updating fine manually. As far as I know it's been resolved in the latest version, but earlier versions needed a patch plugin from the community store to prevent this.

 

Regardless, once the issue is corrected you still need to manually update all of the affected containers before automatic updates can work again.

  • 4 weeks later...

So I set up the containers according to: https://docs.tubearchivist.com/installation/unraid/

The Redis and ES containers are running fine, no errors in the logs.

 

But every time I try to start the TA container, it just stops itself immediately. I also checked the permissions of the folders:
 

[screenshot: appdata folder permissions]

 

The TA config

[screenshot: TA container configuration]

 

I checked the ES password multiple times and it's definitely correct. Then I changed the port to 8001, hoping there was an issue with another service running on 8000 that prevented the container from starting. It's hard to identify the problem because I can't inspect the logs.
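A stopped container's last output is usually still retrievable from the Unraid terminal, which helps in exactly this situation; a quick sketch, assuming the container keeps the template name TubeArchivist:

# print the final output of the (now stopped) container
docker logs --tail 50 TubeArchivist

# or run it attached, so the exit reason prints straight to the terminal
docker start -ai TubeArchivist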

 

 

 

 

Edited by andreey

TubeArchivist is working great so far, except the TubeArchivist container does not autostart. TubeArchivist-RedisJSON and TubeArchivist-ES autostart at boot time, but TubeArchivist does not. When starting the container manually, everything works great.

I've already tried setting the wait time to 30 seconds, without any luck.

54 minutes ago, burgsth said:

TubeArchivist is working great so far, except the TubeArchivist container does not autostart. TubeArchivist-RedisJSON and TubeArchivist-ES autostart at boot time, but TubeArchivist does not. When starting the container manually, everything works great.

I've already tried setting the wait time to 30 seconds, without any luck.

 

I see the same thing after my weekly CA appdata backup. 

I've been getting around this by simply using a user script that runs daily at 7:30 AM (30 7 * * *):

 

#!/bin/bash
# start TubeArchivist again after the backup leaves it stopped
docker start TubeArchivist
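
A slightly sturdier variant of the same idea - a sketch, assuming ES is published on localhost:9200 - waits until Elasticsearch answers before starting TA, since TA also exits when it can't reach its dependencies:

#!/bin/bash
# wait for Elasticsearch to answer HTTP (any status code) before starting TA
until curl -s -o /dev/null localhost:9200; do
    sleep 5
done
docker start TubeArchivist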

 

  • 1 month later...

Getting this error now on RedisJSON 

 

8:M 25 Aug 2023 13:50:52.551 * <redisgears_2> Failed loading RedisAI API.
8:M 25 Aug 2023 13:50:52.551 * <redisgears_2> RedisGears v2.0.11, sha='0aa55951836750ceabd9733decb200f8a5e7bac3', build_type='release', built_for='Linux-ubuntu22.04.x86_64'.
8:M 25 Aug 2023 13:50:52.558 * <redisgears_2> Registered backend: js.
8:M 25 Aug 2023 13:50:52.559 * Module 'redisgears_2' loaded from /opt/redis-stack/lib/redisgears.so
8:M 25 Aug 2023 13:50:52.559 * Server initialized
8:M 25 Aug 2023 13:50:52.560 * <search> Loading event starts
8:M 25 Aug 2023 13:50:52.560 * <redisgears_2> Got a loading start event, clear the entire functions data.
8:M 25 Aug 2023 13:50:52.560 * Loading RDB produced by version 6.2.13
8:M 25 Aug 2023 13:50:52.560 * RDB age 794829 seconds
8:M 25 Aug 2023 13:50:52.560 * RDB memory usage when created 1.10 Mb
8:M 25 Aug 2023 13:50:52.560 # The RDB file contains AUX module data I can't load: no matching module 'graphdata'

On 8/25/2023 at 9:51 AM, kri kri said:

Getting this error now on RedisJSON 

 

[...]
8:M 25 Aug 2023 13:50:52.560 # The RDB file contains AUX module data I can't load: no matching module 'graphdata'

 

Me too. And then TA shuts down after a series of connection attempts.

 

I'm really not a fan of needing multiple containers for a single service to function.

On 8/25/2023 at 4:51 PM, kri kri said:

Getting this error now on RedisJSON 

 

[...]
8:M 25 Aug 2023 13:50:52.560 # The RDB file contains AUX module data I can't load: no matching module 'graphdata'

 

On 8/28/2023 at 9:51 PM, aglyons said:

 

Me too. And then TA shuts down after a series of connection attempts.

 

I'm really not a fan of needing multiple containers for a single service to function.

 

Change the TubeArchivist-RedisJSON container to use

redis/redis-stack-server:6.2.6-v9

 

Annoyingly, this is not the first time an update to Redis has broken TubeArchivist's setup.
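
For anyone unsure where to apply this: in Unraid it's the Repository field of the TubeArchivist-RedisJSON container; the CLI equivalent is simply pulling the pinned tag, roughly:

# pin redis-stack to a known-good tag instead of tracking :latest
docker pull redis/redis-stack-server:6.2.6-v9
# then recreate the container from this tag so a redis update can't break TA again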

On 8/28/2023 at 1:51 PM, aglyons said:

 

Me too. And then TA shuts down after a series of connection attempts.

 

I'm really not a fan of needing multiple containers for a single service to function.

Same, I think I will go back to YoutubeDL-Material, as that one just works. Maybe they can bundle everything into one Docker image in the future.


I updated TA today and now it stops after about 15-20 seconds of running.

 

The last line of the logs:

{"log":"ValueError: failed to add item to index\n","stream":"stderr","time":"2023-09-03T17:02:40.557601173Z"}

 

[attached: ta-logs.log, ta-log2.log]

 

I thought it might have been a permission issue, but I ran "chmod -R 777 /mnt/user/appdata/TubeArchivist/es" and it didn't help.

[attached: container json.log files]
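
Since "failed to add item to index" means a write to Elasticsearch failed, checking ES cluster health is a sensible next step; a sketch, assuming ES is published on port 9200 with the elastic user:

# a "red" status or unassigned shards here points at ES itself rather than TA
curl -s -u elastic:$ELASTIC_PASSWORD "localhost:9200/_cluster/health?pretty"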

Edited by chuckthetekkie
  • 2 months later...

Has anyone successfully managed to get the traffic from this container routed through another container (VPN)? I've attempted to use SpaceInvader One's tutorial from 3 years ago to direct all traffic through Binhex's rtorrentvpn container, but wasn't able to get it to work.

 

I was able to bash into the console and use "curl ifconfig.io" to confirm the IP address was updated to use the VPN endpoint, but I couldn't then access the web UI at all.

 

It didn't matter if I pointed all TA containers at the proxy, or just the TA server while leaving Redis and ES on bridge.

 

Any help very gratefully received.

 

  • 1 month later...

At some point between my first discovery and my installation of the TubeArchivist docker, custom Redis and ES container requirements were added.

I have installed them and fixed the built-in bug for the TubeArchivist-ES container (some sort of Java or permissions error; I had to chmod 777 the

/mnt/user/appdata/TubeArchivist/es

directory for some reason in order to get it to work).

 

However, because I've had to create a new Elasticsearch database, all of the videos I had previously downloaded are no longer detected by TubeArchivist, and TA thinks I have no videos.

When I copy the old ES database into the folder for the new ES container and use (what I believe to be) the old password, TA crashes. It seems the old ES database is missing the path.repo variable (I'm guessing), and I don't know if I can add it to the old database. TA seems to just be refusing the original database now, even if I spin up the old ES container that I used a year ago (including if I add the same path.repo variable from the TA-ES container).

 

Isn't there a way to just... scan the files on disk and add the already-archived data to the new database? Or migrate the data from the old ES to the new TubeArchivist-ES?

 

Edit: after looking through TA's settings again, I found the settings I was looking for.

Rescanning the filesystem to add orphaned files failed because at some point Redis had created a dump.rdb file in the top level of TA's /youtube. I removed it, and maybe the rescan would have worked then, but I also found that TA thankfully (?) kept a backup of my previous (original) ES database.

Apparently, importing the old ES database backup worked, and my videos all seem to be back now, but I get the feeling that the TA-ES and main TA containers were very unhappy with the import.

I sure hope importing the old "clearly inferior" database doesn't come back to bite me...
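
For anyone hitting the same stray-file problem, the dump.rdb can be found from the terminal before rescanning; a sketch, where the host path mapped to TA's /youtube is an assumption:

# look for the stray Redis dump at the top level of the media mount
find /mnt/user/media/youtube -maxdepth 1 -name dump.rdb
# delete it if found, then retry "Rescan filesystem"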

Edited by Porkey

I really do like this app and am very thankful for the team that put it together. 

 

That being said, I feel this image should be transitioned into a single container with ES and Redis included. Having those dependencies as external containers is confusing to new adopters and has clearly introduced issues, judging by the number of threads.

 

I don't have any idea HOW to create an image with all the dependencies included... yet.

 

But I'm gonna start looking into how that's done. If someone else beats me to it, that'd be great!

  • 3 weeks later...
On 11/24/2023 at 7:19 AM, DrBazUK said:

Has anyone successfully managed to get the traffic from this container routed through another container (VPN)? I've attempted to use SpaceInvader One's tutorial from 3 years ago to direct all traffic through Binhex's rtorrentvpn container, but wasn't able to get it to work.

 

I was able to bash into the console and use "curl ifconfig.io" to confirm the IP address was updated to use the VPN endpoint, but I couldn't then access the web UI at all.

 

It didn't matter if I pointed all TA containers at the proxy, or just the TA server while leaving Redis and ES on bridge.

 

Any help very gratefully received.

 

Going through the same hoops here. I got all instances on the VPN, but using localhost between TA and its dependencies didn't work. When I used the normal IP it did... before it suddenly stopped after initializing Redis and ES. The log doesn't stay up or spit out an error; it just stops immediately without saying what happened.

 

Edit:

Here's what I got from running "docker start -ai TubeArchivist":


thunder lock: disabled (you can enable it with --thunder-lock)
probably another instance of uWSGI is running on the same address (:8080).
bind(): Address already in use [core/socket.c line 769]
VACUUM: pidfile removed.

 

And with that I set the following variable:

TA_UWSGI_PORT: 8085 (instead of 8080)

And I can access TubeArchivist from the VPN container.
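
For anyone retracing this: routing TA through a VPN container comes down to sharing that container's network namespace and moving uWSGI off the conflicting port. A minimal CLI sketch - the VPN container name, example IP and environment values are assumptions; in Unraid you would set --net=container:... under Extra Parameters and add a TA_UWSGI_PORT variable instead:

# share the VPN container's network stack; all listeners now share one port space
docker run -d --name TubeArchivist \
  --net=container:binhex-rtorrentvpn \
  -e TA_UWSGI_PORT=8085 \
  -e ES_URL=http://192.168.0.10:9200 \
  -e REDIS_HOST=192.168.0.10 \
  bbilly1/tubearchivist
# plus the rest of the usual TA variables and volume mounts from the template;
# as noted above, dependencies are reached via the server IP, not localhost,
# and the web UI must go through a port published on the VPN container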

Edited by Fefe
Ran in console
  • 2 weeks later...

Need some help un-hosing my TA install.

I have a ZFS pool and a regular pool. Everything broke when I tried to migrate Docker to my regular pool. All containers work except TA: TA Redis and ES start, but TA dies immediately after starting. I used CrapGPT to try to help me diagnose it but got nowhere - the logs from TA seem to point to issues connecting to ES:

Output of "docker logs TubeArchivist"


                         ....  .....
                  ...'',;:cc,. .;::;;,'...
               ..,;:cccllclc,  .:ccllllcc;,..
            ..,:cllcc:;,'.',.  ....'',;ccllc:,..
          ..;cllc:,'..                ...,:cccc:'.
         .;cccc;..                        ..,:ccc:'.
       .ckkkOkxollllllllllllc.      .,:::;.  .,cclc;
      .:0MMMMMMMMMMMMMMMMMMMX:     .cNMMMWx.   .;clc:
     .;lOXK0000KNMMMMX00000KO;     ;KMMMMMNl.   .;ccl:,.
     .;:c:'.....kMMMNo........    'OMMMWMMMK:    '::;;'.
   .......     .xMMMNl           .dWMMXdOMMMO'   ........
   .:cc:;.     .xMMMNc          .lNMMNo.:XMMWx.    .:cl:.
   .:llc,.     .:xxxd,          ;KMMMk. .oWMMNl.   .:llc'
   .cll:.     .;:;;:::,.       'OMMMK:';''kWMMK:   .;llc,
   .cll:.     .,;;;;;;,.     .,xWMMNl.:l:.;KMMMO'  .;llc'
   .:llc.      .cOOOk;      .lKNMMWx..:l:..lNMMWx. .:llc'
   .;lcc,.     .xMMMNc      :KMMMM0, .:lc. .xWMMNl.'ccl:.
    .cllc.     .xMMMNc     'OMMMMXc...:lc...,0MMMKl:lcc,.
    .,ccl:.    .xMMMNc    .xWMMMWo.,;;:lc;;;.cXMMMXdcc;.
     .,clc:.   .xMMMNc   .lNMMMWk. .':clc:,. .dWMMW0o;.
      .,clcc,. .ckkkx;   .okkkOx,    .';,.    'kKKK0l.
       .':lcc:'.....      .  ..            ..,;cllc,.
         .,cclc,....                     ....;clc;..
          ..,:,..,c:'..              ...';:,..,:,.
            ....:lcccc:;,'''.....'',;;:clllc,....
               .'',;:cllllllccccclllllcc:,'..
                   ...'',,;;;;;;;;;,''...
                            .....


#######################
#  Environment Setup  #
#######################

[1] checking expected env vars
    ✓ all expected env vars are set
[2] check ES user overwrite
    ✓ ES user is set to elastic
[3] check TA_PORT overwrite
    TA_PORT is not set
[4] check TA_UWSGI_PORT overwrite
    TA_UWSGI_PORT is not set
[5] check ENABLE_CAST overwrite
    ENABLE_CAST is not set
[6] create superuser
    superuser already created


#######################
#  Connection check   #
#######################

[1] connect to Redis
    ✓ Redis connection verified
[2] set Redis config
    ✓ Redis config set
[3] connect to Elastic Search
    ... waiting for ES [0/24]
{"cluster_name":"docker-cluster","status":"red","timed_out":true,"number_of_nodes":1,"number_of_data_nodes":1,"active_primary_shards":11,"active_shards":11,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":1,"delayed_unassigned_shards":0,"number_of_pending_tasks":0,"number_of_in_flight_fetch":0,"task_max_waiting_in_queue_millis":0,"active_shards_percent_as_number":91.66666666666666}
    ✓ ES connection established
[4] Elastic Search version check
    ✓ ES version check passed
[5] check ES path.repo env var
    ✓ path.repo env var is set


#######################
#  Application Start  #
#######################

[1] set new config.json values
    ✓ new config values set
[2] create expected cache folders
    ✓ expected folders created
[3] clear leftover keys in redis
    no keys found
[4] clear task leftovers
[5] clear leftover files from dl cache
clear download cache
    no files found
[6] check for first run after update
    no new update found
[MIGRATION] validate index mappings
ta_config index is created and up to date...
ta_channel index is created and up to date...
ta_video index is created and up to date...
ta_download index is created and up to date...
ta_playlist index is created and up to date...
ta_subtitle index is created and up to date...
ta_comment index is created and up to date...
[MIGRATION] setup snapshots
snapshot: run setup
snapshot: repo ta_snapshot already created
snapshot: policy is set.
snapshot: is outdated, create new now
snapshot: last snapshot is up-to-date
snapshot: executing now: {'snapshot_name': 'ta_daily_-vdfxfv21sb2fyrqlwhnh7a'}
[MIGRATION] move user configuration to ES
    ✓ Settings for user '1' migrated to ES
    ✓ Settings for all users migrated to ES


########################
# Filesystem Migration #
########################

{"error":{"root_cause":[],"type":"search_phase_execution_exception","reason":"","phase":"indices:data/read/open_point_in_time","grouped":true,"failed_shards":[],"caused_by":{"type":"search_phase_execution_exception","reason":"Search rejected due to missing shards [[ta_video][0]]. Consider using `allow_partial_search_results` setting to bypass this error.","phase":"indices:data/read/open_point_in_time","grouped":true,"failed_shards":[]}},"status":503}
Traceback (most recent call last):
  File "/app/manage.py", line 23, in <module>
    main()
  File "/app/manage.py", line 19, in main
    execute_from_command_line(sys.argv)
  File "/root/.local/lib/python3.11/site-packages/django/core/management/__init__.py", line 442, in execute_from_command_line
    utility.execute()
  File "/root/.local/lib/python3.11/site-packages/django/core/management/__init__.py", line 436, in execute
    self.fetch_command(subcommand).run_from_argv(self.argv)
  File "/root/.local/lib/python3.11/site-packages/django/core/management/base.py", line 412, in run_from_argv
    self.execute(*args, **cmd_options)
  File "/root/.local/lib/python3.11/site-packages/django/core/management/base.py", line 458, in execute
    output = self.handle(*args, **options)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/config/management/commands/ta_migpath.py", line 31, in handle
    to_migrate = handler.get_to_migrate()
                 ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/config/management/commands/ta_migpath.py", line 80, in get_to_migrate
    response = IndexPaginate("ta_video", data).get_results()
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/home/src/es/connect.py", line 163, in get_results
    self.get_pit()
  File "/app/home/src/es/connect.py", line 173, in get_pit
    self.pit_id = response["id"]
                  ~~~~~~~~^^^^^^
KeyError: 'id'
Operations to perform:
  Apply all migrations: admin, auth, authtoken, contenttypes, home, sessions
Running migrations:
  No migrations to apply.
 

 

Then a whole bunch of lines about deleting stuff. The container then dies.

 

Output of "docker logs TubeArchivist-ES"

 

{"@timestamp":"2024-01-28T22:18:24.383Z", "log.level": "INFO", "message":"snapshot lifecycle policy [ta_daily] issuing create snapshot [ta_daily_-8yjqavu4rtsg6nffcp0fka]", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[23abc1a3793f][generic][T#5]","log.logger":"org.elasticsearch.xpack.slm.SnapshotLifecycleTask","elasticsearch.cluster.uuid":"bE_mcvdqTey7H6NTdkTYyQ","elasticsearch.node.id":"o_n-k-bpTNW1_eO6Mnoqpw","elasticsearch.node.name":"23abc1a3793f","elasticsearch.cluster.name":"docker-cluster"}
{"@timestamp":"2024-01-28T22:18:24.396Z", "log.level": "WARN", "message":"[ta_snapshot][ta_daily_-8yjqavu4rtsg6nffcp0fka] failed to create snapshot", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[23abc1a3793f][masterService#updateTask][T#6]","log.logger":"org.elasticsearch.snapshots.SnapshotsService","elasticsearch.cluster.uuid":"bE_mcvdqTey7H6NTdkTYyQ","elasticsearch.node.id":"o_n-k-bpTNW1_eO6Mnoqpw","elasticsearch.node.name":"23abc1a3793f","elasticsearch.cluster.name":"docker-cluster","error.type":"org.elasticsearch.snapshots.SnapshotException","error.message":"[ta_snapshot:ta_daily_-8yjqavu4rtsg6nffcp0fka/jjnshUCbScWkRBaHi7pOeg] Indices don't have primary shards [ta_video]","error.stack_trace":"org.elasticsearch.snapshots.SnapshotException: [ta_snapshot:ta_daily_-8yjqavu4rtsg6nffcp0fka/jjnshUCbScWkRBaHi7pOeg] Indices don't have primary shards [ta_video]\n\tat [email protected]/org.elasticsearch.snapshots.SnapshotsService$SnapshotTaskExecutor.createSnapshot(SnapshotsService.java:3897)\n\tat [email protected]/org.elasticsearch.snapshots.SnapshotsService$SnapshotTaskExecutor.execute(SnapshotsService.java:3729)\n\tat [email protected]/org.elasticsearch.cluster.service.MasterService.innerExecuteTasks(MasterService.java:1039)\n\tat [email protected]/org.elasticsearch.cluster.service.MasterService.executeTasks(MasterService.java:1004)\n\tat [email protected]/org.elasticsearch.cluster.service.MasterService.executeAndPublishBatch(MasterService.java:232)\n\tat [email protected]/org.elasticsearch.cluster.service.MasterService$BatchingTaskQueue$Processor.lambda$run$2(MasterService.java:1626)\n\tat [email protected]/org.elasticsearch.action.ActionListener.run(ActionListener.java:386)\n\tat [email protected]/org.elasticsearch.cluster.service.MasterService$BatchingTaskQueue$Processor.run(MasterService.java:1623)\n\tat [email protected]/org.elasticsearch.cluster.service.MasterService$5.lambda$doRun$0(MasterService.java:1237)\n\tat [email protected]/org.elasticsearch.action.ActionListener.run(ActionListener.java:386)\n\tat [email protected]/org.elasticsearch.cluster.service.MasterService$5.doRun(MasterService.java:1216)\n\tat [email protected]/org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:983)\n\tat [email protected]/org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:26)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)\n\tat java.base/java.lang.Thread.run(Thread.java:1583)\n"}
{"@timestamp":"2024-01-28T22:18:24.398Z", "log.level":"ERROR", "message":"failed to create snapshot for snapshot lifecycle policy [ta_daily]: org.elasticsearch.snapshots.SnapshotException: [ta_snapshot:ta_daily_-8yjqavu4rtsg6nffcp0fka/jjnshUCbScWkRBaHi7pOeg] Indices don't have primary shards [ta_video]", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[23abc1a3793f][masterService#updateTask][T#6]","log.logger":"org.elasticsearch.xpack.slm.SnapshotLifecycleTask","elasticsearch.cluster.uuid":"bE_mcvdqTey7H6NTdkTYyQ","elasticsearch.node.id":"o_n-k-bpTNW1_eO6Mnoqpw","elasticsearch.node.name":"23abc1a3793f","elasticsearch.cluster.name":"docker-cluster"}
{"@timestamp":"2024-01-28T22:18:26.040Z", "log.level": "WARN", "message":"path: /ta_video/_pit, params: {index=ta_video, keep_alive=10m}, status: 503", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[23abc1a3793f][transport_worker][T#19]","log.logger":"rest.suppressed","elasticsearch.cluster.uuid":"bE_mcvdqTey7H6NTdkTYyQ","elasticsearch.node.id":"o_n-k-bpTNW1_eO6Mnoqpw","elasticsearch.node.name":"23abc1a3793f","elasticsearch.cluster.name":"docker-cluster","error.type":"org.elasticsearch.action.search.SearchPhaseExecutionException","error.message":"","error.stack_trace":"Failed to execute phase [indices:data/read/open_point_in_time], \n\tat [email protected]/org.elasticsearch.action.search.AbstractSearchAsyncAction.onPhaseFailure(AbstractSearchAsyncAction.java:709)\n\tat [email protected]/org.elasticsearch.action.search.AbstractSearchAsyncAction.executePhase(AbstractSearchAsyncAction.java:456)\n\tat [email protected]/org.elasticsearch.action.search.AbstractSearchAsyncAction.start(AbstractSearchAsyncAction.java:220)\n\tat [email protected]/org.elasticsearch.action.search.TransportSearchAction.executeSearch(TransportSearchAction.java:1144)\n\tat [email protected]/org.elasticsearch.action.search.TransportSearchAction.executeLocalSearch(TransportSearchAction.java:913)\n\tat [email protected]/org.elasticsearch.action.search.TransportSearchAction.lambda$executeRequest$10(TransportSearchAction.java:337)\n\tat [email protected]/org.elasticsearch.action.ActionListenerImplementations$ResponseWrappingActionListener.onResponse(ActionListenerImplementations.java:236)\n\tat [email protected]/org.elasticsearch.index.query.Rewriteable.rewriteAndFetch(Rewriteable.java:109)\n\tat [email protected]/org.elasticsearch.index.query.Rewriteable.rewriteAndFetch(Rewriteable.java:77)\n\tat [email protected]/org.elasticsearch.action.search.TransportSearchAction.executeRequest(TransportSearchAction.java:449)\n\tat [email protected]/org.elasticsearch.action.search.TransportOpenPointInTimeAction.doExecute(TransportOpenPointInTimeAction.java:105)\n\tat [email protected]/org.elasticsearch.action.search.TransportOpenPointInTimeAction.doExecute(TransportOpenPointInTimeAction.java:53)\n\tat [email protected]/org.elasticsearch.action.support.TransportAction$RequestFilterChain.proceed(TransportAction.java:87)\n\tat [email protected]/org.elasticsearch.action.support.ActionFilter$Simple.apply(ActionFilter.java:53)\n\tat [email protected]/org.elasticsearch.action.support.TransportAction$RequestFilterChain.proceed(TransportAction.java:85)\n\tat [email protected]/org.elasticsearch.xpack.security.action.filter.SecurityActionFilter.lambda$applyInternal$3(SecurityActionFilter.java:163)\n\tat [email protected]/org.elasticsearch.action.ActionListenerImplementations$DelegatingFailureActionListener.onResponse(ActionListenerImplementations.java:212)\n\tat [email protected]/org.elasticsearch.xpack.security.authz.AuthorizationService$1.onResponse(AuthorizationService.java:623)\n\tat [email protected]/org.elasticsearch.xpack.security.authz.AuthorizationService$1.onResponse(AuthorizationService.java:617)\n\tat [email protected]/org.elasticsearch.xpack.security.authz.interceptor.SearchRequestCacheDisablingInterceptor.intercept(SearchRequestCacheDisablingInterceptor.java:53)\n\tat [email protected]/org.elasticsearch.xpack.security.authz.AuthorizationService$1.onResponse(AuthorizationService.java:621)\n\tat [email 
protected]/org.elasticsearch.xpack.security.authz.AuthorizationService$1.onResponse(AuthorizationService.java:617)\n\tat [email protected]/org.elasticsearch.xpack.security.authz.interceptor.FieldAndDocumentLevelSecurityRequestInterceptor.intercept(FieldAndDocumentLevelSecurityRequestInterceptor.java:79)\n\tat [email protected]/org.elasticsearch.xpack.security.authz.interceptor.ShardSearchRequestInterceptor.intercept(ShardSearchRequestInterceptor.java:24)\n\tat [email protected]/org.elasticsearch.xpack.security.authz.AuthorizationService$1.onResponse(AuthorizationService.java:621)\n\tat [email protected]/org.elasticsearch.xpack.security.authz.AuthorizationService$1.onResponse(AuthorizationService.java:617)\n\tat [email protected]/org.elasticsearch.xpack.security.authz.interceptor.FieldAndDocumentLevelSecurityRequestInterceptor.intercept(FieldAndDocumentLevelSecurityRequestInterceptor.java:79)\n\tat [email protected]/org.elasticsearch.xpack.security.authz.interceptor.UpdateRequestInterceptor.intercept(UpdateRequestInterceptor.java:27)\n\tat [email protected]/org.elasticsearch.xpack.security.authz.AuthorizationService$1.onResponse(AuthorizationService.java:621)\n\tat [email protected]/org.elasticsearch.xpack.security.authz.AuthorizationService$1.onResponse(AuthorizationService.java:617)\n\tat [email protected]/org.elasticsearch.xpack.security.authz.interceptor.ResizeRequestInterceptor.intercept(ResizeRequestInterceptor.java:98)\n\tat [email protected]/org.elasticsearch.xpack.security.authz.AuthorizationService$1.onResponse(AuthorizationService.java:621)\n\tat [email protected]/org.elasticsearch.xpack.security.authz.AuthorizationService$1.onResponse(AuthorizationService.java:617)\n\tat [email protected]/org.elasticsearch.xpack.security.authz.interceptor.BulkShardRequestInterceptor.intercept(BulkShardRequestInterceptor.java:85)\n\tat [email protected]/org.elasticsearch.xpack.security.authz.AuthorizationService$1.onResponse(AuthorizationService.java:621)\n\tat [email protected]/org.elasticsearch.xpack.security.authz.AuthorizationService$1.onResponse(AuthorizationService.java:617)\n\tat [email protected]/org.elasticsearch.xpack.security.authz.interceptor.FieldAndDocumentLevelSecurityRequestInterceptor.intercept(FieldAndDocumentLevelSecurityRequestInterceptor.java:79)\n\tat [email protected]/org.elasticsearch.xpack.security.authz.interceptor.SearchRequestInterceptor.intercept(SearchRequestInterceptor.java:21)\n\tat [email protected]/org.elasticsearch.xpack.security.authz.AuthorizationService$1.onResponse(AuthorizationService.java:621)\n\tat [email protected]/org.elasticsearch.xpack.security.authz.AuthorizationService$1.onResponse(AuthorizationService.java:617)\n\tat [email protected]/org.elasticsearch.xpack.security.authz.interceptor.IndicesAliasesRequestInterceptor.intercept(IndicesAliasesRequestInterceptor.java:124)\n\tat [email protected]/org.elasticsearch.xpack.security.authz.AuthorizationService$1.onResponse(AuthorizationService.java:621)\n\tat [email protected]/org.elasticsearch.xpack.security.authz.AuthorizationService$1.onResponse(AuthorizationService.java:617)\n\tat [email protected]/org.elasticsearch.xpack.security.authz.interceptor.DlsFlsLicenseRequestInterceptor.intercept(DlsFlsLicenseRequestInterceptor.java:106)\n\tat [email protected]/org.elasticsearch.xpack.security.authz.AuthorizationService.runRequestInterceptors(AuthorizationService.java:617)\n\tat [email 
protected]/org.elasticsearch.xpack.security.authz.AuthorizationService.handleIndexActionAuthorizationResult(AuthorizationService.java:602)\n\tat [email protected]/org.elasticsearch.xpack.security.authz.AuthorizationService.lambda$authorizeAction$13(AuthorizationService.java:505)\n\tat [email protected]/org.elasticsearch.xpack.security.authz.AuthorizationService$AuthorizationResultListener.onResponse(AuthorizationService.java:1028)\n\tat [email protected]/org.elasticsearch.xpack.security.authz.AuthorizationService$AuthorizationResultListener.onResponse(AuthorizationService.java:994)\n\tat [email protected]/org.elasticsearch.action.support.ContextPreservingActionListener.onResponse(ContextPreservingActionListener.java:32)\n\tat [email protected]/org.elasticsearch.xpack.security.authz.RBACEngine.lambda$authorizeIndexAction$3(RBACEngine.java:401)\n\tat [email protected]/org.elasticsearch.action.ActionListenerImplementations$ResponseWrappingActionListener.onResponse(ActionListenerImplementations.java:236)\n\tat [email protected]/org.elasticsearch.action.support.SubscribableListener$SuccessResult.complete(SubscribableListener.java:310)\n\tat [email protected]/org.elasticsearch.action.support.SubscribableListener.tryComplete(SubscribableListener.java:230)\n\tat [email protected]/org.elasticsearch.action.support.SubscribableListener.addListener(SubscribableListener.java:133)\n\tat [email protected]/org.elasticsearch.action.support.SubscribableListener.addListener(SubscribableListener.java:108)\n\tat [email protected]/org.elasticsearch.xpack.security.authz.AuthorizationService$CachingAsyncSupplier.getAsync(AuthorizationService.java:1074)\n\tat [email protected]/org.elasticsearch.xpack.security.authz.RBACEngine.authorizeIndexAction(RBACEngine.java:381)\n\tat [email protected]/org.elasticsearch.xpack.security.authz.AuthorizationService.authorizeAction(AuthorizationService.java:498)\n\tat [email protected]/org.elasticsearch.xpack.security.authz.AuthorizationService.maybeAuthorizeRunAs(AuthorizationService.java:435)\n\tat [email protected]/org.elasticsearch.xpack.security.authz.AuthorizationService.lambda$authorize$3(AuthorizationService.java:322)\n\tat [email protected]/org.elasticsearch.action.ActionListener$2.onResponse(ActionListener.java:178)\n\tat [email protected]/org.elasticsearch.action.support.ContextPreservingActionListener.onResponse(ContextPreservingActionListener.java:32)\n\tat [email protected]/org.elasticsearch.xpack.security.authz.RBACEngine.lambda$resolveAuthorizationInfo$0(RBACEngine.java:151)\n\tat [email protected]/org.elasticsearch.action.ActionListenerImplementations$ResponseWrappingActionListener.onResponse(ActionListenerImplementations.java:236)\n\tat [email protected]/org.elasticsearch.xpack.security.authz.store.CompositeRolesStore.lambda$getRoles$4(CompositeRolesStore.java:194)\n\tat [email protected]/org.elasticsearch.action.ActionListenerImplementations$ResponseWrappingActionListener.onResponse(ActionListenerImplementations.java:236)\n\tat [email protected]/org.elasticsearch.xpack.security.authz.store.CompositeRolesStore.lambda$getRole$5(CompositeRolesStore.java:212)\n\tat [email protected]/org.elasticsearch.action.ActionListenerImplementations$ResponseWrappingActionListener.onResponse(ActionListenerImplementations.java:236)\n\tat [email protected]/org.elasticsearch.xpack.core.security.authz.store.RoleReferenceIntersection.lambda$buildRole$0(RoleReferenceIntersection.java:49)\n\tat [email 
protected]/org.elasticsearch.action.ActionListenerImplementations$ResponseWrappingActionListener.onResponse(ActionListenerImplementations.java:236)\n\tat [email protected]/org.elasticsearch.action.support.GroupedActionListener.onResponse(GroupedActionListener.java:56)\n\tat [email protected]/org.elasticsearch.xpack.security.authz.store.CompositeRolesStore.buildRoleFromRoleReference(CompositeRolesStore.java:244)\n\tat [email protected]/org.elasticsearch.xpack.core.security.authz.store.RoleReferenceIntersection.lambda$buildRole$1(RoleReferenceIntersection.java:53)\n\tat java.base/java.lang.Iterable.forEach(Iterable.java:75)\n\tat [email protected]/org.elasticsearch.xpack.core.security.authz.store.RoleReferenceIntersection.buildRole(RoleReferenceIntersection.java:53)\n\tat [email protected]/org.elasticsearch.xpack.security.authz.store.CompositeRolesStore.getRole(CompositeRolesStore.java:210)\n\tat [email protected]/org.elasticsearch.xpack.security.authz.store.CompositeRolesStore.getRoles(CompositeRolesStore.java:187)\n\tat [email protected]/org.elasticsearch.xpack.security.authz.RBACEngine.resolveAuthorizationInfo(RBACEngine.java:147)\n\tat [email protected]/org.elasticsearch.xpack.security.authz.AuthorizationService.authorize(AuthorizationService.java:338)\n\tat [email protected]/org.elasticsearch.xpack.security.action.filter.SecurityActionFilter.lambda$applyInternal$4(SecurityActionFilter.java:159)\n\tat [email protected]/org.elasticsearch.action.ActionListenerImplementations$ResponseWrappingActionListener.onResponse(ActionListenerImplementations.java:236)\n\tat [email protected]/org.elasticsearch.action.ActionListenerImplementations$MappedActionListener.onResponse(ActionListenerImplementations.java:95)\n\tat [email protected]/org.elasticsearch.xpack.security.authc.AuthenticatorChain.authenticate(AuthenticatorChain.java:93)\n\tat [email protected]/org.elasticsearch.xpack.security.authc.AuthenticationService.authenticate(AuthenticationService.java:262)\n\tat [email protected]/org.elasticsearch.xpack.security.authc.AuthenticationService.authenticate(AuthenticationService.java:171)\n\tat [email protected]/org.elasticsearch.xpack.security.action.filter.SecurityActionFilter.applyInternal(SecurityActionFilter.java:155)\n\tat [email protected]/org.elasticsearch.xpack.security.action.filter.SecurityActionFilter.apply(SecurityActionFilter.java:114)\n\tat [email protected]/org.elasticsearch.action.support.TransportAction$RequestFilterChain.proceed(TransportAction.java:85)\n\tat [email protected]/org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:62)\n\tat [email protected]/org.elasticsearch.tasks.TaskManager.registerAndExecute(TaskManager.java:196)\n\tat [email protected]/org.elasticsearch.client.internal.node.NodeClient.executeLocally(NodeClient.java:108)\n\tat [email protected]/org.elasticsearch.client.internal.node.NodeClient.doExecute(NodeClient.java:86)\n\tat [email protected]/org.elasticsearch.client.internal.support.AbstractClient.execute(AbstractClient.java:381)\n\tat [email protected]/org.elasticsearch.action.search.RestOpenPointInTimeAction.lambda$prepareRequest$1(RestOpenPointInTimeAction.java:64)\n\tat [email protected]/org.elasticsearch.rest.BaseRestHandler.handleRequest(BaseRestHandler.java:103)\n\tat [email protected]/org.elasticsearch.xpack.security.rest.SecurityRestFilter.doHandleRequest(SecurityRestFilter.java:94)\n\tat [email protected]/org.elasticsearch.xpack.security.rest.SecurityRestFilter.lambda$handleRequest$0(SecurityRestFilter.java:85)\n\tat [email 
protected]/org.elasticsearch.action.ActionListener$2.onResponse(ActionListener.java:178)\n\tat [email protected]/org.elasticsearch.xpack.security.authc.support.SecondaryAuthenticator.lambda$authenticateAndAttachToContext$3(SecondaryAuthenticator.java:99)\n\tat [email protected]/org.elasticsearch.action.ActionListenerImplementations$ResponseWrappingActionListener.onResponse(ActionListenerImplementations.java:236)\n\tat [email protected]/org.elasticsearch.xpack.security.authc.support.SecondaryAuthenticator.authenticate(SecondaryAuthenticator.java:109)\n\tat [email protected]/org.elasticsearch.xpack.security.authc.support.SecondaryAuthenticator.authenticateAndAttachToContext(SecondaryAuthenticator.java:90)\n\tat [email protected]/org.elasticsearch.xpack.security.rest.SecurityRestFilter.handleRequest(SecurityRestFilter.java:79)\n\tat [email protected]/org.elasticsearch.rest.RestController.dispatchRequest(RestController.java:441)\n\tat [email protected]/org.elasticsearch.rest.RestController.tryAllHandlers(RestController.java:570)\n\tat [email protected]/org.elasticsearch.rest.RestController.dispatchRequest(RestController.java:325)\n\tat [email protected]/org.elasticsearch.http.AbstractHttpServerTransport.dispatchRequest(AbstractHttpServerTransport.java:458)\n\tat [email protected]/org.elasticsearch.http.AbstractHttpServerTransport.handleIncomingRequest(AbstractHttpServerTransport.java:554)\n\tat [email protected]/org.elasticsearch.http.AbstractHttpServerTransport.incomingRequest(AbstractHttpServerTransport.java:431)\n\tat [email protected]/org.elasticsearch.http.netty4.Netty4HttpPipeliningHandler.handlePipelinedRequest(Netty4HttpPipeliningHandler.java:128)\n\tat [email protected]/org.elasticsearch.http.netty4.Netty4HttpPipeliningHandler.channelRead(Netty4HttpPipeliningHandler.java:118)\n\tat [email protected]/io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)\n\tat [email protected]/io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)\n\tat [email protected]/io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)\n\tat [email protected]/io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)\n\tat [email protected]/io.netty.handler.codec.MessageToMessageCodec.channelRead(MessageToMessageCodec.java:111)\n\tat [email protected]/io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)\n\tat [email protected]/io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)\n\tat [email protected]/io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)\n\tat [email protected]/io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)\n\tat [email protected]/io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)\n\tat [email protected]/io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)\n\tat [email protected]/io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)\n\tat [email protected]/io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)\n\tat [email 
protected]/io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)\n\tat [email protected]/io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)\n\tat [email protected]/io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)\n\tat [email protected]/org.elasticsearch.http.netty4.Netty4HttpHeaderValidator.forwardData(Netty4HttpHeaderValidator.java:194)\n\tat [email protected]/org.elasticsearch.http.netty4.Netty4HttpHeaderValidator.forwardFullRequest(Netty4HttpHeaderValidator.java:137)\n\tat [email protected]/org.elasticsearch.http.netty4.Netty4HttpHeaderValidator.lambda$requestStart$1(Netty4HttpHeaderValidator.java:120)\n\tat [email protected]/io.netty.util.concurrent.PromiseTask.runTask(PromiseTask.java:98)\n\tat [email protected]/io.netty.util.concurrent.PromiseTask.run(PromiseTask.java:106)\n\tat [email protected]/io.netty.util.concurrent.AbstractEventExecutor.runTask(AbstractEventExecutor.java:174)\n\tat [email protected]/io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:167)\n\tat [email protected]/io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:470)\n\tat [email protected]/io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:566)\n\tat [email protected]/io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)\n\tat [email protected]/io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)\n\tat java.base/java.lang.Thread.run(Thread.java:1583)\nCaused by: Failed to execute phase [indices:data/read/open_point_in_time], Search rejected due to missing shards [[ta_video][0]]. Consider using `allow_partial_search_results` setting to bypass this error.\n\tat [email protected]/org.elasticsearch.action.search.SearchPhase.doCheckNoMissingShards(SearchPhase.java:61)\n\tat [email protected]/org.elasticsearch.action.search.AbstractSearchAsyncAction.run(AbstractSearchAsyncAction.java:230)\n\tat [email protected]/org.elasticsearch.action.search.AbstractSearchAsyncAction.executePhase(AbstractSearchAsyncAction.java:451)\n\t... 138 more\n"}
 

I know nothing about ES, but with CrapGPT I was able to determine that there appears to be an issue with a shard:

curl -X GET "http://192.168.0.136:9200/_cluster/health" -u elastic:{REDACTED}
{"cluster_name":"docker-cluster","status":"red","timed_out":false,"number_of_nodes":1,"number_of_data_nodes":1,"active_primary_shards":12,"active_shards":12,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":1,"delayed_unassigned_shards":0,"number_of_pending_tasks":0,"number_of_in_flight_fetch":0,"task_max_waiting_in_queue_millis":0,"active_shards_percent_as_number":92.3076923076923}

 

From within the ES container console:

sh-5.0$ curl -X GET "localhost:9200/_cluster/allocation/explain?pretty" -u elastic:{REDACTED}
{
  "note" : "No shard was specified in the explain API request, so this response explains a randomly chosen unassigned shard. There may be other unassigned shards in this cluster which cannot be assigned for different reasons. It may not be possible to assign this shard until one of the other shards is assigned correctly. To explain the allocation of other shards (whether assigned or unassigned) you must specify the target shard in the request to this API.",
  "index" : "ta_video",
  "shard" : 0,
  "primary" : true,
  "current_state" : "unassigned",
  "unassigned_info" : {
    "reason" : "CLUSTER_RECOVERED",
    "at" : "2024-01-28T22:01:11.229Z",
    "last_allocation_status" : "no_valid_shard_copy"
  },
  "can_allocate" : "no_valid_shard_copy",
  "allocate_explanation" : "Elasticsearch can't allocate this shard because all the copies of its data in the cluster are stale or corrupt. Elasticsearch will allocate this shard when a node containing a good copy of its data joins the cluster. If no such node is available, restore this index from a recent snapshot.",
  "node_allocation_decisions" : [
    {
      "node_id" : "o_n-k-bpTNW1_eO6Mnoqpw",
      "node_name" : "23abc1a3793f",
      "transport_address" : "172.17.0.2:9300",
      "node_attributes" : {
        "transform.config_version" : "10.0.0",
        "xpack.installed" : "true",
        "ml.config_version" : "12.0.0",
        "ml.max_jvm_size" : "536870912",
        "ml.allocated_processors_double" : "32.0",
        "ml.allocated_processors" : "32",
        "ml.machine_memory" : "67490713600"
      },
      "roles" : [
        "data",
        "data_cold",
        "data_content",
        "data_frozen",
        "data_hot",
        "data_warm",
        "ingest",
        "master",
        "ml",
        "remote_cluster_client",
        "transform"
      ],
      "node_decision" : "no",
      "store" : {
        "in_sync" : true,
        "allocation_id" : "0gZ4kEOLRDGQBRiuSAKyzQ",
        "store_exception" : {
          "type" : "corrupt_index_exception",
          "reason" : "failed engine (reason: [corrupt file (source: [start])]) (resource=preexisting_corruption)",
          "caused_by" : {
            "type" : "i_o_exception",
            "reason" : "failed engine (reason: [corrupt file (source: [start])])",
            "caused_by" : {
              "type" : "corrupt_index_exception",
              "reason" : "codec footer mismatch (file truncated?): actual footer=15729219 vs expected footer=-1071082520 (resource=MemorySegmentIndexInput(path=\"/usr/share/elasticsearch/data/indices/K6CzjUsuRl-MxHjOUUBoBw/0/index/_5de.cfs\"))"
            }
          }
        }
      }
    }
  ]
}
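
The allocate_explanation above effectively says the only way out is restoring the index from a snapshot. A sketch of doing that against TA's built-in ta_snapshot repository (the published port and the placeholder snapshot name are assumptions):

# list the snapshots TA has been taking
curl -s -u elastic:$ELASTIC_PASSWORD "localhost:9200/_cat/snapshots/ta_snapshot?v"

# close the corrupt index, then restore only ta_video from a chosen snapshot
curl -s -u elastic:$ELASTIC_PASSWORD -X POST "localhost:9200/ta_video/_close"
curl -s -u elastic:$ELASTIC_PASSWORD -X POST \
  "localhost:9200/_snapshot/ta_snapshot/<snapshot_name>/_restore" \
  -H 'Content-Type: application/json' -d '{"indices": "ta_video"}'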

 

 

Any support is greatly appreciated!

On 1/28/2024 at 5:23 PM, ProfessionalIdiot said:

Need some help un-hosing my TA install.

I have a ZFS pool and a regular pool. Everything broke when I tried to migrate Docker to my regular pool. All containers work except TA: TA Redis and ES start, but TA dies immediately after starting. I used CrapGPT to try to help me diagnose it but got nowhere - the logs from TA seem to point to issues connecting to ES:
 

.......

 

Any support is greatly appreciated!



Removed the logs so as not to spam this page with text blocks...

With the help of CrapGPT I have tried manually recreating the troubled shards, as well as rolling back to a snapshot from a date when all was happy and healthy - no luck, same issues.

Is there a way to do a clean reinstall of the ES container? I am shocked that my attempt to redeploy ES didn't fix these problems. I must not be doing a clean redeploy or something. 

42 minutes ago, ProfessionalIdiot said:



Removed the logs so as not to spam this page with text blocks...

With the help of CrapGPT I have tried manually recreating the troubled shards, as well as rolling back to a snapshot from a date when all was happy and healthy - no luck, same issues.

Is there a way to do a clean reinstall of the ES container? I am shocked that my attempt to redeploy ES didn't fix these problems. I must not be doing a clean redeploy or something. 

 

Well - I am back in but I'm sure I did it the wrong way. 

I took a backup of all the data in the Elasticsearch directory (user/appdata/Tubearchivist/es) and then blew that directory away. I then redeployed the ES container and saw permission issues with the snapshot directory. I fixed those with chown and chmod, and then the container started up and I was back in TA! However, all of my settings and data are gone, and there is seemingly no way to manually reimport everything (or my googling is failing me).
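
For reference, the permissions fix described above comes down to something like this; the official ES image runs as uid 1000, which is the assumption here:

# hand the ES data directory to the in-container elasticsearch user (uid 1000)
chown -R 1000:0 /mnt/user/appdata/TubeArchivist/es
chmod -R 775 /mnt/user/appdata/TubeArchivist/es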

I tried moving one of my backed-up snapshots into the snapshots directory on my now-working instance, but it did not recognize it.

 

So - I did something probably stupid - I ran a "Rescan Filesystem" from the app GUI. It appears to have tried but failed. Per "docker logs TubeArchivist":


[2024-01-30 17:12:04,945: INFO/MainProcess] Task rescan_filesystem[93a26c99-ffa8-472b-a00a-ad78121f9231] received
[2024-01-30 17:12:04,947: WARNING/ForkPoolWorker-32] rescan_filesystem create callback
[2024-01-30 17:12:05,018: WARNING/ForkPoolWorker-32] nothing to delete
[2024-01-30 17:12:05,024: WARNING/ForkPoolWorker-32] kXxQKY41gPA: get metadata from youtube
[2024-01-30 17:12:09,466: WARNING/ForkPoolWorker-32] UCbxZDWUN46l0QlmPuJmrUSQ: get metadata from es
[2024-01-30 17:12:09,475: WARNING/ForkPoolWorker-32] {"_index":"ta_channel","_id":"UCbxZDWUN46l0QlmPuJmrUSQ","found":false}
[2024-01-30 17:12:09,475: WARNING/ForkPoolWorker-32] UCbxZDWUN46l0QlmPuJmrUSQ: get metadata from youtube
[2024-01-30 17:12:09,890: WARNING/ForkPoolWorker-32] UCbxZDWUN46l0QlmPuJmrUSQ: download channel thumbnail
[2024-01-30 17:12:10,539: WARNING/ForkPoolWorker-32] UCbxZDWUN46l0QlmPuJmrUSQ: failed to extract thumbnail, use fallback
[2024-01-30 17:12:10,555: WARNING/ForkPoolWorker-32] UCbxZDWUN46l0QlmPuJmrUSQ: failed to extract thumbnail, use fallback
[2024-01-30 17:13:10,185: WARNING/ForkPoolWorker-32] {"error":{"root_cause":[{"type":"process_cluster_event_timeout_exception","reason":"failed to process cluster event (put-mapping [ta_channel/laxWDcnsTSiW0b1ydG9aDw]) within 30s"}],"type":"process_cluster_event_timeout_exception","reason":"failed to process cluster event (put-mapping [ta_channel/laxWDcnsTSiW0b1ydG9aDw]) within 30s"},"status":503}
[2024-01-30 17:13:10,185: WARNING/ForkPoolWorker-32] {'channel_active': True, 'channel_description': 'An archive of documentary films and events from North Korea/DPRK.\n\nYou are free to use any footage from this channel.\n\nNot affiliated or endorsed by the DPRK. Everything is here for the sake of archiving.\n', 'channel_id': 'UCbxZDWUN46l0QlmPuJmrUSQ', 'channel_last_refresh': 1706656329, 'channel_name': 'North Korea Film Archive', 'channel_subs': 671, 'channel_subscribed': False, 'channel_tags': False, 'channel_banner_url': False, 'channel_thumb_url': 'https://yt3.googleusercontent.com/ytc/AIf8zZRXhzwr26GVMlDRNJrYKrTOAde_do_6BwOq2ofm=s900-c-k-c0x00ffffff-no-rj', 'channel_tvart_url': False, 'channel_views': 0}
[2024-01-30 17:13:10,216: WARNING/ForkPoolWorker-32] 93a26c99-ffa8-472b-a00a-ad78121f9231 Failed callback
[2024-01-30 17:13:10,219: ERROR/ForkPoolWorker-32] Task rescan_filesystem[93a26c99-ffa8-472b-a00a-ad78121f9231] raised unexpected: ValueError('failed to add item to index')
Traceback (most recent call last):
  File "/root/.local/lib/python3.11/site-packages/celery/app/trace.py", line 477, in trace_task
    R = retval = fun(*args, **kwargs)
                 ^^^^^^^^^^^^^^^^^^^^
  File "/root/.local/lib/python3.11/site-packages/celery/app/trace.py", line 760, in __protected_call__
    return self.run(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/home/tasks.py", line 320, in rescan_filesystem
    handler.apply()
  File "/app/home/src/index/filesystem.py", line 58, in apply
    self.index()
  File "/app/home/src/index/filesystem.py", line 90, in index
    index_new_video(youtube_id)
  File "/app/home/src/index/video.py", line 407, in index_new_video
    video.build_json()
  File "/app/home/src/index/video.py", line 151, in build_json
    self._add_channel()
  File "/app/home/src/index/video.py", line 222, in _add_channel
    channel.build_json(upload=True, fallback=self.youtube_meta)
  File "/app/home/src/index/channel.py", line 51, in build_json
    self.upload_to_es()
  File "/app/home/src/index/generic.py", line 57, in upload_to_es
    _, _ = ElasticWrap(self.es_path).put(self.json_data, refresh=True)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/home/src/es/connect.py", line 113, in put
    raise ValueError("failed to add item to index")
ValueError: failed to add item to index
[2024-01-30 17:13:10,223: WARNING/ForkPoolWorker-32] 93a26c99-ffa8-472b-a00a-ad78121f9231 return callback


Further index issues. Fun. Of course, I have no backups in the TA backups directory (thanks, past me!)

  • 1 month later...

Since this is the first Google result, adding what I came across: Redis created its appdata folders with weird ownership. You could probably force it with variables, but I ended up just going into the terminal, changing the user and group to nobody, and giving them rwx. Redis and ES started up fine afterwards.
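
For anyone else landing here from search, that fix amounts to the following; 99:100 (nobody:users) being Unraid's default share owner is the assumption:

# give the appdata folders to nobody:users and grant read/write/execute
chown -R nobody:users /mnt/user/appdata/TubeArchivist
chmod -R u+rwX,g+rwX /mnt/user/appdata/TubeArchivist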

