ProfessionalIdiot

Members
  • Posts: 7
  • Joined
  • Last visited

ProfessionalIdiot's Achievements

Noob (1/14)

Reputation: 1
  1. Well - I am back in, but I'm sure I did it the wrong way. I took a backup of all the data in the Elasticsearch directory (user/appdata/Tubearchivist/es) and then blew that dir away. I then redeployed the ES container and saw permission issues with the snapshot directory. I fixed those with chown and chmod, and then the container started up and I was back in TA! However, all of my settings and data are gone, and there is seemingly no way to manually reimport everything (or my googling is failing me). I tried moving one of my backed-up snapshots into the snapshots dir on my now-working instance, but it did not recognize it (I've sketched at the end of this post what I think the API-based restore should look like). So I did something probably stupid: I ran a "Rescan Filesystem" from the app GUI. It appears to have tried but failed. Per "docker logs TubeArchivist":

[2024-01-30 17:12:04,945: INFO/MainProcess] Task rescan_filesystem[93a26c99-ffa8-472b-a00a-ad78121f9231] received
[2024-01-30 17:12:04,947: WARNING/ForkPoolWorker-32] rescan_filesystem create callback
[2024-01-30 17:12:05,018: WARNING/ForkPoolWorker-32] nothing to delete
[2024-01-30 17:12:05,024: WARNING/ForkPoolWorker-32] kXxQKY41gPA: get metadata from youtube
[2024-01-30 17:12:09,466: WARNING/ForkPoolWorker-32] UCbxZDWUN46l0QlmPuJmrUSQ: get metadata from es
[2024-01-30 17:12:09,475: WARNING/ForkPoolWorker-32] {"_index":"ta_channel","_id":"UCbxZDWUN46l0QlmPuJmrUSQ","found":false}
[2024-01-30 17:12:09,475: WARNING/ForkPoolWorker-32] UCbxZDWUN46l0QlmPuJmrUSQ: get metadata from youtube
[2024-01-30 17:12:09,890: WARNING/ForkPoolWorker-32] UCbxZDWUN46l0QlmPuJmrUSQ: download channel thumbnail
[2024-01-30 17:12:10,539: WARNING/ForkPoolWorker-32] UCbxZDWUN46l0QlmPuJmrUSQ: failed to extract thumbnail, use fallback
[2024-01-30 17:12:10,555: WARNING/ForkPoolWorker-32] UCbxZDWUN46l0QlmPuJmrUSQ: failed to extract thumbnail, use fallback
[2024-01-30 17:13:10,185: WARNING/ForkPoolWorker-32] {"error":{"root_cause":[{"type":"process_cluster_event_timeout_exception","reason":"failed to process cluster event (put-mapping [ta_channel/laxWDcnsTSiW0b1ydG9aDw]) within 30s"}],"type":"process_cluster_event_timeout_exception","reason":"failed to process cluster event (put-mapping [ta_channel/laxWDcnsTSiW0b1ydG9aDw]) within 30s"},"status":503}
[2024-01-30 17:13:10,185: WARNING/ForkPoolWorker-32] {'channel_active': True, 'channel_description': 'An archive of documentary films and events from North Korea/DPRK.\n\nYou are free to use any footage from this channel.\n\nNot affiliated or endorsed by the DPRK. Everything is here for the sake of archiving.\n', 'channel_id': 'UCbxZDWUN46l0QlmPuJmrUSQ', 'channel_last_refresh': 1706656329, 'channel_name': 'North Korea Film Archive', 'channel_subs': 671, 'channel_subscribed': False, 'channel_tags': False, 'channel_banner_url': False, 'channel_thumb_url': 'https://yt3.googleusercontent.com/ytc/AIf8zZRXhzwr26GVMlDRNJrYKrTOAde_do_6BwOq2ofm=s900-c-k-c0x00ffffff-no-rj', 'channel_tvart_url': False, 'channel_views': 0}
[2024-01-30 17:13:10,216: WARNING/ForkPoolWorker-32] 93a26c99-ffa8-472b-a00a-ad78121f9231 Failed callback
[2024-01-30 17:13:10,219: ERROR/ForkPoolWorker-32] Task rescan_filesystem[93a26c99-ffa8-472b-a00a-ad78121f9231] raised unexpected: ValueError('failed to add item to index')
Traceback (most recent call last):
  File "/root/.local/lib/python3.11/site-packages/celery/app/trace.py", line 477, in trace_task
    R = retval = fun(*args, **kwargs)
        ^^^^^^^^^^^^^^^^^^^^
  File "/root/.local/lib/python3.11/site-packages/celery/app/trace.py", line 760, in __protected_call__
    return self.run(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/home/tasks.py", line 320, in rescan_filesystem
    handler.apply()
  File "/app/home/src/index/filesystem.py", line 58, in apply
    self.index()
  File "/app/home/src/index/filesystem.py", line 90, in index
    index_new_video(youtube_id)
  File "/app/home/src/index/video.py", line 407, in index_new_video
    video.build_json()
  File "/app/home/src/index/video.py", line 151, in build_json
    self._add_channel()
  File "/app/home/src/index/video.py", line 222, in _add_channel
    channel.build_json(upload=True, fallback=self.youtube_meta)
  File "/app/home/src/index/channel.py", line 51, in build_json
    self.upload_to_es()
  File "/app/home/src/index/generic.py", line 57, in upload_to_es
    _, _ = ElasticWrap(self.es_path).put(self.json_data, refresh=True)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/home/src/es/connect.py", line 113, in put
    raise ValueError("failed to add item to index")
ValueError: failed to add item to index
[2024-01-30 17:13:10,223: WARNING/ForkPoolWorker-32] 93a26c99-ffa8-472b-a00a-ad78121f9231 return callback

Further index issues. Fun. Of course, I have no backups in the TA backups directory (thanks, past me!)
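In case it helps anyone reading this later: the manual restore I was expecting to be able to do looks roughly like the below (a sketch only - ta_snapshot is the repo name from TA's startup log, the snapshot name is a placeholder, and the host/credentials are from my setup). I have not gotten it to work against the files I copied back in, presumably because the repo metadata no longer matches them.

curl -u elastic:{REDACTED} "http://192.168.0.136:9200/_snapshot/ta_snapshot/_all?pretty"
# if the snapshot shows up in that list, restore the TA indices from it
# (as I understand it, the ta_* indices have to be closed or deleted first)
curl -u elastic:{REDACTED} -X POST "http://192.168.0.136:9200/_snapshot/ta_snapshot/<snapshot_name>/_restore?pretty" -H "Content-Type: application/json" -d '{"indices": "ta_*", "include_global_state": false}'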
  2. Removed the logs so as not to spam this page with text blocks... With the help of CrapGPT I have tried manually recreating the troubled shards, as well as rolling back to a snapshot from a date when everything was happy and healthy - no luck, same issues. Is there a way to do a clean reinstall of the ES container? I am shocked that my attempt to redeploy ES didn't fix these problems. I must not be doing a clean redeploy or something. (The steps I'm planning to try are sketched below.)
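For reference, this is the sequence I am planning to try for a truly clean ES redeploy (a sketch based on my own paths and container names, so adjust accordingly - the chown matches the uid:gid the official ES image expects for a mounted data dir):

docker stop TubeArchivist TubeArchivist-ES
mv /mnt/user/appdata/Tubearchivist/es /mnt/user/appdata/Tubearchivist/es.broken   # keep the old data around, just in case
mkdir -p /mnt/user/appdata/Tubearchivist/es
chown 1000:0 /mnt/user/appdata/Tubearchivist/es
docker start TubeArchivist-ES
docker start TubeArchivist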
  3. Need some help un-hosing my TA install. I have a ZFS pool and a regular pool. Everything broke when I tried to migrate docker to my regular pool. All containers work except TA: the TA redis and ES containers start, but TA itself dies immediately after starting. I used CrapGPT to try to help me diagnose it but got nowhere - the logs from TA seem to point to issues connecting to ES.

Output of "docker logs TubeArchivist":

[TubeArchivist ASCII-art banner]

#######################
#  Environment Setup  #
#######################
[1] checking expected env vars
✓ all expected env vars are set
[2] check ES user overwrite
✓ ES user is set to elastic
[3] check TA_PORT overwrite
TA_PORT is not set
[4] check TA_UWSGI_PORT overwrite
TA_UWSGI_PORT is not set
[5] check ENABLE_CAST overwrite
ENABLE_CAST is not set
[6] create superuser
superuser already created

#######################
#  Connection check   #
#######################
[1] connect to Redis
✓ Redis connection verified
[2] set Redis config
✓ Redis config set
[3] connect to Elastic Search
... waiting for ES [0/24]
{"cluster_name":"docker-cluster","status":"red","timed_out":true,"number_of_nodes":1,"number_of_data_nodes":1,"active_primary_shards":11,"active_shards":11,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":1,"delayed_unassigned_shards":0,"number_of_pending_tasks":0,"number_of_in_flight_fetch":0,"task_max_waiting_in_queue_millis":0,"active_shards_percent_as_number":91.66666666666666}
✓ ES connection established
[4] Elastic Search version check
✓ ES version check passed
[5] check ES path.repo env var
✓ path.repo env var is set

#######################
#  Application Start  #
#######################
[1] set new config.json values
✓ new config values set
[2] create expected cache folders
✓ expected folders created
[3] clear leftover keys in redis
no keys found
[4] clear task leftovers
[5] clear leftover files from dl cache
clear download cache
no files found
[6] check for first run after update
no new update found
[MIGRATION] validate index mappings
ta_config index is created and up to date...
ta_channel index is created and up to date...
ta_video index is created and up to date...
ta_download index is created and up to date...
ta_playlist index is created and up to date...
ta_subtitle index is created and up to date...
ta_comment index is created and up to date...
[MIGRATION] setup snapshots
snapshot: run setup
snapshot: repo ta_snapshot already created
snapshot: policy is set.
snapshot: is outdated, create new now
snapshot: last snapshot is up-to-date
snapshot: executing now: {'snapshot_name': 'ta_daily_-vdfxfv21sb2fyrqlwhnh7a'}
[MIGRATION] move user configuration to ES
✓ Settings for user '1' migrated to ES
✓ Settings for all users migrated to ES

########################
# Filesystem Migration #
########################
{"error":{"root_cause":[],"type":"search_phase_execution_exception","reason":"","phase":"indices:data/read/open_point_in_time","grouped":true,"failed_shards":[],"caused_by":{"type":"search_phase_execution_exception","reason":"Search rejected due to missing shards [[ta_video][0]]. Consider using `allow_partial_search_results` setting to bypass this error.","phase":"indices:data/read/open_point_in_time","grouped":true,"failed_shards":[]}},"status":503}
Traceback (most recent call last):
  File "/app/manage.py", line 23, in <module>
    main()
  File "/app/manage.py", line 19, in main
    execute_from_command_line(sys.argv)
  File "/root/.local/lib/python3.11/site-packages/django/core/management/__init__.py", line 442, in execute_from_command_line
    utility.execute()
  File "/root/.local/lib/python3.11/site-packages/django/core/management/__init__.py", line 436, in execute
    self.fetch_command(subcommand).run_from_argv(self.argv)
  File "/root/.local/lib/python3.11/site-packages/django/core/management/base.py", line 412, in run_from_argv
    self.execute(*args, **cmd_options)
  File "/root/.local/lib/python3.11/site-packages/django/core/management/base.py", line 458, in execute
    output = self.handle(*args, **options)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/config/management/commands/ta_migpath.py", line 31, in handle
    to_migrate = handler.get_to_migrate()
                 ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/config/management/commands/ta_migpath.py", line 80, in get_to_migrate
    response = IndexPaginate("ta_video", data).get_results()
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/app/home/src/es/connect.py", line 163, in get_results
    self.get_pit()
  File "/app/home/src/es/connect.py", line 173, in get_pit
    self.pit_id = response["id"]
                  ~~~~~~~~^^^^^^
KeyError: 'id'
Operations to perform:
  Apply all migrations: admin, auth, authtoken, contenttypes, home, sessions
Running migrations:
  No migrations to apply.

Then a whole bunch of lines about deleting stuff. The container then dies.
Output of "docker logs TubeArchivist-ES" {"@timestamp":"2024-01-28T22:18:24.383Z", "log.level": "INFO", "message":"snapshot lifecycle policy [ta_daily] issuing create snapshot [ta_daily_-8yjqavu4rtsg6nffcp0fka]", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[23abc1a3793f][generic][T#5]","log.logger":"org.elasticsearch.xpack.slm.SnapshotLifecycleTask","elasticsearch.cluster.uuid":"bE_mcvdqTey7H6NTdkTYyQ","elasticsearch.node.id":"o_n-k-bpTNW1_eO6Mnoqpw","elasticsearch.node.name":"23abc1a3793f","elasticsearch.cluster.name":"docker-cluster"} {"@timestamp":"2024-01-28T22:18:24.396Z", "log.level": "WARN", "message":"[ta_snapshot][ta_daily_-8yjqavu4rtsg6nffcp0fka] failed to create snapshot", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[23abc1a3793f][masterService#updateTask][T#6]","log.logger":"org.elasticsearch.snapshots.SnapshotsService","elasticsearch.cluster.uuid":"bE_mcvdqTey7H6NTdkTYyQ","elasticsearch.node.id":"o_n-k-bpTNW1_eO6Mnoqpw","elasticsearch.node.name":"23abc1a3793f","elasticsearch.cluster.name":"docker-cluster","error.type":"org.elasticsearch.snapshots.SnapshotException","error.message":"[ta_snapshot:ta_daily_-8yjqavu4rtsg6nffcp0fka/jjnshUCbScWkRBaHi7pOeg] Indices don't have primary shards [ta_video]","error.stack_trace":"org.elasticsearch.snapshots.SnapshotException: [ta_snapshot:ta_daily_-8yjqavu4rtsg6nffcp0fka/jjnshUCbScWkRBaHi7pOeg] Indices don't have primary shards [ta_video]\n\tat [email protected]/org.elasticsearch.snapshots.SnapshotsService$SnapshotTaskExecutor.createSnapshot(SnapshotsService.java:3897)\n\tat [email protected]/org.elasticsearch.snapshots.SnapshotsService$SnapshotTaskExecutor.execute(SnapshotsService.java:3729)\n\tat [email protected]/org.elasticsearch.cluster.service.MasterService.innerExecuteTasks(MasterService.java:1039)\n\tat [email protected]/org.elasticsearch.cluster.service.MasterService.executeTasks(MasterService.java:1004)\n\tat [email protected]/org.elasticsearch.cluster.service.MasterService.executeAndPublishBatch(MasterService.java:232)\n\tat [email protected]/org.elasticsearch.cluster.service.MasterService$BatchingTaskQueue$Processor.lambda$run$2(MasterService.java:1626)\n\tat [email protected]/org.elasticsearch.action.ActionListener.run(ActionListener.java:386)\n\tat [email protected]/org.elasticsearch.cluster.service.MasterService$BatchingTaskQueue$Processor.run(MasterService.java:1623)\n\tat [email protected]/org.elasticsearch.cluster.service.MasterService$5.lambda$doRun$0(MasterService.java:1237)\n\tat [email protected]/org.elasticsearch.action.ActionListener.run(ActionListener.java:386)\n\tat [email protected]/org.elasticsearch.cluster.service.MasterService$5.doRun(MasterService.java:1216)\n\tat [email protected]/org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:983)\n\tat [email protected]/org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:26)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1144)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:642)\n\tat java.base/java.lang.Thread.run(Thread.java:1583)\n"} {"@timestamp":"2024-01-28T22:18:24.398Z", "log.level":"ERROR", "message":"failed to create snapshot for snapshot lifecycle policy [ta_daily]: org.elasticsearch.snapshots.SnapshotException: 
[ta_snapshot:ta_daily_-8yjqavu4rtsg6nffcp0fka/jjnshUCbScWkRBaHi7pOeg] Indices don't have primary shards [ta_video]", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[23abc1a3793f][masterService#updateTask][T#6]","log.logger":"org.elasticsearch.xpack.slm.SnapshotLifecycleTask","elasticsearch.cluster.uuid":"bE_mcvdqTey7H6NTdkTYyQ","elasticsearch.node.id":"o_n-k-bpTNW1_eO6Mnoqpw","elasticsearch.node.name":"23abc1a3793f","elasticsearch.cluster.name":"docker-cluster"} {"@timestamp":"2024-01-28T22:18:26.040Z", "log.level": "WARN", "message":"path: /ta_video/_pit, params: {index=ta_video, keep_alive=10m}, status: 503", "ecs.version": "1.2.0","service.name":"ES_ECS","event.dataset":"elasticsearch.server","process.thread.name":"elasticsearch[23abc1a3793f][transport_worker][T#19]","log.logger":"rest.suppressed","elasticsearch.cluster.uuid":"bE_mcvdqTey7H6NTdkTYyQ","elasticsearch.node.id":"o_n-k-bpTNW1_eO6Mnoqpw","elasticsearch.node.name":"23abc1a3793f","elasticsearch.cluster.name":"docker-cluster","error.type":"org.elasticsearch.action.search.SearchPhaseExecutionException","error.message":"","error.stack_trace":"Failed to execute phase [indices:data/read/open_point_in_time], \n\tat [email protected]/org.elasticsearch.action.search.AbstractSearchAsyncAction.onPhaseFailure(AbstractSearchAsyncAction.java:709)\n\tat [email protected]/org.elasticsearch.action.search.AbstractSearchAsyncAction.executePhase(AbstractSearchAsyncAction.java:456)\n\tat [email protected]/org.elasticsearch.action.search.AbstractSearchAsyncAction.start(AbstractSearchAsyncAction.java:220)\n\tat [email protected]/org.elasticsearch.action.search.TransportSearchAction.executeSearch(TransportSearchAction.java:1144)\n\tat [email protected]/org.elasticsearch.action.search.TransportSearchAction.executeLocalSearch(TransportSearchAction.java:913)\n\tat [email protected]/org.elasticsearch.action.search.TransportSearchAction.lambda$executeRequest$10(TransportSearchAction.java:337)\n\tat [email protected]/org.elasticsearch.action.ActionListenerImplementations$ResponseWrappingActionListener.onResponse(ActionListenerImplementations.java:236)\n\tat [email protected]/org.elasticsearch.index.query.Rewriteable.rewriteAndFetch(Rewriteable.java:109)\n\tat [email protected]/org.elasticsearch.index.query.Rewriteable.rewriteAndFetch(Rewriteable.java:77)\n\tat [email protected]/org.elasticsearch.action.search.TransportSearchAction.executeRequest(TransportSearchAction.java:449)\n\tat [email protected]/org.elasticsearch.action.search.TransportOpenPointInTimeAction.doExecute(TransportOpenPointInTimeAction.java:105)\n\tat [email protected]/org.elasticsearch.action.search.TransportOpenPointInTimeAction.doExecute(TransportOpenPointInTimeAction.java:53)\n\tat [email protected]/org.elasticsearch.action.support.TransportAction$RequestFilterChain.proceed(TransportAction.java:87)\n\tat [email protected]/org.elasticsearch.action.support.ActionFilter$Simple.apply(ActionFilter.java:53)\n\tat [email protected]/org.elasticsearch.action.support.TransportAction$RequestFilterChain.proceed(TransportAction.java:85)\n\tat [email protected]/org.elasticsearch.xpack.security.action.filter.SecurityActionFilter.lambda$applyInternal$3(SecurityActionFilter.java:163)\n\tat [email protected]/org.elasticsearch.action.ActionListenerImplementations$DelegatingFailureActionListener.onResponse(ActionListenerImplementations.java:212)\n\tat [email 
protected]/org.elasticsearch.xpack.security.authz.AuthorizationService$1.onResponse(AuthorizationService.java:623)\n\tat [email protected]/org.elasticsearch.xpack.security.authz.AuthorizationService$1.onResponse(AuthorizationService.java:617)\n\tat [email protected]/org.elasticsearch.xpack.security.authz.interceptor.SearchRequestCacheDisablingInterceptor.intercept(SearchRequestCacheDisablingInterceptor.java:53)\n\tat [email protected]/org.elasticsearch.xpack.security.authz.AuthorizationService$1.onResponse(AuthorizationService.java:621)\n\tat [email protected]/org.elasticsearch.xpack.security.authz.AuthorizationService$1.onResponse(AuthorizationService.java:617)\n\tat [email protected]/org.elasticsearch.xpack.security.authz.interceptor.FieldAndDocumentLevelSecurityRequestInterceptor.intercept(FieldAndDocumentLevelSecurityRequestInterceptor.java:79)\n\tat [email protected]/org.elasticsearch.xpack.security.authz.interceptor.ShardSearchRequestInterceptor.intercept(ShardSearchRequestInterceptor.java:24)\n\tat [email protected]/org.elasticsearch.xpack.security.authz.AuthorizationService$1.onResponse(AuthorizationService.java:621)\n\tat [email protected]/org.elasticsearch.xpack.security.authz.AuthorizationService$1.onResponse(AuthorizationService.java:617)\n\tat [email protected]/org.elasticsearch.xpack.security.authz.interceptor.FieldAndDocumentLevelSecurityRequestInterceptor.intercept(FieldAndDocumentLevelSecurityRequestInterceptor.java:79)\n\tat [email protected]/org.elasticsearch.xpack.security.authz.interceptor.UpdateRequestInterceptor.intercept(UpdateRequestInterceptor.java:27)\n\tat [email protected]/org.elasticsearch.xpack.security.authz.AuthorizationService$1.onResponse(AuthorizationService.java:621)\n\tat [email protected]/org.elasticsearch.xpack.security.authz.AuthorizationService$1.onResponse(AuthorizationService.java:617)\n\tat [email protected]/org.elasticsearch.xpack.security.authz.interceptor.ResizeRequestInterceptor.intercept(ResizeRequestInterceptor.java:98)\n\tat [email protected]/org.elasticsearch.xpack.security.authz.AuthorizationService$1.onResponse(AuthorizationService.java:621)\n\tat [email protected]/org.elasticsearch.xpack.security.authz.AuthorizationService$1.onResponse(AuthorizationService.java:617)\n\tat [email protected]/org.elasticsearch.xpack.security.authz.interceptor.BulkShardRequestInterceptor.intercept(BulkShardRequestInterceptor.java:85)\n\tat [email protected]/org.elasticsearch.xpack.security.authz.AuthorizationService$1.onResponse(AuthorizationService.java:621)\n\tat [email protected]/org.elasticsearch.xpack.security.authz.AuthorizationService$1.onResponse(AuthorizationService.java:617)\n\tat [email protected]/org.elasticsearch.xpack.security.authz.interceptor.FieldAndDocumentLevelSecurityRequestInterceptor.intercept(FieldAndDocumentLevelSecurityRequestInterceptor.java:79)\n\tat [email protected]/org.elasticsearch.xpack.security.authz.interceptor.SearchRequestInterceptor.intercept(SearchRequestInterceptor.java:21)\n\tat [email protected]/org.elasticsearch.xpack.security.authz.AuthorizationService$1.onResponse(AuthorizationService.java:621)\n\tat [email protected]/org.elasticsearch.xpack.security.authz.AuthorizationService$1.onResponse(AuthorizationService.java:617)\n\tat [email protected]/org.elasticsearch.xpack.security.authz.interceptor.IndicesAliasesRequestInterceptor.intercept(IndicesAliasesRequestInterceptor.java:124)\n\tat [email protected]/org.elasticsearch.xpack.security.authz.AuthorizationService$1.onResponse(AuthorizationService.java:621)\n\tat 
[email protected]/org.elasticsearch.xpack.security.authz.AuthorizationService$1.onResponse(AuthorizationService.java:617)\n\tat [email protected]/org.elasticsearch.xpack.security.authz.interceptor.DlsFlsLicenseRequestInterceptor.intercept(DlsFlsLicenseRequestInterceptor.java:106)\n\tat [email protected]/org.elasticsearch.xpack.security.authz.AuthorizationService.runRequestInterceptors(AuthorizationService.java:617)\n\tat [email protected]/org.elasticsearch.xpack.security.authz.AuthorizationService.handleIndexActionAuthorizationResult(AuthorizationService.java:602)\n\tat [email protected]/org.elasticsearch.xpack.security.authz.AuthorizationService.lambda$authorizeAction$13(AuthorizationService.java:505)\n\tat [email protected]/org.elasticsearch.xpack.security.authz.AuthorizationService$AuthorizationResultListener.onResponse(AuthorizationService.java:1028)\n\tat [email protected]/org.elasticsearch.xpack.security.authz.AuthorizationService$AuthorizationResultListener.onResponse(AuthorizationService.java:994)\n\tat [email protected]/org.elasticsearch.action.support.ContextPreservingActionListener.onResponse(ContextPreservingActionListener.java:32)\n\tat [email protected]/org.elasticsearch.xpack.security.authz.RBACEngine.lambda$authorizeIndexAction$3(RBACEngine.java:401)\n\tat [email protected]/org.elasticsearch.action.ActionListenerImplementations$ResponseWrappingActionListener.onResponse(ActionListenerImplementations.java:236)\n\tat [email protected]/org.elasticsearch.action.support.SubscribableListener$SuccessResult.complete(SubscribableListener.java:310)\n\tat [email protected]/org.elasticsearch.action.support.SubscribableListener.tryComplete(SubscribableListener.java:230)\n\tat [email protected]/org.elasticsearch.action.support.SubscribableListener.addListener(SubscribableListener.java:133)\n\tat [email protected]/org.elasticsearch.action.support.SubscribableListener.addListener(SubscribableListener.java:108)\n\tat [email protected]/org.elasticsearch.xpack.security.authz.AuthorizationService$CachingAsyncSupplier.getAsync(AuthorizationService.java:1074)\n\tat [email protected]/org.elasticsearch.xpack.security.authz.RBACEngine.authorizeIndexAction(RBACEngine.java:381)\n\tat [email protected]/org.elasticsearch.xpack.security.authz.AuthorizationService.authorizeAction(AuthorizationService.java:498)\n\tat [email protected]/org.elasticsearch.xpack.security.authz.AuthorizationService.maybeAuthorizeRunAs(AuthorizationService.java:435)\n\tat [email protected]/org.elasticsearch.xpack.security.authz.AuthorizationService.lambda$authorize$3(AuthorizationService.java:322)\n\tat [email protected]/org.elasticsearch.action.ActionListener$2.onResponse(ActionListener.java:178)\n\tat [email protected]/org.elasticsearch.action.support.ContextPreservingActionListener.onResponse(ContextPreservingActionListener.java:32)\n\tat [email protected]/org.elasticsearch.xpack.security.authz.RBACEngine.lambda$resolveAuthorizationInfo$0(RBACEngine.java:151)\n\tat [email protected]/org.elasticsearch.action.ActionListenerImplementations$ResponseWrappingActionListener.onResponse(ActionListenerImplementations.java:236)\n\tat [email protected]/org.elasticsearch.xpack.security.authz.store.CompositeRolesStore.lambda$getRoles$4(CompositeRolesStore.java:194)\n\tat [email protected]/org.elasticsearch.action.ActionListenerImplementations$ResponseWrappingActionListener.onResponse(ActionListenerImplementations.java:236)\n\tat [email 
protected]/org.elasticsearch.xpack.security.authz.store.CompositeRolesStore.lambda$getRole$5(CompositeRolesStore.java:212)\n\tat [email protected]/org.elasticsearch.action.ActionListenerImplementations$ResponseWrappingActionListener.onResponse(ActionListenerImplementations.java:236)\n\tat [email protected]/org.elasticsearch.xpack.core.security.authz.store.RoleReferenceIntersection.lambda$buildRole$0(RoleReferenceIntersection.java:49)\n\tat [email protected]/org.elasticsearch.action.ActionListenerImplementations$ResponseWrappingActionListener.onResponse(ActionListenerImplementations.java:236)\n\tat [email protected]/org.elasticsearch.action.support.GroupedActionListener.onResponse(GroupedActionListener.java:56)\n\tat [email protected]/org.elasticsearch.xpack.security.authz.store.CompositeRolesStore.buildRoleFromRoleReference(CompositeRolesStore.java:244)\n\tat [email protected]/org.elasticsearch.xpack.core.security.authz.store.RoleReferenceIntersection.lambda$buildRole$1(RoleReferenceIntersection.java:53)\n\tat java.base/java.lang.Iterable.forEach(Iterable.java:75)\n\tat [email protected]/org.elasticsearch.xpack.core.security.authz.store.RoleReferenceIntersection.buildRole(RoleReferenceIntersection.java:53)\n\tat [email protected]/org.elasticsearch.xpack.security.authz.store.CompositeRolesStore.getRole(CompositeRolesStore.java:210)\n\tat [email protected]/org.elasticsearch.xpack.security.authz.store.CompositeRolesStore.getRoles(CompositeRolesStore.java:187)\n\tat [email protected]/org.elasticsearch.xpack.security.authz.RBACEngine.resolveAuthorizationInfo(RBACEngine.java:147)\n\tat [email protected]/org.elasticsearch.xpack.security.authz.AuthorizationService.authorize(AuthorizationService.java:338)\n\tat [email protected]/org.elasticsearch.xpack.security.action.filter.SecurityActionFilter.lambda$applyInternal$4(SecurityActionFilter.java:159)\n\tat [email protected]/org.elasticsearch.action.ActionListenerImplementations$ResponseWrappingActionListener.onResponse(ActionListenerImplementations.java:236)\n\tat [email protected]/org.elasticsearch.action.ActionListenerImplementations$MappedActionListener.onResponse(ActionListenerImplementations.java:95)\n\tat [email protected]/org.elasticsearch.xpack.security.authc.AuthenticatorChain.authenticate(AuthenticatorChain.java:93)\n\tat [email protected]/org.elasticsearch.xpack.security.authc.AuthenticationService.authenticate(AuthenticationService.java:262)\n\tat [email protected]/org.elasticsearch.xpack.security.authc.AuthenticationService.authenticate(AuthenticationService.java:171)\n\tat [email protected]/org.elasticsearch.xpack.security.action.filter.SecurityActionFilter.applyInternal(SecurityActionFilter.java:155)\n\tat [email protected]/org.elasticsearch.xpack.security.action.filter.SecurityActionFilter.apply(SecurityActionFilter.java:114)\n\tat [email protected]/org.elasticsearch.action.support.TransportAction$RequestFilterChain.proceed(TransportAction.java:85)\n\tat [email protected]/org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:62)\n\tat [email protected]/org.elasticsearch.tasks.TaskManager.registerAndExecute(TaskManager.java:196)\n\tat [email protected]/org.elasticsearch.client.internal.node.NodeClient.executeLocally(NodeClient.java:108)\n\tat [email protected]/org.elasticsearch.client.internal.node.NodeClient.doExecute(NodeClient.java:86)\n\tat [email protected]/org.elasticsearch.client.internal.support.AbstractClient.execute(AbstractClient.java:381)\n\tat [email 
protected]/org.elasticsearch.action.search.RestOpenPointInTimeAction.lambda$prepareRequest$1(RestOpenPointInTimeAction.java:64)\n\tat [email protected]/org.elasticsearch.rest.BaseRestHandler.handleRequest(BaseRestHandler.java:103)\n\tat [email protected]/org.elasticsearch.xpack.security.rest.SecurityRestFilter.doHandleRequest(SecurityRestFilter.java:94)\n\tat [email protected]/org.elasticsearch.xpack.security.rest.SecurityRestFilter.lambda$handleRequest$0(SecurityRestFilter.java:85)\n\tat [email protected]/org.elasticsearch.action.ActionListener$2.onResponse(ActionListener.java:178)\n\tat [email protected]/org.elasticsearch.xpack.security.authc.support.SecondaryAuthenticator.lambda$authenticateAndAttachToContext$3(SecondaryAuthenticator.java:99)\n\tat [email protected]/org.elasticsearch.action.ActionListenerImplementations$ResponseWrappingActionListener.onResponse(ActionListenerImplementations.java:236)\n\tat [email protected]/org.elasticsearch.xpack.security.authc.support.SecondaryAuthenticator.authenticate(SecondaryAuthenticator.java:109)\n\tat [email protected]/org.elasticsearch.xpack.security.authc.support.SecondaryAuthenticator.authenticateAndAttachToContext(SecondaryAuthenticator.java:90)\n\tat [email protected]/org.elasticsearch.xpack.security.rest.SecurityRestFilter.handleRequest(SecurityRestFilter.java:79)\n\tat [email protected]/org.elasticsearch.rest.RestController.dispatchRequest(RestController.java:441)\n\tat [email protected]/org.elasticsearch.rest.RestController.tryAllHandlers(RestController.java:570)\n\tat [email protected]/org.elasticsearch.rest.RestController.dispatchRequest(RestController.java:325)\n\tat [email protected]/org.elasticsearch.http.AbstractHttpServerTransport.dispatchRequest(AbstractHttpServerTransport.java:458)\n\tat [email protected]/org.elasticsearch.http.AbstractHttpServerTransport.handleIncomingRequest(AbstractHttpServerTransport.java:554)\n\tat [email protected]/org.elasticsearch.http.AbstractHttpServerTransport.incomingRequest(AbstractHttpServerTransport.java:431)\n\tat [email protected]/org.elasticsearch.http.netty4.Netty4HttpPipeliningHandler.handlePipelinedRequest(Netty4HttpPipeliningHandler.java:128)\n\tat [email protected]/org.elasticsearch.http.netty4.Netty4HttpPipeliningHandler.channelRead(Netty4HttpPipeliningHandler.java:118)\n\tat [email protected]/io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)\n\tat [email protected]/io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)\n\tat [email protected]/io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)\n\tat [email protected]/io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)\n\tat [email protected]/io.netty.handler.codec.MessageToMessageCodec.channelRead(MessageToMessageCodec.java:111)\n\tat [email protected]/io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)\n\tat [email protected]/io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)\n\tat [email protected]/io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)\n\tat [email protected]/io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)\n\tat [email 
protected]/io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)\n\tat [email protected]/io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)\n\tat [email protected]/io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)\n\tat [email protected]/io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)\n\tat [email protected]/io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)\n\tat [email protected]/io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)\n\tat [email protected]/io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)\n\tat [email protected]/org.elasticsearch.http.netty4.Netty4HttpHeaderValidator.forwardData(Netty4HttpHeaderValidator.java:194)\n\tat [email protected]/org.elasticsearch.http.netty4.Netty4HttpHeaderValidator.forwardFullRequest(Netty4HttpHeaderValidator.java:137)\n\tat [email protected]/org.elasticsearch.http.netty4.Netty4HttpHeaderValidator.lambda$requestStart$1(Netty4HttpHeaderValidator.java:120)\n\tat [email protected]/io.netty.util.concurrent.PromiseTask.runTask(PromiseTask.java:98)\n\tat [email protected]/io.netty.util.concurrent.PromiseTask.run(PromiseTask.java:106)\n\tat [email protected]/io.netty.util.concurrent.AbstractEventExecutor.runTask(AbstractEventExecutor.java:174)\n\tat [email protected]/io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:167)\n\tat [email protected]/io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:470)\n\tat [email protected]/io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:566)\n\tat [email protected]/io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)\n\tat [email protected]/io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)\n\tat java.base/java.lang.Thread.run(Thread.java:1583)\nCaused by: Failed to execute phase [indices:data/read/open_point_in_time], Search rejected due to missing shards [[ta_video][0]]. Consider using `allow_partial_search_results` setting to bypass this error.\n\tat [email protected]/org.elasticsearch.action.search.SearchPhase.doCheckNoMissingShards(SearchPhase.java:61)\n\tat [email protected]/org.elasticsearch.action.search.AbstractSearchAsyncAction.run(AbstractSearchAsyncAction.java:230)\n\tat [email protected]/org.elasticsearch.action.search.AbstractSearchAsyncAction.executePhase(AbstractSearchAsyncAction.java:451)\n\t... 
138 more\n"}

I know nothing about ES, but with CrapGPT I was able to determine that it looks like there is an issue with a shard:

curl -X GET "http://192.168.0.136:9200/_cluster/health" -u elastic:{REDACTED}
{"cluster_name":"docker-cluster","status":"red","timed_out":false,"number_of_nodes":1,"number_of_data_nodes":1,"active_primary_shards":12,"active_shards":12,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":1,"delayed_unassigned_shards":0,"number_of_pending_tasks":0,"number_of_in_flight_fetch":0,"task_max_waiting_in_queue_millis":0,"active_shards_percent_as_number":92.3076923076923}

From within the ES container console:

sh-5.0$ curl -X GET "localhost:9200/_cluster/allocation/explain?pretty" -u elastic:{REDACTED}
{
  "note" : "No shard was specified in the explain API request, so this response explains a randomly chosen unassigned shard. There may be other unassigned shards in this cluster which cannot be assigned for different reasons. It may not be possible to assign this shard until one of the other shards is assigned correctly. To explain the allocation of other shards (whether assigned or unassigned) you must specify the target shard in the request to this API.",
  "index" : "ta_video",
  "shard" : 0,
  "primary" : true,
  "current_state" : "unassigned",
  "unassigned_info" : {
    "reason" : "CLUSTER_RECOVERED",
    "at" : "2024-01-28T22:01:11.229Z",
    "last_allocation_status" : "no_valid_shard_copy"
  },
  "can_allocate" : "no_valid_shard_copy",
  "allocate_explanation" : "Elasticsearch can't allocate this shard because all the copies of its data in the cluster are stale or corrupt. Elasticsearch will allocate this shard when a node containing a good copy of its data joins the cluster. If no such node is available, restore this index from a recent snapshot.",
  "node_allocation_decisions" : [
    {
      "node_id" : "o_n-k-bpTNW1_eO6Mnoqpw",
      "node_name" : "23abc1a3793f",
      "transport_address" : "172.17.0.2:9300",
      "node_attributes" : {
        "transform.config_version" : "10.0.0",
        "xpack.installed" : "true",
        "ml.config_version" : "12.0.0",
        "ml.max_jvm_size" : "536870912",
        "ml.allocated_processors_double" : "32.0",
        "ml.allocated_processors" : "32",
        "ml.machine_memory" : "67490713600"
      },
      "roles" : [ "data", "data_cold", "data_content", "data_frozen", "data_hot", "data_warm", "ingest", "master", "ml", "remote_cluster_client", "transform" ],
      "node_decision" : "no",
      "store" : {
        "in_sync" : true,
        "allocation_id" : "0gZ4kEOLRDGQBRiuSAKyzQ",
        "store_exception" : {
          "type" : "corrupt_index_exception",
          "reason" : "failed engine (reason: [corrupt file (source: [start])]) (resource=preexisting_corruption)",
          "caused_by" : {
            "type" : "i_o_exception",
            "reason" : "failed engine (reason: [corrupt file (source: [start])])",
            "caused_by" : {
              "type" : "corrupt_index_exception",
              "reason" : "codec footer mismatch (file truncated?): actual footer=15729219 vs expected footer=-1071082520 (resource=MemorySegmentIndexInput(path=\"/usr/share/elasticsearch/data/indices/K6CzjUsuRl-MxHjOUUBoBw/0/index/_5de.cfs\"))"
            }
          }
        }
      }
    }
  ]
}

Any support is greatly appreciated! (What I'm considering as a last resort is sketched below.)
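From what I have read, the last-resort option for a corrupt primary with no good copy is to force an empty primary for that shard - I am holding off because, as I understand it, this wipes whatever was left in [ta_video][0] (a sketch only; the node name is taken from the allocation explain output above):

curl -u elastic:{REDACTED} -X POST "http://192.168.0.136:9200/_cluster/reroute?pretty" -H "Content-Type: application/json" -d '{"commands":[{"allocate_empty_primary":{"index":"ta_video","shard":0,"node":"23abc1a3793f","accept_data_loss":true}}]}'

If there is a less destructive route (for example restoring just ta_video from one of the ta_daily snapshots), I would much rather do that.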
  4. Got it - thank you. So, for my understanding: when I go to fix this, is it a valid config to create a raidz1 or raidz2 pool, with cache, and with spares added (via the instructions in your other post)? Adding spares would have been my next step if this had all worked as I (errantly) expected. (Roughly what I have in mind is sketched below.)
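Roughly what I have in mind, sketched with placeholder pool/device names (I would still follow your other post for the unRAID-specific steps):

zpool create tank raidz1 sdb sdc sdd sde    # or raidz2 for double parity
zpool add tank cache sdf sdg                # L2ARC cache devices
zpool add tank spare sdh                    # hot spare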
  5. Wow - to make this more confusing, when I start the array in order 2 mentioned above (vdev order: raidz1-0, 1-1, mirror-2, cache), the GUI shows all the ZFS disks as unmountable, but everything seems to be working?

root@nas01:~# zpool status
  pool: zfs
 state: ONLINE
  scan: scrub repaired 0B in 00:48:59 with 0 errors on Mon Nov 27 01:49:00 2023
config:

        NAME        STATE     READ WRITE CKSUM
        zfs         ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            sdg1    ONLINE       0     0     0
            sdh1    ONLINE       0     0     0
            sde1    ONLINE       0     0     0
            sdf1    ONLINE       0     0     0
          raidz1-1  ONLINE       0     0     0
            sdc1    ONLINE       0     0     0
            sdl1    ONLINE       0     0     0
            sdm1    ONLINE       0     0     0
            sdn1    ONLINE       0     0     0
          mirror-2  ONLINE       0     0     0
            sdk1    ONLINE       0     0     0
            sdd1    ONLINE       0     0     0
            sdb1    ONLINE       0     0     0
        cache
          sdj1      ONLINE       0     0     0
          sdi1      ONLINE       0     0     0

errors: No known data errors

I also see I/O on the pool:

root@nas01:~# zpool iostat -v 1
              capacity     operations     bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
zfs         10.9T  21.8T     62      2   486K  27.9K
  raidz1-0  10.9T  3.69T     61      0   479K  8.92K
    sdg1        -      -     15      0   126K  2.28K
    sdh1        -      -     15      0   114K  2.23K
    sde1        -      -     15      0   128K  2.27K
    sdf1        -      -     15      0   112K  2.15K
  raidz1-1  48.8G  14.5T      0      0  4.79K  9.37K
    sdc1        -      -      0      0  1.17K  2.36K
    sdl1        -      -      0      0  1.26K  2.32K
    sdm1        -      -      0      0  1.16K  2.40K
    sdn1        -      -      0      0  1.21K  2.28K
  mirror-2    1004K  3.62T     0      0  1.55K  9.62K
    sdk1        -      -      0      0    540  3.21K
    sdd1        -      -      0      0    510  3.21K
    sdb1        -      -      0      0    540  3.21K
cache           -      -      -      -      -      -
  sdj1      23.9G  1.80T     35      0   146K    483
  sdi1      23.8G  1.80T     33      0   142K    414
----------  -----  -----  -----  -----  -----  -----

And I can access my shares via SMB. So do I just live with the GUI being inaccurate for now? I can monitor the shares fine otherwise (external monitoring, or from SSH):

root@nas01:/var/log# df -h
Filesystem      Size  Used Avail Use% Mounted on
rootfs           32G  243M   32G   1% /
tmpfs            32M  484K   32M   2% /run
/dev/sda1       239G  448M  239G   1% /boot
overlay          32G  243M   32G   1% /lib
overlay          32G  243M   32G   1% /usr
devtmpfs        8.0M     0  8.0M   0% /dev
tmpfs            32G     0   32G   0% /dev/shm
tmpfs           128M  2.6M  126M   2% /var/log
tmpfs           1.0M     0  1.0M   0% /mnt/disks
tmpfs           1.0M     0  1.0M   0% /mnt/remotes
tmpfs           1.0M     0  1.0M   0% /mnt/addons
tmpfs           1.0M     0  1.0M   0% /mnt/rootshare
tmpfs           6.3G     0  6.3G   0% /run/user/0
zfs              17T  256K   17T   1% /mnt/zfs
zfs/system       17T  499M   17T   1% /mnt/zfs/system
zfs/isos         17T  256K   17T   1% /mnt/zfs/isos
zfs/domains      17T  256K   17T   1% /mnt/zfs/domains
zfs/x            25T  8.0T   17T  33% /mnt/zfs/x
zfs/appdata      17T   10M   17T   1% /mnt/zfs/appdata
/dev/md1p1       59G  453M   59G   1% /mnt/disk1
shfs             59G  453M   59G   1% /mnt/user0
shfs             59G  453M   59G   1% /mnt/user
/dev/loop2       20G  589M   19G   3% /var/lib/docker
/dev/loop3      1.0G  4.1M  905M   1% /etc/libvirt

Docker, which I moved to this ZFS pool, seems to be happy as well. Very odd behavior. (The zpool commands I'm using to keep an eye on things in the meantime are below.)
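For now I am watching pool health from the CLI with the stock zpool commands, e.g.:

root@nas01:~# zpool status -x          # prints "all pools are healthy" unless something needs attention
root@nas01:~# zpool list -H -o name,health,capacity zfs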
  6. I am new to ZFS, so apologies if I misuse some terms. I created a ZFS pool of 4 devices via the unRAID GUI. I then expanded it with 4 more devices via the GUI. After that, I added cache using the CLI (from instructions here.) At this point, everything was working great. Here was my zpool at that point:

root@nas01:~# zpool status
  pool: zfs
 state: ONLINE
  scan: scrub repaired 0B in 00:48:59 with 0 errors on Mon Nov 27 01:49:00 2023
config:

        NAME        STATE     READ WRITE CKSUM
        zfs         ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            sdg1    ONLINE       0     0     0
            sdh1    ONLINE       0     0     0
            sde1    ONLINE       0     0     0
            sdf1    ONLINE       0     0     0
          raidz1-1  ONLINE       0     0     0
            sdc1    ONLINE       0     0     0
            sdl1    ONLINE       0     0     0
            sdm1    ONLINE       0     0     0
            sdn1    ONLINE       0     0     0
        cache
          sdj1      ONLINE       0     0     0
          sdi1      ONLINE       0     0     0

errors: No known data errors

I then tried adding 3 new devices to my pool via CLI:

root@nas01:/var/log# zpool add zfs -f mirror sdk1 sdd1 sdb1

After that I tried the import instructions from JorgeB's post, and the pool showed unmountable when I started the array. However, when I bring the pool up via the CLI, I see it is all happy and healthy:

root@nas01:~# zpool status
  pool: zfs
 state: ONLINE
  scan: scrub repaired 0B in 00:48:59 with 0 errors on Mon Nov 27 01:49:00 2023
config:

        NAME        STATE     READ WRITE CKSUM
        zfs         ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            sdg1    ONLINE       0     0     0
            sdh1    ONLINE       0     0     0
            sde1    ONLINE       0     0     0
            sdf1    ONLINE       0     0     0
          raidz1-1  ONLINE       0     0     0
            sdc1    ONLINE       0     0     0
            sdl1    ONLINE       0     0     0
            sdm1    ONLINE       0     0     0
            sdn1    ONLINE       0     0     0
          mirror-2  ONLINE       0     0     0
            sdk1    ONLINE       0     0     0
            sdd1    ONLINE       0     0     0
            sdb1    ONLINE       0     0     0
        cache
          sdj1      ONLINE       0     0     0
          sdi1      ONLINE       0     0     0

(Note the new mirror-2 vdev.) And I see the data/shares:

root@nas01:~# df -h
Filesystem      Size  Used Avail Use% Mounted on
rootfs           32G  244M   32G   1% /
tmpfs            32M  416K   32M   2% /run
/dev/sda1       239G  448M  239G   1% /boot
overlay          32G  244M   32G   1% /lib
overlay          32G  244M   32G   1% /usr
devtmpfs        8.0M     0  8.0M   0% /dev
tmpfs            32G     0   32G   0% /dev/shm
tmpfs           128M  2.6M  126M   2% /var/log
tmpfs           1.0M     0  1.0M   0% /mnt/disks
tmpfs           1.0M     0  1.0M   0% /mnt/remotes
tmpfs           1.0M     0  1.0M   0% /mnt/addons
tmpfs           1.0M     0  1.0M   0% /mnt/rootshare
tmpfs           6.3G     0  6.3G   0% /run/user/0
zfs              17T  256K   17T   1% /mnt/zfs
zfs/system       17T  499M   17T   1% /mnt/zfs/system
zfs/isos         17T  256K   17T   1% /mnt/zfs/isos
zfs/domains      17T  256K   17T   1% /mnt/zfs/domains
zfs/x            25T  8.0T   17T  33% /mnt/zfs/x
zfs/appdata      17T   10M   17T   1% /mnt/zfs/appdata

I have tried to import in the unRAID GUI in two different orders:
1) The original order (with cache), with the 3 new mirror-2 disks below those, in the order shown in zpool status
2) What the new zpool status shows (vdev order: raidz1-0, raidz1-1, mirror-2, cache)

Trying either of these, the GUI shows the disks as unmountable. Due to how ZFS works, I am not able to remove the mirror-2 vdev and revert to the old config. It seems as if my data is safe, and I do not see any unhappiness in syslog, so two questions:
1) How do I get the array started again? What order do I need to list the disks in the GUI? (The commands I've been using to double-check the pool from the CLI between attempts are below.)
2) If I want to add/replace disks in the future, what is the procedure to determine the correct disk order in the unRAID GUI?

Thanks in advance
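In case it matters, this is how I have been double-checking the pool from the CLI between GUI attempts - my own workflow, not from any guide (the by-id listing is just so I can map each sdX name to a serial number before assigning slots in the GUI):

root@nas01:~# zpool export zfs          # release the pool before letting the GUI try to import it
root@nas01:~# zpool import              # lists importable pools and whether all vdevs are found
root@nas01:~# zpool import zfs          # bring it back up manually if the GUI attempt fails
root@nas01:~# ls -l /dev/disk/by-id/    # map sdX names to stable device ids/serials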