Added new ZFS cache pool - shares disappeared & dockers no longer initialized



Hello,

Currently I have one cache pool (pool 1), which holds all of my appdata/Docker containers.

I have added a new ZFS mirror cache pool, intended for one specific Docker container.
I was able to initialize the ZFS cache pool (pool 2) and attempted to install a test container (qBittorrent). Upon installing it, the container's log stated 'could not create /appdata/qbitorrent/.cache/'.
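In hindsight, the first thing to check was probably whether the new pool was actually mounted and writable before installing the container. Assuming the pool shows up at /mnt/cache2 (just a placeholder for whatever pool 2 ends up being called), something along these lines from the Unraid terminal:

zpool status                          # confirm the ZFS mirror imported cleanly
zfs list                              # confirm the pool dataset is mounted under /mnt/<poolname>
ls -ld /mnt/cache2/appdata            # confirm the appdata folder exists on the new pool
touch /mnt/cache2/appdata/.writetest  # confirm the path is actually writable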


I attempted to restart the container, but it failed to initialize.

I then went to my shares in Unraid and found that all of the shares had disappeared.

 

I shut down the server, restored my USB flash drive from backup, and rebooted. The system is now back in the state it was in before I added the ZFS cache pool.

 

Nov 20 18:28:40 UnRAID smbd[3965]: [2023/11/20 18:28:40.687420,  0] ../../source3/smbd/smb2_service.c:772(make_connection_snum)
Nov 20 18:28:40 UnRAID smbd[3965]:   make_connection_snum: canonicalize_connect_path failed for service <folder share>, path /mnt/user/...
Nov 20 18:28:40 UnRAID smbd[3965]: [2023/11/20 18:28:40.688541,  0] ../../source3/smbd/smb2_service.c:772(make_connection_snum)
Nov 20 18:28:40 UnRAID smbd[3965]:   make_connection_snum: canonicalize_connect_path failed for service <folder share>, path /mnt/user/...
Nov 20 18:28:40 UnRAID smbd[3965]: [2023/11/20 18:28:40.689571,  0] ../../source3/smbd/smb2_service.c:772(make_connection_snum)
Nov 20 18:28:40 UnRAID smbd[3965]:   make_connection_snum: canonicalize_connect_path failed for service <folder share>, path /mnt/user/...
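If this happens again, I'll also check right away whether the user shares were actually being presented at that moment; from what I understand, that canonicalize_connect_path error from smbd means the share's path under /mnt/user couldn't be resolved, i.e. the user-share mount wasn't serving anything at the time. Something like:

ls /mnt/user        # should list every user share; empty output means nothing is being served
df -h /mnt/user     # confirm the user-share (shfs) filesystem is still mounted there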

 

I am curious as to what may have gone wrong.
Is it not possible to use another cache pool for other Docker containers?
Does each cache pool need to have its own docker.img file on the pool?

 

Any assistance would be great.

 

Thanks

 

2 hours ago, JorgeB said:

Hard to say without the diags.

 

Yeah, for sure. Apologies, I didn't grab them.
For the time being, I am going to leave the system in its present state.


Deployed:

Pool1 = (2X) btrfs  - 2.5 SSD

Not Deployed:

Pool2 = (2X) ZFS - NVMe <Since the restore, the (2X) NVMe disks are detected as UD (Unassigned Devices), which makes sense.>

 

I was low on time yesterday and preferred to get things back into an operational state once I saw this issue occur. Moving forward, I will attempt to recreate the operation: re-add the (2X) NVMe drives as a ZFS pool and put a test Docker container on them to see if the issue happens again.

At the time all the errors occurred and the shares disappeared, I was wondering whether it was permissions, available PCIe lanes, or possibly even the hardware itself (new expansion card with new disks).
When I have some more down time to work and test, I will be sure to grab diagnostics and post them if the issue occurs again.
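If it does occur again, I believe the diagnostics can also be grabbed straight from the terminal (assuming the built-in command is available on this release):

diagnostics    # should write a diagnostics zip to the logs folder on the flash drive (/boot/logs)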

Appreciate the follow-up.

On 11/21/2023 at 4:06 AM, JorgeB said:

Hard to say without the diags.

 


Hello, 
Circling back on this initial concern. I will be attempting to re-initialize the ZFS cache pool with the same steps performed earlier this week.

Hoping that this issue doesn't reoccur. I will post diags if it does.

 

Thanks.

