
[SOLVED] Added new ZFS cache pool - shares disappeared & dockers no longer initialized


bombz
Solved by JorgeB


Hello,

Currently I have one cache pool (pool 1), which is used for all appdata/dockers.

I have added a new ZFS mirror cache pool, to be used for one specific docker.
I was able to initialize ZFS cache pool 2 and attempted to install a test docker (qbitorrent). Upon installing it, the docker log for the container stated 'could not create /appdata/qbitorrent/.cache/'.


I attempted to restart the docker, and it failed to initialize.

I then went to my shares in Unraid, and all the shares had disappeared.

 

I shut down the server, restored my USB from backup, and rebooted to get back to the state before adding the ZFS cache pool.

 

Nov 20 18:28:40 UnRAID smbd[3965]: [2023/11/20 18:28:40.687420,  0] ../../source3/smbd/smb2_service.c:772(make_connection_snum)
Nov 20 18:28:40 UnRAID smbd[3965]:   make_connection_snum: canonicalize_connect_path failed for service <folder share>, path /mnt/user/...
Nov 20 18:28:40 UnRAID smbd[3965]: [2023/11/20 18:28:40.688541,  0] ../../source3/smbd/smb2_service.c:772(make_connection_snum)
Nov 20 18:28:40 UnRAID smbd[3965]:   make_connection_snum: canonicalize_connect_path failed for service <folder share>, path /mnt/user/...
Nov 20 18:28:40 UnRAID smbd[3965]: [2023/11/20 18:28:40.689571,  0] ../../source3/smbd/smb2_service.c:772(make_connection_snum)
Nov 20 18:28:40 UnRAID smbd[3965]:   make_connection_snum: canonicalize_connect_path failed for service <folder share>, path /mnt/user/...
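
For reference, those smbd lines mean Samba could not resolve a share's on-disk path: when /mnt/user goes away, every share fails canonicalize_connect_path the same way. A rough sketch of that check (my own, not from Samba; the base path is a parameter here so it can run anywhere, but on Unraid it would be /mnt/user):

```shell
# check_shares: list share directories under a base path that fail to resolve,
# roughly the same path check Samba performs per share at connect time.
check_shares() {
    for share in "$1"/*/; do
        [ -e "$share" ] || continue              # glob matched nothing; skip
        realpath "$share" >/dev/null 2>&1 \
            || echo "unresolvable: $share"       # mirrors the smbd failure
    done
}
```

If every user share prints as unresolvable at once, the problem is the user-share mount itself rather than any individual share's settings.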

 

I am curious as to what may have gone wrong.
Can you not use another cache pool for other docker containers?
Does each cache pool require its own docker.img file on the pool?

 

Any assistance would be great.

 

Thanks

 

2 hours ago, JorgeB said:

Hard to say without the diags.

 

Yeah, for sure. Apologies, I didn't grab them.
For the time being, I am going to let the system sit in its present state.


Deployed:

Pool1 = (2X) btrfs - 2.5" SSDs

Not Deployed:

Pool2 = (2X) zfs - NVMe <Currently the (2X) NVMe disks are detected as UD devices since the restore, which makes sense.>

 

I was low on time yesterday and preferred to get things back into an operational state once I saw this issue occur. Moving forward, I will attempt to recreate the operation: re-add the (2X) NVMe drives as a ZFS pool and put a test docker container on them to see if the issue happens again.

At the time of all the errors and the shares disappearing, I wondered whether it was permissions, available PCIe lanes, or even the hardware itself (new expansion card with a new disk).
When I have some more down time to work and test, I will be sure to grab diagnostics and post them if the issue occurs again.

Appreciate the follow-up.

On 11/21/2023 at 4:06 AM, JorgeB said:

Hard to say without the diags.

 


Hello, 
Circling back on this initial concern. I will be attempting to re-initialize the ZFS cache pool with the same steps performed earlier this week.

Hoping that this issue doesn't reoccur. I will post diags if it does.

 

Thanks.

  • 3 weeks later...
4 hours ago, JorgeB said:
Dec 10 15:19:52 UnRAID shfs: shfs: ../lib/fuse.c:1450: unlink_node: Assertion `node->nlookup > 1' failed.

 

It's this issue:

 

Some workarounds are discussed there; mostly, disable NFS if it is not needed, or you can change everything to SMB.

Hello,

Appreciate this info.

As it stands, for this share the NFS Security Settings are set to 'No'.

I also saw you posted this:

Settings > Global Shares Settings -> Tunable (support Hard Links): no

 

I can set this once I am able to stop the array. I don't believe I require NFS sharing; I thought maybe it was required for dockers. Would I be required to disable NFS globally, or just on this specific share?


Under Global Share Settings, I don't see a spot at this time to disable SMB or NFS.
Currently I am seeing only 'Enable Disk Shares' and 'Enable User Shares'.
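
For anyone else landing here: a quick way to see what a hard link actually is (the thing the 'Tunable (support Hard Links)' setting disables over the user-share filesystem). This is a generic Linux sketch using temp files, not anything Unraid-specific:

```shell
# A hard link is a second directory entry for the same inode; creating one
# bumps the file's link count, visible via GNU stat's %h format.
f=$(mktemp)
ln "$f" "$f.link"      # second name for the same file contents
stat -c '%h' "$f"      # prints 2: the file now has two hard links
rm -f "$f" "$f.link"   # clean up the demo files
```

It is operations like that `ln` call, made through the FUSE user-share layer, that the workaround turns off.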
 

22 minutes ago, JorgeB said:

Correct.

Thank you my friend. 
I reviewed all shares and disabled NFS on the one share that had 'Export' set to 'Yes' (now changed to 'No').
Hope that resolves the random issue of disk shares disappearing... a strange one for sure.
I'll be sure to follow up if the issue reoccurs.

 

24 minutes ago, JorgeB said:

Correct.

Another question while I have you: since I just deployed the ZFS pool on NVMe, I was reviewing the 'system' section on the dashboard.

Why does the percentage on the ZFS pool seem high? Is this due to it being accessed (it only has one docker on the pool)?

I looked at the space used vs. space free, and I don't think it is related to that.

 

Any feedback would be great so I can understand what the 'system' percentage is displaying.
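
In case it helps anyone comparing numbers: a capacity percentage is just used / (used + available). Here's a sketch with made-up figures so the arithmetic is visible; on a real pool you would feed in the values from `zfs list -o used,avail` instead:

```shell
# Hypothetical numbers for illustration only (not from this system's pool):
used=120; avail=880    # e.g. GiB used vs. GiB available on the pool
# Percentage full = used / (used + available) * 100
awk -v u="$used" -v a="$avail" 'BEGIN { printf "%.0f%%\n", 100*u/(u+a) }'
# prints 12%
```

If the dashboard figure is much higher than that calculation from your own used/available numbers, the percentage is likely reporting something other than raw space (e.g. utilization/activity), which is worth confirming before worrying about it.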

 

Thanks.

 

zfs.JPG

zfs2.JPG

