bombz Posted November 20, 2023 (edited)

Hello,

Currently I have (1X) cache pool 1, which is used for all appdata/dockers. I added a new ZFS mirror cache pool, to be used for one specific docker. I was able to initialize ZFS cache pool 2 and attempted to install a test docker (qBittorrent). Upon installing it, the docker log for this container stated 'could not create /appdata/qbitorrent/.cache/'. I attempted to restart the docker and it failed to initialize. I then went to my shares in Unraid and all the shares had disappeared. I shut down the server, restored my USB from backup, and rebooted to get back to the state before adding the ZFS cache pool. I am now back at that restored state.

Nov 20 18:28:40 UnRAID smbd[3965]: [2023/11/20 18:28:40.687420, 0] ../../source3/smbd/smb2_service.c:772(make_connection_snum)
Nov 20 18:28:40 UnRAID smbd[3965]: make_connection_snum: canonicalize_connect_path failed for service <folder share>, path /mnt/user/...
Nov 20 18:28:40 UnRAID smbd[3965]: [2023/11/20 18:28:40.688541, 0] ../../source3/smbd/smb2_service.c:772(make_connection_snum)
Nov 20 18:28:40 UnRAID smbd[3965]: make_connection_snum: canonicalize_connect_path failed for service <folder share>, path /mnt/user/...
Nov 20 18:28:40 UnRAID smbd[3965]: [2023/11/20 18:28:40.689571, 0] ../../source3/smbd/smb2_service.c:772(make_connection_snum)
Nov 20 18:28:40 UnRAID smbd[3965]: make_connection_snum: canonicalize_connect_path failed for service <folder share>, path /mnt/user/...

I am curious as to what may have gone wrong. Can you not use another cache pool for other docker containers? Does each cache pool require its own docker.img file on the pool?

Any assistance would be great. Thanks.

Edited December 12, 2023 by bombz
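(Not part of the original post — a minimal, generic sanity check, not Unraid-specific. The path below is illustrative, standing in for the container's appdata directory on the new pool; substitute your real mount point, e.g. something under /mnt/<poolname>/. The idea is to confirm the directory the container complained about can actually be created and written before starting the docker:)

```shell
# Illustrative path; on Unraid this would be something like
# /mnt/<poolname>/appdata/qbittorrent instead of /tmp.
APPDATA="/tmp/demo_appdata/qbittorrent"

# Pre-create the directory the container log said it could not create.
mkdir -p "$APPDATA/.cache"

# Confirm the directory exists and is writable by the current user.
if [ -d "$APPDATA/.cache" ] && [ -w "$APPDATA/.cache" ]; then
    echo "appdata cache dir is writable"
else
    echo "appdata cache dir is NOT writable - check mount and permissions"
fi
```

If this fails on the real pool path, that points at a mount or permissions problem on the pool rather than at the container itself.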
JorgeB Posted November 21, 2023

9 hours ago, bombz said:
I am curious to what may have went wrong?

Hard to say without the diags.

9 hours ago, bombz said:
Does each cache pool require to have it's on docker.img file on the pool?

No.
bombz Posted November 21, 2023 (Author)

2 hours ago, JorgeB said:
Hard to say without the diags.

Yeah, for sure. Apologies, I didn't grab them. For the time being, I am going to let the system sit in its present state:

Deployed: Pool 1 = (2X) BTRFS - 2.5" SSD
Not deployed: Pool 2 = (2X) ZFS - NVMe (currently the (2X) NVMe disks are detected as UD devices since the restore, which makes sense)

I was low on time yesterday and preferred to get things back into an operational state once I saw this concern occur. Moving forward I will attempt to recreate the operation: re-add the (2X) NVMe as a ZFS pool and put a test docker container on them to see if the concern happens again. At the time of all the errors and all the shares disappearing, I was thinking it could be permissions, available PCIe lanes, or even possibly the hardware itself (new expansion card with new disks). When I have some more down time to work and test, I will be sure to grab diags and post them if the concern occurs again.

Appreciate the follow-up.
bombz Posted November 24, 2023 (Author)

On 11/21/2023 at 4:06 AM, JorgeB said:
Hard to say without the diags.

Hello,

Circling back on this initial concern. I will be attempting to re-initialize the ZFS cache pool with the same steps performed earlier this week, hoping the issue doesn't reoccur. I will post diags if it does. Thanks.
bombz Posted December 10, 2023 (Author, edited)

On 11/21/2023 at 4:06 AM, JorgeB said:
Hard to say without the diags.

Hello,

I shut down the Plex docker today to restart it, and the following error came back when attempting the restart; all my Unraid shares disappeared. I pulled diags this time. Not sure why this happens.

unraid-diagnostics-20231210-1529.zip

Edited December 10, 2023 by bombz
JorgeB Posted December 11, 2023

Dec 10 15:19:52 UnRAID shfs: shfs: ../lib/fuse.c:1450: unlink_node: Assertion `node->nlookup > 1' failed.

It's this issue: https://forums.unraid.net/bug-reports/stable-releases/683-shfs-error-results-in-lost-mntuser-r939/

Some workarounds are discussed there; mostly, disable NFS if it is not needed, or change everything over to SMB.
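(Side note, not from JorgeB's post: Unraid keeps per-share settings in plain-text .cfg files on the flash drive under /boot/config/shares/. A quick way to list which shares still have NFS export enabled is to grep those files; the key name `shareExportNFS` and its "e"/"-" values are assumptions based on a typical config, so verify against your own files. The sketch below builds a throwaway sample directory so it is self-contained:)

```shell
# Build a hypothetical sample of Unraid share .cfg files.
# On a real server you would point cfg_dir at /boot/config/shares instead.
cfg_dir="$(mktemp -d)"
cat > "$cfg_dir/media.cfg" <<'EOF'
shareExportNFS="e"
EOF
cat > "$cfg_dir/appdata.cfg" <<'EOF'
shareExportNFS="-"
EOF

# List the shares whose NFS export is enabled
# (assumed convention: "e" = exported, "-" = disabled).
nfs_shares=$(grep -l 'shareExportNFS="e"' "$cfg_dir"/*.cfg)
echo "Shares with NFS enabled:"
echo "$nfs_shares"
```

Anything this prints (after pointing it at the real directory) is a share you would still need to set to Export: No in the GUI.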
bombz Posted December 11, 2023 (Author)

4 hours ago, JorgeB said:
Some workarounds discussed there, mostly disable NFS if not needed or you can change everything to SMB.

Hello,

Appreciate this info. As it stands, this share's NFS Security Settings are already set to No. I also saw you posted this: Settings > Global Share Settings > Tunable (support Hard Links): No. I can change that setting once I can stop the array.

I don't believe I require NFS sharing; I thought maybe it was required for dockers. Would I be required to disable NFS globally, or just on this specific share? Under Global Share Settings I don't see a spot to disable SMB or NFS; currently I only see 'Enable Disk Shares' and 'Enable User Shares'.
Solution — JorgeB Posted December 11, 2023

2 minutes ago, bombz said:
disable NFS globally, or just on this specific share?

If not needed, disable it for all shares; it is a share-by-share setting.
bombz Posted December 11, 2023 (Author)

3 minutes ago, JorgeB said:
If not needed disable it for all shares, it a share by share setting.

Ok, appreciate the prompt follow-up. For sanity, to disable NFS on a share I would select: Shares > Share Name > NFS Security Settings > Export > No. That should disable it?
bombz Posted December 11, 2023 (Author, edited)

22 minutes ago, JorgeB said:
Correct.

Thank you, my friend. I reviewed all shares and disabled NFS on the (1X) share that had 'Export' set to 'Yes' (now changed to 'No'). Hopefully that resolves the random concern of the disk shares disappearing... a strange one for sure. I'll be sure to follow up if the concern reoccurs.

Edited December 11, 2023 by bombz
bombz Posted December 11, 2023 (Author)

24 minutes ago, JorgeB said:
Correct.

Another question while I have you: since I just deployed the ZFS pool on NVMe, I was reviewing the 'System' section of the dashboard. Why does the percentage shown for the ZFS pool seem so high? Is it due to the pool being accessed (it only has one docker on it)? I compared space used vs. space free, and I don't think it is related to that. Any feedback would be great so I understand what that 'System' percentage is displaying. Thanks.
JorgeB Posted December 11, 2023

This is normal, and the higher the better: it means the ARC is being fully utilized. It's normal for it to always be around 95-100%.
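(Added note, not from JorgeB's post: if you want a number that actually reflects how well the ARC is working, rather than how full it is, you can compute the hit ratio from the `hits` and `misses` counters that ZFS exposes in /proc/spl/kstat/zfs/arcstats. The sketch below uses invented sample counters in that file's "name type data" format so it runs anywhere:)

```shell
# Sample counters in the format of /proc/spl/kstat/zfs/arcstats
# (values invented for illustration; on a real system you would read
# the file itself, e.g. stats=$(cat /proc/spl/kstat/zfs/arcstats)).
stats='hits 4 950000
misses 4 50000'

# Third column holds the counter value.
hits=$(echo "$stats" | awk '$1 == "hits"   { print $3 }')
misses=$(echo "$stats" | awk '$1 == "misses" { print $3 }')

# Integer percentage of ARC lookups served from cache.
ratio=$(( 100 * hits / (hits + misses) ))
echo "ARC hit ratio: ${ratio}%"
```

A high hit ratio alongside a near-full ARC is exactly the behavior JorgeB describes: the cache is full because it is doing its job.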
bombz Posted December 11, 2023 (Author)

6 hours ago, JorgeB said:
This is normal, and the higher the better, it means the ARC is being fully utilized, it's normal to always be around 95-100%

Hello,

You rock, man. Thanks for all the feedback; really appreciate your time clarifying these concerns!