Docker - dataset does not exist



Hi,

 

I get the following error:

Error response from daemon: container 9dd69bd732f0f6e4bfd7787f5ef5cc18941253a19c2aee423faa5bcc2d8d480b: driver "zfs" failed to remove root filesystem: exit status 1: "/usr/sbin/zfs fs destroy -r ssd/System/53bcad0d4b8bd8f9ecdfbdab5a3e843f664a98591a6f57b5836007dd8eceb0d7" => cannot open 'ssd/System/53bcad0d4b8bd8f9ecdfbdab5a3e843f664a98591a6f57b5836007dd8eceb0d7': dataset does not exist

 

I'm on Unraid version 6.12.0-rc6 and use the ZFS filesystem.
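
For reference, whether the dataset named in the error still exists can be checked directly (dataset path copied from the error above):

$ zfs list ssd/System/53bcad0d4b8bd8f9ecdfbdab5a3e843f664a98591a6f57b5836007dd8eceb0d7
$ zfs list -r ssd/System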

 

Best regards


And if I want to run docker-compose up, I now get:

failed to register layer: exit status 2: "/usr/sbin/zfs fs snapshot ssd/System/b22b6c868df6ce04c55ffda2784887209f9129d6bad85a33dcb6523094c8fa82@749664689" => cannot open 'ssd/System/b22b6c868df6ce04c55ffda2784887209f9129d6bad85a33dcb6523094c8fa82': dataset does not exist
usage:
        snapshot [-r] [-o property=value] ... <filesystem|volume>@<snap> ...

For the property list, run: zfs set|get

For the delegated permission list, run: zfs allow|unallow
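
For what it's worth, the failing step can be reproduced manually outside Docker (dataset name copied from the log above; @test is an arbitrary snapshot name):

$ zfs snapshot ssd/System/b22b6c868df6ce04c55ffda2784887209f9129d6bad85a33dcb6523094c8fa82@test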

 


And the last time I tried to reboot, it wasn't a clean reboot. Unraid generated diagnostics, which contain multiple entries like this:

 

May 24 17:40:16 HomeServer emhttpd: Unmounting disks...
May 24 17:40:16 HomeServer emhttpd: shcmd (1275): /usr/sbin/zpool export ssd
May 24 17:40:16 HomeServer root: cannot unmount '/var/lib/docker/zfs/graph/53bcad0d4b8bd8f9ecdfbdab5a3e843f664a98591a6f57b5836007dd8eceb0d7-init': unmount failed
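
Checking what is still mounted under Docker's ZFS graph directory should show what is blocking the pool export; lazily unmounting a leftover is untested on my side (graph path copied from the log above):

$ mount | grep /var/lib/docker/zfs/graph
$ umount -l /var/lib/docker/zfs/graph/53bcad0d4b8bd8f9ecdfbdab5a3e843f664a98591a6f57b5836007dd8eceb0d7-init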

 

40 minutes ago, wassereimer said:

Nothing works with Docker at the moment. Is there a way to fix this? Or to completely reset Docker (except the volumes, if possible; I have backups if really needed)?

 

Is there a way to do this? I need the Docker containers up and running.


Is there a reason for putting appdata under the System share rather than letting it be a share in its own right? I think the default settings for most templates are defined by the template authors rather than CA, so going non-standard means those defaults will often not work for you.


The Docker system automatically maps anything that has a container path of /config to whatever the default appdata path is.

 

Your mosquitto template doesn't have a /config, but rather /mosquitto/config and /mosquitto/data, so the system leaves it all alone and uses whatever the template has in there by default, since it has no idea that it is actually a config path.

 

This is all done by the template system. CA only modifies the paths that template maintainers set if they directly reference a disk or pool.

 

E.g., if a path in the template says /mnt/download_pool/downloads but you don't have a pool named "download_pool", then CA will adjust the path to reference a pool that you do have (or a direct disk reference if you don't have any pools).
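
For illustration, a path entry in a template looks roughly like this (app name and paths are made up):

<Config Name="AppData" Target="/config" Default="/mnt/user/appdata/myapp" Mode="rw" Description="Container config storage" Type="Path" Display="always" Required="true"/>

Because the Target is exactly /config, the system remaps the Default to your appdata location; a Target like /mosquitto/config doesn't match, so it is left exactly as the template author wrote it.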

7 hours ago, itimpi said:

Is there a reason for putting appdata under the System share rather than letting it be a share in its own right? I think the default settings for most templates are defined by the template authors rather than CA, so going non-standard means those defaults will often not work for you.

It was just for my inner Monk: the share name was all lowercase and didn't fit the structure I like. 😅 And since there is an option, I used it. 🙂 Yes, I had to change a lot of paths, but that was OK for me. I was just confused that the defaults would not be used.

 

6 hours ago, Squid said:

The Docker system automatically maps anything that has a container path of /config to whatever the default appdata path is.

 

Your mosquitto template doesn't have a /config, but rather /mosquitto/config and /mosquitto/data, so the system leaves it all alone and uses whatever the template has in there by default, since it has no idea that it is actually a config path.

 

This is all done by the template system. CA only modifies the paths that template maintainers set if they directly reference a disk or pool.

 

E.g., if a path in the template says /mnt/download_pool/downloads but you don't have a pool named "download_pool", then CA will adjust the path to reference a pool that you do have (or a direct disk reference if you don't have any pools).

Thank you for the explanation! In only two or three cases my path was used, and I wasn't able to tell why. Now I know the "why". 🙂

 

Also, thank you again for helping me fix my Docker setup. I'm still not a fan of how the Docker containers are added and controlled in the UI, but it works better at the moment. Official, basic Docker Compose support, without the need for CA, would be great and a good compromise, I think.

3 months later...

I am also experiencing this issue on Unraid 6.12.4, with Docker image data in an individual share on a ZFS disk. I cannot remove a container through the CLI or force-update it through the GUI, so the container is stuck.

 

$ zfs version
zfs-2.1.12-1
zfs-kmod-2.1.12-1
$ docker rm -f my-app
Error response from daemon: container 3ed55f07dde27c39b475b232e8a06f248c19fc09f6464fbaf0276b8c81cab4ff: driver "zfs" failed to remove root filesystem: exit status 1: "/usr/sbin/zfs fs destroy -r cache/docker/503e6d29ad94faaa061257e4ab1c13c30cac283b17ad29d4edc2c5f283428888" => cannot open 'cache/docker/503e6d29ad94faaa061257e4ab1c13c30cac283b17ad29d4edc2c5f283428888': dataset does not exist
$ zfs list | grep 503e6d29ad94faaa061257e4ab1c13c30cac283b17ad29d4edc2c5f283428888
cache/docker/503e6d29ad94faaa061257e4ab1c13c30cac283b17ad29d4edc2c5f283428888-init   136K   863G     91.4M  legacy
$ zfs unmount cache/docker/503e6d29ad94faaa061257e4ab1c13c30cac283b17ad29d4edc2c5f283428888
cannot open 'cache/docker/503e6d29ad94faaa061257e4ab1c13c30cac283b17ad29d4edc2c5f283428888': dataset does not exist
$ zfs destroy cache/docker/503e6d29ad94faaa061257e4ab1c13c30cac283b17ad29d4edc2c5f283428888
cannot open 'cache/docker/503e6d29ad94faaa061257e4ab1c13c30cac283b17ad29d4edc2c5f283428888': dataset does not exist
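
Note that the only dataset zfs list actually shows is the one ending in -init, while Docker asks to destroy the name without that suffix. If the container is already gone as far as Docker is concerned, destroying the leftover by its exact listed name might clear it; that is a guess on my part, and it is destructive, so only with backups:

$ zfs destroy -r cache/docker/503e6d29ad94faaa061257e4ab1c13c30cac283b17ad29d4edc2c5f283428888-init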

 

Some relevant GitHub issue discussions:

2015-09-07 moby/moby (not exactly the same error, but relevant; I had the same one previously and nuked all Docker image data to solve it)

2017-02-13 moby/moby

2019-10-24 moby/moby

2020-06-02 moby/moby

2021-12-13 moby/moby (references the above issue)

 

Based on the 2017-02-13 issue, I tried stopping the Docker service, running `rm /var/lib/docker`, and restarting the service; no change.
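
That is, roughly this sequence (rc.docker is the stock Unraid service script; the recursive flag is my assumption, since plain rm refuses to delete a directory):

$ /etc/rc.d/rc.docker stop
$ rm -r /var/lib/docker
$ /etc/rc.d/rc.docker start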

The 2019-10-24 issue says that ZFS 2.2 may introduce a fix.

The 2020-06-02 issue and Unraid user BVD recommend creating a zvol virtual disk with a non-ZFS filesystem inside it.

That may have a minor performance impact (another filesystem abstraction layer), and it also caps Docker's storage the way docker.img does (I switched to directory-backed image data in the first place because I wanted no limit besides bare-metal disk space).
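
A minimal sketch of that workaround, with a hypothetical 100G size and zvol name (cache/docker-vol), formatted as XFS:

$ zfs create -V 100G cache/docker-vol
$ mkfs.xfs /dev/zvol/cache/docker-vol
$ mount /dev/zvol/cache/docker-vol /var/lib/docker

With a non-ZFS filesystem under /var/lib/docker, Docker should then pick the overlay2 storage driver instead of the zfs one, which sidesteps the dataset bookkeeping entirely.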

I hope Unraid upgrades to ZFS 2.2 promptly once it is released.

 

Attached diagnostics.

tower-diagnostics-20230906-2033.zip

