-Daedalus
Posted September 18, 2023

I could use some help here. Sample output:

root@server:~# zfs list
NAME                                                                          USED  AVAIL  REFER  MOUNTPOINT
nvme                                                                          492G  1.28T   483G  /mnt/nvme
nvme/021c0d8e3c51fa2043a923ae8827453a8368e01d44f109e517a0ccb29fa110e3         110M  1.28T   116M  legacy
nvme/033e003d210f53465a73b12264a92ec7620eeaa1f466233d48636eb0041364ac         228K  1.28T   794M  legacy
nvme/03cd86d9fadbaa84de181842cd3398a29e9d7ca5b3af7c00dd3075fb8f24f3e0        65.1M  1.28T  1.03G  legacy
nvme/03dd3c2c139853025a8cf253bb181fa781b184c5da1fa59c2834c9c6cf392bc6         208K  1.28T   694M  legacy
nvme/03e24096d6226b6ada14c0770faef6b17f9957e4df067c78ae7805ca0c7b395d        17.9M  1.28T  32.2M  legacy
nvme/03fd3b4ba6d133e7799eafe3c455dbea737db4693381612cd712b64c7990d38c        16.4M  1.28T   588M  legacy
nvme/0599aa2763c0aa6c5d36710bab682d78186f4b21259ef3e9b74fe941f9ab7570         100K  1.28T  15.1M  legacy
nvme/06ba9a9cca2e8c1ce12c9f0907f4eff9537afaf98d806a0d319c3dcd6e1a6747         276K  1.28T   986M  legacy
nvme/07883f0e01ef4020154f9d9cd77961d6bfbf0a850f460592bb7660ab40f309fc        12.9M  1.28T   128M  legacy
nvme/09840a34ce6780b9bbda5bb473e3379f373d53e8edd00617efca44b9fd7d4731        1.39M  1.28T   812M  legacy
nvme/09840a34ce6780b9bbda5bb473e3379f373d53e8edd00617efca44b9fd7d4731-init    224K  1.28T   811M  legacy
nvme/0a90b9fbdb52e1bbe5aa3d6e22866ad8965b81d8986f7316df7716a9b2d3dc49         176K  1.28T   250M  legacy
nvme/0cd905b9df7f949ce3502e3915b05e517bf38a1c804363605de49a940b0a598f         141M  1.28T  1.21G  legacy
nvme/0cd905b9df7f949ce3502e3915b05e517bf38a1c804363605de49a940b0a598f-init    256K  1.28T  1.08G  legacy
nvme/0d3dea73b93807f65b3ab60e197ad8f8ace3cd25f9ac2607567a319458aa71c5         160K  1.28T   360M  legacy
nvme/0fd6e752e00f3b9e056b8f6bdf8ab3354aff2c37296cda586496fd07ac5d8ea3        14.9M  1.28T   366M  legacy
nvme/10651987709cf54cf33ae0c171631fa00a812f55dd4afb9f8881d343e5004b85         360K  1.28T   360K  legacy
nvme/113ead63506a17cc2bdd4f7a0933d0347cea62a6ef732b94979414ca4c3192d0         116K  1.28T  6.64M  legacy
nvme/14eb062140a9e49382c36a7f9ba7f91ed67072ef2d512a8a5913a4dd4fb10e8c         436M  1.28T   468M  legacy
nvme/15821de8faf7118665d9258d6dd4da7653b10e80e3b848a43375ac3c7656c40b        54.1M  1.28T  1.08G  legacy
nvme/1839c100fafb9797b2499dfe297006ff78ab8c2b99d2bce14852a08b40c9544d         140K  1.28T  96.5M  legacy
nvme/191974c0b556409cb8cf587cfc4e7b45696b708e3b6e010d96e3c016d72c5315        96.4M  1.28T  96.4M  legacy
nvme/19ed814063c99a1a639ca806a75beabfd694e0f5601f52047962028a68e86542         131M  1.28T   250M  legacy
nvme/1a0e9188edae81deda39532926c3f0211b0d6675726063ec7015aa41f37698e8         591M  1.28T   623M  legacy
nvme/1a5b1c75b7ad68b229187bd83b60c5b8632e82eca39b98540b705c08afb8bf79        45.1M  1.28T   258M  legacy
nvme/1b1dba293258c26b8a3b52149c3ec99b20d5c13f1958f307807754703903f2fc        7.14M  1.28T   231M  legacy

That's a small sampling; there are around 200 datasets, according to ZFS Master. The sequence of events was:

1. Move everything to the array via mover.
2. Erase the pool and format it as ZFS.
3. Create directories (not datasets) via the CLI: appdata, docker, domains, system.
4. Recreate the docker folder (it was taking a very long time to move, so I deleted it).
5. Move the data back to the pool from the array.
6. Reinstall the containers from CA.

I had ZFS Master installed in preparation for the move, but my first interaction with it was clicking "Show Datasets" and finding all of these. Can anyone shed any light on what happened here?
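A minimal sketch of the directory-creation step above. `ROOT` is a stand-in for the real mountpoint `/mnt/nvme`; on a mounted ZFS pool, `mkdir` creates plain directories inside the parent dataset, whereas `zfs create nvme/appdata` would create a proper child dataset.

```shell
# Recreate the share folders as plain directories (not ZFS datasets).
# ROOT stands in for /mnt/nvme on the real system; mkdir on a mounted
# ZFS filesystem just makes ordinary directories in the parent dataset.
ROOT="${ROOT:-$(mktemp -d)}"
mkdir -p "$ROOT/appdata" "$ROOT/docker" "$ROOT/domains" "$ROOT/system"
ls "$ROOT"
```
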
JorgeB (Solution)
Posted September 18, 2023

This will happen if you use a docker folder with ZFS: Docker's zfs storage driver creates one legacy-mounted dataset per image layer. You can use a docker image instead.
-Daedalus (Author)
Posted September 18, 2023 (edited)

I had moved away from an image to prevent it filling up, or allocating too much space to it. Image it is, I guess! Is there no workaround to prevent this behaviour? I'd got used to not tracking space for it.

Edited September 18, 2023 by -Daedalus
JorgeB
Posted September 18, 2023

2 minutes ago, -Daedalus said:
"Is there no workaround to prevent this behaviour?"

Not with ZFS and a docker folder, at least not for now.
ra1k_0
Posted January 30

I've just come across this issue myself (sorry to resurrect the thread). I have reverted to an image, but I'm unfortunately still stuck with over 1,600 legacy datasets. Is there a way to clean them up that isn't one-by-one?
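A hedged sketch of a bulk cleanup: the leftover layer datasets follow a recognizable pattern (a 64-hex-character name, sometimes with an `-init` suffix, directly under the pool), so they can be matched and destroyed in one pass. The pool name `nvme` is taken from the listing earlier in the thread; adjust it for your system, stop the Docker service first, and always check the dry-run output before running the destructive form, since `zfs destroy -r` also removes any snapshots and children under each dataset.

```shell
# Legacy layer datasets have 64-hex-char names (some with an "-init"
# suffix) directly under the pool. Pool name "nvme" comes from the
# listing earlier in the thread; change it for your system.
POOL=nvme
PATTERN="^${POOL}/[0-9a-f]{64}(-init)?\$"

# Dry run first -- list exactly what would be destroyed:
#   zfs list -H -o name -r "$POOL" | grep -E "$PATTERN"
#
# Then, with the Docker service stopped, remove them in one pass:
#   zfs list -H -o name -r "$POOL" | grep -E "$PATTERN" | \
#     while read -r ds; do zfs destroy -r "$ds"; done

# The filter itself, demonstrated on names from the listing above
# (only the hash-named dataset matches, not the pool or a share):
printf '%s\n' \
  "$POOL" \
  "$POOL/appdata" \
  "$POOL/021c0d8e3c51fa2043a923ae8827453a8368e01d44f109e517a0ccb29fa110e3" \
  | grep -E "$PATTERN"
```
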