Unpack5920 Posted June 9, 2023

In this YouTube video (@27:00) by @SpaceInvaderOne I saw one ZFS drive within the Unraid array. This made me think …

  Unraid Array                           Pool
+--------------------------------------+ +------------+
|                                      | |            |
| +--------+ +--------+ +---------+    | | +--------+ |
| |  HDD   | |  HDD   | |  HDD    |    | | |  NVME  | |
| | 14 TB  | | 14 TB  | |  5 TB   |    | | |  2 TB  | |
| | parity | |  data  | |  data   |    | | |  data  | |
| |        | |        | | mirror  |<---+-+-|ZFS send| |
| |        | |  XFS   | |  ZFS    |    | | |  ZFS   | |
| |        | |        | |  pool2  |    | | | pool1  | |
| +--------+ +--------+ +---------+    | | +--------+ |
|                                      | |            |
+--------------------------------------+ +------------+

Goals
- an energy- and cost-efficient way to have parity for my cache pool
- use of ZFS data compression
- fast and easy delta data mirroring (zfs send)
- use of ZFS snapshots in the pool
- an additional parity within the Unraid array for free
- an easily expandable cache pool
- better performance?
One-Time-Setup

Limit ZFS memory usage to 16 GB (set in the go file):
> sysctl -w vfs.zfs.arc_max=17179869184

Create a ZFS pool with the single 2 TB NVMe as cache:
> zpool create zfs-cache-pool <disk-name>

Create a ZFS pool with the single 5 TB HDD within the Unraid array:
> zpool create zfs-array-pool <disk-name>

Create a dataset "docker" within the zfs-array-pool:
> zfs create zfs-array-pool/docker

Create a dataset "docker" within the zfs-cache-pool:
> zfs create zfs-cache-pool/docker

Enable dataset compression:
> zfs set compression=lz4 zfs-cache-pool/docker

Enable trim for the SSD:
> zpool set autotrim=on zfs-cache-pool

Cron

Each hour, create a snapshot of the docker dataset:
> zfs snapshot zfs-cache-pool/docker@<snapshot-name>

In the evening, copy the changes of zfs-cache-pool/docker to the zfs-array-pool:
> zfs send zfs-cache-pool/docker@<snapshot-name> | zfs receive zfs-array-pool/docker

Check the data transfer:
> zfs list -r zfs-array-pool/docker

My questions:
- Does this work and make sense?
- Am I missing something (major downsides)?
- Does this work with Unraid 6.12?
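The hourly-snapshot / nightly-send scheme above could be scripted roughly as follows. This is only a sketch: the snapshot naming, the `LAST` variable, and the `-i` flag are my additions (a plain `zfs send` of a snapshot re-sends the full stream, so the "delta mirroring" goal needs an incremental send relative to a snapshot that already exists on both sides). `ZFS="echo"` turns it into a dry run.

```shell
#!/bin/sh
# Sketch of the hourly snapshot + nightly incremental send from the post.
# Pool/dataset names follow the setup above; set ZFS="echo" for a dry run.
ZFS="${ZFS:-zfs}"
SRC="zfs-cache-pool/docker"
DST="zfs-array-pool/docker"

# hourly cron job: take a timestamped snapshot of the cache dataset
NOW="auto-$(date +%Y-%m-%d_%H%M)"
$ZFS snapshot "$SRC@$NOW"

# nightly cron job: send only the delta since the last replicated snapshot.
# LAST is a hypothetical placeholder; a real script would look it up, e.g.
# from the newest snapshot present on the target dataset.
LAST="auto-2023-06-08_2300"
$ZFS send -i "$SRC@$LAST" "$SRC@$NOW" | $ZFS receive "$DST"
```

Note that the very first replication still has to be a full `zfs send "$SRC@$NOW" | zfs receive "$DST"`; only subsequent runs can use `-i`.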
JorgeB Posted June 9, 2023

Not really following everything you wrote, but it's perfectly fine to use a ZFS array disk as a send/receive target (or source). All pools/datasets should be created with the GUI, not the CLI; you also have options there for trim and compression. It's also not clear whether you plan to use the docker dataset for the image/folder or for appdata. Usually you only back up appdata, since the docker image/folder can easily be recreated, though some things, like custom docker networks, would also need to be re-created.
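To illustrate the "custom docker networks" caveat: since networks live outside appdata, a restore script would have to re-create them explicitly. The network name, driver, and subnet below are made-up examples, not anything from this thread; `DOCKER="echo"` makes it a dry run.

```shell
#!/bin/sh
# Hypothetical example: custom docker networks are not part of appdata, so
# after restoring the docker image/folder they must be re-created by hand
# or by a script like this. Set DOCKER="echo" for a dry run.
DOCKER="${DOCKER:-docker}"
$DOCKER network create --driver bridge --subnet 172.20.0.0/16 br-custom
```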
Unpack5920 Posted June 9, 2023 (Author)

Thank you for your reply, sounds good. Regarding GUI/CLI for the ZFS setup, I'll use the GUI as far as possible.

Regarding docker: I have my own file structure; I don't use the Unraid docker capabilities (appdata folder, …).

root@jupiter:/mnt/user/docker/filebrowser# ll
total 12
drwxrwxrwx 1 nobody users   86 Jan  1 22:11 ./
drwxrwxrwx 1 nobody users 4096 Jun  9 00:27 ../
drwxrwxrwx 1 nobody users   97 Jan  1 22:11 config/
drwxrwxrwx 1 nobody users  109 Jan  1 22:11 database/
-rw-rw-rw- 1 nobody users 1467 Mar  5 00:25 docker-compose.yml

Any advice on whether to wait for 6.12 or to start now with ZFS Master and other plugins?
JorgeB Posted June 9, 2023

You can already use 6.12-rc7; the latest release candidates should be pretty solid.