Ojun Posted 12 hours ago

Hello Unraid forum,

my Unraid server runs as a media and gaming machine. The goal is to get as much storage, redundancy, and gaming performance as possible.

Current setup: 4TB parity; array devices: 2x 2TB and 1x 4TB; plus a ZFS cache pool with 3x M.2 SSDs.

Currently it bothers me that the 3 array devices each come up as their own ZFS pool, giving 3 separate mount points. And the caching system is annoying in my eyes, with managing the mover and bigger datasets in the cache.

From my thinking, the configuration that best suits my case is ZFS raidz1 with L2ARC caching. So the plan would be: replace the 2x 2TB with another 4TB so I have 3 of them, put those into a raidz1, and use all the SSDs as L2ARC.

I would gain: much higher read speeds on all shares? And lose: write speed on the cache shares, plus the ability to add/remove disks however I want?

Everything here is googled together; I am not an expert at managing/optimizing storage pools. Please let me know if my plans are crap, or let me know what you people are using. 🤔

Regards Ojun
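Edit: to make the plan concrete, this is roughly what I mean in zpool terms. The device names and the pool name "tank" are placeholders, and on Unraid the pool would really be created through the GUI, not by hand:

#!/bin/bash
# Sketch only - sdb/sdc/sdd and the nvme names are placeholders
# for whatever my actual disks are.

# 3x 4TB in raidz1: ~8TB usable, survives one disk failure
zpool create tank raidz1 /dev/sdb /dev/sdc /dev/sdd

# all three M.2 SSDs as L2ARC (read cache). L2ARC holds no unique
# data, so losing a cache device never loses pool data.
zpool add tank cache /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1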
bmartino1 Posted 2 hours ago

There are quite a few setups you could do. Before the ability to remove the array existed, I would normally have suggested running the NVMe drives as XFS on the array and the large-capacity drives as a ZFS raidz1/mirror pool for redundancy and speed. But with the ability to have 0 array devices (since you are on the beta and able to ditch the Unraid array altogether), I would recommend this setup: array disabled, pool devices only, with ZFS:

- 3x M.2 NVMe as a ZFS raidz1, for speed and space
- 2x 2TB as a ZFS mirror
- 1x 4TB as cache

You can then use plugins such as User Scripts and Appdata Backup to copy and move items off for backups automagically, and if a disk in a ZFS pool dies the data isn't lost and is easily recoverable with a replace and resilver.

The NVMe pool should store your VMs and Docker application data. The 2x mirror can hold your bigger library data and main shares. The 1x 4TB cache disk (btrfs?) can be the main backup target for the appdata folder (due to the default templates) and the location for a plugin or user script to rsync data back to, acting as an active "parity" disk between both pools; a sketch of that rsync step follows below.

You just have to remember to go into the Shares tab after coming up with your datasets in ZFS: in Shares, be sure to set the storage location before using them. Example: go to Shares, click the dataset name, and choose your primary storage under it.
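A minimal sketch of that rsync user script, assuming hypothetical pool names "nvme" for the fast pool and "backup" for the 4TB disk (adjust the paths to your real pools and shares):

#!/bin/bash
# Sketch only - /mnt/nvme and /mnt/backup are placeholder pool paths.
SRC="/mnt/nvme/appdata/"
DST="/mnt/backup/appdata/"

# -a preserves permissions and times; --delete keeps the copy an
# exact mirror of the source
rsync -a --delete "${SRC}" "${DST}"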
bmartino1 Posted 2 hours ago

Best practices aside, in the end it comes down to how you want to interact with it and what you want it to do. One should follow the 3-2-1 backup rule (or at least 2-2-1): https://www.techtarget.com/searchdatabackup/definition/3-2-1-Backup-Strategy

Recommended plugins I would have you use and install:

- ZFS plugins
- Snapshots, to see ZFS snapshots and btrfs snapshots
- General Unraid must-haves
- Backup tools and utilities
- Additional add-on systems (Docker Compose and LXC)

I also recommend running some user scripts at first array startup. The main one is for ZFS, creating snapshots for shadow-copy backups within ZFS:

#!/bin/bash
#v0.3 - Updated for new datasets and recursive snapshots
########################simple-snapshot-zfs#######################
###################### User Defined Options ######################
# List of ZFS datasets
DATASETS=("vm-zfs/Backups") #"zfsPoolName/Datasetname"

# Set number of snapshots to keep
SNAPSHOT_QTY=5

###### Don't change below unless you know what you're doing ######
##################################################################
timestamp=$(date "+%Y-%m-%d-%H:%M")
echo "Starting Snapshot ${timestamp}"
echo "_____________________________________________________________"

# Function to create a snapshot only if the dataset has changed data
create_snapshot_if_changed() {
    local DATASET="$1"
    local WRITTEN
    # -p gives exact bytes, so "0" really means no new data
    # (without -p, zfs reports "0B" and the comparison never matches)
    WRITTEN=$(zfs get -Hp -o value written "${DATASET}")

    if [[ "${WRITTEN}" != "0" ]]; then
        local TIMESTAMP
        TIMESTAMP="$(date '+%Y-%m-%d-%H%M')"
        # Use -r for recursive snapshots of all child datasets
        zfs snapshot -r "${DATASET}@${TIMESTAMP}"
        echo "Recursive snapshot created: ${DATASET}@${TIMESTAMP}"
    else
        echo "No changes detected in ${DATASET}. No snapshot created."
    fi
}

# Function to prune old snapshots down to SNAPSHOT_QTY
prune_snapshots() {
    local DATASET="$1"
    local KEEP="${SNAPSHOT_QTY}"
    local SNAPSHOTS=( $(zfs list -t snapshot -o name -s creation -r "${DATASET}" | grep "^${DATASET}@") )
    local SNAPSHOTS_COUNT=${#SNAPSHOTS[@]}
    echo "Total snapshots for ${DATASET}: ${SNAPSHOTS_COUNT}"

    local SNAPSHOTS_SPACE
    SNAPSHOTS_SPACE=$(zfs get -H -o value usedbysnapshots "${DATASET}")
    echo "Space used by snapshots for ${DATASET}: ${SNAPSHOTS_SPACE}"

    if [[ ${SNAPSHOTS_COUNT} -gt ${KEEP} ]]; then
        local TO_DELETE=$((SNAPSHOTS_COUNT - KEEP))
        for i in "${SNAPSHOTS[@]:0:${TO_DELETE}}"; do
            # -r so the recursive snapshots on child datasets go too
            zfs destroy -r "${i}"
            echo "Deleted snapshot: ${i}"
            echo "_____________________________________________________________"
        done
    else
        echo "_____________________________________________________________"
    fi
}

# Iterate over each dataset and call the functions
for dataset in "${DATASETS[@]}"; do
    create_snapshot_if_changed "${dataset}"
    prune_snapshots "${dataset}"
done
echo "----------------------------Done!----------------------------"

Using the snapshot plugin, one can then investigate and inspect. These are my minimum best practices. I will gladly help where I can.
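Edit: for a quick check from the terminal, the plain zfs commands also work (shown here against the vm-zfs/Backups dataset from the script above):

# List that dataset's snapshots, oldest first
zfs list -t snapshot -o name,used,creation -s creation -r vm-zfs/Backups

# Total space held by snapshots on the dataset
zfs get usedbysnapshots vm-zfs/Backups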