
OS: 7.0beta4 best practice storage/disk/cache configuration?


Recommended Posts

Hello Unraid Forum, 

 

my Unraid server runs as a media and gaming machine. The goal is to get as much storage, redundancy, and gaming performance as possible.

 

Current setup: 4 TB parity, with array devices 2x 2 TB and 1x 4 TB,

+ a ZFS cache pool with 3x M.2 SSDs.

 

Currently it bothers me that each of the 3 array devices is its own ZFS pool, giving me 3 separate mount points. And the caching system is annoying in my eyes, with managing the mover and datasets bigger than the cache.

 

The configuration that best suits my case is, from my thinking, ZFS raidz1 with L2ARC caching. So the plan would be to replace the 2x 2 TB with another 4 TB so I have 3 of them, then put them into a raidz1 and use all the SSDs as L2ARC.

I would gain: much higher read speeds on all shares?

and lose: write speed on cache shares + the ability to add/remove disks however I want?
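For reference, the raidz1 + L2ARC layout described above would look roughly like this from the command line (a sketch only: the pool name `tank` and all device names are placeholders, and on Unraid you would normally build pools through the GUI rather than by hand):

```shell
# Sketch: 3x 4TB data disks in a single raidz1 vdev (one disk of redundancy).
# Pool and device names are hypothetical, not from the original post.
zpool create tank raidz1 /dev/sdb /dev/sdc /dev/sdd

# Add the three M.2 SSDs as L2ARC (read cache) devices.
# Note: L2ARC is cache only; it adds no redundancy and does not buffer writes.
zpool add tank cache /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1

zpool status tank
```

This also illustrates the trade-off asked about above: frequently read data gets served from the SSDs, but writes go straight to the spinning raidz1 vdev.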

 

 


Everything here is googled together. I am not an expert at managing/optimizing storage pools. Please let me know if my plan is crap, or maybe let me know what you people are using. 🤔

 

 

Regards

Ojun

 

 

 

 

Edited by Ojun

There are quite a few setups you could do. I would normally suggest running the NVMe drives as XFS on the array and the high-capacity drives as ZFS raidz1 / mirror for redundancy and speed. Before it was possible to have 0 array devices, I would have said to put your faster devices in the array as an XFS setup.

Since you are on the beta and are able to ditch the Unraid array altogether, I would recommend this setup:

Array Disabled. Go pool Devices Only with ZFS...

So set up your drives as:
3x M.2 as a ZFS raidz1, for speed and space
2x 2 TB as a ZFS mirror
1x 4 TB as cache

You can then use plugins such as User Scripts and Appdata Backup to copy and move items off for backups automagically, and if a disk in the ZFS pool dies the data isn't lost and is easily recoverable with a replace and resilver.
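The replace-and-resilver step mentioned above is roughly this on the command line (pool and device names are placeholders for illustration):

```shell
# Sketch: a disk in the raidz1 pool has died; swap in the replacement.
# Arguments are: pool, old device, new device (all names hypothetical).
zpool replace tank /dev/sdc /dev/sde

# Watch the resilver progress until it reports completion.
zpool status tank
```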

The NVMe pool should store your VMs and Docker application data.
The 2x mirror can be your bigger library data and main shares.
The 1x 4 TB as the cache disk (btrfs?) can be the main backup target for the appdata folder (due to default templates) and the data location for a plugin or user script to rsync data back to, acting as an active parity disk between both pools.
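That rsync "active parity" idea could be sketched as a User Scripts job like this (all paths here are assumptions for illustration; adjust them to your actual pool mount points):

```shell
#!/bin/bash
# Sketch: mirror appdata from the NVMe pool to the 4TB backup disk.
# Both paths below are hypothetical examples, not from the original post.
SRC="/mnt/nvme/appdata/"
DST="/mnt/backup4tb/appdata/"

# -a preserves permissions/timestamps; --delete keeps the destination
# an exact mirror of the source (removed files are removed there too).
rsync -a --delete "$SRC" "$DST"
```

Scheduled daily via the User Scripts plugin, this keeps a second copy of appdata on the other pool, which is what makes the single 4 TB disk useful as a backup target.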

You just have to remember to go into the share settings after coming up with your datasets in ZFS:
in Shares, be sure to set the storage location before using them.

Example: go to Shares, click the dataset name, and under Primary Storage choose your pool.

 


Best practices aside, in the end it comes down to how you want to interact with it and what you want it to do.

One should follow the 3-2-1 backup rule (or 2-2-1):
https://www.techtarget.com/searchdatabackup/definition/3-2-1-Backup-Strategy

Recommended plugins I would have you install:

ZFS plugins.


A snapshots plugin, to see ZFS snapshots and btrfs snapshots.

 

General Unraid must-haves.

Backup tools and utilities.

Additional add-on systems (Docker Compose and LXC).

I also recommend running a user script at first array startup.

The main one is for ZFS, creating snapshots for shadow-copy backups within ZFS:

#!/bin/bash

#v0.3 - Updated for new datasets and recursive snapshots
########################simple-snapshot-zfs#######################
###################### User Defined Options ######################

# List of ZFS datasets
DATASETS=("vm-zfs/Backups")
#"zfsPoolName/Datasetname"

# Set Number of Snapshots to Keep
SNAPSHOT_QTY=5

###### Don't change below unless you know what you're doing ######
##################################################################

timestamp=$(date "+%Y-%m-%d-%H:%M")
echo "Starting Snapshot ${timestamp}"
echo "_____________________________________________________________"

# Function to create snapshot if there is changed data
create_snapshot_if_changed() {
  local DATASET="$1"
  local WRITTEN
  WRITTEN=$(zfs get -H -o value written "${DATASET}")

  if [[ "${WRITTEN}" != "0" ]]; then
    local TIMESTAMP
    TIMESTAMP="$(date '+%Y-%m-%d-%H%M')"
    # Use -r for recursive snapshots
    zfs snapshot -r "${DATASET}@${TIMESTAMP}"
    echo "Recursive snapshot created: ${DATASET}@${TIMESTAMP}"
  else
    echo "No changes detected in ${DATASET}. No snapshot created."
  fi
}

# Function to prune snapshots
prune_snapshots() {
  local DATASET="$1"
  local KEEP="${SNAPSHOT_QTY}"
  
  local SNAPSHOTS=( $(zfs list -t snapshot -o name -s creation -r "${DATASET}" | grep "^${DATASET}@") )
  local SNAPSHOTS_COUNT=${#SNAPSHOTS[@]}

  echo "Total snapshots for ${DATASET}: ${SNAPSHOTS_COUNT}"

  local SNAPSHOTS_SPACE
  SNAPSHOTS_SPACE=$(zfs get -H -o value usedbysnapshots "${DATASET}")
  echo "Space used by snapshots for ${DATASET}: ${SNAPSHOTS_SPACE}"

  if [[ ${SNAPSHOTS_COUNT} -gt ${KEEP} ]]; then
    local TO_DELETE=$((SNAPSHOTS_COUNT - KEEP))
    for i in "${SNAPSHOTS[@]:0:${TO_DELETE}}"; do
      zfs destroy "${i}"
      echo "Deleted snapshot: ${i}"
      echo "_____________________________________________________________"
    done
  else
   echo "_____________________________________________________________"
  fi
}

# Iterate over each dataset and call the functions
for dataset in "${DATASETS[@]}"; do
  create_snapshot_if_changed "${dataset}"
  prune_snapshots "${dataset}"
done

echo "----------------------------Done!----------------------------"

Using the snapshots plugin, you can then investigate and inspect the snapshots it has taken.
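If you prefer the command line over the plugin, the same inspection can be done with stock ZFS commands (using the `vm-zfs/Backups` dataset from the script above as the example):

```shell
# List snapshots of the dataset the script manages, oldest first
zfs list -t snapshot -o name,used,creation -s creation -r vm-zfs/Backups

# Show how much space all snapshots of the dataset consume in total
zfs get usedbysnapshots vm-zfs/Backups
```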

These are my minimum best practices. I will gladly help where I can.

Edited by bmartino1