
best ZPool setup given my current drive array


TheSkaz


I have spent some time understanding ZFS, zpools, and so on. I have an array of drives and don't want to use the out-of-the-box array that Unraid provides.

 

Right now, I have a 64GB USB 3.1 drive as the only array disk so that Unraid starts up. Currently on 6.10.0-rc1.
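 

For reference, the full disk list is below; I believe it's the output of lsscsi with the size column added:

lsscsi -s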

 

[0:0:0:0]	disk             USB DISK 3.0     PMAP  /dev/sda   62.0GB
[1:0:0:0]	disk     USB      SanDisk 3.2Gen1 1.00  /dev/sdb   30.7GB
[4:0:0:0]	disk    ATA      Patriot Blast    12.2  /dev/sdc    240GB
[4:0:1:0]	disk    ATA      Patriot Blast    12.2  /dev/sdd    240GB
[4:0:2:0]	disk    ATA      WDC  WDBNCE0020P 40RL  /dev/sde   2.00TB
[4:0:3:0]	disk    ATA      WDC  WDBNCE0020P 40RL  /dev/sdf   2.00TB
[4:0:4:0]	disk    ATA      Patriot Blast    12.2  /dev/sdg    240GB
[4:0:5:0]	disk    ATA      Patriot Blast    12.2  /dev/sdh    240GB
[4:0:6:0]	disk    ATA      WDC  WDS200T2B0A 40WD  /dev/sdi   2.00TB
[4:0:7:0]	disk    ATA      Patriot Blast    12.2  /dev/sdj    240GB
[4:0:8:0]	disk    ATA      Patriot Blast    22.3  /dev/sdk    240GB
[4:0:9:0]	disk    ATA      WDC  WDS200T2B0A 40WD  /dev/sdl   2.00TB
[4:0:11:0]	disk    ATA      WDC WD20EFRX-68E 0A82  /dev/sdm   2.00TB
[4:0:12:0]	disk    ATA      ST16000NM001G-2K SN03  /dev/sdn   16.0TB
[4:0:13:0]	disk    ATA      ST16000NM001G-2K SN03  /dev/sdo   16.0TB
[4:0:14:0]	disk    ATA      ST16000NM001G-2K SN03  /dev/sdp   16.0TB
[4:0:15:0]	disk    ATA      ST16000NM001G-2K SN03  /dev/sdq   16.0TB
[4:0:16:0]	disk    ATA      ST16000NM001G-2K SN03  /dev/sdr   16.0TB
[4:0:17:0]	disk    ATA      ST16000NM001G-2K SN03  /dev/sds   16.0TB
[4:0:18:0]	disk    ATA      WDC WD20EFRX-68E 0A82  /dev/sdt   2.00TB
[4:0:19:0]	disk    ATA      Hitachi HDS5C302 AA10  /dev/sdu   2.00TB
[4:0:20:0]	disk    ATA      ST16000NM001G-2K SN03  /dev/sdv   16.0TB
[4:0:21:0]	disk    ATA      ST16000NM001G-2K SN03  /dev/sdw   16.0TB
[4:0:22:0]	disk    ATA      Hitachi HDS5C302 AA10  /dev/sdx   2.00TB
[4:0:23:0]	disk    ATA      WDC WD20EFRX-68E 0A82  /dev/sdy   2.00TB
[4:0:24:0]	disk    ATA      ST16000NM001G-2K SN03  /dev/sdz   16.0TB
[4:0:25:0]	disk    ATA      ST16000NM001G-2K SN03  /dev/sdaa  16.0TB
[N:0:1:1]	disk    Force MP600__1                             /dev/nvme0n1  2.00TB
[N:1:1:1]	disk    Sabrent Rocket Q4__1                       /dev/nvme1n1  2.00TB
[N:2:1:1]	disk    Sabrent Rocket Q4__1                       /dev/nvme2n1  2.00TB
[N:3:1:1]	disk    PCIe SSD__1                                /dev/nvme3n1  1.00TB
[N:4:1:1]	disk    Sabrent Rocket Q4__1                       /dev/nvme4n1  2.00TB

 

Current setup (zpool status):

 

  pool: datastore
 state: ONLINE
  scan: scrub repaired 0B in 00:03:36 with 0 errors on Fri Sep 24 07:00:44 2021
config:

        NAME        STATE     READ WRITE CKSUM
        datastore   ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sdf     ONLINE       0     0     0
            sdi     ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            sde     ONLINE       0     0     0
            sdl     ONLINE       0     0     0

errors: No known data errors

  pool: fast
 state: ONLINE
  scan: scrub repaired 0B in 00:02:44 with 0 errors on Thu Sep 23 22:47:49 2021
config:

        NAME                                           STATE     READ WRITE CKSUM
        fast                                           ONLINE       0     0     0
          nvme0n1                                      ONLINE       0     0     0
          nvme1n1                                      ONLINE       0     0     0
          nvme2n1                                      ONLINE       0     0     0
          nvme-Sabrent_Rocket_Q4_03F10707144404184492  ONLINE       0     0     0

errors: No known data errors

  pool: tank
 state: DEGRADED
status: One or more devices could not be used because the label is missing or
        invalid.  Sufficient replicas exist for the pool to continue
        functioning in a degraded state.
action: Replace the device using 'zpool replace'.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-4J
  scan: scrub in progress since Mon Sep 27 08:42:25 2021
        6.13T scanned at 4.81G/s, 412G issued at 323M/s, 6.13T total
        0B repaired, 6.56% done, 05:10:03 to go
config:

        NAME                                            STATE     READ WRITE CKSUM
        tank                                            DEGRADED     0     0     0
          raidz1-0                                      ONLINE       0     0     0
            ata-ST16000NM001G-2KK103_ZL267CE1           ONLINE       0     0     0
            ata-ST16000NM001G-2KK103_ZL267NDH           ONLINE       0     0     0
            ata-ST16000NM001G-2KK103_ZL2672V8           ONLINE       0     0     0
            ata-ST16000NM001G-2KK103_ZL268CAW           ONLINE       0     0     0
            ata-ST16000NM001G-2KK103_ZL2660YG           ONLINE       0     0     0
            ata-ST16000NM001G-2KK103_ZL266MEX           ONLINE       0     0     0
            ata-ST16000NM001G-2KK103_ZL267LNF           ONLINE       0     0     0
            ata-ST16000NM001G-2KK103_ZL2678RA           ONLINE       0     0     0
            ata-ST16000NM001G-2KK103_ZL266CQD           ONLINE       0     0     0
            ata-ST16000NM001G-2KK103_ZL267S9C           ONLINE       0     0     0
          raidz1-1                                      DEGRADED     0     0     0
            ata-WDC_WD20EFRX-68EUZN0_WD-WCC4M1NJR9X5    ONLINE       0     0     0
            ata-WDC_WD20EFRX-68EUZN0_WD-WCC4M4TRDPV1    ONLINE       0     0     0
            ata-Hitachi_HDS5C3020ALA632_ML4230FA10X4EK  ONLINE       0     0     0
            ata-Hitachi_HDS5C3020ALA632_ML0230FA16RPLD  ONLINE       0     0     0
            ata-WDC_WD20EFRX-68EUZN0_WD-WCC4M6FLJJNK    UNAVAIL     98   499     0
        logs
          nvme3n1                                       ONLINE       0     0     0

errors: No known data errors

  pool: vmstorage
 state: ONLINE
  scan: scrub repaired 0B in 00:04:08 with 0 errors on Thu Sep 23 22:44:03 2021
config:

        NAME        STATE     READ WRITE CKSUM
        vmstorage   ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            sdc     ONLINE       0     0     0
            sdd     ONLINE       0     0     0
            sdg     ONLINE       0     0     0
            sdh     ONLINE       0     0     0
            sdj     ONLINE       0     0     0
            sdk     ONLINE       0     0     0

errors: No known data errors

 

I want to redo tank and vmstorage. I am going to wipe raidz1-1 out of tank (which means I'll destroy and recreate the pool) and throw those drives out; they are really old 2TB WD Reds and two 2TB Hitachis. I am going to buy another ten 16TB drives, and I think I can use the 240GB SSDs as cache/log devices, maybe mirrored for both cache and log, plus two hot spares (roughly as sketched below). I have no use case for vmstorage since I now have the fast pool.
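 

Roughly, the data layout I have in mind for the rebuilt tank is below. The by-id names are placeholders rather than the real serials, and I haven't run any of this yet:

# two 10-wide raidz1 vdevs: the ten existing 16TB Exos drives plus the ten new ones
# (ashift=12 to force 4K sectors)
zpool create -o ashift=12 tank \
  raidz1 /dev/disk/by-id/ata-16TB-{01..10} \
  raidz1 /dev/disk/by-id/ata-16TB-{11..20}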

 

My ask: given the info above, what would be the best config with performance as the #1 priority and some redundancy? These will not be mission-critical files; anything that is will be backed up elsewhere.

 

Two vdevs of raidz1 with 10 disks each, plus the cache/log SSDs set up in mirrors (roughly as sketched below)? Or something else?
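For the six 240GB SSDs, this is what I'm picturing (placeholder device names again; from what I've read, only log devices can be mirrored, so the L2ARC cache devices would have to be added individually):

# mirrored SLOG from two of the 240GB SSDs
zpool add tank log mirror /dev/disk/by-id/ata-PatriotBlast-01 /dev/disk/by-id/ata-PatriotBlast-02

# cache (L2ARC) devices are added individually and striped; "mirror" isn't accepted here
zpool add tank cache /dev/disk/by-id/ata-PatriotBlast-03 /dev/disk/by-id/ata-PatriotBlast-04

# remaining two SSDs as hot spares, though I realize a 240GB spare could only
# ever stand in for another SSD, not for one of the 16TB data disks
zpool add tank spare /dev/disk/by-id/ata-PatriotBlast-05 /dev/disk/by-id/ata-PatriotBlast-06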

I also have a 1TB NVMe drive that wouldn't be utilized.
