ZFS plugin for unRAID


steini84

Recommended Posts

Hi everyone,

 

I'd like to get your point of view on the ashift parameter for SSDs.

 

I'm aware of the usual claim that SSDs "lie" by reporting a 512-byte sector size instead of 4K or 8K, so ashift 12 or 13 is preferred in order to avoid write amplification (ashift is the base-2 logarithm of the sector size: ashift=9 is 512 B, 12 is 4 KiB, 13 is 8 KiB).

 

But for my DC500R SSDs, neither the datasheet nor the drives' own reporting mentions a 4K sector size, so I ran these benchmarks when I created my array with the different ashift values.
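
For anyone wanting to double-check, the sector sizes the drives actually report, and the ashift a pool is created with, can be verified from the command line. A minimal sketch; the device names and pool layout are only examples:

lsblk -o NAME,LOG-SEC,PHY-SEC                                # logical/physical sector sizes reported per drive
smartctl -i /dev/sdb | grep -i sector                        # same information from the drive's identify data
zpool create -o ashift=12 fastraid mirror /dev/sdb /dev/sdc  # force 4K sectors explicitly at pool creation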

 

Ashift 9 is, in any case, the winner in write speed, though ashift 12 shows some interesting results.

 

I'd simply like to know whether benchmarking is a sound way to determine the ashift, or whether another method is preferred.

 

Thank you.

 

ASHIFT 9
root@server:/mnt/fastraid# dd if=/dev/zero of=ashift9.bin bs=1M count=100000
100000+0 records in
100000+0 records out
104857600000 bytes (105 GB, 98 GiB) copied, 91.531 s, 1.1 GB/s
root@server:/mnt/fastraid# dd if=ashift9.bin of=/dev/null bs=1M
100000+0 records in
100000+0 records out
104857600000 bytes (105 GB, 98 GiB) copied, 64.7996 s, 1.6 GB/s
root@server:/mnt/fastraid# df -h
Filesystem      Size  Used Avail Use% Mounted on
fastraid        5.1T   98G  5.1T   2% /mnt/fastraid
root@server:/mnt/fastraid# zfs list
NAME       USED  AVAIL     REFER  MOUNTPOINT
fastraid  97.6G  5.00T     97.6G  /mnt/fastraid

ASHIFT 12
root@server:/mnt/fastraid# dd if=/dev/zero of=ashift12.bin bs=1M count=100000
100000+0 records in
100000+0 records out
104857600000 bytes (105 GB, 98 GiB) copied, 88.9758 s, 1.2 GB/s
root@server:/mnt/fastraid# dd if=ashift12.bin of=/dev/null bs=1M
100000+0 records in
100000+0 records out
104857600000 bytes (105 GB, 98 GiB) copied, 89.2021 s, 1.2 GB/s
root@server:/mnt/fastraid# df -h
Filesystem      Size  Used Avail Use% Mounted on
fastraid        5.0T   98G  4.9T   2% /mnt/fastraid
root@server:/mnt/fastraid# zfs list
NAME       USED  AVAIL     REFER  MOUNTPOINT
fastraid  97.6G  4.85T     97.6G  /mnt/fastraid

ASHIFT 13
root@server:/mnt/fastraid# dd if=/dev/zero of=ashift13.bin bs=1M count=100000
100000+0 records in
100000+0 records out
104857600000 bytes (105 GB, 98 GiB) copied, 92.9889 s, 1.1 GB/s
root@server:/mnt/fastraid# dd if=ashift13.bin of=/dev/null bs=1M
100000+0 records in
100000+0 records out
104857600000 bytes (105 GB, 98 GiB) copied, 74.625 s, 1.4 GB/s
root@server:/mnt/fastraid# df -h
Filesystem      Size  Used Avail Use% Mounted on
fastraid        5.0T   98G  4.9T   2% /mnt/fastraid
root@server:/mnt/fastraid# zfs list
NAME       USED  AVAIL     REFER  MOUNTPOINT
fastraid  97.6G  4.85T     97.6G  /mnt/fastraid
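
For reference, the ashift a pool actually ended up with can also be read back from the pool itself rather than inferred from benchmarks; depending on the OpenZFS version, one of these should work (pool name as above):

zdb -C fastraid | grep ashift    # dump the cached pool configuration and show the vdev ashift
zpool get ashift fastraid        # recent OpenZFS releases also expose ashift as a pool property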

 

Here are the results with the Crucial P5, which also reports a 512-byte sector size; the performance drop is huge.

 

ASHIFT 9

root@server:/mnt/nvmeraid# dd if=/dev/zero of=ashift9.bin bs=1M count=100000
100000+0 records in
100000+0 records out
104857600000 bytes (105 GB, 98 GiB) copied, 21.2881 s, 4.9 GB/s
root@server:/mnt/nvmeraid# dd if=ashift9.bin of=/dev/null bs=1M
100000+0 records in
100000+0 records out
104857600000 bytes (105 GB, 98 GiB) copied, 9.82965 s, 10.7 GB/s
root@server:/mnt/nvmeraid# df -h
Filesystem                                                                        Size  Used Avail Use% Mounted on
nvmeraid                                                                          1.7T  128K  1.7T   1% /mnt/nvmeraid
root@server:/mnt/nvmeraid# zfs list
NAME                                                                                    USED  AVAIL     REFER  MOUNTPOINT
nvmeraid                                                                               1.03M  1.67T       24K  /mnt/nvmeraid

ASHIFT 12

root@server:/mnt/nvmeraid# dd if=/dev/zero of=ashift12.bin bs=1M count=100000
100000+0 records in
100000+0 records out
104857600000 bytes (105 GB, 98 GiB) copied, 83.2163 s, 1.3 GB/s
root@server:/mnt/nvmeraid# dd if=ashift12.bin of=/dev/null bs=1M
100000+0 records in
100000+0 records out
104857600000 bytes (105 GB, 98 GiB) copied, 51.633 s, 2.0 GB/s
root@server:/mnt/nvmeraid# df -h
Filesystem                                                                        Size  Used Avail Use% Mounted on
nvmeraid                                                                          1.7T   98G  1.6T   6% /mnt/nvmeraid
root@server:/mnt/nvmeraid# zfs list
NAME                                                                                    USED  AVAIL     REFER  MOUNTPOINT
nvmeraid                                                                               97.7G  1.57T     97.7G  /mnt/nvmeraid

 

Edited by gyto6
Link to comment

I did some benchmarking with fio. I can recommend the ssd-test.fio example job file, which can be customized with block size and queue depth.
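
The results below look consistent with fio's stock ssd-test.fio example, with only the block size changed between runs; roughly something like this (the directory is an example, and bs was set to 512, 4k or 8k depending on the run):

# ssd-test.fio-style job file; bs and iodepth are the knobs being compared
[global]
bs=4k                      # 512 / 4k / 8k depending on the run
ioengine=libaio
iodepth=4
size=10g
runtime=60
directory=/mnt/fastraid    # mountpoint of the pool under test
filename=ssd.test.file

[seq-read]
rw=read
stonewall

[rand-read]
rw=randread
stonewall

[seq-write]
rw=write
stonewall

[rand-write]
rw=randwrite
stonewall

Run it with "fio ssd-test.fio"; the stonewall lines make each job run in its own group, which is why the output below shows four separate run status groups.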

 

It appears that my SSDs really do behave like 512-byte sector devices if the goal is the highest IOPS numbers. But in terms of bandwidth, the 4K block size looks the most relevant, and CPU usage is divided by three. Finally, the 8K block size is twice as fast in random reads and writes as ashift 12, and better in sequential write speed too.

 

So, since it ultimately depends on my usage as well, the 512-byte sector and block size isn't relevant for me. Ashift 12 matches most of my usage, and I may test ashift 13 later.

 

For anyone interested in a concrete example of block size effects, you can read this article; otherwise, see the benchmarks below.

 

Block Size 512 / Ashift 9

Spoiler

seq-read: (g=0): rw=read, bs=(R) 512B-512B, (W) 512B-512B, (T) 512B-512B, ioengine=libaio, iodepth=4
rand-read: (g=1): rw=randread, bs=(R) 512B-512B, (W) 512B-512B, (T) 512B-512B, ioengine=libaio, iodepth=4
seq-write: (g=2): rw=write, bs=(R) 512B-512B, (W) 512B-512B, (T) 512B-512B, ioengine=libaio, iodepth=4
rand-write: (g=3): rw=randwrite, bs=(R) 512B-512B, (W) 512B-512B, (T) 512B-512B, ioengine=libaio, iodepth=4
fio-3.23
Starting 4 processes
seq-read: Laying out IO file (1 file / 10240MiB)
Jobs: 1 (f=1): [_(3),w(1)][57.7%][w=1378KiB/s][w=2757 IOPS][eta 02m:50s]                      
seq-read: (groupid=0, jobs=1): err= 0: pid=2415: Mon Mar  7 23:42:01 2022
  read: IOPS=414k, BW=202MiB/s (212MB/s)(10.0GiB/50642msec)
    slat (nsec): min=1498, max=10958k, avg=1746.42, stdev=4186.01
    clat (nsec): min=1202, max=10978k, avg=7695.11, stdev=7443.96
     lat (usec): min=2, max=10979, avg= 9.49, stdev= 8.58
    clat percentiles (nsec):
     |  1.00th=[ 6880],  5.00th=[ 6880], 10.00th=[ 6880], 20.00th=[ 6944],
     | 30.00th=[ 6944], 40.00th=[ 6944], 50.00th=[ 6944], 60.00th=[ 7008],
     | 70.00th=[ 7008], 80.00th=[ 7072], 90.00th=[ 7264], 95.00th=[10816],
     | 99.00th=[31616], 99.50th=[33536], 99.90th=[40704], 99.95th=[47872],
     | 99.99th=[75264]
   bw (  KiB/s): min=184828, max=216766, per=100.00%, avg=207502.64, stdev=7385.37, samples=73
   iops        : min=369656, max=433534, avg=415005.78, stdev=14770.70, samples=73
  lat (usec)   : 2=0.01%, 4=0.01%, 10=94.20%, 20=4.14%, 50=1.62%
  lat (usec)   : 100=0.04%, 250=0.01%, 500=0.01%, 750=0.01%
  lat (msec)   : 10=0.01%, 20=0.01%
  cpu          : usr=34.03%, sys=65.74%, ctx=20825, majf=0, minf=12
  IO depths    : 1=0.1%, 2=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=20971520,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=4
rand-read: (groupid=1, jobs=1): err= 0: pid=19319: Mon Mar  7 23:42:01 2022
  read: IOPS=6417, BW=3209KiB/s (3286kB/s)(188MiB/60001msec)
    slat (nsec): min=1758, max=24816k, avg=153474.79, stdev=272284.97
    clat (usec): min=12, max=28548, avg=468.70, stdev=488.48
     lat (usec): min=24, max=28591, avg=622.36, stdev=570.66
    clat percentiles (usec):
     |  1.00th=[   49],  5.00th=[   64], 10.00th=[   66], 20.00th=[   69],
     | 30.00th=[   73], 40.00th=[   94], 50.00th=[  553], 60.00th=[  603],
     | 70.00th=[  635], 80.00th=[  676], 90.00th=[ 1123], 95.00th=[ 1221],
     | 99.00th=[ 1680], 99.50th=[ 1795], 99.90th=[ 2278], 99.95th=[ 2999],
     | 99.99th=[13566]
   bw (  KiB/s): min= 2419, max= 3595, per=100.00%, avg=3216.67, stdev=222.39, samples=86
   iops        : min= 4838, max= 7190, avg=6433.76, stdev=444.73, samples=86
  lat (usec)   : 20=0.01%, 50=1.77%, 100=41.09%, 250=1.66%, 500=0.20%
  lat (usec)   : 750=40.10%, 1000=1.09%
  lat (msec)   : 2=13.94%, 4=0.11%, 10=0.01%, 20=0.03%, 50=0.01%
  cpu          : usr=1.49%, sys=17.11%, ctx=94314, majf=0, minf=11
  IO depths    : 1=0.1%, 2=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=385078,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=4
seq-write: (groupid=2, jobs=1): err= 0: pid=44547: Mon Mar  7 23:42:01 2022
  write: IOPS=206k, BW=101MiB/s (105MB/s)(6032MiB/60001msec); 0 zone resets
    slat (usec): min=2, max=24843, avg= 4.08, stdev=37.06
    clat (nsec): min=1314, max=24859k, avg=15086.66, stdev=64421.15
     lat (usec): min=4, max=24862, avg=19.23, stdev=74.40
    clat percentiles (usec):
     |  1.00th=[   11],  5.00th=[   11], 10.00th=[   11], 20.00th=[   11],
     | 30.00th=[   11], 40.00th=[   12], 50.00th=[   12], 60.00th=[   12],
     | 70.00th=[   13], 80.00th=[   13], 90.00th=[   14], 95.00th=[   21],
     | 99.00th=[   34], 99.50th=[   45], 99.90th=[  660], 99.95th=[ 1020],
     | 99.99th=[ 1762]
   bw (  KiB/s): min=61533, max=118048, per=99.91%, avg=102850.88, stdev=11619.07, samples=86
   iops        : min=123066, max=236096, avg=205702.23, stdev=23238.09, samples=86
  lat (usec)   : 2=0.01%, 10=0.01%, 20=94.42%, 50=5.12%, 100=0.06%
  lat (usec)   : 250=0.04%, 500=0.01%, 750=0.28%, 1000=0.02%
  lat (msec)   : 2=0.04%, 4=0.01%, 10=0.01%, 20=0.01%, 50=0.01%
  cpu          : usr=19.74%, sys=63.16%, ctx=35441, majf=0, minf=13
  IO depths    : 1=0.1%, 2=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,12352876,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=4
rand-write: (groupid=3, jobs=1): err= 0: pid=59723: Mon Mar  7 23:42:01 2022
  write: IOPS=2778, BW=1389KiB/s (1422kB/s)(81.4MiB/60001msec); 0 zone resets
    slat (usec): min=3, max=36188, avg=357.25, stdev=634.28
    clat (usec): min=2, max=37646, avg=1081.06, stdev=1129.42
     lat (usec): min=25, max=37673, avg=1438.58, stdev=1313.86
    clat percentiles (usec):
     |  1.00th=[   61],  5.00th=[   75], 10.00th=[   81], 20.00th=[   87],
     | 30.00th=[   98], 40.00th=[  725], 50.00th=[ 1037], 60.00th=[ 1123],
     | 70.00th=[ 1287], 80.00th=[ 1975], 90.00th=[ 2442], 95.00th=[ 3097],
     | 99.00th=[ 4555], 99.50th=[ 5342], 99.90th=[ 7373], 99.95th=[ 9110],
     | 99.99th=[25297]
   bw (  KiB/s): min=  801, max= 2739, per=100.00%, avg=1391.27, stdev=178.01, samples=119
   iops        : min= 1602, max= 5478, avg=2782.54, stdev=356.02, samples=119
  lat (usec)   : 4=0.01%, 20=0.01%, 50=0.30%, 100=30.17%, 250=7.78%
  lat (usec)   : 500=0.34%, 750=1.48%, 1000=5.64%
  lat (msec)   : 2=34.70%, 4=17.70%, 10=1.83%, 20=0.03%, 50=0.02%
  cpu          : usr=0.88%, sys=9.65%, ctx=47811, majf=0, minf=11
  IO depths    : 1=0.1%, 2=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,166684,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=4

Run status group 0 (all jobs):
   READ: bw=202MiB/s (212MB/s), 202MiB/s-202MiB/s (212MB/s-212MB/s), io=10.0GiB (10.7GB), run=50642-50642msec

Run status group 1 (all jobs):
   READ: bw=3209KiB/s (3286kB/s), 3209KiB/s-3209KiB/s (3286kB/s-3286kB/s), io=188MiB (197MB), run=60001-60001msec

Run status group 2 (all jobs):
  WRITE: bw=101MiB/s (105MB/s), 101MiB/s-101MiB/s (105MB/s-105MB/s), io=6032MiB (6325MB), run=60001-60001msec

Run status group 3 (all jobs):
  WRITE: bw=1389KiB/s (1422kB/s), 1389KiB/s-1389KiB/s (1422kB/s-1422kB/s), io=81.4MiB (85.3MB), run=60001-60001msec

Block Size 4k / Ashift 12

Spoiler

seq-read: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=4
rand-read: (g=1): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=4
seq-write: (g=2): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=4
rand-write: (g=3): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=4
fio-3.23
Starting 4 processes
seq-read: Laying out IO file (1 file / 10240MiB)
Jobs: 1 (f=1): [_(3),w(1)][59.7%][w=10.7MiB/s][w=2745 IOPS][eta 02m:09s]                         
seq-read: (groupid=0, jobs=1): err= 0: pid=57281: Mon Mar  7 23:47:49 2022
  read: IOPS=92.5k, BW=361MiB/s (379MB/s)(10.0GiB/28328msec)
    slat (nsec): min=1649, max=12723k, avg=9796.60, stdev=79990.10
    clat (nsec): min=1454, max=12784k, avg=33068.99, stdev=138323.89
     lat (usec): min=3, max=12792, avg=42.95, stdev=159.18
    clat percentiles (usec):
     |  1.00th=[    8],  5.00th=[    8], 10.00th=[    9], 20.00th=[    9],
     | 30.00th=[    9], 40.00th=[    9], 50.00th=[   11], 60.00th=[   12],
     | 70.00th=[   13], 80.00th=[   14], 90.00th=[   29], 95.00th=[  163],
     | 99.00th=[  441], 99.50th=[  586], 99.90th=[  922], 99.95th=[ 1188],
     | 99.99th=[ 2147]
   bw (  KiB/s): min=222328, max=416368, per=99.81%, avg=369458.49, stdev=67095.84, samples=41
   iops        : min=55582, max=104092, avg=92364.32, stdev=16773.93, samples=41
  lat (usec)   : 2=0.01%, 10=47.60%, 20=39.78%, 50=4.16%, 100=0.77%
  lat (usec)   : 250=4.34%, 500=2.59%, 750=0.56%, 1000=0.10%
  lat (msec)   : 2=0.07%, 4=0.01%, 10=0.01%, 20=0.01%
  cpu          : usr=12.00%, sys=28.75%, ctx=75192, majf=0, minf=16
  IO depths    : 1=0.1%, 2=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=2621440,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=4
rand-read: (groupid=1, jobs=1): err= 0: pid=2166: Mon Mar  7 23:47:49 2022
  read: IOPS=6223, BW=24.3MiB/s (25.5MB/s)(1459MiB/60001msec)
    slat (usec): min=2, max=24918, avg=158.46, stdev=413.01
    clat (usec): min=13, max=26751, avg=483.18, stdev=769.79
     lat (usec): min=32, max=38001, avg=641.83, stdev=910.28
    clat percentiles (usec):
     |  1.00th=[   49],  5.00th=[   64], 10.00th=[   67], 20.00th=[   69],
     | 30.00th=[   74], 40.00th=[   94], 50.00th=[  537], 60.00th=[  586],
     | 70.00th=[  619], 80.00th=[  668], 90.00th=[ 1090], 95.00th=[ 1188],
     | 99.00th=[ 1713], 99.50th=[ 1991], 99.90th=[13042], 99.95th=[13829],
     | 99.99th=[25560]
   bw (  KiB/s): min=11375, max=29185, per=99.81%, avg=24845.98, stdev=4231.67, samples=86
   iops        : min= 2843, max= 7296, avg=6211.21, stdev=1057.91, samples=86
  lat (usec)   : 20=0.01%, 50=1.53%, 100=41.19%, 250=1.75%, 500=0.75%
  lat (usec)   : 750=38.86%, 1000=2.30%
  lat (msec)   : 2=13.12%, 4=0.31%, 10=0.02%, 20=0.14%, 50=0.03%
  cpu          : usr=1.34%, sys=16.62%, ctx=91304, majf=0, minf=15
  IO depths    : 1=0.1%, 2=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=373395,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=4
seq-write: (groupid=2, jobs=1): err= 0: pid=44837: Mon Mar  7 23:47:49 2022
  write: IOPS=63.6k, BW=248MiB/s (261MB/s)(10.0GiB/41216msec); 0 zone resets
    slat (usec): min=2, max=24949, avg=14.80, stdev=174.58
    clat (nsec): min=1273, max=24980k, avg=47763.21, stdev=301767.31
     lat (usec): min=4, max=24983, avg=62.64, stdev=347.83
    clat percentiles (usec):
     |  1.00th=[   12],  5.00th=[   12], 10.00th=[   13], 20.00th=[   13],
     | 30.00th=[   13], 40.00th=[   13], 50.00th=[   13], 60.00th=[   14],
     | 70.00th=[   14], 80.00th=[   19], 90.00th=[   34], 95.00th=[   40],
     | 99.00th=[ 1037], 99.50th=[ 1205], 99.90th=[ 2573], 99.95th=[ 3032],
     | 99.99th=[12780]
   bw (  KiB/s): min=181077, max=438810, per=99.89%, avg=254142.55, stdev=60172.58, samples=60
   iops        : min=45269, max=109702, avg=63535.33, stdev=15043.11, samples=60
  lat (usec)   : 2=0.01%, 10=0.01%, 20=81.33%, 50=14.71%, 100=0.71%
  lat (usec)   : 250=0.24%, 500=0.09%, 750=1.10%, 1000=0.64%
  lat (msec)   : 2=1.01%, 4=0.14%, 10=0.01%, 20=0.01%, 50=0.01%
  cpu          : usr=7.41%, sys=28.66%, ctx=32972, majf=0, minf=14
  IO depths    : 1=0.1%, 2=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,2621440,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=4
rand-write: (groupid=3, jobs=1): err= 0: pid=12876: Mon Mar  7 23:47:49 2022
  write: IOPS=2825, BW=11.0MiB/s (11.6MB/s)(662MiB/60002msec); 0 zone resets
    slat (usec): min=3, max=24886, avg=351.35, stdev=608.69
    clat (usec): min=2, max=26767, avg=1062.93, stdev=1075.23
     lat (usec): min=45, max=29390, avg=1414.53, stdev=1249.67
    clat percentiles (usec):
     |  1.00th=[   61],  5.00th=[   76], 10.00th=[   82], 20.00th=[   87],
     | 30.00th=[   98], 40.00th=[  848], 50.00th=[ 1037], 60.00th=[ 1106],
     | 70.00th=[ 1270], 80.00th=[ 1958], 90.00th=[ 2409], 95.00th=[ 3032],
     | 99.00th=[ 4359], 99.50th=[ 4948], 99.90th=[ 6718], 99.95th=[ 8455],
     | 99.99th=[22938]
   bw (  KiB/s): min= 8616, max=14144, per=100.00%, avg=11324.30, stdev=917.43, samples=119
   iops        : min= 2154, max= 3536, avg=2831.06, stdev=229.37, samples=119
  lat (usec)   : 4=0.01%, 20=0.01%, 50=0.29%, 100=30.22%, 250=7.57%
  lat (usec)   : 500=0.39%, 750=0.97%, 1000=6.55%
  lat (msec)   : 2=34.98%, 4=17.44%, 10=1.54%, 20=0.03%, 50=0.01%
  cpu          : usr=0.92%, sys=9.72%, ctx=48443, majf=0, minf=12
  IO depths    : 1=0.1%, 2=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,169523,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=4

Run status group 0 (all jobs):
   READ: bw=361MiB/s (379MB/s), 361MiB/s-361MiB/s (379MB/s-379MB/s), io=10.0GiB (10.7GB), run=28328-28328msec

Run status group 1 (all jobs):
   READ: bw=24.3MiB/s (25.5MB/s), 24.3MiB/s-24.3MiB/s (25.5MB/s-25.5MB/s), io=1459MiB (1529MB), run=60001-60001msec

Run status group 2 (all jobs):
  WRITE: bw=248MiB/s (261MB/s), 248MiB/s-248MiB/s (261MB/s-261MB/s), io=10.0GiB (10.7GB), run=41216-41216msec

Run status group 3 (all jobs):
  WRITE: bw=11.0MiB/s (11.6MB/s), 11.0MiB/s-11.0MiB/s (11.6MB/s-11.6MB/s), io=662MiB (694MB), run=60002-60002msec

Block Size 8k / Ashift 13

Spoiler

seq-read: (g=0): rw=read, bs=(R) 8192B-8192B, (W) 8192B-8192B, (T) 8192B-8192B, ioengine=libaio, iodepth=4
rand-read: (g=1): rw=randread, bs=(R) 8192B-8192B, (W) 8192B-8192B, (T) 8192B-8192B, ioengine=libaio, iodepth=4
seq-write: (g=2): rw=write, bs=(R) 8192B-8192B, (W) 8192B-8192B, (T) 8192B-8192B, ioengine=libaio, iodepth=4
rand-write: (g=3): rw=randwrite, bs=(R) 8192B-8192B, (W) 8192B-8192B, (T) 8192B-8192B, ioengine=libaio, iodepth=4
fio-3.23
Starting 4 processes
seq-read: Laying out IO file (1 file / 10240MiB)
Jobs: 1 (f=1): [_(3),w(1)][59.9%][w=13.0MiB/s][w=1788 IOPS][eta 02m:08s]          
seq-read: (groupid=0, jobs=1): err= 0: pid=7522: Mon Mar  7 23:53:32 2022
  read: IOPS=39.1k, BW=305MiB/s (320MB/s)(10.0GiB/33553msec)
    slat (nsec): min=1855, max=13480k, avg=24436.50, stdev=196678.73
    clat (nsec): min=1469, max=13496k, avg=77508.98, stdev=337467.22
     lat (usec): min=3, max=13499, avg=102.04, stdev=387.41
    clat percentiles (usec):
     |  1.00th=[    9],  5.00th=[   10], 10.00th=[   10], 20.00th=[   10],
     | 30.00th=[   10], 40.00th=[   10], 50.00th=[   14], 60.00th=[   16],
     | 70.00th=[   17], 80.00th=[   29], 90.00th=[  206], 95.00th=[  412],
     | 99.00th=[  898], 99.50th=[ 1172], 99.90th=[ 2073], 99.95th=[12256],
     | 99.99th=[12518]
   bw (  KiB/s): min=195616, max=412528, per=99.90%, avg=312195.83, stdev=86634.46, samples=48
   iops        : min=24452, max=51566, avg=39024.17, stdev=10829.25, samples=48
  lat (usec)   : 2=0.01%, 10=42.55%, 20=32.72%, 50=7.70%, 100=0.55%
  lat (usec)   : 250=9.05%, 500=4.42%, 750=1.63%, 1000=0.60%
  lat (msec)   : 2=0.67%, 4=0.05%, 20=0.05%
  cpu          : usr=5.82%, sys=17.60%, ctx=74453, majf=0, minf=19
  IO depths    : 1=0.1%, 2=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=1310720,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=4
rand-read: (groupid=1, jobs=1): err= 0: pid=29731: Mon Mar  7 23:53:32 2022
  read: IOPS=5827, BW=45.5MiB/s (47.7MB/s)(2732MiB/60001msec)
    slat (usec): min=2, max=25743, avg=169.22, stdev=441.20
    clat (nsec): min=1633, max=27271k, avg=516016.87, stdev=810421.15
     lat (usec): min=22, max=37803, avg=685.50, stdev=958.34
    clat percentiles (usec):
     |  1.00th=[   50],  5.00th=[   65], 10.00th=[   67], 20.00th=[   69],
     | 30.00th=[   73], 40.00th=[   94], 50.00th=[  578], 60.00th=[  619],
     | 70.00th=[  652], 80.00th=[  701], 90.00th=[ 1172], 95.00th=[ 1254],
     | 99.00th=[ 1844], 99.50th=[ 2343], 99.90th=[13042], 99.95th=[13960],
     | 99.99th=[25560]
   bw (  KiB/s): min=25548, max=56288, per=99.69%, avg=46477.88, stdev=8218.44, samples=84
   iops        : min= 3193, max= 7036, avg=5809.37, stdev=1027.29, samples=84
  lat (usec)   : 2=0.01%, 20=0.01%, 50=1.33%, 100=41.67%, 250=1.35%
  lat (usec)   : 500=0.12%, 750=38.17%, 1000=2.31%
  lat (msec)   : 2=14.31%, 4=0.52%, 10=0.03%, 20=0.15%, 50=0.03%
  cpu          : usr=1.41%, sys=15.88%, ctx=86229, majf=0, minf=19
  IO depths    : 1=0.1%, 2=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=349661,0,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=4
seq-write: (groupid=2, jobs=1): err= 0: pid=59526: Mon Mar  7 23:53:32 2022
  write: IOPS=36.6k, BW=286MiB/s (300MB/s)(10.0GiB/35824msec); 0 zone resets
    slat (usec): min=3, max=24896, avg=26.31, stdev=179.17
    clat (nsec): min=1405, max=24920k, avg=82630.21, stdev=306807.41
     lat (usec): min=5, max=24925, avg=109.03, stdev=351.66
    clat percentiles (usec):
     |  1.00th=[   13],  5.00th=[   14], 10.00th=[   14], 20.00th=[   14],
     | 30.00th=[   14], 40.00th=[   14], 50.00th=[   14], 60.00th=[   15],
     | 70.00th=[   18], 80.00th=[   34], 90.00th=[   40], 95.00th=[  627],
     | 99.00th=[ 1254], 99.50th=[ 1467], 99.90th=[ 2933], 99.95th=[ 3163],
     | 99.99th=[ 5932]
   bw (  KiB/s): min=191232, max=524263, per=99.85%, avg=292277.83, stdev=73343.82, samples=52
   iops        : min=23904, max=65532, avg=36534.29, stdev=9167.91, samples=52
  lat (usec)   : 2=0.01%, 10=0.01%, 20=72.48%, 50=19.26%, 100=1.70%
  lat (usec)   : 250=0.46%, 500=0.03%, 750=1.85%, 1000=1.31%
  lat (msec)   : 2=2.60%, 4=0.28%, 10=0.01%, 20=0.01%, 50=0.01%
  cpu          : usr=4.89%, sys=21.73%, ctx=32207, majf=0, minf=12
  IO depths    : 1=0.1%, 2=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,1310720,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=4
rand-write: (groupid=3, jobs=1): err= 0: pid=572: Mon Mar  7 23:53:32 2022
  write: IOPS=2766, BW=21.6MiB/s (22.7MB/s)(1297MiB/60001msec); 0 zone resets
    slat (usec): min=4, max=24948, avg=358.90, stdev=642.01
    clat (usec): min=2, max=38100, avg=1085.59, stdev=1153.62
     lat (usec): min=44, max=39260, avg=1444.76, stdev=1350.83
    clat percentiles (usec):
     |  1.00th=[   61],  5.00th=[   72], 10.00th=[   81], 20.00th=[   86],
     | 30.00th=[   95], 40.00th=[  676], 50.00th=[ 1037], 60.00th=[ 1123],
     | 70.00th=[ 1303], 80.00th=[ 1958], 90.00th=[ 2507], 95.00th=[ 3064],
     | 99.00th=[ 4686], 99.50th=[ 5473], 99.90th=[ 7963], 99.95th=[12649],
     | 99.99th=[25297]
   bw (  KiB/s): min=12272, max=40496, per=100.00%, avg=22213.78, stdev=3626.11, samples=119
   iops        : min= 1534, max= 5062, avg=2776.72, stdev=453.26, samples=119
  lat (usec)   : 4=0.01%, 50=0.36%, 100=31.45%, 250=6.16%, 500=0.44%
  lat (usec)   : 750=2.26%, 1000=5.54%
  lat (msec)   : 2=34.44%, 4=17.44%, 10=1.86%, 20=0.06%, 50=0.02%
  cpu          : usr=0.93%, sys=9.82%, ctx=48827, majf=0, minf=11
  IO depths    : 1=0.1%, 2=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,165978,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=4

Run status group 0 (all jobs):
   READ: bw=305MiB/s (320MB/s), 305MiB/s-305MiB/s (320MB/s-320MB/s), io=10.0GiB (10.7GB), run=33553-33553msec

Run status group 1 (all jobs):
   READ: bw=45.5MiB/s (47.7MB/s), 45.5MiB/s-45.5MiB/s (47.7MB/s-47.7MB/s), io=2732MiB (2864MB), run=60001-60001msec

Run status group 2 (all jobs):
  WRITE: bw=286MiB/s (300MB/s), 286MiB/s-286MiB/s (300MB/s-300MB/s), io=10.0GiB (10.7GB), run=35824-35824msec

Run status group 3 (all jobs):
  WRITE: bw=21.6MiB/s (22.7MB/s), 21.6MiB/s-21.6MiB/s (22.7MB/s-22.7MB/s), io=1297MiB (1360MB), run=60001-60001msec

 

Link to comment

Just to let you guys know, an update for ZFS is available: ZFS v2.1.3 (Changelog)

 

The new ZFS packages were built yesterday for the following Unraid versions:

  • 6.9.2
  • 6.10.0-rc2
  • 6.10.0-rc3

 

To pull the update, please restart your server, or upgrade to Unraid 6.10.0-RC3 if you want to try the Unraid RC series (if you restart your server, make sure that it has an active internet connection on boot and is actually able to reach the internet).

 

Thank you @steini84 for the plugin. :)

  • Like 4
  • Thanks 1
Link to comment

Unraid 6.10.0-rc3 GUI still shows version as 2.1.0 even after update and restart.

Though dmesg output shows:

ZFS: Loaded module v2.1.3-1, ZFS pool version 5000, ZFS filesystem version 5

 

Also, I am running into new issues with 6.10.0-rc3. While creating a new VM, the Unraid GUI is not able to follow a symbolic link to ZFS.

[Screenshot: add_vm_iso_browse_failure.png]

 

"data" is a symbolic link to ZFS. This was working fine with 6.9.2.

 

Strangely, browsing the Unraid Shares works as expected.

[Screenshot: shares_zfs_browse_success.png]

Edited by sabertooth
Add information
Link to comment
13 minutes ago, sabertooth said:

Unraid 6.10.0-rc3 GUI still shows version as 2.1.0 even after update and restart.

This is the plugin version and not the ZFS version.
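
The ZFS version that is actually running can be checked from a terminal, for example:

zfs version                    # prints the userland and kernel module versions
cat /sys/module/zfs/version    # version string of the loaded kernel module
dmesg | grep "ZFS: Loaded"     # the boot message quoted above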

 

13 minutes ago, sabertooth said:

"data" is a symbolic link to ZFS. This was working fine with 6.9.2.

Why not use the ZFS directory?

Are you really sure this worked before? In any case, this seems like an issue with the GUI itself and should be reported to the bug tracker for RC3.

Link to comment
46 minutes ago, ich777 said:

This is the plugin version and not the ZFS version.

Thanks, and sorry for the false alarm.


 

Quote

 

Why not use the ZFS directory?

Are you really sure this worked before? In any case, this seems like an issue with the GUI itself and should be reported to the bug tracker for RC3.

 

Earlier I created Unraid shares and replaced them with symbolic links to the ZFS directory. That worked like a charm apart from getxattr() errors, which I was fine with. However, with 6.10.0-rc3, browsing these shares and using them during VM creation simply wouldn't work anymore. As an alternative, I moved the ZFS shares under smb-extra.conf.
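
For reference, a share defined there and pointing straight at a ZFS dataset can look roughly like this (share name, path and user are only examples; on Unraid the file lives at /boot/config/smb-extra.conf):

[data]
    path = /mnt/data           # mountpoint of the ZFS dataset
    browseable = yes
    writeable = yes
    valid users = myuser       # or use "guest ok = yes", depending on your needs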

 

I have reported these under RC3; since ZFS isn't a supported FS, support isn't guaranteed. :(

 

Edited by sabertooth
minor change
  • Like 1
Link to comment
6 minutes ago, anylettuce said:

Last time I had to reboot my Unraid server I did the shutdown from the GUI. ZFS then had to do a scrub because of an "unclean shutdown detected" message.

What Unraid version are you on?

Was a Parity check triggered too?

This is only triggered if the shutdown was somehow unclean.

Please always attach your Diagnostics if you experience such issues (pull the Diagnostics after it has happened, of course, and don't reboot in between).

 

6 minutes ago, anylettuce said:

Is there a proper way to reboot the machine? I'm looking to update to the latest Unraid beta.

Did you have any SSH connections or terminal windows with an active connection to Unraid open?

Please go ahead and try to upgrade to the latest RC version and see if the same thing happens.

A scrub won't hurt your pool in any way.
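
If you want to trigger or check one yourself (the pool name is just a placeholder):

zpool status tank     # shows whether a scrub is running and when the last one finished
zpool scrub tank      # start a scrub manually
zpool scrub -s tank   # stop a running scrub if needed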

Link to comment

Just got this error while updating to UNRAID 6.10.0 rc4. I previously updated from rc1 to rc3 just fine. The server logs (the ones in the UI) don't seem to show anything valuable regarding this error?

 

Any ideas on how to re-trigger the download maybe? (Although I'm not sure why it failed in the first place, since the internet was working just fine.)

 


Link to comment
1 minute ago, TheJulianJES said:

Just got this error while updating to UNRAID 6.10.0 rc4. I previously updated from rc1 to rc3 just fine. The server logs (the ones in the UI) don't seem to show anything valuable regarding this error?

 

Any ideas on how to re-trigger the download maybe? (Although I'm not sure why it failed in the first place, since the internet was working just fine.)

 


Likely you updated before the new modules were ready; it takes about an hour after a release. They will be downloaded at reboot anyway — this helper just pre-downloads them to improve boot time.

  • Like 1
Link to comment
1 minute ago, SimonF said:

Likely you updated before the new modules were ready; it takes about an hour after a release. They will be downloaded at reboot anyway — this helper just pre-downloads them to improve boot time.

Ah, good to know that it's still possible to reboot then. I just updated like 7 minutes ago, and on GitHub it looked like they were ready. Not sure what exactly happened, but I guess I'll reboot later today (and see what happens).

Link to comment

@SimonF & @TheJulianJES I investigated a little further and found the cause of the issue: the naming of the Linux kernel version changed. I've updated the Plugin-Update-Helper to take that into account; hopefully it will work next time.

 

It is safe to reboot because the plugins are downloaded on boot too; just make sure that you have an active internet connection without any ad-blocking software in front of it.

If you don't have an active internet connection on boot, removing the plugin from the Plugin Error tab and reinstalling it will fix it.

  • Like 3
Link to comment

With 6.10.0-rc4, the pool goes offline when an attempt is made to write anything to it.

errors: List of errors unavailable: pool I/O is currently suspended

 

Attempting to run zpool clear fails

cannot clear errors for data: I/O error

 

state: SUSPENDED
status: One or more devices are faulted in response to IO failures.
action: Make sure the affected devices are connected, then run 'zpool clear'.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-HC
  scan: scrub repaired 0B in 01:51:14 with 0 errors on Wed Mar 23 15:24:10 2022

 

The pool was fine till RC3.

Edited by sabertooth
Spelling
Link to comment
4 minutes ago, sabertooth said:

With 6.10.0-rc4, the pool goes offline when an attempt is made to write anything to it.

errors: List of errors unavailable: pool I/O is currently suspended

 

Attempting to run zpool clear fails

cannot clear errors for data: I/O error

 

state: SUSPENDED
status: One or more devices are faulted in response to IO failures.
action: Make sure the affected devices are connected, then run 'zpool clear'.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-HC
  scan: scrub repaired 0B in 01:51:14 with 0 errors on Wed Mar 23 15:24:10 2022

 

The pool was fine till RC3.

 

This is just an unfortunate coincidence; I would guess it's a cabling issue or a dying drive. Have you done a SMART test?
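
A quick way to check, with sdX standing in for each pool member (adjust the device names):

smartctl -t short /dev/sdX    # start a short self-test on a member drive
smartctl -a /dev/sdX          # review the result and the error/reallocated-sector counters
zpool status -v data          # see which vdev reported the I/O errors
zpool clear data              # after fixing the cabling, try to resume the suspended pool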

Link to comment
1 hour ago, steini84 said:

 

This is just an unfortunate coincidence; I would guess it's a cabling issue or a dying drive. Have you done a SMART test?

A SMART test was done a fortnight ago, and I have a weekly scrub scheduled on the pool.

I built the system just last month with new Exos drives.

 

I reverted to RC3 and things seem to be fine for now; I will schedule a SMART test right away.

Do you need more verbose logging?

 

RC3: Linux 5.15.27-Unraid.

RC4: Linux 5.15.30-Unraid.

 

Update 1: SMART short self-test passed for all the drives.

Edited by sabertooth
Update 1
Link to comment

@steini84 or @ich777, would it be possible to include ioztat with the plugin, or do you feel it's better suited to something like NerdPack? I've been using it since its inception; it's super helpful for quickly tracking down problem-child filesets:

https://github.com/jimsalterjrs/ioztat

 

It does require Python, but that's the only thing outside of the base OS that's required for us (though I don't know if that requirement precludes it from inclusion, hence the NerdPack comment...)

 

Something else I've been symlinking for a while now is the bash-completion.d file from mainline zfs - it's 'mostly' functional in openzfs, though I've not spent a lot of time poking around at it.

https://github.com/openzfs/zfs/tree/master/contrib/bash_completion.d
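
For anyone who wants to try it by hand in the meantime, it's a single Python 3 script, so something along these lines should be enough (the paths are just examples):

git clone https://github.com/jimsalterjrs/ioztat /tmp/ioztat
python3 /tmp/ioztat/ioztat    # per-dataset I/O statistics, iostat-style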

Link to comment

I want to keep the zfs package as vanilla as possible. It would be a great fit for a plugin :)


Sent from my iPhone using Tapatalk
Link to comment

Just to let you guys know, an update for ZFS is available: ZFS v2.1.4 (Changelog - not available yet but should be up tomorrow)

 

The new ZFS packages are currently building for Unraid versions:

  • 6.9.2
  • 6.10.0-rc4

 

Should be done in about 20 minutes.

 

To pull the update, please restart your server, or upgrade to Unraid 6.10.0-RC4 if you want to try the Unraid RC series (if you restart your server, make sure that it has an active internet connection on boot and is actually able to reach the internet).

 

@steini84

  • Like 1
Link to comment
1 hour ago, ich777 said:

To pull the update, please restart your server, or upgrade to Unraid 6.10.0-RC4 if you want to try the Unraid RC series (if you restart your server, make sure that it has an active internet connection on boot and is actually able to reach the internet).


Is it possible to update without a reboot?

Link to comment
1 hour ago, ich777 said:

Why not ask if he can compile it? Or what's the issue with this?

No issue with asking him to do so... I just hate asking someone else to do something if I can do it myself lol

 

I guess it might make some sense though, now that I think about it a bit more... with ZFS support eventually coming, it might actually make more sense to have it built into Nerd Tools rather than just have it on my own, separate from the rest lol

Link to comment
