Andrea3000
Posted May 29, 2022

Hi,

I just set up an unraid server with ZFS and ran some benchmarks. The performance of the array is significantly lower than what I was expecting.

Eventually the array will consist of 8 drives in raidz2, but at the moment I'm waiting for 4 more SATA cables to be delivered, so my current system is as follows:

Gigabyte C246M-WU4
Intel Core i3-9100
Kingston 64GB 2666MHz ECC RAM
4x WD Red Pro NAS 4TB 7200RPM

The ZFS array currently consists of 4 drives in raidz2. I ran some benchmarks with fio and this is what I get:

fio --direct=1 --name=test --bs=256k --filename=/zfs/zfs/test/whatever.tmp --thread --size=64G --iodepth=64 --readwrite=randrw

test: (g=0): rw=randrw, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=psync, iodepth=64
fio-3.23
Starting 1 thread
test: Laying out IO file (1 file / 65536MiB)
Jobs: 1 (f=1): [m(1)][100.0%][r=44.5MiB/s,w=47.5MiB/s][r=177,w=189 IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=2196: Sun May 29 11:44:57 2022
  read: IOPS=225, BW=56.4MiB/s (59.1MB/s)(31.0GiB/580821msec)
    clat (usec): min=25, max=765897, avg=4299.50, stdev=18500.73
     lat (usec): min=25, max=765897, avg=4299.87, stdev=18500.73
    clat percentiles (usec):
     |  1.00th=[    41],  5.00th=[    71], 10.00th=[   126], 20.00th=[   129],
     | 30.00th=[   133], 40.00th=[   139], 50.00th=[   147], 60.00th=[   159],
     | 70.00th=[  6390], 80.00th=[  9503], 90.00th=[ 11207], 95.00th=[ 12649],
     | 99.00th=[ 26084], 99.50th=[ 39584], 99.90th=, 99.95th=,
     | 99.99th=
   bw (  KiB/s): min=  512, max=130048, per=100.00%, avg=58788.18, stdev=27067.79, samples=1141
   iops        : min=    2, max=  508, avg=229.63, stdev=105.74, samples=1141
  write: IOPS=225, BW=56.4MiB/s (59.2MB/s)(32.0GiB/580821msec); 0 zone resets
    clat (usec): min=23, max=23560, avg=115.99, stdev=251.13
     lat (usec): min=25, max=23570, avg=125.35, stdev=251.63
    clat percentiles (usec):
     |  1.00th=[    28],  5.00th=[    41], 10.00th=[    98], 20.00th=[   101],
     | 30.00th=[   103], 40.00th=[   105], 50.00th=[   108], 60.00th=[   111],
     | 70.00th=[   117], 80.00th=[   130], 90.00th=[   143], 95.00th=[   149],
     | 99.00th=[   174], 99.50th=[   212], 99.90th=[   799], 99.95th=[  5342],
     | 99.99th=
   bw (  KiB/s): min=  512, max=125952, per=100.00%, avg=59016.54, stdev=28070.89, samples=1137
   iops        : min=    2, max=  492, avg=230.52, stdev=109.65, samples=1137
  lat (usec)   : 50=4.79%, 100=5.49%, 250=73.76%, 500=0.26%, 750=0.04%
  lat (usec)   : 1000=0.06%
  lat (msec)   : 2=0.02%, 4=0.03%, 10=7.27%, 20=7.41%, 50=0.71%
  lat (msec)   : 100=0.05%, 250=0.05%, 500=0.03%, 750=0.03%, 1000=0.01%
  cpu          : usr=0.51%, sys=6.56%, ctx=44312, majf=0, minf=0
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=131040,131104,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
   READ: bw=56.4MiB/s (59.1MB/s), 56.4MiB/s-56.4MiB/s (59.1MB/s-59.1MB/s), io=31.0GiB (34.4GB), run=580821-580821msec
  WRITE: bw=56.4MiB/s (59.2MB/s), 56.4MiB/s-56.4MiB/s (59.2MB/s-59.2MB/s), io=32.0GiB (34.4GB), run=580821-580821msec

The dataset used to run the benchmark was created with these parameters:

zfs create zfs/test -o casesensitivity=insensitive -o compression=off -o atime=off -o sync=standard

Both the read/write speeds and the IOPS are much lower than what those drives should be capable of.

@ich777, are there some settings I'm missing, or am I doing something wrong?

Thanks
Andrea
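A few things in the fio output above are worth checking before blaming the drives. With ioengine=psync, fio ignores --iodepth entirely (the "IO depths: 1=100.0%" line confirms the run was effectively queue depth 1), and a 256K random block size on a dataset with the default 128K recordsize makes every I/O touch two records. The following is only a diagnostic sketch, assuming the pool/dataset names and file paths from the post; the suggested filenames (seq.tmp, rand.tmp) are placeholders:

```shell
# 1. Confirm the dataset properties; recordsize defaults to 128K, so
#    bs=256k random I/O straddles two records per operation.
zfs get recordsize,compression,atime zfs/test

# 2. Establish a sequential baseline, which raidz2 handles far better
#    than small random I/O (psync is fine here since depth doesn't matter):
fio --name=seqwrite --rw=write --bs=1M --size=16G --ioengine=psync \
    --filename=/zfs/zfs/test/seq.tmp

# 3. Repeat the random test with an async engine so --iodepth actually
#    takes effect, and with bs matched to the recordsize:
fio --name=randrw-aio --rw=randrw --bs=128k --size=16G \
    --ioengine=libaio --iodepth=64 --direct=1 \
    --filename=/zfs/zfs/test/rand.tmp
```

If the sequential baseline looks healthy, the gap in the original numbers is largely explained by queue-depth-1 random I/O on raidz2, where the whole vdev delivers roughly the random IOPS of a single disk.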