unraid-tunables-tester.sh - A New Utility to Optimize unRAID md_* Tunables



Here are the logs from my run. I opted to adjust my settings based on this result from Pass 2 instead of any of the final recommendations.

 

 --- TEST PASS 2 (10 Hrs - 49 Sample Points @ 10min Duration) ---
Tst | RAM | stri |  win | req | thresh |  MB/s
----------------------------------------------
 31 | 152 | 6912 | 3456 | 128 |  3392  | 181.7

 

--- INITIAL BASELINE TEST OF CURRENT VALUES (1 Sample Point @ 10min Duration)---
Tst | RAM | stri |  win | req | thresh |  MB/s
----------------------------------------------
  1 | 180 | 8192 | 3686 | 128 |  1843  | 177.7


--- BASELINE TEST OF UNRAID DEFAULT VALUES (1 Sample Point @ 10min Duration)---
Tst | RAM | stri |  win | req | thresh |  MB/s
----------------------------------------------
  1 |  28 | 1280 |  384 | 128 |   192  | 177.5

 

The results below do NOT include the Baseline test of current values.

The Fastest settings tested give a peak speed of 181.8 MB/s
     md_sync_window: 4544          md_num_stripes: 9088
     md_sync_thresh: 4488             nr_requests: 128
This will consume 200 MB (20 MB more than your current utilization of 180 MB)

The Thriftiest settings (95% of Fastest) give a peak speed of 173.6 MB/s
     md_sync_window: 384          md_num_stripes: 768
     md_sync_thresh: 192             nr_requests: 128
This will consume 16 MB (164 MB less than your current utilization of 180 MB)

The Recommended settings (99% of Fastest) give a peak speed of 180.3 MB/s
     md_sync_window: 768          md_num_stripes: 1536
     md_sync_thresh: 760             nr_requests: 128
This will consume 33 MB (147 MB less than your current utilization of 180 MB)

LongSyncTestReport_2019_08_09_1119.txt

1 hour ago, BRiT said:

Here are the logs from my run. I opted to adjust my settings based on this result from Pass 2 instead of any of the final recommendations.

 

I'm noticing a lot of variability in your results.

 

While the fastest result from Pass 1 (Test 4b) and its retest in Pass 2 (Test 25) both produced 181.2 MB/s (100% consistent, nice), Pass 2 Test 48 (181.8 MB/s) dropped to 171.2 MB/s when retested in Pass 3 Test 1q.  Also, Pass 1 Test 3b @ 153.4 MB/s jumped to 181.2 MB/s when retested in Pass 2 Test 1.

 

The way the numbers jump around in Pass 2, up/down/up/down rather than forming a bell curve, suggests you are experiencing random slowdowns that are not related to the settings.

 

I don't know if you have Dockers, VMs, or other processes running, or file access occurring, that could be causing this randomness.  You may want to try running this again in Safe Mode, with all plugins, Dockers, and VMs disabled and no file access occurring.  I would also imagine that if you ran the test again as-is, with no server changes, you would see different answers for the same tests, due to the randomness observed.

 

You can also probably err on the lower side for values.  It looks like your server can hit over 180 MB/s with md_sync_window in the 768 range (Pass 1 Tests 2a/b/c), which is 99% of the fastest and in the Recommended range.


Yeah, I made sure cachedirs was disabled but forgot to disable Emby, which does some hourly scans of the libraries. So it's likely the result of that. I had no downloads or uploads going, no VMs running, and no other server activity.

 

As to using low numbers, I try not to, if only because of memories of issues way back in the 4.x days with reads or writes being starved out, and I have oodles of memory to spare, so 150 MB isn't a concern. Even though I know everything is set up differently now and those issues shouldn't ever happen with the new software design.

 

For curiosity's sake, I started a full parity check before the last post to see how it plays out over the long haul. When the parity check finishes and I can turn off Emby and all Dockers, I'll run another Tuner script round, trying for more consistency.

2 minutes ago, BRiT said:

Emby that does some hourly scans of the libraries. So it's likely the result of that.

That checks out.  It looks like the slowdowns hit about every 5 tests.  Since you were running 10-minute tests, and there's on average about 30 seconds of overhead on each test (it varies by array size and speed), every 5 tests works out to about 53 minutes (5 × ~10.5 min ≈ 52.5 min), which lines up with an hourly scan.

 

CacheDirs is less of a concern, since by rapidly scanning your directories it keeps your disks from spinning up, so no disk access actually occurs.


Since I know of no one that puts their server into safe mode and disables everything for their parity checks, I see some value in running a pass in both clean and dirty configurations. Problem is, there is no way to get consistent numbers when things are firing off seemingly randomly.

 

I'm having a hard time wrapping my head around my own question, let alone the correct answer, so here goes.

 

Is there a way to test or know for sure that the values obtained running clean are indeed the best values during heavy use?

 

I'm hoping there is a logical explanation that says, "of course, that's how it works."

6 hours ago, jonathanm said:

Since I know of no one that puts their server into safe mode and disables everything for their parity checks, I see some value in running a pass in both clean and dirty configurations. Problem is, there is no way to get consistent numbers when things are firing off seemingly randomly. 

 

Currently, I see zero value in running a dirty pass with UTT.  There's a reason we don't take pictures of the stars during the day, as the noise from the sun overpowers the faint starlight, and all you see is blue sky and a big ball of fire.

 

The goal of UTT is to identify the right range of value combinations that works well on a server.  If you run a dirty pass, with events randomly firing, then the fastest set of values might happen to be tested during the heaviest random load and end up looking like the slowest combo, making the data worthless.  I guarantee you that if you access your array for ANY reason, the random reads and writes to random sectors on a hard drive, which force the heads to move out of position, will affect speeds.  That's simply physics.  I think BRiT's results illustrate that perfectly.  How much it affects the speeds depends upon the nature of the reads/writes (big files, small files, 1 file, 1k files, position on the disk platter, etc.), your hardware/drives, and which drive you're writing to, and it is nearly impossible to replicate the exact same transactions pass after pass so that all UTT tests run under the exact same conditions.

 

The only way for UTT to provide a comparative analysis of different combinations of values is to do so without any external influence.  If you want to take pictures of stars, you do it on a dark, moonless night, with no clouds or light pollution.  Safe Mode is an easy way to turn off all the extraneous noise that is polluting the performance picture, which is why I recommended it as an option for BRiT.  Any user that sees random highs/lows should consider running a UTT test in Safe Mode to see if that cleans up the results.

 

 

6 hours ago, jonathanm said:

Is there a way to test or know for sure that the values obtained running clean are indeed the best values during heavy use?

"for sure", implying 100% certainty?  No.  But there are some manual tests you could do to get an idea.  Essentially, run a clean (not dirty) UTT test.  From that, take a few sets of values:

  • Unraid stock values
  • Thriftiest values
  • Recommended values
  • Fastest values
  • And some values that consume even more memory, beyond the Fastest, that still provide fast speeds

Apply each set of values, then start a Parity Check, then perform some read/write performance tests.  I'll leave it up to you how to do those performance tests; there are plenty of ways to accomplish it.  Whatever test you do, try to do it in a way that you can repeat it for each combination of Tunables above.
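
To apply each set, something like this should work.  This is just a minimal sketch: mdcmd accepts "set" for these md_* tunables (it's how UTT applies them), nr_requests is a standard Linux block-queue setting, and I believe "mdcmd check NOCORRECT" starts a read-only check, but verify on your Unraid version.  Values shown are placeholders:

#!/bin/bash
# Apply one combination of tunables, then kick off a parity check.
mdcmd set md_num_stripes 1536
mdcmd set md_sync_window 768
mdcmd set md_sync_thresh 760
# nr_requests is set per drive; this loop hits every sd device,
# including non-array ones - refine the glob as needed.
for q in /sys/block/sd?/queue/nr_requests; do
    echo 128 > "$q"
done
mdcmd check NOCORRECT    # read-only parity check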

 

Some ideas of tests: 

  • You could create a script to generate a random-content file (Linux can do this easily), then make a copy of it, then delete it (see the sketch after this list).
  • You could read files from the array, or write files to the array over the network using a network benchmarking tool.
  • You could do the network test from multiple client PC's concurrently for an even higher server load.
  • You could stream a few movies to different PC's while running the parity check, seeing which Tunable settings provide a seamless viewing experience.
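
A minimal sketch of the first idea, with placeholder paths and size - point SRC and DST at the array disks you want to exercise:

#!/bin/bash
# Write a random-content file, copy it to another disk, then clean up.
# dd reports its own throughput; `time` captures the copy duration.
SRC=/mnt/disk1/utt_rand.bin      # placeholder source path
DST=/mnt/disk2/utt_rand.copy     # placeholder destination path

dd if=/dev/urandom of="$SRC" bs=1M count=1024 oflag=direct
time cp "$SRC" "$DST"
rm -f "$SRC" "$DST"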

Depending upon your own performance goals, you could be looking for which set of values provides the fastest parity checks, which provides the fastest read/write speeds, or which has the best balance of both.  In my small family, it's unlikely that I would ever be streaming more than 2 movies concurrently, but maybe you have a big family and lots of TVs, and want to stream 6 movies concurrently.

 

A real torture test would be to stream multiple movies from the exact same disk.  Get 4 streams going, then start a parity check and see how far it progresses in 10 minutes, and monitor the streams for viewing glitches. Change your tunables, reset your streams, then start a new parity check and measure again.  I think you could do all this in just a few hours.

 

I would say that this approach is better than trying to run a dirty UTT test.  UTT can ONLY tell you which values are good when run clean.  But you can then take those values and do some manual load testing with them, to see which ones perform well when the server is being tasked with various concurrent loads.

 

One last idea:  I had started programming some read/write tests for UTT v3 that I never released.  Unraid 6.x broke those tests (I don't remember exactly how), so I removed them from the new UTT v4, but I'm more than happy to post them if someone wants to take a stab at making them work under Unraid 6.x.

 

It's been a while since I looked at them, but I believe these read/write tests tried to automate some of what I discussed above: running repeatable read/write tests while running parity checks with different Tunable values. Essentially, you would pick a disk in your array to write the file to, pick a size for the file, and the script would write a file of that size using random data (isolating write speed from all other factors), then read it back to test read speed.
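
Something along these lines - a hedged reconstruction of the idea, not the actual UTT v3 code:

#!/bin/bash
# Write random data to a chosen array disk (isolating write speed from
# source-disk reads), drop the page cache, then read the file back.
# Note: /dev/urandom generation can bottleneck very fast disks.
DISK=/mnt/disk1                  # placeholder: disk to test
FILE="$DISK/utt_rwtest.bin"
SIZE_MB=2048                     # placeholder file size

dd if=/dev/urandom of="$FILE" bs=1M count=$SIZE_MB oflag=direct    # write test
sync && echo 3 > /proc/sys/vm/drop_caches                          # force reads to hit the disk
dd if="$FILE" of=/dev/null bs=1M iflag=direct                      # read test
rm -f "$FILE"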

 

But to be honest, in testing on my own server, I never found that a server under load responded differently to the Tunables than a parity check on a completely idle server - meaning that the fastest UTT values remained fastest even under load.  That was just on my hardware, though, and may not be an absolute for every server.  For that reason, I don't find trying to tune for heavy server load worth my time, as in my experience tuning for Parity Check speeds works for all loads.

Edited by Pauven
On 8/7/2019 at 10:29 PM, Xaero said:

Turns out I have 3 pieces of hardware that completely break that entire section of the script.

The first one was an easy fix; the NVMe SSDs break the array declaration because you can't have a ":" in the name of a variable.
I reworked your sed on line 132 to:


< <( sed -e 's/://g' -e 's/\[/scsi/g' -e 's/]//g' < <( lsscsi -H ) )

 

 

I like this sed fix, thanks!
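
For anyone following along, here's an illustration of what the reworked sed does (sample lines, not from a real box):

# lsscsi -H lines like:
#   [3]    vmw_pvscsi
#   [N:0]  /dev/nvme0
# come out colon- and bracket-free, so they're safe in variable names:
#   scsi3   vmw_pvscsi
#   scsiN0  /dev/nvme0
sed -e 's/://g' -e 's/\[/scsi/g' -e 's/]//g' < <( lsscsi -H )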

 

 

For those that have NVMe drives (and any others who want to share; the more samples, the more I can program around variances), I need to see the output from this command:

lshw -quiet -short -c storage

 

20 minutes ago, Pauven said:

For those that have NVMe drives (and any others who want to share; the more samples, the more I can program around variances), I need to see the output from this command:


lshw -quiet -short -c storage

 

 

root@nas:~# lshw -quiet -short -c storage
H/W path               Device       Class      Description
==========================================================
/0/100/7.1                          storage    82371AB/EB/MB PIIX4 IDE
/0/100/15/0            scsi3        storage    PVSCSI SCSI Controller
/0/100/15.1/0                       storage    NVMe SSD Controller SM981/PM981/PM983
/0/100/16.1/0          scsi4        storage    PVSCSI SCSI Controller
/0/100/17/0            scsi5        storage    SAS3416 Fusion-MPT Tri-Mode I/O Controller Chip (IOC)
/0/3                   scsi0        storage    

 

3 minutes ago, Pauven said:

Thanks @StevenD!  Can I bother you to also run:


lsscsi -H

and


lsscsi -st

 

 

 

root@nas:~# lsscsi -H
[0]    usb-storage   
[1]    ata_piix      
[2]    ata_piix      
[3]    vmw_pvscsi    
[4]    vmw_pvscsi    
[5]    mpt3sas       
[N:0]  /dev/nvme0  Samsung SSD 970 PRO 512GB         S463NF0M516013Y     1B2QEXP7
root@nas:~# lsscsi -st
[0:0:0:0]    disk    usb:1-1.1:1.0                   /dev/sda   31.9GB
[0:0:0:1]    disk    usb:1-1.1:1.0                   /dev/sdb        -
[3:0:0:0]    disk                                    /dev/sdc   1.07GB
[4:0:0:0]    disk                                    /dev/sdd    960GB
[5:0:0:0]    disk    sas:0x300605b00e84f8bf          /dev/sde   8.00TB
[5:0:1:0]    enclosu sas:0x300705b00e84f8b0          -               -
[5:0:2:0]    disk    sas:0x300605b00e84f8bb          /dev/sdf   8.00TB
[5:0:3:0]    disk    sas:0x300605b00e84f8b3          /dev/sdg   8.00TB
[5:0:4:0]    disk    sas:0x300605b00e84f8b5          /dev/sdh   8.00TB
[5:0:5:0]    disk    sas:0x300605b00e84f8b9          /dev/sdi   8.00TB
[5:0:6:0]    disk    sas:0x300605b00e84f8bd          /dev/sdj   8.00TB
[5:0:7:0]    disk    sas:0x300605b00e84f8b7          /dev/sdk   8.00TB
[5:0:8:0]    disk    sas:0x300605b00e84f8ba          /dev/sdl   8.00TB
[5:0:9:0]    disk    sas:0x300605b00e84f8b4          /dev/sdm   8.00TB
[5:0:10:0]   disk    sas:0x300605b00e84f8b1          /dev/sdn   8.00TB
[5:0:11:0]   disk    sas:0x300605b00e84f8be          /dev/sdo   8.00TB
[5:0:12:0]   disk    sas:0x300605b00e84f8bc          /dev/sdp   8.00TB
[5:0:13:0]   disk    sas:0x300605b00e84f8b8          /dev/sdq   8.00TB
[5:0:14:0]   disk    sas:0x300605b00e84f8b0          /dev/sdr   8.00TB
[N:0:4:1]    disk    pcie 0x144d:0xa801                         /dev/nvme0n1   512GB

 

31 minutes ago, Pauven said:

For those that have NVMe drives (and any others who want to share; the more samples, the more I can program around variances), I need to see the output from this command:

 

Count me as one of the "any others".

root@Brunnhilde:~# lshw -quiet -short -c storage
H/W path              Device       Class      Description
=========================================================
/0/100/1.1/0          scsi1        storage    SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon]
/0/100/1c.4/0                      storage    ASM1062 Serial ATA Controller
/0/100/1f.2                        storage    8 Series/C220 Series Chipset Family 6-port SATA Controller 1 [AHCI mode]
/0/1                  scsi0        storage    
/0/2                  scsi2        storage    
/0/3                  scsi3        storage    
/0/4                  scsi4        storage    
/0/5                  scsi5        storage    
/0/6                  scsi7        storage    
/0/7                  scsi8        storage    
root@Brunnhilde:~# 

 

 

37 minutes ago, Pauven said:

@StevenD & @Xaero can you provide the output for:

 


mdcmd status | grep "rdevStatus"

and


mdcmd status | grep "rdevName"

and


df -h

 

 

root@nas:~# mdcmd status | grep "rdevStatus"
rdevStatus.0=DISK_OK
rdevStatus.1=DISK_OK
rdevStatus.2=DISK_OK
rdevStatus.3=DISK_OK
rdevStatus.4=DISK_OK
rdevStatus.5=DISK_OK
rdevStatus.6=DISK_OK
rdevStatus.7=DISK_OK
rdevStatus.8=DISK_OK
rdevStatus.9=DISK_OK
rdevStatus.10=DISK_OK
rdevStatus.11=DISK_OK
rdevStatus.12=DISK_OK
rdevStatus.13=DISK_NP
rdevStatus.14=DISK_NP
rdevStatus.15=DISK_NP
rdevStatus.16=DISK_NP
rdevStatus.17=DISK_NP
rdevStatus.18=DISK_NP
rdevStatus.19=DISK_NP
rdevStatus.20=DISK_NP
rdevStatus.21=DISK_NP
rdevStatus.22=DISK_NP
rdevStatus.23=DISK_NP
rdevStatus.24=DISK_NP
rdevStatus.25=DISK_NP
rdevStatus.26=DISK_NP
rdevStatus.27=DISK_NP
rdevStatus.28=DISK_NP
rdevStatus.29=DISK_OK
root@nas:~# 
root@nas:~# 
root@nas:~# mdcmd status | grep "rdevName"
rdevName.0=sdi
rdevName.1=sdf
rdevName.2=sdj
rdevName.3=sde
rdevName.4=sdq
rdevName.5=sdl
rdevName.6=sdp
rdevName.7=sdo
rdevName.8=sdn
rdevName.9=sdg
rdevName.10=sdh
rdevName.11=sdk
rdevName.12=sdr
rdevName.13=
rdevName.14=
rdevName.15=
rdevName.16=
rdevName.17=
rdevName.18=
rdevName.19=
rdevName.20=
rdevName.21=
rdevName.22=
rdevName.23=
rdevName.24=
rdevName.25=
rdevName.26=
rdevName.27=
rdevName.28=
rdevName.29=sdm
root@nas:~# 
root@nas:~# df -h
Filesystem      Size  Used Avail Use% Mounted on
rootfs           16G  716M   15G   5% /
tmpfs            32M  600K   32M   2% /run
devtmpfs         16G     0   16G   0% /dev
tmpfs            16G     0   16G   0% /dev/shm
cgroup_root     8.0M     0  8.0M   0% /sys/fs/cgroup
tmpfs           128M  1.9M  127M   2% /var/log
/dev/sda1        30G  490M   30G   2% /boot
/dev/loop0      8.7M  8.7M     0 100% /lib/modules
/dev/loop1      5.9M  5.9M     0 100% /lib/firmware
/dev/md1        7.3T  7.1T  239G  97% /mnt/disk1
/dev/md2        7.3T  3.7T  3.7T  51% /mnt/disk2
/dev/md3        7.3T  6.6T  715G  91% /mnt/disk3
/dev/md4        7.3T  6.6T  734G  91% /mnt/disk4
/dev/md5        7.3T  5.5T  1.9T  75% /mnt/disk5
/dev/md6        7.3T  4.1T  3.3T  56% /mnt/disk6
/dev/md7        7.3T  7.2T  165G  98% /mnt/disk7
/dev/md8        7.3T  5.5T  1.9T  75% /mnt/disk8
/dev/md9        7.3T  7.1T  193G  98% /mnt/disk9
/dev/md10       7.3T  6.5T  850G  89% /mnt/disk10
/dev/md11       7.3T  6.9T  458G  94% /mnt/disk11
/dev/md12       7.3T  3.4T  3.9T  47% /mnt/disk12
/dev/nvme0n1p1  477G  395G   82G  83% /mnt/cache
shfs             88T   70T   18T  80% /mnt/user
/dev/sdd1       894G  716G  179G  81% /mnt/disks/APPDATA_BACKUP
/dev/sdc1      1021M  348M  674M  35% /mnt/disks/UNRAIDBOOT
/dev/loop2       20G  4.3G   16G  22% /var/lib/docker
shm              64M     0   64M   0% /var/lib/docker/containers/e60fe82b5262b59081476363199cb7cc3082771ec0b2946a38b7accc59a0a502/mounts/shm
shm              64M     0   64M   0% /var/lib/docker/containers/6c04f6088bd948815e7023f949d83fbc5673f2e6bcf14aea7f42aec79f5e612f/mounts/shm
shm              64M     0   64M   0% /var/lib/docker/containers/9e668c38bba52dea641308a12408f339f7d8824f2ce84d573cc633f518a0cbeb/mounts/shm
shm              64M     0   64M   0% /var/lib/docker/containers/b6a9bf81564477a9cc60cdc5079ed23d8c950ab1b60047972ce0d0339bd7a107/mounts/shm
shm              64M     0   64M   0% /var/lib/docker/containers/ebe0cc2382f0dfe4ff00acd561ad8e50f2b8a723d685ea32b3b306463057fc70/mounts/shm
shm              64M     0   64M   0% /var/lib/docker/containers/0d23c9a0d418f1daab9ba74758001d2a9e2372b4228e03f884eecba0a57dabc4/mounts/shm
shm              64M     0   64M   0% /var/lib/docker/containers/779092989e59bed35c5c36d91c3b68a7c3392da84f56d1da4e240f189fb41eb9/mounts/shm
shm              64M  8.0K   64M   1% /var/lib/docker/containers/f3ffce0a4ddee82e2de28367ebd9d3d99564bfcd95ff7d7cfc0926f315bb8883/mounts/shm
shm              64M     0   64M   0% /var/lib/docker/containers/9ba4d044a1e36b6049f429fa264569894ccb92c264fbfd360b5ac26c5cb250c8/mounts/shm
root@nas:~# 

 

2 hours ago, Pauven said:

lshw -quiet -short -c storage

H/W path                    Device       Class       Description
================================================================
/0/100/1.1/0.1                           storage     X399 Series Chipset SATA Controller
/0/100/1.1/0.2/4/0          scsi11       storage     SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon]
/0/100/1.2/0                             storage     NVMe SSD Controller SM961/PM961
/0/100/8.1/0.2                           storage     FCH SATA Controller [AHCI mode]
/0/117/1.2/0                             storage     WD Black NVMe SSD
/0/117/8.1/0.2                           storage     FCH SATA Controller [AHCI mode]
/0/1                        scsi0        storage
/0/2                        scsi4        storage
/0/3                        scsi5        storage
/0/4                        scsi6        storage
/0/5                        scsi8        storage
/0/6                        scsi1        storage
/0/7                        scsi2        storage
/0/8                        scsi3        storage

 

3 minutes ago, Pauven said:

If anyone has an NVMe drive as part of their array (not cache, but data or parity), please run the above

How about as neither? I have an NVMe drive mounted by UD for a VM.

 

EDIT: I see the first two are only for unraid drives, but the last is still an option.

Edited by jbartlett
7 minutes ago, jbartlett said:

You were wondering whether having a saturated controller would affect the results, and whether moving drives around to maximize bandwidth would help - looks like it did.

 

The file from the 9th is with my PCIe 2.0 SAS controller saturated; the file from the 10th is after I moved two drives to the motherboard controller.

 

Wow!


Interestingly, running grep "rdevStatus" didn't work, but grep -i "rdevstatus" did:

root@BlackHole:~# mdcmd status | grep -i "rdevstatus"
rdevStatus.0=DISK_OK
rdevStatus.1=DISK_OK
rdevStatus.2=DISK_OK
rdevStatus.3=DISK_OK
rdevStatus.4=DISK_OK
rdevStatus.5=DISK_OK
rdevStatus.6=DISK_OK
rdevStatus.7=DISK_OK
rdevStatus.8=DISK_OK
rdevStatus.9=DISK_OK
rdevStatus.10=DISK_OK
rdevStatus.11=DISK_OK
rdevStatus.12=DISK_OK
rdevStatus.13=DISK_OK
rdevStatus.14=DISK_OK
rdevStatus.15=DISK_OK
rdevStatus.16=DISK_OK
rdevStatus.17=DISK_OK
rdevStatus.18=DISK_OK
rdevStatus.19=DISK_OK
rdevStatus.20=DISK_OK
rdevStatus.21=DISK_OK
rdevStatus.22=DISK_OK
rdevStatus.23=DISK_NP
rdevStatus.24=DISK_NP
rdevStatus.25=DISK_NP
rdevStatus.26=DISK_NP
rdevStatus.27=DISK_NP
rdevStatus.28=DISK_NP
rdevStatus.29=DISK_OK
root@BlackHole:~# mdcmd status | grep "rdevStatus"
root@BlackHole:~# 

A similar thing happened with rdevName - I think it may be a web terminal issue, but I'm not sure:


root@BlackHole:~# mdcmd status | grep "rdevName"
root@BlackHole:~# mdcmd status | grep -i "rdevname"
rdevName.0=sdy
rdevName.1=sdw
rdevName.2=sde
rdevName.3=sdf
rdevName.4=sdg
rdevName.5=sdc
rdevName.6=sdt
rdevName.7=sdd
rdevName.8=sdj
rdevName.9=sdu
rdevName.10=sdh
rdevName.11=sdl
rdevName.12=sdk
rdevName.13=sdb
rdevName.14=sdv
rdevName.15=sdm
rdevName.16=sdn
rdevName.17=sdq
rdevName.18=sdr
rdevName.19=sdo
rdevName.20=sds
rdevName.21=sdi
rdevName.22=sdp
rdevName.23=
rdevName.24=
rdevName.25=
rdevName.26=
rdevName.27=
rdevName.28=
rdevName.29=sdx
root@BlackHole:~# 

 

And finally df -h:


root@BlackHole:~# df -h
Filesystem      Size  Used Avail Use% Mounted on
rootfs           24G  1.4G   23G   6% /
tmpfs            32M  1.3M   31M   5% /run
devtmpfs         24G     0   24G   0% /dev
tmpfs            24G     0   24G   0% /dev/shm
cgroup_root     8.0M     0  8.0M   0% /sys/fs/cgroup
tmpfs           128M  904K  128M   1% /var/log
/dev/sda1        59G  4.6G   54G   8% /boot
/dev/loop0       20M   20M     0 100% /lib/modules
/dev/loop1      5.9M  5.9M     0 100% /lib/firmware
/dev/md1        7.3T  3.3T  4.1T  45% /mnt/disk1
/dev/md2        7.3T  2.5T  4.9T  34% /mnt/disk2
/dev/md3        7.3T  728G  6.6T  10% /mnt/disk3
/dev/md4        7.3T  728G  6.6T  10% /mnt/disk4
/dev/md5        7.3T  728G  6.6T  10% /mnt/disk5
/dev/md6        7.3T  728G  6.6T  10% /mnt/disk6
/dev/md7        7.3T  728G  6.6T  10% /mnt/disk7
/dev/md8        7.3T  728G  6.6T  10% /mnt/disk8
/dev/md9        7.3T  844G  6.5T  12% /mnt/disk9
/dev/md10       7.3T  728G  6.6T  10% /mnt/disk10
/dev/md11       7.3T  1.4T  6.0T  19% /mnt/disk11
/dev/md12       7.3T  730G  6.6T  10% /mnt/disk12
/dev/md13       7.3T  728G  6.6T  10% /mnt/disk13
/dev/md14       7.3T  728G  6.6T  10% /mnt/disk14
/dev/md15       7.3T  730G  6.6T  10% /mnt/disk15
/dev/md16       7.3T  728G  6.6T  10% /mnt/disk16
/dev/md17       7.3T  730G  6.6T  10% /mnt/disk17
/dev/md18       7.3T  1.4T  6.0T  18% /mnt/disk18
/dev/md19       7.3T  728G  6.6T  10% /mnt/disk19
/dev/md20       7.3T  728G  6.6T  10% /mnt/disk20
/dev/md21       7.3T  734G  6.6T  10% /mnt/disk21
/dev/md22       7.3T  954G  6.4T  13% /mnt/disk22
/dev/nvme0n1p1  954G  100G  854G  11% /mnt/cache
shfs            161T   22T  139T  14% /mnt/user0
shfs            161T   22T  140T  14% /mnt/user
/dev/loop2       40G  7.9G   31G  21% /var/lib/docker
/dev/loop3      1.0G   17M  905M   2% /etc/libvirt
shm              64M     0   64M   0% /var/lib/docker/containers/ad97b37af764aa83b3276d7f03807a5486a8885f56fdb77f557e5b78f820e150/mounts/shm
shm              64M     0   64M   0% /var/lib/docker/containers/8d286807ba4757698d04b3160d399be1162d0b33dd8cfc6b86bde162bf95f1be/mounts/shm
shm              64M     0   64M   0% /var/lib/docker/containers/1eb3ea0e1e716beee08125eb1f4d65e421bf1182860515e1d0926a6f5f24500d/mounts/shm
shm              64M     0   64M   0% /var/lib/docker/containers/b6e07ad9a92216ffc1a5dd6ef6206852a466eb2aa4b9dfd5a38a990cc14f7d95/mounts/shm
shm              64M     0   64M   0% /var/lib/docker/containers/7935b46776f36856f516675b79cd89261734cea208e0ee25abe162293bde75a2/mounts/shm
shm              64M     0   64M   0% /var/lib/docker/containers/47458aa783d0ec7ca0f5bd3171dac3232d29e7f5bfea8058a8b422633afc486e/mounts/shm
shm              64M  368K   64M   1% /var/lib/docker/containers/05c5042e739fd6f3e2ac99a4e2f4193ae1fb8059d305587ceb2efe960f280cd8/mounts/shm
shm              64M  8.0K   64M   1% /var/lib/docker/containers/a38258d8e8231f6114f033bf8e5f4f36a99e4e0f6ed17948fec61ac54a7369d1/mounts/shm
root@BlackHole:~# 

 

If you are wondering whether my NVMe is part of my array - no, it is not.

Anyway, back to trying to find all the stuff for my actual computer setup so I can get off this tiny laptop and actually look at some code.

6 minutes ago, jbartlett said:

How about as neither? I have an NVMe drive mounted by UD for a VM.

 

EDIT: I see the first two are only for unraid drives, but the last is still an option.

 

I'm thinking just array drives.  This is for the SCSI Host Controllers and Connected Drives report at the end of the UTT results.  A lot of the report requires configuration data for array drives, and so far all these NVMe drives have been non-array cache or Unassigned drives, so they don't fully make it into the report.  I'm trying to connect data from various sources together, so I need to see what NVMe array devices look like.
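
To give a rough idea of the kind of join involved - a hedged sketch, not the actual report code:

#!/bin/bash
# Map each populated array slot (rdevName.N) to its matching lsscsi entry,
# tying mdcmd data to controller/transport info (-i per the grep quirk above).
while IFS='=' read -r key dev; do
    [ -z "$dev" ] && continue               # skip unpopulated (DISK_NP) slots
    slot=${key#rdevName.}
    entry=$(lsscsi -st | grep -w "/dev/$dev")
    echo "slot $slot: /dev/$dev -> ${entry:-no lsscsi match (NVMe?)}"
done < <( mdcmd status | grep -i "rdevname" )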

2 minutes ago, StevenD said:

You're very welcome!  Have you posted an updated version recently?  I have scheduled downtime for tonight to run the Long Test.

Not yet.  I'm trying to get the disk report working correctly, and hope to have UTT 4.1 out soon, maybe even today...

