Pauven

unraid-tunables-tester.sh - A New Utility to Optimize unRAID md_* Tunables



Below are my initial results from UTT v4.1.  The main difference here is that Pass 2 now uses the lowest md_sync_window that reaches at least 99.8% of the fastest speed in Pass 1, and Pass 3 uses the lowest md_sync_window that reaches at least 99.8% of the fastest speed across both the Pass 1 and Pass 2 results.

 

I think I'm happy with using the 99.8%+ value for Pass 2.  The fastest peak speed I've ever seen on my server is 139.5 MB/s, and Pass 2 still hits this a few times (and at lower values, nice!), so I think the new logic helped it find the leading edge of the absolute peak.

 

I'm less convinced that using the 99.8%+ value for Pass 3 was the right choice, as it ends up spending quite a bit of time testing a value (2944 @ 139.3 MB/s) that wasn't the fastest (4032 @ 139.5 MB/s).  Plus, the final results still show 4032 as the Fastest, so having Pass 3 test a lower value seems irrelevant to the final results, and 2944 wasn't even included in them, making Pass 3 seem pointless.

 

I'm thinking the right path here is to use the 99.8%+ value for Pass 2, but the absolute Fastest value for Pass 3.  Thoughts?
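To make the selection rule concrete, here's a rough shell sketch of the Pass 2 pick, using a few made-up (md_sync_window, MB/s) pairs; the sample data and variable names are mine, not UTT's:

```shell
# Hypothetical (md_sync_window, MB/s) result pairs, like the Pass 1 table rows.
results="384 135.4
768 136.3
1536 137.5
3072 139.3
4032 139.5"

# Find the peak speed across all samples.
peak=$(echo "$results" | awk '$2 > max { max = $2 } END { print max }')

# Pick the lowest md_sync_window that reaches at least 99.8% of that peak.
pick=$(echo "$results" | awk -v peak="$peak" '$2 >= 0.998 * peak { print $1 }' | sort -n | head -n 1)

echo "peak=$peak MB/s, lowest window within 99.8%: $pick"
```

With these sample numbers the pick is 3072 rather than the absolute-fastest 4032, which mirrors the Pass 2 behavior described above.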

 

--- INITIAL BASELINE TEST OF CURRENT VALUES (1 Sample Point @ 10min Duration)---
Tst | RAM | stri |  win | req | thresh |  MB/s
----------------------------------------------
  1 | 679 | 7680 | 3840 | 128 |  3800  | 139.5 


--- BASELINE TEST OF UNRAID DEFAULT VALUES (1 Sample Point @ 10min Duration)---
Tst | RAM | stri |  win | req | thresh |  MB/s
----------------------------------------------
  1 | 113 | 1280 |  384 | 128 |   192  | 134.7 


 --- TEST PASS 1 (2.5 Hrs - 12 Sample Points @ 10min Duration) ---
Tst | RAM | stri |  win | req | thresh |  MB/s | thresh |  MB/s | thresh |  MB/s
--------------------------------------------------------------------------------
  1 |  67 |  768 |  384 | 128 |   376  |  41.5 |   320  | 135.4 |   192  | 134.6
  2 | 135 | 1536 |  768 | 128 |   760  |  61.6 |   704  | 136.3 |   384  | 135.3
  3 | 271 | 3072 | 1536 | 128 |  1528  |  84.9 |  1472  | 137.5 |   768  | 136.2
  4 | 543 | 6144 | 3072 | 128 |  3064  | 130.2 |  3008  | 139.3 |  1536  | 137.5

 --- TEST PASS 1_HIGH (40 Min - 3 Sample Points @ 10min Duration)---
Tst | RAM | stri |  win | req | thresh |  MB/s | thresh |  MB/s | thresh |  MB/s
--------------------------------------------------------------------------------
  1 |1086 |12288 | 6144 | 128 |  6136  | 139.3 |  6080  | 139.4 |  3072  | 139.3

 --- TEST PASS 1_VERYHIGH (40 Min - 3 Sample Points @ 10min Duration)---
Tst | RAM | stri |  win | req | thresh |  MB/s | thresh |  MB/s | thresh |  MB/s
--------------------------------------------------------------------------------
  1 |1630 |18432 | 9216 | 128 |  9208  | 139.2 |  9152  | 138.0 |  4608  | 138.5

 --- Using md_sync_window=3072 & md_sync_thresh=window-64 for Pass 2 ---

 --- TEST PASS 2 (10 Hrs - 49 Sample Points @ 10min Duration) ---
Tst | RAM | stri |  win | req | thresh |  MB/s
----------------------------------------------
  1 | 271 | 3072 | 1536 | 128 |  1472  | 137.5
  2 | 283 | 3200 | 1600 | 128 |  1536  | 137.6
  3 | 294 | 3328 | 1664 | 128 |  1600  | 137.6
  4 | 305 | 3456 | 1728 | 128 |  1664  | 137.9
  5 | 317 | 3584 | 1792 | 128 |  1728  | 137.8
  6 | 328 | 3712 | 1856 | 128 |  1792  | 137.8
  7 | 339 | 3840 | 1920 | 128 |  1856  | 138.0
  8 | 350 | 3968 | 1984 | 128 |  1920  | 138.2
  9 | 362 | 4096 | 2048 | 128 |  1984  | 138.1
 10 | 373 | 4224 | 2112 | 128 |  2048  | 138.3
 11 | 384 | 4352 | 2176 | 128 |  2112  | 138.5
 12 | 396 | 4480 | 2240 | 128 |  2176  | 138.4
 13 | 407 | 4608 | 2304 | 128 |  2240  | 138.5
 14 | 418 | 4736 | 2368 | 128 |  2304  | 138.8
 15 | 430 | 4864 | 2432 | 128 |  2368  | 138.6
 16 | 441 | 4992 | 2496 | 128 |  2432  | 138.4
 17 | 452 | 5120 | 2560 | 128 |  2496  | 139.0
 18 | 464 | 5248 | 2624 | 128 |  2560  | 138.8
 19 | 475 | 5376 | 2688 | 128 |  2624  | 138.5
 20 | 486 | 5504 | 2752 | 128 |  2688  | 139.2
 21 | 498 | 5632 | 2816 | 128 |  2752  | 139.0
 22 | 509 | 5760 | 2880 | 128 |  2816  | 139.2
 23 | 520 | 5888 | 2944 | 128 |  2880  | 139.3
 24 | 532 | 6016 | 3008 | 128 |  2944  | 139.2
 25 | 543 | 6144 | 3072 | 128 |  3008  | 139.3
 26 | 554 | 6272 | 3136 | 128 |  3072  | 139.4
 27 | 566 | 6400 | 3200 | 128 |  3136  | 139.2
 28 | 577 | 6528 | 3264 | 128 |  3200  | 139.4
 29 | 588 | 6656 | 3328 | 128 |  3264  | 139.3
 30 | 600 | 6784 | 3392 | 128 |  3328  | 139.2
 31 | 611 | 6912 | 3456 | 128 |  3392  | 139.4
 32 | 622 | 7040 | 3520 | 128 |  3456  | 139.4
 33 | 634 | 7168 | 3584 | 128 |  3520  | 139.4
 34 | 645 | 7296 | 3648 | 128 |  3584  | 139.4
 35 | 656 | 7424 | 3712 | 128 |  3648  | 139.4
 36 | 668 | 7552 | 3776 | 128 |  3712  | 139.3
 37 | 679 | 7680 | 3840 | 128 |  3776  | 139.4
 38 | 690 | 7808 | 3904 | 128 |  3840  | 139.4
 39 | 701 | 7936 | 3968 | 128 |  3904  | 139.4
 40 | 713 | 8064 | 4032 | 128 |  3968  | 139.5
 41 | 724 | 8192 | 4096 | 128 |  4032  | 139.4
 42 | 735 | 8320 | 4160 | 128 |  4096  | 139.4
 43 | 747 | 8448 | 4224 | 128 |  4160  | 139.5
 44 | 758 | 8576 | 4288 | 128 |  4224  | 139.3
 45 | 769 | 8704 | 4352 | 128 |  4288  | 139.4
 46 | 781 | 8832 | 4416 | 128 |  4352  | 139.5
 47 | 792 | 8960 | 4480 | 128 |  4416  | 139.4
 48 | 803 | 9088 | 4544 | 128 |  4480  | 139.5
 49 | 815 | 9216 | 4608 | 128 |  4544  | 139.5

 --- Using md_sync_window=2944 for Pass 3 ---

 --- TEST PASS 3 (4 Hrs - 18 Sample Points @ 10min Duration) ---
Tst | RAM | stri |  win | req | thresh |  MB/s
----------------------------------------------
 1a | 520 | 5888 | 2944 | 128 |  2943  | 125.9
 1b | 520 | 5888 | 2944 | 128 |  2940  | 126.6
 1c | 520 | 5888 | 2944 | 128 |  2936  | 123.5
 1d | 520 | 5888 | 2944 | 128 |  2932  | 123.5
 1e | 520 | 5888 | 2944 | 128 |  2928  | 131.8
 1f | 520 | 5888 | 2944 | 128 |  2924  | 137.3
 1g | 520 | 5888 | 2944 | 128 |  2920  | 138.1
 1h | 520 | 5888 | 2944 | 128 |  2916  | 139.1
 1i | 520 | 5888 | 2944 | 128 |  2912  | 139.3
 1j | 520 | 5888 | 2944 | 128 |  2908  | 139.1
 1k | 520 | 5888 | 2944 | 128 |  2904  | 139.2
 1l | 520 | 5888 | 2944 | 128 |  2900  | 139.3
 1m | 520 | 5888 | 2944 | 128 |  2896  | 139.1
 1n | 520 | 5888 | 2944 | 128 |  2892  | 139.3
 1o | 520 | 5888 | 2944 | 128 |  2888  | 138.3
 1p | 520 | 5888 | 2944 | 128 |  2884  | 138.9
 1q | 520 | 5888 | 2944 | 128 |  2880  | 139.3
 1r | 520 | 5888 | 2944 | 128 |  1472  | 137.6

The results below do NOT include the Baseline test of current values.

The Fastest settings tested give a peak speed of 139.5 MB/s
     md_sync_window: 4032          md_num_stripes: 8064
     md_sync_thresh: 3968             nr_requests: 128
This will consume 713 MB (34 MB more than your current utilization of 679 MB)

The Thriftiest settings (95% of Fastest) give a peak speed of 135.4 MB/s
     md_sync_window: 384          md_num_stripes: 768
     md_sync_thresh: 320             nr_requests: 128
This will consume 67 MB (612 MB less than your current utilization of 679 MB)

The Recommended settings (99% of Fastest) give a peak speed of 138.2 MB/s
     md_sync_window: 1984          md_num_stripes: 3968
     md_sync_thresh: 1920             nr_requests: 128
This will consume 350 MB (329 MB less than your current utilization of 679 MB)

NOTE: Adding additional drives will increase memory consumption.

In Unraid, go to Settings > Disk Settings to set your chosen parameter values.
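As a side note, the chosen values can also be applied live from the console (non-persistent, cleared on reboot) with Unraid's mdcmd tool, which is what UTT itself uses to vary the values during testing; shown here with the Recommended numbers above, assuming mdcmd is in PATH on a stock Unraid build:

```shell
# Apply the Recommended values at runtime (non-persistent).
mdcmd set md_num_stripes 3968
mdcmd set md_sync_window 1984
mdcmd set md_sync_thresh 1920
```

The Disk Settings page is still the way to make them persistent.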

Completed: 15 Hrs 20 Min 40 Sec.

 

1 minute ago, DanielCoffey said:

I am sorry but I must have cleared those logs a long time ago. I would have cleared them back in 2017 and don't have a local copy of the logs any more. Unless they are buried on the flash I don't know where they are.

 

I can certainly do an extra-long test on the 4.1 script when you have it.

 

\\server\flash\preclear_reports or /boot/preclear_reports

 

Thanks!


Looks like the logs are long gone from the flash - we have had so many updates in all that time. Sorry.

23 minutes ago, Pauven said:

Hmmm, I don't know.  Typo?  Copy&Paste relic?  I agree it looks wrong.  Do you think that was the problem?

I'll be able to test when I get home, going to change the declaration order to:
    InitVars
    Getlshw
    GetDisks
    ReportHeader
    ReportFooter

Which *should* give me just the header and footer output so I can tinker with this particular problem. I have a feeling the error is in that general area, just need to pinpoint it. One thought is that it could be assigning it properly and then overwriting it due to erroneous logic, but I can't see any obvious logical errors.

13 hours ago, Xaero said:

I think some of my hardware may break your script - note that my NVMe SSDs report an extra column (pcie, then the bus address) compared to regular disks.

 

Though I have an NVMe cache drive, lsscsi -st doesn't even show it on my server.  It certainly makes development harder when you can't even simulate all the possibilities.

root@Tower:/boot/utt# lsscsi -st
[0:0:0:0]    disk    usb:1-5:1.0                     /dev/sda   4.00GB
[12:0:0:0]   disk    sas:0x0000000000000000          /dev/sdb   3.00TB
[12:0:1:0]   disk    sas:0x0100000000000000          /dev/sdc   3.00TB
[12:0:2:0]   disk    sas:0x0200000000000000          /dev/sdd   3.00TB
[12:0:3:0]   disk    sas:0x0300000000000000          /dev/sde   3.00TB
[12:0:4:0]   disk    sas:0x0400000000000000          /dev/sdf   8.00TB
[12:0:5:0]   disk    sas:0x0700000000000000          /dev/sdg   8.00TB
[13:0:0:0]   disk    sas:0x0000000000000000          /dev/sdh   8.00TB
[13:0:1:0]   disk    sas:0x0100000000000000          /dev/sdi   3.00TB
[13:0:2:0]   disk    sas:0x0200000000000000          /dev/sdj   3.00TB
[13:0:3:0]   disk    sas:0x0300000000000000          /dev/sdk   3.00TB
[13:0:4:0]   disk    sas:0x0400000000000000          /dev/sdl   3.00TB
[13:0:5:0]   disk    sas:0x0500000000000000          /dev/sdm   3.00TB
[13:0:6:0]   disk    sas:0x0600000000000000          /dev/sdn   3.00TB
[13:0:7:0]   disk    sas:0x0700000000000000          /dev/sdo   3.00TB
[14:0:0:0]   disk    sas:0x0000000000000000          /dev/sdp   3.00TB
[14:0:1:0]   disk    sas:0x0100000000000000          /dev/sdq   3.00TB
[14:0:2:0]   disk    sas:0x0200000000000000          /dev/sdr   3.00TB
[14:0:3:0]   disk    sas:0x0300000000000000          /dev/sds   3.00TB
[14:0:4:0]   disk    sas:0x0400000000000000          /dev/sdt   3.00TB
[14:0:5:0]   disk    sas:0x0500000000000000          /dev/sdu   3.00TB
[14:0:6:0]   disk    sas:0x0600000000000000          /dev/sdv   4.00TB
[14:0:7:0]   disk    sas:0x0700000000000000          /dev/sdw   4.00TB

 

2 minutes ago, DanielCoffey said:

Looks like the logs are long gone from the flash - we have had so many updates in all that time. Sorry.

 

No worries.  I was mainly just curious if they were around 16.5 hours.  A pre-clear read on a single disk should operate at 100% speed (in theory), so if your parity check completed in the same time, then it is operating at 100% maximum speed. 

 

I really have no idea if a parity check could complete in the same time as a pre-clear read pass (parity check overhead? bandwidth limitations with all drives running concurrently?), but I'm fairly confident in saying that a parity check could never be faster than a pre-clear read pass.  So it makes a nice benchmark to compare against.

6 minutes ago, Xaero said:

I'll be able to test when I get home, going to change the declaration order to: […] I have a feeling the error is in that general area, just need to pinpoint it.

 

Just an FYI:  Getlshw is deprecated on Unraid 6.x.  Even back on Unraid 6.2, 3 years ago, I made a note that it didn't seem necessary.  I've been thinking of removing it, but I always have this fear that as soon as I remove it, someone will need it.  Regardless, your declaration order should work fine.

 

I previously added a lot of debug echo statements to GetDisks and DiskReport to troubleshoot the variables and arrays as they were processed.  That's how I got it working on my server.  Of course, those echo statements are long gone now, sorry.


Any reason why the disk report at the end is showing you have a 512 GB Parity drive, but 3 & 6 TB data drives?


That looks wrong. My cache drive is 512GB but parity is 6TB

2 minutes ago, tmchow said:

 


That looks wrong. My cache drive is 512GB but parity is 6TB

Please run lsscsi -st from the command line and paste the results here.

47 minutes ago, Pauven said:

Please run lsscsi -st from the command line and paste the results here.

One problem is that my config has changed since I ran it: I reformatted my original single cache SSD from xfs to btrfs so I could add a second cache SSD.

But here's the output:

 

[0:0:0:0]    disk    usb:4-1:1.0                     /dev/sda   15.3GB
[1:0:0:0]    disk    sas:0x4433221103000000          /dev/sdb   3.00TB
[1:0:1:0]    disk    sas:0x4433221100000000          /dev/sdc   3.00TB
[1:0:2:0]    disk    sas:0x4433221101000000          /dev/sdd   3.00TB
[1:0:3:0]    disk    sas:0x4433221102000000          /dev/sde   3.00TB
[1:0:4:0]    disk    sas:0x4433221104000000          /dev/sdf   6.00TB
[1:0:5:0]    disk    sas:0x4433221105000000          /dev/sdg   6.00TB
[1:0:6:0]    disk    sas:0x4433221106000000          /dev/sdh   6.00TB
[1:0:7:0]    disk    sas:0x4433221107000000          /dev/sdi   6.00TB
[2:0:0:0]    disk    sata:5707c181007ca132           /dev/sdj    512GB
[3:0:0:0]    disk    sata:5707c1810043a27a           /dev/sdk    512GB

 


One question: What should I have `Tunable (enable NCQ)' set to? It's currently set to "off", but I'm not sure if it should be an alternate setting.

1 hour ago, tmchow said:

One question: What should I have `Tunable (enable NCQ)' set to? It's currently set to "off", but I'm not sure if it should be an alternate setting.

 

For the purposes of Parity Check speeds and the UTT tests, it doesn't really matter.

 

Native Command Queuing (NCQ) is primarily a feature that affects lots of random reads/writes.  Imagine you were given a list of 100 random books to check out of your library.  If you worked the list top to bottom, you would ping-pong all over the library, running from aisle to aisle, trying to find the books in the listed order.  NCQ effectively re-orders the list, so you can grab all the books on aisle 1 before going to aisle 2, and so on.  Moving the drive heads around on spinning disk platters takes time, so you want to minimize movements between reads/writes for best performance, which NCQ does.  So it can make random operations faster - especially if there are a lot of them.  I'm not sure what you'd have to do to generate this type of load in a home environment; this is more of an enterprise feature.

 

The Parity Check, on the other hand, is like being tasked to read all the books in the library, one after another, without skipping any, going shelf by shelf, aisle by aisle.

 

I think it might even be possible that NCQ can make sequential, non-random tasks slower, as there is some overhead in the NCQ processing, though this may not be the issue it was when NCQ first came on the market.
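For anyone who wants to poke at this on their own drives: on libata/SCSI disks, NCQ is exposed through the device's queue_depth attribute in sysfs (a depth of 1 disables NCQ; the drive's maximum, commonly 31, enables it).  A small sketch - the helper function and the sysfs-root parameter are my additions so it can be exercised outside a live server:

```shell
# Report a drive's NCQ queue depth. The sysfs root is a parameter so the
# helper can be pointed at a test directory instead of a live /sys.
show_queue_depth() {
    local sysroot=$1 dev=$2
    local qd_file="$sysroot/block/$dev/device/queue_depth"
    if [ -r "$qd_file" ]; then
        cat "$qd_file"
    else
        echo "n/a"
    fi
}

# On a live box:  show_queue_depth /sys sdb
# Writing 1 to the same file effectively disables NCQ:
#   echo 1 > /sys/block/sdb/device/queue_depth
```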

1 hour ago, Pauven said:

For the purposes of Parity Check speeds and the UTT tests, it doesn't really matter. […]

Thanks so much for the detailed reply. Very much appreciated!

Posted (edited)

So uh, this will be a weekend project.

Turns out I have 3 pieces of hardware that completely break that entire section of the script.

The first one was an easy fix; the NVMe SSDs break the array declaration because you can't have a ":" in the name of a bash variable.
I reworked your sed at line 132 to:

< <( sed -e 's/://g' -e 's/\[/scsi/g' -e 's/]//g' < <( lsscsi -H ) )

 

That takes care of that, but I feel I should get a bit more "advanced" with it, since we could strip all invalid characters from that area.
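Along those lines, one hypothetical way to get more "advanced" is a tiny helper that maps an lsscsi host tag straight to a legal bash identifier by stripping every character bash won't accept in a variable name (the function name and scsi prefix follow the script's convention, but this exact helper is my sketch, not UTT code):

```shell
# Turn an lsscsi host tag like "[N:0]" or "[12]" into a legal bash
# identifier ("scsiN0", "scsi12") by stripping every invalid character.
sanitize_host() {
    echo "$1" | sed -e 's/\[/scsi/' -e 's/\]//' -e 's/[^A-Za-z0-9_]//g'
}

sanitize_host '[N:0]'   # -> scsiN0, so declare -A no longer chokes
sanitize_host '[12]'    # -> scsi12
```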

From there, it vomits on my megaraid controller at line 215.

My resulting output is kind of comical:


SCSI Host Controllers and Connected Drives
--------------------------------------------------

[0] scsi0       usbstorage -
[N:1:1:1]       parity          sdy     /dev/nvme1n1    WDC WD80EMAZ-00W

[1] scsi1       megaraidsas -   MegaRAID SAS 2008 [Falcon]
        disk1           sdw             WDC WD80EFAX-68L
[N:1:1:1]       parity          sdy     /dev/nvme1n1    WDC WD80EMAZ-00W

[N0] scsiN0     devnvme0 -

[N1] scsiN1     devnvme1 -


I, uh... I have 24 online 8TB Reds.

It seems like the associative arrays are off by one, but that doesn't explain the duplicated output on the 0 and then the 1. I'll have to poke at it when I have time to sit down and really sink my teeth into it.

 

Edited by Xaero


I tried running version 4.0 against my system and noticed some errors displayed before the screen cleared and the script asked me if I wanted to continue.

 

Querying lsscsi for the SCSI Hosts
./unraid6x-tunables-tester.sh: line 127: declare: `scsiN:0': not a valid identifier
./unraid6x-tunables-tester.sh: line 128: scsiN:0[scsibus]=N:0: command not found
./unraid6x-tunables-tester.sh: line 130: scsiN:0[driver]=/dev/nvme0: No such file or directory
./unraid6x-tunables-tester.sh: line 127: declare: `scsiN:1': not a valid identifier
./unraid6x-tunables-tester.sh: line 128: scsiN:1[scsibus]=N:1: command not found
./unraid6x-tunables-tester.sh: line 130: scsiN:1[driver]=/dev/nvme1: No such file or directory
Querying lshw for the SCSI Host Names, please wait (may take several minutes)

 

 

nas-diagnostics-20190809-0744.zip

6 hours ago, jbartlett said:

I tried running version 4.0 against my system and I noticed some errors displaying before the screen cleared […]

Correct - you probably have an NVMe SSD reporting as a SCSI device in the kernel drivers. I'm not sure if this is a kernel change in 6.7.x, as Pauven (I believe) is running a 6.6.x build. But the change I posted above addresses this specific problem. There's some debugging that needs to be done with the disk reporting. These errors are purely informational output in the report and should not affect the results of the tester.

3 hours ago, Xaero said:

you probably have an NVMe SSD reporting as a SCSI device

Two, it looks like.

[N:0]  /dev/nvme0  Samsung SSD 960 EVO 500GB         S3X4NB0K309****     3B7QCXE7
[N:1]  /dev/nvme1  WDC WDS256G1X0C-00ENX0            17501442****        B35900WD

(last 4 of SN removed by me)


Started a test run today, looking forward to sharing the results when done.


Ran 4.0 through a short test; it reported the following. Probably related to the above issue I posted. It also reported the same controller 3 times at the end.

 

Completed: 0 Hrs 2 Min 34 Sec.
./unraid6x-tunables-tester.sh: line 200: scsiN: 0[scsibus]: syntax error: invalid arithmetic operator (error token is "[scsibus]")
./unraid6x-tunables-tester.sh: line 201: scsiN: 0[driver]: syntax error: invalid arithmetic operator (error token is "[driver]")
./unraid6x-tunables-tester.sh: line 202: scsiN: 0[name]: syntax error: invalid arithmetic operator (error token is "[name]")
./unraid6x-tunables-tester.sh: line 204: ${#scsiN:0[@]}: bad substitution
./unraid6x-tunables-tester.sh: line 209: scsiN: 0[@]: syntax error: invalid arithmetic operator (error token is "[@]")
./unraid6x-tunables-tester.sh: line 200: scsiN: 1[scsibus]: syntax error: invalid arithmetic operator (error token is "[scsibus]")
./unraid6x-tunables-tester.sh: line 201: scsiN: 1[driver]: syntax error: invalid arithmetic operator (error token is "[driver]")
./unraid6x-tunables-tester.sh: line 202: scsiN: 1[name]: syntax error: invalid arithmetic operator (error token is "[name]")
./unraid6x-tunables-tester.sh: line 204: ${#scsiN:1[@]}: bad substitution
./unraid6x-tunables-tester.sh: line 209: scsiN: 1[@]: syntax error: invalid arithmetic operator (error token is "[@]")

 

ShortSyncTestReport_2019_08_09_1101.zip

1 hour ago, jbartlett said:

Two, it looks like. […]

Adding the --no-nvme option excluded the NVMe drives from the output:

 

lsscsi -H --no-nvme

5 minutes ago, jbartlett said:

Adding the --no-nvme option excluded the NVMe drives from the output

 

lsscsi -H --no-nvme

 

I suppose someone could add an NVMe drive to their array, no?  I seem to remember @johnnie.black having an Unraid server using just SSDs.

 

The --no-nvme option might be a good solution to at least fix this report.  As Xaero mentioned, these error messages are harmless and informational only, but unwanted all the same.
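In case anyone's lsscsi build predates --no-nvme, the same effect can be had by filtering out the [N:...] host lines; the sample data below is hypothetical:

```shell
# Hypothetical captured `lsscsi -H` output from a box with two NVMe drives.
hosts='[0]    usb-storage
[1]    megaraid_sas
[N:0]  /dev/nvme0
[N:1]  /dev/nvme1'

# Equivalent of --no-nvme on older lsscsi builds: drop the [N:...] hosts.
filtered=$(printf '%s\n' "$hosts" | grep -v '^\[N')
printf '%s\n' "$filtered"
```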


Just started running the long test. I notice the block option isn't working for me - I'm still receiving parity check alerts. Also, where are the Tunables tester logs located?


The logs will be either in /boot on the flash or in the folder where you placed the .sh script. That is where mine were, anyway - a pair of files, one .txt and one .csv.

