unraid-tunables-tester.sh - A New Utility to Optimize unRAID md_* Tunables


Result from a run with Dockers turned off. The test results have better consistency.

 

--- INITIAL BASELINE TEST OF CURRENT VALUES (1 Sample Point @ 10min Duration)---
Tst | RAM | stri |  win | req | thresh |  MB/s
----------------------------------------------
  1 | 152 | 6912 | 3456 | 128 |  3392  | 181.7 

 

The Fastest settings tested give a peak speed of 182.2 MB/s
     md_sync_window: 9216          md_num_stripes: 18432
     md_sync_thresh: 9215             nr_requests: 128
This will consume 406 MB (254 MB more than your current utilization of 152 MB)

The Thriftiest settings (95% of Fastest) give a peak speed of 179.4 MB/s
     md_sync_window: 192          md_num_stripes: 384
     md_sync_thresh: 184             nr_requests: 128
This will consume 8 MB (144 MB less than your current utilization of 152 MB)

The Recommended settings (99% of Fastest) give a peak speed of 180.8 MB/s
     md_sync_window: 256          md_num_stripes: 512
     md_sync_thresh: 248             nr_requests: 128
This will consume 11 MB (141 MB less than your current utilization of 152 MB)

NOTE: Adding additional drives will increase memory consumption.

In Unraid, go to Settings > Disk Settings to set your chosen parameter values.

Completed: 17 Hrs 44 Min 58 Sec.

LongSyncTestReport_2019_08_10_2343.txt

12 minutes ago, BRiT said:

Result from a run with Dockers turned off. The test results have better consistency.

 

Much better.  Looks like the accuracy is +/- 0.2 MB/s.

 

The new logic in UTT v4.1 would have used md_sync_window 6144 (from TEST PASS 1_HIGH) for Pass 2, and tested from 3072 to 9216.  All things considered, I think the v4.1 results would be identical to these results for you, as your server has a really flat curve that starts extremely low, and the new logic won't really affect those results.

32 minutes ago, Pauven said:

Could anyone who has at least 2 cache drives please run this command and provide the output:

 


egrep  "\[|idx"  /var/local/emhttp/disks.ini

 

["parity"]
idx="0"
["disk1"]
idx="1"
["disk2"]
idx="2"
["disk3"]
idx="3"
["disk4"]
idx="4"
["disk5"]
idx="5"
["disk6"]
idx="6"
["disk7"]
idx="7"
["disk8"]
idx="8"
["disk9"]
idx="9"
["parity2"]
idx="29"
["cache"]
idx="30"
["cache2"]
idx="31"
["flash"]
idx="54"

 


I finally figured out why my NVMe drives are not showing. 

 

On Unraid 6.6.6 (which is what I am running), the lsscsi version is 0.29, which doesn't have support for NVMe.

 

Later versions of Unraid have lsscsi version 0.30, which is the latest and has NVMe support.

 

Anyone know exactly which version of Unraid upgraded lsscsi to v0.30?  Never mind, I just read in the 6.7.0 release notes that lsscsi was upgraded to 0.30.

1 hour ago, Pauven said:

I finally figured out why my NVMe drives are not showing. 

 

On Unraid 6.6.6 (which is what I am running), the lsscsi version is 0.29, which doesn't have support for NVMe.

 

Later versions of Unraid have lsscsi version 0.30, which is the latest and has NVMe support.

 

Anyone know exactly which version of Unraid upgraded lsscsi to v0.30?  Never mind, I just read in the 6.7.0 release notes that lsscsi was upgraded to 0.30.

 

So for users on older versions of Unraid 6.x, pre-6.7.0, would it be a good feature to have UTT offer to upgrade lsscsi to v0.30?

 

If so, could someone help me out with the commands to do this?  I'm a Windows guy, and I really get stumped when it comes to installing packages and updates unless there's a step-by-step guide.

 

I looked at my own code from years ago to install lshw, and modified it to upgrade lsscsi to v0.30.  Looks like it is working.
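
For anyone curious, the general shape of that upgrade routine is something like the sketch below. The package filename and mirror URL here are placeholders/assumptions, not necessarily what UTT actually uses:

# Sketch only: check the installed lsscsi version and upgrade via Slackware's pkgtools.
PKG="lsscsi-0.30-x86_64-1.txz"                               # assumed package name
URL="https://mirrors.example.com/slackware64/ap/${PKG}"      # placeholder mirror URL

ver=$(lsscsi --version 2>&1 | grep -oE '[0-9]+\.[0-9]+' | head -n1)
if [[ "$ver" < "0.30" ]]; then
    echo "lsscsi $ver detected; upgrading to 0.30..."
    wget -q -O "/tmp/$PKG" "$URL" && upgradepkg --install-new "/tmp/$PKG"
fi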

5 hours ago, Pauven said:

Thanks @Xaero.  I just added the -i to the egreps in the UTT script, just in case.

 

Any idea why your nvme1n1 drive doesn't show up in your df -H results?

The drive is part of a btrfs RAID 1. The primary disk is mounted, and the secondary disk gets identical data written to it. At least that's how I understand it. I see activity on both of them when I write data to the cache volume, so I assume it's working as intended, though I haven't bothered to read into it. I'm considering migrating from the RAID 1 setup to a RAID 0 setup when I get my 10GbE network going. I plan on having 10GbE inside the rack with a 10GbE uplink to the switch, using a dual-10GbE card, meaning the server could easily see 20Gb/s if I really hit it, especially when migrating data from older server(s) and/or working with disk images while streaming.

Oh, and to clarify: df only reports mounted filesystems.
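
For what it's worth, you can still see both members of the pool even though df only lists the mounted filesystem; for example (the /mnt/cache mount point here is an assumption):

btrfs filesystem show /mnt/cache      # lists every device in the btrfs pool, e.g. nvme0n1p1 and nvme1n1p1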


UTT v4.1 BETA 1 is attached.

 

I'm primarily concerned about the SCSI Hosts and Discs report, so if I could get a few users to run this with a Short test and post the reports, that would be great.

 

This does have the new logic to find the leading edge for Pass 2, rather than the peak, so feel free to run the longer tests if you desire; just run a Short test first and share those results.

 

Here's the changelog:

# V4.1: Added a function to use the first result with 99.8% max speed for Pass 2
#       Fixed Server Name in Notification messages (was hardcoded TOWER)
#       Many fixes to the SCSI Host Controllers and Connected Drives report
#       Added a function to check lsscsi version and optionally upgrade to v0.30
#       Cosmetic menu tweaks - by Pauven 08/11/2019
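
As a rough illustration of that first item (this is not the actual UTT code; the input format and names are assumptions), picking the first Pass 1 window that reaches 99.8% of the fastest measured speed could look like this:

pick_leading_edge() {
    # Reads "md_sync_window speed" pairs on stdin; prints the first window
    # whose speed is at least 99.8% of the fastest speed seen.
    awk '{ win[NR]=$1; spd[NR]=$2; if ($2 > max) max=$2 }
         END { for (i=1; i<=NR; i++) if (spd[i] >= 0.998*max) { print win[i]; exit } }'
}

# Fed with sample window/speed pairs like the Pass 1 results posted in this thread:
printf '%s\n' "384 171.3" "768 173.4" "1536 174.9" "3072 176.0" "6144 177.6" "9216 177.8" \
    | pick_leading_edge      # prints 6144, the leading edge rather than the 9216 peak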

 

 


Here's a short test on v4.1-beta.  I just kicked off a long test. I will report back tomorrow.

 

                   Unraid 6.x Tunables Tester v4.1 BETA 1 by Pauven

             Tunables Report produced Sun Aug 11 21:00:16 CDT 2019

                              Run on server: nas

                             Short Parity Sync Test


Current Values:  md_num_stripes=4480, md_sync_window=2048, md_sync_thresh=2000
                 Global nr_requests=128
                 Disk Specific nr_requests Values:
                    sdj=128, sdi=128, sdf=128, sde=128, sdp=128, sdo=128, 
                    sdq=128, sdr=128, sdh=128, sdg=128, sdm=128, sdl=128, 
                    sdn=128, sdk=128, 


--- INITIAL BASELINE TEST OF CURRENT VALUES (1 Sample Point @ 10sec Duration)---
Tst | RAM | stri |  win | req | thresh |  MB/s
----------------------------------------------
  1 | 256 | 4480 | 2048 | 128 |  2000  | 176.3 


--- BASELINE TEST OF UNRAID DEFAULT VALUES (1 Sample Point @ 10sec Duration)---
Tst | RAM | stri |  win | req | thresh |  MB/s
----------------------------------------------
  1 |  73 | 1280 |  384 | 128 |   192  | 126.7 


 --- TEST PASS 1 (2 Min - 12 Sample Points @ 10sec Duration) ---
Tst | RAM | stri |  win | req | thresh |  MB/s | thresh |  MB/s | thresh |  MB/s
--------------------------------------------------------------------------------
  1 |  43 |  768 |  384 | 128 |   376  | 171.3 |   320  | 156.3 |   192  | 127.4
  2 |  87 | 1536 |  768 | 128 |   760  | 173.4 |   704  | 173.0 |   384  | 169.7
  3 | 175 | 3072 | 1536 | 128 |  1528  | 174.9 |  1472  | 174.1 |   768  | 173.9
  4 | 351 | 6144 | 3072 | 128 |  3064  | 176.0 |  3008  | 175.9 |  1536  | 174.4

 --- TEST PASS 1_HIGH (30 Sec - 3 Sample Points @ 10sec Duration)---
Tst | RAM | stri |  win | req | thresh |  MB/s | thresh |  MB/s | thresh |  MB/s
--------------------------------------------------------------------------------
  1 | 702 |12288 | 6144 | 128 |  6136  | 177.6 |  6080  | 177.3 |  3072  | 177.6

 --- TEST PASS 1_VERYHIGH (30 Sec - 3 Sample Points @ 10sec Duration)---
Tst | RAM | stri |  win | req | thresh |  MB/s | thresh |  MB/s | thresh |  MB/s
--------------------------------------------------------------------------------
  1 |1054 |18432 | 9216 | 128 |  9208  | 177.8 |  9152  | 178.0 |  4608  | 177.4

 --- END OF SHORT AUTO TEST FOR DETERMINING IF YOU SHOULD RUN THE REAL TEST ---

If the speeds changed with different values you should run a NORMAL/LONG test.
If speeds didn't change then adjusting Tunables likely won't help your system.

Completed: 0 Hrs 3 Min 36 Sec.


NOTE: Use the smallest set of values that produce good results. Larger values
      increase server memory use, and may cause stability issues with Unraid,
      especially if you have any add-ons or plug-ins installed.


System Info:  nas
              Unraid version 6.7.3-rc1
                   md_num_stripes=4480
                   md_sync_window=2048
                   md_sync_thresh=2000
                   nr_requests=128 (Global Setting)
                   sbNumDisks=14
              CPU: Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz
              RAM: 32GiB System Memory

Outputting free low memory information...

              total        used        free      shared  buff/cache   available
Mem:       32941156      569084    31813128      540572      558944    31479924
Low:       32941156     1128028    31813128
High:             0           0           0
Swap:             0           0           0


SCSI Host Controllers and Connected Drives
--------------------------------------------------

[0] scsi0	usb-storage -	
[5:0:6:0]	parity		sdj	8.00TB	HGST HDN728080AL

[1] scsi1	ata_piix -	

[2] scsi2	ata_piix -	

[3] scsi3	vmw_pvscsi -	PVSCSI SCSI Controller

[4] scsi4	vmw_pvscsi -	PVSCSI SCSI Controller

[5] scsi5	mpt3sas -	SAS3416 Fusion-MPT Tri-Mode I/O Controller Chip (IOC)
[5:0:0:0]	disk3		sde	8.00TB	HGST HDN728080AL
[5:0:10:0]	disk12		sdn	8.00TB	HGST HDN728080AL
[5:0:11:0]	disk5		sdo	8.00TB	HGST HDN728080AL
[5:0:12:0]	disk4		sdp	8.00TB	HGST HDN728080AL
[5:0:13:0]	disk6		sdq	8.00TB	HGST HDN728080AL
[5:0:2:0]	disk2		sdf	8.00TB	HGST HDN728080AL
[5:0:3:0]	disk9		sdg	8.00TB	HGST HDN728080AL
[5:0:4:0]	disk8		sdh	8.00TB	HGST HDN728080AL
[5:0:5:0]	disk1		sdi	8.00TB	HGST HDN728080AL
[5:0:6:0]	parity		sdj	8.00TB	HGST HDN728080AL
[5:0:6:0]	parity		sdj	8.00TB	HGST HDN728080AL
[5:0:7:0]	parity2		sdk	8.00TB	HGST HDN728080AL
[5:0:8:0]	disk11		sdl	8.00TB	HGST HDN728080AL
[5:0:9:0]	disk10		sdm	8.00TB	HGST HDN728080AL

[N0] scsiN0	nvme0 -	NVMe
[5:0:6:0]	parity		sdj	8.00TB	HGST HDN728080AL


                      *** END OF REPORT ***

 


                   Unraid 6.x Tunables Tester v4.1 BETA 1 by Pauven

             Tunables Report produced Sun Aug 11 20:25:03 MDT 2019

                              Run on server: BlackHole

                             Short Parity Sync Test


Current Values:  md_num_stripes=5920, md_sync_window=2664, md_sync_thresh=2000
                 Global nr_requests=128
                 Disk Specific nr_requests Values:
                    sdy=128, sdw=128, sde=128, sdf=128, sdg=128, sdc=128, 
                    sdt=128, sdd=128, sdj=128, sdu=128, sdh=128, sdl=128, 
                    sdk=128, sdb=128, sdv=128, sdm=128, sdn=128, sdq=128, 
                    sdr=128, sdo=128, sds=128, sdi=128, sdp=128, sdx=128, 


--- INITIAL BASELINE TEST OF CURRENT VALUES (1 Sample Point @ 10sec Duration)---
Tst | RAM | stri |  win | req | thresh |  MB/s
----------------------------------------------
  1 | 569 | 5920 | 2664 | 128 |  2000  |  53.4 


--- BASELINE TEST OF UNRAID DEFAULT VALUES (1 Sample Point @ 10sec Duration)---
Tst | RAM | stri |  win | req | thresh |  MB/s
----------------------------------------------
  1 | 123 | 1280 |  384 | 128 |   192  |  55.0 


 --- TEST PASS 1 (2 Min - 12 Sample Points @ 10sec Duration) ---
Tst | RAM | stri |  win | req | thresh |  MB/s | thresh |  MB/s | thresh |  MB/s
--------------------------------------------------------------------------------
  1 |  73 |  768 |  384 | 128 |   376  |  58.4 |   320  |  50.9 |   192  |  53.9
  2 | 147 | 1536 |  768 | 128 |   760  |  61.3 |   704  |  61.8 |   384  |  57.8
  3 | 295 | 3072 | 1536 | 128 |  1528  |  65.1 |  1472  |  64.8 |   768  |  63.4
  4 | 591 | 6144 | 3072 | 128 |  3064  |  66.0 |  3008  |  66.0 |  1536  |  66.1

 --- TEST PASS 1_HIGH (30 Sec - 3 Sample Points @ 10sec Duration)---
Tst | RAM | stri |  win | req | thresh |  MB/s | thresh |  MB/s | thresh |  MB/s
--------------------------------------------------------------------------------
  1 |1182 |12288 | 6144 | 128 |  6136  |  65.8 |  6080  |  65.6 |  3072  |  65.0

 --- END OF SHORT AUTO TEST FOR DETERMINING IF YOU SHOULD RUN THE REAL TEST ---

If the speeds changed with different values you should run a NORMAL/LONG test.
If speeds didn't change then adjusting Tunables likely won't help your system.

Completed: 0 Hrs 3 Min 30 Sec.


NOTE: Use the smallest set of values that produce good results. Larger values
      increase server memory use, and may cause stability issues with Unraid,
      especially if you have any add-ons or plug-ins installed.


System Info:  BlackHole
              Unraid version 6.7.2
                   md_num_stripes=5920
                   md_sync_window=2664
                   md_sync_thresh=2000
                   nr_requests=128 (Global Setting)
                   sbNumDisks=24
              CPU: Genuine Intel(R) CPU @ 2.00GHz
              RAM: System Memory
         System Memory
         System Memory
         System Memory

Outputting free low memory information...

              total        used        free      shared  buff/cache   available
Mem:       49371152     9959400    37455020     1486356     1956732    37404184
Low:       49371152    11916132    37455020
High:             0           0           0
Swap:             0           0           0


SCSI Host Controllers and Connected Drives
--------------------------------------------------

[0] scsi0    usb-storage -    
    parity        sdy        WDC WD80EMAZ-00W

[1] scsi1    megaraid_sas -    MegaRAID SAS 2008 [Falcon]

[N0] scsiN0    nvme0 -    NVMe
    parity        sdy        WDC WD80EMAZ-00W

[N1] scsiN1    nvme1 -    NVMe
    parity        sdy        WDC WD80EMAZ-00W


                      *** END OF REPORT ***


lsscsi -st:


root@BlackHole:/tmp# lsscsi -st
[0:0:0:0]    disk    usb:3-9:1.0                     /dev/sda   62.7GB
[1:0:10:0]   enclosu                                 -               -
[1:0:11:0]   disk                                    /dev/sdb   8.00TB
[1:0:12:0]   disk                                    /dev/sdc   8.00TB
[1:0:13:0]   disk                                    /dev/sdd   8.00TB
[1:0:14:0]   disk                                    /dev/sde   8.00TB
[1:0:15:0]   disk                                    /dev/sdf   8.00TB
[1:0:16:0]   disk                                    /dev/sdg   8.00TB
[1:0:17:0]   disk                                    /dev/sdh   8.00TB
[1:0:18:0]   disk                                    /dev/sdi   8.00TB
[1:0:19:0]   disk                                    /dev/sdj   8.00TB
[1:0:20:0]   disk                                    /dev/sdk   8.00TB
[1:0:21:0]   disk                                    /dev/sdl   8.00TB
[1:0:22:0]   disk                                    /dev/sdm   8.00TB
[1:0:23:0]   disk                                    /dev/sdn   8.00TB
[1:0:24:0]   disk                                    /dev/sdo   8.00TB
[1:0:25:0]   disk                                    /dev/sdp   8.00TB
[1:0:26:0]   disk                                    /dev/sdq   8.00TB
[1:0:27:0]   disk                                    /dev/sdr   8.00TB
[1:0:28:0]   disk                                    /dev/sds   8.00TB
[1:0:29:0]   disk                                    /dev/sdt   8.00TB
[1:0:30:0]   disk                                    /dev/sdu   8.00TB
[1:0:31:0]   disk                                    /dev/sdv   8.00TB
[1:0:32:0]   disk                                    /dev/sdw   8.00TB
[1:0:33:0]   disk                                    /dev/sdx   8.00TB
[1:0:34:0]   disk                                    /dev/sdy   8.00TB
[N:0:1:1]    disk    pcie 0x8086:0x390d                         /dev/nvme0n1  1.02TB
[N:1:1:1]    disk    pcie 0x8086:0x390d                         /dev/nvme1n1  1.02TB

 

lshw -C storage:


root@BlackHole:/tmp# lshw -c Storage
  *-storage                 
       description: RAID bus controller
       product: MegaRAID SAS 2008 [Falcon]
       vendor: Broadcom / LSI
       physical id: 0
       bus info: pci@0000:01:00.0
       logical name: scsi1
       version: 03
       width: 64 bits
       clock: 33MHz
       capabilities: storage pm pciexpress vpd msi msix bus_master cap_list rom
       configuration: driver=megaraid_sas latency=0
       resources: irq:24 ioport:6000(size=256) memory:c7560000-c7563fff memory:c7500000-c753ffff memory:c7540000-c755ffff
  *-storage
       description: Non-Volatile memory controller
       product: SSDPEKNW020T8 [660p, 2TB]
       vendor: Intel Corporation
       physical id: 0
       bus info: pci@0000:03:00.0
       version: 03
       width: 64 bits
       clock: 33MHz
       capabilities: storage pm msi pciexpress msix nvm_express bus_master cap_list
       configuration: driver=nvme latency=0
       resources: irq:36 memory:c7400000-c7403fff
  *-storage
       description: Non-Volatile memory controller
       product: SSDPEKNW020T8 [660p, 2TB]
       vendor: Intel Corporation
       physical id: 0
       bus info: pci@0000:04:00.0
       version: 03
       width: 64 bits
       clock: 33MHz
       capabilities: storage pm msi pciexpress msix nvm_express bus_master cap_list
       configuration: driver=nvme latency=0
       resources: irq:26 memory:c7300000-c7303fff
  *-scsi
       physical id: a1
       bus info: usb@3:9
       logical name: scsi0
       capabilities: emulated

 

Hope this is at least helpful. Should be able to get my computer set back up this week, finally.

Also, I didn't expect my system to work right out of the gate. The NVMe disks at least show up in the report under their own N# controllers. There's still that odd issue with the last parity device being listed under the USB device, and then none of my disks actually show up.

I did think of a different lookup and storage system, by the way - multi-dimensional arrays.

Make an array for Controllers.
For each controller make a new array named as that controller.
For the first element of that array, place your desired product info string(s).
Create a new array for disks.
Add disks to that array.

Place the array of disks as the second element of the controller array.
Add that array to the array of Controllers.

When going to print the data you'd then:

foreach Controller in $Controllers; do

    printf "$Controller[0];"

    foreach Disk in $Controller[1]; do
          printf "$Disk"
    done
done

 

This way it becomes much harder to transpose disks across the array structure.

Not sure if this is a viable approach with the formatting you want to do, though.
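
Since bash has no native nested arrays, here's one way the idea could be realized (purely a sketch with assumed names, emulating the two dimensions with associative arrays):

declare -a Controllers=()
declare -A ControllerInfo=()      # controller -> product info string
declare -A ControllerDisks=()     # controller -> newline-separated disk lines

add_disk() {                      # add_disk <controller> <disk line>
    ControllerDisks[$1]+="$2"$'\n'
}

Controllers+=("scsi5"); ControllerInfo[scsi5]="mpt3sas - SAS3416 Fusion-MPT"
add_disk scsi5 "[5:0:0:0]  disk3  sde  8.00TB"
add_disk scsi5 "[5:0:2:0]  disk1  sdf  8.00TB"

for Controller in "${Controllers[@]}"; do
    printf '%s\n' "${ControllerInfo[$Controller]}"
    printf '%s'   "${ControllerDisks[$Controller]}"
done

Each disk line stays attached to its controller regardless of index gaps, which is the transposition-proofing described above.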

2 hours ago, StevenD said:

It's still on Pass 2, but I think it has found the max speed for my array.

Yeah, not everyone gets a super exciting report.  Sorry.  ;)

 

5 hours ago, DanielCoffey said:

Ugh - brought my server up this morning and it has dropped two disks (Parity 1 and Data 1 - probably a loose cable). I'll have to rebuild Parity before running the script. I'll report back when that is done.

Fingers crossed!  I feel your pain, been there.

 

11 hours ago, StevenD said:

Here's a short test on v4.1-beta.  I just kicked off a long test. I will report back tomorrow.

Thanks.  Very disappointing that I didn't get the report right.

 

10 hours ago, Xaero said:

I didn't expect my system to work right out of the gate. The NVMe disks at least show up in the report under their own N# controllers. There's still that odd issue with the last parity device being listed under the USB device, and then none of my disks actually show up.

I did think of a different lookup and storage system, by the way - multi-dimensional arrays.

I wonder if your drives running 10 to 34 instead of 0 to 24 is having an impact on the logic.  Multi-dimensional arrays sound interesting, but I think the current flaw is very minor and just looks really bad.  I'll give it one more go before trying a new approach.

 

Actually, the NVMe disks did not show, just the NVMe controllers.  Here's mine (NVMe way down at the bottom, which I got to show after adding the new lsscsi v0.30 upgrade function):

 

SCSI Host Controllers and Connected Drives
--------------------------------------------------

[0] scsi0	usb-storage -	
[0:0:0:0]	flash		sda	4.00GB	Patriot Memory

[1] scsi1	ahci -	

[2] scsi2	ahci -	

[3] scsi3	ahci -	

[4] scsi4	ahci -	

[5] scsi5	ahci -	

[6] scsi6	ahci -	

[7] scsi7	ahci -	

[8] scsi8	ahci -	

[9] scsi9	ahci -	

[10] scsi10	ahci -	

[11] scsi11	ahci -	

[12] scsi12	mvsas -	HighPoint Technologies, Inc.
[12:0:0:0]	disk17		sdb	3.00TB	WDC WD30EFRX-68A
[12:0:1:0]	disk18		sdc	3.00TB	WDC WD30EFRX-68A
[12:0:2:0]	disk19		sdd	3.00TB	WDC WD30EFRX-68E
[12:0:3:0]	disk20		sde	3.00TB	WDC WD30EFRX-68E
[12:0:4:0]	parity2		sdf	8.00TB	HGST HUH728080AL
[12:0:5:0]	parity		sdg	8.00TB	HGST HUH728080AL

[13] scsi13	mvsas -	HighPoint Technologies, Inc.
[13:0:0:0]	disk1		sdh	8.00TB	HGST HUH728080AL
[13:0:1:0]	disk2		sdi	3.00TB	WDC WD30EFRX-68A
[13:0:2:0]	disk3		sdj	3.00TB	WDC WD30EFRX-68E
[13:0:3:0]	disk4		sdk	3.00TB	WDC WD30EFRX-68A
[13:0:4:0]	disk5		sdl	3.00TB	WDC WD30EFRX-68A
[13:0:5:0]	disk6		sdm	3.00TB	WDC WD30EFRX-68A
[13:0:6:0]	disk7		sdn	3.00TB	WDC WD30EFRX-68A
[13:0:7:0]	disk8		sdo	3.00TB	WDC WD30EFRX-68A

[14] scsi14	mvsas -	HighPoint Technologies, Inc.
[14:0:0:0]	disk9		sdp	3.00TB	WDC WD30EFRX-68A
[14:0:1:0]	disk10		sdq	3.00TB	WDC WD30EFRX-68A
[14:0:2:0]	disk11		sdr	3.00TB	WDC WD30EFRX-68A
[14:0:3:0]	disk12		sds	3.00TB	WDC WD30EFRX-68A
[14:0:4:0]	disk13		sdt	3.00TB	WDC WD30EFRX-68A
[14:0:5:0]	disk14		sdu	3.00TB	WDC WD30EFRX-68E
[14:0:6:0]	disk15		sdv	4.00TB	ST4000VN000-1H41
[14:0:7:0]	disk16		sdw	4.00TB	ST4000VN000-1H41

[N0] scsiN0	nvme0 -	NVMe
[N:0:2:1]	cache		nvme0n1	1.00TB	Samsung SSD 960 

 



rdevName.0=sdy
rdevName.1=sdw
rdevName.2=sde
rdevName.3=sdf
rdevName.4=sdg
rdevName.5=sdc
rdevName.6=sdt
rdevName.7=sdd
rdevName.8=sdj
rdevName.9=sdu
rdevName.10=sdh
rdevName.11=sdl
rdevName.12=sdk
rdevName.13=sdb
rdevName.14=sdv
rdevName.15=sdm
rdevName.16=sdn
rdevName.17=sdq
rdevName.18=sdr
rdevName.19=sdo
rdevName.20=sds
rdevName.21=sdi
rdevName.22=sdp

 

As you can see, my disks are actually numbered from 0 as far as Unraid is concerned.

I believe those are the host addresses starting at 11, and that makes sense from a physical perspective: address 1 is the controller itself, addresses 2-9 are the links to the port expander, address 10 is the port expander itself (shown as "enclosu" in the report), and then address 11 is the first disk device.

 

Again, they are actually numbered starting at zero as far as the md array in Unraid is concerned.

And yeah, I suggest multi-dimensional arrays specifically because they nullify issues like this: instead of relying on indices and array sizes, we rely on "for each object" logic, which returns items in the order they were input, regardless of whether or not everything is incremental.

 

1 hour ago, Pauven said:

@StevenD, I'm working on plugging your values into the report on my system; that way I should be able to 100% simulate your disc report output and get it fixed.

 

I need one more thing, if you can:


egrep -i "\[|idx|name|type|device|color" /var/local/emhttp/disks.ini

  

 

 

root@nas:~# egrep -i "\[|idx|name|type|device|color" /var/local/emhttp/disks.ini
["parity"]
idx="0"
name="parity"
device="sdj"
type="Parity"
color="green-on"
deviceSb=""
["disk1"]
idx="1"
name="disk1"
device="sdi"
type="Data"
color="green-on"
fsType="xfs"
fsColor="green-on"
deviceSb="md1"
["disk2"]
idx="2"
name="disk2"
device="sdf"
type="Data"
color="green-on"
fsType="xfs"
fsColor="green-on"
deviceSb="md2"
["disk3"]
idx="3"
name="disk3"
device="sde"
type="Data"
color="green-on"
fsType="xfs"
fsColor="green-on"
deviceSb="md3"
["disk4"]
idx="4"
name="disk4"
device="sdp"
type="Data"
color="green-on"
fsType="xfs"
fsColor="green-on"
deviceSb="md4"
["disk5"]
idx="5"
name="disk5"
device="sdo"
type="Data"
color="green-on"
fsType="xfs"
fsColor="green-on"
deviceSb="md5"
["disk6"]
idx="6"
name="disk6"
device="sdq"
type="Data"
color="green-on"
fsType="xfs"
fsColor="green-on"
deviceSb="md6"
["disk7"]
idx="7"
name="disk7"
device="sdr"
type="Data"
color="green-on"
fsType="xfs"
fsColor="green-on"
deviceSb="md7"
["disk8"]
idx="8"
name="disk8"
device="sdh"
type="Data"
color="green-on"
fsType="xfs"
fsColor="green-on"
deviceSb="md8"
["disk9"]
idx="9"
name="disk9"
device="sdg"
type="Data"
color="green-on"
fsType="xfs"
fsColor="green-on"
deviceSb="md9"
["disk10"]
idx="10"
name="disk10"
device="sdm"
type="Data"
color="green-on"
fsType="xfs"
fsColor="green-on"
deviceSb="md10"
["disk11"]
idx="11"
name="disk11"
device="sdl"
type="Data"
color="green-on"
fsType="xfs"
fsColor="green-on"
deviceSb="md11"
["disk12"]
idx="12"
name="disk12"
device="sdn"
type="Data"
color="green-on"
fsType="xfs"
fsColor="green-on"
deviceSb="md12"
["parity2"]
idx="29"
name="parity2"
device="sdk"
type="Parity"
color="green-on"
deviceSb=""
["cache"]
idx="30"
name="cache"
device="nvme0n1"
type="Cache"
color="green-on"
fsType="xfs"
fsColor="yellow-on"
deviceSb="nvme0n1p1"
["flash"]
idx="54"
name="flash"
device="sda"
type="Flash"
color="green-on"
comment="Unraid OS boot device"
fsType="vfat"
fsColor="yellow-on"
root@nas:~#

 


Thanks @StevenD!  I've fixed some things in the report; does this look right to you?

 

SCSI Host Controllers and Connected Drives
--------------------------------------------------

[0] scsi0	usb-storage -	
[0:0:0:0]	flash		sda	31.9GB	

[1] scsi1	ata_piix -	

[2] scsi2	ata_piix -	

[3] scsi3	vmw_pvscsi -	PVSCSI SCSI Controller

[4] scsi4	vmw_pvscsi -	PVSCSI SCSI Controller

[5] scsi5	mpt3sas -	SAS3416 Fusion-MPT Tri-Mode I/O Controller Chip (IOC)
[5:0:0:0]	disk3		sde	8.00TB	
[5:0:10:0]	disk8		sdn	8.00TB	
[5:0:11:0]	disk7		sdo	8.00TB	
[5:0:12:0]	disk6		sdp	8.00TB	
[5:0:13:0]	disk4		sdq	8.00TB	
[5:0:14:0]	disk12		sdr	8.00TB	
[5:0:2:0]	disk1		sdf	8.00TB	
[5:0:3:0]	disk9		sdg	8.00TB	
[5:0:4:0]	disk10		sdh	8.00TB	
[5:0:5:0]	parity		sdi	8.00TB	
[5:0:6:0]	disk2		sdj	8.00TB	
[5:0:7:0]	disk11		sdk	8.00TB	
[5:0:8:0]	disk5		sdl	8.00TB	
[5:0:9:0]	parity2		sdm	8.00TB	

[N0] scsiN0	nvme0 -	NVMe
[N:0:4:1]	cache		nvme0n1	512GB	

 

3 minutes ago, Pauven said:

Thanks @StevenD!  I've fixed some things in the report; does this look right to you?

 


SCSI Host Controllers and Connected Drives
--------------------------------------------------

[0] scsi0	usb-storage -	
[0:0:0:0]	flash		sda	31.9GB	

[1] scsi1	ata_piix -	

[2] scsi2	ata_piix -	

[3] scsi3	vmw_pvscsi -	PVSCSI SCSI Controller

[4] scsi4	vmw_pvscsi -	PVSCSI SCSI Controller

[5] scsi5	mpt3sas -	SAS3416 Fusion-MPT Tri-Mode I/O Controller Chip (IOC)
[5:0:0:0]	disk3		sde	8.00TB	
[5:0:10:0]	disk8		sdn	8.00TB	
[5:0:11:0]	disk7		sdo	8.00TB	
[5:0:12:0]	disk6		sdp	8.00TB	
[5:0:13:0]	disk4		sdq	8.00TB	
[5:0:14:0]	disk12		sdr	8.00TB	
[5:0:2:0]	disk1		sdf	8.00TB	
[5:0:3:0]	disk9		sdg	8.00TB	
[5:0:4:0]	disk10		sdh	8.00TB	
[5:0:5:0]	parity		sdi	8.00TB	
[5:0:6:0]	disk2		sdj	8.00TB	
[5:0:7:0]	disk11		sdk	8.00TB	
[5:0:8:0]	disk5		sdl	8.00TB	
[5:0:9:0]	parity2		sdm	8.00TB	

[N0] scsiN0	nvme0 -	NVMe
[N:0:4:1]	cache		nvme0n1	512GB	

 

 

It does, except for the disk order.  I assume [5:0:x:0] is the port number. They don't line up, but that doesn't really matter.

 

 

Just now, StevenD said:

 

It does, except for the disk order.  I assume [5:0:x:0] is the port number. They don't line up, but that doesn't really matter.

 

 

They are sorted, but it is an alpha sort, so 1 & 10-19 all sort before 2.

 

Here's the code that outputs those disks and sorts them:

for Disk in ${Disks[@]} 
do
	echo "${DiskSCSI[$Disk]}	${DiskNamePretty[$Disk]}		${DiskName[$Disk]}	${DiskSizePretty[$Disk]}	${DiskID[$Disk]//_/ }" 
done | sort >> $ReportFile

As you can see, I simply pipe all of the lines to the "sort" function.

 

Does anyone know how I can make this sort numerically, based upon the 3rd field in [5:0:x:0]?

 

The only idea I have is to prefix each line with the port number, and to make it 2-digit with a leading zero, like this:

[5] scsi5	mpt3sas -	SAS3416 Fusion-MPT Tri-Mode I/O Controller Chip (IOC)
00 - [5:0:0:0]	disk3		sde	8.00TB	
02 - [5:0:2:0]	disk1		sdf	8.00TB	
03 - [5:0:3:0]	disk9		sdg	8.00TB	
04 - [5:0:4:0]	disk10		sdh	8.00TB	
05 - [5:0:5:0]	parity		sdi	8.00TB	
06 - [5:0:6:0]	disk2		sdj	8.00TB	
07 - [5:0:7:0]	disk11		sdk	8.00TB	
08 - [5:0:8:0]	disk5		sdl	8.00TB	
09 - [5:0:9:0]	parity2		sdm	8.00TB	
10 - [5:0:10:0]	disk8		sdn	8.00TB	
11 - [5:0:11:0]	disk7		sdo	8.00TB	
12 - [5:0:12:0]	disk6		sdp	8.00TB	
13 - [5:0:13:0]	disk4		sdq	8.00TB	
14 - [5:0:14:0]	disk12		sdr	8.00TB	

 

16 minutes ago, Pauven said:

They are sorted, but it is an alpha sort, so 1 & 10-19 all sort before 2.

 

Here's the code that outputs those disks and sorts them:


for Disk in ${Disks[@]} 
do
	echo "${DiskSCSI[$Disk]}	${DiskNamePretty[$Disk]}		${DiskName[$Disk]}	${DiskSizePretty[$Disk]}	${DiskID[$Disk]//_/ }" 
done | sort >> $ReportFile

As you can see, I simply pipe all of the lines to the "sort" function.

 

Does anyone know how I can make this sort numerically, based upon the 3rd field in [5:0:x:0]?

 

The only idea I have is to prefix each line with the port number, and to make it 2-digit with a leading zero, like this:


[5] scsi5	mpt3sas -	SAS3416 Fusion-MPT Tri-Mode I/O Controller Chip (IOC)
00 - [5:0:0:0]	disk3		sde	8.00TB	
02 - [5:0:2:0]	disk1		sdf	8.00TB	
03 - [5:0:3:0]	disk9		sdg	8.00TB	
04 - [5:0:4:0]	disk10		sdh	8.00TB	
05 - [5:0:5:0]	parity		sdi	8.00TB	
06 - [5:0:6:0]	disk2		sdj	8.00TB	
07 - [5:0:7:0]	disk11		sdk	8.00TB	
08 - [5:0:8:0]	disk5		sdl	8.00TB	
09 - [5:0:9:0]	parity2		sdm	8.00TB	
10 - [5:0:10:0]	disk8		sdn	8.00TB	
11 - [5:0:11:0]	disk7		sdo	8.00TB	
12 - [5:0:12:0]	disk6		sdp	8.00TB	
13 - [5:0:13:0]	disk4		sdq	8.00TB	
14 - [5:0:14:0]	disk12		sdr	8.00TB	

 

sort -n -t: -k3
I believe that should handle this.
You may need to strip the [ and ], so it'd be
sed -e "s/[][]//g" | sort -n -t: -k3
Or something along those lines. I don't have a terminal accessible to test atm.

To explain:
-n   sort numerically
-t:  change the delimiter to ":"
-k3  sort by column 3.
 

 

EDIT: forgot the -n flag above.

EDIT 2:
It's also entirely possible that the disk order doesn't line up with the port numbers.

10 minutes ago, Xaero said:

sort -t: -k3
I believe that should handle this.
You may need to strip the [ and ], so it'd be
sed -e "s/[][]//g" | sort -t: -k3
Or something along those lines. I don't have a terminal accessible to test atm.

With a slight modification, that did the trick, thanks!

 

I had to add a -n to sort numerically, so the final command was sort -n -t: -k3
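
For reference, a quick illustration with sample lines of why that works: with -t: the third field of "[5:0:10:0]" is the bare target number, so -n compares it numerically and 2 sorts before 10.

printf '%s\n' "[5:0:10:0]  disk8  sdn" "[5:0:2:0]  disk1  sdf" "[5:0:0:0]  disk3  sde" | sort -n -t: -k3
# output order: [5:0:0:0] ..., [5:0:2:0] ..., [5:0:10:0] ...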


UTT v4.1 BETA 2 is attached.

 

Same as with BETA 1, I'm primarily concerned about the SCSI Hosts and Discs report, so if I could get a few users to run this with a Short test and post the reports, that would be great.

 

BETA 2 has more fixes for the SCSI Host Controllers and Connected Drives report (including a modified numerical sort on drive port #), and cosmetic tweaks to the server name that shows in the notifications.

 

BETA 2 still has my debugging statements in the code, but they are all commented out.

 

Here's the v4.1 changelog:

# V4.1: Added a function to use the first result with 99.8% max speed for Pass 2
#       Fixed Server Name in Notification messages (was hardcoded TOWER)
#       Many fixes to the SCSI Host Controllers and Connected Drives report
#       Added a function to check lsscsi version and optionally upgrade to v0.30
#       Cosmetic menu tweaks - by Pauven 08/12/2019

 

unraid6x-tunables-tester.sh.v4_1_BETA2.txt


Interesting.

 

                   Unraid 6.x Tunables Tester v4.1 BETA 1 by Pauven

             Tunables Report produced Sun Aug 11 21:05:52 CDT 2019

                              Run on server: nas

                           Long Parity Sync Test


Current Values:  md_num_stripes=4480, md_sync_window=2048, md_sync_thresh=2000
                 Global nr_requests=128
                 Disk Specific nr_requests Values:
                    sdj=128, sdi=128, sdf=128, sde=128, sdp=128, sdo=128, 
                    sdq=128, sdr=128, sdh=128, sdg=128, sdm=128, sdl=128, 
                    sdn=128, sdk=128, 


--- INITIAL BASELINE TEST OF CURRENT VALUES (1 Sample Point @ 10min Duration)---
Tst | RAM | stri |  win | req | thresh |  MB/s
----------------------------------------------
  1 | 256 | 4480 | 2048 | 128 |  2000  | 174.9 


--- BASELINE TEST OF UNRAID DEFAULT VALUES (1 Sample Point @ 10min Duration)---
Tst | RAM | stri |  win | req | thresh |  MB/s
----------------------------------------------
  1 |  73 | 1280 |  384 | 128 |   192  | 172.3 


 --- TEST PASS 1 (2.5 Hrs - 12 Sample Points @ 10min Duration) ---
Tst | RAM | stri |  win | req | thresh |  MB/s | thresh |  MB/s | thresh |  MB/s
--------------------------------------------------------------------------------
  1 |  43 |  768 |  384 | 128 |   376  | 172.1 |   320  | 172.5 |   192  | 172.2
  2 |  87 | 1536 |  768 | 128 |   760  | 172.9 |   704  | 173.1 |   384  | 172.6
  3 | 175 | 3072 | 1536 | 128 |  1528  | 174.2 |  1472  | 174.1 |   768  | 173.2
  4 | 351 | 6144 | 3072 | 128 |  3064  | 176.2 |  3008  | 176.1 |  1536  | 174.4

 --- TEST PASS 1_HIGH (40 Min - 3 Sample Points @ 10min Duration)---
Tst | RAM | stri |  win | req | thresh |  MB/s | thresh |  MB/s | thresh |  MB/s
--------------------------------------------------------------------------------
  1 | 702 |12288 | 6144 | 128 |  6136  | 177.4 |  6080  | 177.5 |  3072  | 177.4

 --- TEST PASS 1_VERYHIGH (40 Min - 3 Sample Points @ 10min Duration)---
Tst | RAM | stri |  win | req | thresh |  MB/s | thresh |  MB/s | thresh |  MB/s
--------------------------------------------------------------------------------
  1 |1054 |18432 | 9216 | 128 |  9208  | 177.4 |  9152  | 177.5 |  4608  | 177.4

 --- Using md_sync_window=6144 & md_sync_thresh=window-64 for Pass 2 ---

 --- TEST PASS 2 (10 Hrs - 49 Sample Points @ 10min Duration) ---
Tst | RAM | stri |  win | req | thresh |  MB/s
----------------------------------------------
  1 | 351 | 6144 | 3072 | 128 |  3008  | 176.0
  2 | 366 | 6400 | 3200 | 128 |  3136  | 176.2
  3 | 380 | 6656 | 3328 | 128 |  3264  | 176.3
  4 | 395 | 6912 | 3456 | 128 |  3392  | 176.5
  5 | 410 | 7168 | 3584 | 128 |  3520  | 176.6
  6 | 424 | 7424 | 3712 | 128 |  3648  | 176.7
  7 | 439 | 7680 | 3840 | 128 |  3776  | 176.8
  8 | 453 | 7936 | 3968 | 128 |  3904  | 177.0
  9 | 468 | 8192 | 4096 | 128 |  4032  | 177.1
 10 | 483 | 8448 | 4224 | 128 |  4160  | 177.1
 11 | 497 | 8704 | 4352 | 128 |  4288  | 177.3
 12 | 512 | 8960 | 4480 | 128 |  4416  | 177.5
 13 | 527 | 9216 | 4608 | 128 |  4544  | 177.5
 14 | 541 | 9472 | 4736 | 128 |  4672  | 177.5
 15 | 556 | 9728 | 4864 | 128 |  4800  | 177.5
 16 | 571 | 9984 | 4992 | 128 |  4928  | 177.5
 17 | 585 |10240 | 5120 | 128 |  5056  | 177.5
 18 | 600 |10496 | 5248 | 128 |  5184  | 177.5
 19 | 615 |10752 | 5376 | 128 |  5312  | 177.5
 20 | 629 |11008 | 5504 | 128 |  5440  | 177.4
 21 | 644 |11264 | 5632 | 128 |  5568  | 177.6
 22 | 659 |11520 | 5760 | 128 |  5696  | 177.5
 23 | 673 |11776 | 5888 | 128 |  5824  | 177.5
 24 | 688 |12032 | 6016 | 128 |  5952  | 177.5
 25 | 702 |12288 | 6144 | 128 |  6080  | 177.5
 26 | 717 |12544 | 6272 | 128 |  6208  | 177.4
 27 | 732 |12800 | 6400 | 128 |  6336  | 177.4
 28 | 746 |13056 | 6528 | 128 |  6464  | 177.1
 29 | 761 |13312 | 6656 | 128 |  6592  | 177.5
 30 | 776 |13568 | 6784 | 128 |  6720  | 177.5
 31 | 790 |13824 | 6912 | 128 |  6848  | 177.5
 32 | 805 |14080 | 7040 | 128 |  6976  | 177.4
 33 | 820 |14336 | 7168 | 128 |  7104  | 177.4
 34 | 834 |14592 | 7296 | 128 |  7232  | 177.5
 35 | 849 |14848 | 7424 | 128 |  7360  | 177.5
 36 | 864 |15104 | 7552 | 128 |  7488  | 177.4
 37 | 878 |15360 | 7680 | 128 |  7616  | 177.4
 38 | 893 |15616 | 7808 | 128 |  7744  | 177.5
 39 | 907 |15872 | 7936 | 128 |  7872  | 177.4
 40 | 922 |16128 | 8064 | 128 |  8000  | 177.5
 41 | 937 |16384 | 8192 | 128 |  8128  | 177.5
 42 | 951 |16640 | 8320 | 128 |  8256  | 177.4
 43 | 966 |16896 | 8448 | 128 |  8384  | 177.5
 44 | 981 |17152 | 8576 | 128 |  8512  | 177.6
 45 | 995 |17408 | 8704 | 128 |  8640  | 177.4
 46 |1010 |17664 | 8832 | 128 |  8768  | 177.5
 47 |1025 |17920 | 8960 | 128 |  8896  | 177.5
 48 |1039 |18176 | 9088 | 128 |  9024  | 177.4
 49 |1054 |18432 | 9216 | 128 |  9152  | 177.5

 --- Using fastest result of md_sync_window=5632 for Pass 3 ---

 --- TEST PASS 3 (4 Hrs - 18 Sample Points @ 10min Duration) ---
Tst | RAM | stri |  win | req | thresh |  MB/s
----------------------------------------------
 1a | 644 |11264 | 5632 | 128 |  5631  | 177.6
 1b | 644 |11264 | 5632 | 128 |  5628  | 177.4
 1c | 644 |11264 | 5632 | 128 |  5624  | 177.5
 1d | 644 |11264 | 5632 | 128 |  5620  | 177.5
 1e | 644 |11264 | 5632 | 128 |  5616  | 177.4
 1f | 644 |11264 | 5632 | 128 |  5612  | 177.4
 1g | 644 |11264 | 5632 | 128 |  5608  | 177.5
 1h | 644 |11264 | 5632 | 128 |  5604  | 177.4
 1i | 644 |11264 | 5632 | 128 |  5600  | 177.4
 1j | 644 |11264 | 5632 | 128 |  5596  | 177.5
 1k | 644 |11264 | 5632 | 128 |  5592  | 177.5
 1l | 644 |11264 | 5632 | 128 |  5588  | 177.6
 1m | 644 |11264 | 5632 | 128 |  5584  | 177.5
 1n | 644 |11264 | 5632 | 128 |  5580  | 177.4
 1o | 644 |11264 | 5632 | 128 |  5576  | 177.4
 1p | 644 |11264 | 5632 | 128 |  5572  | 177.5
 1q | 644 |11264 | 5632 | 128 |  5568  | 177.4
 1r | 644 |11264 | 5632 | 128 |  2816  | 177.4

The results below do NOT include the Baseline test of current values.

The Fastest settings tested give a peak speed of 177.6 MB/s
     md_sync_window: 5632          md_num_stripes: 11264
     md_sync_thresh: 5631             nr_requests: 128
This will consume 644 MB (388 MB more than your current utilization of 256 MB)

The Thriftiest settings (95% of Fastest) give a peak speed of 172.1 MB/s
     md_sync_window: 384          md_num_stripes: 768
     md_sync_thresh: 376             nr_requests: 128
This will consume 43 MB (213 MB less than your current utilization of 256 MB)

The Recommended settings (99% of Fastest) give a peak speed of 176.2 MB/s
     md_sync_window: 3072          md_num_stripes: 6144
     md_sync_thresh: 3064             nr_requests: 128
This will consume 351 MB (95 MB more than your current utilization of 256 MB)

NOTE: Adding additional drives will increase memory consumption.

In Unraid, go to Settings > Disk Settings to set your chosen parameter values.

Completed: 15 Hrs 9 Min 0 Sec.


NOTE: Use the smallest set of values that produce good results. Larger values
      increase server memory use, and may cause stability issues with Unraid,
      especially if you have any add-ons or plug-ins installed.


System Info:  nas
              Unraid version 6.7.3-rc1
                   md_num_stripes=4480
                   md_sync_window=2048
                   md_sync_thresh=2000
                   nr_requests=128 (Global Setting)
                   sbNumDisks=14
              CPU: Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz
              RAM: 32GiB System Memory

Outputting free low memory information...

              total        used        free      shared  buff/cache   available
Mem:       32941156      587340    31756344      579656      597472    31422840
Low:       32941156     1184812    31756344
High:             0           0           0
Swap:             0           0           0


SCSI Host Controllers and Connected Drives
--------------------------------------------------

[0] scsi0	usb-storage -	
[5:0:6:0]	parity		sdj	8.00TB	HGST HDN728080AL

[1] scsi1	ata_piix -	

[2] scsi2	ata_piix -	

[3] scsi3	vmw_pvscsi -	PVSCSI SCSI Controller

[4] scsi4	vmw_pvscsi -	PVSCSI SCSI Controller

[5] scsi5	mpt3sas -	SAS3416 Fusion-MPT Tri-Mode I/O Controller Chip (IOC)
[5:0:0:0]	disk3		sde	8.00TB	HGST HDN728080AL
[5:0:10:0]	disk12		sdn	8.00TB	HGST HDN728080AL
[5:0:11:0]	disk5		sdo	8.00TB	HGST HDN728080AL
[5:0:12:0]	disk4		sdp	8.00TB	HGST HDN728080AL
[5:0:13:0]	disk6		sdq	8.00TB	HGST HDN728080AL
[5:0:2:0]	disk2		sdf	8.00TB	HGST HDN728080AL
[5:0:3:0]	disk9		sdg	8.00TB	HGST HDN728080AL
[5:0:4:0]	disk8		sdh	8.00TB	HGST HDN728080AL
[5:0:5:0]	disk1		sdi	8.00TB	HGST HDN728080AL
[5:0:6:0]	parity		sdj	8.00TB	HGST HDN728080AL
[5:0:6:0]	parity		sdj	8.00TB	HGST HDN728080AL
[5:0:7:0]	parity2		sdk	8.00TB	HGST HDN728080AL
[5:0:8:0]	disk11		sdl	8.00TB	HGST HDN728080AL
[5:0:9:0]	disk10		sdm	8.00TB	HGST HDN728080AL

[N0] scsiN0	nvme0 -	NVMe
[5:0:6:0]	parity		sdj	8.00TB	HGST HDN728080AL


                      *** END OF REPORT ***

 

