Everything posted by Pauven

  1. True to form! I'm working on replicating your data this morning. I have everything except the output of:
     egrep -i "\[|idx|name|type|device|color" /var/local/emhttp/disks.ini
  2. Looks good, thanks for sharing. I see a couple NVMe hosts listed at the bottom, but the NVMe drives are missing. They also weren't listed in the data you shared with me, like the egrep of /var/local/emhttp/disks.ini. The two NVMe hosts were listed in the lshw output you provided:
     /0/100/1.2/0  storage  NVMe SSD Controller SM961/PM961
     /0/117/1.2/0  storage  WD Black NVMe SSD
     I've looked around and I can't find where you've posted the output of lsscsi -st, so can you run that for me?
  3. Man, you threw me for a loop there! I thought you were posting the Short test from Beta 2, and I was frustrated that the disk report was still wrong. I finally noticed it was from Beta 1 and a Long test. Looks like your previous settings were close, just a little low to fully unleash all the performance.
  4. More likely that your "SAS3416 Fusion-MPT Tri-Mode I/O Controller Chip (IOC)" didn't initialize correctly on boot.
  5. UTT v4.1 BETA 2 is attached. Same as with BETA 1, I'm primarily concerned about the SCSI Hosts and Disks report, so if I could get a few users to run this with a Short test and post the reports, that would be great. BETA 2 has more fixes for the SCSI Host Controllers and Connected Drives report (including a modified numerical sort on drive port #), and cosmetic tweaks to the server name that shows in the notifications. BETA 2 still has my debugging statements in the code, but they are all commented out. Here's the v4.1 changelog:
     # V4.1: Added a function to use the first result with 99.8% max speed for Pass 2
     #       Fixed Server Name in Notification messages (was hardcoded TOWER)
     #       Many fixes to the SCSI Host Controllers and Connected Drives report
     #       Added a function to check lsscsi version and optionally upgrade to v0.30
     #       Cosmetic menu tweaks
     #       - by Pauven 08/12/2019
     unraid6x-tunables-tester.sh.v4_1_BETA2.txt
  6. With a slight modification, that did the trick, thanks! I had to add a -n to sort numerically, so the final command was sort -n -t: -k3
  7. They are sorted, but it is an alpha sort, so 1 & 10-19 all sort before 2. Here's the code that outputs those disks and sorts them:
     for Disk in ${Disks[@]}
     do
       echo "${DiskSCSI[$Disk]} ${DiskNamePretty[$Disk]} ${DiskName[$Disk]} ${DiskSizePretty[$Disk]} ${DiskID[$Disk]//_/ }"
     done | sort >> $ReportFile
     As you can see, I simply pipe all of the lines to the "sort" function. Does anyone know how I can make this sort numerically based upon the 3rd octet in [5:0:x:0]? The only idea I have is to prefix each line with the port number, and to make it 2-digit with a leading zero, like this:
     [5] scsi5 mpt3sas - SAS3416 Fusion-MPT Tri-Mode I/O Controller Chip (IOC)
     00 - [5:0:0:0] disk3 sde 8.00TB
     02 - [5:0:2:0] disk1 sdf 8.00TB
     03 - [5:0:3:0] disk9 sdg 8.00TB
     04 - [5:0:4:0] disk10 sdh 8.00TB
     05 - [5:0:5:0] parity sdi 8.00TB
     06 - [5:0:6:0] disk2 sdj 8.00TB
     07 - [5:0:7:0] disk11 sdk 8.00TB
     08 - [5:0:8:0] disk5 sdl 8.00TB
     09 - [5:0:9:0] parity2 sdm 8.00TB
     10 - [5:0:10:0] disk8 sdn 8.00TB
     11 - [5:0:11:0] disk7 sdo 8.00TB
     12 - [5:0:12:0] disk6 sdp 8.00TB
     13 - [5:0:13:0] disk4 sdq 8.00TB
     14 - [5:0:14:0] disk12 sdr 8.00TB
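     For illustration, here's a minimal sketch (sample lines only, not the actual UTT code) of why the plain sort fails and how the sort -n -t: -k3 fix from post 6 works. With ':' as the field separator, the 3rd field of [5:0:x:0] is the port number, and -n compares it numerically:
     printf '%s\n' \
       '[5:0:0:0] disk3 sde 8.00TB' \
       '[5:0:10:0] disk8 sdn 8.00TB' \
       '[5:0:2:0] disk1 sdf 8.00TB' > /tmp/ports.txt
     sort /tmp/ports.txt              # alpha sort: ports 0, 10, 2 (wrong)
     sort -n -t: -k3 /tmp/ports.txt   # numeric sort on 3rd ':' field: 0, 2, 10 (right)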
  8. Thanks @StevenD! I've fixed some things in the report, does this look right to you?
     SCSI Host Controllers and Connected Drives
     --------------------------------------------------
     [0] scsi0 usb-storage -
       [0:0:0:0] flash sda 31.9GB
     [1] scsi1 ata_piix -
     [2] scsi2 ata_piix -
     [3] scsi3 vmw_pvscsi - PVSCSI SCSI Controller
     [4] scsi4 vmw_pvscsi - PVSCSI SCSI Controller
     [5] scsi5 mpt3sas - SAS3416 Fusion-MPT Tri-Mode I/O Controller Chip (IOC)
       [5:0:0:0] disk3 sde 8.00TB
       [5:0:10:0] disk8 sdn 8.00TB
       [5:0:11:0] disk7 sdo 8.00TB
       [5:0:12:0] disk6 sdp 8.00TB
       [5:0:13:0] disk4 sdq 8.00TB
       [5:0:14:0] disk12 sdr 8.00TB
       [5:0:2:0] disk1 sdf 8.00TB
       [5:0:3:0] disk9 sdg 8.00TB
       [5:0:4:0] disk10 sdh 8.00TB
       [5:0:5:0] parity sdi 8.00TB
       [5:0:6:0] disk2 sdj 8.00TB
       [5:0:7:0] disk11 sdk 8.00TB
       [5:0:8:0] disk5 sdl 8.00TB
       [5:0:9:0] parity2 sdm 8.00TB
     [N0] scsiN0 nvme0 - NVMe
       [N:0:4:1] cache nvme0n1 512GB
  9. @StevenD, I'm working on using your values to plug into the report on my system, that way I should be able to 100% simulate your disk report output and get it fixed. I need one more thing, if you can:
     egrep -i "\[|idx|name|type|device|color" /var/local/emhttp/disks.ini
  10. Yeah, not everyone gets a super exciting report. Sorry. Fingers crossed! I feel your pain, been there. Thanks. Very disappointing I didn't get the report right. I wonder if your drives running 10 to 34 instead of 0 to 24 is having an impact on the logic. Multi-dimensional arrays sound interesting, but I think the current flaw is very minor and just looks really bad. I'll give it one more go before trying a new approach. Actually, the NVMe disks did not show, just the NVMe controllers. Here's mine (NVMe way down at the bottom, which I got to show after adding the new lsscsi v0.30 upgrade function):
      SCSI Host Controllers and Connected Drives
      --------------------------------------------------
      [0] scsi0 usb-storage -
        [0:0:0:0] flash sda 4.00GB Patriot Memory
      [1] scsi1 ahci -
      [2] scsi2 ahci -
      [3] scsi3 ahci -
      [4] scsi4 ahci -
      [5] scsi5 ahci -
      [6] scsi6 ahci -
      [7] scsi7 ahci -
      [8] scsi8 ahci -
      [9] scsi9 ahci -
      [10] scsi10 ahci -
      [11] scsi11 ahci -
      [12] scsi12 mvsas - HighPoint Technologies, Inc.
        [12:0:0:0] disk17 sdb 3.00TB WDC WD30EFRX-68A
        [12:0:1:0] disk18 sdc 3.00TB WDC WD30EFRX-68A
        [12:0:2:0] disk19 sdd 3.00TB WDC WD30EFRX-68E
        [12:0:3:0] disk20 sde 3.00TB WDC WD30EFRX-68E
        [12:0:4:0] parity2 sdf 8.00TB HGST HUH728080AL
        [12:0:5:0] parity sdg 8.00TB HGST HUH728080AL
      [13] scsi13 mvsas - HighPoint Technologies, Inc.
        [13:0:0:0] disk1 sdh 8.00TB HGST HUH728080AL
        [13:0:1:0] disk2 sdi 3.00TB WDC WD30EFRX-68A
        [13:0:2:0] disk3 sdj 3.00TB WDC WD30EFRX-68E
        [13:0:3:0] disk4 sdk 3.00TB WDC WD30EFRX-68A
        [13:0:4:0] disk5 sdl 3.00TB WDC WD30EFRX-68A
        [13:0:5:0] disk6 sdm 3.00TB WDC WD30EFRX-68A
        [13:0:6:0] disk7 sdn 3.00TB WDC WD30EFRX-68A
        [13:0:7:0] disk8 sdo 3.00TB WDC WD30EFRX-68A
      [14] scsi14 mvsas - HighPoint Technologies, Inc.
        [14:0:0:0] disk9 sdp 3.00TB WDC WD30EFRX-68A
        [14:0:1:0] disk10 sdq 3.00TB WDC WD30EFRX-68A
        [14:0:2:0] disk11 sdr 3.00TB WDC WD30EFRX-68A
        [14:0:3:0] disk12 sds 3.00TB WDC WD30EFRX-68A
        [14:0:4:0] disk13 sdt 3.00TB WDC WD30EFRX-68A
        [14:0:5:0] disk14 sdu 3.00TB WDC WD30EFRX-68E
        [14:0:6:0] disk15 sdv 4.00TB ST4000VN000-1H41
        [14:0:7:0] disk16 sdw 4.00TB ST4000VN000-1H41
      [N0] scsiN0 nvme0 - NVMe
        [N:0:2:1] cache nvme0n1 1.00TB Samsung SSD 960
  11. UTT v4.1 BETA 1 is attached. I'm primarily concerned about the SCSI Hosts and Disks report, so if I could get a few users to run this with a Short test and post the reports, that would be great. This does have the new logic to find the leading edge for Pass 2, rather than the peak, so feel free to run the longer tests if you desire, just run a Short first and share those results. Here's the changelog:
      # V4.1: Added a function to use the first result with 99.8% max speed for Pass 2
      #       Fixed Server Name in Notification messages (was hardcoded TOWER)
      #       Many fixes to the SCSI Host Controllers and Connected Drives report
      #       Added a function to check lsscsi version and optionally upgrade to v0.30
      #       Cosmetic menu tweaks
      #       - by Pauven 08/11/2019
  12. So for users on older versions of Unraid 6.x, pre-6.7.0, would it be a good feature to have UTT offer to upgrade lsscsi to v0.30? If so, could someone help me out with the commands to do this? I'm a Windows guy, and I really get stumped when it comes to installing packages and updates unless there's a step by step guide. I looked at my own code from years ago to install lshw, and modified it to upgrade lsscsi to v0.30. Looks like it is working.
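      In case it helps anyone else, here's a rough sketch of the check-and-upgrade idea. This is not the exact code that went into UTT, and the package URL is a placeholder; point it at whichever Slackware mirror carries an lsscsi 0.30 .txz package:
      # Check the installed lsscsi version and upgrade if it's not 0.30
      current=$(lsscsi -V 2>&1 | grep -o '0\.[0-9]\+' | head -1)
      if [ "$current" != "0.30" ]; then
        echo "lsscsi $current found, upgrading to 0.30..."
        wget -O /tmp/lsscsi.txz "https://example.com/lsscsi-0.30-x86_64-1.txz"   # placeholder URL
        upgradepkg --install-new /tmp/lsscsi.txz   # Slackware package tool
      fi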
  13. Thanks @jbartlett, that's exactly what I needed. I wanted to make sure that Cache2 was IDX 31. I'll post Beta 1 of UTT v4.1 here shortly for testing.
  14. I finally figured out why my NVMe drives are not showing. On Unraid 6.6.6 (which is what I am running), the lsscsi version is 0.29, which doesn't have support for NVMe. Later versions of Unraid have lsscsi version 0.30, which is the latest and has NVMe support. Anyone know exactly what version of Unraid upgraded lsscsi to v0.30? Nevermind, I just read in the 6.7.0 release notes that lsscsi was upgraded to 0.30.
  15. Much better. Looks like the accuracy is +/- 0.2 MB/s. The new logic in UTT v4.1 would have used md_sync_window 6144 (from TEST PASS 1_HIGH) for Pass 2, and tested from 3072 - 9216. All things considered, I think the v4.1 results would be identical to these results for you, as your server has a really flat curve that starts extremely low, and the new logic won't really affect those results.
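      For anyone curious how the leading-edge pick works, here's a minimal sketch, not the actual UTT code, assuming a hypothetical pass1.txt of "md_sync_window speed" pairs:
      # Read the results twice: first to find the peak speed, then to
      # print the first md_sync_window that reaches 99.8% of that peak.
      awk '
        NR == FNR { if ($2 > max) max = $2; next }
        $2 >= max * 0.998 { print $1; exit }
      ' pass1.txt pass1.txt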
  16. Could anyone that has at least 2 Cache drives please run this command and provide the output:
      egrep "\[|idx" /var/local/emhttp/disks.ini
  17. Not yet. I'm trying to get the disk report working correctly, and hope to have UTT 4.1 out soon, maybe even today...
  18. Thanks for all the data @StevenD, you've been very helpful today.
  19. Thanks @Xaero. I just added the -i to the egreps in the UTT script, just in case. Any idea why your nvme1n1 drive doesn't show up in your df -H results?
  20. I'm thinking just array drives. This is for the SCSI Host Controllers and Connected Drives report at the end of the UTT results. A lot of the report requires configuration data for array drives, and so far all these NVMe drives have been non-array Cache or Unassigned drives, so they don't fully make it into the report. I'm trying to connect data from various sources together, and so I need to see what NVMe array devices look like.
  21. If anyone has an NVMe drive as part of their array (not cache, but data or parity), please run the above.
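      To illustrate the kind of cross-referencing mentioned in post 20, here's a hypothetical sketch that pulls each disk's device name out of disks.ini. The key names come from the egrep pattern above, but the layout is only a guess, and the actual parsing in UTT differs:
      # Build an associative array mapping disk name -> device (e.g. parity -> sdb),
      # assuming disks.ini uses ["name"] section headers and key="value" lines.
      declare -A DiskDevice
      current=""
      while IFS='=' read -r key val; do
        val="${val%\"}"; val="${val#\"}"                  # strip surrounding quotes
        case "$key" in
          \[*\])  current="${key//[\[\]\"]/}" ;;          # section header, e.g. ["parity"]
          device) [ -n "$current" ] && DiskDevice[$current]="$val" ;;
        esac
      done < /var/local/emhttp/disks.ini
      echo "parity is on /dev/${DiskDevice[parity]}"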
  22. @StevenD & @Xaero can you provide the output for: mdcmd status | grep "rdevStatus" and mdcmd status | grep "rdevName" and df -h
  23. Thanks @StevenD! Can I bother you to also run:
      lsscsi -H
      and
      lsscsi -st