StevenD
Community Developer
  • Posts: 1608
  • Days Won: 1

Everything posted by StevenD

  1. There’s already an open bug report. Revert to v6.6.7. https://forums.unraid.net/bug-reports/stable-releases/67x-very-slow-array-concurrent-performance-r605/
  2. I like PhotoSync. It works on Android and iOS, and it has a setting to start uploading as soon as it connects to your home wifi. I think it's $3 or $5 for the app on each device.
  3. You should be able to download it here: https://packages.slackware.com/?r=slackware-current&p=screen-4.6.2-i586-2.txz Then run installpkg on the downloaded file (sketch below).
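     A minimal sketch of that install, assuming you fetch the .txz from a Slackware mirror of your choice (the mirror hostname below is a placeholder, not a real link; wget ships with Unraid):

         # Download the package (substitute a real Slackware mirror URL),
         # then install it with Slackware's installpkg, which Unraid includes.
         wget -O /tmp/screen-4.6.2-i586-2.txz "https://<slackware-mirror>/slackware-current/slackware/ap/screen-4.6.2-i586-2.txz"
         installpkg /tmp/screen-4.6.2-i586-2.txz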
  4. I believe the only people who have experienced corruption have appdata on their array. Mine has always been on cache.
  5. Tons of SMB issues. On 6.6.7, I can write to my cache at a steady 1GB/s. On 6.7.x, it fluctuates between 1GB/s and ZERO. It literally pauses during transfers.
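     If you want to rule out the disk itself, a quick local write test on the cache separates SSD throughput from the SMB layer (an untested sketch; the file path and size are just examples):

         # Write 8 GiB straight to the cache pool, bypassing the page cache,
         # and watch the live rate; a steady ~1GB/s here means the pauses
         # are coming from SMB/networking, not from the SSD.
         dd if=/dev/zero of=/mnt/cache/ddtest bs=1M count=8192 oflag=direct status=progress
         rm -f /mnt/cache/ddtest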
  6. Nevermind, I see what it did. The results aren't accurate anyway, as I have several things using the array at the moment.

     Unraid 6.x Tunables Tester v4.1 BETA 3 by Pauven

     --- INITIAL BASELINE TEST OF CURRENT VALUES (1 Sample Point @ 10sec Duration) ---
     Test 1  - window=3072, thresh=3064: 1.689 GB in 10.039 sec = 172.3 MB/s

     --- BASELINE TEST OF UNRAID DEFAULT VALUES (1 Sample Point @ 10sec Duration) ---
     Setting all drives to nr_requests=128 for the following tests
     Test 1  - window= 384, thresh= 192: 1.359 GB in 10.041 sec = 138.6 MB/s

     --- TEST PASS 1 (2 Min - 12 Sample Points @ 10sec Duration) ---
     Setting all drives to nr_requests=128 for the following tests
     Test 1a - window= 384, thresh= 376: 1.071 GB in 10.042 sec = 109.2 MB/s
     Test 1b - window= 384, thresh= 320: 1.474 GB in 10.038 sec = 150.4 MB/s
     Test 1c - window= 384, thresh= 192: 0.977 GB in 10.038 sec =  99.7 MB/s
     Test 2a - window= 768, thresh= 760: 1.685 GB in 10.039 sec = 171.9 MB/s
     Test 2b - window= 768, thresh= 704: 1.668 GB in 10.041 sec = 170.1 MB/s
     Test 2c - window= 768, thresh= 384: 1.638 GB in 10.041 sec = 167.1 MB/s
     Test 3a - window=1536, thresh=1528: 1.693 GB in 10.043 sec = 172.6 MB/s
     Test 3b - window=1536, thresh=1472: 1.527 GB in 10.043 sec = 155.7 MB/s
     Test 3c - window=1536, thresh= 768: 1.342 GB in 10.042 sec = 136.9 MB/s
     Test 4a - window=3072, thresh=3064: 1.476 GB in 10.043 sec = 150.5 MB/s
     Test 4b - window=3072, thresh=3008: 1.365 GB in 10.042 sec = 139.2 MB/s
     Test 4c - window=3072, thresh=1536: 1.415 GB in 10.038 sec = 144.3 MB/s

     --- END OF SHORT AUTO TEST FOR DETERMINING IF YOU SHOULD RUN THE REAL TEST ---
     If the speeds changed with different values you should run a NORMAL/LONG test.
     If speeds didn't change then adjusting Tunables likely won't help your system.

     Completed: 0 Hrs 2 Min 34 Sec.

     Results have been written to ShortSyncTestReport_2019_08_13_1604.txt
     Show ShortSyncTestReport_2019_08_13_1604.txt now? (Y to show):
  7. That seems to have left off some tests. Of course, I already closed the window. I will go run it again.
  8. Just FYI... I reverted to 6.6.7 last night. Results of the short test:

     Unraid 6.x Tunables Tester v4.1 BETA 3 by Pauven
     Tunables Report produced Tue Aug 13 15:59:37 CDT 2019
     Run on server: nas
     Short Parity Sync Test

     Current Values: md_num_stripes=6144, md_sync_window=3072, md_sync_thresh=3064
     Global nr_requests=128
     Disk Specific nr_requests Values: sdl=128, sdj=128, sdg=128, sde=128, sdn=128,
     sdm=128, sdp=128, sdr=128, sdk=128, sdf=128, sdi=128, sdh=128, sdq=128, sdo=128

     --- INITIAL BASELINE TEST OF CURRENT VALUES (1 Sample Point @ 10sec Duration) ---
     Tst | RAM  | stri  | win  | req | thresh |  MB/s
     ------------------------------------------------
       1 |  351 |  6144 | 3072 | 128 |   3064 | 171.2

     --- BASELINE TEST OF UNRAID DEFAULT VALUES (1 Sample Point @ 10sec Duration) ---
     Tst | RAM  | stri  | win  | req | thresh |  MB/s
     ------------------------------------------------
       1 |   73 |  1280 |  384 | 128 |    192 |  62.5

     --- TEST PASS 1 (2 Min - 12 Sample Points @ 10sec Duration) ---
     Tst | RAM  | stri  | win  | req | thresh |  MB/s | thresh |  MB/s | thresh |  MB/s
     ----------------------------------------------------------------------------------
       1 |   43 |   768 |  384 | 128 |    376 |  74.1 |    320 |  62.1 |    192 |  59.9
       2 |   87 |  1536 |  768 | 128 |    760 | 117.4 |    704 | 127.2 |    384 | 111.0
       3 |  175 |  3072 | 1536 | 128 |   1528 | 133.7 |   1472 | 149.9 |    768 | 118.0
       4 |  351 |  6144 | 3072 | 128 |   3064 | 145.7 |   3008 | 145.7 |   1536 | 145.8

     --- END OF SHORT AUTO TEST FOR DETERMINING IF YOU SHOULD RUN THE REAL TEST ---
     If the speeds changed with different values you should run a NORMAL/LONG test.
     If speeds didn't change then adjusting Tunables likely won't help your system.

     Completed: 0 Hrs 3 Min 15 Sec.

     NOTE: Use the smallest set of values that produce good results. Larger values
     increase server memory use, and may cause stability issues with Unraid,
     especially if you have any add-ons or plug-ins installed.

     System Info: nas
       Unraid version 6.6.7
       md_num_stripes=6144
       md_sync_window=3072
       md_sync_thresh=3064
       nr_requests=128 (Global Setting)
       sbNumDisks=14
       CPU: Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz
       RAM: 32GiB System Memory

     Outputting free low memory information...
                   total        used        free      shared  buff/cache   available
     Mem:       32942816     4570920    27368236      674948     1003660    27190564
     Low:       32942816     5574580    27368236
     High:             0           0           0
     Swap:             0           0           0

     SCSI Host Controllers and Connected Drives
     --------------------------------------------------
     [0] scsi0  usb-storage
       [0:0:0:0]   flash    sda  31.9GB Reader SD MS
     [1] scsi1  ata_piix
     [2] scsi2  ata_piix
     [3] scsi3  vmw_pvscsi  PVSCSI SCSI Controller
     [4] scsi4  vmw_pvscsi  PVSCSI SCSI Controller
     [5] scsi5  mpt3sas     SAS3416 Fusion-MPT Tri-Mode I/O Controller Chip (IOC)
       [5:0:0:0]   disk3    sde  8.00TB HGST HDN728080AL
       [5:0:2:0]   disk9    sdf  8.00TB HGST HDN728080AL
       [5:0:3:0]   disk2    sdg  8.00TB HGST HDN728080AL
       [5:0:4:0]   disk11   sdh  8.00TB HGST HDN728080AL
       [5:0:5:0]   disk10   sdi  8.00TB HGST HDN728080AL
       [5:0:6:0]   disk1    sdj  8.00TB HGST HDN728080AL
       [5:0:7:0]   disk8    sdk  8.00TB HGST HDN728080AL
       [5:0:8:0]   parity   sdl  8.00TB HGST HDN728080AL
       [5:0:9:0]   disk5    sdm  8.00TB HGST HDN728080AL
       [5:0:10:0]  disk4    sdn  8.00TB HGST HDN728080AL
       [5:0:11:0]  parity2  sdo  8.00TB HGST HDN728080AL
       [5:0:12:0]  disk6    sdp  8.00TB HGST HDN728080AL
       [5:0:13:0]  disk12   sdq  8.00TB HGST HDN728080AL
       [5:0:14:0]  disk7    sdr  8.00TB HGST HDN728080AL
     [N0] scsiN0  nvme0  NVMe
       [N:0:4:1]   cache    nvme0n1  512GB Samsung SSD 970

     *** END OF REPORT ***
  9. If it matters, those two "missing" disks are mounted via Unassigned Devices and they are not part of the array.
  10. I, too, reverted to 6.6.7 last night. All is working as expected again.
  11. <whew> Had a little scare after rebooting from safe mode: none of my drives showed up. I had reverted the disk settings to default before I rebooted again, and this time they came up. Not sure if the two are related. I will have to play around with it.
  12. Interesting.

     Unraid 6.x Tunables Tester v4.1 BETA 1 by Pauven
     Tunables Report produced Sun Aug 11 21:05:52 CDT 2019
     Run on server: nas
     Long Parity Sync Test

     Current Values: md_num_stripes=4480, md_sync_window=2048, md_sync_thresh=2000
     Global nr_requests=128
     Disk Specific nr_requests Values: sdj=128, sdi=128, sdf=128, sde=128, sdp=128,
     sdo=128, sdq=128, sdr=128, sdh=128, sdg=128, sdm=128, sdl=128, sdn=128, sdk=128

     --- INITIAL BASELINE TEST OF CURRENT VALUES (1 Sample Point @ 10min Duration) ---
     Tst | RAM  | stri  | win  | req | thresh |  MB/s
     ------------------------------------------------
       1 |  256 |  4480 | 2048 | 128 |   2000 | 174.9

     --- BASELINE TEST OF UNRAID DEFAULT VALUES (1 Sample Point @ 10min Duration) ---
     Tst | RAM  | stri  | win  | req | thresh |  MB/s
     ------------------------------------------------
       1 |   73 |  1280 |  384 | 128 |    192 | 172.3

     --- TEST PASS 1 (2.5 Hrs - 12 Sample Points @ 10min Duration) ---
     Tst | RAM  | stri  | win  | req | thresh |  MB/s | thresh |  MB/s | thresh |  MB/s
     ----------------------------------------------------------------------------------
       1 |   43 |   768 |  384 | 128 |    376 | 172.1 |    320 | 172.5 |    192 | 172.2
       2 |   87 |  1536 |  768 | 128 |    760 | 172.9 |    704 | 173.1 |    384 | 172.6
       3 |  175 |  3072 | 1536 | 128 |   1528 | 174.2 |   1472 | 174.1 |    768 | 173.2
       4 |  351 |  6144 | 3072 | 128 |   3064 | 176.2 |   3008 | 176.1 |   1536 | 174.4

     --- TEST PASS 1_HIGH (40 Min - 3 Sample Points @ 10min Duration) ---
     Tst | RAM  | stri  | win  | req | thresh |  MB/s | thresh |  MB/s | thresh |  MB/s
     ----------------------------------------------------------------------------------
       1 |  702 | 12288 | 6144 | 128 |   6136 | 177.4 |   6080 | 177.5 |   3072 | 177.4

     --- TEST PASS 1_VERYHIGH (40 Min - 3 Sample Points @ 10min Duration) ---
     Tst | RAM  | stri  | win  | req | thresh |  MB/s | thresh |  MB/s | thresh |  MB/s
     ----------------------------------------------------------------------------------
       1 | 1054 | 18432 | 9216 | 128 |   9208 | 177.4 |   9152 | 177.5 |   4608 | 177.4

     --- Using md_sync_window=6144 & md_sync_thresh=window-64 for Pass 2 ---

     --- TEST PASS 2 (10 Hrs - 49 Sample Points @ 10min Duration) ---
     Tst | RAM  | stri  | win  | req | thresh |  MB/s
     ------------------------------------------------
       1 |  351 |  6144 | 3072 | 128 |   3008 | 176.0
       2 |  366 |  6400 | 3200 | 128 |   3136 | 176.2
       3 |  380 |  6656 | 3328 | 128 |   3264 | 176.3
       4 |  395 |  6912 | 3456 | 128 |   3392 | 176.5
       5 |  410 |  7168 | 3584 | 128 |   3520 | 176.6
       6 |  424 |  7424 | 3712 | 128 |   3648 | 176.7
       7 |  439 |  7680 | 3840 | 128 |   3776 | 176.8
       8 |  453 |  7936 | 3968 | 128 |   3904 | 177.0
       9 |  468 |  8192 | 4096 | 128 |   4032 | 177.1
      10 |  483 |  8448 | 4224 | 128 |   4160 | 177.1
      11 |  497 |  8704 | 4352 | 128 |   4288 | 177.3
      12 |  512 |  8960 | 4480 | 128 |   4416 | 177.5
      13 |  527 |  9216 | 4608 | 128 |   4544 | 177.5
      14 |  541 |  9472 | 4736 | 128 |   4672 | 177.5
      15 |  556 |  9728 | 4864 | 128 |   4800 | 177.5
      16 |  571 |  9984 | 4992 | 128 |   4928 | 177.5
      17 |  585 | 10240 | 5120 | 128 |   5056 | 177.5
      18 |  600 | 10496 | 5248 | 128 |   5184 | 177.5
      19 |  615 | 10752 | 5376 | 128 |   5312 | 177.5
      20 |  629 | 11008 | 5504 | 128 |   5440 | 177.4
      21 |  644 | 11264 | 5632 | 128 |   5568 | 177.6
      22 |  659 | 11520 | 5760 | 128 |   5696 | 177.5
      23 |  673 | 11776 | 5888 | 128 |   5824 | 177.5
      24 |  688 | 12032 | 6016 | 128 |   5952 | 177.5
      25 |  702 | 12288 | 6144 | 128 |   6080 | 177.5
      26 |  717 | 12544 | 6272 | 128 |   6208 | 177.4
      27 |  732 | 12800 | 6400 | 128 |   6336 | 177.4
      28 |  746 | 13056 | 6528 | 128 |   6464 | 177.1
      29 |  761 | 13312 | 6656 | 128 |   6592 | 177.5
      30 |  776 | 13568 | 6784 | 128 |   6720 | 177.5
      31 |  790 | 13824 | 6912 | 128 |   6848 | 177.5
      32 |  805 | 14080 | 7040 | 128 |   6976 | 177.4
      33 |  820 | 14336 | 7168 | 128 |   7104 | 177.4
      34 |  834 | 14592 | 7296 | 128 |   7232 | 177.5
      35 |  849 | 14848 | 7424 | 128 |   7360 | 177.5
      36 |  864 | 15104 | 7552 | 128 |   7488 | 177.4
      37 |  878 | 15360 | 7680 | 128 |   7616 | 177.4
      38 |  893 | 15616 | 7808 | 128 |   7744 | 177.5
      39 |  907 | 15872 | 7936 | 128 |   7872 | 177.4
      40 |  922 | 16128 | 8064 | 128 |   8000 | 177.5
      41 |  937 | 16384 | 8192 | 128 |   8128 | 177.5
      42 |  951 | 16640 | 8320 | 128 |   8256 | 177.4
      43 |  966 | 16896 | 8448 | 128 |   8384 | 177.5
      44 |  981 | 17152 | 8576 | 128 |   8512 | 177.6
      45 |  995 | 17408 | 8704 | 128 |   8640 | 177.4
      46 | 1010 | 17664 | 8832 | 128 |   8768 | 177.5
      47 | 1025 | 17920 | 8960 | 128 |   8896 | 177.5
      48 | 1039 | 18176 | 9088 | 128 |   9024 | 177.4
      49 | 1054 | 18432 | 9216 | 128 |   9152 | 177.5

     --- Using fastest result of md_sync_window=5632 for Pass 3 ---

     --- TEST PASS 3 (4 Hrs - 18 Sample Points @ 10min Duration) ---
     Tst | RAM  | stri  | win  | req | thresh |  MB/s
     ------------------------------------------------
      1a |  644 | 11264 | 5632 | 128 |   5631 | 177.6
      1b |  644 | 11264 | 5632 | 128 |   5628 | 177.4
      1c |  644 | 11264 | 5632 | 128 |   5624 | 177.5
      1d |  644 | 11264 | 5632 | 128 |   5620 | 177.5
      1e |  644 | 11264 | 5632 | 128 |   5616 | 177.4
      1f |  644 | 11264 | 5632 | 128 |   5612 | 177.4
      1g |  644 | 11264 | 5632 | 128 |   5608 | 177.5
      1h |  644 | 11264 | 5632 | 128 |   5604 | 177.4
      1i |  644 | 11264 | 5632 | 128 |   5600 | 177.4
      1j |  644 | 11264 | 5632 | 128 |   5596 | 177.5
      1k |  644 | 11264 | 5632 | 128 |   5592 | 177.5
      1l |  644 | 11264 | 5632 | 128 |   5588 | 177.6
      1m |  644 | 11264 | 5632 | 128 |   5584 | 177.5
      1n |  644 | 11264 | 5632 | 128 |   5580 | 177.4
      1o |  644 | 11264 | 5632 | 128 |   5576 | 177.4
      1p |  644 | 11264 | 5632 | 128 |   5572 | 177.5
      1q |  644 | 11264 | 5632 | 128 |   5568 | 177.4
      1r |  644 | 11264 | 5632 | 128 |   2816 | 177.4

     The results below do NOT include the Baseline test of current values.

     The Fastest settings tested give a peak speed of 177.6 MB/s
          md_sync_window: 5632
          md_num_stripes: 11264
          md_sync_thresh: 5631
          nr_requests: 128
     This will consume 644 MB (388 MB more than your current utilization of 256 MB)

     The Thriftiest settings (95% of Fastest) give a peak speed of 172.1 MB/s
          md_sync_window: 384
          md_num_stripes: 768
          md_sync_thresh: 376
          nr_requests: 128
     This will consume 43 MB (213 MB less than your current utilization of 256 MB)

     The Recommended settings (99% of Fastest) give a peak speed of 176.2 MB/s
          md_sync_window: 3072
          md_num_stripes: 6144
          md_sync_thresh: 3064
          nr_requests: 128
     This will consume 351 MB (95 MB more than your current utilization of 256 MB)

     NOTE: Adding additional drives will increase memory consumption.

     In Unraid, go to Settings > Disk Settings to set your chosen parameter values.

     Completed: 15 Hrs 9 Min 0 Sec.

     NOTE: Use the smallest set of values that produce good results. Larger values
     increase server memory use, and may cause stability issues with Unraid,
     especially if you have any add-ons or plug-ins installed.

     System Info: nas
       Unraid version 6.7.3-rc1
       md_num_stripes=4480
       md_sync_window=2048
       md_sync_thresh=2000
       nr_requests=128 (Global Setting)
       sbNumDisks=14
       CPU: Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz
       RAM: 32GiB System Memory

     Outputting free low memory information...
                   total        used        free      shared  buff/cache   available
     Mem:       32941156      587340    31756344      579656      597472    31422840
     Low:       32941156     1184812    31756344
     High:             0           0           0
     Swap:             0           0           0

     SCSI Host Controllers and Connected Drives
     --------------------------------------------------
     [0] scsi0  usb-storage -
       [5:0:6:0]   parity   sdj  8.00TB HGST HDN728080AL
     [1] scsi1  ata_piix -
     [2] scsi2  ata_piix -
     [3] scsi3  vmw_pvscsi - PVSCSI SCSI Controller
     [4] scsi4  vmw_pvscsi - PVSCSI SCSI Controller
     [5] scsi5  mpt3sas - SAS3416 Fusion-MPT Tri-Mode I/O Controller Chip (IOC)
       [5:0:0:0]   disk3    sde  8.00TB HGST HDN728080AL
       [5:0:10:0]  disk12   sdn  8.00TB HGST HDN728080AL
       [5:0:11:0]  disk5    sdo  8.00TB HGST HDN728080AL
       [5:0:12:0]  disk4    sdp  8.00TB HGST HDN728080AL
       [5:0:13:0]  disk6    sdq  8.00TB HGST HDN728080AL
       [5:0:2:0]   disk2    sdf  8.00TB HGST HDN728080AL
       [5:0:3:0]   disk9    sdg  8.00TB HGST HDN728080AL
       [5:0:4:0]   disk8    sdh  8.00TB HGST HDN728080AL
       [5:0:5:0]   disk1    sdi  8.00TB HGST HDN728080AL
       [5:0:6:0]   parity   sdj  8.00TB HGST HDN728080AL
       [5:0:6:0]   parity   sdj  8.00TB HGST HDN728080AL
       [5:0:7:0]   parity2  sdk  8.00TB HGST HDN728080AL
       [5:0:8:0]   disk11   sdl  8.00TB HGST HDN728080AL
       [5:0:9:0]   disk10   sdm  8.00TB HGST HDN728080AL
     [N0] scsiN0  nvme0 - NVMe
       [5:0:6:0]   parity   sdj  8.00TB HGST HDN728080AL

     *** END OF REPORT ***
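     If you want to try the Recommended values above from the console before committing them in Settings > Disk Settings, something like this should work (an untested sketch; as far as I know, values set via mdcmd take effect immediately but do not survive a reboot unless also saved in the GUI):

         # Apply the tuner's Recommended settings to the running md driver.
         mdcmd set md_num_stripes 6144
         mdcmd set md_sync_window 3072
         mdcmd set md_sync_thresh 3064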
  13. It does, except for the disk order. I assume [5:0:x:0] is the port number. They don't line up, but that doesn't really matter.
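     For what it's worth, the tuple is [host:channel:target:lun], so the x in [5:0:x:0] is the SCSI target ID, which on a SAS HBA usually tracks the phy/port the drive is cabled to. A quick way to check the mapping (a sketch; the sd[e-r] range just matches my array disks, and it relies on the sas_address attribute sysfs exposes for SAS-attached devices):

         # Print each array disk alongside its SAS address to see how the
         # kernel's target IDs line up with the controller's ports.
         for d in /sys/block/sd[e-r]; do
           printf '%s  %s\n' "$(basename "$d")" "$(cat "$d"/device/sas_address 2>/dev/null)"
         done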
  14. root@nas:~# egrep -i "\[|idx|name|type|device|color" /var/local/emhttp/disks.ini
      ["parity"]   idx="0"   name="parity"   device="sdj"      type="Parity"  color="green-on"  deviceSb=""
      ["disk1"]    idx="1"   name="disk1"    device="sdi"      type="Data"    color="green-on"  fsType="xfs"   fsColor="green-on"   deviceSb="md1"
      ["disk2"]    idx="2"   name="disk2"    device="sdf"      type="Data"    color="green-on"  fsType="xfs"   fsColor="green-on"   deviceSb="md2"
      ["disk3"]    idx="3"   name="disk3"    device="sde"      type="Data"    color="green-on"  fsType="xfs"   fsColor="green-on"   deviceSb="md3"
      ["disk4"]    idx="4"   name="disk4"    device="sdp"      type="Data"    color="green-on"  fsType="xfs"   fsColor="green-on"   deviceSb="md4"
      ["disk5"]    idx="5"   name="disk5"    device="sdo"      type="Data"    color="green-on"  fsType="xfs"   fsColor="green-on"   deviceSb="md5"
      ["disk6"]    idx="6"   name="disk6"    device="sdq"      type="Data"    color="green-on"  fsType="xfs"   fsColor="green-on"   deviceSb="md6"
      ["disk7"]    idx="7"   name="disk7"    device="sdr"      type="Data"    color="green-on"  fsType="xfs"   fsColor="green-on"   deviceSb="md7"
      ["disk8"]    idx="8"   name="disk8"    device="sdh"      type="Data"    color="green-on"  fsType="xfs"   fsColor="green-on"   deviceSb="md8"
      ["disk9"]    idx="9"   name="disk9"    device="sdg"      type="Data"    color="green-on"  fsType="xfs"   fsColor="green-on"   deviceSb="md9"
      ["disk10"]   idx="10"  name="disk10"   device="sdm"      type="Data"    color="green-on"  fsType="xfs"   fsColor="green-on"   deviceSb="md10"
      ["disk11"]   idx="11"  name="disk11"   device="sdl"      type="Data"    color="green-on"  fsType="xfs"   fsColor="green-on"   deviceSb="md11"
      ["disk12"]   idx="12"  name="disk12"   device="sdn"      type="Data"    color="green-on"  fsType="xfs"   fsColor="green-on"   deviceSb="md12"
      ["parity2"]  idx="29"  name="parity2"  device="sdk"      type="Parity"  color="green-on"  deviceSb=""
      ["cache"]    idx="30"  name="cache"    device="nvme0n1"  type="Cache"   color="green-on"  fsType="xfs"   fsColor="yellow-on"  deviceSb="nvme0n1p1"
      ["flash"]    idx="54"  name="flash"    device="sda"      type="Flash"   color="green-on"  comment="Unraid OS boot device"  fsType="vfat"  fsColor="yellow-on"
      root@nas:~#
  15. It's still on pass 2, but I think it has found the max speed for my array.
  16. Here's a short test on v4.1-beta. I just kicked off a long test. I will report back tomorrow.

     Unraid 6.x Tunables Tester v4.1 BETA 1 by Pauven
     Tunables Report produced Sun Aug 11 21:00:16 CDT 2019
     Run on server: nas
     Short Parity Sync Test

     Current Values: md_num_stripes=4480, md_sync_window=2048, md_sync_thresh=2000
     Global nr_requests=128
     Disk Specific nr_requests Values: sdj=128, sdi=128, sdf=128, sde=128, sdp=128,
     sdo=128, sdq=128, sdr=128, sdh=128, sdg=128, sdm=128, sdl=128, sdn=128, sdk=128

     --- INITIAL BASELINE TEST OF CURRENT VALUES (1 Sample Point @ 10sec Duration) ---
     Tst | RAM  | stri  | win  | req | thresh |  MB/s
     ------------------------------------------------
       1 |  256 |  4480 | 2048 | 128 |   2000 | 176.3

     --- BASELINE TEST OF UNRAID DEFAULT VALUES (1 Sample Point @ 10sec Duration) ---
     Tst | RAM  | stri  | win  | req | thresh |  MB/s
     ------------------------------------------------
       1 |   73 |  1280 |  384 | 128 |    192 | 126.7

     --- TEST PASS 1 (2 Min - 12 Sample Points @ 10sec Duration) ---
     Tst | RAM  | stri  | win  | req | thresh |  MB/s | thresh |  MB/s | thresh |  MB/s
     ----------------------------------------------------------------------------------
       1 |   43 |   768 |  384 | 128 |    376 | 171.3 |    320 | 156.3 |    192 | 127.4
       2 |   87 |  1536 |  768 | 128 |    760 | 173.4 |    704 | 173.0 |    384 | 169.7
       3 |  175 |  3072 | 1536 | 128 |   1528 | 174.9 |   1472 | 174.1 |    768 | 173.9
       4 |  351 |  6144 | 3072 | 128 |   3064 | 176.0 |   3008 | 175.9 |   1536 | 174.4

     --- TEST PASS 1_HIGH (30 Sec - 3 Sample Points @ 10sec Duration) ---
     Tst | RAM  | stri  | win  | req | thresh |  MB/s | thresh |  MB/s | thresh |  MB/s
     ----------------------------------------------------------------------------------
       1 |  702 | 12288 | 6144 | 128 |   6136 | 177.6 |   6080 | 177.3 |   3072 | 177.6

     --- TEST PASS 1_VERYHIGH (30 Sec - 3 Sample Points @ 10sec Duration) ---
     Tst | RAM  | stri  | win  | req | thresh |  MB/s | thresh |  MB/s | thresh |  MB/s
     ----------------------------------------------------------------------------------
       1 | 1054 | 18432 | 9216 | 128 |   9208 | 177.8 |   9152 | 178.0 |   4608 | 177.4

     --- END OF SHORT AUTO TEST FOR DETERMINING IF YOU SHOULD RUN THE REAL TEST ---
     If the speeds changed with different values you should run a NORMAL/LONG test.
     If speeds didn't change then adjusting Tunables likely won't help your system.

     Completed: 0 Hrs 3 Min 36 Sec.

     NOTE: Use the smallest set of values that produce good results. Larger values
     increase server memory use, and may cause stability issues with Unraid,
     especially if you have any add-ons or plug-ins installed.

     System Info: nas
       Unraid version 6.7.3-rc1
       md_num_stripes=4480
       md_sync_window=2048
       md_sync_thresh=2000
       nr_requests=128 (Global Setting)
       sbNumDisks=14
       CPU: Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz
       RAM: 32GiB System Memory

     Outputting free low memory information...
                   total        used        free      shared  buff/cache   available
     Mem:       32941156      569084    31813128      540572      558944    31479924
     Low:       32941156     1128028    31813128
     High:             0           0           0
     Swap:             0           0           0

     SCSI Host Controllers and Connected Drives
     --------------------------------------------------
     [0] scsi0  usb-storage -
       [5:0:6:0]   parity   sdj  8.00TB HGST HDN728080AL
     [1] scsi1  ata_piix -
     [2] scsi2  ata_piix -
     [3] scsi3  vmw_pvscsi - PVSCSI SCSI Controller
     [4] scsi4  vmw_pvscsi - PVSCSI SCSI Controller
     [5] scsi5  mpt3sas - SAS3416 Fusion-MPT Tri-Mode I/O Controller Chip (IOC)
       [5:0:0:0]   disk3    sde  8.00TB HGST HDN728080AL
       [5:0:10:0]  disk12   sdn  8.00TB HGST HDN728080AL
       [5:0:11:0]  disk5    sdo  8.00TB HGST HDN728080AL
       [5:0:12:0]  disk4    sdp  8.00TB HGST HDN728080AL
       [5:0:13:0]  disk6    sdq  8.00TB HGST HDN728080AL
       [5:0:2:0]   disk2    sdf  8.00TB HGST HDN728080AL
       [5:0:3:0]   disk9    sdg  8.00TB HGST HDN728080AL
       [5:0:4:0]   disk8    sdh  8.00TB HGST HDN728080AL
       [5:0:5:0]   disk1    sdi  8.00TB HGST HDN728080AL
       [5:0:6:0]   parity   sdj  8.00TB HGST HDN728080AL
       [5:0:6:0]   parity   sdj  8.00TB HGST HDN728080AL
       [5:0:7:0]   parity2  sdk  8.00TB HGST HDN728080AL
       [5:0:8:0]   disk11   sdl  8.00TB HGST HDN728080AL
       [5:0:9:0]   disk10   sdm  8.00TB HGST HDN728080AL
     [N0] scsiN0  nvme0 - NVMe
       [5:0:6:0]   parity   sdj  8.00TB HGST HDN728080AL

     *** END OF REPORT ***
  17. You're very welcome! Have you posted an updated version recently? I have scheduled downtime for tonight to run the Long Test.
  18. root@nas:~# mdcmd status | grep "rdevStatus"
      rdevStatus.0=DISK_OK
      rdevStatus.1=DISK_OK
      rdevStatus.2=DISK_OK
      rdevStatus.3=DISK_OK
      rdevStatus.4=DISK_OK
      rdevStatus.5=DISK_OK
      rdevStatus.6=DISK_OK
      rdevStatus.7=DISK_OK
      rdevStatus.8=DISK_OK
      rdevStatus.9=DISK_OK
      rdevStatus.10=DISK_OK
      rdevStatus.11=DISK_OK
      rdevStatus.12=DISK_OK
      rdevStatus.13=DISK_NP
      rdevStatus.14=DISK_NP
      rdevStatus.15=DISK_NP
      rdevStatus.16=DISK_NP
      rdevStatus.17=DISK_NP
      rdevStatus.18=DISK_NP
      rdevStatus.19=DISK_NP
      rdevStatus.20=DISK_NP
      rdevStatus.21=DISK_NP
      rdevStatus.22=DISK_NP
      rdevStatus.23=DISK_NP
      rdevStatus.24=DISK_NP
      rdevStatus.25=DISK_NP
      rdevStatus.26=DISK_NP
      rdevStatus.27=DISK_NP
      rdevStatus.28=DISK_NP
      rdevStatus.29=DISK_OK

      root@nas:~# mdcmd status | grep "rdevName"
      rdevName.0=sdi
      rdevName.1=sdf
      rdevName.2=sdj
      rdevName.3=sde
      rdevName.4=sdq
      rdevName.5=sdl
      rdevName.6=sdp
      rdevName.7=sdo
      rdevName.8=sdn
      rdevName.9=sdg
      rdevName.10=sdh
      rdevName.11=sdk
      rdevName.12=sdr
      rdevName.13=
      rdevName.14=
      rdevName.15=
      rdevName.16=
      rdevName.17=
      rdevName.18=
      rdevName.19=
      rdevName.20=
      rdevName.21=
      rdevName.22=
      rdevName.23=
      rdevName.24=
      rdevName.25=
      rdevName.26=
      rdevName.27=
      rdevName.28=
      rdevName.29=sdm

      root@nas:~# df -h
      Filesystem      Size  Used Avail Use% Mounted on
      rootfs           16G  716M   15G   5% /
      tmpfs            32M  600K   32M   2% /run
      devtmpfs         16G     0   16G   0% /dev
      tmpfs            16G     0   16G   0% /dev/shm
      cgroup_root     8.0M     0  8.0M   0% /sys/fs/cgroup
      tmpfs           128M  1.9M  127M   2% /var/log
      /dev/sda1        30G  490M   30G   2% /boot
      /dev/loop0      8.7M  8.7M     0 100% /lib/modules
      /dev/loop1      5.9M  5.9M     0 100% /lib/firmware
      /dev/md1        7.3T  7.1T  239G  97% /mnt/disk1
      /dev/md2        7.3T  3.7T  3.7T  51% /mnt/disk2
      /dev/md3        7.3T  6.6T  715G  91% /mnt/disk3
      /dev/md4        7.3T  6.6T  734G  91% /mnt/disk4
      /dev/md5        7.3T  5.5T  1.9T  75% /mnt/disk5
      /dev/md6        7.3T  4.1T  3.3T  56% /mnt/disk6
      /dev/md7        7.3T  7.2T  165G  98% /mnt/disk7
      /dev/md8        7.3T  5.5T  1.9T  75% /mnt/disk8
      /dev/md9        7.3T  7.1T  193G  98% /mnt/disk9
      /dev/md10       7.3T  6.5T  850G  89% /mnt/disk10
      /dev/md11       7.3T  6.9T  458G  94% /mnt/disk11
      /dev/md12       7.3T  3.4T  3.9T  47% /mnt/disk12
      /dev/nvme0n1p1  477G  395G   82G  83% /mnt/cache
      shfs             88T   70T   18T  80% /mnt/user
      /dev/sdd1       894G  716G  179G  81% /mnt/disks/APPDATA_BACKUP
      /dev/sdc1      1021M  348M  674M  35% /mnt/disks/UNRAIDBOOT
      /dev/loop2       20G  4.3G   16G  22% /var/lib/docker
      shm              64M     0   64M   0% /var/lib/docker/containers/e60fe82b5262b59081476363199cb7cc3082771ec0b2946a38b7accc59a0a502/mounts/shm
      shm              64M     0   64M   0% /var/lib/docker/containers/6c04f6088bd948815e7023f949d83fbc5673f2e6bcf14aea7f42aec79f5e612f/mounts/shm
      shm              64M     0   64M   0% /var/lib/docker/containers/9e668c38bba52dea641308a12408f339f7d8824f2ce84d573cc633f518a0cbeb/mounts/shm
      shm              64M     0   64M   0% /var/lib/docker/containers/b6a9bf81564477a9cc60cdc5079ed23d8c950ab1b60047972ce0d0339bd7a107/mounts/shm
      shm              64M     0   64M   0% /var/lib/docker/containers/ebe0cc2382f0dfe4ff00acd561ad8e50f2b8a723d685ea32b3b306463057fc70/mounts/shm
      shm              64M     0   64M   0% /var/lib/docker/containers/0d23c9a0d418f1daab9ba74758001d2a9e2372b4228e03f884eecba0a57dabc4/mounts/shm
      shm              64M     0   64M   0% /var/lib/docker/containers/779092989e59bed35c5c36d91c3b68a7c3392da84f56d1da4e240f189fb41eb9/mounts/shm
      shm              64M  8.0K   64M   1% /var/lib/docker/containers/f3ffce0a4ddee82e2de28367ebd9d3d99564bfcd95ff7d7cfc0926f315bb8883/mounts/shm
      shm              64M     0   64M   0% /var/lib/docker/containers/9ba4d044a1e36b6049f429fa264569894ccb92c264fbfd360b5ac26c5cb250c8/mounts/shm
      root@nas:~#
  19. root@nas:~# lsscsi -H
      [0]    usb-storage
      [1]    ata_piix
      [2]    ata_piix
      [3]    vmw_pvscsi
      [4]    vmw_pvscsi
      [5]    mpt3sas
      [N:0]  /dev/nvme0  Samsung SSD 970 PRO 512GB  S463NF0M516013Y  1B2QEXP7

      root@nas:~# lsscsi -st
      [0:0:0:0]   disk     usb:1-1.1:1.0            /dev/sda      31.9GB
      [0:0:0:1]   disk     usb:1-1.1:1.0            /dev/sdb           -
      [3:0:0:0]   disk                              /dev/sdc      1.07GB
      [4:0:0:0]   disk                              /dev/sdd       960GB
      [5:0:0:0]   disk     sas:0x300605b00e84f8bf   /dev/sde      8.00TB
      [5:0:1:0]   enclosu  sas:0x300705b00e84f8b0   -                  -
      [5:0:2:0]   disk     sas:0x300605b00e84f8bb   /dev/sdf      8.00TB
      [5:0:3:0]   disk     sas:0x300605b00e84f8b3   /dev/sdg      8.00TB
      [5:0:4:0]   disk     sas:0x300605b00e84f8b5   /dev/sdh      8.00TB
      [5:0:5:0]   disk     sas:0x300605b00e84f8b9   /dev/sdi      8.00TB
      [5:0:6:0]   disk     sas:0x300605b00e84f8bd   /dev/sdj      8.00TB
      [5:0:7:0]   disk     sas:0x300605b00e84f8b7   /dev/sdk      8.00TB
      [5:0:8:0]   disk     sas:0x300605b00e84f8ba   /dev/sdl      8.00TB
      [5:0:9:0]   disk     sas:0x300605b00e84f8b4   /dev/sdm      8.00TB
      [5:0:10:0]  disk     sas:0x300605b00e84f8b1   /dev/sdn      8.00TB
      [5:0:11:0]  disk     sas:0x300605b00e84f8be   /dev/sdo      8.00TB
      [5:0:12:0]  disk     sas:0x300605b00e84f8bc   /dev/sdp      8.00TB
      [5:0:13:0]  disk     sas:0x300605b00e84f8b8   /dev/sdq      8.00TB
      [5:0:14:0]  disk     sas:0x300605b00e84f8b0   /dev/sdr      8.00TB
      [N:0:4:1]   disk     pcie 0x144d:0xa801       /dev/nvme0n1   512GB
  20. root@nas:~# lshw -quiet -short -c storage
      H/W path         Device  Class    Description
      ==========================================================
      /0/100/7.1               storage  82371AB/EB/MB PIIX4 IDE
      /0/100/15/0      scsi3   storage  PVSCSI SCSI Controller
      /0/100/15.1/0            storage  NVMe SSD Controller SM981/PM981/PM983
      /0/100/16.1/0    scsi4   storage  PVSCSI SCSI Controller
      /0/100/17/0      scsi5   storage  SAS3416 Fusion-MPT Tri-Mode I/O Controller Chip (IOC)
      /0/3             scsi0   storage
  21. You could go to 6.6.6. Surprisingly, that was a really good release.
  22. openVMTools_compiled is now listed in Community Applications. You will need to delete your current plugin and install from CA if you wish to get update notifications. Thanks @Squid for all of your help!
  23. This VM only exists for me to compile open-vm-tools, so it's 99% virtual (I have a licensed USB device passed through). My production box uses a passed-through LSI controller with 8TB drives. Hopefully I can run the tuning script on it soon.