Everything posted by jbartlett

  1. Actually, I do create a bus tree so I can test each level for a better view.
  2. This seems to highlight a recent thought of adding a system-wide benchmark of the drives: reading multiple controllers at the same time and all of them together, with all combinations of controllers.
  3. Install the DiskSpeed docker app from my sig; it will show you whether your controller is saturated.
  4. I had eight WD 6TB Red Pros on a SAS PCIe 2 controller, and I had to take two of them off of it before I could read all of the drives at the same time at the same speed as each drive on its own. My UTT scores changed significantly afterward. (See the bandwidth sketch after this list.)
  5. Try the controller bandwidth test in my DiskSpeed plugin (link in my sig). It can help you determine whether you've saturated your controller's capabilities.
  6. If you're like me and can't run in Safe/Maint Mode, disabling network shares (globally or per share) will prevent people & apps from accessing the box from other machines but still let KVM & Docker run. In my case, I run my home web server through Docker and app/email server through a VM but neither is hitting against the array.
  7. My backup server has no Parity drive. The Check button then does a Read check of all the array drives looking for errors.
  8. Bingo! I didn't make the connection until reading the posts just prior to this.
  9. Normal run from Beta 2. I had reset my parity settings to defaults after moving two drives off of the PCIe 2 controller so it wasn't being saturated. Looks like defaults work for me. NormalSyncTestReport_2019_08_13_0102.txt
     Short run from Beta 3: ShortSyncTestReport_2019_08_13_1046.txt
  10. I had, but only the NVMe drives, so it likely was easy to miss. Here's the full report:
      [0:0:0:0]   disk  usb:1-4:1.0             /dev/sda      32.0GB
      [1:0:0:0]   disk  sata:5000cca255c167b1   /dev/sdh      6.00TB
      [2:0:0:0]   disk  sata:5000cca24dce4c79   /dev/sdi      6.00TB
      [3:0:0:0]   disk  sata:5002538e40270cde   /dev/sdj      1.00TB
      [4:0:0:0]   disk  sata:5002538e40270f85   /dev/sdk      1.00TB
      [5:0:0:0]   disk  sata:5000cca255dc42d7   /dev/sdl      6.00TB
      [6:0:0:0]   disk  sata:5001b44ef905211e   /dev/sdm      240GB
      [8:0:0:0]   disk  sata:5000cca24dcc471a   /dev/sdn      6.00TB
      [11:0:0:0]  disk  sas:0x4433221100000000  /dev/sdb      6.00TB
      [11:0:1:0]  disk  sas:0x4433221101000000  /dev/sdc      6.00TB
      [11:0:2:0]  disk  sas:0x4433221102000000  /dev/sdd      6.00TB
      [11:0:3:0]  disk  sas:0x4433221103000000  /dev/sde      6.00TB
      [11:0:4:0]  disk  sas:0x4433221105000000  /dev/sdf      6.00TB
      [11:0:5:0]  disk  sas:0x4433221107000000  /dev/sdg      6.00TB
      [N:0:2:1]   disk  pcie 0x144d:0xa801      /dev/nvme0n1  500GB
      [N:1:0:1]   disk  pcie 0x1b4b:0x1093      /dev/nvme1n1  256GB
  11. My Beta2 short test. Running a normal test now. ShortSyncTestReport_2019_08_13_0042.txt
  12. From my experience using my DiskSpeed app, the sd? drive assignments are done in port-number order on the controller, so sdc is one port after sdb, for example. All drives on a given controller are given sd? IDs, then all drives on the next controller, and so on until all controllers have been processed. So if the unRAID disk order is done in sd? order, it should match up, barring the controllers initializing in a different order on boot. (A sysfs lookup sketch for checking this mapping follows this list.)
  13. I removed one of my drives from my array to turn it into a hot spare since I have more protected storage than I needed. I went from nine array drives to eight. One of my shares was set to include Disk 9. After I removed the drive, Disk 9 no longer existed, but the share was still set to include it according to the "Fix Common Problems" plugin report. The problem is, Disk 9 isn't included in the list of available drives, so it couldn't be seen or specifically deselected on its own. Fixing it was simply a matter of checking another drive such as Disk 1 and then clearing it. Diagnostics attached, but I had already resolved the issue before thinking that I should have taken a snapshot. I don't know what kind of issue would have arisen if the OS tried to write to a non-existent array drive specified in the share settings. nas-diagnostics-20190812-0759.zip
  14. ["parity"] idx="0"
      ["disk1"] idx="1"
      ["disk2"] idx="2"
      ["disk3"] idx="3"
      ["disk4"] idx="4"
      ["disk5"] idx="5"
      ["disk6"] idx="6"
      ["disk7"] idx="7"
      ["disk8"] idx="8"
      ["disk9"] idx="9"
      ["parity2"] idx="29"
      ["cache"] idx="30"
      ["cache2"] idx="31"
      ["flash"] idx="54"
  15. How about as neither? I have an NVMe drive mounted by UD for a VM. EDIT: I see the first two are only for unRAID drives, but the last is still an option.
  16. @Pauven - You were wondering whether having a saturated controller would affect the results if drives were moved around to maximize bandwidth. It looks like it did. The file from the 9th is with my PCIe 2 SAS controller saturated; the file from the 10th is after I moved two drives to the motherboard controller. utt.zip
  17. H/W path              Device   Class    Description
      ================================================================
      /0/100/1.1/0.1                 storage  X399 Series Chipset SATA Controller
      /0/100/1.1/0.2/4/0    scsi11   storage  SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon]
      /0/100/1.2/0                   storage  NVMe SSD Controller SM961/PM961
      /0/100/8.1/0.2                 storage  FCH SATA Controller [AHCI mode]
      /0/117/1.2/0                   storage  WD Black NVMe SSD
      /0/117/8.1/0.2                 storage  FCH SATA Controller [AHCI mode]
      /0/1                  scsi0    storage
      /0/2                  scsi4    storage
      /0/3                  scsi5    storage
      /0/4                  scsi6    storage
      /0/5                  scsi8    storage
      /0/6                  scsi1    storage
      /0/7                  scsi2    storage
      /0/8                  scsi3    storage
  18. Adding the --no-nvme option excluded the NVMe drives from the output: lsscsi -H --no-nvme
  19. Ran 4.0 through a short test and it reported the following, probably related to the issue I posted previously. It also reported the same controller three times at the end. Completed: 0 Hrs 2 Min 34 Sec.
      ./unraid6x-tunables-tester.sh: line 200: scsiN:0[scsibus]: syntax error: invalid arithmetic operator (error token is "[scsibus]")
      ./unraid6x-tunables-tester.sh: line 201: scsiN:0[driver]: syntax error: invalid arithmetic operator (error token is "[driver]")
      ./unraid6x-tunables-tester.sh: line 202: scsiN:0[name]: syntax error: invalid arithmetic operator (error token is "[name]")
      ./unraid6x-tunables-tester.sh: line 204: ${#scsiN:0[@]}: bad substitution
      ./unraid6x-tunables-tester.sh: line 209: scsiN:0[@]: syntax error: invalid arithmetic operator (error token is "[@]")
      ./unraid6x-tunables-tester.sh: line 200: scsiN:1[scsibus]: syntax error: invalid arithmetic operator (error token is "[scsibus]")
      ./unraid6x-tunables-tester.sh: line 201: scsiN:1[driver]: syntax error: invalid arithmetic operator (error token is "[driver]")
      ./unraid6x-tunables-tester.sh: line 202: scsiN:1[name]: syntax error: invalid arithmetic operator (error token is "[name]")
      ./unraid6x-tunables-tester.sh: line 204: ${#scsiN:1[@]}: bad substitution
      ./unraid6x-tunables-tester.sh: line 209: scsiN:1[@]: syntax error: invalid arithmetic operator (error token is "[@]")
      ShortSyncTestReport_2019_08_09_1101.zip
  20. Two, it looks like.
      [N:0]  /dev/nvme0  Samsung SSD 960 EVO 500GB  S3X4NB0K309****  3B7QCXE7
      [N:1]  /dev/nvme1  WDC WDS256G1X0C-00ENX0     17501442****     B35900WD
      (last 4 of SN removed by me)
  21. I tried running version 4.0 against my system and I noticed some errors displayed before the screen cleared and it asked me if I wanted to continue (a guess at the cause is sketched after this list):
      Querying lsscsi for the SCSI Hosts
      ./unraid6x-tunables-tester.sh: line 127: declare: `scsiN:0': not a valid identifier
      ./unraid6x-tunables-tester.sh: line 128: scsiN:0[scsibus]=N:0: command not found
      ./unraid6x-tunables-tester.sh: line 130: scsiN:0[driver]=/dev/nvme0: No such file or directory
      ./unraid6x-tunables-tester.sh: line 127: declare: `scsiN:1': not a valid identifier
      ./unraid6x-tunables-tester.sh: line 128: scsiN:1[scsibus]=N:1: command not found
      ./unraid6x-tunables-tester.sh: line 130: scsiN:1[driver]=/dev/nvme1: No such file or directory
      Querying lshw for the SCSI Host Names, please wait (may take several minutes)
      nas-diagnostics-20190809-0744.zip
  22. Whoops, it was a 1.86GHz CPU. Eh, still low powered. 5x4TB drives (4 data, 1 parity). XFS I believe.
  23. I built a storage-only (with some plugins) unraid server using a 1.2GHz Atom CPU and it handled it with ease.
  24. I've got my drive cleared off and ready to be moved to a different controller and ready to test. 👍
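
A minimal sketch for posts 4 and 16 above: a back-of-the-envelope check of when a controller runs out of bandwidth. The drive count mirrors post 4, but the per-drive throughput, lane count, and efficiency figures are assumptions for illustration, not measurements from the posts.

    #!/bin/bash
    # Rough saturation check: compare aggregate drive demand against the
    # usable bandwidth of the HBA's PCIe link. All figures are assumptions.
    DRIVES=8        # drives on the controller (as in post 4)
    DRIVE_MBS=220   # assumed max sequential read per drive, MB/s
    LANES=4         # assumed electrical PCIe 2.0 lanes feeding the HBA
    LANE_MBS=500    # PCIe 2.0 theoretical throughput per lane, MB/s
    EFFICIENCY=80   # assume roughly 80% of theoretical is usable

    demand=$(( DRIVES * DRIVE_MBS ))
    usable=$(( LANES * LANE_MBS * EFFICIENCY / 100 ))
    echo "Aggregate drive demand:      ${demand} MB/s"
    echo "Usable controller bandwidth: ${usable} MB/s"
    if (( demand > usable )); then
        echo "Saturated: each drive limited to ~$(( usable / DRIVES )) MB/s"
    else
        echo "Headroom: ~$(( usable - demand )) MB/s to spare"
    fi

With those assumed numbers, eight drives ask for about 1760 MB/s against roughly 1600 MB/s usable, which would match the symptom of having to move two drives off the controller before every drive could run at full speed.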
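
A minimal sketch for post 12 above, assuming a stock Linux sysfs layout: each sd? block device can be resolved to a sysfs path that embeds the controller's PCI address and the SCSI host/port numbers, which is one way to confirm the port-order assignment described there.

    #!/bin/bash
    # Print each sd? device alongside its resolved sysfs device path; the
    # path contains the controller's PCI address and host/port numbers.
    for dev in /sys/block/sd*; do
        name=$(basename "$dev")
        printf '%-4s -> %s\n' "$name" "$(readlink -f "$dev/device")"
    done

Grouping the output by the PCI address portion of the path and then sorting by port number should reproduce the sd? ordering described in the post.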
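
A guess at the cause of the errors in posts 19 and 21 above: lsscsi reports NVMe controllers as N:0 and N:1, and a colon is not legal inside a bash variable name, so building an associative array named scsiN:0 fails at the declare and again at every later [scsibus]/[driver] assignment and expansion. The snippet below only illustrates the failure mode and one possible workaround; the variable and key names echo the error messages but are not the tester script's actual code.

    #!/bin/bash
    # Illustration only: why `declare -A scsiN:0` fails and one way around it.
    host="N:0"                  # host id as printed by lsscsi for an NVMe controller
    safe="scsi${host//:/_}"     # scsiN_0, with the colon swapped for an underscore

    declare -A "$safe"          # now a valid identifier
    declare -n ref="$safe"      # nameref so we can assign through the generated name
    ref[scsibus]="$host"
    ref[driver]="/dev/nvme0"    # placeholder device node
    echo "${safe}: scsibus=${ref[scsibus]} driver=${ref[driver]}"

Any scheme that keeps colons out of the generated variable names, or that stores the hosts in a single associative array keyed by the raw N:0 string, should avoid both the declare failures and the later bad-substitution errors.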