jbartlett

Community Developer
Everything posted by jbartlett

  1. Beta 2b pushed
     - Change Vendor to Mushkin if the model starts with "MKN" - Mushkin drives identify as Toshiba
     - Change Vendor to Crucial if the model starts with "MTFD" - drive Vendor not given
     - Change Vendor to OCZ if the vendor starts with "OCZ-"
     - Change Vendor to "Samsung" if the model string starts with "Samsung" - some drives report as "Western Digital"
     - Added Model RegEx "CT[0-9]{3,}(B|M)[A-Z]?[0-9]{2,}SSD" to identify Crucial drives
     - Added Mushkin model cleanup
     - Added Plextor model cleanup
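For illustration, the vendor-cleanup rules above could be sketched like this (the function name and model strings are hypothetical; the actual app is written in CFML, not Python):

```python
import re

# Model pattern from the changelog that identifies Crucial drives
CRUCIAL_MODEL_RE = re.compile(r"CT[0-9]{3,}(B|M)[A-Z]?[0-9]{2,}SSD")

def clean_vendor(vendor: str, model: str) -> str:
    """Return a corrected vendor name for drives that misreport themselves."""
    if model.startswith("MKN"):
        return "Mushkin"    # Mushkin drives identify as Toshiba
    if model.startswith("MTFD"):
        return "Crucial"    # drive vendor not given
    if vendor.startswith("OCZ-"):
        return "OCZ"
    if model.startswith("Samsung"):
        return "Samsung"    # some drives report as Western Digital
    if CRUCIAL_MODEL_RE.search(model):
        return "Crucial"    # Crucial "CTxxx...SSD" model numbers
    return vendor           # no rule matched; trust the reported vendor
```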
  2. All of the images came from the same source, though some vendors were nice enough to have presentation media to use.
  3. That seems to be the case for all Mushkin drives. I added logic to beta 2a (pushed) to change the Vendor to Mushkin if the model starts with MKN.
  4. Correct. I have 16 drive models identified that aren't in my database.
  5. When I tested this utility against my production server, I noticed that I had a drive going wonky on me. I couldn't even test it at first because it kept tripping the SpeedGap detection, in which the gap between the minimum & maximum speeds over 15 seconds is too great - a sign of disk activity or, in my case, a drive giving very inconsistent read speeds. I had to add logic to disable the SpeedGap detection to even be able to fully test the drive. In this case, the drives are all the same make & revision and their curves should be nearly identical, but Disk 5 stands out. Viewing Drive 5 by itself, I can see its curve is not normal. Spinners should have a steady decline over the entire range of the drive. I'll be retiring this drive from my main server and using it for platter heat map tests, where the entire drive is read and a heatmap of the read speeds is given.
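A rough sketch of the SpeedGap idea described above (function names, the threshold, and the retry step are illustrative assumptions; the real detection is internal to DiskSpeed):

```python
def speed_gap_tripped(samples_mbps, threshold_mbps=20.0):
    """Return True if the min/max spread of per-second read speeds
    over a sampling window is too wide - a sign of disk activity
    or a drive giving inconsistent read speeds."""
    return max(samples_mbps) - min(samples_mbps) > threshold_mbps

def benchmark_with_retries(read_window, threshold_mbps=20.0,
                           step_mbps=10.0, max_tries=5):
    """Re-run the 15-second read window with an increasing threshold,
    to accommodate drives with bad areas."""
    for _ in range(max_tries):
        samples = read_window()        # 15 per-second speed samples
        if not speed_gap_tripped(samples, threshold_mbps):
            return samples
        threshold_mbps += step_mbps    # loosen the gap and retest
    return samples                     # give up, return the last run
```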
  6. This Docker application will let you view your storage controllers & the drives attached to them and perform benchmarks on both. Controller benchmarks help identify whether the drives attached to a controller could exceed its capacity if they were all being fully read from at the same time (such as during a Parity check). Drive benchmarks let you monitor performance over time to look for degradation or unexpected slow areas, while also getting a clean SMART report.

Installation

Via the Community Applications plugin: search for "DiskSpeed".

Manual installation (the Community Applications plugin is having issues currently; here's a workaround for now):
1. Save the attached "my-DiskSpeed.xml" file to your NAS under \\tower\flash\config\plugins\dockerMan\templates-user
2. View the Docker tab in your unRAID Administrator and click on "Add Container"
3. Under "Select a template", pick "my-DiskSpeed"

The defaults should work as-is unless you have port 18888 already in use. If so, change the Web Port & WebUI settings to a new port number. The Docker will create a directory called "DiskSpeed" in your appdata directory to hold persistent data.

Note: Privileged mode is required so that the application can see the controllers & drives on the host OS. This docker will use up to 512MB of RAM. RAM optimization will happen in a later BETA.

Running

View the Docker tab in your unRAID Administrator, click on the icon next to "DiskSpeed", and select WebUI.

Drive Images

As of December 2022, the Hard Drive Database (HDDB) has 3,000+ drive models across 70+ brands. If you have one or more drives that do not have a predefined image in the HDDB, you have a couple of options available: wait for me to add the image, which will be displayed after you click "Rescan Controllers", or add the image yourself by editing the drive and uploading a drive image for it.
You can view drive images in the HDDB to see if there's an image that'll fit your drive and optionally upload it so others can benefit.

Controller & Drive Identification Issues

Some drives, notably SSDs, do not report the Vendor correctly or at all. If you view the drive information and it has the same value for the vendor as the model, or an incorrect or missing Vendor, please inform me so that I can manually add the drive to the database or add code to handle it. If you have a controller that is not detected, please notify me.

Benchmarking Drives

Disk drives with platters are benchmarked by reading the drive at certain percentages for 15 seconds and averaging the speed for each second, except for the first 2 seconds, which tend to trend high. Since drives can be accessed while testing, if the min/max read speed spread exceeds a threshold, the test is re-performed with an increasing threshold to account for drives with bad areas.

Solid state drives are benchmarked by writing large files to the device and then reading them back. In order to benchmark SSDs, they must be mounted in unRAID and a mapping configured in the DiskSpeed Docker settings. You must restart the DiskSpeed app after mounting a device for it to be detected. For other Docker installations, an example is -v '/mnt':'/mnt/Host':'rw' if you have all your SSDs mounted under /mnt. You may need more than one volume parameter if they are mounted in different areas.

Contributing to the Hard Drive Database

If you have a drive that doesn't have information in the Hard Drive Database other than the model, or you've performed benchmark tests, a button will be displayed at the bottom of the page labeled "Upload Drive & Benchmark Data to the Hard Drive Database". The HDDB will display the information given up by the OS for the drives and the average speed graphs for comparison.

Application Errors

If you get an error message, please post the error here along with the steps you took to cause it to happen.
There will be a long string of Java diagnostics after the error message (the Java stack) that you do not need to include; just include the error message details. If you can't get past the Scanning Hardware screen, change the URL from http://[ip]:[port]/ScanControllers.cfm to http://[ip]:[port]/isolated/CreateDebugInfo.cfm and hit enter.

Note: The unRAID diagnostic file doesn't provide any help. If submitting a diagnostic file, please use the link at the bottom of the controllers in the DiskSpeed GUI.

Home Screen (click the top label to return to this screen)

Controller Information

Drive Information

While the system cache is bypassed when benchmarking, some devices have a built-in cache that ignores cache-bypass commands. An initial high write speed that quickly levels out is a sign of such a cache, as shown below.

Drive Editor

my-DiskSpeed.xml
  7. That CS380 is exactly what I've been looking for in my Ryzen server upgrade! SWEET! It's been hard finding a case that has hot-swap bays with gaps between the drives for efficient cooling without requiring high-speed/noisy fans. I also paired it with two IcyDock 3.5" in 5.25" hot-swaps that include a fan blowing on the bottom of the drive. https://www.newegg.com/Product/Product.aspx?Item=9SIA4M52YX8129 Looking forward to this releasing: https://icydock.com/goods.php?id=269
  8. I'd take note of how many fast drives/SSDs you have on your controllers - some motherboards have more than one SATA controller. Since you're alpha testing my DiskSpeed docker app, run a controller benchmark to see if you're maxing out a controller and adjust accordingly if so.
  9. Yup. I use the INI files to determine the unRAID slot ID and to get the Registration information to uniquely identify the user if they submit drive images (in case of spam/really not drive images). If the mapping for the INI directory location is removed (such as running under a different OS), it just displays the OS drive ID and comes up with a different way of uniquely identifying the user. In a future update, it'll use the unRAID method of spinning up drives if running under unRAID instead of issuing the OS spin up command & reading from three random sectors.
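A hedged sketch of that INI lookup: unRAID keeps per-disk state in INI files under /var/local/emhttp, but the exact path, file name, and key names below are assumptions for illustration only. As described above, if the mapping isn't present (e.g. running under a different OS), the app falls back to the OS drive ID:

```python
import configparser
from pathlib import Path

def slot_for_device(device_name, ini_path="/var/local/emhttp/disks.ini"):
    """Map an OS device name (e.g. 'sdb') to its unRAID slot name,
    or None if the INI mapping isn't available (non-unRAID host)."""
    path = Path(ini_path)
    if not path.exists():
        return None  # running under a different OS; fall back to OS drive ID
    parser = configparser.ConfigParser()
    parser.read(path)
    for slot in parser.sections():
        # section names and values are quoted in the INI file
        if parser[slot].get("device", "").strip('"') == device_name:
            return slot.strip('"')
    return None
```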
  10. You don’t even need unRAID (in theory). As long as Tom doesn’t change how the ini files are laid out, it’ll work. And if he does, the only thing that will happen is that it won’t tell you the Drive slot it’s in. But yes, I’m developing against the most recent RC.
  11. Status Update: I'm about ready for open Alpha testing. Drive benchmark testing with pre-alpha team is happening now - will scan one drive per controller at the same time. Working on support to add multiple drives per controller but didn't want that to keep delaying the release.
  12. From my experience, the UNRAID forums are by far one of the better experiences you'll find in a product support forum, especially if you start digging into the nitty-gritty bits of creating scripts/VM/Docker stuff. Developers tend to support other developers here because we all benefit from it. That's also true on the standard user front, though probably to a slightly lesser degree. There's a bit of passion given to UNRAID, as it has let us store data that is important to us and keep that data secure. I've seen people told to search for answers to frequently asked questions, but typically that's because it's an easy-to-find issue with more detail than the person had at the time.
  13. Interesting. The only other reason I can think of is somehow the drive's bandwidth is being capped but the drive can still exceed it from end to end. This mapping logic doesn't make any sense.
  14. Edit the syslinux.cfg file and add pci=nommconf to the kernel append line: append initrd=/bzroot pci=nommconf
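For reference, a stock unRAID boot stanza with the option added would look roughly like this (menu entries vary by unRAID version; only the pci=nommconf addition is the change being described):

```
label unRAID OS
  menu default
  kernel /bzimage
  append initrd=/bzroot pci=nommconf
```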
  15. I'm getting a ton of the following errors on my GIGABYTE X399 Designare EX whenever I write to the array or cache drives, though no data corruption seems to be evident. Based on my research, enabling a kernel PCI option can change how the PCIe bus is accessed and resolve/suppress these errors. So my question is - how do I toggle kernel options?

Mar 11 03:25:41 NASDev kernel: pcieport 0000:00:01.1: [ 6] Bad TLP
Mar 11 03:26:31 NASDev kernel: pcieport 0000:00:01.1: AER: Corrected error received: id=0000
Mar 11 03:26:31 NASDev kernel: pcieport 0000:00:01.1: PCIe Bus Error: severity=Corrected, type=Data Link Layer, id=0009(Receiver ID)
Mar 11 03:26:31 NASDev kernel: pcieport 0000:00:01.1: device [1022:1453] error status/mask=00000040/00006000
Mar 11 03:26:31 NASDev kernel: pcieport 0000:00:01.1: [ 6] Bad TLP
Mar 11 03:26:51 NASDev kernel: pcieport 0000:00:01.1: AER: Corrected error received: id=0000
Mar 11 03:26:51 NASDev kernel: pcieport 0000:00:01.1: PCIe Bus Error: severity=Corrected, type=Data Link Layer, id=0009(Receiver ID)
Mar 11 03:26:51 NASDev kernel: pcieport 0000:00:01.1: device [1022:1453] error status/mask=00000080/00006000
Mar 11 03:26:51 NASDev kernel: pcieport 0000:00:01.1: [ 7] Bad DLLP
Mar 11 03:27:18 NASDev kernel: pcieport 0000:00:01.1: AER: Corrected error received: id=0000
Mar 11 03:27:18 NASDev kernel: pcieport 0000:00:01.1: PCIe Bus Error: severity=Corrected, type=Data Link Layer, id=0009(Receiver ID)
Mar 11 03:27:18 NASDev kernel: pcieport 0000:00:01.1: device [1022:1453] error status/mask=00000080/00006000
Mar 11 03:27:18 NASDev kernel: pcieport 0000:00:01.1: [ 7] Bad DLLP
Mar 11 03:28:38 NASDev kernel: pcieport 0000:00:01.1: AER: Multiple Corrected error received: id=0000
Mar 11 03:28:38 NASDev kernel: pcieport 0000:00:01.1: PCIe Bus Error: severity=Corrected, type=Data Link Layer, id=0009(Receiver ID)
Mar 11 03:28:38 NASDev kernel: pcieport 0000:00:01.1: device [1022:1453] error status/mask=00000040/00006000
nasdev-diagnostics-20180311-0149.zip
  16. Motherboard: GIGABYTE X399 Designare EX

Installed a GeForce GTX 960 into PCIe slot #1 and an el-cheapo GeForce GT 610 into PCIe slot #4, and configured the BIOS to use the 610 as the primary - though setting it to primary only worked with the 610 in slot 4.

IOMMU group 0:
  [1022:1452] 00:01.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe Dummy Host Bridge
IOMMU group 1:
  [1022:1453] 00:01.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe GPP Bridge
IOMMU group 2:
  [1022:1452] 00:02.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe Dummy Host Bridge
IOMMU group 3:
  [1022:1452] 00:03.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe Dummy Host Bridge
IOMMU group 4:
  [1022:1453] 00:03.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe GPP Bridge
IOMMU group 5:
  [1022:1452] 00:04.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe Dummy Host Bridge
IOMMU group 6:
  [1022:1452] 00:07.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe Dummy Host Bridge
  [1022:1454] 00:07.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Internal PCIe GPP Bridge 0 to Bus B
  [1022:145a] 09:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Device 145a
  [1022:1456] 09:00.2 Encryption controller: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Platform Security Processor
  [1022:145c] 09:00.3 USB controller: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) USB 3.0 Host Controller
IOMMU group 7:
  [1022:1452] 00:08.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe Dummy Host Bridge
  [1022:1454] 00:08.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Internal PCIe GPP Bridge 0 to Bus B
  [1022:1455] 0a:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Device 1455
  [1022:7901] 0a:00.2 SATA controller: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] (rev 51)
  [1022:1457] 0a:00.3 Audio device: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) HD Audio Controller
IOMMU group 8:
  [1022:790b] 00:14.0 SMBus: Advanced Micro Devices, Inc. [AMD] FCH SMBus Controller (rev 59)
  [1022:790e] 00:14.3 ISA bridge: Advanced Micro Devices, Inc. [AMD] FCH LPC Bridge (rev 51)
IOMMU group 9:
  [1022:1460] 00:18.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 0
  [1022:1461] 00:18.1 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 1
  [1022:1462] 00:18.2 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 2
  [1022:1463] 00:18.3 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 3
  [1022:1464] 00:18.4 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 4
  [1022:1465] 00:18.5 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 5
  [1022:1466] 00:18.6 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric Device 18h Function 6
  [1022:1467] 00:18.7 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 7
IOMMU group 10:
  [1022:1460] 00:19.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 0
  [1022:1461] 00:19.1 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 1
  [1022:1462] 00:19.2 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 2
  [1022:1463] 00:19.3 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 3
  [1022:1464] 00:19.4 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 4
  [1022:1465] 00:19.5 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 5
  [1022:1466] 00:19.6 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric Device 18h Function 6
  [1022:1467] 00:19.7 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Data Fabric: Device 18h; Function 7
IOMMU group 11:
  [1022:43ba] 01:00.0 USB controller: Advanced Micro Devices, Inc. [AMD] X399 Series Chipset USB 3.1 xHCI Controller (rev 02)
  [1022:43b6] 01:00.1 SATA controller: Advanced Micro Devices, Inc. [AMD] X399 Series Chipset SATA Controller (rev 02)
  [1022:43b1] 01:00.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] X399 Series Chipset PCIe Bridge (rev 02)
  [1022:43b4] 02:00.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] 300 Series Chipset PCIe Port (rev 02)
  [1022:43b4] 02:01.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] 300 Series Chipset PCIe Port (rev 02)
  [1022:43b4] 02:02.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] 300 Series Chipset PCIe Port (rev 02)
  [1022:43b4] 02:03.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] 300 Series Chipset PCIe Port (rev 02)
  [1022:43b4] 02:04.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] 300 Series Chipset PCIe Port (rev 02)
  [8086:1539] 04:00.0 Ethernet controller: Intel Corporation I211 Gigabit Network Connection (rev 03)
  [8086:24fd] 05:00.0 Network controller: Intel Corporation Wireless 8265 / 8275 (rev 78)
  [8086:1539] 06:00.0 Ethernet controller: Intel Corporation I211 Gigabit Network Connection (rev 03)
IOMMU group 12:
  [10de:104a] 08:00.0 VGA compatible controller: NVIDIA Corporation GF119 [GeForce GT 610] (rev a1)
  [10de:0e08] 08:00.1 Audio device: NVIDIA Corporation GF119 HDMI Audio Controller (rev a1)
IOMMU group 13:
  [1022:1452] 40:01.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe Dummy Host Bridge
IOMMU group 14:
  [1022:1452] 40:02.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe Dummy Host Bridge
IOMMU group 15:
  [1022:1452] 40:03.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe Dummy Host Bridge
IOMMU group 16:
  [1022:1453] 40:03.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe GPP Bridge
IOMMU group 17:
  [1022:1452] 40:04.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe Dummy Host Bridge
IOMMU group 18:
  [1022:1452] 40:07.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe Dummy Host Bridge
  [1022:1454] 40:07.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Internal PCIe GPP Bridge 0 to Bus B
  [1022:145a] 42:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Device 145a
  [1022:1456] 42:00.2 Encryption controller: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Platform Security Processor
  [1022:145c] 42:00.3 USB controller: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) USB 3.0 Host Controller
IOMMU group 19:
  [1022:1452] 40:08.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe Dummy Host Bridge
  [1022:1454] 40:08.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) Internal PCIe GPP Bridge 0 to Bus B
  [1022:1455] 43:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Device 1455
  [1022:7901] 43:00.2 SATA controller: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] (rev 51)
IOMMU group 20:
  [10de:1401] 41:00.0 VGA compatible controller: NVIDIA Corporation GM206 [GeForce GTX 960] (rev a1)
  [10de:0fba] 41:00.1 Audio device: NVIDIA Corporation Device 0fba (rev a1)

USB Devices
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 001 Device 002: ID 8087:0a2b Intel Corp.
Bus 001 Device 003: ID 048d:8295 Integrated Technology Express, Inc.
Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 003 Device 002: ID 154b:007a PNY Classic Attache Flash Drive
Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 005 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 005 Device 002: ID 045e:0745 Microsoft Corp. Nano Transceiver v1.0 for Bluetooth
Bus 006 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub

CPU Pairings
Pair 1: cpu 0 / cpu 16
Pair 2: cpu 1 / cpu 17
Pair 3: cpu 2 / cpu 18
Pair 4: cpu 3 / cpu 19
Pair 5: cpu 4 / cpu 20
Pair 6: cpu 5 / cpu 21
Pair 7: cpu 6 / cpu 22
Pair 8: cpu 7 / cpu 23
Pair 9: cpu 8 / cpu 24
Pair 10: cpu 9 / cpu 25
Pair 11: cpu 10 / cpu 26
Pair 12: cpu 11 / cpu 27
Pair 13: cpu 12 / cpu 28
Pair 14: cpu 13 / cpu 29
Pair 15: cpu 14 / cpu 30
Pair 16: cpu 15 / cpu 31
  17. Depends on what you're doing with the 6700K.
  18. unRAID: Cheaper than Qnap
unRAID: One OS to rule them all, one OS to find them, One OS to bring them all and in the 50TB bind them.
unRAID: More fun than a JBOD array with a failed drive
unRAID: Turning PC's into NAS devices since 2005
  19. Just pulled the trigger on a Ryzen 1950X + Gigabyte X399 Designare EX. I'll post stats when it posts. I don't know whether to cackle madly at upgrading from a 5930K or to pass a brick at the price.
  20. It's on the far off To-Do list along with a heat map of each platter.
  21. I've got the code ported to Docker and reworked a lot of the controller & drive detection & optimization as my knowledge of such things increased. The foundation is set for finally adding the drive benchmarking now that the controller & drive optimization is done. I'm coding it to support testing multiple drives on the same controller at the same time, after testing how many drives the controller can actually handle at once (exceeding bandwidth/etc). So if you have two controllers with 4 drives attached to each, and bandwidth is not maxed out on either controller or the system bus, you'll be able to run a benchmark on all 8 drives at once. If the controller bandwidth is maxed out when reading all drives at once (such as when loaded up with SSDs), it'll test no more than x drives at once, testing additional drives as other drives on the controller finish. All done via an auto-generated bash script. That's gonna be fun to develop. haha So what I need now is people to do a sanity alpha test with controller & drive detection & testing before I go public alpha. Send me a PM if you are interested.
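The per-controller scheduling described above could be sketched like this (an illustration only - the real implementation generates a bash script, and the drive names and per-controller cap here are made up):

```python
def schedule_batches(drives_by_controller, max_concurrent_per_controller):
    """Group drives into benchmark rounds so that no controller tests
    more than its cap at once; different controllers run in parallel
    within a round."""
    rounds = []
    pending = {c: list(ds) for c, ds in drives_by_controller.items()}
    while any(pending.values()):
        batch = []
        for controller, drives in pending.items():
            take = drives[:max_concurrent_per_controller]
            pending[controller] = drives[len(take):]
            batch.extend(take)
        rounds.append(batch)
    return rounds
```

With a cap of 2 per controller, a controller holding three drives needs two rounds, while a second controller's lone drive rides along in round one.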
  22. I can tell you with that many drives, you'll want to run dual parity or break it up into two NAS units. Maybe use a cube case that supports dual motherboards and create two NAS servers in one? Tom has spoken up in the past about the future possibility of having a master/slave unRAID setup where multiple unRAID servers would look like one on the Network.
  23. I've been pondering the same thing for my annual bonus with a balls-to-the-wall multi-CPU powerhouse, though not as crazy on the drives as you. I'm pondering the balance between core speed vs core count - the primary use would be to re-encode my huge Plex library to h265 from lossless MKVs, and Plex encoding is leading me towards core count.