jbartlett Posted May 20 Author Share Posted May 20 (edited) 15 hours ago, spazoid said: I tried force updating the container, but it did not resolve it. Any ideas? Please update the app, it should reflect 2.10.9.2. This issue should be fixed. If you encounter any further errors, I just need the line number block, don't need the stack trace. Edited May 20 by jbartlett Quote Link to comment
spazoid Posted May 21 Share Posted May 21 11 hours ago, jbartlett said: Please update the app, it should reflect 2.10.9.2. This issue should be fixed. And so it is, thanks a bunch! 1 Quote Link to comment
tower defense Posted May 26 Share Posted May 26 On 4/4/2024 at 11:21 AM, jbartlett said: Can you email/dm me a new debug file? I've enhanced the logging since the previous one to include more detail for me to look at. Emailing you another log, the last update did not fix the issue Quote Link to comment
PST Posted June 17 Share Posted June 17 Hi guys, I'm quite new to Unraid and installed the DiskSpeed app, but for whatever reason I am unable to run the benchmarks. Unable to benchmark for the following reason * Docker volume mount not detected Any help would be appreciated. Thanks Quote Link to comment
jbartlett Posted June 30 Author Share Posted June 30 On 6/17/2024 at 2:00 AM, PST said: Hi guys, I'm quite new to Unraid and installed the DiskSpeed app, but for whatever reason I am unable to run the benchmarks. Unable to benchmark for the following reason * Docker volume mount not detected Any help would be appreciated. Thanks There's a FAQ link on the Benchmark page that explains how to set it up. Quote Link to comment
Nebur692 Posted July 15 Share Posted July 15 I have this error on the cache disk and I have space to spare: Quote Link to comment
jbartlett Posted July 16 Author Share Posted July 16 (edited) On 7/15/2024 at 8:49 AM, Nebur692 said: I have this error on the cache disk and I have space to spare: Can you open a shell prompt inside the Docker container and then enter: df -B 1KB | grep cache Reply with a screenshot of the result Edited July 16 by jbartlett Quote Link to comment
pyrosrockthisworld Posted July 31 Share Posted July 31 Just doing some testing and got a weird one where the controller on the motherboard shows no drives attached. Disk 1 is plugged into the motherboard and gets benchmarked, but is not listed. Also, when trying to upload drive and benchmark data via the button at the bottom, I just get an error Quote Link to comment
enJOyIT Posted August 8 Share Posted August 8 (edited) What could be the reason that the single drive speed is at about 115 MB/s, but when it runs all drives at the same time, each drive maxes out at about 250 MB/s? It doesn't make any sense to me. I already reran the test and restarted the docker. There is only one drive which does >200 MB/s at single drive speed.

Controller: Fusion-MPT 24GSAS/PCIe SAS40xx/41xx Broadcom / LSI RAID bus controller
Type: Add-on Card in PCIe Slot SLOT4 PCIe 5.0 X8 (x8 PCI Express 5 x8)
Current & Maximum Link Speed: 16GT/s width x8 (15.75 GB/s max throughput)
Port 1: sdaa 14TB Seagate ST14000NM001G Rev SN04 (Disk 6)
Port 2: sdab 14TB Seagate ST14000NM002G Rev 0 (Disk 14)
Port 3: sdc 1TB NVMe WDS100T3X0C-00SJ Rev 0 (docker)
Port 4: sdd 1TB NVMe WD_BLACK SN770 1 Rev 0 (docker2)
Port 5: sde 14TB Seagate ST14000NM001G Rev SN04 (Disk 1)
Port 6: sdf 10TB Seagate ST10000VN0004 Rev SC61 (Disk 2)
Port 7: sdg 10TB Seagate ST10000VN0004 Rev SC61 (Disk 3)
Port 8: sdh 10TB Seagate ST10000VN0004 Rev SC61 (Disk 4)
Port 9: sdj 10TB Seagate ST10000VN0004 Rev SC61 (Disk 5)
Port 10: sdk 12TB Seagate ST12000NM001G Rev SN04 (Disk 13)
Port 11: sdl 14TB Western Digital WUH721414ALE6L4 Rev LDGNW07G (Disk 7)
Port 12: sdm 14TB Seagate ST14000NM001G Rev SN04 (Disk 8)
Port 13: sdn 14TB Western Digital WUH721414ALE6L4 Rev LDGNW240 (Disk 9)
Port 14: sdo 12TB Seagate ST12000NM0008 Rev SN04 (Disk 10)
Port 15: sdq 12TB Western Digital WD120EFAX Rev 81.00A81 (Disk 11)
Port 16: sdr 18TB Seagate ST18000NM000J Rev SN02
Port 17: sds 18TB Seagate ST18000NM000J Rev SN04
Port 18: sdt 18TB Seagate ST18000NM000J Rev SN01 (Disk 12)
Port 19: sdu 18TB Seagate ST18000NM000J Rev SN01 (Disk 15)
Port 20: sdv 18TB Seagate ST18000NM000J Rev SN02 (Parity)
Port 21: sdw 18TB Seagate ST18000NM000J Rev SN02 (Parity 2)
Port 22: sdx 1TB Western Digital WDS100T1R0A Rev 411010WR (Cache 3)
Port 23: sdy 1TB Western Digital WDS100T1R0A Rev 411000WR (Cache)
Port 24: sdz 1TB Western Digital WDS100T1R0A Rev 411000WR (Cache 2)

Quote: 24 drives reported a significantly slower single drive speed than all the drives reading at the same time. This is an abnormal test result. Please re-run this benchmark. If this result occurs again, restart the DiskSpeed docker app and try again.

What's wrong here? Running unraid 7.0 beta2

hdparm looks "OK"?

root@unraid:/mnt/disk3# hdparm -tT /dev/sdv
/dev/sdv:
 Timing cached reads: 41692 MB in 1.99 seconds = 20945.44 MB/sec
 Timing buffered disk reads: 580 MB in 3.00 seconds = 193.13 MB/sec
root@unraid:/mnt/disk3# hdparm -tT /dev/sdw
/dev/sdw:
 Timing cached reads: 41634 MB in 1.99 seconds = 20917.79 MB/sec
 Timing buffered disk reads: 562 MB in 3.00 seconds = 187.14 MB/sec
root@unraid:/mnt/disk3# hdparm -tT /dev/sdx
/dev/sdx:
 Timing cached reads: 40174 MB in 1.99 seconds = 20182.08 MB/sec
 Timing buffered disk reads: 604 MB in 3.00 seconds = 201.29 MB/sec
root@unraid:/mnt/disk3# hdparm -tT /dev/sdc
/dev/sdc:
 Timing cached reads: 39974 MB in 1.99 seconds = 20080.78 MB/sec
 Timing buffered disk reads: 3626 MB in 3.00 seconds = 1208.21 MB/sec
root@unraid:/mnt/disk3#

Edited August 8 by enJOyIT Quote Link to comment
matthewdavis Posted August 9 Share Posted August 9 Running unraid 6.12.11 Container version: 2.10.9.3 When I click on the "upload drive & benchmark data" I get a 500 error. An error showed up in the container log, unsure if it's directly related. WARNING: An illegal reflective access operation has occurred WARNING: Illegal reflective access by lucee.commons.lang.ClassUtil (jar:/opt/lucee/server/lucee-server/patches/6.0.3.1.lco) to constructor com.sun.org.apache.xerces.internal.jaxp.DocumentBuilderFactoryImpl() WARNING: Please consider reporting this to the maintainers of lucee.commons.lang.ClassUtil WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations WARNING: All illegal access operations will be denied in a future release Quote Link to comment
jbartlett Posted August 25 Author Share Posted August 25 On 8/8/2024 at 5:56 PM, matthewdavis said: Running unraid 6.12.11 Container version: 2.10.9.3 When I click on the "upload drive & benchmark data" I get a 500 error. An error showed up in the container log, unsure if it's directly related. … Are you still getting an error trying to upload the drive information? The container log is unrelated. Quote Link to comment
jbartlett Posted August 25 Author Share Posted August 25 On 8/8/2024 at 3:05 PM, enJOyIT said: What could be the reason that the single drive speed is at about 115 MB/s, but when it runs all drives at the same time, each drive maxes out at about 250 MB/s? … Click on the DiskSpeed icon in your Docker apps and select "Console". Then enter the following command, replacing xxx with the drive ID, and see if you get the expected speeds or the slower ones. Press CTRL-C to stop.

dd if=/dev/xxx of=/dev/null bs=1310720 skip=0 iflag=direct status=progress conv=noerror

Quote Link to comment
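For anyone wanting to reproduce the single-vs-concurrent comparison outside the app, the idea can be sketched as below. This is an illustration only, not DiskSpeed's code (the app itself runs on Lucee, and the dd command above is the authoritative test): the device paths are placeholders, the helper names are made up, and the reads here are plain buffered reads rather than dd's iflag=direct.

```python
import threading
import time

def read_speed(path, out, chunk=1310720, limit=100 * 1024 * 1024):
    """Read up to `limit` bytes from `path` sequentially and record MB/s in `out`."""
    start = time.monotonic()
    total = 0
    with open(path, "rb") as f:
        while total < limit:
            data = f.read(chunk)
            if not data:
                break
            total += len(data)
    elapsed = max(time.monotonic() - start, 1e-9)
    out[path] = total / elapsed / 1e6  # MB/s

def concurrent_read_speed(paths, **kwargs):
    """Kick off one reader thread per path so all devices are read at once."""
    results = {}
    threads = [threading.Thread(target=read_speed, args=(p, results), kwargs=kwargs)
               for p in paths]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

# Example against real devices (placeholder paths, run as root):
# print(concurrent_read_speed(["/dev/sdv", "/dev/sdw"], limit=1024**3))
```

Running it once with a single path and once with all paths gives the same single-drive vs all-drives comparison the benchmark performs.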
pyrosrockthisworld Posted August 28 Share Posted August 28 On 8/25/2024 at 2:08 PM, jbartlett said: Are you still getting an error trying to upload the drive information? The container log is unrelated. Still getting the 500 internal server error here 1 Quote Link to comment
jbartlett Posted August 30 Author Share Posted August 30 (edited) On 8/27/2024 at 7:51 PM, pyrosrockthisworld said: Still getting the 500 internal server error here Please try again. I believe it is resolved now. If you still get an error, please let me know what drive vendors you have (Seagate, Samsung, etc.) Edited August 30 by jbartlett Quote Link to comment
pyrosrockthisworld Posted August 30 Share Posted August 30 27 minutes ago, jbartlett said: Please try again. I believe it is resolved now. If you still get an error, please let me know what drive vendors you have (Seagate, Samsung, etc.) The information has been successfully uploaded. Thank you for your contributions! Working! 1 Quote Link to comment
johnsanc Posted September 5 Share Posted September 5 I don't know how I overlooked this app for so many years, but it's great! I used it to identify a few optimizations to my setup. I did have a couple of questions/comments though: I didn't see a way to test all controllers concurrently, or a way to test combinations of drives concurrently. Do you think this is feasible to implement? It would be very useful for triaging bottlenecks. I don't see the negotiated speed anywhere in the UI. I know all my disks are 6Gb/s, but some are negotiated at a current speed of 3Gb/s. I don't think some of the throughput calculations are correct. For example, I have a 9207-8e on an 8x connection and it says: "8GT/s width x8 (7.88 GB/s max throughput)". I also have a 9207-8i on a 4x PCIe connection and it says: "8GT/s width x4 (7.88 GB/s max throughput)". How can these both have the same max throughput? Thanks, and I also submitted a new disk to your database. I'm curious what other people get for this one since it appears to be a 24TB HAMR drive; got 'em from serverpartsdeals. Not impressed with the speed, but I do like the density. https://www.strangejourney.net/hddb/ModelDatabase.cfm?Vendor=Seagate&Model=ST24000NM000C Quote Link to comment
jbartlett Posted September 5 Author Share Posted September 5 (edited) 21 hours ago, johnsanc said: I didn't see a way to test all controllers concurrently, or a way to test combinations of drives concurrently. Do you think this is feasible to implement? I don't see the negotiated speed anywhere in the UI. I don't think some of the throughput calculations are correct. … 1. I can look into doing both: benchmarking all drive controllers at the same time, along with benchmarking selected drives across the entire system at the same time. 2. You can view the signal speed of a drive here. That data is as current as the last controller scan, so if you suspect a drive has dropped to a lower signaling speed, rescan the controllers and then view the drive in question again. 3. They can both have 7.88 GB/s if one is on PCIe 3 and the other on PCIe 4. Here's the chart I have coded in; the value displayed is a cross reference between PCIe version & connection width. The values were pulled off of some website ages ago, and it looks like I need to add an entry for PCIe 6. I guess adding the PCIe version would help, but you can reference the GT/s value against the transfer rate column to see which one it is. Edited September 5 by jbartlett Quote Link to comment
jbartlett Posted September 5 Author Share Posted September 5 @johnsanc - Actually, I just added the PCIe version to the display. If you refresh the app, it should reflect version 2.10.9.6, and viewing the controller page will display the PCIe version if the controller reveals the data. And WOW on those speeds on the ST24000NM000C at the high end. Goes to show you that capacity isn't all it's cracked up to be. Quote Link to comment
johnsanc Posted September 5 Share Posted September 5 (edited) @jbartlett - Thanks, I updated but I see basically the same thing: Current & Maximum Link Speed: 8GT/s (PCIe 3) width x8 (7.88 GB/s max throughput) Current & Maximum Link Speed: 8GT/s (PCIe 3) width x4 (7.88 GB/s max throughput) How can x4 and x8 on PCIe 3 have the same throughput? Not sure if it matters, but these are PCIe 4 slots; the cards in them are PCIe 3. My comment about the link speed was comparing what DiskSpeed says vs the Identity tab on a particular disk. I see DiskSpeed always says 6Gb/s, but the Identity tab also shows the "current" speed like this: Edited September 5 by johnsanc Quote Link to comment
jbartlett Posted September 6 Author Share Posted September 6 59 minutes ago, johnsanc said: Current & Maximum Link Speed: 8GT/s (PCIe 3) width x8 (7.88 GB/s max throughput) Current & Maximum Link Speed: 8GT/s (PCIe 3) width x4 (7.88 GB/s max throughput) Yup, that looks to be a bug! The x4 should be displaying 3.94 GB/s. 1 hour ago, johnsanc said: Not sure if it matters, but these are PCIe 4 slots, but the cards in them are PCIe 3 The values are pulled from the controller card, so it would always report PCIe 3 even in a higher slot. Though now I wonder if I can get the slot values.... Quote Link to comment
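For anyone following along, the arithmetic behind those numbers can be sketched in a few lines. This is an illustration only, not DiskSpeed's actual lookup chart: per-lane rate is the GT/s figure times the line-encoding efficiency (8b/10b through PCIe 2, 128b/130b from PCIe 3 onward), multiplied by the lane width.

```python
def pcie_throughput_gbs(gts, width):
    """Approximate max throughput in GB/s for a PCIe link of `width` lanes.

    `gts` is the per-lane transfer rate in GT/s (2.5, 5, 8, 16, 32...).
    PCIe 1-2 use 8b/10b encoding (80% efficiency); PCIe 3+ use 128b/130b.
    """
    efficiency = 0.8 if gts <= 5.0 else 128 / 130
    # GT/s * efficiency = usable Gbit/s per lane; divide by 8 for GB/s
    return gts * efficiency * width / 8

# PCIe 3 (8 GT/s): x8 vs x4 should differ by a factor of two
print(round(pcie_throughput_gbs(8, 8), 2))  # 7.88
print(round(pcie_throughput_gbs(8, 4), 2))  # 3.94
```

With this formula, a PCIe 3 x4 link works out to exactly half of x8, matching the 3.94 GB/s figure in the reply above, while PCIe 4 x4 (16 GT/s) lands back at 7.88 GB/s, which is why the same throughput can legitimately appear for different widths across generations.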
TreksterDK Posted September 6 Share Posted September 6 Just updated the docker to Version: 2.10.9.6, and now I get this error when I open the Web Interface: I did not change anything in the docker settings. Just updated. I can revert back to the previous version, but maybe you are unaware of this bug? Quote Link to comment
johnsanc Posted September 6 Share Posted September 6 Couple other suggestions: Disk numbers should probably be in ascending order everywhere in the UI (graphs, checkbox selections). Today it looks like it's all sorted as strings, which puts things out of order and makes things more difficult to find. It would be really nice to be able to filter the graph with all the drives by model number. I find myself having to cross reference disk numbers with models on the left to be able to filter what I need. This would also be nice for seeing relative performance between drive models. Quote Link to comment
jbartlett Posted September 6 Author Share Posted September 6 11 hours ago, TreksterDK said: Just updated the docker to Version: 2.10.9.6, and now I get this error when I open the Web Interface I added partial support for CD-ROMs (displaying them on the controller page), but it required a lot of checks to be added throughout the app. This particular issue doesn't happen for me on my dev & prod systems, so I'm not sure why it's happening for you, but I added the CD-ROM check there. Please update and try again. It will reflect version 2.10.9.7 1 Quote Link to comment
jbartlett Posted September 6 Author Share Posted September 6 4 hours ago, johnsanc said: Couple other suggestions: Disk numbers should probably be in ascending order everywhere in the UI (graphs, checkbox selections). Today it looks like it's all sorted as strings, which puts things out of order and makes things more difficult to find. It would be really nice to be able to filter the graph with all the drives by model number. … 1. It sorts by assigned drive label, putting the Parity first, then data drives, then cache drives. It probably puts pools after that; I haven't tested that scenario yet. Then it's by drive letter. The main page showing the drives assigned to a controller is in controller port number sequence, as is the controller info page. 2. Interesting idea. As of now, you can toggle the visibility of a graph item by clicking on the drive in the legend. Quote Link to comment
johnsanc Posted September 6 Share Posted September 6 14 minutes ago, jbartlett said: 1. It sorts by assigned drive label, putting the Parity first, then data drives, then cache drives. … 2. Interesting idea. As of now, you can toggle the visibility of a graph item by clicking on the drive in the legend. Sorry, I probably wasn't clear. A picture is always better: I can see it's ordered by Parity, then data, then pools. But the data disks are sorted as strings instead of by disk number. I think it should be Data 1, Data 2, Data 3, etc. instead of Data 1, Data 10, Data 11, etc. Quote Link to comment
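The behavior described here is the classic string-sort vs natural-sort problem. As an illustration only (this is not DiskSpeed's code, which runs on Lucee), the usual fix is a sort key that splits each label into text and number segments so the numeric parts compare as numbers:

```python
import re

def natural_key(label):
    """Split "Data 10" into ["data ", 10, ""] so numbers sort numerically."""
    return [int(part) if part.isdigit() else part.lower()
            for part in re.split(r"(\d+)", label)]

labels = ["Data 1", "Data 10", "Data 11", "Data 2", "Data 3"]
print(sorted(labels))                   # string order: Data 1, Data 10, Data 11, ...
print(sorted(labels, key=natural_key))  # natural order: Data 1, Data 2, Data 3, ...
```

Plain `sorted(labels)` reproduces the out-of-order listing in the screenshot description, while the keyed sort yields Data 1, Data 2, Data 3, Data 10, Data 11.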