jbartlett (Author) Posted April 21, 2018
12 hours ago, Zonediver said: May I ask which model these cams are?
The different camera views are all powered by the Logitech Brio set to 4K output and broadcasting at 1080p.
Zonediver Posted April 21, 2018
33 minutes ago, jbartlett said: The different camera views are all powered by the Logitech Brio set to 4K output and broadcasting at 1080p.
Thanks for this info, John.
jbartlett (Author) Posted April 22, 2018
On 4/21/2018 at 2:01 AM, Zonediver said: which model these cams are?
Logitech Brio.
DBKynd Posted April 22, 2018
This worked great for me. Thanks!
wgstarks Posted April 23, 2018
After updating to unRAID 6.5.1 I'm getting this display in the Docker tab where DiskSpeed should be:

Warning: DOMDocument::load(): Document is empty in /boot/config/plugins/dockerMan/templates-user/._my-DiskSpeed.xml, line: 1 in /usr/local/emhttp/plugins/dynamix.docker.manager/include/DockerClient.php on line 215
(the same warning is repeated three times)

I also got an email with the same warning.
brunnhilde-diagnostics-20180423-1813.zip
John_M Posted April 23, 2018
31 minutes ago, wgstarks said: After updating to unRAID 6.5.1 I'm getting this display in the Docker tab where DiskSpeed should be
What version did you upgrade from? The reason I ask is that I'm currently using 6.5.1-rc6 and I don't see the problem you report, so I'm wondering if the cause is in the diff between -rc6 and -final.
wgstarks Posted April 23, 2018
Just now, John_M said: What version did you upgrade from?
Upgraded from 6.5.0. Ran the update assistant tool first and it didn't report any issues.
Squid Posted April 23, 2018
1 hour ago, wgstarks said: After updating to unRAID 6.5.1 I'm getting this display in the Docker tab where DiskSpeed should be
Those are hidden metadata files created by your Mac. Safe to delete.
wgstarks Posted April 23, 2018
17 minutes ago, Squid said: Those are hidden metadata files created by your Mac. Safe to delete.
Thanks, Squid. Removed the hidden file, deleted the orphaned image and re-installed from CA. Everything looks good now. Maybe I need to modify my cleanup user script to delete these as well as the .DS_Store files?
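For anyone wanting to do the same cleanup, here is a minimal sketch of such a user script. The template path is an assumption taken from the warning above; verify it on your own system before deleting anything.

```shell
#!/bin/bash
# Sketch of a cleanup user script (path assumed, not verified).
clean_mac_junk() {
  # AppleDouble files start with "._"; Finder also leaves .DS_Store files.
  find "$1" \( -name '._*' -o -name '.DS_Store' \) -type f -delete
}

# On unRAID this would be run against the flash template folder, e.g.:
# clean_mac_junk /boot/config/plugins/dockerMan/templates-user

# Demo against a throwaway directory:
dir=$(mktemp -d)
touch "$dir/._my-DiskSpeed.xml" "$dir/.DS_Store" "$dir/my-DiskSpeed.xml"
clean_mac_junk "$dir"
ls "$dir"    # only my-DiskSpeed.xml remains
```

Restricting the `find` to `-type f` keeps it from matching directories that happen to start with `._`.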
Smitty2k1 Posted April 24, 2018
Guess it's time to replace my aging 2TB HDD.
Virtike Posted April 25, 2018
Works well for me on unRAID 6.4.0. Confirmed my suspicions that I should definitely not add those Seagate 5TB drives to the array.
jbartlett (Author) Posted April 25, 2018
41 minutes ago, Virtike said: Seagate 5TB
I've been shying away from all the Seagate DM drives except for NAS backup systems. They all seem to spin at 5,980 RPM and none of the drives I've tested has a smooth graph from start to end.
dikkiedirk Posted April 25, 2018
While there are 5 regular disks and one ARC-1200 RAID volume on my server, only 4 disks (disks 2, 3, 4 and the cache) get benched. I attach diagnostics.

[0:0:0:0] disk Generic- USB3.0 CRW -0 1.00 /dev/sda 3.90GB
[0:0:0:1] disk Generic- USB3.0 CRW -1 1.00 /dev/sdb -
[1:0:0:0] disk Areca ARC-1200-VOL R001 /dev/sdc 3.00TB
[2:0:0:0] disk ATA ST2000DM001-1CH1 CC43 /dev/sdd 2.00TB
[2:0:1:0] disk ATA Hitachi HDS72202 A20N /dev/sde 2.00TB
[3:0:0:0] disk ATA ST2000DM001-1CH1 CC43 /dev/sdf 2.00TB
[4:0:0:0] disk ATA ST9500325AS SDM1 /dev/sdg 500GB
[5:0:0:0] disk ATA Hitachi HUA72202 A3FD /dev/sdh 2.00TB

preclear-diagnostics-20180425-1355.zip
Edited April 25, 2018 by dikkiedirk
jbartlett (Author) Posted April 25, 2018
8 hours ago, dikkiedirk said: While there are 5 regular disks and one ARC 1200 RAID Volume on my server, only 4 disks, disk 2,3,4 and cache get benched.
If you mean that it didn't detect the drives, the next beta will partially resolve that.
Caldorian Posted April 28, 2018
On 2018-04-25 at 8:04 AM, dikkiedirk said: While there are 5 regular disks and one ARC 1200 RAID Volume on my server, only 4 disks, disk 2,3,4 and cache get benched.
Getting the same issue. 2 of my disks are being detected, but aren't getting benched. Where can I get you the diags/logs?
jbartlett (Author) Posted April 28, 2018
1 hour ago, Caldorian said: Getting the same issue. 2 of my disks are being detected, but aren't getting benched. Where can I get you the diags/logs?
Wait until the next beta and then click on the "Create Debug File" link at the bottom of the drives. I'm finalizing testing on it now. I'll post a change log when it's ready.
papnikol Posted April 28, 2018
Hi. I ran DiskSpeed for a controller that has 8 disks attached (attached file). The controller is an AOC-SASLP-MV8 (PCIe x4). Do those results indicate that the 4 PCIe lanes are saturated by the first 4 drives? Or could it be something else?
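As a back-of-the-envelope check on the saturation theory (the link width and per-lane throughput here are assumptions, not values taken from the diagnostics), the shared uplink divides among the drives roughly like this:

```shell
#!/bin/bash
# Rough per-drive read ceiling once a controller's PCIe uplink saturates.
# Assumes ~250 MB/s usable per PCIe 1.0 lane (after 8b/10b encoding
# overhead); a PCIe 2.0 lane would be roughly double that.
lanes=4
per_lane_mb=250
drives=8
total=$(( lanes * per_lane_mb ))
echo "uplink: ${total} MB/s, per drive: $(( total / drives )) MB/s"
# -> uplink: 1000 MB/s, per drive: 125 MB/s
```

With eight drives reading in parallel, ~125 MB/s each is below the outer-track speed of a typical modern 7200 RPM drive, so an uplink bottleneck on a PCIe 1.0 x4 link is at least plausible; whether the controller shares that ceiling evenly is a separate question.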
jbartlett (Author) Posted April 30, 2018
On 4/27/2018 at 11:56 PM, papnikol said: Do those results indicate that the 4x PCIe lanes are saturated by the 4 first drives? Or could it be something else?
That's fascinating. It does look like that, but I haven't seen any controller that doesn't load-share the reads like that. It should have kept rescanning with fewer drives until it got under 90% of the max throughput detected or ran out of drives. Here's a controller test on my test rig. It gives all the drives equal time on the first pass, which maxed out the bandwidth even though it meant capping the read speeds of each drive.
jbartlett (Author) Posted April 30, 2018
Beta 3A

NOTE: Ryzen Threadripper (X399 chipset) systems with IOMMU enabled will not run this version successfully. I am diagnosing the issue.

If you have missing controllers or missing drives, please click on the "Create Debug File" link at the bottom of the DiskSpeed home page to create a debug file with controller information included. Do not submit unRAID diagnostic files unless asked. Email the debug file to [email protected]

Change Log
- Single-thread the process which analyzes & cleans up the benchmarks to protect against a race condition where two threads try to process it at the same time
- Reworked how NVMe drives are detected
- Scan all PCI root ports for controllers
- Modified IOMMU detection
- Handle drives rebranded by Apple
- Drive model cleanup for ADATA, Corsair, LITEONIT
- If the first word of the model matches the vendor, remove it
- If the drive is not found in the Hard Drive Database, add a link to allow the vendor to be modified
- Move benchmark processing files to a Docker-contained temp file to allow for auto-cleanup during Docker updates
- Detect if a drive was added to the host after the Docker started
- Do not submit benchmark results if the drive was tested at fewer than every 10%
- Do not submit benchmark results if bandwidth capping was detected
- Do not include labels for drives never benchmarked on the main page benchmark graph
- Expand the debug file creation to include certain PCI device information to help identify controllers not yet supported
- Don't display the benchmark average if there's only one benchmark done
- Fix text overlay style error on drive images; hopefully drives will now match the editor
- Change the label of the "Purge Everything" button to make it clearer, and add a confirmation dialog

Edited April 30, 2018 by jbartlett
papnikol Posted April 30, 2018
7 hours ago, jbartlett said: That's fascinating. It does look like that but I haven't seen any controller not load-share the reads like that.
Wow, I like your graph much more than mine. My array has 2 AOC-SASLP controllers with 8 and 3 HDDs respectively. I moved one drive so that there are 7 and 4 HDDs on each controller, in order to exceed both controllers' max bandwidth. The balancing problem appears on both controllers. Maybe it is a problem with the controller settings. Maybe I should try disabling INT 13h (although I never understood what that is)...
Edited April 30, 2018 by papnikol
LammeN3rd Posted April 30, 2018
I think my SAS expander does not play nice with drive detection. I've already sent an email with the debug file.
Squid Posted May 3, 2018
On 4/23/2018 at 6:14 PM, wgstarks said: After updating to unraid 6.5.1 I'm getting this display in the docker tab where disk speed should be
Fixed in 6.5.2-rc1+
jbartlett (Author) Posted May 8, 2018
Beta 3a: Remove a debug line which had drives testing for only 5 seconds instead of the intended 15. This may have resulted in slightly less accurate speed scores but should still be within an acceptable margin of error.
jbartlett (Author) Posted May 8, 2018
On 4/27/2018 at 11:56 PM, papnikol said: Do those results indicate that the 4x PCIe lanes are saturated by the 4 first drives? Or could it be something else?
How many CPU threads do you have available to unRAID (& Docker)? As in, not pinned to any VM or the like?
papnikol Posted May 8, 2018
1. I have no VMs.
2. I have already changed my mobo/CPU/RAM to a much better configuration (but the 2 SASLP controllers remain): MSI RD480 / AMD Athlon 64 3700+ / 3GB -> ASRock FM2A88X Pro+ / AMD Athlon X4 840 quad core @ 3100 / 8GB
3. These are the results for one of the controllers with the new configuration:
There is more bandwidth due to the move from PCIe 1.0 to PCIe 2.x, but the bandwidth is still not balanced across all disks. Bandwidth usage is maximized for each drive in turn until the last 2-3 drives have very little bandwidth available.
Edited May 8, 2018 by papnikol