jbartlett

Community Developer
Everything posted by jbartlett

  1. Somehow your config files were corrupted, and the quickest way to force a rescan of your hardware when you can't even view the page is to update the Docker container. I have logic that detects the update and forces a hardware scan. I'll see about adding logic to catch this situation in the future. Thank you for reporting it.
  2. This shouldn't happen; the CONFIG structure should exist for every drive. Can you try doing a "force update" on the Docker tab for DiskSpeed and relaunching? You should see it say "Scanning hardware" before it displays the normal page. If it still fails, please use this URL (updated to reflect your UNRAID server's IP) to create a debug file and PM it to me. http://[IP]:18888/isolated/CreateDebugInfo.cfm
  3. FYI, I've been running into this ever since switching to the built-in Aquantia 10Gb NIC.
     /etc/rc.d/rc.inet1 stop
     /etc/rc.d/rc.inet1 start
     /etc/rc.d/rc.inet1 restart
     Found the answer here:
  4. Please update your DiskSpeed Docker container. I added logic to catch this rare issue.
  5. Can you email/DM me a new debug file? I've enhanced the logging since the previous one to include more detail for me to look at.
  6. Version 2.10.8
     • Allow non-solid-state drives in a pool, or single-drive UNRAID arrays with single parity, to be benchmarked
     • Add title (on hover) to drive labels in case they're too long to display
     • Correct the benchmark graph not displaying on some systems with SSD drives
     • Add an error trap in case the process that opens up all permissions on generated files gets into a race condition and can't run because it's already running
     • Add missing Mount Drive FAQ images
     • Handle extended Unicode characters in drive info
     • Add an error trap around fetching a drive's partition information to catch timeouts
     • Prevent the warning about benchmarking a container with only 4 CPUs visible to Docker from stopping the benchmark process
     This version should correct the white benchmark screen. 😃
  7. Yes, multi-device testing will be available in a future update. RAID support is already added; other types, such as ZFS pools, will follow.
  8. Can you create a debug file from the DiskSpeed main page (link at the bottom) and PM it to me?
  9. Please try again. I was performing some database maintenance.
  10. Not sure I follow. Your screenshots show that both the NVMe and SSD are not testable, and this is by design because they are in a multi-device pool. I'm now moving multi-device benchmarking from being introduced in v3 into the current v2, so soon you should be able to.
  11. If they are in a multi-device pool, they won't be benchmarked by this version, as it's intended to benchmark single devices. If they aren't pooled devices but can be accessed separately, via SMB shares for example, then ensure you have a mount point created; the FAQ link on the benchmark page will help (see the sketch below).
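     A minimal sketch of creating a mount point by hand, assuming a hypothetical partition /dev/sdX1 and path /mnt/disks/mydrive (on UNRAID, the Unassigned Devices plugin normally handles this for you):
     mkdir -p /mnt/disks/mydrive          # create the mount point directory
     mount /dev/sdX1 /mnt/disks/mydrive   # mount the partition so it can be accessed as a single device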
  12. Sorry for the lack of responses, I've been pretty busy the past few weeks. I should be able to dedicate time to figuring out this other blank screen issue next week.
  13. NVMe and SSD drives are tested by creating normal, but large, files; they are not written to directly. You can observe the files being created in the root directory of the drive while the test is underway, and they are automatically deleted when the benchmark is done with them. In the event of an error that leaves the files behind, simply redisplaying the DiskSpeed app will clean them up. While it could be argued that this increases wear on the drive, there is nothing damaging in what the app does. Since the app requires you to create a mount point before it can test a solid-state drive, nobody walks in blindly. (A rough dd sketch of the same idea follows.)
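     As a rough illustration of the same idea (not DiskSpeed's actual code), assuming a hypothetical mount point at /mnt/disks/ssd1:
     # Write a large test file; direct I/O bypasses the page cache so the drive itself is measured
     dd if=/dev/zero of=/mnt/disks/ssd1/speedtest.tmp bs=1M count=10240 oflag=direct status=progress
     # Read it back to measure read speed
     dd if=/mnt/disks/ssd1/speedtest.tmp of=/dev/null bs=1M iflag=direct status=progress
     # Remove the test file afterward, as DiskSpeed does automatically
     rm /mnt/disks/ssd1/speedtest.tmp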
  14. Version 2 of DiskSpeed is designed to test individual drives; the read speed of a drive in a pool is affected by the other devices in the pool. I'm removing that restriction in Version 3 (under development), but those benchmarks will be local only (not able to be submitted). I'm contemplating removing the restriction in Version 2....
  15. If you experienced the blank benchmark screen, please update the DiskSpeed docker app to show version 2.10.7.5 and try again.
  16. Concerning the blank screen on benchmarking a drive, it seems to be related to a cloned or restored drive where both the old and new drives are still on the system. I'm adding logic to detect this and allow the benchmark if one of the duplicate drives is not mounted. Current workarounds would be to change the partition ID(s) on the old drive or to delete and recreate the partition (see the sketch below).
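     For the first workaround on a GPT disk, a minimal sketch, assuming the old clone is /dev/sdX and you no longer need to boot from it:
     # Randomize the disk GUID and all partition unique GUIDs so they no longer collide with the new drive
     sgdisk -G /dev/sdX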
  17. It won't benchmark multi-drive devices. But if the drives have their own block devices, you can run the following command on each, replacing "/dev/sdi" with yours. CTRL-C to end, or it'll stop at the end of the drive.
     dd if=/dev/sdi of=/dev/null bs=256MB iflag=direct conv=noerror status=progress
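     For reference, what each part of that command does:
     #   if=/dev/sdi      read from the raw block device
     #   of=/dev/null     discard the data; only the read speed matters
     #   bs=256MB         read in 256 MB blocks
     #   iflag=direct     bypass the page cache so cached data doesn't inflate the results
     #   conv=noerror     keep going past read errors
     #   status=progress  print a running throughput figure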
  18. Thank you for the file. Looks like there's no error happening. I'm still investigating.
  19. The blank line dropping fast between 4TB and 5TB, or the yellow line between 10TB and 11TB? Yeah, I would say something's up with those drives, especially if they tested normally previously. Search for your drive on the companion website HDDB and see what others are getting with their scans.
     Sometimes a drive will drop from SATA 3 to a lower link speed. A full power-off tends to correct that, but I've seen it come back later. It could also be a faulty cable. If a drive tests the same by itself, with no other drives running benchmarks at the same time, it's likely the drive is in some sort of safe mode. I'm not sure what causes a jumping line like that; see what others are getting with the same drive to see if it's expected.
     A "Speed Gap" is my term for when a drive is reading at a steady rate, there's a big drop in the amount of data being read, and then it goes back up and continues normally. Typically these are drives being accessed by some other process at the same time, but it could also indicate a spot with remapped sectors.
     A constant read speed (within a small percentage) across the drive indicates the drive is capable of sending data faster than the data link can support. This is easily seen on an older multi-drive controller loaded down with SSDs.
     If Drive 15 is also the one that looks like a sine wave: I don't have logic that clears the bandwidth indicator if it suddenly spikes higher than the steady read rate. There are likely other explanations, but I don't have such drives to add detection logic for them. As to Point C, that's multiple devices going at the same time - not likely to be the cause here. The Controller Benchmark does read all drives at the same time to check whether the controller's capacity is being maxed out. Could be something wonky with how the drive was designed. Could be a shucked drive that has platters/heads disabled. <shrug>
     It's displayed during testing, but I can look into adding it to the overview. b - should be "read", corrected for the next release. The note is there to make sure you're aware, if it happens, that one of the drives could be wonky or have unexpected behaviors.
  20. Version 2.10.7.3 has been pushed with the increased timeout and typo correction.
  21. I'm afraid to look at my own feet..... Then I think that's the issue. It looks like the main process is timing out after the configured 50 minutes, before it finishes doing all the drives on the system. I figured that would be enough, but nope, looks like it's not. I didn't want to omit the timeout entirely in case of a rogue runaway process - it should never happen, but when you deal with computers, the "never happens" have been known to happen. I'll update the timeout to 2 hours (sketched below). It's possible that the Lucee app server is ending the process before the "Show this error" message is displayed, so it looks like it freezes instead. Glad to help!
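     The same kind of guard can be sketched generically with the coreutils timeout command, assuming a hypothetical benchmark script name:
     # Cap the run at 2 hours so a rogue runaway process can't hang forever
     timeout 2h /opt/diskspeed/benchmark_all_drives.sh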
  22. When this happens, can you right-click anywhere and select "Save Page As" and attach that to a reply?