jbartlett

Community Developer
  • Posts: 1,897
  • Days Won: 8

Everything posted by jbartlett

  1. Share a picture. If the hump is at the start, some Seagate drives are like this where the first 25GB-50GB is much slower. Keep an eye on the drive and see if the graph changes over time.
  2. Thanks for continuing to test it, guys. I've got over 80 drives in my "Not found" folder which is helping to fine-tune vendor & model detection. So far, 138 unique drives have had information uploaded and 66 models have benchmark data.
  3. Well, if ya got a spare one of those too... haha I almost went with a system close to that before going to the Ryzen 1950X.
  4. Well, if you happen to have a spare controller lying around that's the same model as the one that won't detect, I'm willing to pay for shipping here & back. But barring that, I'll be adding additional diagnostics for reporting in the next beta that'll help me figure out the controllers.
  5. I answered one of them in the 2nd post on this page (page 3), which may help you get past the spindown issue. Spin up all your drives and set the checkbox to disable SpeedGap testing. I haven't added unraid's spin-up method yet; it's on my to-do list.
  6. If unraid can see it, then it's simply a matter of understanding how the drives are laid out in the OS under /sys/devices. SATA, SAS, and NVMe drives are all represented differently. The best way to do it is to have such a controller, but I've already made too many expenditures to pick up a card to add support right now. If you're willing to help me find the information to add support (will be a bit of back-n-forth), send me a PM. I'll also add support for an "Unidentified Controller" to stick any orphan drives under.
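To illustrate the layout differences mentioned above, here's a rough sketch of how one might classify a drive by the shape of its /sys/devices path. The helper and sample paths are hypothetical, not the app's actual detection code:

```python
# Hypothetical sketch: guess a drive's transport from its /sys/devices path.
# The sample paths below are illustrative; real paths vary by hardware.

def classify_drive(sysfs_path: str) -> str:
    """Classify a block device path as NVMe, SAS, SATA, or unknown."""
    if "/nvme/" in sysfs_path:
        return "NVMe"
    if "/expander-" in sysfs_path or "/end_device-" in sysfs_path:
        return "SAS"
    if "/ata" in sysfs_path:
        return "SATA"
    return "Unidentified Controller"  # orphan drives get parked here

samples = [
    "/sys/devices/pci0000:00/0000:00:17.0/ata1/host1/target1:0:0/1:0:0:0/block/sda",
    "/sys/devices/pci0000:00/0000:00:1d.0/0000:05:00.0/nvme/nvme0/nvme0n1",
    "/sys/devices/virtual/block/loop0",
]
for path in samples:
    print(path.rsplit("/", 1)[-1], "->", classify_drive(path))
```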
  7. Frontier in my area. Finally came back up after 14 hours and looks like it may be sticking around. I don't think their tech support is allowed to say what's causing the outage even if they know. Personally, I think the number of cat pics in the Seattle area hit a critical threshold.
  8. Speed gaps are where the minimum & maximum speeds during the test had a sizeable gap which may indicate drive activity. You can disable that on the test selection screen.
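As a sketch of the idea (the threshold and numbers here are made up, not the app's actual tuning), a speed-gap flag just compares the slowest and fastest samples taken at a test spot:

```python
def has_speed_gap(samples_mbps, max_ratio=1.5):
    """Flag a test spot where the max/min speed ratio diverges enough
    to suggest something else was hitting the drive mid-read.
    The 1.5 threshold is a hypothetical value for illustration."""
    lo, hi = min(samples_mbps), max(samples_mbps)
    return hi / lo > max_ratio

print(has_speed_gap([182.0, 185.5, 184.1]))  # steady run, no gap
print(has_speed_gap([181.0, 95.2, 183.4]))   # one sample collapsed: gap
```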
  9. Submitting drive info & benchmarking won't work currently; there's been a FiOS outage in my area for over 6 hours now. Been meaning to work on better catching network errors like that, now's my chance! (trying to jinx the issue to be fixed)
  10. Dude. What you've been getting shouldn't be possible. SSDs should always be a nice smooth line. In fact, earlier today I took the graphs off the HDDB and replaced them with an average read speed over the entire drive because graphs are practically useless for SSDs.
  11. Beta 3a posted
      • Single-threaded the process which analyzes & cleans up the benchmarks to prevent a race condition where two threads try to process it at the same time
      • Reworked how NVMe drives are detected
      • Scan all PCI root ports for controllers
      • Modified IOMMU detection
      If you had missing drives or couldn't get past the Scanning Hardware screen, please update and try again. If you still can't get past the Scanning Hardware screen, change the URL from http://[ip]:[port]/ScanControllers.cfm to http://[ip]:[port]/isolated/CreateDebugInfo.cfm and hit enter.
  12. Needed to purge all the uploaded benchmarks due to how the last 100% scan spot was recorded. When the benchmarking process reads the spot just prior to 100%, it sees how much data was read and then moves back from the end of the drive that amount plus a couple extra blocks. It was marking the start of that last block as the scan position instead of the total capacity of the drive itself. On the HDDB, this resulted in scans from multiple people having different end points based on the negotiation between the drive & OS for the optimal block size, and the graphs would have a flat line at the right end. In a previous beta, I fixed that so the last spot at 100% is recorded as happening at the end of the drive instead of at the offset, but there was no way I could easily identify sets of data in the table since there was no field grouping sets together. Now there is. Benchmarks will continue to float in over time to rebuild the HDDB, and I'll be able to manually fix any benchmarks where the last spot doesn't match the drive capacity. The drive information in the HDDB wasn't touched.
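In numbers, the offset logic described above looks roughly like this (illustrative values; the actual read amount and block size come from the drive/OS negotiation):

```python
def last_spot_offset(capacity, read_len, block_size, extra_blocks=2):
    """Where the final benchmark read actually starts: back off from the
    end of the drive by the amount read plus a couple extra blocks."""
    return capacity - read_len - extra_blocks * block_size

capacity = 4_000_787_030_016  # example 4TB drive, in bytes
offset = last_spot_offset(capacity, read_len=1_073_741_824, block_size=4096)

# Old behavior: record `offset` as the 100% spot, which differs per
# machine because read_len/block_size vary with drive & OS negotiation.
# Fixed behavior: record `capacity` itself, so every upload's graph
# ends at the same point.
print(offset, capacity)
```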
  13. That looks like an issue with one of the Docker Hub mirrors. If you try again after a short bit, it may work. But there's nothing I can really do to fix it that I can think of other than making another push.
  14. Can you run this URL? Replace [ip] with your unRAID IP address, or just copy & paste everything after the port: http://[ip]:18888/isolated/CreateDebugInfo.cfm
  15. Got it, thanks! Working on adding code to let me switch between debug files and load the exec results instead of actually executing the command.
  16. Beta 2d uploaded
      • Add Scroll to Top button
      • Save the last benchmark spot as the capacity of the drive instead of the computed offset where it reads from to reach the end of the drive
  17. Nyghthawk & MMW - please update the Docker app and pull it up. Scroll down to the bottom of the page where the Rescan buttons are located and click on "Create Debug File". This will create a file in the <appdata>/DiskSpeed directory to email me at [email protected]
  18. Beta 2c pushed
      • Added additional Mushkin model cleanup
      • Added Port Number for the drives on the Home Screen
      • Added troubleshooting debug file logic
      • Keep drive edits after rescanning the controllers
  19. Onboard SATA controllers aren't being detected. I'll add code to pack up the saved data files so I can see exactly what you see and troubleshoot.
  20. Do you have a 3rd controller that the three missing drives are attached to? If so, what is it?
  21. Nope, sure isn't proper behavior. The storage.json file is recreated every time the system is scanned, and oldstorage.json is created if I need to preserve any data. But the drive configuration should be saved in a file named "config.json" for the drive in a save directory under <appdata>/DiskSpeed/Instances/local/driveinfo, and that file doesn't seem to be always created. I'll look into it.
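The save path described in that post would be built roughly like this. This is a hypothetical helper: only the directory layout comes from the post, and the per-drive subdirectory is an assumption on my part:

```python
import json
from pathlib import Path

def save_drive_config(appdata: Path, drive_id: str, config: dict) -> Path:
    """Persist drive edits under
    <appdata>/DiskSpeed/Instances/local/driveinfo/<drive_id>/config.json,
    creating the directory tree first in case a scan hasn't made it yet
    (missing directories being one suspect for the file never appearing)."""
    target = appdata / "DiskSpeed" / "Instances" / "local" / "driveinfo" / drive_id
    target.mkdir(parents=True, exist_ok=True)
    path = target / "config.json"
    path.write_text(json.dumps(config, indent=2))
    return path
```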
  22. Check your Docker setup to see if Local Storage is defined and whether its directory location contains files. An empty directory is a sign that it's not saving data to the appdata share and the changes are being stored in the Docker container itself, which is lost whenever you update the application via Docker Update.
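One way to check this is to look at the container's mount list (as reported by `docker inspect`) and verify a bind mount points at the appdata share. A sketch against a canned sample; the container paths here are hypothetical:

```python
import json

# Sample of the Mounts section that `docker inspect <container>` reports;
# the Source/Destination paths are made-up examples.
inspect_output = json.loads("""
[{"Mounts": [
    {"Type": "bind",
     "Source": "/mnt/user/appdata/DiskSpeed",
     "Destination": "/tmp/DiskSpeed"}
]}]
""")

def appdata_is_mapped(mounts, appdata="/mnt/user/appdata"):
    """True if some bind mount puts the host appdata share inside the
    container. If False, edits live in the container layer and are
    lost on every Docker update."""
    return any(m["Type"] == "bind" and m["Source"].startswith(appdata)
               for m in mounts)

print(appdata_is_mapped(inspect_output[0]["Mounts"]))
```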
  23. I won't be able to correct these. It's displaying what the OS is reporting. Likely the entry for the controller is wrong in the master PCI ID database.