jbartlett

Community Developer

Everything posted by jbartlett

  1. Can you give a link to the PCI-e adapter you added? Also, please update this URL to reflect your NAS's IP address, then click the middle button to create a debug file that includes controller information: http://[ip]:18888/isolated/CreateDebugInfo.cfm then email the file to [email protected]
  2. If you clicked the Upload button before you purged, then you can restore the first benchmark by clicking on the individual drives as instructed in my previous post. If you hadn't uploaded, then they are gone and can't be recovered. If you don't suspect anything wrong with a drive, then you're probably fine with the latest run. As a general rule, you typically don't want to purge and start from scratch. It'll automatically rescan if it detects a change in your drives (or between DiskSpeed versions), but if it doesn't, you can click the manual scan (left-most button). Having a slow drive in your cache doesn't affect your array speeds.
  3. The main page displays all the benchmarks; clicking the page header on any page will take you there. The final benchmark result as shown isn't available once you leave it, but you can click on the 3-line button in the top-right corner to download an image version of it. Here's how it looks on my system
  4. Okay, everything looks almost OK. Are you saying that you clicked the "Benchmark Drives" button and it finished and displayed the graphs, but now it does not? It should be displayed right under that button. Likewise, if you click on any of the drives, its benchmark will be displayed on its information page. Here, it doesn't seem that any benchmarks happened. There are a couple of things that could cause this. If you clicked "Purge Everything and start over", that erases your past benchmarks from your system. If a benchmark is somehow invalid, such as the abort button being pressed or the benchmark not completing, it is removed. If the "Abort Benchmark" button updated to "Continue", the benchmark process ran fully and successfully. It's safe to click "Benchmark Drives", or click on one drive and run the benchmark on it for a faster result. If the benchmarks still don't display, run another single-drive benchmark and click just to the right of the period at the end of "Click on a drive label to hide or show it." to toggle the visibility of the hidden iframes that are doing the actual work. If there was an error, it will be displayed there. If you accidentally erased your benchmarks, you can recover them if you had uploaded them to the HDDB by clicking the "Upload Drive & Benchmarks" button: view a drive and click on the "Manage Benchmarks" button. The next page will display every benchmark on your system and what has been uploaded. Click on the drive label in the legend to hide or show individual benchmarks. Any that are visible when you click "Update Benchmarks" are saved locally; the rest are removed locally. Ideal if you have a ton of benchmarks but only want to see, for example, your oldest and newest. On a side note, many of your drives don't have images even though they exist in the HDDB. When clicking the "Rescan Controllers" button, does it indicate any issues fetching the images?
  5. Version 2.10.1 has error trapping added around calling the Spinup function. It's probably still happening, but it ignores the spinup issue and keeps moving forward. I'm looking into the debug files that were sent.
  6. If you're asking this, you're likely not using the Docker version of the application, or you would see the benchmarks when you opened up the application's site at the given URLs. What are you running?
  7. If you are running DiskSpeed via UNRAID, click on the Docker tab and then click on the DiskSpeed icon. Select "WebUI". For other installations, use the given IP of the machine running Docker with port 18888. Examples: http://localhost:18888/ http://192.168.1.2:18888/
  8. Please update the docker, it should reflect version 2.10.1. I corrected the logic error that caused this issue.
  9. That drive has an interesting block size of either 4,096 / 4,160 / 4,224 bytes. Please update the Docker DiskSpeed settings to pull from repository "jbartlett777/diskspeed:2.10a" - this version has more robust logging plus additional error trapping added. It should allow you to get past that point (but may error elsewhere) and will display a link at the bottom to create a debug file if it successfully finishes. However, prior to doing so, please click on the DiskSpeed icon in Unraid, select "Console", and then enter the following command:

     cp /tmp/DiskSpeedTmp/spinup.sh /tmp/DiskSpeed/spinup.sh

     If it errors again, update the URL to replace "/ScanControllers.cfm" with "/isolated/CreateDebugInfo.cfm?Back=1". When creating the debug file, please select the middle option for missing controllers or drives.
  10. @MustardTiger - I added a debug version of DiskSpeed for you to try which prevents the diagnostic files from being removed after the hardware scan. Please update the Docker repository to "jbartlett777/diskspeed:2.10a" and open DiskSpeed. After the hardware is scanned, it'll prompt you to create a debug file; please do so and email it to the address given.
  11. I pushed a tweaked 2.10 up, please update and try again. I added a list of partitions to the check to see if the drive assignments have been updated but I missed that it forces the drives to spin up one at a time. As for the benchmarking part, after updating the app, open and click on the "Create Debug File" link at the bottom and then the left button on the dialog. Attach to a reply or email it to [email protected]
  12. It's kind of an inclusive statement. If the drive is not mounted *or* is mounted but with 25GB or less available *or* is part of a pool, then it can't be benchmarked. If you are referring to the unassigned device, you'll need to use the "Unassigned Devices" plugin to mount the drive (it may need formatting) and then restart DiskSpeed so it can see the change. I'll see about updating the message to indicate more specifically why a drive can't be benchmarked. The mounted requirement exists because files have to be created on the device, and it has to be mounted to do so. Please open a command shell to your Unraid box or the DiskSpeed app, enter the following commands, and reply with the result:

     lsblk /dev/nvme0n1
     lsblk /dev/nvme1n1
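As a rough illustration of that inclusive rule, here is a hypothetical sketch (the function name and inputs are mine, not DiskSpeed's actual code): a device qualifies only when it is mounted, has more than 25GB free, and is not part of a pool.

```shell
#!/bin/sh
# Hypothetical sketch of the eligibility rule described above; not
# DiskSpeed's actual code. A device can be benchmarked only if it is
# mounted, has MORE than 25 GB free, and is not part of a pool.
can_benchmark() {
  mounted=$1   # "yes" or "no"
  free_gb=$2   # free space in GB
  in_pool=$3   # "yes" or "no"
  [ "$mounted" = "yes" ] && [ "$free_gb" -gt 25 ] && [ "$in_pool" = "no" ]
}

can_benchmark yes 100 no && echo "benchmarkable"
can_benchmark yes 10 no  || echo "skipped: 25GB or less free"
```

Note that exactly 25GB free is still excluded ("25GB or less available"), hence the strict greater-than check.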
  13. Right Proper SSD benchmarking has been added. To perform a benchmark, a configurable number of test files of a given size (defaults to ten 2GB files) are written to the drive and then read back. The overall time taken for each is used to compute the MB/sec average for the file. Restriction: SSDs that exist in a multi-drive pool are excluded. To benchmark an SSD intended to be in a pool, use the "Unassigned Devices" plugin to format & mount the SSD and restart the DiskSpeed docker. After benchmarking, use "Unassigned Devices" to clear the partitions and then add it to your pool. While the system cache is bypassed when benchmarking, some devices have a built-in cache that ignores cache-bypass commands. An initial high write speed that quickly levels out is a sign of such a cache, as shown below.
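The MB/sec math behind this is simple; here is a sketch (mine, not DiskSpeed's code) of the average: bytes moved divided by elapsed seconds.

```shell
#!/bin/sh
# Sketch of the MB/sec average described above, not DiskSpeed's actual code:
# divide the bytes transferred by the elapsed wall-clock seconds.
mb_per_sec() {
  awk -v bytes="$1" -v secs="$2" 'BEGIN { printf "%.1f\n", (bytes / 1000000) / secs }'
}

# A 2 GB (2,000,000,000 byte) test file written in 10 seconds averages 200 MB/s.
mb_per_sec 2000000000 10
```

To time a single raw write yourself, something like `dd if=/dev/zero of=/mnt/disk1/test.bin bs=1M count=2048 oflag=direct` works (the path is an example); `oflag=direct` asks dd to bypass the page cache, mirroring the cache-bypass behavior described above.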
  14. I've found that this error by itself is harmless; it's the OS trying to identify what the drive supports. If you want to try to isolate what is causing the problem, here is the series of commands that are executed. Start the DiskSpeed docker app but do not open the web interface. Open a command shell on the system and enter the following:

     docker exec -it DiskSpeed bash

     Unfortunately, the debug files you sent did not include any of the data files that are created while scanning the drive. Please try running the following to see if any of these cause issues. I don't need to know what worked or what was returned, just which one blew up, if any.

     /usr/bin/lspci
     /bin/ls -l /sys/dev/block
     /bin/lsblk
     /bin/ls -l /sys/block
     /usr/sbin/hwinfo --pci --bridge --storage-ctrl --disk --ide --scsi
     /usr/bin/lspci -D
     /usr/bin/find /sys/devices -name usb?
     /bin/ls -l /sys/block
     /usr/sbin/nvme list
     /usr/bin/lshw -c storage
     /sbin/blockdev --getmaxsect /dev/nvme0n1
     /sbin/blockdev --getsize64 /dev/nvme0n1
     /sbin/hdparm -I /dev/nvme0n1
     /usr/bin/lshw -xml
     /usr/sbin/dmidecode -t 2
     /usr/sbin/dmidecode -t 9
     /bin/df -B 1KB
     /sbin/parted -m /dev/nvme0n1 unit B print free
     /sbin/blkid -o export /dev/nvme0n1
     /sbin/blkid -n -o mountpoint /dev/nvme0n1

     If these all work, I'll see about adding a "sync" after each file creation to force the data to be saved prior to running the more dynamically generated commands.
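To avoid pasting each probe by hand, a throwaway helper along these lines (my sketch, not part of DiskSpeed) runs a list of commands and reports only the ones that blow up:

```shell
#!/bin/sh
# Hypothetical helper, not part of DiskSpeed: run each command read from
# stdin and print only the ones that fail, since only the failing command
# matters here.
report_failures() {
  while IFS= read -r cmd; do
    [ -n "$cmd" ] || continue
    if ! sh -c "$cmd" >/dev/null 2>&1; then
      echo "FAILED: $cmd"
    fi
  done
}

# Demo with stand-in commands; inside the container you would paste the
# full probe list from the post above instead.
report_failures <<'EOF'
true
false
EOF
# prints "FAILED: false"
```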
  15. You can remote in as instructed in my previous post and SFTP the files out. The problem is the files are in a proprietary format so they won't be of much use to you without a tool to convert them. I read of someone doing so but it was with an older version of Unifi Protect so it might not be viable.
  16. @MustardTiger - Please create a diagnostic file from this URL: http://[nas ip]:18888/isolated/CreateDebugInfo.cfm and click on the "Create Debug File" button. This button will create a file that has the container diagnostics of what it was doing. Using the age of the files, I can see what it was attempting when your server crashed. A Docker container shouldn't ever be able to crash its host. Do you have anything "odd" about your setup?
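The file-age check is easy to reproduce yourself; here is a minimal sketch (mine, not DiskSpeed's code) that lists a directory newest-first so the last file touched sits on top:

```shell
#!/bin/sh
# Minimal sketch of the file-age check: list a directory newest-first so
# the last file touched (what the container was doing when it died) is on top.
newest_first() {
  ls -1t "$1"
}

# Demo on a throwaway directory; against the real container you would point
# this at its work directory (e.g. /tmp/DiskSpeedTmp, mentioned in this thread).
d=$(mktemp -d)
touch -t 202001010000 "$d/older-probe.log"
touch "$d/latest-probe.log"
newest_first "$d"   # latest-probe.log is printed first
rm -r "$d"
```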
  17. FYI, I'll be off visiting my parents, so I won't be able to look into resolving that issue code-wise until after the 20th.
  18. What drive is sdu? It performed a parted command and got back something unexpected. Open up a command shell to your unraid box, enter the following, and reply with the result. The first line takes you into the DiskSpeed Docker container and the 2nd returns the partition information.

     docker exec -it DiskSpeed bash
     parted -m /dev/sdu unit B print free

     You should get back something like the following:

     root@NAS:~# parted -m /dev/sdf unit B print free
     BYT;
     /dev/sdf:6001175126016B:scsi:512:4096:gpt:ATA WDC WD6002FFWX-6:;
     1:17408B:32767B:15360B:free;
     1:32768B:6001175109119B:6001175076352B:btrfs::;
  19. Kick off a benchmark of two or more drives so the "Click on a drive label to hide or show it." label shows up. The period at the end, or the space right after, is a hidden link that displays the hidden iframes that are performing the actual tests. The error message will be displayed in there along with the command or text describing what it tried to do. Share that info but you can exclude any long stack trace.
  20. This error can be ignored. It's something that the Lucee application server team has to resolve. As long as the DiskSpeed application itself recovers...
  21. That'll be inadvertently resolved in version 2.10. I'm adding proper benchmarking of solid state devices by writing multiple files to the device and then reading them back, taking the averages. Highcharts is giving me issues with the multiple x & y axes on one graph; the bars get narrower the larger the included spinners are. Benchmarking solid state devices will be limited to ones in an array slot or a single-drive pool and requires a mounted partition with 25GB of free space available (based on the default benchmarking configuration). You'll be able to test writing/reading x number of files of y size. I'm also leaning towards allowing write benchmarking of spinners, but that'll definitely require a drive with no partitions and will likely not make it into version 3.0.
  22. @FQs19 - you can ignore those logs in most cases. Anything that shows up there tends to be integration issues between Lucee and Java, and they'll eventually work themselves out as the Lucee org takes care of them.
  23. @IZSkiSurfer - 2.9.7.1 has been pushed to include the i-1 change to the Spinup process. I decided to get that out instead of waiting for 2.10 with the SSD benchmarking, as the cosmetic tweaks to get SSDs and spinners looking good together on the same graph are taking longer than I expected.
  24. @FQs19 - You've got two errors here, the 2nd one (logs from DiskSpeed) shows an out of memory condition. How much memory does your system have?