jbartlett

Community Developer

  • Content Count: 1623
  • Days Won: 7
  • Community Reputation: 193 (Very Good)
  • Followers: 2

jbartlett last won the day on August 6, 2019 and had the most liked content!

About jbartlett

  • Rank: John Bartlett
  • Birthday: 07/20/1970
  • Gender: Male
  • URL: https://www.youtube.com/c/TheCritterRoom/
  • Location: Seattle, WA
  • Personal Text: Foster parent for cats & kittens


  1. Tried using pci-stub instead of VFIO, but that netted the same result. It looks like if I want to use a graphics card with both Docker and a VM (though of course not at the same time), I'll need to create separate boot configs - one with the graphics card stubbed out for the VM and one without for Docker (see the sketch below).
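     A minimal sketch of what those two boot entries could look like in /boot/syslinux/syslinux.cfg, borrowing the Quadro P2000 vendor:device IDs from the next post as an example (adjust the IDs to the card being stubbed):

        label Unraid OS (GPU stubbed for VM)
          menu default
          kernel /bzimage
          append vfio-pci.ids=10de:1c30,10de:10f1 initrd=/bzroot
        label Unraid OS (GPU free for Docker)
          kernel /bzimage
          append initrd=/bzroot

     Stubbing both the VGA and audio functions keeps the whole card together for passthrough.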
  2. Apologies if it has been covered but I've been over half of the pages and haven't seen anything related. I have two NVIDIA cards installed on my system:

        [10de:1c30] 41:00.0 VGA compatible controller: NVIDIA Corporation GP106GL [Quadro P2000] (rev a1)
        [10de:10f1] 41:00.1 Audio device: NVIDIA Corporation GP106 High Definition Audio Controller (rev a1)
        [10de:1d01] 42:00.0 VGA compatible controller: NVIDIA Corporation GP108 [GeForce GT 1030] (rev a1)
        [10de:0fb8] 42:00.1 Audio device: NVIDIA Corporation GP108 High Definition Audio Controller (rev a1)
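     For reference, an equivalent listing with the same vendor:device IDs can be pulled on the host:

        lspci -nn | grep -i nvidia     # or: lspci -nn -d 10de: to match only the NVIDIA vendor ID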
  3. Look up the drives on the HDDB to see if other people are getting similar speeds. If so, I may need to adjust the bandwidth cap threshold. Note that you can click on the drive labels on the line graphs to hide drives to better show the ones affected.
  4. If it's on an individual disk it would be represented by a somewhat flat line on the graph over several test locations. Since the trend is always downward on a spinner, a flat area means the drive is capable of outputting more data than the controller (or downstream) can handle. If it's on a controller benchmark, it means that the controller can't handle the maximum output of all the drives connected to it at the same time and is a bottleneck. Submit graphs to better explain what you are referencing if this doesn't cover it.
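     One way to see a controller bottleneck outside the plugin is to read several of its drives at once and compare per-drive throughput against reading them one at a time (hypothetical device names; reads only, non-destructive):

        for d in /dev/sdb /dev/sdc /dev/sdd; do
          dd if=$d of=/dev/null bs=1M count=1024 iflag=direct &   # direct I/O bypasses the page cache
        done
        wait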
  5. This is largely a case-by-case basis and depends on the motherboard, its physical PCIe slot-to-CPU/NUMA/device connections, and even the BIOS version. If you populate a video card in a given PCIe slot and it shows up in its own IOMMU group, you should be able to pass it to different VMs/Dockers. If that is true for multiple PCIe slots, you should be able to populate each of those slots with a different graphics card and pass each one to a different VM/Docker. In theory. You can have many VMs/Dockers set up to use a given video card, but only one of those can be using it at any given time.
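     A common sketch for checking how devices are grouped (standard sysfs layout, nothing Unraid-specific):

        for d in /sys/kernel/iommu_groups/*/devices/*; do
          g=${d#/sys/kernel/iommu_groups/}; g=${g%%/*}      # extract the group number from the path
          echo "IOMMU group $g: $(lspci -nns ${d##*/})"     # describe the device at that PCI address
        done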
  6. The plugin "Disable Mitigation Settings" might help you troubleshoot that.
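     I can't confirm exactly what that plugin changes, but the kernel's view of which mitigations are active can be checked directly, and they can be disabled wholesale with the mitigations=off boot parameter (a performance-versus-security tradeoff):

        grep . /sys/devices/system/cpu/vulnerabilities/*   # one line per vulnerability with its mitigation status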
  7. "Bandwidth capped" means that the drive is likely outputting data faster than the system can utilize it, and it's represented by a flattish line for a portion of the graph. Question - when you were getting the Speed Gap errors, was the max allowed size increasing? It's supposed to increase every time it retries so that it eventually passes, but here it doesn't look like it was doing that.
  8. docker exec -it DiskSpeed bash
        root@af468d0f3720:/usr/local/tomcat# nvme id-ns /dev/nvme0n1
        NVME Identify Namespace 1:
        nsze : 0x3a386030
        ncap : 0x3a386030
        nuse : 0x10facc48

     nsze: total size of the namespace in LBAs. ncap: max number of LBAs. nuse: LBAs allocated to the namespace. It looks like a dd read on the device, starting at the beginning and not exceeding "nuse", would return data read. If nuse is under a given duration/size, a benchmark cannot be done. Alternately, if a file is found in excess of a given size that has no unwritten extents reported
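     A minimal sketch of a read bounded by nuse, using the values above (assumes the logical sector size reported by blockdev matches the LBA size in use):

        ss=$(blockdev --getss /dev/nvme0n1)    # logical sector (LBA) size in bytes
        nuse=$((0x10facc48))                   # allocated LBAs, from the id-ns output above
        dd if=/dev/nvme0n1 of=/dev/null bs=1M count=$(( nuse * ss / 1048576 )) iflag=direct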
  9. So the correct logic would be to take the lowest speed as the maximum read speed? There will be interface issues on some systems, so an average of the lowest speed of every reported SSD of the same make/model/revision would be more representative of the whole.
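     As a sketch of that aggregation - per-drive minimum first, then the average across drives of the same model (hypothetical speeds.csv with serial,model,MBps rows, one row per benchmark pass):

        awk -F, '
          { k = $1 SUBSEP $2                                        # one bucket per physical drive
            if (!(k in min) || $3+0 < min[k]) { min[k] = $3+0; model[k] = $2 } }
          END {
            for (k in min) { sum[model[k]] += min[k]; n[model[k]]++ }
            for (m in n) printf "%s: %.1f MB/s\n", m, sum[m]/n[m]   # average of per-drive minimums
          }' speeds.csv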
  10. That's a trend across pretty much every SSD, and I don't have an answer for you as to why. On the HDDB, I take the peak speed and report that as the transfer speed.
  11. Check the SMART report to see if there are pending or reallocated sectors; those could explain slow spots because the drive is attempting multiple reads of a sector. You can force a check for bad sectors by performing a preclear on it with no pre- or post-reads. Note that this only applies if you intend to use the drive as long-term, no-update, no-risk storage. I had a drive with similar slow spots that developed over 20K pending sectors after a preclear.
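     The relevant attributes can be pulled with smartctl (substitute the actual device for /dev/sdX):

        smartctl -A /dev/sdX | grep -E 'Reallocated_Sector_Ct|Current_Pending_Sector|Offline_Uncorrectable'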
  12. You are correct. I had an empty XFS volume that I copied large files to, and when I created a file allocation map, they were NOT located at the start of the drive but spread out in three general areas across it.
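     On XFS, an allocation map like that can be pulled per file with xfs_bmap (hypothetical path):

        xfs_bmap -v /mnt/disk1/bigfile.bin   # -v lists each extent's block range and allocation group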
  13. 1. I don't think there would be any difference in speeds between file systems when reading existing files. For the creation and deletion of files, the file system can have an impact on the speed of the operations, but that is also highly variable: tree depth, number of files in a directory, even drive utilization percentage. I haven't thought about trying to benchmark that; I'm not sure there's enough value in the results to warrant it (a rough sketch of such a test follows below). 2. I've thought about it, but I don't think many people would truly test such logic with their data, even if the write is writing what was
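     A rough create/delete timing sketch for the first point (hypothetical path; creates and then removes 10,000 empty files):

        cd /mnt/disk1 && mkdir fstest && cd fstest
        time bash -c 'for i in $(seq 1 10000); do : > "f$i"; done'   # creation
        time bash -c 'rm -f f*'                                      # deletion
        cd .. && rmdir fstest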