DiskSpeed, hdd/ssd benchmarking (unRAID 6+), version 2.10.7


Recommended Posts

2 hours ago, kizer said:

Seattle Cat Pics, huh? I don't have Cat Pic issues here in Marysville, WA. :D

 

Foster Kitten Cam actually. I foster for Purrfect Pals, which is located in Arlington just north of you. I'm just a stone's throw from the southern border of Mill Creek. I've fostered 59 sets since 2008, mostly kittens or queens and their litters, and a few adults. Waiting for the call for #60.

 

Currently, my foster kitten cam is showing highlight clips of my fosters over the years; you can watch for a week and not see any duplicates. I'm known as "Foster Dad John" over there, or FDJ for short. https://gaming.youtube.com/watch?v=gnsZwg09u6A

Link to comment
6 hours ago, jbartlett said:

 

Foster Kitten Cam actually. I foster for Purrfect Pals, which is located in Arlington just north of you. I'm just a stone's throw from the southern border of Mill Creek. I've fostered 59 sets since 2008, mostly kittens or queens and their litters, and a few adults. Waiting for the call for #60.

 

Currently, my foster kitten cam is showing highlight clips of my fosters over the years; you can watch for a week and not see any duplicates. I'm known as "Foster Dad John" over there, or FDJ for short. https://gaming.youtube.com/watch?v=gnsZwg09u6A

 

'Tis funny you say that. We brought home a kitten and didn't realize she was pregnant. Magically, we had 6 cats 2 years ago this very day. I call them the Brat Pack. Luckily we have some land and they're indoor/outdoor, or I'd go insane.

Link to comment

Tried to install this docker with default settings on the docker page but see this error:

 

Error: failed to register layer: ApplyLayer fork/exec /proc/self/exe: cannot allocate memory stdout: stderr:

 

Should I change any setting or what else can be wrong?

 

This server is running unraid 6.5.1-RC5, probably a compatibility issue.

 

 

I installed this docker on a server running 6.5.0 without any issue.

 

 

 

 

 

Edited by dikkiedirk
Link to comment
2 hours ago, dikkiedirk said:

This server is running unraid 6.5.1-RC5, probably a compatibility issue

Installs fine for me on 6.5.1-rc5

 

2 hours ago, dikkiedirk said:

what else can be wrong?

The error implies out of memory.

 

@jbartlett. Since this is a Java app, would it make sense to add environment variables to limit the Java memory, as some other Java containers have done?

Link to comment
7 hours ago, Squid said:

@jbartlett. Since this is a Java app, would it make sense to add environment variables to limit the Java memory, as some other Java containers have done?

 

That's on my To Do list: optimize memory utilization. The default settings use a max of 512MB of RAM, which is stated in the initial post. I'm still adding features, and it's too soon to optimize things for a beta, but I'll check how much memory it's using now and shrink it a bit if there's a lot of headroom.
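For what it's worth, the usual pattern other Java containers use is an environment variable that gets expanded into the JVM launch line. A hypothetical sketch only; the variable name `JAVA_OPTS`, the port mapping, and whether this image honors any such variable are assumptions, not the container's actual template:

```shell
# Hypothetical: cap the JVM heap from outside the container. This only works
# if the container's start script actually honors a variable like JAVA_OPTS.
docker run -d --name DiskSpeed \
  -e JAVA_OPTS="-Xms64m -Xmx256m" \
  -p 18888:8888 \
  jbartlett777/diskspeed
```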

Link to comment
6 hours ago, dikkiedirk said:

Can the docker image of 20 GB be too small? Or is it physical memory on the server? It only has 4 GB. How can I troubleshoot this?

 

The Docker settings page will display how much space is being utilized by your containers, and you can increase the image size from there. The container currently needs 727 MB of space to install.

Link to comment

Hi John, removing the Sabnzb docker fixed the install issue for me. I am now at 60% memory utilization. One bigger issue I have is that while the Areca ARC-1200 is detected, the ports and disks connected to it are not seen, and the RAID1 volume is not benched because of that. Previous CLI versions would bench these volumes, though.

Edited by dikkiedirk
Link to comment
35 minutes ago, dikkiedirk said:

Hi John, removing the Sabnzb docker fixed the install issue for me. I am now at 60% memory utilization. One bigger issue I have is that while the Areca ARC-1200 is detected, the ports and disks connected to it are not seen, and the RAID1 volume is not benched because of that. Previous CLI versions would bench these volumes, though.

 

I've added code for the next beta that'll create a debug archive with extra information to help me add support for undetected controllers. Once I finish adding code that detects drives whose throughput is constrained by bandwidth limitations (such as a fast drive on a slow port) and disables uploading benchmarks for them, I'll be ready to push the next beta.

 

The beta version that's out only scans under /sys/devices/pci0000:00, so some controllers are being missed entirely, notably on systems with IOMMU enabled. The next beta will scan /sys/devices/pci*.
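The difference between the two scan roots is easy to see from the shell. A minimal sketch (the function name is mine, not the app's):

```shell
# List every PCI domain under a sysfs devices root. A scan pinned to
# pci0000:00 misses the extra domains (pci0000:40, pci0000:80, ...) that
# appear on some IOMMU-enabled systems; the pci* glob catches them all.
list_pci_domains() {
  # $1 = sysfs devices root, e.g. /sys/devices
  for d in "$1"/pci*; do
    if [ -d "$d" ]; then basename "$d"; fi
  done
}
```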

Link to comment

Nice utility. It worked fine on two servers and found all the drives. On the third I have an external box of four drives which uses a single eSATA connection and a port multiplier. It only finds one of these four drives (the one on port ata7.03, Disk 6) but your old script finds them all (the others are Disks 3, 4 and 5 on ports ata7.00, ata7.01 and ata7.02). Information about my hardware here.

 

Diagnostics: northolt-diagnostics-20180416-1301.zip 

 

I'm happy to provide any other information you might need.

 

[Image attachment: benchmark chart]

Link to comment
5 hours ago, John_M said:

Nice utility. It worked fine on two servers and found all the drives. On the third I have an external box of four drives which uses a single eSATA connection and a port multiplier. It only finds one of these four drives (the one on port ata7.03, Disk 6) but your old script finds them all (the others are Disks 3, 4 and 5 on ports ata7.00, ata7.01 and ata7.02). Information about my hardware here.

 

The old script loops over /sys/block to get all of the drives. The next version will have an "Unknown Controller" section where drives that weren't detected through a controller will be listed so they can be benchmarked - and it'll include new support for the debug file to help identify controllers.
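For reference, the /sys/block enumeration the old script relied on can be sketched like this (the function name is mine):

```shell
# Enumerate block devices straight from /sys/block: every device node shows
# up here regardless of which controller (or port multiplier) it hangs off.
list_block_devices() {
  # $1 = sysfs block root (normally /sys/block)
  for dev in "$1"/*; do
    if [ -e "$dev/size" ]; then              # skip non-device entries
      sectors=$(cat "$dev/size")             # size in 512-byte sectors
      echo "$(basename "$dev") $((sectors * 512 / 1000000000)) GB"
    fi
  done
}
```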

  • Like 1
Link to comment
2 hours ago, maxistviews said:

All of you have such smooth curves for all your drives. I recently shucked a WD MyBook 8TB helium drive and I am getting very wobbly lines...

I take it I need to turn off dockers while testing?

 

Was there any "SpeedGap" detected? That's the alert raised when the app detects disk activity while reading the drive: over a 15-second read, a gap of more than 45 MB/sec between the slowest and fastest speeds. If in doubt, shut down any VMs and Dockers that may be accessing the drives and retest.

 

You've got two widely different tests, which hints that something else was hitting the drive hard while you were testing it.

Link to comment
5 minutes ago, jbartlett said:

Was there any "SpeedGap" detected?

I wouldn't know how to check, but I didn't get any pop-ups or anything akin to that. Where can I find whether SpeedGap was detected?

 

7 minutes ago, jbartlett said:

You've got two widely different tests so that hints that something else was hitting the drive hard while you were testing it.

The Blue line was a test done at the default %, I believe, so perhaps that skewed the results?

 

Turned off all my Dockers (don't have any VMs) and ran the test again, this time at 10%, and turned on the SpeedGap setting just to see what would happen.

 

[Image attachment: benchmark result]

 

Still a bit wobbly, but much less so than my original blue line. I will do another 1% test soon.

 

By the way, since this program/script relies on me testing the drives every so often, what can I use to schedule a test every week or every month? I'm quite new to unRAID, but I'm sure there's a way to trigger this. Thanks!
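One common route on unRAID is the User Scripts plugin, which accepts standard five-field cron syntax for its custom schedules. A sketch of the schedule lines only; the wrapper script path is hypothetical, and whether DiskSpeed can be triggered non-interactively at all is something to confirm first:

```shell
# Five-field cron syntax: minute hour day-of-month month day-of-week.
# run_diskspeed.sh is a hypothetical wrapper script, not part of the app.

# Weekly: 03:00 every Sunday
0 3 * * 0  /boot/config/plugins/user.scripts/run_diskspeed.sh

# Monthly: 03:00 on the 1st of each month
0 3 1 * *  /boot/config/plugins/user.scripts/run_diskspeed.sh
```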

Link to comment
3 hours ago, maxistviews said:

turned on the SpeedGap

 

Note that SpeedGap detection is on by default. It shows up during the test as "Speed Gap of 128MB (max allowed is 45MB), retrying (1)". The number at the end is how many times it has re-read the same block, and the max-allowed number grows by 5MB with each iteration. The option to disable the SpeedGap logic was added because I had one drive on my main server with reads so erratic that it took forever to test, re-reading each block dozens of times until the allowed gap finally grew large enough to continue.
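The retry rule described above (a 45 MB/sec allowance, growing by 5 MB/sec per re-read) can be sketched as a tiny shell function; `retries_needed` is my name for it, not the app's:

```shell
# How many re-reads of a block before a measured speed gap (MB/sec) falls
# within the growing allowance: start at 45 MB/sec, add 5 MB/sec per retry.
retries_needed() {
  gap=$1; allowed=45; n=0
  while [ "$gap" -gt "$allowed" ]; do
    n=$((n + 1))
    allowed=$((allowed + 5))
  done
  echo "$n"
}
```

A 128 MB/sec gap, as in the example message, would take 17 re-reads before the allowance (45 + 17×5 = 130) finally exceeds it, which matches the "took forever to test" behavior described.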

  • Like 1
Link to comment
6 hours ago, uldise said:

Hi,

any ideas how to test write speed in similar manner?

 

There's no way I can direct writes to a specific spot on a drive by creating a file, and nobody will trust a read/write test (read the block, write it back). Basically, I wouldn't touch that, code-wise or with a different application.

Link to comment

New beta delayed by kittens. Picked up new fosters last Tuesday when they were three days old (Foster Kitten Cam). They're time-consuming.

 

What I'm currently working on with the next beta:

  • I've added support for the debug file to upload a full directory tree of /sys/devices/pci*, both in text form (via the tree command) and as a copy of any file whose name contains vendor, model, size, portno, firmware, serial, rev, or phy-, with the directory structure preserved. This will help me identify and gather information on drive controllers the app currently doesn't recognize.
  • Added an "Unknown Controller" section that all orphan drives are attached to, enabling information display & benchmarking
  • Adding USB hubs as a controller type, with the drives attached to them
  • Adding the means to enter the drive Vendor when it is misidentified, incorrect (such as on rebranded drives), or not provided.
  • Adding support to detect if a drive's bandwidth is being exceeded and to disallow uploading those benchmarks to the HDDB.
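Assuming GNU find, cp, and tree are available in the container, the debug-gathering step described in the first bullet might look roughly like this (`collect_debug` is a name I made up, not the app's actual code):

```shell
# Capture a tree listing of every PCI domain plus copies of the small sysfs
# attribute files (vendor, model, size, ...), keeping their directory layout.
collect_debug() {
  # $1 = sysfs devices root, $2 = output directory
  mkdir -p "$2"
  tree "$1"/pci* > "$2/pci-tree.txt" 2>/dev/null
  find "$1"/pci* -type f \( -name '*vendor*' -o -name '*model*' \
      -o -name '*size*' -o -name '*portno*' -o -name '*firmware*' \
      -o -name '*serial*' -o -name '*rev*' -o -name '*phy-*' \) \
    -exec cp --parents {} "$2" \; 2>/dev/null
}
```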
Link to comment
