unraid-tunables-tester.sh - A New Utility to Optimize unRAID md_* Tunables



To expand on StevenD's answer:

  1. Change into your UTT directory:  cd /boot/utt
  2. *** Download screen:  wget http://mirrors.slackware.com/slackware/slackware-current/slackware/ap/screen-4.6.2-i586-2.txz
  3. Install screen:  upgradepkg --install-new screen-4.6.2-i586-2.txz

  4. Run screen:  screen

*** NOTE:  You should only have to download screen once.  You can do this from your Windows PC (saving it to your \\<servername>\flash\utt directory) or via the wget command line above.  After each reboot, screen is no longer installed, since Unraid boots from a static image, so you would need to repeat steps 1, 3 & 4, but you can skip step 2 since the package is already on your flash drive.

Edited by Pauven
Corrected the download URL
4 minutes ago, Pauven said:

To expand on StevenD's answer:

  1. Change into your UTT directory:  cd /boot/utt
  2. *** Download screen:  wget https://packages.slackware.com/?r=slackware-current&p=screen-4.6.2-i586-2.txz
  3. Install screen:  upgradepkg --install-new screen-4.6.2-i586-2.txz

  4. Run screen:  screen


You could make up a BASH shell script to do steps 1, 3, and 4.  If you wrote it with full path names, you could even skip step 1.  And I would bet that you could also add another line to start utt... something like the sketch below.
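
Here's a minimal sketch of what that could look like (untested; it assumes the screen package was already downloaded to /boot/utt per step 2, and that the UTT script there is named unraid-tunables-tester.sh, so adjust the file names to match your setup):

    #!/bin/bash
    # Reinstall screen after a reboot, then launch UTT inside a screen session.
    PKG=/boot/utt/screen-4.6.2-i586-2.txz
    UTT=/boot/utt/unraid-tunables-tester.sh

    # Step 3: install screen (needed after every reboot, since Unraid runs from RAM)
    upgradepkg --install-new "$PKG"

    # Steps 1 & 4: work from /boot/utt and run UTT inside screen
    cd /boot/utt || exit 1
    screen bash "$UTT"

If the screen session ends up detached, screen -r will reattach you to it.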


If you're like me and can't run in Safe/Maint Mode, disabling network shares (globally or per share) will prevent people & apps from accessing the box from other machines but still let KVM & Docker run. In my case, I run my home web server through Docker and my app/email server through a VM, but neither hits the array.

5 hours ago, Pauven said:

[quoted: the screen download/install steps above]

This doesn't work-

root@Brunnhilde:/boot/scripts/utt# wget https://packages.slackware.com/?r=slackware-current&p=screen-4.6.2-i586-2.txz
[1] 3581
root@Brunnhilde:/boot/scripts/utt# 
Redirecting output to ‘wget-log’.
upgradepkg --install-new screen-4.6.2-i586-2.txz
Cannot install screen-4.6.2-i586-2.txz: file not found
[1]+  Exit 3                  wget https://packages.slackware.com/?r=slackware-current
root@Brunnhilde:/boot/scripts/utt#

Looks like the txz file doesn't get downloaded so the upgradepkg fails.

2 minutes ago, wgstarks said:

This doesn't work-



Looks like the txz file doesn't get downloaded so the upgradepkg fails.

Sorry, I should have tried the link that StevenD provided.  I didn't realize it wasn't a direct link to the file, but rather a web page from which you can start a download.

 

This URL should work:  http://mirrors.slackware.com/slackware/slackware-current/slackware/ap/screen-4.6.2-i586-2.txz

 

I'll update my post above too.
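
For anyone copying and pasting, the whole sequence with the corrected direct link is simply:

    cd /boot/utt
    wget http://mirrors.slackware.com/slackware/slackware-current/slackware/ap/screen-4.6.2-i586-2.txz
    upgradepkg --install-new screen-4.6.2-i586-2.txz
    screen

As a side note, the old packages.slackware.com URL contained an &, so an unquoted wget of it gets split by the shell and pushed into the background (that's the [1] 3581 in the output above); if you ever do need to wget a URL like that, wrap it in quotes.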


Ran the Long Test overnight with Docker disabled.

I have set my server to use the Fastest setting based on results. I was hoping for better throughput numbers. Do I need to get rid of all my shingled drives and run all 7200 RPM to get the scores I am seeing from others? Or is 89.2 MB/s the best I can expect?

LongSyncTestReport_2019_08_14_2118.txt

6 hours ago, interwebtech said:

Ran the Long Test overnight with Docker disabled.

I have set my server to use the Fastest setting based on results. I was hoping for better throughput numbers. Do I need to get rid of all my shingled drives and run all 7200 RPM to get the scores I am seeing from others? Or is 89.2 MB/s the best I can expect? 

 

Try the controller bandwidth test in my DiskSpeed plugin (link in my sig).  It can help you determine whether you've saturated your controller's capabilities.

20 minutes ago, wgstarks said:

The “block” option also doesn’t seem to work. I got one parity check started notification about 30 minutes after starting this script and then a second one maybe 10 hours later. Perhaps I’m misunderstanding how this option works?

UTT v4.x also sends out a test begin and a test end notification, instead of the hundreds of notifications you would get without the block.  Any chance you're confusing the UTT notifications with the Unraid Parity Check notifications?

 

 

9 hours ago, interwebtech said:

Ran the Long Test overnight with Docker disabled.

I have set my server to use the Fastest setting based on results. I was hoping for better throughput numbers. Do I need to get rid of all my shingled drives and run all 7200 RPM to get the scores I am seeing from others? Or is 89.2 MB/s the best I can expect?

That sounds artificially low.

 

7 hours ago, BRiT said:

All my drives are shingled (8TB Seagates) yet I'm in the 182 MB/s range. Your bottleneck is something else.

Agreed.  Looking at your test results, I see a couple things. 

 

First, you have a mixture of drives:  8TB, 6TB and 4TB.  This has an impact on max speeds.  How?  Imagine a foot race with the world's fastest man, Olympic champion Usain Bolt, your local high school's 40m track champion, a 5-year-old boy, and a surprisingly agile 92-year-old grandmother.  I know you're thinking Usain will win, but wait... all four runners are on the same team, they are roped together, and the race requirement is that no one gets yanked down to the ground - everyone has to finish standing up.  Now it seems a bit more obvious that no matter how fast Usain is, he and his teammates basically have to walk alongside the 92-year-old grandmother, who sets the pace for the race.  This is how parity checks work on Unraid: the slowest drive at any point sets the pace for every other drive.

 

In my server, my 3TB 5400 RPM drives are the slowest, so they set the pace at 140 MB/s, even though my 8TB 7200 RPM drives can easily exceed 200 MB/s on their own.  I'm not sure which drives are slowest in your system; your 4TB drives look like 7200 RPM units, so it might be the 6TB drives.  But even though your drive mixture is slowing you down some, even your slowest drive should be good for 150+ MB/s, so something else is slowing your server down.

 

To determine what that bottleneck is, math is your friend.  I see that you have 16 drives connected to your SAS2116 PCI-Express Fusion-MPT SAS-2 controller.  To understand what kind of bandwidth that controller is seeing, simply multiply the max speed by the number of drives:

  • 16 drives * 89.2 MB/s = 1427 MB/s

 

But that is just the drive data throughput.  SATA links use 8b/10b encoding, which carries a 20% throughput penalty, so your realized bandwidth is only 80% of what the controller sees.  So we need to add that overhead back into the number:

  • 1427 MB/s / 0.80 = 1784 MB/s

 

We also need to factor in the PCI-Express overhead.  While the 8b/10b protocol overhead in PCIe v1 and v2 is already factored into those speeds, there are additional overheads like TLP that further reduce the published speeds.  You might only get at most 92% of published PCI-e bandwidth numbers, possibly less:

  • 1784 MB/s / 0.92 = 1939 MB/s being handled by your PCI-Express slot.

 

1939 MB/s is a very interesting number, as it is very close to 2000 MB/s, which is the bandwidth of PCIe v1.0 x8 and of PCIe v2.0 x4.
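
If you want to redo that arithmetic for your own drive count and per-drive speed, the same rough estimate (0.80 for SATA 8b/10b encoding, 0.92 for PCIe overhead) fits in a one-liner:

    # estimated PCIe slot bandwidth = drives * per-drive speed / 0.80 / 0.92
    awk 'BEGIN { drives=16; speed=89.2; printf "%.0f MB/s\n", drives*speed/0.80/0.92 }'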

 

So, long classroom lecture short, most likely what is happening is that your SAS controller is connecting to your system at PCIe 1.0 x8 or PCIe 2.0 x4.  I'm not certain what controller you have, but based upon the driver I think the card has a PCIe 2.0 x8 max connection speed, which should be good for double what you are getting (perhaps around 182 MB/s for 16 drives). 

 

So you have probably plugged the controller into the wrong slot.  On many motherboards, some of the x16 slots are only wired for x4, so while your PCIe 2.0 x8 card fits in an x16 slot, it runs at half speed, PCIe 2.0 x4.  Alternatively, you might have a really old system that only supports PCIe 1.0, which again would cut your speed in half.  Your signature doesn't specify your exact hardware, so I can't tell which it is.
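
One way to check, rather than guessing, is to compare the card's advertised link (LnkCap) against what it actually negotiated (LnkSta) with lspci.  The 01:00.0 below is just a placeholder; use whatever address the first command reports for your SAS controller:

    lspci | grep -i sas                                # find the controller's PCI address
    lspci -vv -s 01:00.0 | grep -E 'LnkCap|LnkSta'     # look for e.g. "Speed 5GT/s, Width x4"

If LnkSta shows a narrower width or lower speed than LnkCap, the card is sitting in (or has negotiated down to) a slower slot.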

 

One last tip:  If you are doing Windows VMs with passthrough graphics, and you are putting your graphics card in the fastest PCIe slot hoping for max speed - that probably isn't needed.  I did some testing a couple of years back, putting the video card in PCIe 3.0 x16 and PCIe 3.0 x4 slots, and in 3DMark the scores were nearly the same.  I know all the hardware review websites like to make a big deal about PCIe bandwidth and video cards, but the reality is that for gaming it really doesn't make much of a difference.  On the other hand, 16 fast hard drives can easily saturate a PCIe 2.0 x8 connection, so it is very important to put your HD controller in the fastest available slot.

 

</class>

 

Paul

44 minutes ago, wgstarks said:

This is what I am getting-

 


unRAID Server: Notice [BRUNNHILDE] - Parity check started

 

Event: Unraid Parity check
Subject: Notice [BRUNNHILDE] - Parity check started
Description: Size: 8 TB
Importance: warning

 

Hmmm.  Well, I guess the good news is that you are only getting a couple of notifications instead of hundreds, so it seems like it is mostly working.  I'm not sure how a couple are slipping through.  The one at the end is actually not that surprising: there can be a delay before Unraid sends out each parity check started/finished notification, and the UTT script may have already removed the block at the end of its run before that last notification came through, so the very last parity check finished notification should almost be expected to slip through.

 

But the one at the beginning has me stumped, since the block is put into place before any parity checks are started.

 

I see you are on Unraid 6.7.x - perhaps something has changed related to notifications since Unraid 6.6.x.  I did all my development on Unraid 6.6.6, and I refuse to use 6.7.x until the numerous SMB and SQLite issues have been resolved.

1 hour ago, Pauven said:

Hmmm.  Well, I guess the good news is that you are only getting a couple notifications instead of hundreds, so it seems like it is mostly working. 

Yeah. Not really a big deal. The issues with screen are possibly more important. I just gave up on it and ran the script from console, but I’m sure there are quite a few users running headless systems that would benefit from it, if it can be made to work in safe mode. Or maybe this problem was unique to my system. Has anyone else tried to actually install and run screen in safe mode?

