v6 on Atom D525


jb, can you give some further detail on:
nr_requests: 128 (default works well for most controllers, use 8 if the server has either a SASLP or SAS2LP)

I've recently installed an LSI SAS9211-8i card to drive my SATA disks.  I'm not sure whether your SASLP/SAS2LP comment relates to my config or not.

For an LSI use the default 128.
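
For reference, in unRAID this is normally set on the Disk Settings page (if I remember right), but it maps to the standard Linux block-queue knob, so a quick manual check from the console looks roughly like the lines below; sdb is just a placeholder for one of your array disks:

cat /sys/block/sdb/queue/nr_requests        # show the current queue depth
echo 8 > /sys/block/sdb/queue/nr_requests   # temporary test value; reverts on reboot

The sysfs write is only good for quick experiments; the Disk Settings value is what persists.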
Link to comment

Disk tunables are a must.  I'm looking forward to the updated script going public!  This shaves a couple of hours off my parity check.

 

The biggest tweak for SMB writes from a Windows machine (and Linux, depending on what version of Samba is being used) actually requires some tweaks on the Windows machine itself! 

 

I was bitten by the Marvell bug after one of the 6.x updates, and swapped my SASLP card for an LSI. 

 

If memory serves, my dual parity drives and my cache drive are on the motherboard headers.

 

I'm running Cache Dirs, File Integrity, Recycle Bin, UD, and User Scripts.  That's near the limit of what this box can handle. 

 

I actually no longer have any shares exported via SMB.  Instead I have my primary tower's shares mounted via Unassigned Devices, and have an rsync job in User Scripts that does not run on parity check day! 
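
For anyone wanting to copy the idea, a minimal pull-style job in User Scripts can look something like the sketch below; the share names and the UD mount point are made-up examples, so adjust them to your own layout:

#!/bin/bash
# Pull files from the MAIN tower's share (mounted via Unassigned Devices)
# onto a local array share on this backup NAS. Paths are examples only.
SRC="/mnt/disks/MAINTOWER_media/"   # example UD mount point of the main server's share
DST="/mnt/user/backup/media/"       # example local share on this box

rsync -av --delete "$SRC" "$DST"

The --delete flag keeps the backup an exact mirror of the source; drop it if you'd rather keep files that were removed from the main server.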

 

Mover is set to once/day. 

 

As a backup NAS, this is still a great device. 

Edited by landS
Link to comment
On 1/8/2018 at 6:01 AM, landS said:

As a backup NAS, this is still a great device.

 

Agree.   Just upgraded it to v6.4 and while the transfer speeds and parity check times are still the same (actually the parity check was 6 minutes faster than the last one I ran on 6.3.5), it's still MUCH slower than v5 ... but plenty "good enough".    And the GUI responsiveness in 6.4 is AMAZING.    That alone makes it worth running despite the slower speeds.   I DO wish whatever is causing this performance hit would be resolved, but I doubt that will ever happen  (it's not even close to pegging the CPU).

 

Nevertheless, it's a great little NAS.

Link to comment

Even transfer speeds to an SSD dropped greatly in comparison to v5 writes to an HDD.  I posted some steps that can be taken on a Windows machine which allow near-v5 write speeds (similar steps exist on Linux machines), but that's a PITA and wasn't required on v5. 

 

Once I moved this to being a pure backup NAS for my MAIN Unraid storage, I killed off sharing the shares, and now use UD to mount the MAIN shares and User Scripts to schedule rsync *backups* of the MAIN data.   Done this way the speed doesn't matter much, but pulling the files is significantly faster than pushing them... near v5 write times!

 

I was fortunate to run the old tunables script, which helped shave some time off the parity check.  All my disks are HGST 4 TB units. 

 

Typically my HGST disks hit 48°C during a dual-drive parity check, but they are all hovering at 50-52 for this 18-hour check now that I'm on 6.4... So once it's done I'll need to pull the rig and check the fans to see if it's hardware related or if 6.4 pushes the disks harder.  My house is certainly cooler this time of year and I've never seen a disk hit 50.  Everything should be dust free. 
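
A quick way to keep an eye on the temps from the console while a check runs is something along these lines (sdb/sdc/sdd are just example device names for the array disks):

for d in sdb sdc sdd; do
    echo -n "$d: "
    smartctl -A /dev/$d | grep -m1 -i temperature
done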

Link to comment
2 hours ago, landS said:

Typically my HGST disks hit 48°C during a dual-drive parity check, but they are all hovering at 50-52 for this 18-hour check now that I'm on 6.4... So once it's done I'll need to pull the rig and check the fans to see if it's hardware related or if 6.4 pushes the disks harder.  My house is certainly cooler this time of year and I've never seen a disk hit 50.  Everything should be dust free.

 

The probability is quite low that the unRAID version will affect the disk temperature during the parity scan - unless you combine the scan with other accesses so the drives have to perform a large number of seeks. As long as there aren't any errors that need to be corrected, unRAID should be able to just sequentially read out all the data without adding any stress at all to the drives.

Link to comment
10 hours ago, bonienl said:

 

Have you tried fiddling with the disk tunables?

I followed the recommendation of johnnie.black.

[screenshot of johnnie.black's suggested tunable values]

 

Yes, I've "fiddled" a good bit with the tunables.   Tried the values that worked great with v5;  tried the values landS uses;  tried the numbers in your picture (suggested as good for most setups by johnnie.black);  tried the results of Pauven's "tunables tester";  etc.    Nothing seems to make any significant difference.   Frustrating, since with v5 it was MUCH faster.   It's more a frustration than a real problem, however, as this is just a backup NAS and is still plenty fast enough to stream a video from it (although I don't use it for that).   But on v5 transfers were ~100 MB/s, while v6 gets around 70;  on v5 a parity check took just over 8 hours (3TB WD Reds);  on v6 they take 15.   NO other changes.   And it's not pegging the CPU, which is what I initially thought was the problem (in fact the first versions of v6 did peg it, but that improved markedly somewhere around 6.2 or 6.3).
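
For anyone who wants to A/B those values from the console instead of the GUI, they can be applied on the fly with mdcmd, which (as far as I know) is what the tunables tester does under the hood; the numbers below are just johnnie.black's suggestions from earlier in the thread:

mdcmd set md_num_stripes 4096
mdcmd set md_sync_window 2048
mdcmd set md_sync_thresh 2000

They should take effect without a reboot; to keep them permanently, set the same values on the Disk Settings page.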

 

I fiddled with this a LOT about a year ago; then just decided to stay with v5 on this server; but a couple weeks ago I decided to just move it back to 6 anyway just so all my servers were on the same version.   With the release of 6.4 I'm glad I did, as despite the disk performance reduction the BIG improvement in GUI responsiveness is really nice ... that alone is worth the lower transfer speeds and longer parity checks.    [The longer parity checks are less of a nuisance than the slower xfers]

 

I still like my trusty old Atom as a backup NAS ... it's rock-solid reliable and idles at under 20w :)

Link to comment
8 hours ago, landS said:

Typically my HGST disks hit 48°C during a dual-drive parity check, but they are all hovering at 50-52 for this 18-hour check now that I'm on 6.4... So once it's done I'll need to pull the rig and check the fans to see if it's hardware related or if 6.4 pushes the disks harder.  My house is certainly cooler this time of year and I've never seen a disk hit 50.  Everything should be dust free. 

 

The 40's are fine for 7200rpm NAS drives during a high-stress activity like a parity check; but I'd definitely check what's going on with temps in the 50's.   You may have had a fan failure.   The Q25B (as you know) has excellent airflow, so it seems likely that either a fan is running too slow or it has simply failed.

 

One of my other systems has 8TB 7200rpm HGST NAS units, and they also get into the 40's (44-45) on parity checks.   They're in Icy Dock 5-in-3 cages, which have 80mm fans at the rear "pulling" air through the cage.    I'm inclined to change to the Icy Dock Vortex units (I have those on another system), which have 120mm fans in front that push the air more uniformly across the drives, and do a notably better job of cooling.    Clearly these aren't options with the Q25B, however -- and should absolutely not be needed.    My Q25B temps never get above the mid-30's on parity checks with WD Reds (which are slower and cooler than the HGSTs) => I'd expect 5-8 degrees higher for the 7200rpm HGSTs, but certainly not into the 50's.

 

50-52 is still well below the rated 60 degree max, but it's not a range you want these to operate at routinely.   If the fans are working, I'd replace them with higher airflow units (easy to do).

 

Link to comment

These are actually still in my Fractal Define, with 2 120mm intake fans blowing directly over the disks (which checked out fine) and 1 top/rear exhaust fan which has failed.   For now the machine is powered down, and I've ordered 2 140mm top-mount exhaust fans to replace the 1 failed rear 120mm.  

 

Another oddity that I'll need to track down, remembering that I have no exported shares, is that the disks refused to spin down on 6.4.  I could manually trigger it and within 30 seconds they'd all be back up again. 

 

I'd guess it's a plugin, but this machine does not have too many of those enabled. 
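
One way to narrow it down from the console is to see what still has files open on the array disks when everything should be idle; lsof works for this if it's installed (disk1/disk2 are examples, repeat for each disk):

lsof /mnt/disk1     # lists any process with files open on that disk
lsof /mnt/disk2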

 

*Oh how I love the GUI performance upgrade! 

Link to comment

@garycase

Sadly no.  I had a failed fan, but it must have failed a long time ago.  Replaced the fan, and under load (parity, verify) the drives still creep up to 56 (more slowly).  They dip to 38 at idle.  The controller is a flashed Dell H310.  I have 2 120mm Noctua intakes and 2 140mm Noctua exhaust fans.   For now I'm considering turning off verify, and I've changed the spindown time from 30 min to 15.  Parity check is set to monthly.  Backup writes occur 1-2 times/week... so really the temp spike will be 19 hours, 1 day/week.   The disks spin down with no problem now (it was a plugin, however I've forgotten which). 

 

Oddly, my main server shows no drive temp difference; it sits on the shelf next to this unit... and uses one of the same HGST drives.  It uses the onboard SATA controller.

 

Both are dust free with clean air screens. 

Edited by landS
Link to comment

The most critical fans are the 2 120mm fans that blow directly across the disks.    With the temps you're seeing, I have to wonder if those fans are moving enough air.   You might want to consider using higher airflow fans than what you have now.   Just remember that the more air it pushes, the louder a fan is likely to be.

 

But airflow absolutely makes a difference.    I just moved my 8TB HGSTs from 5-in-3 cages to 4-in-3 Icy Dock Vortexes, and now even during a parity check they never hit 40 ... they idle around 30; hit 32-33 under heavy load; and hit 36-38 on parity checks.    Not at all bad for 8TB 7200rpm drives :D

Link to comment

I am so thankful – my teething tot fell asleep by 8.40 tonight!

 

The D525 is housed in a Fractal Design R3, while the main server is in an R4.

 

The R3 was purchased in 2011 and I believe the intake fans – Noctua NF-F12 PWM – have been in service ever since.

 

I pulled the case down and confirmed that the cords are not using any of Noctua’s bundled Low Noise Adapters.

 

Luckily the X7SPA-HF has IPMI… where I could see that the system fan is running at 770 rpm, or half the expected 1500 rpm… but the IPMI fan control was not accessible... too bad, since it is on the latest IPMI firmware.

 

First I tried to use the Dynamix System AutoFan plugin (which appears to need Dynamix System Temp… which needs Perl… which needs NerdPack). After all of this, the System AutoFan plugin sadly does NOT recognize this board's PWM controller :(
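
When the plugin can't see the controller, the raw hwmon interface is sometimes still usable by hand, assuming the right Super I/O module actually loads for the board; the hwmon number and pwm name vary per system, so treat this as a sketch only:

cat /sys/class/hwmon/hwmon*/name                 # find which hwmon entry is the Super I/O chip
echo 1   > /sys/class/hwmon/hwmon1/pwm1_enable   # 1 = manual PWM control
echo 150 > /sys/class/hwmon/hwmon1/pwm1          # duty cycle, 0-255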

 

So it was time to peek at these very forums for fan speed control... the methods there do not appear to work under 6.4. 

https://lime-technology.com/forums/topic/10668-x7spa-hf-based-small-perfect-server-build/

 

So for now I'll spring for a Noctua PWM controller:

https://www.amazon.com/Noctua-NA-FC1-4-pin-PWM-Controller/dp/B072M2HKSN

 

If this still does not solve the issue then it'll be time for fresh fans.... I see some Noctua 3000 rpm pwm units. 

 

Odd that this did not crop up until the 6.3.5 to 6.4 update... 

 

Edit: console Java redirection was not working... but I remembered a post from 2013 about a standalone KVM app that Supermicro provides, so I was going to use it to check whether the BIOS setting for the fan is set to default...  https://lime-technology.com/forums/topic/26551-supermicro-ipmi-view-and-ikvm-setup-blank-kvm-terminal-solution/

And... oh nuts, this too relies upon the latest Java runtime for KVM/console redirection.   Back to using the PWM controller.

 

Edited by landS
Link to comment
10 hours ago, landS said:

I see some Noctua 3000 rpm pwm units. 

 

I suspect the air moved by these compared to what you have now will be like the difference between night and day :D ... or at least between hot and cool drive temps!!

 

It's very interesting that you didn't have this issue until 6.4 => I have to wonder if something is actually forcing this lower speed through PWM control.   I'd try not connecting the PWM wire -- the fan should then run at full speed (1500 rpm), which may be all you need to do.    Clearly a higher-airflow unit would be even better ... but if it's been working okay until now, then just getting the fan back to full speed may be all you need.   Note that even at full speed this is only a 54.9 CFM fan.

 

The 3000 rpm unit would move twice as much air ... but is likely a pretty loud fan.   Their 2000 rpm unit bumps airflow to 72 CFM and is a lot quieter than the 3000 rpm fan.   I'd certainly think this would be plenty of air movement.   https://www.newegg.com/Product/Product.aspx?Item=N82E16835608051

 

Link to comment

Nice find on the 2000 rpm fan Garycase! 

 

The PWM controller is on its way.  I'm going to give that a shot first, but will buy 2 of those fans for the significant pressure improvement if the temps don't go back to normal levels. 

 

The temp spike happened immediately after the 6.3.5 to 6.4 update, so I am also curious about the impact on PWM speed controls with this board.  I checked this morning and both fans reported 350ish rpm with all the disks spun down. 

Link to comment

Have you tried just connecting the voltage connections to the fans without the PWM connection?   (You also don't need the tachometer connection, although without it you won't be able to see the RPM values).     The easiest way to do this without cutting any wires or pulling plugs out of a connector is to just use a Molex -> fan adapter ... you may have a few in your "junk box" if you've kept them (they often come with fans)  -- or you can buy one very inexpensively [ https://www.newegg.com/Product/Product.aspx?Item=N82E16812423171&cm_re=molex_to_3_pin_fan_adapter-_-12-423-171-_-Product ]

 

It definitely sounds like something in 6.4 is causing the PWM control to dramatically lower your fan speeds -- although if you were already seeing mid-40s temps before that, I'd still move up to a 2000 rpm fan with the higher airflow.

 

Link to comment

Sounds like your problem is resolved -- with even better cooling in the near future with the 2000 rpm fans.

 

It DOES, however, beg the question as to WHY v6.4 is causing the lower speeds for the PWM fans.    I'll have to pop open my Q25B and see if the fans in it are PWM or not (don't recall offhand).    I suspect they're not, or I'd have probably seen the same issue when I moved it to 6.4, since it's the same motherboard.   [Or I may have connected them via Molex power]

 

 

 

Link to comment

The fans in my Q25B aren't PWM fans, so that explains why I haven't had this issue with 6.4.

 

It does seem that anyone with that SuperMicro board and 6.4 would likely see the same thing you did, however.

 

Begs the question re: whether or not other motherboards might have the same issue.

 

 

Link to comment
On 1/8/2018 at 2:36 AM, dalben said:

I've recently installed an LSI SAS9211-8i card to drive my SATA disks.  I'm not sure whether your SASLP/SAS2LP comment relates to my config or not.

 

Johnnie will answer when he sees this, but I'm fairly certain it does not apply to an LSI card.      I don't recall the specific controller, but this issue impacted systems using certain controller chips (I have one system that was impacted).    It's VERY easy to confirm whether or not you need the change:    with nr_requests at the default 128, start a parity check, wait a couple minutes and see what speed it's running at; then stop it.   Now change the nr_requests parameter to 8 and repeat that process.   If the speed jumps a LOT then your controller needs it;  if not, just change it back to 128 and leave it alone.    For controllers where it helps, the difference is VERY (and immediately) noticeable.
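
If it's easier, the whole comparison can be scripted so every disk gets flipped at once and you just watch the parity check speed in the GUI; the sd* glob below also catches the flash drive, which is harmless for this test:

for q in /sys/block/sd*/queue/nr_requests; do
    echo "$q: $(cat $q)"    # note the current values first
done

for q in /sys/block/sd*/queue/nr_requests; do
    echo 8 > "$q"           # test value; write 128 back the same way to revert
done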

 

Link to comment
On 1/22/2018 at 11:01 PM, landS said:

First I tried to use the Dynamix System AutoFan plugin (which appears to need Dynamix System Temp… which needs Perl… which needs NerdPack). After all of this, the System AutoFan plugin sadly does NOT recognize this board's PWM controller :(

 

FWIW I have a Supermicro MBD-C2SEE-O and found I had to add the lines below to the go file (now moved to the User Scripts plugin) for the Dynamix fan plugin to work:

 

modprobe coretemp
modprobe w83627ehf
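
In context that ends up looking something like this in /boot/config/go (or an "at first array start" User Scripts job); w83627ehf is the sensor chip on that particular board, so substitute whatever chip your board uses:

# load the sensor drivers so the Dynamix temp/fan plugins can see them
modprobe coretemp       # CPU core temperature sensors
modprobe w83627ehf      # Super I/O (fan/PWM and temp) chip on this board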

 

I have 4 fans in a SuperMicro SuperChassis 933T-R760B 3U Rackmount case.

 

I'm just seeing this thread now in prep for upgrading to 6.4 

 

I'll be pretty bummed if I can't use PWM with 6.4 since running my fans full speed is super noisy.

Edited by dabl
Link to comment
On 1/22/2017 at 4:05 AM, johnnie.black said:

For parity check try these tunables, they are not universal but in my experience work well with most hardware:


nr_requests: 128 (default works well for most controllers, use 8 if the server has either a SASLP or SAS2LP)
md_num_stripes: 4096
md_sync_window: 2048
md_sync_thresh: 2000
For nr_requests, would you suggest 128 for an IBM ServeRAID M1015 PCIe controller, which I understand is based on the LSI SAS2008?

EDIT: oops, I see above that for LSI you say use 128.

 

 

 

Edited by dabl
Link to comment
