MicroServer N36L/N40L/N54L - 6 Drive Edition



Posting this for anyone else who wants all temps showing using Dynamix System Temperature on their N54L (also works for N40L). I found the majority of this information here https://github.com/fetzerch/hp-n54l-drivers.

 

Since kernel 4.5 (Unraid is on 4.19 at the time of writing), a patch has been applied to the kernel that allows all temperatures and the fan speed to be shown.

 

I have done this by adding the lines below to my go file:

# Fix to show all temps and fan speeds
cp /boot/config/sensors.conf /etc/sensors.d
modprobe -a i2c_piix4 jc42 w83795 k10temp
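If you want the copy step to be a little more defensive, here is a sketch. The `copy_sensors_conf` helper is a name I made up, and the paths are parameters so the logic can be tried anywhere; on the server they would be `/boot/config/sensors.conf` and `/etc/sensors.d`.

```shell
# Sketch of a more defensive copy step for the go file.
# copy_sensors_conf is a hypothetical helper, not part of Unraid.
copy_sensors_conf() {
    src="$1"; dst="$2"
    [ -f "$src" ] || return 1   # skip silently if no config on the flash drive
    mkdir -p "$dst"             # the target directory may not exist yet at boot
    cp "$src" "$dst/"
}
```

In the go file you would then call `copy_sensors_conf /boot/config/sensors.conf /etc/sensors.d` before the `modprobe` line.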

and inside /boot/config/sensors.conf I have the following:

bus "i2c-0" "SMBus PIIX4 adapter port 0 at 0b00"
bus "i2c-1" "SMBus PIIX4 adapter port 2 at 0b00"


# CPU: AMD Turion II Neo N36L,N40L,N54L
# Limits are hardwired and cannot be changed.
chip "k10temp-pci-00c3"

    label temp1 "CPU Core Temp"


# RAM Slot1
chip "jc42-i2c-0-18"

    label temp1 "RAM1 Temp"
    set temp1_max 60
    set temp1_crit 70
    set temp1_crit_hyst 65


# RAM Slot2
chip "jc42-i2c-0-19"

    label temp1 "RAM2 Temp"
    set temp1_max 60
    set temp1_crit 70
    set temp1_crit_hyst 65


# Hardware Monitor: Nuvoton W83795ADG
# +3.3V (in12) and 3VSB (in13) are hardwired.
chip "w83795adg-*"

    label fan1 "Array Fan"

    label temp1 "CPU Temp"
    label temp2 "NB Temp"
    label temp5 "MB Temp"

    label in0  "Vcore"
    label in1  "Vdimm"
    ignore in2 # unclear (VSEN2 in BIOS)
    ignore in3 # unclear (not shown in BIOS)

    # Unknown Vcore for embedded AMD Turion II Neo processor.
    # Measured values from 0.720 to 1.194 on N54L.
    set in0_min 0.5
    set in0_max 1.4

    # DDR3 limits from JEDEC Standard No. 79-3F: 1.5 V +- 0.075 V, 1.8 V crit.
    set in1_min 1.5 - 0.075
    set in1_max 1.5 + 0.075

    ignore intrusion0

A few notes: the MB Temp is actually the ambient temperature, and NB is the North Bridge.
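If you want to use these readings in your own scripts (fan alerts, logging, etc.), one way is to parse the `sensors` output. A small sketch, with a canned sample standing in for live output so the shape is visible; on the server you would pipe real `sensors` output in instead:

```shell
# Extract the CPU core temperature from sensors-style output.
# The sample string is illustrative only.
sample='k10temp-pci-00c3
Adapter: PCI adapter
CPU Core Temp:  +42.5 C  (high = +70.0 C)'

cpu_temp=$(printf '%s\n' "$sample" \
    | awk '/CPU Core Temp/ {sub(/^\+/, "", $4); print $4}')
echo "$cpu_temp"   # prints 42.5 for the sample above
```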

 

P.S. My N54L is still going strong, with six 3.5" HDDs and one 2.5" SSD!

Edited by GrantR

Thank you for this thread and the really good information and suggestions. I have been looking at getting a few of these: one as a backup Unraid server at home, and one to send to a relative in another part of Sweden for remote backup, instead of buying a Synology or QNAP.

The configuration I have found available for purchase:

CPU: AMD Athlon II Neo N36L 1.3 GHz dual-core
Memory: 5 GB
Would this hardware run the latest Unraid versions (6.9.x)? Can anyone confirm?
Thanks!


I'm running 6.8.3 on an N40L with no issues; the hardware is identical to the N36L except for CPU speed. Nothing reported makes me think it won't run fine on later versions, but I only upgrade to official releases on my backup server... as it has my most current backup.

 

N40L and N54L are common and always on eBay in my region, so worth seeking out.

 

I use a 2-port ASM1061 card for 2 of the disks to avoid putting all 6 through the onboard controller, which is already bandwidth limited.

 

 

1 hour ago, Decto said:

I'm running 6.8.3 on an N40L with no issues, hardware is identical to N36L except CPU speed. ...

Thank you, Decto.

I intend to use these only for backup purposes, so I am not worried about CPU speed. You make a great point about only running stable versions, and I will do just that in this case; it's also a good idea to divide the drives between the onboard controller and a SATA card.
Ordering one now. Looking forward to fiddling with it :D

 

 

  • 3 months later...
On 3/8/2011 at 4:45 PM, neilt0 said:

I ran an eSATA to SATA cable back in through a PCI slot, up in to the cavity behind the 5.25" bay:

 

fbMTU.png

Hi guys.

I am a new user on this forum. I have an HP ProLiant N54L Gen7.

 

Below is its description:

> 2 x 4 GB RAM.

> 1 x 250 GB original HP disk for the OS, connected to the internal SATA port on the motherboard.

> 2 x 4 TB disks for NAS (data recovery).

> 2 x 2 TB disks for NAS (data recovery).

> 1 x 2 TB desktop drive connected via USB.

 

I would like to connect a new 3.5" HDD to the external eSATA port, as in the picture (back side of the server). I would like to see if I can get good performance, or whether it is better to leave it. To do this I found this eSATA-to-SATA cable --> https://tinyl.io/3G9x and this power adapter cable --> https://tinyl.io/3GA0

I would also like to connect the disk with the OS to the external eSATA port.

What do you think?

 

Thanks for your replies and best regards.

 

Translated with www.DeepL.com/Translator (free version)

Edited by oradicena
1 hour ago, oradicena said:

... I would like to connect a new 3.5" hdd to the external eSATA port, like in picture (back side server). ... what do you think about?

That should work, but I don't really understand why you think performance would be better.

 

What OS are you running? Windows, Unraid, Linux?

 

I have a N40L with:

8GB RAM

6 x 3.5" HDDs (4 in the original drive sleds, 2 mounted in the 5.25" slot with a Nexus DoubleTwin)

1 x 2.5" SSD for cache in the space between the drive cage and the 5.25" slot (not screwed in)

1 x PCIe 3.0 card, 5-port SATA III (for connecting 2 of the HDDs and the SSD).

I also have an external HDD enclosure with eSATA so I can run extra HDDs with Unassigned Devices.
Everything works really well with good temperatures (I removed the 5.25" cover and put in some mesh).
 


Hi klm_sv.

Maybe I didn't write it correctly; I hope the performance won't be low!
My operating system is WHS2011, with which I feel very comfortable. I have also tried other operating systems (OMV most recently), but my experience on Windows is better.
On the server I run a series of backup tasks for the PCs I have at home (4 PCs in all), but I also use the server as a multimedia device with Plex. Recently it also shares my whole music library via Volumio OS. Everything works!

 

I am asking whether you think the eSATA cable is OK for my needs.

 

thanks.

 

ps_1. The BIOS of my server is the original one; can this be a problem?

ps_2. See the pictures of my server.

f63f0eb3-37d4-4733-b928-ea60ebedd975.jfif 599d61b7-2710-420d-91c1-0c0c3fc58a92.jfif b007e22f-e28c-4c71-97b7-2839f4fe1c02.jfif

Edited by oradicena
9 minutes ago, oradicena said:

... I ask you if you think the eSATA cable is ok for my work. ... ps_1. the bios of my server is original, can this be a problem? ...

Ok! I have never run WHS; I run mine with unRAID (this is an unRAID forum 😜). The eSATA cable and the Molex-to-SATA adapter will probably work just fine. Where will you put the extra 3.5" HDD? If you ever want to run more disks like me, I'd consider the PCIe card I linked above. I'm not sure if the eSATA port is 3 Gb/s or 6 Gb/s, but if you're only running standard HDDs it doesn't matter.

 

I flashed my BIOS to be able to run HDDs with a capacity higher than 3 TB and to use more than 5 SATA ports. I can't remember if this affects the eSATA port, but there is tons of information out there; you just have to read up on it. Here is a good start: https://n40l.fandom.com/wiki/Bios

 

  • 2 weeks later...
9 hours ago, sota said:

For those that have used these, are they viable for being able to run at least 1 VM and/or a couple containers? I'm thinking specifically SageTV as one of the containers.

I run my N40L with these containers:

Sonarr

Radarr

sabNZBd

qBittorrent

Jackett

Krusader

ZeroTier

speedtest-tracker

 

Haven't tried any VMs, but I don't have the need either. Since it only has two cores, I guess it's a bit much to run VMs.


Hi,

I also have the N36L, upgraded with an LSI SAS2308 card and an IcyDock ExpressCage MB326SP-B.

I capture a lot of university online lectures as MP4 files at the moment (most of them in 1080p).

My question is: is it possible to add a PCIe x1 card (I just found a low-profile GeForce GT 710, a Zotac GeForce GT 710 PCIe x1) and use the card as a video encoder to compress the videos? For my usage I thought I could use the Handbrake Docker build or Avidemux from the app store. Many months ago I read somewhere that the chipset doesn't support passthrough; when I tried some configurations with VMware ESXi, the onboard SATA drives couldn't be passed through. That was before I knew Unraid ;-)

I know that a graphics card with HEVC support would be the better choice, but I don't want to put a PCIe x16 card into the system.

Or is it possible to use the onboard [AMD/ATI] RS880M [Mobility Radeon HD 4225/4250] for my usage?

 

regards,

chris.

Edited by nightfly2000
  • 2 months later...
On 3/23/2020 at 4:31 AM, UhClem said:

Sort of ... it shouldn't be used for one of the six array drives, since that would further divide the 650-700. But, it could/should be used for the cache drive; then it could only (slightly) impact mover operations, and only if TurboWrite was enabled. The (2-port?) add-in card would connect array drives 5 and 6. A "full-spec" PCIe x1 Gen2 card (e.g. ASM1061-based), giving ~350 MB/sec, would not lower the "ceiling" of ~160 (for 4) on the mobo SATA.

 

The only improvement would be that the cache SSD could operate at full (SataIII [~550]) speed, but that is moot, since, as your cache, it is inherently limited by your 1GbE network (on input) and your array (on output). Very little bang for the extra bucks, since you'd need a PCIe >=x2 card to handle >2 drives, and not lower the "ceiling".

 

Here's a neat idea, if you really want to eliminate the speed bottleneck:

(Assuming that your x16 slot is available,) Get a UNRAID-friendly LSI-based card (typically PCIe x8, at >= Gen2), and connect the built-in 4-bay Sata backplane to it. Note that the thick cable/connector (left of Sata-5), which connects that backplane to the mobo, is actually a Mini-SAS 8087. And can instead be connected to (one of the connections on) an add-in LSI card. That completely eliminates the bottleneck for SATA 1-4. "But, wait, there's more ..." Then you put a standard SAS-to-SATA breakout cable into the now-empty mobo connector and use 3 (of the 4) SATAs for drives 5 and 6, and your cache SSD. That gives you full SataIII for the SSD (FWIW) and you've still got SATA-5 and the eSata, plus "breakout #4", to play with for whatever. "But, wait ...." If you get a 4i4e card, you can (later) add 4+ more drives externally, and still no bottleneck! (Pretty neat, huh?) [Important, LSI card must have low-profile bracket.]

I have the same idea, @UhClem. But while preclearing two 8 TB drives, I noticed my CPU reached 100% and stayed there the whole time it was preclearing. Will an LSI card offload CPU utilization when all drives are connected to it?

9 hours ago, jang430 said:

... But while doing preclearing on 2 8 TB drives, I noticed my cpu reached 100%, and stayed there the whole time it was preclearing.  ...

The whole time, or just during the post-read verify?

(I don't use Unraid, but I vaguely recollect the details of Joe's preclear.)

Quote

Will an LSI card offload cpu utilization when all drives are connected to it? 

No, it will not affect the CPU usage. It does (effectively) eliminate the I/O bottleneck of the on-board (chipset) Sata sub-system.

That CPU usage you saw during pre-clear (x2) should not guide any (re-configure) decision you make.

 

Edited by UhClem

Thanks @UhClem. I didn't know there was a bottleneck with the existing onboard controller. The whole time it was preclearing, the CPU was at 100%.

 

Extra question: I can buy another unit at a super cheap price; it's the N36L model. I've read in other threads/forums that all you need to do is power the drives (which the existing microserver can do), connect them to a controller (I plan to get a 4i4e controller and plug the drives into the controller instead of the motherboard directly), and run a cable from the second unit to the external port at the back of the 4i4e controller in the main N40L microserver. Will this work even without an Unraid license on the second microserver? With no OS on the second microserver? Whether it POSTs or not? It would basically serve as a drive cage for me, providing power and easy removal of HDDs. If this makes sense, my follow-up question is how you would power both units up. What would be the sequence? Power on the second unit first, then power on the main N40L unit?

 

Will the 4i4e controller's external port be a bottleneck while transferring data to the main unit? I am considering something with the 9217-4i4e chipset.

Edited by jang430
