MicroServer N36L/N40L/N54L - 6 Drive Edition



Hey Guys,

This has probably been asked in this thread before, but I stopped reading when it was about 15 pages old, as you all had me perfectly set up at that stage! I now have 6 drives in my MicroServer: I'm using the Nexus DoubleTwin in the optical bay with a 3.5" drive attached and a 2.5" cache sitting under it. I have a 1.5TB 3.5" external Toshiba drive whose power cable has become very temperamental from loaning it out, and I'm thinking of taking the disk out and making it my cache drive in place of the 2.5". The thing is, I want to put it in one of the bays for cooling, since it's going to be on 24/7 and I notice the drive in the optical bay is always 3-4 degrees hotter than the rest. What do I have to do to achieve this? Is it as simple as removing one of the lesser-used drives from a bay, putting it up top (connected to the old cache's SATA), and putting the new 3.5" cache in its place?

Thanks very much :)


Hi

Here is a new RC11 upload, optimized for the AMD N**L CPUs :)

The original image was optimized for the Core2Duo (Intel).

 

Changed in the kernel config:

Processor type and features  --->

    Subarchitecture Type ()  --->

        (X) PC-compatible

    Processor family ()  --->

        (X) Opteron/Athlon64/Hammer/K8

 

Plus some patches:

-HOSTCFLAGS   = -Wall -Wmissing-prototypes -Wstrict-prototypes -O2 -fomit-frame-pointer
+HOSTCFLAGS   = -march=amdfam10 -Wall -Wmissing-prototypes -Wstrict-prototypes -O2 -fomit-frame-pointer
HOSTCXXFLAGS = -O2
-------------
-        cflags-$(CONFIG_MK8) += $(call cc-option,-march=k8)
+        cflags-$(CONFIG_MK8) += $(call cc-option,-march=amdfam10)
--------------
-cflags-$(CONFIG_MK8)    += $(call cc-option,-march=k8,-march=athlon)
+cflags-$(CONFIG_MK8)    += $(call cc-option,-march=amdfam10)
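
If you rebuild this yourself, a quick sanity check before compiling (my suggestion, not part of the original patch) is to confirm the toolchain and CPU actually support amdfam10:

gcc -Q --help=target -march=amdfam10 | grep march=   # toolchain accepts the flag (gcc 4.3 or newer)
grep -m1 'model name' /proc/cpuinfo                  # should report the Turion II Neo (AMD K10 family)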

 

After booting, from dmesg:

CPU: Physical Processor ID: 0

CPU: Processor Core ID: 0

mce: CPU supports 6 MCE banks

LVT offset 0 assigned for vector 0xf9

using AMD E400 aware idle routine

ACPI: Core revision 20120320

Enabling APIC mode:  Flat.  Using 1 I/O APICs

..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1

CPU0: AMD Turion II Neo N40L Dual-Core Processor stepping 03

Performance Events: AMD PMU driver.

... version:                0

... bit width:              48

... generic registers:      4

... value mask:            0000ffffffffffff

... max period:            00007fffffffffff

... fixed-purpose events:  0

... event mask:            000000000000000f

CPU 1 irqstacks, hard=f008c000 soft=f008e000

Booting Node  0, Processors  #1

Initializing CPU#1

Brought up 2 CPUs

System has AMD C1E enabled

Switch to broadcast mode on CPU1

Total of 2 processors activated (5989.95 BogoMIPS).

Switch to broadcast mode on CPU0

PM: Registering ACPI NVS region [mem 0xddfae000-0xddfdffff] (204800 bytes)

NET: Registered protocol family 16

 

egrep svm /proc/cpuinfo

flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm 3dnowext 3dnow constant_tsc nonstop_tsc extd_apicid pni monitor cx16 popcnt lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt nodeid_msr hw_pstate npt lbrv svm_lock nrip_save
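
If you just want a quick yes/no on hardware virtualization, a simpler check (plain grep, nothing specific to this image):

grep -c svm /proc/cpuinfo   # counts the cores whose flags include svm; non-zero means AMD-V is available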

 

I also added to the image:

- media_tree (for DVB S/S2/C/T etc. tuner support), especially useful since the HP MicroServer has 2 PCIe slots :)

- exFAT and NTFS support (latest 2013 driver)

- mc (Midnight Commander)

 

Link to file: https://dl.dropbox.com/u/49442039/unraid.rc11.tbs.zip

 

Enjoy ;)

 

Any chance for an rc12a update? Thank you!

 

Sent from my SCH-I605 using Tapatalk 2

 

 


Has anyone upgraded their N54L with the modified BIOS for the improved SATA options?

 

I successfully updated my N54L's BIOS last night using TheBay's version. Afterwards, I went into the BIOS, loaded optimal defaults, and then tweaked a few settings. Among the adjustments, I set the "SATA 5/6 IDE Mode" (I don't remember the exact label, but it will be close to this) option to "Disabled" under the "SouthBridge Configuration" options. Upon next boot, all my drives populated in AHCI mode and unRAID started up without a hitch.

 

 

Thanks. I'll attempt it this weekend.

Did you ever get this up and going? I've got all my stuff precleared.

Sent from my SGH-I747 using Tapatalk 2


Has anyone installed ESXi on the MicroServer with an unRAID virtual machine?

What read speed are you getting if you copy a large file from Samba to your Windows disk? I am only getting 55MB/s reads.

 

I have installed unRAID 5.Xrc under ESX 5.1 on a MicroServer with 16GB of RAM. I allocated 1GB to unRAID.

 

Using internal SATA, external SATA, ASMedia, and SIL3134/SIL3132 cards, I get 'raw' read rates from 95MB/s up to 180MB/s on the ASMedia 6Gb/s SATA card. These are raw dd reads of the hard drive itself, bypassing any form of network or filesystem reads.

 

When writing to a drive without parity, speeds varied anywhere from 90MB/s up to 195MB/s depending on how much data I was writing and the filesystem allocation involved, i.e. journaling absorbs some of the raw write speed.

 

I used port multipliers as well; they were in the 100-120MB/s range for a single drive on the Silicon Image chipset, and up to 190MB/s on the ASMedia SATA III card.

 

Tests were done using the Seagate 3TB 7200 RPM drive.

 

I used vmkfstools with the -z and -a pvscsi option to RDM the entire disk.
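
For anyone wanting to replicate that, the command has roughly this shape, run from the ESXi shell (the paths are placeholders, not my actual layout):

ls /vmfs/devices/disks/                               # find the disk's identifier
vmkfstools -z /vmfs/devices/disks/<disk-id> \
    /vmfs/volumes/<datastore>/unraid/disk1-rdm.vmdk -a pvscsi   # physical-mode RDM of the whole disk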

 

My tests without RDM proved quite slow; it's almost pointless to have reiserfs on top of a virtual .vmdk container file on top of a VMFS datastore. Even with a fast RAID0 across two 3TB drives, it was painfully slow and lengthy just to format:

1. Format the datastore (fast).

2. Create the .vmdk (about an hour and a half).

3. Format the reiserfs filesystem (40 minutes or so).

 

RDM of the disk was almost as fast as native bare-metal unRAID. Very close in speed.

 

As far as transfers over the network, I did not test them. I knew that if unRAID could read/write to the disk under ESX as fast, or nearly as fast, as bare metal, the rest would work at acceptable levels.

 

What network adapter did you choose? 

E1000 or VMXNET?

What SCSI adapter did you use for the RDM disk (if you configured it that way)?

I found the LSI SAS adapter to be fast, but it bottlenecked at around 120MB/s.

I immediately reconfigured with the PVSCSI adapter and the speed went up to 180MB/s.

 

As a test, do this to see your raw hard drive read capability:

 

dd if=/dev/sd? of=/dev/null bs=4096 count=1024000

 

Where ? is the device's last character.

 

This will read 4GB raw from the hard drive. It will show you the maximum speed you could possibly get from the hard drive for sequential data on the outer tracks.
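
If you want to test several drives in one go, a small loop works (the device range is an example; adjust it to your drive letters):

for d in /dev/sd[b-g]; do
  echo "== $d =="
  dd if=$d of=/dev/null bs=4096 count=1024000 2>&1 | tail -1   # dd prints its MB/s summary on stderr
done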

 

There are all sorts of file system references that occur, so file reads on a formatted file system will be slower.

 

You can do the same to write a file and then read it back if the drive is mounted:

echo 3 > /proc/sys/vm/drop_caches  # drop pagecache, dentries and inodes

dd if=/dev/zero of=/mnt/disk1/test.dd bs=4096 count=1024000

 

 

echo 3 > /proc/sys/vm/drop_caches  # drop pagecache, dentries and inodes

dd if=/mnt/disk1/test.dd of=/dev/null bs=4096 count=1024000
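
One note on the echo 3 line, since it's easy to cargo-cult; these values are standard kernel behavior, not unRAID-specific:

echo 1 > /proc/sys/vm/drop_caches   # free pagecache only
echo 2 > /proc/sys/vm/drop_caches   # free dentries and inodes only
echo 3 > /proc/sys/vm/drop_caches   # both, so the timed read hits the disk instead of RAM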

 

  • 2 weeks later...

Is it possible to utilize all 6 SATA ports and still be able to use a PCIe SATA controller card?

 

You can use the PCIe slots if the card has eSATA ports.

I am doing it with both PCIe slots.

The card must be low profile, so before you purchase an eSATA card, make sure it comes with a low-profile bracket.

 

I've used these with success.

PCI-Express SATA II 3.0Gb/s Controller Card SD-PEX40031

http://www.ebay.com/itm/PCI-Express-SATA-II-3-0Gb-s-Controller-Card-SD-PEX40031-/330558178838?pt=US_Computer_Disk_Controllers_RAID_Cards&hash=item4cf6cd8616

 

 

Addonics ADSA3GPX1-2E PCI Express eSATA and SATA II 2 Port eSATA II RAID Controller

http://www.newegg.com/Product/Product.aspx?Item=N82E16816318005

 

StarTech PEXESAT322I PCI-Express x1 Low Profile Ready SATA III (6.0Gb/s) 2 Int/2 Ext SATA Controller Card

http://www.newegg.com/Product/Product.aspx?Item=N82E16816129101

 

The StarTech is SATA III. It worked really well for the Samsung PM840 I put in place.

 

  • 3 weeks later...

[... quoting the full ESXi/RDM benchmarking exchange from above ...]

Hi,

When you did these tests, was it:

With the modified BIOS?

ESXi with unRAID on top, using RDM?

A PCIe RAID card, with eSATA ports connected to an external enclosure?

Btw, how do you use the 4 bays that come with the box?


 

Hi,

When you did these tests, was it:

With the modified BIOS?

No

ESXi with unRAID on top, using RDM?

Yes

A PCIe RAID card, with eSATA ports connected to an external enclosure?

Yes, with JBOD and RAID0.

Btw, how do you use the 4 bays that come with the box?

The box I selected can connect via USB 3.0 or eSATA. I used eSATA.


So with the N54L: am I able to use the built-in eSATA port for a 4-bay port-multiplier enclosure, and also use an x4 PCI Express card to run four more 4-bay port-multiplier enclosures? And also use the four internal drives, plus a fifth drive attached to a card in the x1 PCI Express slot?

This is my plan anyway.

My first unRAID build used five multi-bay external enclosures with twenty drives without issue, and my second unRAID build will eventually use a couple as well.

But I want my third unRAID build to replace my WHS, which currently uses seven 4-bay external enclosures. So I would like to use as many of those as possible with the N54L MicroServer.

So ideally with the N54L I would like to use five external 4-bay enclosures and five internal drives. Is this possible with the N54L MicroServer?


Just for my information, which model are those external enclosures you're using without trouble?

I have four SANS DIGITAL TR4M enclosures (4-bay) and one SANS DIGITAL TR5M enclosure (5-bay). One of the TR4M enclosures has 3 drives.

Well, I guess I should say had. I moved those three drives to the main PC case this past weekend so I could use that TR4M enclosure with my second unRAID setup. But I had all five enclosures connected for almost two years without issue. I'm running unRAID 4.7 on that system.

My second unRAID setup is using 5.0 rc12.


Port Multiplier Technology is neat. It's good for unRAID as a media server.

When you access one drive at a time, you will get top speed.

 

For a very busy server, it may have issues with contention for bandwidth on certain drives,

i.e. when you access multiple drives from multiple apps on the same box/cable.

 

If you lay your drives out well, the technology works.

Your parity drive must be on your fastest internal port.

 

Your busiest drive(s) and cache need to be on your internal port.

 

Other than that, you can probably have 4 external 4/5-drive units with current eSATA technology.

 

Since the machine only accepts low profile cards, I've only found x1 cards that provide 2 eSATA ports.

 

I bet with a more advanced raid controller and other external drive connection technology you could get more.

 

For me, I could not get the other eSATA port on the back panel to be reliable with Port Multipliers.

It could be because I was using ESX 5.1.

 

I chose to use the PCIe x1 StarTech controllers with the ASM1061 chipset since they support 6Gb/s.

In my tests, it made a small difference.

 

But for port multiplier technology, I found the Silicon Image chipsets to be the best for PMP management.

I.e. when using the ASMedia chipset, drive writes were very chunky, so simultaneous drive access really hampered the performance of the other drives.

 

With Silicon Image chipsets, I found you had a top speed of about 120MB/s, each parallel access halved that.

So two drives together dropped to 60MB/s, 4 Drives dropped to 30MB/s.

 

With the other chipsets it wasn't so smooth. 4 drive access dropped really low to around 10MB/s.

 

So far, in all these years of playing with this technology, and my limited testing, I've found the Silicon Image chipsets best for PMP usage.

I've tested with Silicon Image, Marvell and the ASMedia chipsets.

 

With as many drives as you plan to have, I'm not sure all these externals are the best way to go.

 

One large machine with the right technology could actually save you power, considering that each box has a PSU and fan running all the time.


Just for my information, which model are those external enclosures you're using without trouble?

 

I've had good results with SANS DIGITAL TR4UTBPN 4Bay USB 3.0 / eSATA Hardware RAID 5 Tower RAID Enclosure (no eSATA card bundled)

http://www.newegg.com/Product/Product.aspx?Item=N82E16816111149

 

I chose this over the simpler model because of the built-in PSU and built-in RAID.

If I choose to use it elsewhere, I can change RAID modes with a simple adjustment.

I also like the slide-in design.


Using port multipliers will slow down the speed of a parity check or rebuild, possibly quite a bit. Accessing the data should be fine, however. Actually, after thinking about it a bit, it might add as little as a SAS expander does, so you may not even notice it.

 

But can I do that with the N54L?

 

I've been using external enclosures for over two years with my first unRAID, so I understand what speeds I will get. During a parity check I get speeds around 30MB/s with twenty data drives. I use a cache drive so I can get good speeds when transferring data, and the cache drive moves it to the array later.

I just want to make sure this is possible with the N54L. My other unRAIDs were set up with regular motherboards, but the N54L is a micro PC.


[... quoting the port-multiplier parity-speed exchange from above ...]

 

 

The technology is there. You can rebuild it.

The issue is how fast your parity speed will be, as I mentioned. The MicroServers only support low-profile cards: two PCIe x1 cards with 2 eSATA ports each.

Plus 1 or 2 extra internal drives, or 1 internal drive and an external drive. That external drive could possibly be PMP, but I don't know about its performance.


[... quoting the same exchange again ...]

Basically I corrected myself and was saying the speed difference might not be noticeable, maybe as little as I see with a SAS expander. I just didn't want to delete my post, so I corrected it.

[... quoting the full port multiplier post from above ...]

 

It sounds like I'll need to try the same card in the N54L that I use in my first unRAID. That is an x4 PCI Express SATA II port-multiplier card with two external ports and four internal ports, so you can set it up to use all four internal ports or two internal and two external. I use four 4-bay external enclosures with it: the Rosewill RC-218. I got one from Newegg back in March 2011, and it looks like they still sell it.

 

http://www.newegg.com/Product/Product.aspx?Item=N82E16816132018

 

I was hoping there might be something a little cheaper now, but I guess not. $80 is good for what it does, although it was $5 cheaper back then.


[... quoting the port-multiplier exchange and the reply from above ...]

 

With a PCIe x1 card, you really don't want to have more than four drives on it. At least, this is what I found in my testing two years ago, which is why with 16 drives on an x4 card I can get the 30MB/s speeds during a parity check. Adding more drives than that really slows things down. I guess I need to make sure the Rosewill RC-218 has a low-profile bracket; until I look inside the N54L, I won't know for sure if it will fit.
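
Rough back-of-envelope on why it settles around 30MB/s (the link numbers are my assumptions about SATA II port-multiplier cables, not measurements):

~250-300MB/s usable per SATA II cable behind a port multiplier
 / 4 drives read in parallel during a parity check  =  ~60-75MB/s per drive, best case
 minus PMP command-switching overhead               =>  the ~30MB/s observed is plausible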

 

With my first unRAID, the RC-218 handled four enclosures, and then I had a PCI Express x1 card for the fifth enclosure, which got me to twenty drives.

 

My hope is to use the built-in eSATA port for the fifth enclosure and get four enclosures from the Rosewill RC-218, then use the four internal bays for four drives, with my parity drive connected to an x1 PCI Express card in the x1 slot. Although I would need to route two cables from the inside to the outside in that setup, since there is no space to add a bracket that turns the RC-218's internal SATA ports into two external eSATA ports like I use with my first unRAID.


 

[... quoting the Rosewill RC-218 post from above ...]

 

I like that card. I had it at one time; it worked OK.

You could search for another low-profile eSATA bracket to externalize the other two ports.

 

This is the x1 card I used for SATA III.

 

StarTech PEXESAT322I PCI-Express x1 Low Profile Ready SATA III (6.0Gb/s) 2 Int/2 Ext SATA Controller Card

http://www.newegg.com/Product/Product.aspx?Item=N82E16816129101

 

 

For me, SATA III was important for the two new additional internal drives.

 

ICY DOCK MB971SP-B DuoSwap 5.25" Hot-Swap Drive Caddy for 2.5" and 3.5" SATA HD/SSD

http://www.newegg.com/Product/Product.aspx?Item=N82E16817994143

 

Although expensive, it's what I needed: 1 SATA III SSD and 1 SATA III 3TB 7200 RPM drive.

 


[... quoting the 'no more than four drives on an x1 card' post from above ...]

 

FYI, you don't want to have more than 4 drives that are accessed immediately one after the other.

I bet if you rotated/staggered the drive assignments you would be fine, i.e. Disk 1 on one controller, Disk 2 on another controller, Disk 3 on another controller, and so on; see the sketch below.
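
To make the staggering concrete, a layout along these lines (purely illustrative; the disk numbering and controller letters are mine):

parity -> internal SATA (fastest port)
cache  -> internal SATA
disk1  -> controller A, enclosure 1
disk2  -> controller B, enclosure 2
disk3  -> controller A, enclosure 1
disk4  -> controller B, enclosure 2
# consecutive disks alternate cables, so back-to-back accesses land on different PMP links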

 

One drive accessed via a PMP connection is as fast as one drive can go, to a max of 120MB/s. It's when you access multiple drives at the same time on the same channel/cable that it gets slow.

 

The caveat is that it adds complexity when configuring and troubleshooting.

