MicroServer N36L/N40L/N54L - 6 Drive Edition



The MicroServer is an odd choice for a 20+ drive machine.

 

I have however seen someone take out the motherboard and mount it in a very large case and that seems to be a clever idea.

 

My solution is to eventually migrate all my MicroServer drives to 4TB making a total of 20TB and putting the smaller drives in my large "archive" server.

 

Have you ever checked the "total" power usage of all the external cases with a Kill A Watt meter when the drives are in standby? I would be curious about the power utilization.

 

I prefer to house most of my unRAID drives in external cases. It worked great for my WHS with 27 drives in seven external cases (plus 4 drives in the HP MSS case), and for my first unRAID build, where I used to have twenty drives in five external cases (now 17 in four cases; I moved the other drives inside the motherboard enclosure with the parity and cache drives). My second unRAID is currently using fourteen drives, with 8 in two external cases and six in the motherboard enclosure.

 

So I wanted the N54L to go in the spot my WHS occupies, since it is relatively small. Although it was much larger than I realized when I opened the box.

Link to comment

That is the one bad thing about using a bunch of external enclosures: a much higher power draw. Not sure offhand what it is in standby, but my main unRAID setup, when I was using five external enclosures, was drawing close to 300 watts (including the CPU enclosure) during a parity check with 22 drives.

 

But that is also higher because of the TR4M enclosures, which have internal power supplies. My Mediasonic enclosures with external power supplies, which I use with my WHS, draw less power.

Link to comment


 

 

300 watts during a full parity check with 22 drives is not that bad.

It's the standby power draw that I would be concerned about while using all those external enclosures.

Link to comment


 

I don't run it 24/7 so the higher power usage isn't too big a deal. I typically only turn it on in the evenings and weekends.

 

I was trying to get the drives to spin down, but they haven't yet. With the drives spinning (17 drives in four enclosures and five drives in the CPU enclosure), it's drawing around 245 watts.

 

EDIT: They just spun down. It's drawing 145 watts with all the drives spun down.

Link to comment

WeeboTech,

 

Would you mind giving me details on how you set up the two N54Ls (hardware configuration and software configuration),

as a reference for maximizing the benefit of ESXi and unRAID on these little boxes?

 

I'll have to write a UCD post. For the most part, what I've done has been mentioned in this thread.

 

BTW, if the StarTech card is only PCIe x1, it has its own limitation on data transfer speed even though the chipset can support 6Gb/s: a PCIe x1 (gen 1) link cannot exceed 250MB/s.

 

I can tell you that I received benchmarks upwards of 350MB/s, sometimes up to 400MB/s, using a Samsung PM 840 PRO 256GB SSD.

 

With a 3TB Seagate it was around 195MB/s.

 

These are raw DD benchmarks of reading the drive directly with no filesystem I/O. With filesystem I/O this changes because of journaling and other housekeeping.

 

Also, the HP N54L has PCIe gen 2 ports.

 

PCI Express 2.0: PCI-SIG announced the availability of the PCI Express Base 2.0 specification on 15 January 2007. The PCIe 2.0 standard doubles the transfer rate compared with PCIe 1.0 to 5 GT/s, and the per-lane throughput rises from 250 MB/s to 500 MB/s. This means a 32-lane PCIe connector (×32) can support throughput up to 16 GB/s aggregate.

 

 

 

I found the correct speed spec for PCIe :

 

Per lane (each direction):

 

v1.x: 250 MB/s (2.5 GT/s)

v2.x: 500 MB/s (5 GT/s)

v3.0: 985 MB/s (8 GT/s)

v4.0: 1969 MB/s (16 GT/s)

So, a 16-lane slot (each direction):

 

v1.x: 4 GB/s (40 GT/s)

v2.x: 8 GB/s (80 GT/s)

v3.0: 15.75 GB/s (128 GT/s)

v4.0: 31.51 GB/s (256 GT/s)

 

You are right: for PCIe v2 it is 500MB/s per lane, so 300-400MB/s is considered very fast.
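Those per-lane figures follow directly from the line rate and the 8b/10b encoding PCIe v1.x/v2.x use (v3.0 switched to 128b/130b, which is why its number isn't a clean doubling). A quick sanity check of the v1/v2 figures in shell arithmetic:

```shell
# PCIe v1.x/v2.x use 8b/10b line coding: 10 bits on the wire per 8 data bits.
# Usable MB/s per lane = (GT/s * 1000) * 8/10 encoding / 8 bits-per-byte
v1=$(( 2500 * 8 / 10 / 8 ))   # 2.5 GT/s line rate (v1.x)
v2=$(( 5000 * 8 / 10 / 8 ))   # 5.0 GT/s line rate (v2.x)
echo "PCIe x1 usable bandwidth: v1.x = ${v1} MB/s, v2.x = ${v2} MB/s"
# -> PCIe x1 usable bandwidth: v1.x = 250 MB/s, v2.x = 500 MB/s
```

So a SATA III card in a gen 1 x1 slot is link-limited to ~250MB/s, while the same card in the N54L's gen 2 slot gets ~500MB/s, which matches the 350-400MB/s SSD results above.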

 


Link to comment

Has anyone installed ESXi on a MicroServer with an unRAID virtual machine?

 

What read speed are you getting if you copy a large file from Samba to your Windows disk? I am only getting 55MB/s reads.

 

I have installed unRAID 5.Xrc under ESX 5.1 on a MicroServer with 16GB of RAM. I allocated 1GB to unRAID.

 

Using internal SATA, external SATA, ASMedia, and SIL3134/SIL3132 cards, I get 'raw' read rates from 95MB/s up to 180MB/s on the ASMedia 6Gb/s SATA card. These are raw dd reads of the hard drive itself, bypassing any form of network or filesystem reads.

 

When writing to a drive without parity, the speeds varied anywhere from 90MB/s up to 195MB/s, depending on how much data I was writing and the filesystem allocation overhead; i.e., journaling absorbs some of the raw speed when writing.

 

I used port multipliers also and they were in the 100-120MB/s range for a single drive on the Silicon Image chipset and up to 190MB/s on the ASmedia SATA III card.

 

Tests were done using the Seagate 3TB 7200 RPM drive.

 

I used vmkfstools with the -z and -a pvscsi option to RDM the entire disk.

 

My tests without RDM proved to be quite slow; it was almost pointless to run reiserfs on top of a virtual .vmdk container file on top of a VMFS datastore. Even with a fast RAID0 using two 3TB drives, it was painfully slow and lengthy just to format:

1. Format the datastore (fast).

2. Create the .vmdk (about an hour and a half).

3. Format the reiserfs filesystem (40 minutes or so).

 

RDM of the disk was almost as fast as native bare-metal unRAID. Very close in speed.

 

As far as transfer over the network, I did not test it. I knew that if unRAID could read/write the disk under ESX as fast, or nearly as fast, as bare metal, the rest would work at acceptable levels.

 

What network adapter did you choose? 

E1000 or VMXNET?

What SCSI adapter did you use for the RDM disk (if you configured it that way)?

I found the LSI SAS adapter to be fast, but it bottlenecked at around 120MB/s.

I immediately reconfigured with the PVSCSI adapter and the speed went up to 180MB/s.

 

As a test, do this to see your raw hard-drive read capability:

 

dd if=/dev/sd? of=/dev/null bs=4096 count=1024000

 

Where ? is the device's last character.

 

This will read 4GB raw from the hard drive. It will show you the maximum speed you could possibly get from the hard drive for sequential data on the outer tracks.

 

There are all sorts of file system references that occur, so file reads on a formatted file system will be slower.

 

You can do the same to write a file and then read it back if the hard drive is mounted:

echo 3 > /proc/sys/vm/drop_caches  # drop pagecache, dentries and inodes

dd if=/dev/zero of=/mnt/disk1/test.dd bs=4096 count=1024000

 

 

echo 3 > /proc/sys/vm/drop_caches  # drop pagecache, dentries and inodes

dd if=/mnt/disk1/test.dd of=/dev/null bs=4096 count=1024000

 

 

Oops, I missed your reply somehow.

 

If I understand correctly, you are using the SIL3134/SIL3132 cards to connect to an external enclosure? I don't have an external enclosure; I am using the internal drive bays.

 

I used vmkfstools with the -z flag but without the -a pvscsi option to RDM the disks. For example:

 

vmkfstools -a lsilogic -z /vmfs/devices/disks/t10.ATA_____SAMSUNG_HD204UI_________________________S2H7J90B619xxx______ disk1.vmdk

 

Maybe I should try again with the -a pvscsi option?
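For what it's worth, the pvscsi variant of that command differs only in the adapter flag; the earlier post in this thread used -z together with -a pvscsi. A sketch (the device identifier below is a placeholder, not a real disk ID):

```shell
# Hypothetical sketch: re-create the RDM pointer file with the paravirtual
# SCSI adapter type instead of lsilogic. <device-id> is a placeholder for
# your disk's actual identifier under /vmfs/devices/disks/.
vmkfstools -a pvscsi -z /vmfs/devices/disks/<device-id> disk1.vmdk
```

Presumably you would also want the VM's virtual SCSI controller set to the paravirtual type, as described above where reconfiguring to the PVSCSI adapter raised throughput from ~120MB/s to ~180MB/s.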

 

I chose the VMXNET network adapter.

 

as a test do this to see your raw hard drive read capability.

 

root@Tower:~# dd if=/dev/md1 of=/dev/null bs=4096 count=1024000

1024000+0 records in

1024000+0 records out

4194304000 bytes (4.2 GB) copied, 30.097 s, 139 MB/s

 

you can do the same to write a file and read it if the hard drive is mounted

 

echo 3 > /proc/sys/vm/drop_caches  # drop pagecache, dentries and inodes

root@Tower:~# dd if=/dev/zero of=/mnt/disk1/test.dd bs=4096 count=1024000

1024000+0 records in

1024000+0 records out

4194304000 bytes (4.2 GB) copied, 109.043 s, 38.5 MB/s

 

 

root@Tower:~# echo 3 > /proc/sys/vm/drop_caches  # drop pagecache, dentries and inodes

root@Tower:~# dd if=/mnt/disk1/test.dd of=/dev/null bs=4096 count=1024000

1024000+0 records in

1024000+0 records out

4194304000 bytes (4.2 GB) copied, 32.1857 s, 130 MB/s

 

Poor read speed from unRAID Samba to Windows: between 50-60MB/s read, even on the cache disk.

Link to comment

Hey guys, could you help me out please?

My N36L is full now with the modified BIOS. I have 6 drives in it.

 

It is running unRAID.

 

I would like to add an external 4-drive enclosure like this, for example (I only picked it since it was cheap; if you guys recommend something better, that would be great):

 

SANS DIGITAL TR4M

 

Can you guys tell me which card I need to get for my N36L that will allow me to use this enclosure? I think there is one PCIe slot available with the N36L, correct?

 

Also, if you feel like there is a bigger or better enclosure available for a reasonable price please let me know. Thanks.

 

 

Link to comment

You may be able to use the eSATA port on the back of the N36L, unless you are redirecting that internally. Then you can get a SIL3132-type card (check out Monoprice).

 

 

Another option is to try the eSATA port and get a PCIe x1 card for the two internal drives.

I used an ASMedia card by startech so I could have 2 SATA III ports internally.

If you go back in this thread there are some links and suggestions posted.

Link to comment

Can you guys tell me which card I need to get for my N36L that will allow me to use this enclosure? I think there is one PCIe slot available with the N36L, correct?

 

There are a few cards referenced throughout the thread that should work.  The safest approach is to use one of the cards on the SansDigital list of cards known to work with their port multiplier cases:

http://www.sansdigital.com/esata-port-multiplier/index.php

 

This inexpensive card is on the list:  http://www.newegg.com/Product/Product.aspx?Item=N82E16816115073    ... and would work in the PCIe x1 slot in the enclosure.

 

If your x16 slot is available, this x4 card (also on the list) would give you far more bandwidth -- you could add up to 4 of the enclosures  :)      http://www.newegg.com/Product/Product.aspx?Item=N82E16816115036

Link to comment


 

Choose a card that has a low profile bracket. That particular x4 card from Newegg will NOT fit.

The other card is a HighPoint Rocket, and there is a comment on Newegg saying "Verified by HighPoint - the R622 does NOT support Port Multiplier (PM)".

 

 

Choose a low-profile card with a Silicon Image chipset, or the StarTech model I've previously mentioned.

The Silicon Image chipsets provide the smoothest port multiplier capability. They are not the fastest for single-drive access, but the driver seems to be mature enough that multiple-drive access is smoother.

Link to comment

Choose a card that has a low profile bracket.

 

The other card is a HighPoint Rocket, and there is a comment on Newegg saying "Verified by HighPoint - the R622 does NOT support Port Multiplier (PM)".

 

Thanks for pointing out the need for a low-profile card. I overlooked that (I don't have one of these systems) ... and I'm surprised Sans Digital included cards on its list that don't meet that criterion.

 

As for the 622 not supporting port multipliers ... where did you see that note? I looked at the Newegg link and it's not on any of their descriptive pages, and in fact one of the reviewers noted: "... Wanted a card that supports port multiplication to just access individual drives. It wasn't clear in the specs, at the manufacturer's website, or in the reviews that this card will do anything besides RAID or JBOD, but it works fine ..." That, plus the fact it's on Sans Digital's list of "eSATA Port Multiplier" cards that they reference in their product descriptions, would tend to support that it should work! [ http://www.sansdigital.com/esata-port-multiplier/index.php ] Have you actually tried that card?

Link to comment


 

I have not tried the card; I saw the comment but cannot remember where it was.

 

What I can say for sure: the Silicon Image cards work with port multipliers. The Marvell chipsets work, but not as smoothly as the Silicon Image chipsets. The ASMedia chipsets work, but again, not as smoothly as the Silicon Image chipsets.

 

In my tests I would run 1 dd, then 2 in parallel, then 3 in parallel, then 4.

 

With the Silicon Image chipsets, each successive parallel dd lowered the per-drive throughput:

one test was almost 120MB/s, two were at 60MB/s, three around 40MB/s, and four at 30MB/s. These were my findings; YMMV.

 

By the end of the test, 4 drives were DD'ing at 30MB/s each at the same time, with very smooth access to each drive. With the other chipsets, access was not as smooth. With the ASMedia chipset it worked, but by the time I had 4 in parallel, access was on the order of 10MB/s for each drive and very blocky; i.e., when one process was DD'ing, it seemed to lock out (block) access to the other drives. The Marvell wasn't as fast as the Silicon Image chipset. (These are findings from years ago; current technology may be better.)
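The methodology is easy to reproduce. Here is a sketch that uses throwaway files in /tmp so it can run anywhere; for a real port-multiplier test you would substitute the raw devices (/dev/sdb, /dev/sdc, ...) as the dd input files, as in the earlier raw-read example:

```shell
# Create four 100 MB test files, then read them all back in parallel,
# mimicking the 1/2/3/4-drives-at-once dd test described above.
for i in 1 2 3 4; do
    dd if=/dev/zero of=/tmp/pmtest$i bs=4096 count=25600 2>/dev/null
done
for i in 1 2 3 4; do
    # each dd prints its own throughput on stderr when it finishes
    dd if=/tmp/pmtest$i of=/dev/null bs=4096 &
done
wait    # let all four parallel reads complete
rm -f /tmp/pmtest1 /tmp/pmtest2 /tmp/pmtest3 /tmp/pmtest4
```

Against /tmp this mostly measures the page cache; against raw /dev/sdX devices behind one eSATA link it shows how evenly (or not) the controller shares the link among drives.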

 

I've been playing with these port multipliers ever since Linux started supporting them; these are my findings from testing. There may be better cards. I can only afford so many controllers to test with. (grin)

 

I've used a SYBA Silicon Image 3124 chipset card on PCIe x1, an Addonics Silicon Image 3132 on PCIe x1, and the ASMedia SATA III PCIe x1 card on this particular box.

 

With the StarTech ASMedia card, SATA III, and an SSD, I was able to achieve over 350MB/s on PCIe x1. With a 3TB SATA III 7200 RPM drive I was able to achieve over 190MB/s. Port multiplier speed suffered greatly when accessing multiple drives simultaneously.

 

Before any expenditure, I would suggest testing the eSATA port on the back of the box to see how well it performs.

 

The HighPoint Rocket card may work and perform well with port multipliers. However, the low-profile requirement still has to be considered.

 

Someone else in the thread was exploring a 4-port card. Perhaps that card could be considered.

Link to comment

Thanks so much guys!

 

Can someone link me to the Silicon Image Syba card that you used that is low-profile? I just don't want to make a mistake and order the wrong card.

 

Also, I am currently using the eSATA port on the back of the box for the 6th drive inside the case. So that's taken up.

 

So from what I understand, I should not use more than one 4-drive enclosure, otherwise it will be too slow, correct?

 

Lastly, that Sans Digital unit is discontinued (although I can still find it on Amazon).

Do you guys still recommend it? I want to throw 4TB drives in it.

 

There's a Promedia enclosure that seems to be cheaper. Any suggestions?
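On the "too slow" question above: every drive behind a port-multiplier enclosure shares one eSATA link, so the link rate is the hard ceiling. A back-of-envelope sketch, assuming a 3Gb/s SATA II link with 8b/10b encoding (the per-drive numbers actually reported in this thread were lower still, around 30MB/s on the Silicon Image cards):

```shell
# SATA II eSATA link: 3.0 Gb/s line rate, 8b/10b encoded (10 wire bits
# per 8 data bits), so usable MB/s = 3000 * 8/10 / 8 bits-per-byte.
link_mbs=$(( 3000 * 8 / 10 / 8 ))   # usable bandwidth of the shared link
per_drive=$(( link_mbs / 4 ))       # ceiling per drive, 4 drives active
echo "link: ${link_mbs} MB/s, per drive (4 active): ${per_drive} MB/s"
# -> link: 300 MB/s, per drive (4 active): 75 MB/s
```

So one 4-bay enclosure already divides the link four ways during a parity check; a second enclosure on its own card/port gets its own link and doesn't make the first one slower.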

Link to comment

What are the temperatures like on these little things? I was thinking of using one to back up the important bits on my main server.

Fine!

 

Very detailed  8)

... I suspect some actual numbers would be a bit more useful !!

 

Numbers would be nice or even a pretty screen shot.  :D

Link to comment

Thanks so much guys!

 

Can someone link me to the Silicon Image Syba card that you used that is low-profile? I just don't want to make a mistake and order the wrong card.

 

Also, I am currently using the eSATA port on the back of the box for the 6th drive inside the case. So that's taken up.

 

So from what I understand, I should not use more than one 4-drive enclosure, otherwise it will be too slow, correct?

 

Lastly, that Sans Digital unit is discontinued (although I can still find it on Amazon).

Do you guys still recommend it? I want to throw 4TB drives in it.

 

There's a Promedia enclosure that seems to be cheaper. Any suggestions?

 

I've used these parts with success.

 

 

Addonics ADSA3GPX1-2E PCI Express eSATA and SATA II 2 Port eSATA II RAID Controller

http://www.newegg.com/Product/Product.aspx?Item=N82E16816318005

 

SANS DIGITAL TR4UTBPN 4Bay USB 3.0 / eSATA Hardware RAID 5 Tower RAID Enclosure (no eSATA card bundled)

http://www.newegg.com/Product/Product.aspx?Item=N82E16816111149

 

I used this for the top two internal drives so I had maximum speed. It did not perform to my expectations for port multiplier usage when accessing multiple drives at the same time under ESX.

StarTech PEXESAT322I PCI-Express x1 Low Profile Ready SATA III (6.0Gb/s) 2 Int/2 Ext SATA Controller Card

http://www.newegg.com/Product/Product.aspx?Item=N82E16816129101

Link to comment

Thank you Weebo, but it doesn't seem like that StarTech is a Silicon Image chipset, which is what you recommended?

 

That enclosure seems pricey. What do you think about this one?

 

http://www.amazon.com/Mediasonic-HF2-SU3S2-ProBox-Drive-Enclosure/dp/B003X26VV4/ref=wl_it_dp_o_pd_nS_nC?ie=UTF8&colid=37PZEZFSOCZ37&coliid=I1CTS9U3BFXBXW

 

 

I recommended the StarTech if you wanted/needed the 2 'other' internal drives to run at top speed, i.e. SATA III.

It's what I chose so my SSD would get 350MB/s.

It does not work as well with port multipliers. It works, just not as smoothly when accessing multiple drives simultaneously.

 

 

I don't have an opinion on the mediasonic box.

 

 

I selected items based on performance, reliability, and functionality.

I chose the Sans Digital box for its internal power supply and hardware RAID functionality, should I choose to re-deploy it elsewhere. It's really come in handy when I needed it. I have two of them.

 

 

Link to comment


The model of 622 that you need is the RocketRAID 622. That one DOES support port multiplying. I'm currently using two and they work really well. I've got a couple of SIL3132s and they were dropping drives continually; with the RocketRAIDs I have very few drops. Not sure why my SILs were so bad, because I read some of the same things about SILs being better than Marvell, but for me it has been the opposite. Haven't tried the ASMedia controllers yet.
Link to comment

Have you used the R622 with port multipliers? If so, were you able to benchmark simultaneous access to multiple drives?

 

 

i.e., dd from the raw drives in parallel.

I'm curious if the High Point drivers handle it as smoothly as the Silicon Image cards.

While 30MB/s simultaneous access across 4 drives isn't that fast these days, I found that multiple-drive access did not choke out the other drives.

Link to comment
