MicroServer N36L/N40L/N54L - 6 Drive Edition


I'd also turn the write cache on. That's off by default.

 

Does enabling write cache have any noticeable benefit?

 

I just changed mine. When it was disabled I was doing around 45MB/s for a parity sync. With it enabled the speed jumped to over 100MB/s.
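For scale, here is a quick sketch of what those rates mean for a full parity sync, assuming a 2TB parity drive (an assumed size for illustration; real syncs also slow down toward the inner tracks, so treat these as best-case figures):

```python
# Rough parity-sync duration at a constant rate, assuming a 2 TB
# (2 * 10**12 byte) parity drive -- an assumption, not a measured value.
DRIVE_BYTES = 2 * 10**12

def sync_hours(rate_mb_s):
    """Hours to pass over the whole drive at rate_mb_s MB/s."""
    return DRIVE_BYTES / (rate_mb_s * 10**6) / 3600

for rate in (45, 100):
    print(f"{rate} MB/s -> {sync_hours(rate):.1f} hours")
```

At 45MB/s that works out to roughly half a day; at 100MB/s it drops to under six hours, which matches how dramatic the change feels in practice.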

 

That's interesting. I wonder if there are any downsides to enabling it. Why is it disabled by default?


The BIOS warns that if you lose power with caching enabled you can lose data. So as long as the MicroServer is connected to a UPS it should be fine.


Ok thanks for the info.
  • 2 weeks later...

I finally got my N54L setup completed today. I put in a Rosewill RC218 port multiplier card. I tested it by connecting four 4-bay enclosures to the Rosewill card and one 4-bay enclosure to the eSATA port. All twenty external drives came up in unRAID. Plus I have a drive connected to the onboard SATA port. So this should work with up to twenty-five drives connected to my N54L.

 

Now I just need to get a Pro license set up for my N54L, and I can start off with five drives internally and fifteen drives in four of the external enclosures.


I just started using the eSATA ports on my N54L. From my Rosewill RC218 it's fine, since that uses FIS-based (Frame Information Structure) switching.

 

But from what I've read, the built-in eSATA port uses command-based switching, so it can only access one drive at a time, not concurrently like with FIS-based switching.

 

I was wondering why things were going so slowly with the four-bay enclosure I connected. With four drives on the eSATA port, it really slows things down if all the drives are accessed concurrently, like during a parity check in unRAID. With one drive it's fine, and two drives is not bad, but once you get to three drives it really slows down.

 

So if I decide to use the built-in eSATA port in the future, I will limit it to two drives at most. Then I will need to take a 5-bay enclosure from my unRAID1 to use with my unRAID3, connected to the RC218. Then I should be able to hit the max of 23 drives in the array. I'll have to see how much things slow down with two drives on the built-in eSATA port, though. When I was clearing one 2TB drive, it was going at just under 3 minutes per percent. When I tried two 2TB drives, that increased to around 4.5 minutes per percent. So I'm not sure what speeds I will get from a parity check.
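Those minutes-per-percent figures can be converted into MB/s, assuming the 2TB drives are counted in decimal bytes as drive vendors do (an assumption for the arithmetic):

```python
# Convert a "minutes per percent" clearing rate into MB/s, assuming a
# 2 TB drive of 2 * 10**12 decimal bytes (vendor-style counting).
DRIVE_BYTES = 2 * 10**12

def clear_rate_mb_s(minutes_per_percent):
    """Sustained per-drive rate implied by a minutes-per-percent figure."""
    bytes_per_percent = DRIVE_BYTES / 100          # 20 GB per percent
    return bytes_per_percent / (minutes_per_percent * 60) / 10**6

print(f"{clear_rate_mb_s(3):.0f} MB/s")    # one drive at ~3 min per percent
print(f"{clear_rate_mb_s(4.5):.0f} MB/s")  # each of two drives at ~4.5 min per percent
```

So one drive clears at roughly 111MB/s, while each of two concurrent drives drops to about 74MB/s, i.e. the shared link is already costing about a third of the per-drive speed.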


The other choice is to access the drives in each box in round-robin fashion so you are not accessing drives sequentially. That has its own downside in management and maintenance. I've noticed the same issues with various vendors in how they handle port multiplier access.


But during a parity check or drive rebuild, it will still be accessing all the drives at the same time. Accessing one drive at a time is no problem, like when writing data to the array. I tried staggering the drives in my first unRAID setup, but the difference in my use was minimal. I ended up using a cache drive with that setup. With my second unRAID setup, I put all the drives in order of each enclosure. I'm not using a cache drive in that setup and have been pleased with the speed results.


While the driver attempts to read the drives, I think it has to read each drive sequentially. So ordering the drives to access the channel once every round, versus accessing the channel 4 times one behind the other, may alleviate contention for the channel. Someone did this years ago when port multipliers were new and said it gave a slight increase in speed. YMMV. It may not be worth it considering how you have to arrange the drives.

 

I know when I tested PMP support using 4 drives simultaneously, I got something like 15MB/s per drive. In comparison, using the Silicon Image card, 1 drive was 120MB/s, 2 drives were 60MB/s, 3 drives were 40MB/s, and 4 drives were 30MB/s. There is still the option of putting the 2 eSATA bracket ports on the back of the unit after it is nibbled out. You may be able to fit 4 there; I haven't measured it. That's what I did with my old Chenbro chassis years back.
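Those Silicon Image numbers track simple link sharing almost exactly: with concurrent sequential streams behind one port-multiplier link, each of n drives gets about 1/n of the usable bandwidth. A sketch, taking the 120MB/s single-drive figure above as the usable link rate (the only assumption here):

```python
# Per-drive throughput when n drives stream concurrently through one
# shared port-multiplier link. LINK_MB_S comes from the 1-drive
# measurement quoted in the post above.
LINK_MB_S = 120

def per_drive(n_drives):
    """Approximate per-drive rate under fair sharing of the link."""
    return LINK_MB_S / n_drives

print([per_drive(n) for n in (1, 2, 3, 4)])  # [120.0, 60.0, 40.0, 30.0]
```

The 15MB/s-per-drive case is well below this fair-sharing model, which is what you would expect from command-based switching stalling the other drives on every command.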



I currently have five eSATA ports on the back: four from my Rosewill RC218 (two external on the card, plus two more from the two internal ports running to a two-port bracket) and the built-in port. Where would another eSATA port come from? Or would I need to create an opening to run the two internal ports from the Rosewill and then stick a card in the PCI Express x1 slot?


That was the idea I had a while back. BTW, there is a particular Syba card that allows 4 ports on a PCIe x1 slot.

It has 2 eSATA ports and 2 internal SATA ports. It's no longer available on the regular market; I found 2 on the secondary market. It worked for me. It wouldn't be the best performer, but it sure would max out the ports !!!  ;)

 

 

As far as the opening, there's enough room in back of the upper metal to drill and nibble space for eSATA ports.

You would need to remove parts to ensure no shavings fall on the motherboard, but it could give you 6 or 8 ports from the x4/x1 slot.


An x1 card will not work well for multiple enclosures (at least not multiple multi-bay enclosures). In my testing, the most drives I will use on an x1 card is four (which for me is one enclosure with four drives in it), and with an x4 card I don't like to exceed 16 drives (four enclosures). I do have 17 drives in my unRAID1 attached to a Rosewill card, but it is noticeably slower during the parity check because of that.
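Those rules of thumb line up with the lane bandwidth. A rough budget, assuming PCIe 1.x rates of about 250MB/s usable per lane (an assumed round figure; real controllers deliver somewhat less, and the cards discussed here may not use all their lanes):

```python
# Rough per-drive bandwidth budget behind a PCIe link, assuming about
# 250 MB/s usable per PCIe 1.x lane (an assumption for illustration).
LANE_MB_S = 250

def per_drive_budget(lanes, drives):
    """MB/s available per drive if all drives stream at once."""
    return lanes * LANE_MB_S / drives

print(per_drive_budget(1, 4))    # x1 card, 4 drives
print(per_drive_budget(4, 16))   # x4 card, 16 drives
```

Both come out around 60MB/s per drive, which is roughly parity-check territory; going past 4 drives on x1 or 16 on x4 pushes the per-drive budget below that.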

 

I've decided to just use a 4-bay enclosure with only two drives in it on the built-in eSATA port. Then I will move my five-bay enclosure from my unRAID1 to my unRAID3, while moving a 4-bay enclosure to unRAID1. Then I need to drop the cache drive on unRAID1, and I won't have a cache drive on unRAID3 either. That way I should be able to get 1 parity drive and 23 array drives in the setup.

 

Although I have found the N54L is kind of picky with my RC218 and the eSATA port. I could not use five MediaSonic enclosures, and I could not use five Sans Digital enclosures. I had to use a combination of both for it to boot and see all the drives. So I'm not sure what will happen when I move over the 5-bay Sans Digital enclosure. I had the same issue with my unRAID2 setup. I have no idea why, but I did not have the issue with my first unRAID build.


What speeds are people getting to the N54L when writing to a cache drive over the network? I recently set up a cache drive and I'm only getting around 45MB/s to 50MB/s write speeds. The read speeds are OK, but it seems to be writing slower than I would expect. The same cache drive in my unRAID1 setup gets write speeds of 70MB/s to 75MB/s. So I'm curious whether these speeds are normal.

 

I've tried the cache drive in a drive bay inside the N54L and also in a couple of my external enclosures. It made no difference in the speeds. Is there a setting I need to change in the BIOS to increase the speeds, or is this normal?
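One way to compare setups apples to apples is to time a fixed-size write directly, independent of whatever application is copying. A minimal sketch; the path in the usage comment is a placeholder for a file on the share or drive under test, not a path from this thread:

```python
import os
import time

def write_speed_mb_s(path, size_mb=256):
    """Write size_mb of incompressible data to path and return MB/s."""
    chunk = os.urandom(1024 * 1024)        # 1 MB block
    start = time.time()
    with open(path, "wb") as f:
        for _ in range(size_mb):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())               # count time to reach the disk
    elapsed = time.time() - start
    os.remove(path)                        # clean up the test file
    return size_mb / elapsed

# e.g. print(write_speed_mb_s("/mnt/cache/speedtest.bin"))
```

Running this from a client against the network share, and again locally on the server, separates network write speed from raw drive write speed.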


I tried turning the write cache off. When I initially turned it on, I only had drives in the internal drive bays, and I was only looking at speeds during a parity check, not transfer speeds, since I was transferring without a cache drive. I wish I had checked transfer speeds with a cache drive earlier.

 

Anyway, I turned off the write caching in the BIOS. The write speeds are still only 45MB/s to 50MB/s to the cache drive, but at least they don't fluctuate a lot like they did with write caching on.

 

These write speeds are slower than I would like. I would prefer at least 60MB/s write speeds for when I'm ripping BD titles, so I could rip two at a time. Otherwise, with the speeds I'm getting, I can only rip one. The read speeds are 75MB/s to 85MB/s from the cache and array drives, so I know the network connection is capable of faster speeds; I just don't understand why the write speeds are so slow over the network.


What is the model of your drive? How are you benchmarking the write speeds?


I thought I was getting faster results to the cache drive than that on my N40L, but I'm at work currently and can't test for you. I will check when I get home tonight. Maybe someone else can confirm sooner.

Upgrade the BIOS to the "modified" one.

Turn on the write cache.

 

You still may not see speeds that fast. Using the "dirty writes" hack, I see 100MB/sec, but then writes slow down to about 45 to 50MB/sec on my N54L (7200rpm 4TB Hitachi, almost full).

The writes will be faster on an empty drive.

 

On my N36L (a slower MicroServer), I'm seeing writes to a single drive of 70MB/sec+. But that is running the Xpenology OS as a test.
