First unRAID build to replace Syno NAS


stomp


Hello everyone!

 

I am currently looking for some parts to build my first unRAID server. Here is the current list:

 

Motherboard: ASUS M4A78LT-M LE (56.30 CHF)

CPU: AMD Sempron 145 (39.20 CHF)

RAM: KINGSTON ValueRam KVR1333D3N9K2/2G (35.95 CHF)

Case: COOLERMASTER Centurion 590 (91.00 CHF)

 

PSU: ANTEC SU-430 (already have it)

HDD: 5x SAMSUNG HD103UJ (already have them)

 

That means I can start with a total cost of 222.45 CHF (about $232).

 

I will use this server to stream media to my TViX. Since I mostly watch Blu-ray backups (usually around 40 GB each), I hope to get decent write speeds. If the Samsung drives are not fast enough, I am also planning to purchase a SAMSUNG Spinpoint F4 (140 MB/s write) as a cache drive. Initially I plan to use the onboard SATA ports (6x) and extend the system later with extra controllers. Ultimately, the goal is a 15-drive server using 5-in-3 docks.

 

Now, what do you think of this configuration? Will a faster CPU significantly improve overall performance? And is there enough room to expand (1x PCIe x16 + 1x PCIe x1 + 2x PCI + 6x onboard SATA ports)?

 

Thank you in advance!

 

stomp

 


Write speeds are limited by the parity calculation; most people see 25-35 MB/s. You can improve perceived write speeds with a cache disk, which sits outside the protected array; a scheduled mover then copies the data onto the array nightly, or on whatever schedule you choose.
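As a sketch of that schedule (the time shown is arbitrary, and the mover path is typical for unRAID but may differ by version), the nightly move boils down to a cron entry invoking the mover script:

```shell
# crontab entry (hypothetical schedule): kick off unRAID's mover at 03:40
# so the day's writes land on the cache disk and migrate to the
# parity-protected array overnight.
40 3 * * * /usr/local/sbin/mover
```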

 

Your configuration looks fine. A faster CPU won't make a difference for unRAID. If you add other packages that use more CPU cycles, it might.

 

Your board will let you add one eight-port SAS card, which would give you fourteen ports. If that's not enough, you could add a two-port card or two for a total of eighteen drives. There's no way to gracefully fit that many drives into that case, though, so fourteen or sixteen is a good upper limit. I have that case, and if I were planning on that many drives, I would get the Norco 4220 instead. The initial cost is higher, but three 5-in-3 drive cages cost about the same as the Norco. That may not be true where you live, of course.


I converted from a Syno as well (see my unRAID server in my sig).  I still use the Syno for its additional functionality, which I know unRAID could handle if I spent the time installing the various packages.  I've mounted my unRAID drives on the Syno via NFS.  unRAID will have no issues with playback of BD rips.
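For reference, the NFS mount on the Syno looks roughly like this (the hostname, share name, and mount point below are placeholders, not my actual setup):

```shell
# /etc/fstab entry on the Synology: mount an unRAID user share over NFS.
# "tower" and "/mnt/user/media" are placeholders; substitute your server's
# address and the share you exported from unRAID.
tower:/mnt/user/media  /mnt/unraid-media  nfs  ro,hard,intr  0  0
```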

 

Careful with the Samsung F4: it is a 4K Advanced Format drive, which results in reduced sequential writes with unRAID (you'll find multiple threads if you search for "Samsung F4").

 

If you plan to add additional packages, or to transcode or re-encode, then you may want a faster CPU.

 

My 140 does everything I want from the server.


I converted from a Syno as well (see my unRAID server in my sig).  I still use the Syno for its additional functionality, which I know unRAID could handle if I spent the time installing the various packages.  I've mounted my unRAID drives on the Syno via NFS.  unRAID will have no issues with playback of BD rips.

 

Careful with the Samsung F4: it is a 4K Advanced Format drive, which results in reduced sequential writes with unRAID (you'll find multiple threads if you search for "Samsung F4").

 

If you plan to add additional packages, or to transcode or re-encode, then you may want a faster CPU.

 

My 140 does everything I want from the server.

 

Thank you for the tip about the F4. I'm just looking for a cache drive that can approach the Gbit upper limit (theoretically 125 MB/s), so I'll look for something else. Regarding the CPU, I was only concerned about parity creation speed, but if the two aren't related, the Sempron will be fine. My Syno was fine and had lots of features; unfortunately I wasn't using any of them. I just need a NAS to stream files. Transfer speed matters to me, though, and the 30 MB/s of my Syno wasn't enough. Faster NAS models exist from Synology and others, but cost rises steeply with write performance, so that's a no-go for me.
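To put numbers on that "theoretically 125 MB/s": the raw line rate of gigabit Ethernet is indeed 125 MB/s, but Ethernet/IP/TCP framing at a standard 1500-byte MTU eats roughly 5% of it before any disk is involved. A quick back-of-the-envelope check:

```shell
# Raw gigabit line rate in MB/s: 10^9 bits/s over 8 bits/byte.
raw=$((1000000000 / 8 / 1000000))
echo "raw line rate: ${raw} MB/s"        # 125 MB/s

# With a 1500-byte MTU, each frame carries ~1460 bytes of TCP payload
# out of ~1538 bytes on the wire (preamble + headers + checksum + gap).
awk 'BEGIN { printf "usable payload rate: ~%.0f MB/s\n", 125 * 1460 / 1538 }'
```

So even a perfect disk could not see more than roughly 118-119 MB/s over a single gigabit link with standard frames, which is part of why jumbo frames help.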


Oh, and even with a fast drive and gigabit Ethernet, don't expect much past about 80 MB/s or so; 70 MB/s is probably more realistic.

 

I'm wondering though, how much writing are you planning to do on this array that you feel the need for all this speed? If you're using it as a media server, then you're just going to rip your movies and move them on there, right?


I ran some tests with an SSD as a cache drive, and the fastest transfer I ever saw was 74 MB/s.  I don't think anyone has documented a faster transfer than that.

 

By comparison, a 7200 rpm drive can manage transfers in the 65-70 MB/s range.  So the advantage of an SSD isn't really speed so much as lower power usage, lack of spin-up lag, etc.


Really? How disappointing. That points to a bottleneck elsewhere in the system. Good to know though. Since high cost, low capacity, and limited write capability all make an SSD unsuited to cache duty, I can't see power usage or lack of lag as enough of a benefit to justify using one.


Oh, and even with a fast drive and gigabit Ethernet, don't expect much past about 80 MB/s or so; 70 MB/s is probably more realistic.

 

I'm wondering though, how much writing are you planning to do on this array that you feel the need for all this speed? If you're using it as a media server, then you're just going to rip your movies and move them on there, right?

 

Yes, that's right. I'm just backing up discs on my PC and transferring them to the NAS. 70 MB/s sounds about right, I think; there is always a gap between theoretical and practical results. I'm curious, though, what explains such a large gap between 70 MB/s and 125 MB/s...


Remember, you're not going to be able to write to the array any faster than you can read from your source.  So if you're copying from a USB drive to your array, the USB drive will probably be the bottleneck.  Also, the cache drive only postpones the process of writing to the array; it doesn't eliminate it.  If you put in a 300 GB VelociRaptor as your cache drive, you'll have to stop loading once you've hit the 300 GB mark and wait for it to write that data to the array before you can move more data.  It would actually increase the overall copy time if you're just moving bulk data in huge chunks from existing drives to the array, because first you wait for the fast copy to the cache drive and then for the slow copy from the cache to the array.
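The arithmetic behind that warning, using assumed speeds (100 MB/s to the cache, 30 MB/s from cache to array) rather than measured ones, shows why bulk copies gain nothing:

```shell
# Hypothetical bulk copy: 300 GB of data, cache drive writes at 100 MB/s,
# parity-protected array writes at 30 MB/s.
awk 'BEGIN {
  mb = 300 * 1000                        # total size in MB
  direct    = mb / 30                    # copy straight to the array
  via_cache = mb / 100 + mb / 30         # fill cache first, then move to array
  printf "direct to array: %.0f s\n", direct
  printf "via cache drive: %.0f s\n", via_cache
}'
```

Under those assumptions the cache route takes 13,000 s against 10,000 s direct, because the slow array write happens either way and the cache copy is added on top.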

 

A cache drive is really best for bursty writes where you're sitting there waiting for smallish writes to finish before you can move on to the next task.


I'm aware that the limitation comes from the writing disk (well, in most cases). We should also not forget that HDD read and write speeds are not constant across the disk and are generally worse at its periphery. I just read an article from Tom's Hardware in which the author clearly demonstrates that HDDs are the bottleneck: he was able to transfer a 1 GB file at 111 MB/s from one RAM disk to another. If you want to come close to the theoretical limit of GbE, you really need a drive capable of reading and writing well above 125 MB/s. That's part of the reason I was looking at the F4, as it currently offers the best performance thanks to its high density.

 

A little note now regarding SSDs. Lots of people tend to assume that any SSD equals any other SSD, which is not the case. Some SSDs are extremely fast, but some aren't, and can even be slower than a typical HDD. I appreciate your testing with an SSD as a cache drive, but it's hard to draw conclusions unless you state what kind of SSD you were using. Still, it's clear that such a drive is not really appealing for cache duty: it's extremely expensive (especially the high-performance models), and write cycles are quite limited.

 

EDIT:

 

Just remembered that jumbo frames might also have a positive impact on transfer speeds (as I saw with my previous NAS).


stomp, you are correct that I didn't use the fastest SSDs available, but rather 'budget' models.  Two of them, actually: a 60 GB Corsair SSD and a 30 GB OCZ Agility SSD.

 

Here's the details on the SSDs.

 

...and here's my test results.

 

And I guess I remembered incorrectly: the max transfer I saw was 73 MB/s, not 74.  I wouldn't be surprised at all if a faster SSD allowed faster cache write speeds; as far as I know, no one has tested it.  I would be really curious to see how one of the RevoDrives behaved as a cache drive...

 

Also, by the way, HDDs tend to be slower towards the center of the disk, not the periphery (outer edge).  The outer tracks hold more data per revolution, so at a constant rotation speed more data passes under the head per second and the sustained rate is higher.  That's also why short-stroking a drive effectively makes it faster (albeit smaller).
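A rough illustration of the size of that effect (the radii below are assumptions for a typical 3.5-inch platter, not measured values): at constant RPM the linear velocity, and hence the data rate, scales with track radius.

```shell
# Constant RPM means linear velocity (and thus bytes/s at constant
# recording density along the track) is proportional to track radius.
# Assumed data zone for a 3.5" platter: inner radius ~20 mm, outer ~46 mm.
awk 'BEGIN { printf "outer/inner throughput ratio: ~%.1fx\n", 46 / 20 }'
```

That roughly 2x spread between outermost and innermost tracks matches the falling curve you see in any HD Tune-style sequential benchmark.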


As BubbaQ pointed out to me, the limiting factor is the network, so getting faster hard drives is pointless.

 

Honestly, I'm not so sure. As I explained, one of Tom's Hardware's testers was able to transfer files over GbE at 110+ MB/s. How? He used RAM disks, which exceed GbE's transfer limit by more than an order of magnitude. Now if you look at one of Synology's NAS units, the DS1010+, it's stated that the device can read and write at 100+ MB/s. Is that a reasonable claim from Syno? Yes, I think it is, as my DS410j performs exactly as stated on their website. And the DS1010+ is not based on RAM disks, of course; it's just 5 or 6 HDDs (I can't remember which).

 

Anyway, I'm not here as a beginner whining that write speeds are too low; I'm just trying to figure out how to optimize my future build. Things will become clearer once everything is running. Thanks to all for your contributions.


The Synology system can do RAID 5 or RAID 6, both of which can offer higher transfer rates than unRAID, albeit with more risk to data security (in my opinion - there are arguments on either side).

 

I'm unclear as to what the current bottleneck in unRAID is - whether it be the network, the disks, etc.  All I know is that I'm perfectly content with my transfer speeds.  Faster is usually better, but unRAID is fast enough.


Before delving into GbE speed, one has to be sure that the chosen motherboard is compatible with unRAID.

 

In theory, any recent motherboard based on an Intel or AMD chipset should be compatible, with the notable exception of whatever onboard LAN controller the manufacturer selected.

 

The ASUS specifications and manuals for the M4A78LT-M LE give no information on the GbE LAN controller used; one has to go to the driver download page to see that it uses an Atheros chip, which may not be supported, or if supported may cause you grief...


Before delving into GbE speed, one has to be sure that the chosen motherboard is compatible with unRAID.

 

In theory, any recent motherboard based on an Intel or AMD chipset should be compatible, with the notable exception of whatever onboard LAN controller the manufacturer selected.

 

The ASUS specifications and manuals for the M4A78LT-M LE give no information on the GbE LAN controller used; one has to go to the driver download page to see that it uses an Atheros chip, which may not be supported, or if supported may cause you grief...

 

Oh yes, you're right: the LE has Atheros whereas the regular one has Realtek. I'll go for the M4A78LT-M then, thanks a lot. Besides this, I was wondering whether it's possible to use two GbE ports (one on the motherboard and one on an additional network card) and configure only one of them to use jumbo frames. The reason I ask is that my NMT doesn't support jumbo frames, so I need this feature deactivated between the server and the player. However, I still want jumbo frames enabled between the server and my PC, for transfer purposes. Is it possible? Will it lead to conflicts?
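What I have in mind would look something like this on a Linux box (the interface names are assumptions, and whether unRAID's network configuration allows per-interface MTU like this is exactly my question):

```shell
# Hypothetical per-interface MTU setup: jumbo frames on the NIC that
# talks to the PC, standard frames on the NIC serving the media player.
ifconfig eth0 mtu 9000   # PC-facing link: jumbo frames
ifconfig eth1 mtu 1500   # NMT-facing link: standard frames
```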

 

EDIT:

 

Just received a 320 GB Samsung F4. I've been doing some tests on it, planning to use it as a working drive while waiting for my NAS. It's blazing fast!

 

[screenshot: benchmark results for the 320 GB Samsung F4]


Archived

This topic is now archived and is closed to further replies.
