
Howto: Build a unRaid box (just to document what I did)


goofygrin


Coming from Gentoo, I think there is a definite lack of Howtos for unRaid, so here's my first cut at one... this one is for the people just thinking about building an unRaid server.

 

Introduction

As you probably know, unRaid is a Linux-based file server system.  It is based upon the Slackware distribution of Linux.

 

unRaid allows you to build an "array" of hard drives and share those drives across the local network for all users to access the files on those drives.  In order to help combat physical disk failure, unRaid allows for the inclusion of a parity drive, which will be used to contain information about all of the files on all of the other drives.  This information can be used to rebuild the information on a drive if it is replaced (either for an upgrade or because it died).
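The parity idea above is just a bitwise XOR across the data drives. Here's a toy sketch in shell arithmetic — the byte values are made up for illustration, and real unRaid operates on whole disk blocks inside the driver, not single bytes:

```shell
#!/bin/sh
# Toy example: parity is the XOR of the same block on every data drive.
d1=0xA7; d2=0x3C; d3=0x51          # hypothetical bytes from three data drives
parity=$(( d1 ^ d2 ^ d3 ))         # what the parity drive stores

# If disk2 dies, its byte is recovered by XORing parity with the survivors:
recovered=$(( parity ^ d1 ^ d3 ))

printf 'parity=0x%02X recovered=0x%02X\n' "$parity" "$recovered"
# prints: parity=0xCA recovered=0x3C   (recovered matches the original d2)
```

The same XOR trick is what lets unRaid rebuild a replaced drive: XOR the parity drive against every surviving data drive.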

 

The benefits of unRaid versus other systems such as FreeNAS are:

  • Ability to have different sized drives in the array -- RAID normally requires you to have all drives in the system be of the same physical size
  • Ability to dynamically add drives to the array as they are acquired -- most RAID arrays are very sensitive to "growing" the array and the operation typically involves a full backup before attempting to grow the array.  If you've got 5 TB of data, this likely isn't something you're going to attempt!
  • Non-striping of the data -- normally, the data in a RAID is striped across the disks, meaning portions of each file are stored on various drives.  In unRaid, files are stored contiguously on a single disk.  This means that if there is a catastrophic problem (e.g. multiple disks fail), you can likely retrieve the information from a single disk.

 

Of course, there are some drawbacks to unRaid:

  • Limited hardware support
  • It's not free (for more than 3 drives)
  • The drives are not merged into one large logical volume (or directory)
  • Poor security (but it is "good enough")
  • Since the data is not "striped" across multiple drives, read performance is limited to the speed of the individual drive.  In most RAID implementations, data can be read much faster than an individual drive's performance because various drives are accessed simultaneously, thus aggregating the read performance.

 

Hardware

 

CPU: Basically anything.  I went with a Celeron 2.8GHz because it was cheap.  Almost any recent processor is fast enough, but you might want to avoid anything < 1GHz (so ITX is a "maybe").

 

Memory: You should have at least 512MB, but at today's prices, "splurge" for a gig.

 

Video card: Use your motherboard's built-in video or the cheapest card you can get off eBay; you won't use it much after the initial setup.

 

Hard drives: either PATA or SATA drives of your choice.  Remember, though, that the largest drive will be used for parity, and those old drives you've got lying around will probably die as soon as you put something important on them!  SATA drives will be faster, and their much smaller cables are easier to route.  Also remember that they tend to take different power supply connections, so plan accordingly (do you need adapters?).

 

Motherboard: You will need a newer motherboard that allows you to boot from a USB stick; a lot of older ones don't.  I went with the Gigabyte GA-945GZM-S2 because it was cheap ($55) and had everything you need: 4 SATA ports & gigabit Ethernet.  You will need to be careful here: if you choose something else, make sure that the onboard devices are supported.  Support for these has increased in the 4.x betas, but it still boils down to:

Onboard SATA: Intel ICHx, some VIA chipsets (and a couple more, I'm sure)

Onboard LAN (gigabit only): Intel, Marvell, some VIA (slow) and Realtek

 

PCI SATA Cards: I got the Promise SATA300 TX4 card (2 of them) because they are reasonably cheap ($55 each) and I needed more SATA ports :)  There are a couple of other cards that work as well (I trust Promise, so when I saw that these worked, I just stopped looking).

 

USB Memory Stick: at least 512MB.  I got the Sony Micro Vault Tiny, 512 MB ($15).  It is *very* tiny.

 

Power Supply: If you're going to be building a server that's up a long time and has a lot of hard drives in it, then you will need a quality power supply (or more than one, actually!).  What defines a quality power supply?  That's a big debate.  There are a few reputable companies out there that just about everyone agrees on, like Enermax.  My personal favorite is Ultra, but that's because they like to give away power supplies (in the past month, I've gotten a 600W free after rebate, a 300W for $5, and a couple of 400W units bundled with a UPS for $40).  They also give you a lifetime warranty if you register the product online.  If you can get a modular power supply (where the cables are separate and you only use what you need), all the better; they are very nice to work with.  If you're going with multiple power supplies, you need to think about how you will turn the second one on.  You might have to build (which is what I've done in the past) or buy an adapter that lets one power supply control the other.

 

Case: Almost any case will do for a small array; bigger arrays will need more thought.  You will need to think about ventilation.  Hard drives get hot, and they are very susceptible to heat (most are rated to 55°C).  There are 5-in-3 adapters that allow five hard drives to be placed into three 5.25" drive bays; however, I'd be concerned about heat with the drives that close together (plus each one is $120, so I'd rather just spend the $150 on a great big case).  I am reusing an old Antec case.  Originally I put the three drives into the drive tray and didn't wire up the fan.  During the parity calculation, the temperatures reached almost 50°C, so I wired up the 80mm fan in the front of the drive cage.  In 10 minutes, the temperatures were down to around 30°C, so fans and ducting are important.

 

A word about network cards

unRaid used to be very rigid in its support for network cards: it was gigabit Intel- or Marvell-based cards and that was it.  In the past few months, support for various other network chipsets such as VIA and Realtek has been added [ed. to the 4.0 codebase or to the 3.x as well? -- unsure!].  Before you go buying a new card or motherboard, you might verify in these forums whether the card you've got will actually work.  I originally bought a PCI nic with a Marvell chipset, but when I booted up, unRaid recognized my onboard nic for a pleasant surprise (now I've got a spare nic, but I'm not sure what I'm going to do with it :D).

 

Installation

So now you've got some hardware.  Slap it all in your case.

 

On your other PC:

  • put your memory stick in
  • format it as FAT (the instructions say FAT32, but that didn't work for me), with the label UNRAID
  • Get syslinux.exe from http://www.lime-technology.com/dnlds/
  • Run syslinux x: (where x: is the drive letter for your memory stick)
  • Download the distribution you want from http://www.lime-technology.com/dnlds/ and extract all the files to the memory stick
  • Remove the memory stick from your computer
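From a Windows command prompt, the format-and-syslinux steps above look roughly like this — X: is a placeholder for your memory stick's drive letter, so double-check it before formatting, since format erases the drive:

```bat
:: Quick-format the stick as FAT with the volume label UNRAID
format X: /FS:FAT /V:UNRAID /Q

:: Make the stick bootable (run from the folder containing syslinux.exe)
syslinux.exe X:
```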

 

On the server:

  • Insert the memory stick
  • Set up the server to boot from the USB stick (on mine the option was USB-HDD)
  • Boot from the memory stick.  This will take a couple minutes with lots of stuff scrolling across the screen.
  • Once you get to the prompt, type "root" and press enter, no password will be needed

 

Verify that you got an ip address by typing "ifconfig" and pressing enter.  You should get an output like this:

eth0      Link encap:Ethernet  HWaddr 00:1A:4D:27:13:8A
          inet addr:10.0.1.57  Bcast:10.0.255.255  Mask:255.255.0.0
          UP BROADCAST NOTRAILERS RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:16261 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2850 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:1067783 (1.0 MiB)  TX bytes:1193253 (1.1 MiB)
          Interrupt:21 Base address:0xa000

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)

  • If everything on your server looks similar to the above, try to open a browser on your PC to "http://tower".  You should be taken to the unRaid start page.

 

Setting Everything Up

  • From the Main Page, select "Devices" from the menu.
  • In your Disk devices, verify that all your hard drives are showing up.  If not, then you either have connected them to an unsupported device or you forgot to connect them.
  • Assign your largest drive as the parity device, and assign the other drives as the data devices.
  • Go back to the Main screen and start the parity sync (if it's not already)
  • Congratulations, you've basically set everything up!  Let the parity sync finish (this can take a while) and everything will be ok.

 

Odds and Ends

  • Shares:
    By default, unRaid creates a share for each disk in the array (\\tower\disk1, \\tower\disk2, etc.).  This isn't usually what people want.  unRaid supports the concept of User Shares: you can create folders in each of the disk shares, and a User Share will "aggregate" all of the content from each disk into one share with that folder name.
     
    So, if you have \\tower\disk1\DVD and \\tower\disk2\DVD, a user share can be created for \\tower\DVD.
     
    To turn on user shares, go to the Shares menu item.  Select an option from the User Shares dropdown.  If you want the shares to be read-only, select "Export read-only."
     
    In order to pick up new User Shares (or top level directories), you will need to press the "Re-scan" button.
     
  • Security:
    unRaid has a couple of security flaws, the biggest being that the root password is blank.  This is a big no-no in the Linux community.  It is being worked on for a future release.
     
    Share security is important as well.  If anyone can get to the share and delete your files, that is bad.  I recommend making the Disk shares "Export read/write, hidden" (the \\tower\disk1 share will still be there, but it will be hidden, so you will have to know it's there to get to it) and making the User shares "Export read-only" (unless people will need to edit files on the share).
     

 

Helpful Info

  • You can telnet into the server, either using the built-in telnet app or PuTTY (a free download).
  • Once telnetted in (or from the keyboard on the server), you can run important Linux commands such as:
    • top: similar to the Task Manager in Windows; press q to exit
    • df -h: tells you how much space is left on your drives
    • shutdown -h now: tells the system to shut down, right now
    • shutdown -r now: tells the system to reboot, right now
    • ifconfig: shows the IP configuration (similar to ipconfig in Windows)
    • tail -n # <file>: display the last # lines of <file>
    • ls: same as dir in DOS
    • du -s <directory>: shows how big a certain directory is
    • smbmount //server/share directory: mount a Windows (or Samba) share at the given directory -- this is useful for copying files from one server to another (it will be faster than copying in Windows Explorer)
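A quick illustration of a few of these, using a scratch directory and a fake log file made up for the example (the df/du numbers will vary on your system):

```shell
#!/bin/sh
# Make a scratch directory with a small fake log file to play with.
mkdir -p /tmp/unraid-demo
seq 1 100 > /tmp/unraid-demo/syslog.txt   # 100 numbered lines

tail -n 3 /tmp/unraid-demo/syslog.txt     # last 3 lines: 98, 99, 100
du -s /tmp/unraid-demo                    # total size of the directory
df -h /tmp                                # free space on that filesystem

# smbmount needs a live server, so just as a (hypothetical) illustration:
# smbmount //tower/disk1 /mnt/tmp
```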

     


Excellent start goofygrin!  A few thoughts...

 

1.  "Video card:  don't need one"  I don't agree (in general).  True, after you get your server all set up, you primarily use the browser-based SMU for configuration management or telnet straight into the box, but nothing beats a video card during that initial setup phase.  It's also true that some motherboards allow for video redirection through the serial port, but that's beyond most folks' ability (including mine).  Maybe it should be reworded: "Video card: Use your mb's built-in video capability or the cheapest card available as you won't use it much after the initial setup."

 

2.  You should probably add "Power Supply" to the Hardware section.  I'm not an engineer, nor play one on TV, but I believe Tom recommends reputable units like Enermax, Antec (to name a few) with a single high amperage 12v rail.

 

3.  I recommend expanding the Network portion of the Howto as this is the area that hangs up a lot of folks.


Great job overall. I just want to chime in about the pros and cons of unRAID. One of the fundamental drawbacks of unRAID is its performance. Because it does NOT stripe data across multiple drives, reading/writing data does not get the potential throughput of having multiple drives handle striped data at the same time.


Excellent start goofygrin!  A few thoughts...

 

1.  "Video card:  don't need one"  I don't agree (in general).  True, after you get your server all set up, you primarily use the browser-based SMU for configuration management or telnet straight into the box, but nothing beats a video card during that initial setup phase.  It's also true that some motherboards allow for video redirection through the serial port, but that's beyond most folks' ability (including mine).  Maybe it should be reworded: "Video card: Use your mb's built-in video capability or the cheapest card available as you won't use it much after the initial setup."

 

You're right.  I'll update that.

 

2.  You should probably add "Power Supply" to the Hardware section.  I'm not an engineer, nor play one on TV, but I believe Tom recommends reputable units like Enermax, Antec (to name a few) with a single high amperage 12v rail.

 

Hmm.  Power supplies these days are just crazy.  Given that most Antec units are garbage (they used to be great) and companies like Ultra are almost giving away power supplies (free after rebate, or $5, or $40 with a UPS), it's difficult to recommend something.  But I guess most people never think about the fact that with 10 drives they will need a lot of juice.

 

3.  I recommend expanding the Network portion of the Howto as this is the area that hangs up a lot of folks.

 

This is a grey area for me since I don't know a lot about it.  Originally I thought that only Intel and Marvell chipsets were supported, and I had bought an outboard Marvell nic.  Then when I booted up unRaid for the first time, it recognized the onboard LAN, so I was pleasantly surprised!  If you have anything that would be helpful, I will definitely add it.


Great job overall. I just want to chime in about the pros and cons of unRAID. One of the fundamental drawbacks of unRAID is its performance. Because it does NOT stripe data across multiple drives, reading/writing data does not get the potential throughput of having multiple drives handle striped data at the same time.

 

This could actually be a big deal to me, and I need to think about it because of the way that I tend to use my media servers.  I have Xboxes at almost all of my TVs running XBMC.  They stream full ISO images of DVDs or MP3 or FLAC audio files quite a bit.  I also use the media servers as backup locations for the computers in my house.

 

If I was to try and stream a couple movies and they were on the same physical drive, I'm not sure that it could keep up.  Maybe an experiment is worthwhile...

 

I'll add it though!


Hmm.  Power supplies these days are just crazy.  Given that most Antec units are garbage (they used to be great) and companies like Ultra are almost giving away power supplies (free after rebate, or $5, or $40 with a UPS), it's difficult to recommend something.  But I guess most people never think about the fact that with 10 drives they will need a lot of juice.

 

I agree that some excellent brands have fallen off the wagon, and some no-name units are quite good.  I was just trying to warn folks not to start an unRAID build with an undersized PSU, particularly if you have plans for 8, 10, or 12 drives.

 

This is a grey area for me since I don't know a lot about it.  Originally I thought that only Intel and Marvell chipsets were supported, and I had bought an outboard Marvell nic.  Then when I booted up unRaid for the first time, it recognized the onboard LAN, so I was pleasantly surprised!  If you have anything that would be helpful, I will definitely add it.

 

I'm using my Asus's onboard NIC as well.  What I was trying to suggest was a little primer to help out folks who ask, "why can't I see my unRAID server on my network?" (i.e., IP & subnet settings, DHCP, workgroup names, etc.)


If I was to try and stream a couple movies and they were on the same physical drive, I'm not sure that it could keep up.  Maybe an experiment is worthwhile...

 

While I agree with ysss' point that unRAID's lack of striping hinders its "maximum potential" throughput, other users like Joe L. have shown unRAID's ability to support multiple simultaneous video streams (see here...http://lime-technology.com/forum/index.php?topic=633.0).  Based on his research, and comments by Tom, I'd argue the first bottleneck you're likely to hit is network performance.


If I was to try and stream a couple movies and they were on the same physical drive, I'm not sure that it could keep up.  Maybe an experiment is worthwhile...

 

While I agree with ysss' point that unRAID's lack of striping hinders its "maximum potential" throughput, other users like Joe L. have shown unRAID's ability to support multiple simultaneous video streams (see here...http://lime-technology.com/forum/index.php?topic=633.0).  Based on his research, and comments by Tom, I'd argue the first bottleneck you're likely to hit is network performance.

 

I'd agree with that, especially given that I'm mostly on a 10/100 network.  What is interesting is that after copying 70 gigs of MP3s from my old RAID to the unRaid, running a comparison between the two (reading every file and its size) was WAY faster on the old RAID than on the unRaid (like 30 seconds vs. 2-3 minutes).  (I was just doing a "Properties" on the folder on both servers from a Windows PC, and you could see the numbers rolling up on the unRaid and rolling up MUCH faster on the RAID5.)


What is interesting is that after copying 70 gigs of MP3s from my old RAID to the unRaid, running a comparison between the two (reading every file and its size) was WAY faster on the old RAID than on the unRaid (like 30 seconds vs. 2-3 minutes).

 

True..."writes" appear to be unRAID's weakness (in fact, I believe the "penalty" is directly related to the number of drives, because unRAID has to touch every drive in order to calculate parity).  One way to speed up bulk loads is temporarily turning off the parity drive (i.e., forcing it to recalculate afterwards).


If I was to try and stream a couple movies and they were on the same physical drive, I'm not sure that it could keep up.  Maybe an experiment is worthwhile...

 

While I agree with ysss' point that unRAID's lack of striping hinders its "maximum potential" throughput, other users like Joe L. have shown unRAID's ability to support multiple simultaneous video streams (see here...http://lime-technology.com/forum/index.php?topic=633.0).  Based on his research, and comments by Tom, I'd argue the first bottleneck you're likely to hit is network performance.

 

Yes, you're right about the practical observation that the network becomes the bottleneck on most installations nowadays. I was merely highlighting the main drawback of unRaid compared to conventional striped RAID systems.

 

I saw a striking example when I set up a 5-drive ZFS system (ZFS is the OpenSolaris filesystem that employs striped RAID). My ZFS machine can achieve a sustained read performance of 200MB/sec, while my unRaid setup maintains a 50MB/sec read performance, and even lower write performance, with the same hardware components.

 

Horses for courses they say... and I DO love my unraid system for all its advantages and brilliant design ;)


 

  • smbmount //server/share directory: mount a windows (or samba) share to the given directory -- this is useful for copying files from one server to the other (will be faster than copying in Windows Explorer)

 

 

Can someone explain this one more?  I've got most of my stuff copied over, but still have quite a bit left.  Copying a 7GB movie over a 100Mb network takes about 20 minutes, I think.  Is this a way to speed that up?

 

Thanks

 


What is interesting is that after copying 70 gigs of MP3s from my old RAID to the unRaid, running a comparison between the two (reading every file and its size) was WAY faster on the old RAID than on the unRaid (like 30 seconds vs. 2-3 minutes).

 

True..."writes" appear to be unRAID's weakness (in fact, I believe the "penalty" is directly related to the number of drives, because unRAID has to touch every drive in order to calculate parity).  One way to speed up bulk loads is temporarily turning off the parity drive (i.e., forcing it to recalculate afterwards).

Actually, when writing to a drive, it is only necessary to pre-read the existing contents of the blocks being written on the data and parity drives, and then write to both the data drive and the parity drive.  It is not necessary to read or even spin up the other data drives if they are idle.

 

For any given bit on the data drive, if it is currently a zero and the new bit being written is also a zero, the parity bit at that position is not changed.  If the current bit is a one and the new bit is also a one, again the parity bit is unchanged.

When the old and new bits in a given bit position are different, the parity bit for that position needs to be flipped (from one to zero, or zero to one).

 

You are correct that disabling parity can speed bulk loads... and INITIAL parity calculation speed is related to the number of data drives (as all of them need to be read to first calculate parity for any given bit), but subsequent write performance does not change with multiple drives.  It is the same for 1 data drive or 13.
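Joe L.'s read-modify-write update can be sketched with toy shell arithmetic (the byte values are made up; real updates happen per block in the driver): the new parity is just the old parity XOR the old data XOR the new data.

```shell
#!/bin/sh
# Read-modify-write parity update: XOR out the old data, XOR in the new.
old_data=0x5A; new_data=0xF0; old_parity=0x99   # hypothetical bytes
new_parity=$(( old_parity ^ old_data ^ new_data ))
printf 'new parity byte: 0x%02X\n' "$new_parity"
# prints: new parity byte: 0x33
# No other data drive had to be read or spun up for this write.
```

This is why a write touches only two spindles (data + parity), no matter how many data drives the array holds.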

 

 


You're correct Joe (as usual)  ::)

 

I went back and read Tom's note on the issue and realized that only one drive is involved in a read (data) and two (data & parity) in a write.  The two exceptions are initial parity (where all the drives are touched) and in the case of a single drive failure (since it has to recalculate the missing data from all the remaining drives on the fly).


Archived

This topic is now archived and is closed to further replies.
