
Using Unraid in a mission-critical setting



Hello,

 

I am an IT admin for a network of 14 hospitals.  We are interested in using Unraid as a system-wide backup solution.  I have a few questions I was hoping you fine folks could help me answer.

 

1. Does Unraid support NIC "teaming"?  I may have 7 facilities backing up to Unraid at once.

 

2. How easy is it to mirror one Unraid to another for redundancy?

 

3. Does Unraid support email notification of failed drives?

 

4. I plan to use server hardware for this build. Will my motherboard matter much if I am planning to use three Supermicro 8-lane SAS controllers that I know are compatible?

 

5. Will Unraid ever support more than 21 drives?

 

I understand that some of these issues are touched on in the forums, but some of the answers are quite old.  I was hoping to get up-to-date info. Thanks!

 

 

 

Link to comment

I'll answer what I can.

 

1. Does Unraid support NIC "teaming"?  I may have 7 facilities backing up to Unraid at once.

I believe there may be some beta support for bonding two NICs, but it just came out and I don't know its current status.

The biggest bottleneck will be the parity drive, unless you use a cache drive.

unRAID is NOT known for its write speed. Expect write speeds of 20-40 MB/s to the parity-protected array.

Writes to the cache drive are probably faster (up to 60 MB/s has been reported), but once you start adding multiple streams, all the head movement will cut into that speed.

Using an Areca controller with advanced caching for the parity drive may help, but only by a small amount.
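
For what it's worth, on a stock Linux box bonding is normally set up roughly like this. I can't say how the unRAID beta exposes it, so treat this as a generic sketch (the mode and addresses are examples), not unRAID instructions:

    # Generic Linux bonding sketch, NOT unRAID-specific.
    # 802.3ad (LACP) requires a switch that supports it.
    modprobe bonding mode=802.3ad miimon=100
    ifconfig bond0 192.168.1.10 netmask 255.255.255.0 up
    ifenslave bond0 eth0 eth1

Also keep in mind that link aggregation does not speed up a single transfer; one stream still tops out at one link's speed. The gain comes from spreading multiple simultaneous clients, like your 7 facilities, across the links.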

 

2. How easy is it to mirror one Unraid to another for redundancy?

It's easy to use rsync to copy disk1 to disk1 on another server.
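
For example, something as simple as this does the job (the hostname "tower2" and the paths are placeholders, and it assumes SSH access between the two boxes):

    # Hypothetical one-liner: mirror this server's disk1 to disk1 on a second
    # unRAID box over SSH. Hostname and paths are placeholders.
    rsync -av --delete /mnt/disk1/ root@tower2:/mnt/disk1/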

 

3. Does Unraid support email notification of failed drives?

Not out of the box. There are third-party plugin tools that do this.

Eventually I plan to get Nagios running on unRAID.
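
Until then, one rough do-it-yourself option is a cron script that checks SMART health and emails on anything suspicious. This is only a sketch, not an official unRAID feature: it assumes smartmontools is installed, that the box can send mail, and the address is a placeholder.

    #!/bin/bash
    # Rough homegrown alert: email if any drive's SMART overall-health check
    # does not report PASSED. Adjust the device glob and address to suit.
    for dev in /dev/sd?; do
        if ! smartctl -H "$dev" | grep -q PASSED; then
            echo "SMART health check failed on $dev" | mail -s "unRAID drive alert: $dev" admin@example.com
        fi
    done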

 

5. Will Unraid ever support more than 21 drives?

There is already a "placeholder" for adding a "few" more drives in the unRAID 5.x beta, but it's not in use at the current time.

There is a practical limit to how many drives you can fit in a chassis and protect with one parity drive.

Link to comment

Thanks for the super-fast reply.  As far as backing up one Unraid to another...

 

We have two datacenters, and I would like to use Unraid in each, keeping an incremental copy of the data.  For instance, Datacenter 1 is primary; nightly, I would like to have Unraid 1 sync all incremental changes to Unraid 2 in Datacenter 2.  Possible?  And is it a good idea...?

 

Secondly, are you suggesting that NIC teaming will not aid performance?  I understand the limitations of Unraid and how the cache drive fits in, but NIC bonding was a question posed to me by the people who write the checks.

 

Oh, and what happens if I write more data than my cache drive can hold?  Say I have a 3TB cache and write 4TB of data...?

 

Thanks again

Link to comment

Thanks for the super-fast reply.  As far as backing up one Unraid to another...

 

We have two datacenters, and I would like to use Unraid in each, keeping an incremental copy of the data.  For instance, Datacenter 1 is primary; nightly, I would like to have Unraid 1 sync all incremental changes to Unraid 2 in Datacenter 2.  Possible?  And is it a good idea...?

 

Rsync will handle this with flying colors.
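
A rough sketch of that nightly job, run from cron on Unraid 1 (the hostname, disk list, and SSH-as-root access are all assumptions; adjust to your layout):

    #!/bin/bash
    # Nightly off-site sync sketch. rsync only transfers changed files, so after
    # the first full copy each run is effectively incremental.
    for d in disk1 disk2 disk3; do
        rsync -av --delete "/mnt/$d/" "root@unraid2:/mnt/$d/"
    done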

 

Secondly, are you suggesting that NIC teaming will not aid performance?  I understand the limitations of Unraid and how the cache drive fits in, but NIC bonding was a question posed to me by the people who write the checks.

 

The feature is too new to comment on. It just came out in a recent beta; you would have to test it.

 

Oh, and what happens if I write more data than my cache drive can hold?  Say I have a 3TB cache and write 4TB of data...?

 

I believe the transfer would fail unless you were incrementally moving some of the data from the cache to the parity-protected array.

In this case it may make sense to look at an Areca controller for the cache and parity.

You can do RAID1 on the cache and RAID0 on the parity.

Or, if you need a really big cache volume, you can do a RAID5/6 cache via Areca hardware RAID.

Then move it off to the protected array over time.

This would let your cache drive exceed a single spindle's size.
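
And if you're worried about the cache filling up between nightly mover runs, one option is simply to run the mover more often from cron. The script path below is an assumption on my part, so verify where the mover lives on your version:

    # Example crontab entry: empty the cache every 4 hours instead of only nightly.
    0 */4 * * * /usr/local/sbin/mover >/dev/null 2>&1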

 

Link to comment

Awesome, I didn't even know RAID0 was possible to use for parity or cache...

 

So for example, I plan to use three 8-lane SAS controllers in my rig.  The onboard controller for the board supports RAID, so in theory, I could set up a RAID0 on that controller and use that volume (2x 3TB) as a super-fast cache drive? (Without investing additional money, that is...)

Link to comment

We did take a long look at using unRAID as a target for desktop and server backups at work.

We were given an incredibly small budget for the project, which eliminated almost all enterprise solutions once the servers were purchased.

 

The real question is: how large a backup, from how many sources at once, and what is your time limit?

 

We were never able to get unRAID to perform fast enough for our needs. We needed to back up about 100 workstations in about an 8-hour window.

Because most of the users were power users and software devs, we were backing up as much as 8TB a night.

At first, it was the slow write speed that held us back.

 

We were able to create a script that allowed only 8 PCs to back up at once (a minimal sketch of the idea is below); this also helped the oversaturation issue.
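
Something along these lines, though our real script was more involved (the client list, share path, and destination are placeholders):

    # Throttle sketch: GNU xargs runs at most 8 rsync jobs at a time, one per
    # client hostname read from clients.txt.
    xargs -P 8 -I{} rsync -a "{}:/backup-share/" "/mnt/cache/backups/{}/" < clients.txt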

 

We since realized that by using an ARC-1222 RAID controller with a RAID5 array for the unRAID cache drive, and then letting the mover script trickle the data over to the unRAID drives all day, the drive speed limit was overcome.

The next limit was the single 1-gigabit NIC: we could saturate it all night and still not quite make the deadline.
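
(For the math: 8TB in an 8-hour window works out to roughly 280 MB/s sustained, while a single gigabit link tops out around 110-120 MB/s in practice, so one NIC alone was never going to make it.)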

 

In the end we went with a different solution that allowed us to bond four 1-gigabit NICs and use ZFS pools.

 

If the bonding does work in 5.0-beta13, it might be a feasible solution for us now.

Link to comment

Awesome, I didn't even know RAID0 was possible to use for parity or cache...

 

So for example, I plan to use three 8-lane SAS controllers in my rig.  The onboard controller for the board supports RAID, so in theory, I could set up a RAID0 on that controller and use that volume (2x 3TB) as a super-fast cache drive? (Without investing additional money, that is...)

 

You would need to use an Areca or 3ware RAID controller.

The hardware RAID controller handles the RAID levels, which are transparent to unRAID: RAID0 parity, RAID1 cache, RAID5 cache.

 

It's been tested and proven to work for RAID0/RAID1 by me and another member.

And RAID5 on the Areca by Johnm.

 

Link to comment

Awesome, I didn't even know RAID0 was possible to use for parity or cache...

 

So for example, I plan to use three 8-lane SAS controllers in my rig.  The onboard controller for the board supports RAID, so in theory, I could set up a RAID0 on that controller and use that volume (2x 3TB) as a super-fast cache drive? (Without investing additional money, that is...)

 

Using a RAID0 for cache does leave that data without parity protection. In a "mission critical" situation, I would never use a RAID0.

 

Also, if you meant the motherboard's "onboard RAID", that would probably not work in a *NIX environment; those are usually software RAIDs controlled by Windows drivers.

 

You would need an LSI, (3ware?), or Areca controller.

 

EDIT: I should point out there is a bug in beta13 with LSI cards. It will be hard to test bonding in a box with LSI cards installed.

Link to comment

Awesome, I didn't even know RAID0 was possible to use for parity or cache...

 

So for example, I plan to use three 8-lane SAS controllers in my rig.  The onboard controller for the board supports RAID, so in theory, I could set up a RAID0 on that controller and use that volume (2x 3TB) as a super-fast cache drive? (Without investing additional money, that is...)

 

Using a RAID0 for cache does leave that data without parity protection. In a "mission critical" situation, I would never use a RAID0.

 

Also, if you meant the motherboard's "onboard RAID", that would probably not work in a *NIX environment; those are usually software RAIDs controlled by Windows drivers.

 

You would need an LSI, (3ware?), or Areca controller.

 

EDIT: I should point out there is a bug in beta13 with LSI cards. It will be hard to test bonding in a box with LSI cards installed.

 

I would not use RAID0 for cache either. Perhaps RAID5 if you need more than 3TB of temporary high-speed storage.

 

I do use RAID0 for my parity though. I have an exercised spare drive, so if it fails, I can pop it in.

 

 

How do you plan to back up the data? Program? Mounting? Protocol?

 

Link to comment

 

 

 

 

How do you plan to back up the data? Program? Mounting? Protocol?

 

 

At the risk of sounding like a total newbie, I'm afraid I have no idea what you mean.  I have done a fair amount of reading on Unraid and have yet to see this mentioned...  Can you elaborate?

Link to comment

At the risk of sounding like a total newbie, I'm afraid I have no idea what you mean.  I have done a fair amount of reading on Unraid and have yet to see this mentioned...  Can you elaborate?

I think what he meant was the desktop end of things: what program are you going to use to send data to the server, and what protocol are you planning to use to connect to it?

Link to comment

At the risk of sounding like a total newbie, I'm afraid I have no idea what you mean.  I have done a fair amount of reading on Unraid and have yet to see this mentioned...  Can you elaborate?

I think what he meant was the desktop end of things: what program are you going to use to send data to the server, and what protocol are you planning to use to connect to it?

 

Oh... duh.  Sorry.  I plan to use network shares, either by manual copy or by scheduled tasks on some servers.  Perhaps I will use something like Robocopy.  Haven't decided yet.

Link to comment

I'll offer my experience...

 

If you're doing any kind of sync, it will generally scan first, then copy. It takes around 8 hours for my work unRAID server (archives) to compare 8TB and then copy/move/change 1TB worth of data. That can be many small files and many small changes.

 

If you're using large files, it shouldn't take anywhere near as long.

Link to comment

