unpavedkevin

4K Video editing NAS with 10gb is UNRAID the way to go?


I have been doing research on solutions for my small video production facility. I am looking to have a NAS that 2 editors can access data from at the same time. I plan on building a server with an E5 Xeon, 16GB of ECC RAM, a dual 10Gb Ethernet card, and 8x 4TB enterprise 7200rpm drives. I plan on connecting both editing stations over SFP+ cables, one being a hackintosh and the other a PC.

 

Is unraid a safe, fast, viable option? I was also looking at FreeNAS, but I like the simplicity of unraid. I heard it was not fast though... has that changed? I want to be able to saturate the 10Gb speeds on each system. Any thoughts? Is all this possible with the newer versions of unraid?


Unraid does not stripe array drives, so you will never exceed the speed of each individual drive. To reach the speeds you are asking for, unraid would need to be using SSDs instead of 7200rpm spinners.
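To put rough numbers on that (illustrative figures, not benchmarks - real drive speeds vary):

```python
# Illustrative figures only: a 7200rpm enterprise drive sustains
# roughly 150-250 MB/s sequentially; 10GbE carries ~1250 MB/s raw.

def link_utilization(disk_mb_s, link_gbit_s=10.0):
    """Fraction of the network link one non-striped disk can fill."""
    link_mb_s = link_gbit_s * 1000 / 8  # 10 Gbit/s ~= 1250 MB/s
    return disk_mb_s / link_mb_s

print(round(link_utilization(200), 2))  # 0.16 -> one spinner fills ~16% of 10GbE
```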


Roughly what kind of speeds would I be getting with 7200rpm hard drives in a RAID 6 configuration? At least 110MB/s ballpark over 10Gb Ethernet?

1 hour ago, unpavedkevin said:

At least 110MB/s ballpark over 10Gb Ethernet?

With modern drives and turbo write enabled, between around 100 and 200MB/s, depending on what section of the disks you're reading/writing.
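The turbo-write difference can be sketched with a toy model (the function and numbers are illustrative assumptions, not unRAID internals):

```python
# Toy model of unRAID's two write modes (assumed numbers, not measurements):
# - normal write: read-modify-write of data + parity costs an extra disk
#   rotation, so throughput is roughly half the raw disk speed.
# - turbo write: reads all other disks in parallel and writes at the
#   speed of the slowest member.

def array_write_speed(disk_speeds_mb_s, turbo):
    slowest = float(min(disk_speeds_mb_s))
    return slowest if turbo else slowest / 2

zones = [210, 200, 190, 180]  # outer disk tracks are faster than inner ones
print(array_write_speed(zones, turbo=True))   # 180.0
print(array_write_speed(zones, turbo=False))  # 90.0
```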


That's pretty slow for 10Gb. Also, if I added 2 M.2 drives for caching, would that speed things up? I saw a Linus video and he was able to get speeds in the 450MB/s range.

 

100MB/s speeds: is this just because unraid is slower, or would this apply to any NAS software solution? Thanks




That's pretty slow for 10Gb


As mentioned above, unRAID doesn't stripe data, so it can never be faster than a single disk, no matter what type of NIC you have.

You can get 10GbE speeds writing to the cache pool, either by using one or more fast NVMe devices, or many slower devices, like Linus did.


Ok, last question: for my needs, what would be a good processor to go with? I am looking at the Xeon E3 and E5, but I'd rather not spend a bunch of money if I don't need that much processing power and it's overkill. What CPU would work well and handle the bandwidth of 4K editing? Also, how much RAM would I need? I would be using ECC as well.


Any modern high-clock Xeon will do. As for RAM, it can help with transfer caching, but since you'd be working with large files it won't make a big difference in the end; 8GB or 16GB would be enough.


I wonder what exactly unRAID is supposed to do here? Just store the whole thing?

4 hours ago, unpavedkevin said:

if I added 2 m.2 drives for caching would that speed up faster?

The cache pool in unraid doesn't act at all like CPU cache. It's a separate storage space that is used to speed up writes of new files to the shares, and/or as a spot for permanent files that need better performance, like VM vdisks or docker data.

 

Once a file you are sending to the cache pool gets moved to the parity protected array by the scheduled mover, it won't automatically get moved back.

 

So, if you send a new project to unraid, if there is enough free space it will sit on the cache pool until the mover runs. While it's in the cache pool, you will get the highest speed possible. After it's moved, subsequent changes to those existing project files will be done on the array at the slower speeds. Completely new files written to the project will be on the cache drive until mover runs.

 

If you run out of space on the cache pool, new writes will go directly to the array, at the slower speeds.

 

You can modify this behaviour manually with scripting or some settings, but in general that's how things work.
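The decision described above can be sketched hypothetically (the function and its labels are made up for illustration, not unRAID code):

```python
# Hypothetical sketch of where a new write lands, per the behaviour
# described above; names and strings are illustrative, not unRAID internals.

def write_target(file_size_gb, cache_free_gb):
    if file_size_gb <= cache_free_gb:
        return "cache"   # full speed now; mover migrates it to the array later
    return "array"       # slower, but parity-protected immediately

print(write_target(file_size_gb=80, cache_free_gb=500))  # cache
print(write_target(file_size_gb=80, cache_free_gb=20))   # array
```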

 

6 hours ago, unpavedkevin said:

hard drives in raid 6 configuration?

Unraid doesn't do RAID.

1 hour ago, jonathanm said:

Unraid doesn't do RAID.

 

In reality it does do RAID - it is a redundant array of inexpensive disks. It's just that unRAID doesn't match any of the formally numbered and standardized RAID variants, so unRAID with two parity drives doesn't match the traditional RAID-6 implementation with striping and the parity spread over all disks.

 

The actual requirement of RAID-6 is as simple as "Any form of RAID that can continue to execute read and write requests to all of a RAID array's virtual disks in the presence of any two concurrent disk failures."

 

https://www.snia.org/education/dictionary/r
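Whatever you call it, the single-parity (P) half of that idea is plain XOR, which is also how unRAID's own parity works. A minimal demonstration (the second, Q parity of a real RAID-6 needs Galois-field math and is omitted here):

```python
from functools import reduce
from operator import xor

# Three "data drives", one nibble each for illustration.
data = [0b1011, 0b0110, 0b1100]
parity = reduce(xor, data)  # parity drive = XOR across all data drives

# Simulate losing drive 1: XOR-ing parity with the survivors
# reconstructs the missing contents exactly.
rebuilt = reduce(xor, [data[0], data[2]], parity)
print(rebuilt == data[1])  # True
```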


unRAID is ill-suited for those kinds of speeds, unless you use a pool of drives outside of the unRAID array for the video-sharing purposes, and then set up a cron job to let unRAID copy those files to its own array nightly as a backup solution.

 

The underlying OS is no slouch, but the way unRAID reads and writes to its array more closely matches that of using a single hard drive, rather than a group of them in concert.

 

Based upon your macOS and Windows editing needs I'd recommend going with a Linux-based file server (you could run that as a virtual machine on an unRAID host if desired) so you get the support for truly long filenames and paths, and utmost flexibility in shared storage configuration. 


bman, can you explain that a little further? Should I use Linux? What NAS on Linux? How would I still use unraid? So I will never get pure 10Gb saturation as I would running something like FreeNAS in RAID 6? Still a bit confused, thanks!

Posted (edited)

unRAID is the host, started from USB.

 

From there you can do whatever you want:

 

Direct workstation(s) with monitor via hardware passthrough (almost 99% of bare-metal performance)

plugins

dockers

VMs (e.g. Linux)

 

Linus, a YouTuber, built 7 gaming PCs out of one unRAID machine...

 

In the end it doesn't matter what you want to do, your answer is unraid. You can do everything with it. And it's easy to maintain and set up.

 

The limit is just your budget and/or hardware.

 

Edited the name - it's Linus, not Linux xD sorry https://www.youtube.com/watch?v=LXOaCkbt4lI

Edited by nuhll

On 7/13/2018 at 7:39 PM, unpavedkevin said:

bman, can you explain that a little further? Should I use Linux? What NAS on Linux? How would I still use unraid? So I will never get pure 10Gb saturation as I would running something like FreeNAS in RAID 6? Still a bit confused, thanks!

 

As nuhll said, you can run VMs of any Linux flavour you like (or other operating systems as desired) and get all kinds of performance.  But the storage subsystem unRAID uses is not suited to the speeds you're looking for.

 

Using drives outside the array (by simply not assigning physical drives to unRAID's storage pool) with any VMs you might like to run will get you the speeds you're looking for.  You can have a hybrid system in this way, using unRAID for archive type storage (write-once, read many concept) and also using it as a spot to host virtual machines from. Or you can ignore the core functionality of unRAID and just use it to host VMs and fast drives + network access, similar to what Linus did in his 7-gamers-on-one-PC video as mentioned above.

 

1 hour ago, bman said:

Using drives outside the array (by simply not assigning physical drives to unRAID's storage pool) with any VMs you might like to run will get you the speeds you're looking for.

 

Actually not, since unRAID doesn't support multiple arrays, and it takes striped writes to get enough bandwidth when using HDDs.

 

A single HDD outside of the array isn't much faster than a HDD within the array with turbo-write enabled.

 

The only advantage with a HDD outside of the array is that it is possible to write independently to multiple stand-alone drives while the unRAID array can only maintain max transfer speed with a single access (read or write) at a time.
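Rough arithmetic behind that point, with an assumed per-disk speed (illustrative only):

```python
# Illustrative arithmetic only; 200.0 MB/s is an assumed per-disk speed.

def aggregate_mb_s(per_disk_mb_s, striped_disks):
    return per_disk_mb_s * striped_disks

# A striped RAID-6 of 8 drives has 6 data members pulling together:
print(aggregate_mb_s(200.0, 6))  # 1200.0 -> near 10GbE line rate
# Any single unRAID array access runs at one disk's speed:
print(aggregate_mb_s(200.0, 1))  # 200.0
```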


I run hardware raid on some unassigned device drives to get faster speeds since Limetech hasn't yet decided to include multiple cache drives.....

1 hour ago, 1812 said:

I run hardware raid on some unassigned device drives to get faster speeds since Limetech hasn't yet decided to include multiple cache drives.....

 

Another alternative would be to run a VM and allow that VM to own multiple physical disks.

 

The VM could then make use of any software RAID support available in Linux without interfering with unRAID.

2 hours ago, pwm said:

 

Actually not, since unRAID doesn't support multiple arrays, and it takes striped writes to get enough bandwidth when using HDDs.

 

A single HDD outside of the array isn't much faster than a HDD within the array with turbo-write enabled.

 

 

Quite right; it was not obvious where to go from my point of not assigning drives to the array, but the idea is that once you've got them outside the control of unRAID, you can build a virtual machine to handle them in any speedy fashion you choose.


I need to have consistent throughput and a fast server. I have seen videos such as this one showing the speeds I am looking for, almost with the same setup. Can someone please explain... watch this:

 


In general, what you're after is a different server OS than unRAID.  You can use unRAID as a spot to host virtual machines, and in that way get the "different" OS that is required for your performance needs. 

 

In short, though, if you're not using unRAID for its storage abilities, you may be better served installing another OS on your bare metal build. 

 

The author of the video is already running unRAID for other reasons, but unRAID is not what is enabling the speed between his demo systems. Rather, the speeds demonstrated come from the Windows, Linux or OS X machines (whichever he has booted into at the time) in concert with RAM used as disks and 10GbE network interface cards. Proper network setup is necessary too, naturally.

 

 

 

2 hours ago, pwm said:

 

Another alternative would be to run a VM and allow that VM to own multiple physical disks.

 

The VM could then make use of any software RAID support available in Linux without interfering with unRAID.

 

Also true! But I opted to let an enterprise RAID card handle both RAID 10 arrays I run vs offloading the job to ....osx.....lol.... I think I made the right decision.

Posted (edited)

If you get 99% of bare-metal performance, but have an extra layer of "help" with all those small dockers and plugins, why choose bare metal? I wouldn't do anything without unraid. If only I had known about it earlier. I also find it a lot easier to maintain. If a VM crashes for whatever reason, you don't need to run to the server and reset it; just go to the web interface and do it, wherever you are.

 

Unraid costs a one-time fee of some USD and can be used for the rest of your life.

 

I would say: get unraid, try to work with shares from unraid (or, let's say, with the cache drives); if that doesn't work out, you can simply install a VM, "redirect" the SSDs or whatever you bought to it, and do whatever you want.

 

I also read somewhere that you can now use RAID 0, 1 and 10 for the cache pool; it's not secure, but it has speed.
Edit, found it:

They are quite frequently releasing updates to make the performance even better.

Edited by nuhll


I did a quick test on my Windows 10 VM (which is running just on the cache pool) - no direct passthrough.

I use 2x 500GB 850 EVOs.

[benchmark screenshot attached]

6 hours ago, nuhll said:

I also read somewhere that you can now use RAID 0,1 and 10 as cache drive, its not secure, but it has speed.

 

RAID 1 and RAID 10 have redundancy similar to the main array's when using single parity.

 

RAID 1 has redundancy through mirroring: standard write speed, but reads can be split between the multiple copies.

RAID 10 also involves redundancy from mirroring; the individual mirrors are then striped for additional bandwidth.

 

RAID 0 is something to be very careful about using. More bandwidth but worse MTBF than a single disk. It's the RAID from hell for people who want to lose lots of data.
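A back-of-the-envelope illustration of why (the 2% failure rate is an assumed figure, not vendor data):

```python
# Assumed 2% annual failure rate per drive - an illustration, not vendor data.

def raid0_annual_failure(p_drive, n_drives):
    # RAID 0 dies if ANY member dies, so survival probabilities multiply.
    return 1 - (1 - p_drive) ** n_drives

print(round(raid0_annual_failure(0.02, 1), 3))  # 0.02
print(round(raid0_annual_failure(0.02, 8), 3))  # 0.149 -> ~7x riskier than one drive
```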

