60TB Home Server Build for Photo/Video Editing/Misc



Hi there! I'm considering unRAID for my new home server. Right now I have a home server (Lenovo TS140) that runs Windows 10. It runs Blue Iris (IP cam surveillance), Plex, Sonarr/Radarr/Headphones, and that's pretty much it. For data storage, I have several externals that I work from. I outgrew my largest enclosure, so I purchased single-drive externals to become archive drives. I have mirrored copies of everything and back up to Backblaze and Gdrive for offsite. I would like to consolidate everything: one server/storage solution to rule them all.

 

For Blue Iris & Backblaze I'm guessing I'll have to run a Windows 10 VM to keep those going?

 

My data storage needs are mostly because I have a photography and a video business. 

 

I really love everything I've read about unRAID, especially the ability to grow my storage pool. I have quite a few hard drives to start with, but it would be nice if I didn't have to buy my "full set" and commit them to a RAID 6 or RAID-Z2 array. That said, my main working drive is a 4-bay enclosure running RAID 5 with four 6TB drives. For my video editing work, I don't think I can be limited to the speed of a single drive, especially since I'm working in 4K more and more. I also reference past projects a lot (from the past year at least) and move/copy files into current projects. For the photography side of things, I would also like fast storage for at least the most recent projects. Skipping around 30+ megapixel raw images in Lightroom gets laggy on slower drives.
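To put rough numbers on that, here's a quick back-of-the-envelope check (the bitrates and single-drive speed below are assumptions for illustration, not measurements from my actual footage):

```python
# Rough check: how many 4K streams can one spinning drive realistically feed?
# All numbers are assumptions for illustration only.
HDD_SEQ_MBPS = 180  # assumed sequential throughput of a single 6TB 7200rpm-class drive, MB/s

CODEC_MBPS = {
    "4K long-GOP (consumer camera)": 19,   # ~150 Mb/s
    "4K ProRes 422": 60,                   # ~490 Mb/s
    "4K ProRes 422 HQ": 90,                # ~730 Mb/s
}

for codec, mbps in CODEC_MBPS.items():
    streams = HDD_SEQ_MBPS // mbps
    print(f"{codec:30s} ~{mbps:3d} MB/s -> about {streams} stream(s) from one HDD")

# A multicam timeline plus scrubbing, renders, and background copies eats that
# headroom fast, which is why a striped or SSD-backed working volume matters.
```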

 

Can I have my cake and eat it too? Can I have a big storage pool and then also a fast array/pool on the same server that I can access via 10GbE or something? Are there other options?

 

On top of the data storage/usage, I still want the server to do everything I mentioned it's doing already. I'd also like it to become a personal cloud (maybe Tonido, Cozy, or Seafile?).

 

I am planning to utilize as much of my current hardware as possible. That includes transplanting and upgrading my TS140. Here is my current hardware plan (open to suggestions here too):

 

TS140 motherboard + E3-1225 V3 Xeon (possibly upgrade to a 1245 V3 for the hyperthreading)

Norco RPC-4220 4U 20 Bay Hotswap

32GB DDR3 ECC

SeaSonic SSR-650RM PSU (plus adapter for TS140 mobo)

HBA?

10GbE card?

8 x 6TB WD Red

 

Aside from the Reds, I also have 4 Toshiba 6TB X300 drives, 4 x 4TB WD Greens, 2 x 4TB WD Reds, and an 8TB WD Easystore.

 

Needless to say, there are a multitude of options out there and I'm a little overwhelmed. I've considered everything from continuing to run Windows 10 on the new build with a hardware RAID, to running a dedicated NAS OS like FreeNAS or NAS4Free. Any insight would be very much appreciated!

 

 

 

 

 


I think what you want to do is separate things into two distinct uses.

 

1. Working data: the data you work with on a daily basis, which needs fast read/write and some redundancy.

 

2. NAS duties and bulk/backup storage: a system running unRAID, Dockers, and perhaps a VM.

 

unRAID is not built for speed like some other systems, so you can't have that piece of the cake. You could hook up some SSDs as unassigned devices to work off of, but they would not be part of the array. Plex and Sonarr/Radarr/Headphones can run comfortably as Dockers, no problems there. If you are acquiring new hardware, consider carefully how you want to best use those resources.


Thanks for the info!

 

I'm a-okay with having a separate storage pool for working data; I would just like it all to reside on the server. If I were to use SSDs, would they just be one-off devices? Doesn't unRAID support any traditional arrays, like mirroring or RAID 10, for the SSDs? I read somewhere that some people have run a hardware RAID (5 or 6) and connected that to unRAID. Is that true? Could I set up a hardware RAID of 4-5 fast disks in RAID 5 as my working drive, and then have my main, large storage be an unRAID array that I can grow? Would SSD caching help in the middle?

 

 

 


Ideally I'd have 1-2TB minimum for my working "drive." A typical video project is 150-200GB and a photo project would be 200-250GB. I might have a few of each I'd be working on at a time.

 

Is the cache pool automatic, or is it just a "drive" that I could mount and use traditionally?


For the cache pool, the default is RAID 1 as already mentioned, but it is possible to change this to RAID 0 if really needed. You would need 10Gb Ethernet to take advantage of such a setup. Generally the standard configuration is good enough.

 

The term "cache drive" refers to the Cache drive option within unRAID. This is generally a faster drive that sits outside the protected array, and you can decide whether a share uses the array, cache only, or a combination of both (this includes a function to move content to the array during the night). There are a few more options, but this is a high-level description.

 

You also have the option of unassigned drives. This is useful if you want to use the Cache + Mover function for general writes to the array (in this mode data is first written to the cache and later moved to the array) and use an unassigned drive to provide the SSD only for a certain project or projects. But this drive is outside of the array, and it will be up to you to move the data to the array when ready.
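If it helps to picture what the mover does, here is a stripped-down conceptual sketch (this is not the real unRAID mover script; the share name is just an example, and the real thing handles open files, permissions, etc.):

```python
# Conceptual sketch of unRAID's cache + mover behaviour for one share:
# new writes land on the fast cache pool first, then get relocated to the
# parity-protected array on a schedule (typically overnight).
# Illustration only, not the actual mover implementation.
import os
import shutil

SHARE = "projects"                    # example share name
CACHE_ROOT = f"/mnt/cache/{SHARE}"    # cache pool copy of the share
ARRAY_ROOT = f"/mnt/user0/{SHARE}"    # array-only view of the same share

def run_mover():
    for dirpath, _dirs, files in os.walk(CACHE_ROOT):
        rel = os.path.relpath(dirpath, CACHE_ROOT)
        dest_dir = os.path.join(ARRAY_ROOT, rel)
        os.makedirs(dest_dir, exist_ok=True)
        for name in files:
            # move each file off the cache and onto the array
            shutil.move(os.path.join(dirpath, name), os.path.join(dest_dir, name))

if __name__ == "__main__":
    run_mover()
```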

 

Also, running Blue Iris in a Windows 10 VM uses very few resources; I use direct-to-disk (D2D) recording and only give it 2 vCores. I'm sure it would like more, but I've had no issues so far.

13 hours ago, Tuftuf said:


 

At the very minimum, 2 x 1TB SSDs could work for the drive I work from. I'd like more space than that, but it might do as a starting point. It would still be $500-600 just to add that on, though the 450-500 MB/s that the SSD cache would provide would be excellent.

 

I've been searching, and I've found a few people who have set up a hardware RAID 5 and brought it in as an "individual" drive that's not part of the unRAID array. I wonder how reliable this is and whether there is speed loss from doing so. I'd love to have 8TB+ in RAID 5/6 as my working data drive, then have my unRAID array as my large, expandable archive for everything else.

 

I found this: 

 

I guess this died off? I have tried searching, but haven't found much else. Implementing RAID 6 in unRAID would be amazing; that would solve my issues.

 

Also, good info on Blue Iris. How many cams do you use in this config? Also, are they HD?


So after giving this more thought, I think I would rather have large, expandable storage than a semi-large pool that is fast. So I think I am going to snag a 1-2TB M.2 NVMe SSD for my system as a "working" drive for just the most recent projects, then have an SSD cache on my unRAID server.

 

Two questions:

 

1) For my system plus 20 6TB WD Red drives, is my PSU selection (SeaSonic SSR-650RM) adequate? Several calculators say yes, but I am wondering if I should get an 850W Platinum instead. Edit: I just attached my Kill A Watt to my TS140 and ran an AIDA64 stress test. With 3 x 4TB drives and an SSD in there, the whole system is pulling 80-90W.

 

2) I like the Norco RPC-4220 a lot. Having the ability to eventually have 20 drives is nice, but it is HUGE. Is there another option I should consider?

 

Thanks for all the info so far!


I much prefer a tall tower to a rack mount setup; as I have no rack, it would take a lot of floor space.

 

I use the SuperMicro CSE-M35T-1B 5-in-3 drive cages and they are pretty awesome. BTW, I have heard complaints about Norco cages and overall quality, though I have no personal experience; it might be worth more research. There is a SuperMicro rack mount case people tend to prefer, which probably has the same cage tech as the cages I use.

 

Look at this post and a few up from there in the thread for more information on case options:

 

 

21 hours ago, bjp999 said:


 

 

 

I really like those hot swap cages. I can find them used on eBay for $65-70 shipped each. I'll check out that thread too. I am leaning towards the tower as well. For a different approach, I also like the Lian Li PC-D800, which can still be purchased for a bit less than the Norco.

 

Any thoughts on the PSU?

 

Thanks!

4 hours ago, andyps said:

 


 

For the PSU, I have run 25 drives on my 650 watt PSU. I have a 750 watt PSU in my backup server that sometimes runs more drives. I think a PSU in that range is sufficient; there are calculators online to help determine your needs. Remember that the PSU is pushed hardest when spinning up all drives in parallel, but that only takes a few seconds and then power draw drops. This can happen at power on, although some/most controllers hold off powering their drives until their BIOS is activated, which tends to smooth out the power draw. I ran a 500 watt PSU in an older server that probably should not have been sufficient, but it never had an issue. I wouldn't overthink it: a good quality SINGLE RAIL PSU of 650+ watts. This does not account for power-hungry video cards or X299 motherboards/CPUs with huge power draws, but for a rather normal 4-core Xeon/i3/i5/i7/Pentium, memory, and 25 drives, it will work well.
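If you do want to put rough numbers on the spin-up point, here's the kind of math I mean (the per-drive surge and idle currents are assumptions in the typical range for 3.5" drives, so check your drive's datasheet):

```python
# Worst case: every drive spinning up at the same instant (no staggered spin-up).
# Per-drive currents are assumptions, not WD Red specs I have verified.
DRIVES = 20
SPINUP_A_12V = 2.0     # assumed 12V surge per drive during spin-up (~24W)
STEADY_A_12V = 0.45    # assumed 12V draw per drive once spinning (~5-6W)
SYSTEM_A_12V = 8.0     # assumed board/CPU/fans load on the 12V rail

surge = DRIVES * SPINUP_A_12V + SYSTEM_A_12V
steady = DRIVES * STEADY_A_12V + SYSTEM_A_12V
print(f"Simultaneous spin-up: ~{surge:.0f} A on 12V (~{surge * 12:.0f} W) for a few seconds")
print(f"Steady state:         ~{steady:.0f} A on 12V (~{steady * 12:.0f} W)")

# A quality single-rail 650W unit is typically rated around 50+ A on 12V, so even
# this worst case is only briefly near the limit, and staggered spin-up avoids it.
```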

 

Going back to the case. I have never used the Lian Li, but I will say that it appears beautifully put together. However, it lacks the most "beautiful" feature that I have to have for my servers, and that is the ability to swap out a drive without opening the case. The Lian Li is better than most, giving easy access to the rear of the drives, but it's not enough IMO. You still have to physically touch the SATA and power cables to remove or add a disk, and risk knocking something loose in the process.

 

The hotswap cages are really what is needed. Not for true hotswap (adding or removing a drive from a running server), which I have never done, but for the ability to eliminate the possibility of knocking a cable and creating a marginal connection that will become apparent hours, days, or even weeks later. A fraction of a millimeter is enough to take a perfect connection and make it intermittent. (Funny, I was a pretty intense disbeliever in cages with my first servers, calling them an expensive luxury, until numerous experiences taught me their importance! What I found was that the times the cages were needed the most were the times the array was most vulnerable - like after a drive failure or when diagnosing a drive that keeps dropping offline. I am now the biggest proponent.)

 

I can't tell you how often we get forum cries for help right after a user has replaced or added another drive. This is, by two orders of magnitude (and I'm not exaggerating), the most frequent root cause. Proactively eliminating that risk is more than mere convenience, especially on a server with a larger number of drives. IMO this is much more important than dual parity.

 

It is for these reasons that I don't recommend the Lian Li case you mentioned.

 

I'll mention a couple other advantages of the cages:

1 - Sometimes a drive can "act up" and it is difficult to know if it is the drive or something else (cable, controller, port in the cage). Powering down and swapping drive slots allows you to keep all other variables constant, which is impossible with individually wired drives.

 

2 - Servers with a lot of drives are heavy. When you need to bring one to a workbench, or even relocate it, being able to remove all of the drives drastically reduces the weight. With cages, you can remove every drive, move the server, re-insert all of the drives, and have it work perfectly. Without them, it could easily turn into an afternoon exercise to get all the drives mounted and working, and even then, intermittent issues can continue to occur. Although the Lian Li is on wheels, if a flight of steps is involved, or a lift from floor to bench, the hot swap cages will save your back!

 

3 - Hard to explain, but when you have the cages you use them. Whether it is installing a disk in my backup server to do a preclear and then moving it to my prod server when it is ready. Or moving a disk from my backup server to my prod server to copy a particularly large amount of data more quickly. Or removing a new disk after it has been precleared to await a time when it is truly needed (saving wear and tear). Or being able to examine the disk label to know when the disk was manufactured. Before cages, I would never even think to open the case unless it was absolutely necessary. With the cages it is a frequent event.

 

A configuration that is feasible (I'll stop slightly short of saying recommended, although it's what I would do if buying new) is using a small case that would only hold the motherboard, cards, and the SSD, and buying an LSI SAS9201-16e with 4 SFF-8088 breakout cables and 4-5 CSE-M35Ts (which would NOT be mounted inside a case). If ever there were an emergency and you were trying to save your data from a fire, for example, imagine being able to just grab these 5 drive boxes and run.

 

You can run a power pigtail out of the back of the case to power the cages. And the SFF-8088 cables connect externally. The cages themselves are very secure. (You could figure out a way to screw them together into a "disk box".) This configuration  makes the system portable and has all of the advantages listed above. Noise might be a concern, but my server is in my basement and I can't hear fan noise. Quieter fans are possible too.

 

I have occasionally been able to find the SuperMicro 5-in-3s on eBay cheaper than you mention. I recently found 2 new ones for $55 each (shipped) that I couldn't resist. Right now the cheapest I am seeing is $67.

 

(I should bookmark this post so I can use it again. I tend to repeat a similar message, because I think it is important and people don't research even recent articles.)

 

Good luck!


Lot of a great info! Thanks.

 

I definitely hear you on your points for sticking with hotswap over a case like the Lian Li. I will most likely go that route. I like the convenience of the config day to day, but also that, in the event of a drive failure, it makes the replacement process much easier.

 

I think you are spot on about the PSU too. After further research, it looks like the WD Red 6TB drives use 4-5W at idle and 6-7W under load. Even if I double that and say 14W a drive, that's only 280W; add my existing 90W and I still have plenty of room for the one or two PCIe cards I'll need and the SSDs. Plus it seems the PSU will be happier if I pull that kind of power from it, versus getting a higher-wattage PSU that I only run at 20-40% load.
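Here's that budget written out as a quick script (the HBA, NIC, and SSD wattages are rough assumptions; the 90W base figure is from my Kill A Watt test):

```python
# Rough steady-state power budget for the finished build.
BASE_SYSTEM_W = 90    # measured: TS140 board/CPU + 3 drives + SSD under AIDA64 stress
DRIVES = 20
W_PER_DRIVE = 14      # WD Red 6TB ~6-7W under load, doubled for margin
HBA_W = 15            # assumed for an LSI SAS HBA
NIC_10GBE_W = 15      # assumed for a 10GbE card
SSDS = 4
W_PER_SSD = 4         # assumed per SATA SSD under load

total_w = BASE_SYSTEM_W + DRIVES * W_PER_DRIVE + HBA_W + NIC_10GBE_W + SSDS * W_PER_SSD
PSU_W = 650
print(f"Estimated steady-state draw: ~{total_w} W ({total_w / PSU_W:.0%} of a {PSU_W} W PSU)")
```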


@bjp999 I ended up with an Antec Twelve Hundred case. I snagged it on eBay last night for less than $100 with shipping. I also grabbed 4 of the SuperMicro hot swap cages; I made a best offer on the set and the seller accepted this morning.

 

Now the only big purchases remaining are the HBA, the 10GbE cards, and whatever SSD config I decide on. I think I need to go with an HBA with a lot of ports. I only have the following slots to work with:

 

1 PCIe Gen3 x16 slot

1 PCIe Gen2 x4 slot /x16 connector

1 PCIe Gen2 x1 slot/x4 connector

1 PCIe Gen2 x1 slot

 

 

7 minutes ago, andyps said:


 

 

 

With that TS140 motherboard you get 5 SATA ports, leaving you needing 15 more.

 

The LSI SAS9201-16i would give you 16 ports. That is enough for your 15 drives + 1 SSD.

 

If you need more (I don't think you will), you could buy a cheaper 2/4-port card, or an LSI SAS9201-8i. You may even have something in your extra parts bin.
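A quick tally of the way I'm counting ports (a multi-SSD cache pool would need a few more than this):

```python
# Port budget: drive bays to fill vs. SATA/SAS ports available.
BAYS = 20           # four 5-in-3 cages
CACHE_SSDS = 1      # a single cache SSD; a RAID 10 pool would add 3 more
ONBOARD_SATA = 5    # TS140 motherboard
HBA_PORTS = 16      # LSI SAS9201-16i: four SFF-8087 connectors, 4 drives each

needed = BAYS + CACHE_SSDS
available = ONBOARD_SATA + HBA_PORTS
print(f"Ports needed: {needed}, available: {available}, spare: {available - needed}")
```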

28 minutes ago, bjp999 said:

 


 

That's one I've been looking at, and it would cover my minimum requirements. I was considering more ports in case I did a multi-SSD cache pool (4 x 1TB in RAID 10, for example) to connect to via 10GbE, mount on my desktop, and use as my high-speed working/editing drive. That would be around $1,100. If they are ~500 MB/s SSDs, that would potentially saturate the 10GbE on reads and nearly on writes.
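Sanity-checking that with rough numbers (the SSD throughput and network overhead figures are assumptions):

```python
# Would a 4 x 1TB SSD RAID 10 pool roughly saturate a 10GbE link?
SSD_MBPS = 500                  # assumed per-SATA-SSD sequential throughput, MB/s
TEN_GBE_EFFECTIVE_MBPS = 1100   # ~1250 MB/s raw, minus assumed TCP/SMB overhead

raid10_write = 2 * SSD_MBPS     # striped across two mirror pairs
raid10_read = 2 * SSD_MBPS      # conservative; reads can sometimes do better
print(f"RAID 10 pool: ~{raid10_write} MB/s writes, ~{raid10_read}+ MB/s reads")
verdict = "can roughly saturate" if raid10_read >= 0.9 * TEN_GBE_EFFECTIVE_MBPS else "falls short of"
print(f"10GbE effective ~{TEN_GBE_EFFECTIVE_MBPS} MB/s, so the pool {verdict} the link")
```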

 

Another option would be a 1TB M.2 NVMe drive in my desktop ($450) and a single 2TB SSD in the server as the cache disk ($550). I would have less working drive space, but it would be extremely fast locally. I would lose transfer speed to the server, but 400-500 MB/s is more than enough for just transferring back and forth.

 

Seems like the first option would be my best compromise. More usable working space and really fast transfers/working speed. Any other options I should consider?

 

Also on the CSE-M35T-1B cages, are you running the stock fans? If so, how loud are they?


I am running the stock fans. They are not silent. But they are terrific ball bearing fans that will last forever. And they move a lot of air. They are 92mm, not 80mm. The sound they make is the sound of air, not squeaks or whines.

 

In a work environment, they'd be fine IMO. If they are not, you can always look at options (quieter fans, fan controller, etc.). In a pristine listening environment where you want the sound of a pin to have you jumping out of your seat, they are going to be too loud (probably anything short of locating the server remotely, which is what I do, is going to be too loud IMO).

 

The 10GbE network makes a lot of sense for your application.

 

I thought you had said that your media files were about 5G each. I'd consider 250G SSDs (50 x 5G files) on the desktops and a 500G SSD (100 x 5G files) on the server. If you have space issues, you'd be able to double those into RAID 0 configs without much trouble. But this is your business, and you know a lot more about the number of work items in your queues and the amount of space you need on the workstations vs. the server. I can only give an amateur's perspective on what your business needs.

 

But I think you are looking at a good server case. I have the Antec 900 for my backup server (with 2 extra 5-in-3s outside the server). The case is not quite as deep (front to back) as my Sharkoon. If you keep the 5-in-3s slid in about 3/4 of the way, you can get everything connected, and once done, slide them into place and screw them in. (BTW, you will need a DEEP THROATED C-CLAMP to bend back the little ledges in the case.) I had a little trouble with a deep controller card in my 900 server. Making matters worse, it has SATA ports on the south end, further increasing the depth it needs, so it is a tight fit with that controller in place. The -16i is quite a bit shorter, and you shouldn't have much trouble. I can't speak for the 10GbE cards, but they should be fine. Worst comes to worst, you leave the 5-in-3s pushed in 90% of the way and have a little extra depth, which is my current setup. If (when) I get a different controller, I'll be able to push them all the way in.

 

Good luck! Seems you're off on an adventure getting it all assembled. Take your time! Take some pictures and post them. Also, I'm interested in the 10GbE network option and will be very interested in the hardware you get and your experiences.

 

Oh, one other thing. The used CSE-M35Ts don't always come with these tiny little flat head screws you will need. You can buy them on Amazon cheap.

2 hours ago, bjp999 said:


 

At the moment I have two active video projects. Both are between 100-150GB, with individual video files between 4-6GB each (plus a large assortment of smaller files). I also have two active photo projects: one is 300GB (with over 8,000 raw images around 20-25MB each) and the other is 150GB. It's fairly common for me to have 2-4 active photo projects and 2-3 active video projects. So I'm thinking I need a minimum of 1-2TB of fast "working" storage. I'm currently leaning towards the 2TB RAID 10 SSD cache on the server option. It gives me twice the space of the M.2 NVMe local drive while still maintaining very fast speeds (not as good as NVMe, but more than good enough). I also like that I can expand it in the future.
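Rough sizing math from those numbers, taking the high end of my active-project counts (the headroom factor is just an assumption):

```python
# Working-storage estimate from current project sizes.
video_projects, video_gb_each = 3, 150   # high end of 2-3 active video projects
photo_projects, photo_gb_each = 4, 250   # high end of 2-4 active, typically 150-300GB each
headroom = 1.25                          # assumed ~25% free for exports/scratch

needed_gb = (video_projects * video_gb_each + photo_projects * photo_gb_each) * headroom
print(f"Working storage estimate: ~{needed_gb / 1000:.1f} TB")  # close to the 2TB I'm leaning towards
```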

 

I'm glad you pointed out needing the C-Clamp. I just figured out today that the Antec 1200 has the tabs in the 5.25" bays. I was scratching my head about how to handle those. Great suggestion.

 

Figuring out my 10GbE solution is definitely next on the list. Looking forward to having that set up. 

 

Good note on the screws for the hot swap cages. I figured they wouldn't come with any hardware given the discounted price I paid.

 

I'll definitely be taking my time! I plan to do the build in stages so I don't get overwhelmed. Thankfully, last week when I was looking to purchase the E3-1245 V3 CPU to upgrade my 1225, I found an auction for a full system (HP Z230) for the price of the CPU. I figured it wouldn't hurt to bid and actually won it. So now I have a second system to use for the unRAID build, and I don't have to take my TS140 out of rotation until unRAID is up and running. Then I can either sell the TS140 or hang onto it for another project. That should take the stress off the build process. Thanks!

 

