Need some help deciding on direction



Hi folks. I’ve had an unRAID server running since 2014 as a backup for my media without much fuss, but I’ve decided to go in a different direction now and need advice.

 

My current server is an Intel i3 Haswell build, unRAID 6+, five 4TB drives, and a 120GB SSD as cache. As it sits, it's at almost 50% capacity on the drives.

 

What I want to do now is make 1:1 rips of all my media, Blu-ray and DVD, using MakeMKV. For just the Blu-ray portion we're talking at least 6TB needed, and I can't even guess on the DVDs since I have about 500 and am slowly replacing what I can with Blu-ray. And I want whatever box it's on to be able to rip using multiple instances of MakeMKV and be a Plex server in the end.
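For the multiple-instances part, MakeMKV ships a command-line front end, makemkvcon, that can be scripted. A rough sketch; the disc indexes and the output share path are assumptions for this setup:

```shell
#!/bin/sh
# Kick off one makemkvcon rip per optical drive, in parallel.
# The disc:N indexes and the output share path are assumptions.
rip_all() {
    out=$1; shift
    for disc in "$@"; do
        mkdir -p "$out/disc$disc"
        # "mkv disc:N all DIR" rips every title on disc N into DIR
        makemkvcon mkv "disc:$disc" all "$out/disc$disc" \
            > "$out/disc$disc.log" 2>&1 &
    done
    wait    # block until every background rip finishes
}

# e.g. three Blu-ray drives indexed 0-2, writing to an unRAID share:
# rip_all /mnt/user/rips 0 1 2
```

Since a 1:1 MakeMKV rip remuxes rather than transcodes, each instance is mostly limited by the drive's read speed, not the CPU; Plex transcoding is where the CPU (or iGPU) choice matters.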

 

Since my current server is a backup to a RAID box connected to a Mac Mini serving as both Plex server and client, I could technically start the process on the current server. But since it's my backup, that gives me a bit of pause.

 

There's room and power enough to add 3 more 4TB drives (4TB is apparently the max my MB supports), and I can still find a 4th-gen i7 to beef up the processor, but it isn't exactly cheap. And then I'd still need to add another SATA card so I could add some Blu-ray drives. And then I'd be maxed out with no more room to grow. Oh, and memory is maxed at 16GB.

 

 

My other option is to just build another unRAID server with more memory, larger drives, and a better CPU, and start fresh. I'd probably go AMD Ryzen with a processor that has integrated graphics, like the i3, because I don't need any more graphics power than that since it's just Plex and MakeMKV that I'll be using. And a few other Dockers that matter.

 

I could also rebuild my current server, replacing the MB, CPU, PSU, and memory while leaving the drives intact, plus add more, larger ones and slowly start replacing content. That's provided I could do all that without a complete FUBAR situation.

 

But again, right now that box is my only backup of what I currently have, and it's not like I'll be able to rip everything overnight.

 

I'm leaning toward a new build so I'd at least have more of a future with it than with the current system. However, the current box hasn't failed yet, and I could probably still end up using it as a backup, provided I can match capacity.

 

So what do you guys think? I haven't had to pop in here much lately, but I think that's about to change!

 

 

Sent from my iPad using Tapatalk Pro

1 hour ago, acurcione said:

I'm leaning toward a new build so I'd at least have more of a future with it than with the current system. However, the current box hasn't failed yet, and I could probably still end up using it as a backup, provided I can match capacity.

Having 2 unRAID boxes is a very nice place to be. I suspect your current system can actually handle large drives; the last real cutoff was 2.2TB. Beyond that it's mostly just a failure to retest, since 4TB was likely the largest drive available when your board was last marketed.

 

Here is what I would suggest. Look toward building a second tower, but make your first purchase an 8 or 10TB drive, whichever you find a deal on. Plop it in your current server in place of your current parity drive and see what happens. The absolute worst case scenario is that you have a data drive failure unrelated to the new drive while you build parity, and like you said, it's your backup location anyway, so not the end of the world. If you start the array in maintenance mode for the parity build, you could just put the old parity drive back in place and recover.

 

If the current box truly doesn't like large drives, you aren't out anything because you are planning a new box anyway. If it works, you can either leave it in place in preparation for operation capacity match, or pull it to use in the new box.

 

Honestly, only using 50% of capacity isn't where I like to be anyway. That's capacity that likely cost me too much and is adding unneeded statistical risk. I prefer to add drives one at a time, generally when my free space drops below the size of my parity drive(s). I currently have 8TB parity, so until I get below 8TB free, I'm not going to add capacity. I have an 8TB drive ready to install, because I got it on sale, but it's not going to start spinning until I need it.
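That rule of thumb is simple enough to script as a quick check; the free and parity sizes below are just example numbers:

```shell
#!/bin/sh
# Succeeds (exit 0) when free space has dropped below parity size,
# i.e. when it's time to add a drive, per the rule of thumb above.
need_drive() {    # usage: need_drive FREE_TB PARITY_TB
    awk -v f="$1" -v p="$2" 'BEGIN { exit (f < p) ? 0 : 1 }'
}

# 8TB parity with 9.5TB still free -> the spare stays on the shelf
need_drive 9.5 8 && echo "add a drive" || echo "hold off"   # prints "hold off"
```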


I fully agree with jonathanm.

 

  I was a bit surprised by the 50% free space figure myself.  The only time I have had that much free space on an unRAID server was on a new build.  I do have drives that are ready to drop in, that I have pre-cleared, but I do not count them since they are there in case a drive fails, OR I need to increase my storage capacity.

 

  Most of my servers came online as I needed to expand, and got to a point where, though the prior system was not yet fully used as far as what it COULD do, it seemed a better option to start with a new server build, with new capabilities and more potential for expansion.

 

  I love having servers sitting powered off as backups.  It is what my older unRAID builds do most of the time: sit without power.  As far as Plex is concerned, I have played with it in a Docker, and find it not as easy to work with as running a Plex server under Windows.  Since I have multiple servers, my Plex setup needs to see shares on multiple servers.  It is much easier for me to just map the UNC path in Plex under Windows and have it work.  With a Dockerized version, I first need to make the resource available in unRAID using the Unassigned Devices plugin, which works well enough, but then I still need to map the resource in Plex.  More steps, and also somewhat more difficult steps than under Windows.

 

  I just have not seen a real reason to move Plex fully under unRAID; it offers no real advantage for me.  To me, multiple machines seem to work better and provide more stability for my uses.  Of course this will not be the case for everyone.  I am also still testing to see if I can change my mind about it.  It sounds really nice to be able to have everything on one server.  I am just not sure if the advantages will outweigh the disadvantages for me.

 

  I also like using multiple computers in a rack with a KVM switch for ripping my DVDs and Blu-rays.  It seems to be much less of a workflow bottleneck than trying to use fewer computers with multiple optical drives.  The result is I have my media on hard drives much quicker; then I can either move the files as-is to my servers, AND/OR compress them in batches to be placed on the servers later.


Yeah, about the 50% capacity thing: I had built the server to also act as a backup for my several Macs. Problem was, it never really worked without a hitch, and I grew tired of messing with it, so the attached RAID stayed in place.

As far as a Plex server goes, if it doesn't play nice in a Docker then I'll just leave it running on my Mac Mini, using the unRAID server for storage. And on a personal note, I don't play nice with Windows these days. I gave Windows 10 a serious go not long ago and had to go back to the Mac.

Now if all I were doing with Windows was Plex then maybe I could make that work.

Thanks for the feedback folks. I have a better idea of where to go from here.


Sent from my iPad using Tapatalk Pro


  Plex works well in a Docker; in my opinion it's just not as easy as in Windows.  I only have one computer I regularly use with Windows 10, and I am still trying to accept it, but so far I really hate it, and with each forced update I hate it more.  My Plex server under Windows is running on Windows 8.1.

 

  Many people really like Plex running in a Docker on unRAID, and it works very well for them.  If all my media were on one unRAID server which was also running Plex, I could see having NO complaints with it at all.  It is pretty cool being able to watch the system resources in unRAID while Plex is transcoding a stream in a Docker.  :-)

 

  I personally like to keep my old systems in use until I either outgrow them or build another system that eventually takes over.  I enjoy putting together a new system from new or used parts to test new configurations on; if they work out well, they are kept, and if not, they become another testbed for the next application.

 

  It sounds like you are all set with options to try out, and it will be interesting to hear what you finally decide!  Please let us know what you try, and do or do not like as a result of your testing.  It is always nice to hear about what other people have tested and the decision process used to make their final choices.

 

  Good luck!  Hope you can find a combination that works very well for you with no downsides!

 

  One more thing to consider: there is a very cool Host Bus Adapter (HBA) card available, with one version for internal drives and another for external drive expansion; the LSI SAS 9207-8i (internal ports) and LSI SAS 9207-8e (external ports).  These are PCIe 3.0 cards, capable of fully utilizing a PCIe x8 slot!  They are getting old now, but are still a current product, so they can be found new for close to $100, and for less used!  I am not sure what you have in your i3 server, but this could potentially be a real way to build it up without removing anything that is currently in use!  If you wanted, you could even use the SAS 9207-8e version of the card and connect a nice big 24-bay JBOD chassis for very large expansion capability.  This is what I am currently in the process of testing, and so far I am very happy with my initial tests!  It may turn into my main unRAID server when I have finished, and may also become my second full-time Plex server!

 

  If I like it enough, it may eventually replace my current Plex server, but time will tell...  That would also mean that in the future my other unRAID servers would be relegated to backup use instead of full active storage.  The JBOD chassis I grabbed was an old retired 45-bay SuperMicro 4U unit; old enough that the server farm it was in needed to upgrade, but it matches perfectly with the SAS 9207-8e cards I also picked up used for under $35 each.  After all, why would a server farm keep running a slow card with only 8 internal 6Gb/s SAS channels when there are so many faster and MUCH MORE EXPENSIVE options out there?  I am happy; it makes for some nice cheap grabs in the used market!

On 4/26/2019 at 7:37 PM, jonathanm said:

Having 2 unRAID boxes is a very nice place to be. I suspect your current system can actually handle large drives; the last real cutoff was 2.2TB. Beyond that it's mostly just a failure to retest, since 4TB was likely the largest drive available when your board was last marketed.

 

Here is what I would suggest. Look toward building a second tower, but make your first purchase an 8 or 10TB drive, whichever you find a deal on. Plop it in your current server in place of your current parity drive and see what happens. The absolute worst case scenario is that you have a data drive failure unrelated to the new drive while you build parity, and like you said, it's your backup location anyway, so not the end of the world. If you start the array in maintenance mode for the parity build, you could just put the old parity drive back in place and recover.

 

If the current box truly doesn't like large drives, you aren't out anything because you are planning a new box anyway. If it works, you can either leave it in place in preparation for operation capacity match, or pull it to use in the new box.

 

Honestly, only using 50% of capacity isn't where I like to be anyway. That's capacity that likely cost me too much and is adding unneeded statistical risk. I prefer to add drives one at a time, generally when my free space drops below the size of my parity drive(s). I currently have 8TB parity, so until I get below 8TB free, I'm not going to add capacity. I have an 8TB drive ready to install, because I got it on sale, but it's not going to start spinning until I need it.

Well, I got a 10TB drive to try out in my old box. I haven't started the build on the new one just yet. And guess what, the darn drive trays for my case (Fractal Define R4) don't have the proper screw offsets for it! So I hooked it up temporarily to see if the BIOS on the MSI board could at least recognize it, and it DID. So at least I have that going for me.

 

I have an email out to Fractal Design to see if they happen to have drive trays that work with that case. If not, I'll most likely have to get another case, which kind of sucks. Especially since I don't know 100% whether the drive's going to work or not, since all I have is BIOS confirmation. Not how I wanted this test to go.

 

My other problem is, since I don't have any confirmation whether the drive is good or not, and it looks like it will be a few more weeks before I can start the new build, what the heck do I do with the drive? Hold on to it for the build and hope it's not DOA, or return it?
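One way to answer the DOA question before the new build is ready: run the drive through a SMART self-test from any Linux box (or the current server's console). A sketch using smartmontools; the device node is a placeholder:

```shell
#!/bin/sh
# Burn-in sketch using smartmontools (run as root; /dev/sdX is a placeholder).
burn_in() {
    dev=$1
    smartctl -i "$dev"         # identity: confirm the full capacity is reported
    smartctl -t long "$dev"    # start the extended self-test (hours on a 10TB)
    # When the test finishes, inspect the attributes, e.g.:
    # smartctl -a "$dev" | grep -E 'Reallocated|Pending|Offline_Uncorr'
}

# burn_in /dev/sdX
```

The unRAID preclear plugin goes further with a full write-and-verify pass, which is the usual forum advice for shaking out a new drive within the return window.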

8 hours ago, acurcione said:

the darn drive trays for my case (Fractal Define R4) don't have proper screw offsets for it!

Is there metal there but no holes? Perhaps drill your own holes? As long as the drive trays can be completely removed so any metal shavings can be cleaned off before the tray is put back in the case, I see no downside to modding what you've got if it can handle it.

Is there metal there but no holes? Perhaps drill your own holes? As long as the drive trays can be completely removed so any metal shavings can be cleaned off before the tray is put back in the case, I see no downside to modding what you've got if it can handle it.


Unfortunately no. The trays are too short to reach the new holes. But Fractal does have adapters they'll send for free as long as I cover shipping, so now that the parity rebuild is over I can power it down and flip it over to get the serial number off the case and start that process. All is not lost at least!


Sent from my iPad using Tapatalk Pro
