bmfrosty Posted July 22, 2017

I built my current unRAID server in 2013, and as the hardware and drives have aged, I've been thinking about what I have and have not liked about it. The big tower case comes first; the surgery required to make any change comes second. Four years in, it's probably time to start swapping out hard drives and move to a faster CPU to better support transcoding, and I don't want the process to leave me in the lurch.

The thought is that I'll start with this kit: https://www.newegg.com/Product/Product.aspx?Item=N82E16816132015&cm_re=jbod_enclosure-_-16-132-015-_-Product

1) I'll install the card in my current system and attach the JBOD enclosure to it.
2) I'll install a new parity drive in the new enclosure - 8TB or 10TB.
3) Preclear and a rebuild of parity will be the first test.
4) I'll add two data drives in the new enclosure and use unbalance to migrate all the data to the new drives, then decommission all of the old drives (rough capacity check for this step at the end of the post).
5) I'll migrate the cache drive to the enclosure.
6) I'll purchase an off-the-shelf system and migrate the card, enclosure, and USB stick to the new system.

Theoretically this should get me onto new hard drives and new everything else. Does anyone see any holes in this plan? Am I risking any serious bottlenecking multiplexing over the eSATA port?

Thanks!
-Ben
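Before kicking off step 4 I figure it's worth a quick sanity check that the new drives can actually hold everything. A minimal sketch, assuming the old data disks are mounted at /mnt/disk1-3 and the new ones at /mnt/disk4-5 (placeholder paths - adjust to the real array):

#!/usr/bin/env python3
# Rough capacity check before migrating data with unbalance.
# The mount points below are placeholders; substitute the real disk mounts.
import shutil

OLD_DATA_DISKS = ["/mnt/disk1", "/mnt/disk2", "/mnt/disk3"]  # drives being retired
NEW_DATA_DISKS = ["/mnt/disk4", "/mnt/disk5"]                # drives in the new enclosure

used = sum(shutil.disk_usage(d).used for d in OLD_DATA_DISKS)
free = sum(shutil.disk_usage(d).free for d in NEW_DATA_DISKS)

print(f"Data to move:      {used / 1e12:.2f} TB")
print(f"Free on new disks: {free / 1e12:.2f} TB")
print("Looks OK" if free > used else "Not enough room - add another drive first")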
trurl Posted July 22, 2017

Quoting bmfrosty: "Am I risking any serious bottlenecking multiplexing over the eSATA port?"

Parity checks and parity or data disk rebuilds require accessing all disks simultaneously, so yes, those parallel operations will bottleneck. Also, writing a single data disk also writes parity, so even for normal operations it would be better if parity were on its own port.
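As a rough back-of-the-envelope (assumed numbers - a single 6 Gb/s link behind the port multiplier, eight drives spinning during a parity check, and drives that can each stream around 180 MB/s on their own):

LINK_MBPS = 600      # ~6 Gb/s SATA III after 8b/10b encoding overhead
DRIVES = 8           # assumed number of drives behind the multiplier
NATIVE_MBPS = 180    # assumed sequential speed of a single modern drive

per_drive = LINK_MBPS / DRIVES
print(f"Per-drive share of the link: {per_drive:.0f} MB/s")
print(f"Slowdown vs dedicated ports: roughly {NATIVE_MBPS / per_drive:.1f}x")

With those assumptions each drive only gets about 75 MB/s during a parallel operation, so a parity check could take more than twice as long as it would with every drive on its own port.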
bmfrosty Posted July 22, 2017

I could migrate parity to the main chassis if needed. One drive on a motherboard cable is not the big mess that eight drives have been. Even if I move to dual parity (P + Q), two drives on the motherboard isn't too bad.
bmfrosty Posted July 22, 2017

SFF-8088 is probably better for this, but I'm not really set up for anything rack-mount at home.
SSD Posted July 22, 2017

Many towers of that vintage support so-called "5-in-3" cages that mount 5 drives in the space of three 5.25" bays. I use Supermicro CSE-M35T-1Bs in my servers, which are the best of the best IMO. Deals can occasionally be had on eBay, although they have been fewer of late. Here is a picture of an Antec 900 with 3 of those cages. Swapping drives is a breeze, with virtually zero chance of creating a cabling event.

If that won't work or you have more drives than will fit, I have a couple of cages that I keep outside the server. You could even screw a couple of them together to support 2 or 4 cages (5-20 drives). You just need something like an LSI SAS9201-16E, purchased from eBay for about $50, plus some SFF-8088 breakout cables; that would support 16 drives. For power I run a PSU pigtail out the back of the server and connect the cages (only one connection needed per cage, so very manageable). This would be cheaper than the units you are looking at, and probably faster too, and your PSU would power everything.

Your description of the status quo and your likes/dislikes isn't really clear. I'm assuming your drives are screwed into a tower case, making swapping hard and error-prone, and that you're ready for a more powerful box to take advantage of new unRAID features. All very doable. Moving to hot-swap cages would be a good step. I'd do it in conjunction with the motherboard update and get everything installed and burned in at once. Once the cabling proves itself, and with a powerful server that should last you years, you'd be sitting pretty for adding and upgrading drives, with a server that should not need to be opened. Some people like to blow out the dust bunnies from time to time, but with good airflow I've found dust bunnies to be pretty rare in my cases, and I'd just as soon leave the server closed, protecting all of the sensitive connections.
BobPhoenix Posted July 23, 2017

If you want to expand externally, you could go this route without a bottleneck: https://www.newegg.com/Product/Product.aspx?Item=N82E16816111471

It has a SAS connector, so just get a controller with an external SAS connector and use the provided external SAS cable.