Drive cages: pros and cons


lars


OK, I'm still waiting on parts to complete my first unRAID build, and yes, I need to cut some costs along the way.

So I'm building without any kind of cages at this point. Well, obviously not entirely; the HDDs need to be fixed to something ;D.

Anyway, it's more about all the fuss over the front-access (and maybe hot-swap) versions. As mentioned, this is my first unRAID server, but do drives really die so often, and so fast, that you can't just open your easy-access case to swap one? I don't know. I've been using computers since the '80s, and especially in recent years I've had servers as well as workstations running at my house 24/7 for various uses. In the last 10 years I've had one failing HDD that needed to be replaced (OK, at work, in a more serious 24/7 environment, we had two in the last four years). Does it really pay off to spend the money on cages? It sure as hell is a lot cheaper to buy a full-size tower with endless bays and some good fans for the drive bays, if needed. If I have to swap a drive once a year, hell, even four times a year, it's still not really worth $100+ for a 5-in-3 cage. Open the case, three screws (if not quick-releases anyway), two cables... what's the big deal, besides looking cool? My box will end up in the basement anyway; the bling factor gets lost there. There are decent cases holding 10-20 HDDs without cages, so the money seems better invested in additional drives. Fans can be added front and back for drives without cages, if needed.

 

I'd really like some insights from others here: why?! How often do you actually need to swap drives (due to failure, not tinkering)? And the heat issues (not imagined, but real-world checks with/without a cage and dedicated cooling)?

 

I run my equipment (without AC) in tropical conditions (sure, not ideal) with a very low failure rate. Considering many of you run similar gear air-conditioned, super-cooled, ice-packed, etc.: why? It seems like overkill, or just geeky hardware one-upmanship...

 

Anyway, no insult intended to anybody; insights, logical explanations, etc. are welcome :)

 

always, L


Well, you have had 30 reads without a reply, so your arguments for no cages are convincing. 

 

The reason that I went with quick-change cages is simple. As you put more and more drives into a case, the wiring becomes more crowded and messier. Now if a drive fails, you will move a lot of cables (both deliberately and accidentally) as you change out the drive. Most SATA connectors are not locking; they will 'move' at the point of connection and electrically disconnect without physically detaching. Now, when you attempt to bring the array back up, you have one or more additional failures. OK, so you know about this issue, and you go back in and push all of the connectors home. Of course, you have to disturb ALL the cables as you do this, and it becomes quite easy to unseat a connector that you have already seated as you seat the next one. (If you have read the forum extensively over a couple of months, you will have noticed the number of posts from people who hit a second issue right after replacing a failed drive. Folks get REALLY upset and panicky when they have 10+ TB of data at risk...)
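For what it's worth, here is a minimal sketch of a sanity check that catches exactly this failure mode, assuming Linux with /dev/disk/by-id symlinks (the serial numbers below are placeholders, not from any real build): compare the drives the kernel currently sees against a saved list before bringing the array back up.

#!/usr/bin/env python3
# Hedged sketch: detect drives that dropped off the bus after maintenance.
# Assumes Linux exposes /dev/disk/by-id; the EXPECTED serials are examples.
import os

EXPECTED = {
    "WD-WCC4N1234567",   # placeholder serials recorded at install time
    "WD-WCC4N7654321",
}

def present_serials():
    """Collect serials of whole ATA disks, skipping partition symlinks."""
    seen = set()
    for name in os.listdir("/dev/disk/by-id"):
        # entries look like: ata-WDC_WD40EFRX-68N32N0_WD-WCC4N1234567
        if name.startswith("ata-") and "-part" not in name:
            seen.add(name.rsplit("_", 1)[-1])
    return seen

missing = EXPECTED - present_serials()
if missing:
    print("Not detected (reseat cables/trays):", ", ".join(sorted(missing)))
else:
    print("All expected drives detected.")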

 

It is your option. I would also say that if your build will never have more than six drives, don't use cages. Seven to ten drives is the gray area. With eleven or more, you should definitely be considering cages from the start.

I haven't even addressed the fact that there aren't many cases that can hold more than six or seven 3.5" HDDs without some sort of drive cage arrangement...


It's not only about drive failures, but also the ease of adding, upgrading, or rearranging drives. Once you get above 6 or 8 drives, the case starts to become very heavy. Not having to move the case and open it to get at the drives is a big plus that has been more than worth the money of the hot-swap cages to me. My server weighs over 50 pounds.

 

The cages are also needed for cooling densely packed drives.  I've had to move drives that were running hot to a slot that provided better cooling.
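As a side note, here is a minimal sketch of the kind of temperature check behind that decision, assuming smartmontools is installed and the script runs as root (the device names are placeholders):

#!/usr/bin/env python3
# Hedged sketch: read drive temperatures via smartctl to spot a hot slot.
# Assumes smartmontools is installed; adjust DEVICES to your system.
import subprocess

DEVICES = ["/dev/sda", "/dev/sdb", "/dev/sdc"]  # example device nodes

for dev in DEVICES:
    out = subprocess.run(["smartctl", "-A", dev],
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        fields = line.split()
        # SMART attribute 194 is the temperature on most drives;
        # its raw value is the 10th column of the attribute table
        if fields and fields[0] == "194":
            print(f"{dev}: {fields[9]} C")
            break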

 

I planned for my server to support 15 drives from the start, but I started with three drives and a single hotswap cage.  As I added more drives I also added hotswap cages.


I agree with both responders, plus the fact that time is money.

When I had my 20-drive server, I did not want to spend a lot of time wading through the cages and disassembling brackets to replace a drive in the middle of the chassis.

 

If you look at my first design, I went totally trayless to speed up swapping.

After I started to grow, I built a larger tower server with 20 drives using the Supermicro 5-in-3s, since I already had a few Supermicro servers.

 

Also, the Supermicros with the Geil fans kept the drives at a good temperature without worry.

There is an alarm in the Supermicro 5-in-3 that lets you know if the drives are too hot or a fan dies.

The Geil fans were so quiet I could not even hear my machine.

I had about 2-4 failures in 4 years. I was able to swap a drive out within minutes and start the rebuild process.

 

 

My data and time are valuable.


I just find it much easier to pull a tray than to open the case and start unplugging and unscrewing things. I have two 4-in-3 cages, and unRAID 5.0 supports powered swapping. I tend to arrange my drives top to bottom matching the disk assignments, so I will stick a disk into the 7th bay (the cache is at the bottom), preclear it, then stop the array, swap it on the Main page, and start the disk upgrade. Once complete, I'll stop the array again and swap locations. My server may go a year without the drives changing, but I don't care that the cages "don't get used". I typically do multiple disk swaps when I upgrade, and the cages are really handy during those times.
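A minimal sketch of a helper for that bay-matching habit, assuming Linux /dev/disk/by-id symlinks (nothing here is unRAID-specific): print each disk's device node next to its serial-bearing name so you can confirm which tray holds which assignment before pulling one.

#!/usr/bin/env python3
# Hedged sketch: map device nodes to drive serials before pulling a tray.
# Assumes Linux /dev/disk/by-id symlinks exist for ATA disks.
import os

BY_ID = "/dev/disk/by-id"

for name in sorted(os.listdir(BY_ID)):
    if name.startswith("ata-") and "-part" not in name:
        dev = os.path.realpath(os.path.join(BY_ID, name))
        print(f"{dev}  ->  {name}")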

 

There are some cases with fairly easy-to-use internal 3.5" bays. I would consider one of those for six or so drives instead of hot-swap cages.

 


.....

The reason that I went with quick-change cages is simple. As you put more and more drives into a case, the wiring becomes more crowded and messier.

.....

It is your option. I would also say that if your build will never have more than six drives, don't use cages.

.....

 

I'm with you there, and I'm not trying to declare cages evil. I'm just amazed how many people here build relatively small servers and go completely overboard on this kind of stuff. Am I the only one with a tight budget these days?

I agree about the increasing cable issues, but even in a 9-12 HDD unit one can avoid a mess with decent cable management. I'm not saying it's ideal! I think when I get to that size/number of drives I'd still rather go with a good case (http://www.xigmatek.com/product.php?productid=122 for example) and get an additional cage for it - it would cost me little more than one 5-in-3 hot-swap, which leaves money for two more 3 TB drives.

Anyway, I agree with you 100% on the big builds some people here have, and I also agree to go with cages if you just don't know where else to put your money.

 

cheers, s

 

 


Hi all,

Thanks for the input. As said above, I'm not trying to declare cages evil! From a certain size on, I doubt I would go completely without them.

 

But as I mentioned, I have a pretty tight budget at this point, which went towards a good-quality PSU etc. rather than cages. I was more amazed at the spending by some people with 'mini' servers: four drives or so, fairly crappy PSUs... but $100-plus hot-swap cages. I mean, come on, spend the money on a decent UPS! That makes more sense.

 

Anyway, I agree about the cable management in bigger and growing builds, and I can see the weight issue (depending on where the box is placed). I don't really agree on the heat issue (though at a certain size I guess it becomes a growing problem) - a lot of that is sloppy cable routing, and a couple of well-placed fans can do wonders.


 

 

 

.....

I agree about the increasing cable issues, but even in a 9-12 HDD unit one can avoid a mess with decent cable management.

.....

 

You also have to be very careful with 'neatening up' cheap SATA cables by cable-tying them together, or you can introduce crosstalk problems that will cause intermittent data corruption. Believe me when I say that is the absolute worst problem you will ever have on a server!
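For readers wondering how you would even notice intermittent silent corruption, here is a minimal sketch (assuming Python 3.8+; the paths and manifest name are placeholders): build a SHA-256 manifest for a share once, then re-verify it after re-reading the data over the suspect cabling.

#!/usr/bin/env python3
# Hedged sketch: build and re-check SHA-256 sums to expose silent corruption.
# Usage (example paths):  checksums.py build /mnt/disk1 sums.txt
#                         checksums.py verify sums.txt
import hashlib, os, sys

def sha256(path, bufsize=1 << 20):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(bufsize):   # needs Python 3.8+
            h.update(chunk)
    return h.hexdigest()

def build(root, manifest):
    with open(manifest, "w") as m:
        for dirpath, _, files in os.walk(root):
            for name in files:
                p = os.path.join(dirpath, name)
                m.write(f"{sha256(p)}  {p}\n")

def verify(manifest):
    bad = 0
    with open(manifest) as m:
        for line in m:
            digest, path = line.rstrip("\n").split("  ", 1)
            if sha256(path) != digest:
                print("MISMATCH:", path)
                bad += 1
    print(f"done, {bad} mismatches")

if __name__ == "__main__":
    if sys.argv[1] == "build":
        build(sys.argv[2], sys.argv[3])
    else:
        verify(sys.argv[2])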


Many of us are professionals and choose to do a somewhat professional job.

Others take a certain pride in their builds; just as people soup up cars, people soup up computers.

Others choose to spend money now versus losing time in the future.

When you have enough experience and have done enough of these builds, you learn the value of not opening the computer to swap a drive.

It's not IF the drives will fail, it's only a matter of WHEN.

 

 

My first build without cages was riddled with cables slipping out of place due to the vibration of the drives.

 

 

I like to use properly sized locking cables, color-coded and all, from each individual controller to each 5-in-3, tied neatly.

I never had to go inside the computer except to dust it out. Everything just worked very well that way.

 

 

Swapping and/or upgrading a physical drive was a mere 10 minutes vs an hour with other types of cages.

 

 

It was great. I could pop out the spare drive, pop in a new drive, preclear it, and come back a few days later.

Then shut down unRAID, re-assign to the new slot, and upgrade. Then pop out the old drive, pop in the spare, and preclear it for the next time I needed it. It took minutes to do all this instead of an hour inside the computer.


I won't ever go back to cases full of individual drives again. Just messing around inside a case, unscrewing drives and fishing in wires, can be a PITA, especially if you drop a screw and have to fish it out (we have all done it at least once). I am sure many of us have also accidentally bumped a SATA or power connector (and not noticed it) on a drive we were not working on, put it all back together, put the server back, gone to fire it up, and had a drive or two not come back online... time to pull it back out.

 

The biggest reason for me is simply not having to move the PC. All of my towers are on shelves, tightly squeezed together, and my rackmounts are all racked. Once you start loading these puppies up with lots of drives, unracking one for maintenance is a PITA: they are heavy to move, and I have to un-wire the backs to get them out.

 

An often overlooked argument for cages: I tend to use the same type of cage in all of my computers.

If I have to swap a drive from one PC to another, I just pull the tray and put it into the other PC.

 

As drives become larger and cheaper, I tend to upgrade the drives in my primary PCs/servers and "bump" the older drives down to secondary boxes, and possibly bump drives from those PCs down the line to even older boxes.

 

I just moved 40 drives between two Norco 4224s yesterday. That took about 2-3 minutes instead of hours. Yes, this was an extreme example, but it is one of my reasons for hot-swap.

 

If I only had one server and it was easily accessible, my entire reasoning would be moot.

When you have a server room full of them, you don't have time to mess around.

I work on servers all day at work; I don't want to come home and do "work". I want to relax, so less time spent on a PC is king to me.

.....

An often overlooked argument for cages: I tend to use the same type of cage in all of my computers. If I have to swap a drive from one PC to another, I just pull the tray and put it into the other PC.

.....

 

This is why I prefer trayless.

Since I already had a bunch of Supermicros, I went with them.

My server was so heavy you could hear my back creak when I picked it up.

 

The only case in which I would not use a 5-in-3 is if the case were trayless and had a backplane I could just push the drive into.

I started to use the Sandisk external port-multiplier (PM) units for that reason.

 


 

 

 

.....

You also have to be very careful with 'neatening up' cheap SATA cables by cable-tying them together, or you can introduce crosstalk problems that will cause intermittent data corruption.

.....

 

Thanks, frank1940! Interesting point; I have to say I need to read up on it. It has never happened to me yet, I think, but then I've never had a computer with a large number of internal HDDs, so this is kind of a new angle to me (also a reason why I started this thread), even though the coming build on my end will be fairly small, with initially six HDDs... very interesting.

A question along this line, since you mention 'cheap' SATA cables: what do you consider cheap? And I assume it is mainly about a higher level of shielding?


.....

I won't ever go back to cases full of individual drives again.

.....

I work on servers all day at work; I don't want to come home and do "work". I want to relax, so less time spent on a PC is king to me.

.....

 

Point well taken, but you have to agree you are quite an extreme example, mate ;). I see your point completely. And, just in case it didn't come out as clearly as intended in my initial post, I was focusing more on the 'start-up' user, someone with maybe a couple of computers and a small server or two. I wouldn't dare argue with you over a setup like yours. Gee, if it weren't for the budget considerations mentioned initially, I would just go for it because it is convenient.

In my situation (living in Jamaica with constant power spikes, brownouts, and cuts), I see the money better invested in power-line conditioning and a UPS unit. A decent UPS might also be a consideration for people in the first world building a server!? It is amazing how much damage over- and under-voltage and interruptions can do to hardware. (I am sure you have that sufficiently covered with your setups.)
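A minimal sketch of the monitoring side of that UPS advice, assuming NUT (Network UPS Tools) is installed and a UPS is configured under the example name "myups": poll the stock upsc client and report battery state so a script could shut the server down cleanly on an outage.

#!/usr/bin/env python3
# Hedged sketch: query a NUT-managed UPS via the stock `upsc` client.
# Assumes NUT is installed; "myups" is a placeholder UPS name.
import subprocess

out = subprocess.run(["upsc", "myups@localhost"],
                     capture_output=True, text=True).stdout
# upsc prints "key: value" lines, e.g. "battery.charge: 100"
info = dict(line.split(": ", 1) for line in out.splitlines() if ": " in line)

print("status:", info.get("ups.status", "unknown"))   # OL = on line, OB = on battery
print("charge:", info.get("battery.charge", "?"), "%")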

 

thx, L

 

 


.....

Most SATA connectors are not locking; they will 'move' at the point of connection and electrically disconnect without physically detaching. Now, when you attempt to bring the array back up, you have one or more additional failures.

.....

 

I was recently doing work on my desktop, which has a measly 7 HDDs in it, and I can confirm this. First two drives didn't work, then one, then none (well, then it died about a second after getting into Windows). I basically had to rip all the HDD wiring out and redo each drive one by one, slowly and carefully.

 

I'm hopefully buying my first unRAID setup in the next week-ish, with drive cages, so I won't have to deal with the bullshit of cabling. I'm more of a software guy myself; I'd rather sit for hours working out why my code isn't working than work out why my mobo isn't passing POST.


 

.....

A question along this line, since you mention 'cheap' SATA cables: what do you consider cheap? And I assume it is mainly about a higher level of shielding?

.....

 

Cheap is not necessarily about the price you pay. (Products with shoddy or average construction are often priced at premium levels and advertised as top-of-the-line.) What I mean is cheap components used in the construction of the actual cables, particularly the shielded cabling. Unless you wrote the spec and actually inspected the product as part of the manufacturing process at the factory in China, there is no way of knowing. The problem is that a poorly shielded cable will be entirely satisfactory unless you tie cables together for 'neatness'. Remember, 98% of all of these cables will end up in computers with fewer than two SATA cables!

 

A classic example of cheap substitution is the use of RG-59 cable in place of RG-6 in TV cable applications. Often, the RF cables packed with consumer electronics are RG-59. A short run of RG-59 will work, but a longer run will suffer signal loss and 'pickup' of other signals because of RG-59's inadequate shielding. That is why your cable company's repair person will automatically replace these when doing a service call.


I use Norco HDR5-V2 5-in-3 modules.

When I need to swap an HDD out, I have to remove five SATA data cables and a SATA power cable with five connectors from the cage, and also unscrew eight bolts to remove the 5-in-3 module from the PC case.

It was 10.50 euros for the HDR5-V2 and 15 euros for a Noctua PWM fan.

I have two servers with four 5-in-3 modules in each, so I was looking at 8 × 100 euros for hot-swap versus 8 × 25 euros for non-hot-swap.
