
693 posts in this topic

Recommended Posts

A note to all 4020/4220 users: the second row of Molex power connectors on the backplane is intended to be used *only* with a second redundant power supply or an alternate power supply. You don't need to connect it using Y-adapters to a single PS!

 

hello, has this been confirmed? can you run 20 drives connecting only one molex for each backplane? has anyone done this?

 

thanks in advance

20 drives, each pulling 2 amps on spin-up, will together draw 40 Amps of current.  A single Molex is not rated for that amount of current.  It might work, but you are asking for trouble.  There is a reason the ATX spec limited any single rail to 18 Amps.

 

According to this page, the molex connector is rated for 11 Amps.
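A quick back-of-the-envelope sketch of that claim in Python, using the figures from this thread (2 A per drive at spin-up and the 11 A Molex rating quoted above are assumptions from the posts, not measurements):

```python
# Rough check of the total spin-up load against the quoted Molex rating.
DRIVES = 20
SPINUP_AMPS_PER_DRIVE = 2.0   # typical 12 V spin-up draw per drive (from the thread)
MOLEX_RATING_AMPS = 11.0      # per-contact rating cited above

total = DRIVES * SPINUP_AMPS_PER_DRIVE
print(f"total spin-up draw: {total:.0f} A")                  # 40 A
print(f"over the rating by: {total / MOLEX_RATING_AMPS:.1f}x")  # ~3.6x
```

Even if the real per-drive number is lower, a single connector is several times over its rating during spin-up.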


depends on the gauge of the wiring :)

 

some psu come with really fat wires.

The wire gauge does not matter much (well, it does if it is too small).

Instead, we are talking about the single molex connector feeding all 20 drives.

It apparently is rated for 11 Amps based on the size of its connecting surfaces. It will be stressed at a 40 Amp load starting to spin 20 drives.


I have a custom molex power rail (there's a picture of it somewhere on the forum) that powers 10 drives (all green) off of a single molex (directly from the PSU).  I have not had any problems.  I specifically held the molex in question while powering up to see if it got hot - and it did not.  Not in the slightest.

 

I cannot advise anyone else to brazenly ignore the specs on the molex connector as I have, but this has been my experience.


so that's 11 amps per pin? there's 4 pins :)

 

Uhm, no.

 

then please define how the amperage load is divided between the 4 pins.

 

i have no issues running 12-16 hdds off a single line from my psu with sata clip-on connectors. i assume psu modular connector pins are very much identical to the pins used in a molex 4-pin connector.



 

 

Pin    Color    Type

Pin 1 Yellow +12 V

Pin 2 Black Ground

Pin 3 Black Ground

Pin 4 Red +5 V

 

We've not talked much about the 5 Volt current needed by disks, because the 5 Volt lines on power supplies are rated for much higher amperage than the 12 Volt lines.  According to the site quoted earlier, the 5 Volt wattage ranged from 2 watts to 4 watts (0.4 to 0.8 Amps per disk).  20 disks will therefore draw an average of 12 Amperes of current on pin 4, the 5 Volt connection.

 

Pin 1, the 12 Volt connection will have 40 Amps through it on spin-up.

 

The two other pins, pins 2 and 3, share the return current (Kirchhoff's current law), as they are in parallel.  They evenly share 52 Amperes of current (26 Amperes each).

 

Therefore

Pin 1 = 40 Amperes

Pin 2 = 26 Amperes

Pin 3 = 26 Amperes

Pin 4 = 12 Amperes

 

So, in the case of 20 drives, all spinning up, all through one connector, the only pin even close to its rating is pin 4, and even it is slightly over.  Basically, all four pins are being used beyond their rated current capacity.
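The per-pin breakdown above can be sketched in a few lines of Python (the 2 A and 0.6 A per-drive figures are the thread's assumed averages, not measurements):

```python
# Current on each pin of a single 4-pin Molex feeding 20 drives.
DRIVES = 20
I12 = DRIVES * 2.0    # pin 1 (+12 V): spin-up current, 40 A
I5  = DRIVES * 0.6    # pin 4 (+5 V): average draw, 12 A

# Pins 2 and 3 (grounds) are in parallel and split the total return
# current evenly (Kirchhoff's current law).
I_gnd_each = (I12 + I5) / 2   # 26 A per ground pin

for name, amps in [("pin 1 (+12 V)", I12), ("pin 2 (GND)", I_gnd_each),
                   ("pin 3 (GND)", I_gnd_each), ("pin 4 (+5 V)", I5)]:
    print(f"{name}: {amps:.0f} A  (rating: 11 A)")
```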

 

Now for the bad news.  Every connection has some resistance, and when you pass current through a resistance you get a voltage drop.  Looking up the expected "resistance" of a Molex connector, I find it listed as 20 milliohms.

 

Plugging this into Google, it says:

(40 amperes) * 20 milliohms = 0.8 volts

 

This means our 12 volt line is now 11.2 volts, and actually changing as the current draw varies while the drives spin up.  Worse than that, 40 Amperes * 0.8 Volts = 32 Watts.  Our poor connector pin will be dissipating 32 Watts of heat (the connector will be getting warm).  Oh, but wait... that is just the one pin.  The other pins also have their voltage drops.

Pin 2 = (20 milliohms) * 26 amperes = 0.52 volts drop  (13.52 Watts heat)

Pin 3 = (20 milliohms) * 26 amperes = 0.52 volts drop  (13.52 Watts heat)

Pin 4 = (20 milliohms) * 12 amperes = 0.24 volts drop  (2.88 Watts heat)

The total heat is 61.92 Watts.  Ouch... ever grab hold of a 60 Watt light bulb?  Pretty hot, isn't it.  I'm thinking it is a good thing the spin-up time is only 30 seconds or so, but even when not spinning up, we are over the ratings of single pins in the connector.
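The drop-and-heat arithmetic above, as a sketch (the 20 milliohm contact resistance is the assumed figure from the post):

```python
# Voltage drop and heat dissipated per contact: V = I*R, P = I*V (= I^2 * R).
R_CONTACT = 0.020  # ohms, assumed Molex contact resistance
pins = {"pin 1": 40.0, "pin 2": 26.0, "pin 3": 26.0, "pin 4": 12.0}  # amps

total_heat = 0.0
for name, amps in pins.items():
    drop = amps * R_CONTACT
    heat = amps * drop
    total_heat += heat
    print(f"{name}: {drop:.2f} V drop, {heat:.2f} W")
print(f"total dissipated in the connector: {total_heat:.2f} W")  # 61.92 W
```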

 

The wire from the power supply to the connector is probably pretty heavy gauge; let's guess it is 16 gauge.  Let's guess there are two feet of wire involved (one foot from the supply to the disk, one foot of ground return).  The resistance of 16 gauge wire is .00473 ohms per foot.  Doing the math, two feet of wire carrying 40 Amperes will have

(.00473 * 2) * 40 = 0.3784

Volts drop across it.  In reality, the wire distance to the disk from the supply is frequently much more than 1 foot.

 

Now, subtracting that from 11.2, we see the disk is actually getting 10.82 Volts while everything is spinning up.  Still sound good to you?  Remember, earlier I posted a note from a disk spec sheet describing how disks are very sensitive to voltage variations (noise) greater than 0.1 volts, and that if being read or written while the voltage is changing, the results are not guaranteed.  I don't have a degree in Math, but my grade school math skills are enough to tell me that 1.18 volts of deviation is likely to cause issues.
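The whole chain from the supply to the disk, under the same assumed resistances (20 mΩ contact, .00473 Ω/ft for 16 AWG):

```python
# Voltage actually reaching the disks during spin-up, per the thread's figures.
R_CONTACT = 0.020        # ohms, assumed Molex contact resistance
R_WIRE_16AWG = 0.00473   # ohms per foot, value used in the thread
I = 40.0                 # amps at spin-up

v = 12.0
v -= I * R_CONTACT           # Molex contact drop: 0.8 V
v -= I * R_WIRE_16AWG * 2    # two feet of 16 AWG wire: ~0.38 V
print(f"voltage at the disks: {v:.2f} V")        # ~10.82 V
print(f"deviation from 12 V: {12.0 - v:.2f} V")  # ~1.18 V
```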

 

Is it any wonder disks become unstable when too many are attached to a power supply through a single connector, or even multiple connectors on a single cable, even if the power supply is able to supply the rated current?

 

Joe L.


Can't the drives have staggered spin-up on power-up?  That would tend to solve the power draw issue, but introduce others.  Is that a drive function, an OS function, a mobo function, an application function, a...?

 

I thought (and do not know why I do) that modern drives have some configuration setting that says to wait n units of time after power-on before spinning up.  If so, I guess one has to go to the drive manufacturer to find out how to do it.  I will do that for the Hitachi 2TB drives I am using and report back.

 

The delays need not be long, as the startup inrush current lasts for a relatively short time.  This brings me back to 1958 and my wondering why I was in that EE lab with all sorts of electric motors.



It is not inrush current, although I'd be willing to bet it is even higher than the "peak" values mentioned by the manufacturers; it is the extra current needed to get the disks to spin up to their rated RPM.  It lasts much more than a fraction of a second, the entire time the disks are accelerating.  For some disks, this could easily be 10 seconds or more.  In fact, some "green" drives boast about how slowly they spin up to speed.  (Actually, they boast about how they've reduced their peak power needs when spinning up, by using a smaller motor and taking a longer time to slowly spin up to speed.)

 

You might be able to have the disk controllers stagger the disk spin-up on boot-up, but when you do a parity calc, or when a disk fails and all the others have to spin up at the same time, it does not work.

 

Basically, even if you have a power supply capable of unlimited current, if there are wires or connector contacts with any fraction of an ohm resistance between it and the disks it powers you have significant voltage drops once the currents increase.

 

I left out a few voltage drops in my past example...  Again assume a perfect 12 volt supply

 

12 Volts starting voltage.

 

  40 Amps of current through 1 foot of 16 gauge wire = 0.1892 volts

 

  40 Amps of current through 20 milliohm molex connector = .8 volts

 

  40 Amps through a parallel pair of 20 milliohm molex pins (the two ground pins) = .4 volts

 

  40 Amps through 1 foot of a parallel pair of 16 gauge wires (the two ground wires back to the power supply) = .0946 volts

 

12 volts minus .1892 volts - .8 volts - .4 volts - .0946 volts = 10.52 volts at the hard disk.    (In my prior post I did not account for the voltage drop of the ground wires going back to the power supply; note the parallel pair halves the wire resistance.)
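The same chain as a sketch (same assumed resistances as before; the parallel pairs, both the ground pins and the ground wires, each halve the resistance, so the return-wire drop comes out near 0.095 V):

```python
# Full drop chain from a perfect 12 V supply to the disk during spin-up.
R_CONTACT = 0.020   # ohms, assumed Molex contact resistance
R_WIRE = 0.00473    # ohms per foot of 16 AWG, value used in the thread
I = 40.0            # amps at spin-up

drops = {
    "1 ft of 16 AWG supply wire":           I * R_WIRE,         # 0.1892 V
    "Molex +12 V contact":                  I * R_CONTACT,      # 0.8 V
    "two ground contacts in parallel":      I * R_CONTACT / 2,  # 0.4 V
    "1 ft ground-wire pair in parallel":    I * R_WIRE / 2,     # ~0.095 V
}
for name, d in drops.items():
    print(f"{name}: {d:.4f} V")
v = 12.0 - sum(drops.values())
print(f"voltage at the disk: {v:.2f} V")  # ~10.52 V
```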

 

And it does not matter how good your power supply might be.  If the supply is unable to keep up with the current demand, it is even worse: if it drops a few tenths of a volt under load, then combined with the resistance of the wiring, the end result might be out of tolerance for a disk.  Wiring harness resistance is why even a 750 Watt supply may not be able to handle large numbers of drives.  As much as you might like a single connection to the power supply, you can't cheat physics (not until we get room-temperature superconducting power harnesses).

 

Joe L.


Hey guys, this power stuff is great, can we move it over into another thread?

I would love to comment on the staggered spin-up, but I don't want to deviate too far in a new direction, in regards to pimping a rig versus the informative power discussion.



Good idea.


Sweet, Keyway!  Let us know what your parity check speeds are.

I have a similar build with a CoolerMaster Stacker and the AOC-SATA-MV8s (I chose them as the price was right for open box at $70 each).

 

I may upgrade to the SAS controllers, but I'm not sure.  I may fill the PCIe with an Areca for the parity/cache with SAFE RAID and use a bracket or two for floor mounting.  My PSU is on top, so that bottom PSU area is ripe for another couple of drives, hahaha!  I like the look of the Antec 1200 with Supermicros.  I bet with the larger fans exhausting air, you might be able to get away with taking the loud Sanyo Denkis off the CSE-M35T-1B.  I went with the GELID from the recommendation at the Newegg reviews.  They are very quiet.

 

i took your advice and purchased some of the GELID fans, and they are really quiet running at full speed compared to the stock fans. i picked up these fans (http://www.newegg.com/Product/Product.aspx?Item=N82E16835426004) and they work great. the only problem i ran into was that they came with a temp sensor to auto-adjust the fan speed, so i just cut off the sensor and soldered the wires together to run them at full speed.

 

here is the noise difference, measured using my RadioShack SPL meter  ;D

 

Stock Fans:

From 1 Foot: 68 dB

From 10 Feet: 54 dB

 

New Fans:

From 1 Foot: 56 dB

From 10 Feet: did not register on the meter, because the lowest it can read is 49 dB

 

thanks

James


long ass snip

 

that clears a lot up, except i have sata power connectors IDT fit :)

 

i should probably mention my psu uses 16 awg wiring too, not the usual 18/20 awg.


the only problem i ran into with these fans was that they came with a temp sensor to auto adjust the fan speed so i just cut off the sensor and soldered the wires together to run them at full speed.

 

I just ordered some GELID fans as well.  Where did they put the temp sensor?  I would like to disable that as well, but just from the pictures of the fan, I can't see anything sensor-looking.



it was about 10" of wire coming from the fan motor with the sensor attached at the end.



 

Kind of missed those postings.  The Norco 4220 cases use one Molex connector per 4 drives, i.e. to power each mini disk backplane.  There are 5 backplanes in 5 rows, each with two Molex connectors; one is required, and the second is to be connected to a redundant power source.  So they definitely do not use *one* Molex for all 20 drives, but for 4 only.

 


I just moved from a Chenbro ES34069 with 4 drives to a Norco RPC-450B based build with 10 drives (max 15 drives). I wanted a case that could hold more drives with good airflow. I don't really care about hot-swap drive bays (too expensive, and unRAID doesn't support hot-swap today), but I also didn't want to spend too much time building my own drive bays. Furthermore, I wanted something both heavy and not eye-catching, so a potential "visitor" would be less tempted to take it. As this beast is going in my garage, noise is less of an issue.

I looked around and couldn't find any nice industrial chassis that would fit many drives out of the box. Then I found the Norco RPC-450B, which holds 10 drives with no screws and has two front 120mm fans. It even has room for additional drives, as there is an empty 3x5.25" bay. Priced at $78, this sounded too good a deal. Well, I did pay that amount, but living in Europe I also paid more than $100 for shipping, and guess what, the parcel got returned to the US the first time, so I ended up paying even more for a FedEx delivery.

Assembly went well; build quality is OK, but nothing like a Lian Li case. The first things I replaced were the front fans (way too loud, even for a garage). The result is acceptable, but due to the two grilles and dust filter in front of each of them, the case is not quiet. Performance was up compared to the previous Atom-based platform, but mostly on write speed; I guess at the moment read speed is limited by the client PC. Last thing: I do recommend those metal-grip SATA cables. They make your life much easier (and safer) when playing around with drives and SATA controllers.

 

 

Overview

844571674_Thqf8-L.jpg

 

Screwless drive bay

844572082_2iyKc-L.jpg

 

Random pictures

844579710_b4HyA-S.jpg 844578386_KG48q-S.jpg

 

844576791_GVhkf-S.jpg 844579342_3BAAM-S.jpg

 

Full photo gallery here: http://bit.ly/cCxpMP


Hi aplhazo, a very neat and tidy rig you have; I especially like the cable work you've done. If I was building this rig for myself and asked you to do it, I couldn't have done it as well as that. Good piece of kit you have too, and good choice of hardware. It looks very professional, and it seems it didn't require too much work to assemble, either.


so here it is, my second build, or more accurately, just a "reincarnation" of my existing/previous rig.

the idea was to be able to have more drives (i was using an antec 300 and had already 6 disks in it) and especially to make maintenance easier and quicker, i.e. not to have to open the box to replace a failed drive or to add a new one.  just that is worth its weight in gold to me.

 

the entrails of the box are the same: gigabyte ga-ma770-ud3 motherboard, sempron le-1250 cpu, 2gigs of ddr2-667 ram, asus radeon 4350 passive graphics card, tp-link gigabit nic for "server to server" transfers, an ide dvd burner "just in case", that lets me boot some live cd for testing (if need be). 

 

the new build adds: 3 syba sil3132 pcie 1x sata cards (adding 6 ports to the motherboard's 6), two supermicro CSE-M35SB drive cages (noctua 92mm fans replacing the stock ones) and two Kingwin KF-1000-BK trayless racks.

 

it replaces the antec 300 case with a coolermaster 590 case, with [3 arctic cooling AF12Pro PWM + 1 scythe sff21d] fans for cooling, and the ocz ModXStream 500w with a 600w one.

 

i did work at closing all unnecessary vent holes to try to ensure that a max. of air would be sucked in by & around the drive assemblies. 

 

i thus went from 6 internal drives to a possible 12, all accessible from the outside.

the drives have been running fairly cool (max temp. has been 31c, maybe 32c).

 

the geek factor has obviously shot upwards with all the separate drive activity lights... and the white fans (looks geeky to me).  the only disappointment is that the drive fault lights are not functional with the sata version of the supermicro cse-m35sb.  beyond that, i have been quite happy with the new server, which, whilst not as quiet as the previous incarnation, is not that noisy after all.

 

here are some pictures, this time in a more manageable size (no need to use any funky software; it turns out win7's built-in picture viewer/editor can resize quite well.  the ui is a pain, but if you are persistent, you can get the job done).

 

cheers.

 

2462wpz.jpg

 

23lbpc0.jpg

 

6puc1x.jpg

 

24xj21v.jpg

 

j9x11j.jpg

 


Nice! What do you do with the CDROM? Did you recompile the kernel to make it accessible?

I had planned to put one in my machine so I could rip from it with vmware.

I think I'm going with the external SCSI route.



 

thank you.  i think i could make it tidier if i had slightly longer sata cables to route them like some of you guys have done (straight lines, 90 degree angles, being able to go along the sides of the drives, etc.).  the last two trayless bays (KINGWIN KF-1000-BK, added after the fact) sort of spoiled "the look", i might go back and tidy things up in the future.

 

as for the cd-rom, i used it to test all hardware before running unraid (knoppix and other live cds are a godsend).  if i re-generate another bart-pe cd, i might be able to use it to flash the firmware on new syba cards (all instructions to do so make use of windows xp or 7 for an os).  i haven't used it with unraid, maybe i should try to see if it can see the drive.  though from your question i doubt i might be able to do with "plain vanilla" unraid.  maybe we don't have to recompile the kernel to make use of the dvd-rom?  as long as Tom (mr. unraid himself) left a handful of loadable kernel modules with the os, this might just be a question of editing one's "go" script.

 

i have a usb lcd display (2x20, iirc) that i would love to connect to my nas (usb hd44780 lcd unit from 'lcdmodkit'; check on ebay), but i would need to:

(1) fabricate or find an adapter to give it a standard usb connector to plug it into one of the external usb ports (because it connects to the motherboard usb leads),

(2) find some sort of little enclosure to house it (it would sit on top of the nas),

(3) find some software, running under unraid, to give me stats: drive usage, drive & cpu temp, etc.

but like the cabling, it will have to wait (unless someone can list all the required parts off the top of their head).

 

cheers.

 


My first unRAID rig :D Wiring is atrocious but it's all secure

 

Intel E5200 + Arctic Cooling Freezer 7 Pro Rev 2

2GB PC6400 DDR2

Asus P5Q-WS Mobo

Asus ATI HD 4350 passive video card

Raidcore BC4852 PCI-X RAID card used for extra ports

8 * 2TB WD Green, 4 * Seagate 1.5TB LP in 4-in-3, 500GB Seagate 7200.12 for cache

2GB Transcend JetFlash as /boot (See ghetto install below :D)

NZXT Whisper case + An extra fan cable tied in :P

FSP 800w modular PSU

 

2uiwyty.jpg

 

2a5mhe.jpg


Here's my build:

 

I did it all myself!  ;D

 

I love it. UnRAID Minimalist. No noisy fans. Easy to swap out drives. What more could you need!  8)

 

Actually that's almost exactly how mine started out.

