c3

Everything posted by c3

  1. One thing you did not mention: approximately how many drives are you thinking of supporting? That MB has (6) SATA2 connectors; 6 is a good number, and SATA2 is fine. SATA3 is better, but probably not a real issue. Those 6 will only take you so far; the PCIe 16x slot will let you expand with the controller of your choice. You'll want 3+ cores for that workload, and you might just go ahead with 8GB of RAM considering the Tomcat/JBoss/etc.
  2. Well, I am investigating having SuperMicro replace the sockets. I could attempt the repair myself, but it would always be a nagging doubt whenever odd things happened. I had hoped that the open-box units were due to marginal power supplies (these boards do require the real stuff), or just people not wanting to deal with rolling their own ESX LAN drivers. But careless installers passing the buck is bad karma; shame on you. My only disappointment with NewEgg is the claim that the boards were tested functional. They are accepting the returns (if SuperMicro won't fix them). I have asked to talk with RMA testing about their process, to avoid this in the future. All my ESX hosts have access to the iSCSI storage, and YAAC will too. I have a dual-port 1GbE card, so four ports across 3 vSwitches, with management getting redundancy. The whole setup is complicated, but I'll add to the description as I go.
  3. Norco 4224 fans

    I got a Norco 4224 with 4 fans in the middle and 2 USB ports up front. I am not concerned about noise, since this will not be in living space, but I do wish to monitor fan health. All of the fans are 2-wire with Molex connectors. I am using the X9SCM-F, so there are 4 available fan headers on the motherboard. Any good options? I am not opposed to switching to 120mm, but it seems Norco switched from 80mm to 120mm and then back again. (A sketch of polling the board's IPMI fan sensors follows this list.)
  4. Newegg is awesome; they RMA'd motherboard #1 and motherboard #2. I have requested a repair estimate from SuperMicro; rumor is $75. Any hints as to a good burn-in test for motherboards? (Yeah: ESX, an unRAID VM, and a parity check...)
  5. I started collecting parts a while back, but with the arrival of an open-box X9SCM, building began... The first motherboard came in with visible CPU socket damage http://c3images.posterous.com/bent-pin# The second motherboard also came in with visible CPU socket damage http://c3images.posterous.com/bent-pin2# NewEgg RMA'd them both; I hope they bin them and don't try to sell them on. UPDATE 12/10/11: A third motherboard is now running ESX!
  6. Yup, not sure how many of these there are, but I am building one too! Thanks John. Like others, I see the need to consolidate machines, and I already have a few ESX hosts and a couple of iSCSI storage arrays. So this is the first step at combining those. YAAC, the first, will host an unRAID guest using a pass-thru M1015 controller to support up to 8 drives. Other guests will include XBMC with a centralized MySQL db. YAAC, as a host, will join my existing ESX cluster and potentially use the iSCSI storage.

    Case: Norco 4224 - not sure the version, it has 2 USB up front and 80mm fans
    MB: SM X9SCM-F
    CPU: E3-1230
    Heat Sink: Intel OEM
    Memory: Kingston 4x4GB ECC
    Power Supply: Seasonic X750
    SATA Controller: M1015
    Boot/ESX datastore: 750GB Seagate 7200rpm
    unRAID drives: (5) 2TB Seagate LP 5900rpm
    iSCSI drives: (7) 2TB Hitachi 7200rpm

    UPDATE 12-13-11: Third motherboard is now running ESX, with full RAM (total 4x4GB), using the reverse breakout for ESX and SFF8087 cables. With the Seasonic X750 and leftover cables from prior Seasonic X builds, the 1-to-7 is not needed; I just run (4) of the 3-Molex cables up to the backplanes. I would like to get fan speed control - has anyone done that for the 4224? My reason is power, not noise: 6 fans at full speed 24x365 is just wasted energy (a rough estimate of that energy is sketched after this list). I join the club of wondering what those (4) switches on each backplane are for.
  7. They get the open boxes because these motherboards often do not work with borderline power supplies. But be careful, as I have received one with bent CPU pins. Open-box items typically sell out on the first sale because there is only one, which is why I never post them; I buy them. While the X9SCM is a favorite for some, the rest of the series is also noteworthy:

    X9SCM: 4 PCIe slots (2 8x and 2 4x)
    X9SCL: 3 PCIe slots (2 8x and 1 4x)
    X9SCA: 3 PCIe slots (1 16x and 2 4x) and 3 PCI slots
    X9SCI: 1 PCIe 16x slot and 1 PCI slot

    Of course, the -F is useful for headless/remote servers.
  8. Now, similar deals at NewEgg. 120GB http://www.newegg.com/Product/Product.aspx?Item=N82E16820167050 80GB http://www.newegg.com/Product/Product.aspx?Item=N82E16820167047
  9. Now, similar deals at Newegg. 120GB http://www.newegg.com/Product/Product.aspx?Item=N82E16820167051 80GB http://www.newegg.com/Product/Product.aspx?Item=N82E16820167047
  10. Don't worry, it's sold out online and not available in stores...
  11. Amazon reviews indicate the $7-per-drive adapter is not perfect.
  12. Yes, I'd love to get this for a 4224 DAS, but I'd come up 4 drives short. There's also a nice bargain at http://www.provantage.com/intel-res2sv240~7ITSP0V8.htm - check out the 36-porter.
  13. As a fan-out, it is limited to 20 drives with one uplink. The PCIe slot is only used for power...
  14. Yeah, I traveled, but no server :'( I still have my cashier's check, so only a few hours lost (and a great deal). No ice; it was a pretty day.
  15. Not sure what happened. We had scheduled Saturday 10/29 for a purchase meeting in OK. So far, no response on email (gmail) or PMs. Hopefully, nothing bad happened.
  16. The LCD model is 450 watts; the non-LCD is 225 watts. Judging by the weight, the battery size is also about half.
  17. Looking for a motherboard for a low-power unRAID build, so not needing any add-on cards would be good. This is the motherboard section, so I cannot ask about a power supply to match.
  18. Already up to $110.99 (probably thanks to Amazon auto pricing).
  19. Considering both of the mentioned tasks are online, are you looking for traffic shaping for your internet connection? Internal to your home, QoS and/or jumbo frames are non-starters if you don't want managed switches. Either one is plenty of work.
  20. unRAID has single parity, which only allows for the rebuild of a single failed drive. An enhancement, P+Q parity (aka RAID-DP/RAID6), would allow a rebuild to continue in the event of a second drive having a read error during the rebuild. This enhancement is not trivial; it would slow write performance and increase CPU load. Two-drive failures are rare, and multiple simultaneous drive failures are often not actual drive failures; they are often cabling, power, cooling, or connector failures. But as drives get larger, the probability of a read error during a rebuild gets larger (a rough worked estimate follows this list). Large drive capacity brings the requirement for P+Q. Do an internet search on "end of RAID5" for several detailed discussions on the topic. Choose your drive size wisely; it may save you data and power.
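
On the fan-health question in post 3: the stock 2-wire fans have no tach line, so monitoring them would mean swapping to fans with a tach wire on the motherboard headers, or reading whatever sensors the board exposes. Since the X9SCM-F includes IPMI, one option is to poll the BMC's fan sensors. Below is a minimal sketch, assuming ipmitool is installed and the BMC is reachable over the LAN; the address and credentials are placeholders, not values from the posts.

```python
import subprocess

# Placeholder BMC address and credentials; replace with your own.
BMC_HOST = "192.168.1.50"
BMC_USER = "ADMIN"
BMC_PASS = "ADMIN"

def read_fan_rpms():
    """Ask the board's IPMI controller (BMC) for its fan sensor readings."""
    cmd = [
        "ipmitool", "-I", "lanplus",
        "-H", BMC_HOST, "-U", BMC_USER, "-P", BMC_PASS,
        "sdr", "type", "Fan",   # list only the fan sensors
    ]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
    fans = {}
    for line in out.splitlines():
        # Typical line: "FAN1 | 41h | ok | 29.1 | 2025 RPM"
        parts = [p.strip() for p in line.split("|")]
        if len(parts) >= 5 and parts[-1].endswith("RPM"):
            fans[parts[0]] = int(parts[-1].split()[0])
    return fans

if __name__ == "__main__":
    for name, rpm in read_fan_rpms().items():
        print(f"{name}: {rpm} RPM")
```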
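
On the "wasted energy" remark in post 6, here is a rough back-of-the-envelope estimate. The per-fan wattage and electricity rate are assumptions for illustration; the posts do not give either figure.

```python
# Rough annual energy use for running the 6 case fans flat out, 24x365.
# The per-fan wattage is an assumed ballpark for high-RPM 80mm fans and the
# electricity rate is a placeholder; neither figure comes from the posts.
FANS = 6
WATTS_PER_FAN = 6.0     # assumed full-speed draw per fan, in watts
RATE_PER_KWH = 0.12     # assumed electricity cost, $/kWh

hours_per_year = 24 * 365
kwh_per_year = FANS * WATTS_PER_FAN * hours_per_year / 1000.0
print(f"{kwh_per_year:.0f} kWh/year, roughly ${kwh_per_year * RATE_PER_KWH:.0f}/year")
# With these assumptions: about 315 kWh/year, roughly $38/year.
```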
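
To put a number on the read-error-during-rebuild argument in post 20: with single parity, a rebuild has to read every bit of every surviving drive, so the chance of hitting at least one unrecoverable read error (URE) grows with drive size and drive count. The sketch below uses the 1-per-10^14-bits URE rate commonly quoted on consumer drive datasheets, which is an assumption here, and treats bit errors as independent, as the usual "end of RAID5" arguments do.

```python
# Chance of hitting at least one unrecoverable read error (URE) while
# rebuilding a single-parity array, where every surviving drive is read
# end to end. Assumes the common consumer spec of 1 error per 1e14 bits
# and independent bit errors (the standard "end of RAID5" argument).
URE_PER_BIT = 1e-14

def rebuild_ure_probability(drive_tb: float, surviving_drives: int) -> float:
    bits_read = surviving_drives * drive_tb * 1e12 * 8
    # P(at least one error) = 1 - P(no error on any bit read)
    return 1 - (1 - URE_PER_BIT) ** bits_read

# Example: one failed drive in a 6-drive array, so 5 survivors to read.
for tb in (1, 2, 4):
    p = rebuild_ure_probability(tb, surviving_drives=5)
    print(f"{tb} TB drives: ~{p:.0%} chance of a URE during the rebuild")
```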