
Johnm

Members
  • Posts: 2,484
  • Joined
  • Last visited

Everything posted by Johnm

  1. Errr... honestly, that board is pretty old at this point. A few notes about it from my experience with those... The PCIe slots are v1 (Gen1), not v2. Not a huge problem, just don't expect a SAS card to run very fast; the slots are limited to 4 GB/s total bandwidth per 8x slot and 2 GB/s total for the 4x slots. It does not have VT-d, so no hardware passthrough. It uses FB-DIMMs, which are pretty expensive new these days, something like $100 for a 4GB stick, and not super fast by today's standards. Those Xeons are like little space heaters; they are not very efficient electricity-wise. If you are running the free version of ESXi, it is limited to a single CPU, which also cuts the board's RAM in half, so the board tops out at 16GB for a single CPU as I recall. What you save on the cost of the board and CPU, you eat back in RAM costs, which brings it back up to the price of a modern board/CPU/RAM combo. I would not get it for ESXi if you plan to run a storage guest, unless you don't mind setting up RDM for every drive (I don't recommend it). For a test lab it will do fine if you can source the RAM cheap; for production I would avoid it. This is all assuming it is on the HCL, I didn't even look... IOMMU is the AMD term; Intel uses VT-d to describe that feature. As stated, no: this board predates that technology. It does however have VT-x, so it can run virtual hardware, but no passthrough. EDIT: I am not sure what the title has to do with motherboard sourcing?
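Those slot figures can be sanity-checked with a little arithmetic. A minimal sketch, assuming PCIe 1.x signals at 2.5 GT/s per lane with 8b/10b encoding (250 MB/s usable per direction per lane) and counting both directions as "total" bandwidth:

```python
# Back-of-the-envelope PCIe 1.x slot bandwidth (assumed figures:
# 2.5 GT/s per lane, 8b/10b encoding, both directions counted).
def pcie1_total_gb_s(lanes):
    per_lane_mb_s = 2.5e9 * 8 / 10 / 8 / 1e6  # 250 MB/s one way, per lane
    return lanes * per_lane_mb_s * 2 / 1000   # GB/s, both directions

print(pcie1_total_gb_s(8))  # 8x slot -> 4.0 GB/s total
print(pcie1_total_gb_s(4))  # 4x slot -> 2.0 GB/s total
```

Which matches the 4 GB/s and 2 GB/s numbers above.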
  2. I should have pointed out: pull off the front panel when you bend the tabs. PS: this box has really short tabs, so it was an extra PITA to get them to fold over, but the "C-clamp" method works. It does come with 2 fans, a 140mm on the back and a 120mm on the front as I recall. I moved my front 120mm to the top aft, removed my drive cages, and went with 5-in-3 Supermicro cages with fan controllers to spin them down to about 1/4-1/3 speed. The only noise it really makes now is the drives crunching away (mine is not an unRAID box; it's Win 2k8 with hardware RAID, so lots of drive crunch sounds). I can say I love this build.
  3. Welcome to the forums. One thing I would mention: personally, I feel that transferring 20TB to unRAID without parity does leave the possibility of data corruption if your drives are not pristine. One thing I would do, if you still have your source files, is try to make sure your data is perfect, perhaps with some sort of deep hash or binary comparison program, before you erase your old data. I am not saying you did it "wrong"; it is just my personal preference to make sure your data is safe at all times. Building without parity can lead to unexpected data corruption that you won't know about until you read the data back. To answer your question: there is more than one way to get to the point you are asking about. The way I would do it myself (others might have a better way): I would first build parity with that 2TB drive you have under the stable 4.7 build, then test parity after it is built in 4.7. This is a new build and we want to make sure it all looks good and that we can create and test parity before we move on. Do a little testing to make sure it is stable if you like. If it all looks good, go ahead and upgrade to the beta of your choice. Test that it looks OK: spin the drives up and down and copy some data on and off. I personally would run parity with "data correction" off one more time to make sure you can still test parity in the new kernel, and check your syslog for errors. If everything looks good, I would stop the array and upgrade the parity disk, run parity again, then test file copies again and double-check the syslog. Yes, I said a lot of testing. Some of it is not "necessary", but 20TB of data is something I would want to be sure is safe at all times. Taking shortcuts or rushing things can lead to disaster.
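For the "deep hash or binary comparison" step, here is a minimal sketch of the idea in Python (the helper names are mine; dedicated tools like md5deep or `rsync -c` do the same job):

```python
# Hash every file under the source and destination trees and compare,
# so the transfer can be verified before the originals are erased.
import hashlib
import os

def file_sha256(path, chunk=1 << 20):
    """SHA-256 of one file, read in 1 MB chunks to keep memory flat."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

def tree_hashes(root):
    """Map each file's path (relative to root) to its SHA-256."""
    out = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            full = os.path.join(dirpath, name)
            out[os.path.relpath(full, root)] = file_sha256(full)
    return out

def compare_trees(src, dst):
    """Return (files missing from dst, files whose contents differ)."""
    a, b = tree_hashes(src), tree_hashes(dst)
    missing = sorted(set(a) - set(b))
    changed = sorted(p for p in set(a) & set(b) if a[p] != b[p])
    return missing, changed
```

Run `compare_trees("/mnt/source", "/mnt/user/share")` (paths are placeholders) and only wipe the source once both lists come back empty.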
  4. If you swapped mobos and it still persisted, look to CPU, power supply, or memory issues. Look for a bent pin on the CPU. Try different RAM, or one stick at a time. Make sure the CPU power header is plugged in. Possibly a bad PSU, but not very likely. Any beeps to give ideas?
  5. The single rail at 60 amps will easily let you fill that 900 with its maximum of 15 drives, with overhead to spare.
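A quick back-of-the-envelope check on that claim. The spin-up figure below is an assumption (roughly 2 A of 12 V current per 3.5" drive at startup, a typical ballpark, not a measured value):

```python
# Single 12 V rail headroom vs. worst case: all 15 drives spin up at once.
rail_amps = 60
rail_watts = rail_amps * 12        # 720 W available on the 12 V rail

drives = 15
spinup_amps = drives * 2.0         # ~2 A of 12 V per drive at spin-up (assumed)

print(rail_watts)                  # 720
print(spinup_amps)                 # 30.0 A, half the rail's 60 A rating
```

Even with the whole array spinning up simultaneously, you are around half the rail's rating before the CPU and fans are counted.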
  6. I agree the prom9 is a nice case. It does come with 2x 4-in-3 adapters for hard drives but only one cooling fan. If you get an extra 120mm, you can add 8 drives before you need to worry about the 5-in-3 adapter. Also, going with Raj's advice, pull the whole front panel off so you don't scratch the curved brushed aluminum; it scratches easily. Just give it a good tug. It sounds like you have a good plan. Welcome to unRAID!
  7. A few reasons for me. 1: more bandwidth, PCIe 2.0 8x SAS2 (you can easily over-saturate the SASLP-MV8, not a big deal though). 2: native ESXi compatibility, no hack needed (one that may not work in a future ESXi). 3: price, I was getting new system pulls off eBay for 1/2 to 2/3 the price of an MV8. 4: port-expander aware, with the speed to back it up (I am sure I will put two ESXi builds on Atlas down the road in DAS boxes). 5: ...it's 5am and my brain is dead... 6: industry-standard enterprise card with good BIOS support. 7: 3+TB support (so has the MV8). 8: SATA III SSDs at full 6Gb/s speed (not to mention they do a good job in a RAID0 of SSDs, IMO, not that it is needed for unRAID). The SASLP-MV8 is a solid card and I recommend it for anyone using unRAID on bare metal, especially if you are running 4.7. The flaw with the M1015 at this point is that you need to run betas for spindown/temp support, and that it is broken completely in the latest betas.
  8. Yes, sorry, I bought the 1m cables and they are too long; the 0.5m were too short... so go with the 1m and some zip ties.
  9. I would check your cache drive speed before and after the change. It might make more sense to put the last data drive on this controller and keep the cache on the motherboard if there is a speed hit; I'd rather have my parity take an extra hour than take a cache drive speed hit. Then again, it might perform just fine the way you suggested. Just food for thought.
  10. 8GB sticks for the X8SIL were really cheap on buy.com the other day.
  11. AMD + IPMI usually = Opteron = G34 or C32 socket...
  12. The PCI hard drive adapters are pretty nice looking; I just don't have the real estate for 3-4 of them, as per my needs. I have looked at the Scythe Rafter to both hold my internal drives and add a fan for additional cooling of my HBAs. I just never bothered ordering one because it looks... well, cheap. There is a nicer quick-swap one from an unheard-of company that I can't source in the US for the life of me. As far as the M1015 cooling: that part of the 4224 is sort of dead air, especially if you go to slower/lower-CFM fans in the fan wall. The back top is vented to let hot air out via convection. However, the M1015s do get a bit toasty there, especially with 3. I don't think they are in danger of frying with stock cooling; however, I did do a ghetto fan mod in my 4224s to keep the air in that area circulating. I am very tempted to slot out all my PCI brackets and mount another exhaust fan on the outside of the case back there. I probably won't, but as time goes by, the modder in me wants to become a mad scientist with my Norco (better cooling, additional HDD and Blu-ray ROM mounted internally with external access, eSATA port headers in the front, sound-deadening foam inside the case). EDIT: I should point out, in case you missed it, that the Norco-branded SAS cables are too short for the M1015s.
  13. For the cache drive, you want a fast, reliable hard drive that is larger than what you copy to your unRAID in an average day. Not all SATA drives are made the same; they vary in write speed, and older ones especially tend to be much slower. Also, you will be trusting ALL of your data to this drive, so you need to make sure it is reliable, not a 5-year-old drive that sat in a desk drawer. With the way hard drive prices are right now, you can wait on the cache drive until prices fall again if needed. As far as the BR10i, it can only see 2.2TB, and it looks like it will never go beyond that limit. As of 5.0beta7, unRAID supports 3TB drives; the BR10i does not. I would not waste money on a card that is essentially obsolete with today's technology. I would stick to a controller that is still solid in unRAID and supports greater than 2.2TB. As for the rest of your build, I am not familiar with the motherboard, but your plan seems solid. I would keep an eye out for sales and pick up parts when you can; by the time you are ready for your build, you might see something better or cheaper.
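That ~2.2TB ceiling isn't arbitrary: it falls out of 32-bit LBA addressing with 512-byte sectors, which is all older HBA firmware like that card's can handle. The arithmetic:

```python
# Max addressable capacity with a 32-bit LBA and 512-byte sectors.
max_bytes = (2 ** 32) * 512

print(max_bytes)             # 2199023255552 bytes
print(max_bytes / 1e12)      # ~2.199 decimal TB, the familiar "2.2TB limit"
```

Anything past that boundary simply isn't addressable by the controller, which is why 3TB drives need a newer card.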
  14. If you already ordered the parts, then it is not quite "advice before I start, please". I would have suggested much newer, power-efficient hardware that costs less. But if you are already at this point, I guess the next step is to start building the server once it arrives and go from there. As far as the HTPC, I would think a Mac mini should be ideal (other than finding a good remote). I have a 2010 mini that has never hiccuped on any HD content yet. I use XBMC, not Plex, but I am thinking about switching. I guess the question is: what format are your 1080p movies in now, ISO or MKVs?
  15. Well, honestly, it is your call in the end. SATA III makes a noticeable difference when using SSDs. I am sure with such a powerful ESXi build that extra PCI slot might be handy: maybe a RAID card for the datastore, a USB3 card, or another NIC? But not at the expense of an arm and a leg. You will be surprised how fast that space goes on hard drives. Keep in mind that if you are backing up live guests, some ESXi backup software takes a snapshot of the guest, so you need overhead on the datastore equal to the size of your largest guest. I find that 3x 30GB guests is all I can fit on a 120GB SSD. You also mentioned a lot of Usenet downloading. I have a 500GB virtual drive on my 7200rpm datastore spinner just for this task (I am considering giving it its own RDM laptop spinner). I don't know that you want to write straight to the unRAID; I have a program that moves completed RAR sets to my unRAID cache drive once each is "complete". Unfortunately that leaves me the burden of cleaning up the "trash" once in a while. I don't think you will even need the controller; even on high, the Noctuas are pretty darn quiet, and the noise is all air. Test them with their own speed-control wires for temp readings first, then decide. Intel stock coolers are pretty quiet these days. They do suck for overclocking, but for this build one should be just fine; if you are running 24x7 Handbrake at 100% CPU in turbo, then you might consider aftermarket. Just watch your tower coolers, they might not fit. I just can't wait to see the Ivy Bridge models come next year...
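The snapshot-headroom rule above can be sketched as: size the datastore for all guests plus free space equal to the largest one (the helper name is mine):

```python
# Datastore sizing when backup software snapshots live guests:
# reserve headroom equal to the largest guest on top of the guests.
def datastore_needed_gb(guest_sizes_gb):
    return sum(guest_sizes_gb) + max(guest_sizes_gb)

print(datastore_needed_gb([30, 30, 30]))  # 120 -> why 3x 30GB fills a 120GB SSD
```

Which is exactly why three 30GB guests are the practical limit on a 120GB SSD.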
  16. Johnm

    New Design

    The MBD-X7SPx comes in two sizes: A and E X7SPA = mITX X7SPE = FlexATX (Proprietary). Technically, there is a X7SPT = that is a special board with two computers on one motherboard and fits nothing other then 1 special case. The first one (X7SPA) is true mITX, it will fit any mITX case (and any case for larger mother boards). It should fit almost any case out there since mITX uses the first four primary mounting holes of any standard motherboard. The second one (X7SPE) is about an inch wider then mITX. the PCIe slot it is moved over one row to fit the Supermicro 1u servers riser cards. It will fit in MOST mITX cases. the exception will be the ultra tiny mini cases. Usually the type designed to fit on the back of a monitor. Supermicro reference: http://www.supermicro.com/products/motherboard/Atom/ Newegg only sells the second type (X7SPE) of the 525 and first type (X7SPA) of the 510. I am not sure why they went this route. I have some of each of those and they both fit my mITX cases just fine (Chenbro ES34169). they should both fit any Lan-Li case, including the one you mentioned. also, While listed as 4Gig max, you can shove 8Gigs of ram in D525 with certain types of RAM. I have noticed the memory clocks slower when you do this (as do most Supermicro boards when you reach a certain memory amount). I hope this helped and didn't confuse you any more.
  17. That, my friend, looks like a failing drive. I would also check your power cables and the size of your power supply, unless you really did power it on and off 31 times in its 43-hour lifetime. The high error rate "could" be attributed to a loose power plug, but the high pending sector count looks grim. I would guess that you in fact got a DOA drive... that, or you dropped your server while it was running. More likely the first one. RMA it; try to return it to your vendor first to save time and expense.
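If you want to script this kind of triage, here is a rough sketch that scans `smartctl -A`-style attribute output for the values called out above. The sample lines are made up for illustration, not real drive data:

```python
# Pull raw values out of `smartctl -A`-style attribute lines:
# columns are ID, name, flags, value, worst, thresh, type, updated,
# when_failed, raw_value -- we keep name -> raw_value.
def parse_smart(text):
    attrs = {}
    for line in text.splitlines():
        parts = line.split()
        if len(parts) >= 10 and parts[0].isdigit():
            attrs[parts[1]] = int(parts[9])
    return attrs

sample = """\
  5 Reallocated_Sector_Ct   0x0033 100 100 036 Pre-fail Always - 0
197 Current_Pending_Sector  0x0012 100 100 000 Old_age  Always - 152
 12 Power_Cycle_Count       0x0032 100 100 020 Old_age  Always - 31
"""

attrs = parse_smart(sample)
if attrs.get("Current_Pending_Sector", 0) > 0:
    print("pending sectors present - drive is likely failing, RMA it")
```

A nonzero `Current_Pending_Sector` raw value is the grim sign mentioned above; rising `Reallocated_Sector_Ct` is the other one to watch.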
  18. It could be an under/over-volting warning too. The power supply load may have changed, or it could be dirty power. Some SM boards have a voltage-error header and fan-error headers; hook an LED (hard drive, NIC, or power light?) to them and see if one lights when it alerts.
  19. As far as RAM: 16GB is the best you can do with that board for now. Kingston does list 8GB modules, but they are hard to get since they are either new or in pre-production. http://www.ec.kingston.com/ecom/configurator_new/partsinfo.asp?root=uk&LinkBack=&ktcpartno=KVR1333D3E9SK2/16G From your specs it sounds more like you will need disk I/O over RAM. If you really do want 32GB, look at an X8SIL. I pound my guests, and the E3-1240 is almost always under 25% used; I only see it move when encoding video or doing large rar/par operations. 3x M1015s will work fine: put 8 drives on each of the cards in the 8x slots and 4 drives on the card in the 4x slot. You won't come close to saturating the cards, even with parity checks. Just be aware there are issues with LSI cards and the latest betas (13-14). The difference between the X9SCL+-F and the X9SCM-F: the SCM has an extra PCIe 2.0 4x slot, 2 of its SATA ports are SATA III, and it has 1x 82579LM and 1x 82574L GbE controllers; the SCL+ has 2x 82574L GbE controllers (that's better; that's what the + is). Both will work fine. If you are going for the SCL+, you could go with a SATA II SSD and save a few bucks; no point in buying a bell or whistle you can't use. As far as the fans, I would be a little worried. You need pressure more than CFM: those fans need to pull air through a tight space while creating a positive-pressure area in the motherboard compartment. A few of us have failed with some brands of fan. I would be interested in seeing your results. Look at CPU fans and radiator fans also. So far the Noctuas have not let me down and have stayed fairly quiet.
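The "won't come close to saturating" claim checks out with rough numbers. Assuming ~500 MB/s of usable bandwidth per PCIe 2.0 lane (a common ballpark) and splitting each slot evenly across its drives:

```python
# Per-drive bandwidth budget when a card's slot is shared by N drives
# (assumes ~500 MB/s usable per PCIe 2.0 lane).
def per_drive_mb_s(lanes, drives):
    return lanes * 500 / drives

print(per_drive_mb_s(8, 8))  # x8 slot, 8 drives: 500.0 MB/s each
print(per_drive_mb_s(4, 4))  # x4 slot, 4 drives: 500.0 MB/s each
# either way, far above the ~150 MB/s a spinner streams during a parity check
```

Even the x4 slot leaves each drive roughly triple what a mechanical disk can actually deliver.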
  20. Err... o-m-g! Yeah, like anyone would run that (or afford it), but it's a nice dream.
  21. There are only two companies now... Seagate now owns Samsung and should complete the absorption this month. WD now owns Hitachi; that one is taking a little longer. Can you say monopoly? My fear is that the price wars of the last year or so will become a thing of the past... I do not think we will see $55 2TB drives and $105 3TB drives again for a while. To answer your question: go with whatever sale (rare right now) you can find and buy the best gigabyte-per-dollar you can get. If they are close, go bigger for long-run savings.
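"Best gigabyte per dollar" is just price divided by capacity. Comparing the two old deals mentioned above:

```python
# Price per GB (decimal GB), the number to compare across drive sales.
def usd_per_gb(price_usd, capacity_tb):
    return price_usd / (capacity_tb * 1000)

print(round(usd_per_gb(55, 2), 4))   # the old $55 2TB deal  -> 0.0275 $/GB
print(round(usd_per_gb(105, 3), 4))  # the old $105 3TB deal -> 0.035 $/GB
```

In that example the 2TB was actually the better $/GB, so "go bigger" only wins when the per-GB prices are close.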
  22. Do you have a light on the NIC when the cable is plugged in? Did you try a different cable?
  23. Johnm

    More Memory?

    If you're talking modern DDR3, right now it is priced at an all-time low; it would be silly not to stock up on RAM. You can get 16GB for about what I paid for 4GB a year ago. For an older server, I would have to weigh the pros and cons.
  24. I did my entire server in one shot... Just be careful if you have any EARX drives in the pile. I recall seeing a warning about those and wdidle3. You might want to look into that and see what it was about, if that may be the case.