gyrene2083 Posted July 25, 2011 I'm curious. I've read about the cables for the Supermicro AOC-SASLP-MV8 8-Port SAS/SATA Add-on Card, and some folks say you need forward breakout cables, while others, like here, say you need reverse. What exactly is the difference? -Semper Fi gyrene2083
Rajahal Posted July 27, 2011 Forward breakout cables are for connecting a miniSAS controller card (such as the SASLP) directly to hard drives, or to a hard drive enclosure (such as the Norco SS-500). Reverse breakout cables are for connecting a miniSAS backplane (such as those found in the Norco 4220 and 4224 cases) to regular SATA ports on the motherboard or on a SATA controller card (such as a SIL3132). The two types of cables look identical but are electrically different, and it can be very difficult to tell them apart if you don't know what to look for. Generally speaking, forward breakout cables will have the letter 'F' somewhere in the model number. Sometimes reverse breakout cables have the letter 'R', but not always. If you're ever not sure about a cable, just make a thread in the hardware forum like you did here. Actually, you created that thread in the wrong forum, but no worries, I moved it for you.
KYThrill Posted October 10, 2011 Still running strong. My 7200 rpm parity drive runs 36C max (during a parity check). The other green drives run 30-32C during a parity check. With only a couple drives spinning and everything else spun down, drive temps are normally 27-28C.
KYThrill Posted February 27, 2012 I finally added a second MV8 card to this MB. Initially, I had some problems: the server would power back up automatically a second or two after it completed shutdown. I resolved the problem in the BIOS. There was a setting for the primary video card: PCIE or Onboard. Even though the board has onboard video, the default setting was PCIE. I switched it to Onboard, and all was right with the world. With a non-video card in one PCI-E x16 slot, this setting appears not to matter; with two cards in the x16 slots, apparently it does. I have also filled all 20 bays now. Heat levels are up a bit. The 7200 rpm parity drive will hit 41C max during a parity check, and runs cooler (under 40C) any other time. I may need to increase the airflow (and noise) a bit to improve cooling. Still debating what I gain by keeping my hottest drive around 35C vs. 40C.
publicENEMY Posted March 2, 2012 May I ask why the 4220 and not the 4224? I bought the 4220 since unRAID's maximum data drive count is 20 (I totally forgot about the cache and parity drives), and the 4220 has an optical drive bay, which I plan to use to back up my Blu-rays. Unfortunately, I think I made a mistake. What are your thoughts?
KYThrill Posted March 5, 2012 May I ask why the 4220 and not the 4224? I bought the 4220 since unRAID's maximum data drive count is 20 (I totally forgot about the cache and parity drives), and the 4220 has an optical drive bay, which I plan to use to back up my Blu-rays. Unfortunately, I think I made a mistake. What are your thoughts? Well, I can do 21 drives (with a 2.5" cache drive). There is a second 2.5" mounting location I don't use. You could make use of that spot for a data drive, but it would have to be under 2TB. The short answer is that I went with a 20 drive (21 with cache) array for slightly improved reliability and slightly reduced risk of data loss. The longer answer: unRAID can withstand one drive failure with no data loss, but two failed drives will still lose some data. I did not consider that the loss was limited, but rather that any loss could be catastrophic (since I wouldn't get to choose what was lost). The second critical statistic is the chance of a second drive failing during a rebuild from another failed (or swapped-for-a-larger-size) drive. A third statistic I looked at was the chance of any failure of my array over a five year period. I set my limits of risk up front, and then ran the calculations. A 24 drive array did not meet the limits I set. Admittedly, unRAID is a little different from regular RAID arrays, and the equations I used were either taken directly from or slightly modified from standard RAID calculations. But that was the best I could do without trying to reinvent the wheel. Unfortunately, I did these calcs long ago, so I don't remember exactly what the difference was (I do remember it was small, but enough to exceed the max limit I set beforehand). Again, I don't remember the exact numbers on everything (I'm sure I have all that around somewhere, but I can't find it now), but it was something like: if my max target was 10%, 20 disks were 9% and 24 disks were 12%.
Not a huge difference, but just outside of my design parameters at the time (with no unRAID experience). I think there are many valid arguments as to why my calculations at the time might not reflect the realities of unRAID (for example, RAID discs all spin at the same time, but unRAID spins drives up in groups, so they will not see the same wear from spinning, which should make drives last longer in unRAID). But it was the best I could do at the time with what I did know.
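Since the original calculations are lost, here is a minimal sketch of the kind of standard RAID-style estimate described above: the probability that two or more drives fail within a given window, assuming an exponential (constant-rate) failure model. The MTBF figure and the five-year window are purely illustrative assumptions, not the numbers the poster actually used, so the outputs will not match the 9%/12% figures from the post.

```python
import math

def p_drive_fails(hours: float, mtbf_hours: float) -> float:
    """Probability a single drive fails within `hours`, assuming an
    exponential failure model with constant rate 1 / MTBF."""
    return 1.0 - math.exp(-hours / mtbf_hours)

def p_data_loss(n_drives: int, hours: float, mtbf_hours: float) -> float:
    """Probability that two or more of n_drives fail in the window,
    i.e. 1 - P(zero failures) - P(exactly one failure).
    In single-parity unRAID, two concurrent failures mean data loss."""
    p = p_drive_fails(hours, mtbf_hours)
    p_zero = (1.0 - p) ** n_drives
    p_one = n_drives * p * (1.0 - p) ** (n_drives - 1)
    return 1.0 - p_zero - p_one

# Hypothetical inputs: five years of operation, 300,000-hour MTBF.
FIVE_YEARS = 5 * 365 * 24
for n in (20, 24):
    print(f"{n} drives: {p_data_loss(n, FIVE_YEARS, 300_000):.1%}")
```

As expected, the 24-drive risk comes out higher than the 20-drive risk for any fixed window and MTBF, which is the direction of the trade-off the post describes. A more faithful model would also account for unRAID's per-drive spin-down and for the shorter rebuild window rather than raw elapsed time.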