flambot

Everything posted by flambot

  1. I'm just about to ring the supplier to see what cables come with the unit. Then I'll have to think about the best way to get power to them. Currently I have a cable with 4x SATA connectors on it that comes straight from the power supply. I'm pretty sure all my HDDs are supplied directly from the PS with the same type of cable; my setup only uses molex for the fans. The plug on the end of the cable is a PCI-e plug. I have a single molex to SATA (female), but I need the reverse - a molex to SATA (male). Best to get on to the supplier first. Thx EDIT: I'm wrong about the PCI-e plug - they all look the same unless you look really closely. The power ones have a different shape to the plastic around the pins. Further, I have found some cables that came with my PS. They come straight from the power supply and have female molex connectors on them. They look like this
  2. Excellent. Now I'll have to track down a female molex to 6-pin PCI-e connector so I can plug straight into the power supply, like my existing SATA power cables.
  3. I think I might get one of these to see what they're like. Any idea how the cables attach to it? I'm assuming there is a back panel that the drives slide on to (similar to the bits in an HDD dock?) - and that the Mobo SATA cables plug into this board. What about the power connectors? I've never seen any sort of quick swap cage, so I have no idea. Thx Update: I finally found a pic (not of this unit, but of a Lian Li backplane panel). It shows the connections quite clearly. Now I wonder if the fan hinges out of the way?
  4. My server has drive cages that each hold 3 HDDs and sport a 120mm fan. They make it a pain to change drives. I found these locally and wondered if anyone has any experience with this brand and/or these hot swap cages: http://www.ascent.co.nz/productspecification.aspx?ItemID=368255 They look like you just screw a handle (supplied) to the HDD to pull it out. Not sure if the HDD locks in place. There is also a 4-HDD one: http://www.ascent.co.nz/productspecification.aspx?ItemID=420428 Last time I looked at hot swap cages they were a wicked price. These seem reasonable.
  5. I could even live with 18 hours - it beats my current setup by almost 10 hours (and that'll go even further south with the intro of a 6TB parity) :'( Would be nice to be back to a small array, but I have too much media.
  6. Hi. I guess the above heading says it all. My unRAID is currently using a 4TB WD Red parity drive. I was rather pleased that the system could recognise a disk this size, but is there anything that could limit a larger one? I'm using an Asus P5B-E Mobo. I'm asking because a recent parity upgrade to 6TB failed to see the new disk (probably a faulty HDD, but I'm not completely certain - it's been RMA'd). Could something else be to blame? Thx
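     In the meantime, here's a quick way to see what capacity the Linux kernel itself is reporting for a disk - a minimal Python sketch, assuming console access on the server and that the new drive shows up as /dev/sdb (adjust the name to suit):

     ```python
     # Print the capacity the kernel reports for a disk. /sys/block/<dev>/size
     # is always in 512-byte units, regardless of the drive's sector size.
     def reported_tb(dev: str) -> float:
         with open(f"/sys/block/{dev}/size") as f:
             return int(f.read()) * 512 / 1e12

     print(f"{reported_tb('sdb'):.2f} TB")  # a healthy 6TB drive should show ~6.00
     ```

     If this reports well short of 6TB, the limit sits below unRAID (drive, cabling, controller, or driver) rather than in unRAID itself.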
  7. I'm in New Zealand. An HGST Ultrastar He6 HUS726060ALA640 64MB 6TB (the only 6TB HGST on the pricespy list) is currently NZ$800.32 - roughly twice the price of a 6TB WD Red. WeeboTech...I would be extremely stoked if my parity checks were only 12 hours!!
  8. That's a good first step. One more thing: After you've done that, move two of the drives from the 2nd TX4 to the first one, so you only have two drives on each of the TX4's. That will largely mitigate the bandwidth restriction your 2nd TX4 is causing on the drives attached to it. ... then when you later add another 6TB drive (to replace one of your 2TB EARS units) you'll be able to copy the data from the 3 1TB drives on the TX4's to it, and can then remove those 3 drives and one of the TX4's => at that point you'll only have one drive attached to a TX4 ... which is fine. At that point your only performance limitation will be the areal density of the remaining 2TB drives ... but your performance will be MUCH better than you're seeing now, so you can likely just live with it and replace the remaining drives at a more leisurely pace -- just buying new 6TB drives as you need more space. Thx Gary, I'll do just this. One question: unRAID keeps track of the drives via serial number and not the port designation (is that right)? So that means I can change the ports without issue.
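     For anyone curious, the rough arithmetic behind that bandwidth restriction, sketched in Python. It assumes each TX4's classic 32-bit/33MHz PCI interface is the choke point at roughly 133 MB/s; if both cards share a single PCI bus (common on boards of this era), the per-drive figures are worse still:

     ```python
     # Per-drive throughput when N drives hang off one TX4 during a parity
     # check (all drives are read in parallel, so they split the bandwidth).
     PCI_MBPS = 133  # theoretical 32-bit/33MHz PCI ceiling; real-world is lower

     for drives_per_card in (4, 2, 1):
         print(f"{drives_per_card} drives/card -> ~{PCI_MBPS / drives_per_card:.0f} MB/s each")
     ```

     So dropping from 4 drives to 2 per card roughly doubles what each drive can sustain.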
  9. Thank you for all the great input here. Lots of good stuff to mull over. Let's just hope the new 6TB drive is faulty (see other thread) and the replacement is recognised by my setup. Not sure what to do if it's not - or why it isn't recognised.
  10. Thx Gary for the really insightful post. When you never grew up with computers, some of this stuff is way above my head. Your post here exactly matches my thinking: use a new 6TB parity, then replace disk7 (1TB) with the old 4TB parity. That'll take care of all the 750GB drives (once I copy all the data across). Later, a couple more 6TB drives will take care of extra space and the remainder of the other drives on the TX4's. The ultimate aim is to reduce parity/rebuild times, and to reduce the odds of a drive failure by having fewer drives. Very much appreciated. Thank you.
  11. Not sure I'll need anything more than 6TB in the short term. 10TB...wow!!! My WD7500AAKS are the oldest in the system (they replaced some 500GB Seagates). I built the box in 2007, so they are probably quite a few years old now. I don't run my server 24/7.
  12. Hey Gary, Really appreciate the expanded answer. It's helped a great deal. The short answer here is no PATA drive. My Mobo has an eSATA port accessed from the back panel (making 8 onboard ports in total). My plan is to remove the 2x TX4 cards completely and leave 1x parity and 7x data drives in the system. I currently run 21TB, so using 7x 4TB drives would only give 28TB - an amount of space that could be filled quite easily in the short term (quick sums below). Yes...I have purchased a 6TB WD Red for the new parity, but as in my other thread, this drive is NOT being recognised in my system. I have carried out an RMA on it, but there is always the possibility that they find it okay - so not sure what I'll do then. My drives are assigned as follows (the WD7500AAKS are obviously the oldest):

      parity  WDC_WD40EFRX
      disk1   WDC_WD20EARS
      disk2   WDC_WD20EARX
      disk3   WDC_WD20EARS
      disk4   WDC_WD20EARX
      disk5   WDC_WD20EARS
      disk6   WDC_WD20EARS
      disk7   WDC_WD10EADS
      disk8   WDC_WD7500AAKS (on Promise TX4 expansion card)
      disk9   WDC_WD7500AAKS (on Promise TX4 expansion card)
      disk10  WDC_WD7500AAKS (on Promise TX4 expansion card)
      disk11  WDC_WD7500AAKS (on Promise TX4 expansion card)
      disk12  WDC_WD10EACS (on Promise TX4 expansion card)
      disk13  WDC_WD10EACS (on Promise TX4 expansion card)
      disk14  WDC_WD10EACS (on Promise TX4 expansion card)
      disk15  WDC_WD20EZRX (on Promise TX4 expansion card)
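     The quick sums behind that (a trivial Python sketch, but it keeps the planning honest):

     ```python
     # Usable array space with 8 onboard ports: 1 parity + 7 data drives.
     DATA_DRIVES = 7

     for size_tb in (4, 6):
         print(f"7 x {size_tb}TB = {DATA_DRIVES * size_tb}TB usable (21TB in use now)")
     ```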
  13. Thx. That's my current intention: to run a system without any expansion cards at all. I have 8x SATA ports on the Mobo. If all 7x data HDDs were 6TB, that would double my current space. I doubt I will need that much space anytime soon.
  14. Actually it will make these times LONGER. If the drives have equivalent areal density it would be twice as long; although it's likely your older drives have lower areal densities, so it won't be quite that bad. But unless the larger drives double your areal density, then the parity checks and rebuilds will be longer -- not shorter. Hey Gary, Thx for the input, but I still don't understand this. Are you saying that even if I reduce the number of drives, parity checks could still take as long? I thought the PCI bottleneck (for the TX4 cards) was one of the main reasons the check took so long. When I upgraded from a 2TB parity to a 4TB parity, the check jumped from approx 22 hours to 27+ hours. Both are still too long.
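     If I've understood the areal-density point, the scaling works roughly like this - a simplified Python sketch that ignores fixed overheads and assumes sustained transfer rate grows with the square root of areal density:

     ```python
     def check_time_ratio(capacity_ratio: float, density_ratio: float) -> float:
         """New check time relative to old for a given jump in drive
         capacity and areal density (transfer rate ~ sqrt(areal density))."""
         return capacity_ratio / density_ratio ** 0.5

     print(check_time_ratio(2.0, 1.0))  # double capacity, same density: 2.0x as long
     print(check_time_ratio(2.0, 2.0))  # double capacity AND density: ~1.41x
     ```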
  15. In which case your numbers are fine. But it's certainly a good idea to pay attention to the SMART parameters so you'll notice if something starts to change for the worse. Will do. Thx for the help.
  16. Greetings, I've been reading about the increasing sizes of HDDs. There is a concern that RAID setups aren't keeping up with HDD size increases - especially when considering the time it takes to parity check or rebuild a disk in a large array. My own setup has the maximum 16x HDDs and takes approx 30 hours to run a parity check. I feel this is too long. 8x HDDs are on 2x PCI Promise TX4 expansion cards, and they slow the whole process down considerably. Currently, I'm in the process of upgrading my parity to 6TB, but I wonder if this is too big. Will my parity check and rebuild times reduce if I cut the number of drives to eight (all connected directly to the Mobo)? I'd like to think so. Is there some way to calculate how long a check should take? Thx
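     For what it's worth, here's the first-order estimate I've pieced together: a check reads every drive end to end in parallel, so it can't finish much faster than the largest drive's capacity divided by the slowest drive's effective throughput. A rough Python sketch (the ~33 MB/s figure assumes four drives sharing one TX4's PCI interface; ~100 MB/s is a loose average for a drive on an onboard SATA port):

     ```python
     def parity_check_hours(largest_drive_tb: float, slowest_mbps: float) -> float:
         """Lower-bound estimate of parity check time in hours."""
         return largest_drive_tb * 1e6 / slowest_mbps / 3600

     print(f"{parity_check_hours(4, 33):.0f} h")   # ~34 h with a loaded TX4 card
     print(f"{parity_check_hours(4, 100):.0f} h")  # ~11 h on onboard SATA only
     ```

     That lines up uncomfortably well with the ~30 hours I'm seeing.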
  17. Thx very much for the insight. I've been a little worried about the 20000+ ones, so it's good to know. One of the reasons I'm updating my parity drive (see other thread) is that I'm trying to reduce the overall HDD count from 16 drives to eight. Hopefully, that'll reduce the time it takes to do a parity check (or HDD rebuild). HDDs are so large now that I can double my existing storage with only 8 drives. Wow...50000 hours seems a lot. Thx again
  18. Actually, that's why I asked the question. unMenu has highlighted them. Strange that some of my oldest drives don't have many of these, but some of the newer ones do. I only have two drives with less than 10000 hours.
  19. Yes...these are older drives, but now I have found the unMenu SMART view I can keep an eye on them. Thx
  20. Another thought on this subject...when I unassign the parity drive, that invalidates the parity info - so how would I use that drive to rebuild an HDD if one fails during the re-sync of the new parity? Thx
  21. Hello, I just found the SMART page under unMenu that shows all the SMART info on one page (thank you to the person who posted about this in another thread). Several of my drives have a high Load_Cycle_Count (33256, 21449, 29730, as some examples). I read about a utility that can adjust this aspect of the drive, but I'm not sure what it is or how it's done under Linux. Should I be worried, and is it necessary? Thx.
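     For anyone wanting to watch these counts outside unMenu, a minimal Python sketch that pulls the raw value out of smartctl - it assumes smartmontools is installed, the script runs as root, and the device names are only examples:

     ```python
     import subprocess

     def load_cycle_count(device: str):
         """Return the raw Load_Cycle_Count for a drive, or None if absent."""
         out = subprocess.run(["smartctl", "-A", device],
                              capture_output=True, text=True).stdout
         for line in out.splitlines():
             if "Load_Cycle_Count" in line:
                 return int(line.split()[-1])  # raw value is the last column
         return None

     for dev in ("/dev/sda", "/dev/sdb"):  # adjust to your drives
         print(dev, load_cycle_count(dev))
     ```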
  22. Hello, I've been reading up on power-on hours for HDDs. I now have 7x HDDs in my server that have done over 20000 hours. The SMART page from unMenu has some blurb about HDDs being rated at 5 years (approx 43000 hours), but other searches have turned up little info. Should I be concerned? Thx
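     The sums behind those figures, assuming 24/7 running (a quick Python sketch):

     ```python
     HOURS_PER_YEAR = 24 * 365  # 8760

     for hours in (20000, 43000):
         print(f"{hours} h = {hours / HOURS_PER_YEAR:.1f} years of continuous running")
     ```

     So the 5-year rating does line up with roughly 43000-43800 hours, and 20000 hours is only about 2.3 years of actual spin time.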
  23. I don't remember doing it this way before - even though the parity update to 4TB wasn't that long ago. Perhaps that is my mistake; I certainly didn't do it this way this time. I thought I just had to take the old one out, put the new one in, and the system (unRAID) would recognise the change and ask me to assign the new one as the new parity. What I don't understand is why the log doesn't show the new drive.
  24. I figured the same thing. I'm about to give it another try in the server. If I remember correctly, all I do is pull the old parity, put the new one in and start the server. It then gives me the option to select the new drive as the parity. Weird - the current 4TB drive went in without a hitch. The new one is a WD 60EFRX drive, if that means anything. The 4TB is the same series of drive, so in my mind it should work! Doesn't figure!