thaddeussmith

Members
  • Content Count

    213
  • Joined

  • Last visited

Community Reputation

2 Neutral

About thaddeussmith

  • Rank
    Advanced Member

Converted

  • Gender
    Male
  • Location
    Dallas


  1. Not really kudos, but certainly some dialogue about the concepts or avenues to explore from here. It's interesting to see the pushback on HW RAID without anything to really back up the opinions.
  2. So nobody tinkers with this stuff? You just read through the unRAID wiki or watch the Linus Tech Tips videos and follow step-by-step? I guess I need a samurai mask in my case for it to be interesting to the community..
  3. 4u, front load with trays, like the supermicro cases. I'll grab a pic or two, but my server room is in absolute disarray right now.
  4. What? Not 160 servers, just around 160 2TB drives, with about 140 2TB drives sitting as cold spares right now. I literally took all of the servers with the intent of only using one or two and keeping everything else as cold spares. Electricity cost hasn't been measured, but I've since stopped using several other virtualization hosts, so it should be about a wash (or less) compared to when all of that was running. No additional unRAID licensing - I already had unRAID running in a 16 bay server with the full pro license and just moved that OS, disks, and HBAs over into this larger chassis.
  5. I have several 16 and 24 port cards as a result of this haul, but your concern is valid in "normal" scenarios.
  6. I'm not sure I'm following your logic. Losing parity is losing parity - so you're saying it's better to have a single 10TB drive with potentially a day or more of rebuild time (plus ordering time, because who's keeping a cold spare at that size/cost) than to have a RAID10-backed virtual disk made from 2TB drives with a dramatically shorter rebuild time? And I have 140+ cold spares. And I'd have to actually lose the entirety of one of the mirrored pairs before the virtual disk becomes compromised, which means parity is actually more robust. I get the more disks = greater risk of failure argument, but that's why I've gone with smaller disks for faster rebuilds. I don't think I'm any more exposed than someone rolling a single 10TB drive as their parity drive. What am I missing?
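A rough way to sanity-check the rebuild-window argument above. The throughput figure is an assumption for illustration, not a measurement from the poster's controller:

```python
def rebuild_hours(capacity_tb, throughput_mb_s):
    """Estimate sequential rebuild time for one drive.

    Assumes the rebuild streams the whole drive at a constant rate;
    real controllers vary with load, so treat this as a ballpark.
    """
    bytes_total = capacity_tb * 1e12            # decimal TB to bytes
    seconds = bytes_total / (throughput_mb_s * 1e6)
    return seconds / 3600

# A 2TB member of a RAID10 mirror pair vs a single 10TB parity drive,
# both assumed to rebuild at ~130 MB/s (hypothetical figure).
small = rebuild_hours(2, 130)    # ~4.3 hours
large = rebuild_hours(10, 130)   # ~21.4 hours
print(f"2TB: {small:.1f} h, 10TB: {large:.1f} h")
```

The roughly 5x difference in exposure window is the core of the argument in the post; actual times depend on controller load and whether the rebuild is throttled.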
  7. Most of my unRAID gear has been free over the years, so while I'd love to spend thousands on a super dense micro server with 10TB disks, that's just not something I've ever been able to prioritize in the budget. But since I work in IT and datacenter environments, I do manage to get some fun stuff to play with. Most recently, it was a bunch of "Aberdeen" storage servers - a mix of 48+2, 24, and 16 bay servers, all full of circa-2010 compute resources and RAID controllers. They also came populated with 2TB drives of varying ages, about 160 in total.

I've moved my current 16 drive unRAID build into one of the 48 bay enclosures - it's a mix of 6TB, 5TB, 4TB, 3TB, and 2TB drives totalling 56TB of array space. I've decided to leave a bunch of bays empty since I'm only at 50% utilization, but I'd like to slowly move to 10TB disks, and this will give me the space to either replace existing drives or add to the array without worrying about enclosure limits.

I hate the thought of throwing a 10TB disk into the parity slot, however, so I've instead got 12x 2TB drives running in the bottom of the chassis in RAID10. This presents to unRAID as 12TB and gives me plenty of overhead for whatever sizes the 10TB drives happen to show up with. These 2TB disks are old, and they're going to fail.. but I have about 140 sitting and waiting as cold spares, ready to pop in and automatically rebuild on the HW controller. This actually happened a couple of weeks ago - 2 drives failed. I replaced them and the RAID10 rebuild completed in just a couple of hours. unRAID was none the wiser, so I didn't have to worry about a parity rebuild on the 12TB virtual disk, which takes about 22 hours.

I've seen some old threads about this, but most seem to be abandoned and full of people chiming in with how stupid it is. I could understand some arguments against going out and purchasing a bunch of hardware to make this setup, but when it's just sitting in my lap, available for use?

So far, it's been fairly solid. Anyone else doing anything remotely similar with hardware RAID and virtual disk presentations?
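The virtual-disk sizing described above is easy to check: RAID10 mirrors every drive, so usable capacity is half the raw total. A minimal sketch (the function name is illustrative, not a real tool):

```python
def raid10_usable_tb(n_drives, drive_tb):
    """Usable capacity of a RAID10 set: half the raw total,
    since every drive is mirrored. Requires an even drive count."""
    if n_drives % 2 != 0:
        raise ValueError("RAID10 needs an even number of drives")
    return n_drives * drive_tb / 2

# 12x 2TB -> a 12TB virtual disk, comfortably above the largest
# 10TB data drive it would need to cover as unRAID parity.
print(raid10_usable_tb(12, 2))  # 12.0
```

The overhead matters because unRAID's parity device must be at least as large as the biggest data drive, so the 12TB virtual disk leaves headroom over 10TB data disks.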
  8. Motherboard - Supermicro X9DR3-F
     CPU - dual Xeon E5-2620 v2 @ 2.10GHz
     RAM - DDR3 44GB
     $350 shipped and paypal'd (conUS)
  9. Thanks, I'll dig in and learn about what's required there. A quick look at TechPowerUp's database and I don't see the 1030 listed, so if you don't mind providing what you have as a starting point that would be great. Shoot me a PM. Thanks!
  10. Excellent, that's the one I'm looking at as well. Any quirks you had to overcome for passthrough?
  11. I'm looking to add a video card to my R910 chassis to improve Lightroom performance in my VM, and I have some size and power restrictions which require me to look at something like the EVGA GeForce GT 1030 SC. I was hoping to find an updated guide on supported GPUs for passthrough, or at least a clear indicator of whether NVIDIA graphics cards are still problematic, but I'm not finding anything which concisely answers that. Any links or direct answers you guys can provide?
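For context on what passthrough setup typically involves on a KVM-based host like unRAID: the GPU gets bound to the vfio-pci driver at boot via its PCI vendor:device ID, commonly by editing the boot stanza in syslinux.cfg. A sketch only - the ID shown assumes a GT 1030 (commonly 10de:1d01; verify yours with `lspci -nn`), and the exact stanza varies by unRAID version:

```
label unRAID OS
  menu default
  kernel /bzimage
  append vfio-pci.ids=10de:1d01 initrd=/bzroot
```

On the "still problematic" question: the issue older guides describe is that NVIDIA's consumer drivers historically refused to initialize inside a VM unless the hypervisor was hidden from the guest (the well-known "Code 43" error), which is worked around with KVM/hyperv tweaks in the VM's XML rather than anything on the host side.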
  12. And let's be honest.. the cost of a pro license is fairly small, even for a home/lab environment. I bought my license almost 3 years ago and have been able to enjoy upgrades and a functioning system without any additional licensing costs, in spite of changing the disks and compute hardware numerous times.
  13. Negative ghost rider.. each system requires a separate license.