whipdancer

Members
  • Posts: 336
  • Days Won: 1

whipdancer last won the day on April 15 2019

whipdancer had the most liked content!

Converted
  • Gender: Undisclosed
  • Location: Houston, TX


whipdancer's Achievements

Contributor (5/14)

Reputation: 38

  1. I had Ubiquiti (technically still have the EdgeRouter X and 2x UniFi AP-AC-Lite). I switched to 2x Aruba Instant On and use an old PC for pfSense. I felt like the Arubas were easier to configure, but not by a lot. If it wasn't for power outages, I think this setup would have uptime measured in years at this point. Not that Ubiquiti wouldn't, it's that I ran the UniFi controller docker on my unraid, and if the power was out I couldn't make updates to my router or my APs to switch my source. My pfSense box and 1 Aruba AP will run for almost an hour on the UPS I use for them, so I haven't run into that problem again since I switched. I've also learned that the AC Lites were updated in the years after I bought them, so they can now be configured individually in a manner similar to the Arubas (the UniFi Controller software is no longer required for setting them up).
  2. I have no idea. I use Glacier at work. Don't use anything like it at home. My critical data fits on a single 10TB drive. My family photos are synced between iCloud and my unraid server (so 2 copies of everything). I'll probably add a dedicated backup for it as well, but I've been able to recover from our mistakes so far. The remaining 70TB of data on my server is replaceable. It would be a pain in the ass, but it is replaceable.
  3. Have you looked at AWS Glacier? And realistically, how many TB are critical data (data you could not replace)? It's cheaper to buy 10/12/16TB externals if you are disciplined enough to manage them.
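Since the Glacier-vs-externals question is mostly arithmetic, here is a minimal break-even sketch in Python. The dollar figures are assumptions for illustration, not quoted AWS or retail prices, and restore/egress fees and drive failures are ignored:

```python
import math

# Rough cost comparison: archive-tier cloud storage vs. buying external drives.
# All prices below are placeholder assumptions, not current AWS or retail pricing.

def breakeven_months(tb: float, cloud_per_tb_month: float,
                     drive_cost: float, drive_tb: float) -> float:
    """Months until the recurring cloud bill exceeds the one-time drive cost."""
    drives_needed = math.ceil(tb / drive_tb)
    one_time = drives_needed * drive_cost
    monthly = tb * cloud_per_tb_month
    return one_time / monthly

# Example: 10TB of critical data, $1/TB/month archive tier (assumed),
# $200 per 12TB external drive (assumed).
print(f"Break-even after ~{breakeven_months(10, 1.00, 200, 12):.1f} months")
```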
  4. I don't care whether it is an enterprise drive or not. I've had consumer drives give me 40k hours of service, and they are now another 5k or 6k beyond that in a friend's NAS (IIRC my oldest drive was a consumer drive with 59k hours when it died). My server tower is in my office and sounds like a relatively quiet desk fan. I certainly can hear it when everything spins up, though. I don't have any extra cooling in my case. I did upgrade to some better quality fans to minimize the noise. My SSDs report getting hot whenever I'm copying a bunch of stuff onto them (400-900GB takes long enough to raise their temps). I have a couple of drives that seem to routinely report a high temp, but it always seems to come back to normal pretty quickly. Basically, you can hear my server when it's running, but it doesn't stand out from the environmental noise in the room I use as my office - off the kitchen and utility room - so dishwasher, washing machine, dryer, sous vide circulator, oscillating fan - something is always running.
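For anyone curious where power-on hours and temperature numbers like these come from, they are in the drives' SMART data. A minimal sketch that shells out to smartctl (from smartmontools, usually needs root); the device path is hypothetical and attribute names vary by vendor:

```python
import subprocess

def smart_summary(device: str) -> dict:
    """Pull power-on hours and temperature from smartctl's attribute table."""
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True).stdout
    summary = {}
    for line in out.splitlines():
        fields = line.split()
        if len(fields) >= 10 and fields[1] in ("Power_On_Hours", "Temperature_Celsius"):
            # The raw value starts at the 10th column of the attribute table.
            summary[fields[1]] = " ".join(fields[9:])
    return summary

print(smart_summary("/dev/sdb"))  # hypothetical device path
```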
  5. IMO WD <Color> <anything> is currently overpriced. The exception is probably the Blue drives, which have slower RPM and a smaller cache and don't come in a size I'd consider for data storage - but otherwise those are attributes that make very little difference in Unraid, in my limited, strictly anecdotal experience. I'm curious if those price trends are recent and/or more indicative of Toshiba than of general pricing strategies. I know that 12TB IronWolf, Red Pro, Exos, IronWolf Pro, Red Plus, and Toshiba NAS drives were all over $360-ish when I was looking last summer. Technically, each of those models is targeted toward a different market, but that does not factor into my purchases (which is why I bought the enterprise drives I did, when I did - strictly $/TB). Nostalgically speaking, what I wouldn't give for some WD Green 18TB drives. My Green drives all gave me better than 40k hours of service before I retired them. 4 of them now live in a friend's QNAP (or whatever) NAS.
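Since these purchase decisions reduce to $/TB anyway, here is a quick Python sketch that ranks candidates by it. The drive names and prices are illustrative placeholders, only roughly in line with the figures mentioned in these posts:

```python
# Rank candidate drives by $/TB. Names and prices are illustrative only.
drives = [
    ("12TB NAS drive (example)",        360, 12),
    ("12TB enterprise drive (example)", 360, 12),
    ("14TB shucked external (example)", 210, 14),
]

for name, price, tb in sorted(drives, key=lambda d: d[1] / d[2]):
    print(f"{name:34s} ${price / tb:6.2f}/TB")
```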
  6. I'm using WD Ultrastars, which are enterprise drives. No issues so far. I got them because of the deal at the time, not because I care that they are enterprise drives. The Backblaze data is rather eye-opening if you've never seen it. There does not appear to be a compelling reason to use enterprise drives when focused purely on cost (warranty/support associated with enterprise relationships are an entirely different consideration).
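The headline number in the Backblaze reports is the annualized failure rate, which is easy to recompute yourself: failures divided by drive-days, scaled to a year. A sketch with made-up counts (the real per-model numbers come from Backblaze's published stats):

```python
# Annualized failure rate, the metric used in the Backblaze drive reports:
# AFR = failures / drive_days * 365 * 100%
def afr(failures: int, drive_days: int) -> float:
    return failures / drive_days * 365 * 100

# Made-up example: 1,000 drives running a full year with 12 failures.
print(f"AFR: {afr(12, 1000 * 365):.2f}%")  # -> AFR: 1.20%
```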
  7. Never used the enterprise Toshiba drives, but I have used a few Toshibas in my server with no issues (can't say the same for either WD or Seagate).
  8. I've given up on finding anything close to the $15.xx/TB that I found a couple of times on 12 and 14TB externals. If prices don't trend back down for externals, then I will be moving to something like the OP's approach as well.
  9. Given that most Epyc models + M/B would cost more than the OP's entire budget, I don't see it as being relevant to the discussion. I will readily concede that in the enterprise server market Epyc is on top. However, in the consumer market that is not the case.
  10. Usable PCIe lanes? Can you point to a motherboard or chipset that is available? The last I read (at least a year ago) was that with the 5000-series CPUs, AMD would finally match Intel in PCIe lane count, but chipset limitations effectively kept the usable bandwidth/lane count below Intel's. I'll have to see if I can dig up the article. Hopefully, I'm wrong and those issues have been resolved. I had been waiting on my upgrade specifically for this reason.
  11. I never considered AMD to be experimental at the time. Your best bet is to just search for new build threads. I will say that if you need PCI-E lanes, Intel is your answer. If you want dual CPUs, Intel is your choice. If you want more bang for your single-CPU buck, Ryzen/Threadripper is a solid choice (and generally offers better performance). A fraction of people run into issues with AMD, but that seems to be mostly chipset/BIOS related. So, pick your criteria. PCI-E lanes - do you want to use a GPU? More than 1 GPU? A 4-port ethernet NIC? M.2 drives? Multiple add-in cards? Those all eat up PCI-E lanes (usually). CPU - do you want dual CPUs? Great for larger workloads with multiple VMs, dockers, etc. that you can spread out. If either of those is important, then probably Intel. Maybe you can find a used server that's only a single gen behind for a good price?
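To make the lane counting concrete, here is a back-of-the-envelope tally of how quickly typical add-in devices use up a CPU's PCI-E lanes. The per-device lane counts and the 20-lane usable budget are assumptions for illustration; check the block diagram for your actual CPU and board:

```python
# Back-of-the-envelope PCI-E lane budget. Lane counts are assumptions;
# real boards route some of these devices through the chipset instead.
cpu_usable_lanes = 20  # assumed CPU-attached lanes (e.g. x16 slot + x4 M.2)

devices = {
    "GPU (x16 slot)":       16,
    "NVMe M.2 drive":        4,
    "4-port ethernet NIC":   4,
    "HBA / SAS controller":  8,
}

total = sum(devices.values())
print(f"Requested lanes: {total}, CPU-attached budget: {cpu_usable_lanes}")
if total > cpu_usable_lanes:
    print("Over budget: some devices end up on chipset lanes or drop to x8/x4.")
```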
  12. 1. Look at my sig. It's been running almost non-stop since 2012, with 2 CPU upgrades along the way. Server-grade components are not required. They certainly won't hurt anything (except maybe your wallet). My oldest drive (retired at least 2 years ago) had 55k power-on hours. I think that's pretty representative of the uptime. 2. I have used up to 14 dockers on my existing setup. Pared that down to 6 full time, with 2 as needed. 3. Hot-swap is nice, but I removed the 5-in-3 hot-swap Icy Dock I was using. I personally don't miss it. If I was starting out new, I would likely include it from the start. 5. I gave up on my system being low-power, but it is, relatively speaking, efficient for a server. 7. My 2 parity drives are used (they both showed under 14k power-on hours when I got them). The 8TB Toshiba is used. The 8TB HGST is used. All the others are shucked.
  13. Isn't that just based on your settings?
  14. In the US, a drive being shucked is not automatically grounds for rejecting a repair. It's certainly more of a hassle, but the right-to-repair laws are normally on your side. After having my internal drive repair rejected because of "evidence of tampering", I quit worrying about it. I'll save the approx. $120 per drive (x15 so far) and take my chances in small claims court if it fails within the warranty period.