About aiden

  1. Okay, I've been absent for a while, and suddenly we're on v6.5.3 with 6.6 looming. The sysadmin in me just doesn't trust the "Check for Updates" easy button in the unRAID GUI, especially given how complicated past updates have been (I've been using unRAID since the 4.x days). I've been reading countless update threads for the past hour, trying to discern what the generally accepted stable version currently is (6.4.1, 6.5.0, 6.5.1..?), and I just can't get a good handle on it. What is the recommended update procedure from v6.3, and which version should I go to?
  2. I find myself in the position of needing to upgrade my hardware to a new chassis. I have been looking hard at 45 Drives, but they are spendy. I've always loved the look of the Xserve design, and it's possible to purchase SEVERAL of them for the price of one empty Storinator case. I realize that it would take 10U (3 Xserve RAIDs + 1 Xserve) to equal the capacity of a 4U Storinator, but the aesthetics are a different story. I notice cooling is a concern, though that seems to depend on fan placement and channeling. My question is: is it worth spending the time + money to buy some of th
  3. The biggest advantage of ZFS to my mind is the self-healing properties, protecting against bit rot etc. That is ideal for a hard disk archive. But the downsides have already been mentioned: no mixed drives, multiple drives per pool means all drives spin up together, etc. The merits of ReiserFS have been discussed in ancient threads on here, but the primary reason it's still in use is its robust journaling. BTRFS is most likely the path forward, though there has been no mention of it beyond the cache drives.
  4. You do realize that this is pre-1.0 software with only 2 point releases so far? Unfortunately, some people don't understand the concept of pre-releases, alpha testing, development cycles, proof-of-concept, etc. They just want it now. Typically, these are people who don't have the skills or experience necessary to truly appreciate what it takes to do all of this kind of work on your own time, by yourself, and with no compensation beyond gratitude. I look forward to your continued development on this project. It is indeed a lofty goal, though sorely needed. Tom has attempted to r
  5. I would agree with you, if I hadn't detected errors on the third pass of 2 drives in the past few years. If I had only run 2 cycles, those drives would have passed. I know there's a point of diminishing returns with repeated cycles, and someone could just as easily argue that a fourth cycle would catch a failure the third missed. But after my experiences, 3 passes are worth the wait in my mind.
  6. Very nice... thanks for posting this update.
  7. He listed the case he's using with the parts list -- HARDIGG CASES 413-665-2163 ... and a quick Google for this case shows they are available on eBay for exactly what he's listed ($184.23), so I suspect that's where he purchased it. Interesting, because my "quick" Google search came back that 413-665-2163 is the TELEPHONE number for Hardigg. That's why I asked the question. My guess is that it is in fact NOT the part number for the actual item, and that the eBay seller listed it incorrectly. I appre
  8. More importantly, how will he/she move it? I wouldn't want that thing sitting in my living room lol. This is no joke. With 24 hard drives in the unRAID server, another server, a rackmount UPS, and the weight of the case itself, this will weigh a LOT. Why one large case instead of several smaller ones? I would think when going mobile, that would make it easier to move the whole system. EDIT: Nevermind. The cost on these kinds of cases is very prohibitive. Could you provide a link to where you are sourcing yours? I like the idea of a modular setup like this, because it's a completel
  9. That's definitely a proof of concept. The airflow is completely wrong, however. You need the fans to blow across ALL the drives, not hit the first one flat and disperse around the rest. If you follow that layout exactly, you would need to put a fan on the bottom and top, pulling air from the bottom and exhausting out the top. Whatever you decide to do, keep that in mind.
  10. It IS available in the US, but it is out of stock right now.
  11. I agree. Plus, redundancy adds durability, which is always worth having on dedicated systems.
  12. Love my Coolspins. They are also the drives of choice for Backblaze.
  13. There's something to be said for having some extra power on board if you need it. When the 5TB drives come along and you get that upgrade itch, the drives may be more power hungry than your current drives. Then you'd have to upgrade the PSU as well. Just more food for your brain.
  14. EDIT: Lol... Lainie beat me to it.
  15. I hate cross-posting, but this seems relevant to this discussion as well: Remember that BackBlaze is currently using (2) 760w PSUs for their 45 drive Storage Pods, plus one more OS drive. From the blog post: The specs for their recommended power supply, the Zippy 760w PSM-5670V: This is in a production environment designed for long term storage, where customers will not tolerate a lot of "poof" type failures. They are effectively running twice as many drives as we can on one unRAID installation, on about 1500W worth of PSUs. So dividing their requirements in h
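To put rough numbers on that comparison, here is a back-of-the-envelope sketch of the division the post starts. The Backblaze figures (2 × 760 W PSUs, 45 drives plus one OS drive) come from the post above; the 24-drive unRAID count and the assumption that PSU capacity scales roughly linearly per drive are mine, purely for illustration:

```python
# Back-of-the-envelope PSU sizing from the Backblaze Storage Pod figures.
# ASSUMPTIONS (not from the post): power budget scales roughly linearly
# with drive count, and 24 drives is just an illustrative unRAID build.
psu_watts = 2 * 760              # two 760 W Zippy PSUs per Storage Pod
pod_drives = 45 + 1              # 45 data drives plus the OS drive
watts_per_drive = psu_watts / pod_drives   # ~33 W of PSU capacity per drive

unraid_drives = 24               # hypothetical unRAID drive count
budget = watts_per_drive * unraid_drives
print(f"{watts_per_drive:.1f} W per drive, ~{budget:.0f} W for {unraid_drives} drives")
```

By this crude halving, an unRAID box with half the drives would land somewhere around 750–800 W of PSU capacity, which lines up with the "dividing their requirements in half" direction the post was headed.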