aiden's Achievements


  1. Okay, I've been absent for a while, and suddenly we're on v6.5.3 with 6.6 looming. The sysadmin in me just doesn't trust the "Check for Updates" easy button in the unRAID GUI, especially given how complicated past updates have been (I've been using unRAID since the 4.x days). I've been reading countless update threads for the past hour, trying to discern what the generally accepted stable version is currently (6.4.1, 6.5.0, 6.5.1..?), and I just can't get a good handle on what to upgrade to. What is the recommended update procedure from v6.3? Which version should I go to?

System Config:
- Single parity
- All drives are ReiserFS, including cache

I noted several variations of upgrade issues for different users depending on their configuration, so I'm listing my add-ons below in case anything stands out as a "definitely disable / delete and reinstall" situation.

Docker:
- MySQL

Plugins:
- Community Applications (Andrew Zawadzki) 2017.11.23
- CA Auto Update Applications (Andrew Zawadzki) 2017.10.28
- CA Backup/Restore App Data (Andrew Zawadzki) 2017.01.28
- CA Cleanup App Data (Andrew Zawadzki) 2017.11.24a
- Dynamix System Buttons (Bergware) 2017.06.07
- Dynamix System Information (Bergware) 2017.8.23
- Dynamix System Statistics (Bergware) 2017.10.02b
- Dynamix WebGUI (Bergware) 2017.03.30
- Fix Common Problems (Andrew Zawadzki) 2017.10.15a
- Nerd Tools (dmacias72) 2017.10.03a
- Plex Media Server (PhAzE) 2016.09.17.1
- SABnzbd (PhAzE) 2016.11.29.1
- Sick Beard (PhAzE) 2016.09.17.1
- Unassigned Devices (dlandon) 2017.10.17

I have been totally stable with this build, and the only reason I want to upgrade is that Community Applications will no longer update unless I'm on at least version 6.4. I appreciate any advice and experiences with these versions to help me formulate an upgrade path.
  2. I find myself in the position of needing to upgrade my hardware to a new chassis. I have been looking hard at 45 Drives, but they are spendy. I've always loved the look of the Xserve design, and it's possible to purchase SEVERAL of them for the price of one empty Storinator case. I realize that it would take 10U (3 Xserve RAIDs + 1 Xserve) to equal the capacity of a 4U Storinator. But the aesthetics are a different story. I notice cooling is a concern, but that seems to depend on fan placement and channeling. My question is: is it worth spending the time and money to buy some of these old girls and rip their guts out? Do all the status indicators still work for those of you who have embarked on this journey? Any other gotchas I should consider? Thanks!
  3. The biggest advantage of ZFS to my mind is the self-healing properties, protecting against bit rot etc. That is ideal for a hard disk archive. But the downsides have already been mentioned: no mixed drives, multiple drives per pool means all drives spin up together, etc. The merits of ReiserFS have been discussed in ancient threads on here, but the primary reason it's still in use is its robust journaling. BTRFS is most likely the path forward, though there has been no mention of it beyond the cache drives.
  4. You do realize that this is pre-1.0 software with only 2 point releases so far? Unfortunately, some people don't understand the concept of pre-releases, alpha testing, development cycles, proof-of-concept, etc. They just want it now. Typically, these are people who lack the skills or experience necessary to truly appreciate what it takes to do all of this kind of work on your own time, by yourself, and with no compensation beyond gratitude. I look forward to your continued development on this project. It is indeed a lofty goal, though sorely needed. Tom has attempted to reboot the UI several times in the past, but this is much more advanced. Please don't feel discouraged by uninformed posts such as the one above, and realize that many of us respect the time and effort it takes to simply think about designing something as complex as this. When life gets out of your way, I hope you'll be able to dive back into this like you did during the summer. Good luck.
  5. I would agree with you, if I hadn't detected errors on my third pass of 2 drives in the past few years. If I had only done 2 cycles, they would have passed. I know there's a point of diminishing returns with repeated cycles, and someone could just as easily argue for 4 cycles to catch a failure that my 3rd pass would miss. But after my experiences, 3 passes are worth the wait in my mind.
  6. Very nice... thanks for posting this update.
  7. He listed the case he's using with the parts list -- HARDIGG CASES 413-665-2163 ... and a quick Google for this case shows they are available on eBay for exactly what he's listed ($184.23), so I suspect that's where he purchased it. Interesting, because my "quick" Google search came back that 413-665-2163 is the TELEPHONE number for Hardigg. That's why I asked the question. My guess is that this is in fact NOT the part number for the actual item, and that the eBay seller listed it incorrectly. I appreciate your attempt to answer for him, but you of all people should know I do plenty of my own research before I start asking questions. If that is where the OP found the item, then so be it. But I would rather have him tell me, just the same.
  8. "More importantly, how will he/she move it? Wouldn't want that thing sitting in my living room lol." This is no joke. With 24 hard drives in the unRAID server, another server, a rackmount UPS, and the weight of the case, this will weigh a LOT. Why one large case instead of several smaller ones? I would think when going mobile, that would help make it easier to move the whole system. EDIT: Nevermind. The cost of these kinds of cases is very prohibitive. Could you provide a link to where you are sourcing yours? I like the idea of a modular setup like this, because it's a completely self-contained, enterprise-quality infrastructure in a half-height rack. I will watch this DIY with interest.
  9. That's definitely a proof of concept. The airflow is completely wrong, however. You need the fans to blow across ALL the drives, not hit the first one flat and disperse around the rest. If you follow that layout exactly, you would need to put a fan on the bottom and top, pulling air from the bottom and exhausting out the top. Whatever you decide to do, keep that in mind.
  10. It IS available in the US, but it is out of stock right now.
  11. I agree. Plus, splitting duties across dedicated systems adds redundancy, which makes the whole setup more durable.
  12. Love my Coolspins. They are also the drives of choice for Backblaze.
  13. There's something to be said for having some extra power on board if you need it. When the 5TB drives come along and you get that upgrade itch, the drives may be more power hungry than your current drives. Then you'd have to upgrade the PSU as well. Just more food for your brain.
  14. EDIT: Lol... Lainie beat me to it.
  15. I hate cross-posting, but this seems relevant to this discussion as well: remember that BackBlaze is currently using two 760 W PSUs for their 45-drive Storage Pods, plus one more OS drive, per their blog post and the spec sheet for their recommended power supply, the Zippy 760 W PSM-5670V. This is in a production environment designed for long-term storage, where customers will not tolerate a lot of "poof" type failures. They are effectively running twice as many drives as we can on one unRAID installation, on about 1500 W worth of PSUs. So dividing their requirements in half means one sub-1000 W PSU with a beefy 5 V rail should be sufficient for 20+ drives, the motherboard, and processor. Clearly they avoided the surprisingly 5 V-hungry WD drives. Based on their recommendations, I would submit that the Zippy 860 W PSM-5860V (the bigger brother) could probably handle the job of a 24-drive, non-WD-based unRAID server with a moderate processor.
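The halving argument above can be sketched as a quick back-of-the-envelope calculation. This is only my arithmetic on the numbers quoted in the post (two 760 W PSUs, 45 data drives plus an OS drive), not anything BackBlaze publishes as a per-drive figure:

```python
# Back-of-the-envelope PSU sizing using the Backblaze Storage Pod
# numbers quoted above. The per-drive figure is a derived budget
# (it folds in motherboard/CPU/fan overhead), not a measured draw.

def watts_per_drive(psu_watts: float, drive_count: int) -> float:
    """PSU capacity allocated per drive, overhead included."""
    return psu_watts / drive_count

pod_watts = 2 * 760       # two 760 W PSUs -> 1520 W total
pod_drives = 45 + 1       # 45 data drives plus the OS drive

budget = watts_per_drive(pod_watts, pod_drives)
print(round(budget))       # ~33 W per drive

# Scale that budget to a 24-drive unRAID build: ~793 W, which is
# why an 860 W unit (the PSM-5860V) looks like a comfortable fit,
# while a single 760 W unit would be cutting it close.
print(round(24 * budget))  # ~793 W
```

By this budget a 760 W supply covers roughly 23 drives, so 24 drives plus a moderate processor is exactly the territory where stepping up to the 860 W model makes sense.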