Everything posted by aiden

  1. Okay, I've been absent for a while, and suddenly we're on v6.5.3 with 6.6 looming. The sysadmin in me just doesn't trust the "Check for Updates" easy button in the unRAID GUI, especially given how complicated past updates have been (I've been using unRAID since the 4.x days). I've been reading countless update threads for the past hour, trying to discern what the generally accepted stable version currently is (6.4.1, 6.5.0, 6.5.1..?), and I just can't get a good handle on what to upgrade to. What is the recommended update procedure from v6.3? Which version should I go to?

     System Config
     Single parity
     All drives are ReiserFS, including cache

     I noted several variations of upgrade issues for different users depending on their configuration, so I'm listing my add-ons below in case anything stands out as a "definitely disable / delete and reinstall" situation.

     Docker
     MySQL

     Plugins
     Community Applications (Andrew Zawadzki) 2017.11.23
     CA Auto Update Applications (Andrew Zawadzki) 2017.10.28
     CA Backup/Restore App Data (Andrew Zawadzki) 2017.01.28
     CA Cleanup App Data (Andrew Zawadzki) 2017.11.24a
     Dynamix System Buttons (Bergware) 2017.06.07
     Dynamix System Information (Bergware) 2017.8.23
     Dynamix System Statistics (Bergware) 2017.10.02b
     Dynamix WebGUI (Bergware) 2017.03.30
     Fix Common Problems (Andrew Zawadzki) 2017.10.15a
     Nerd Tools (dmacias72) 2017.10.03a
     Plex Media Server (PhAzE) 2016.09.17.1
     SABnzbd (PhAzE) 2016.11.29.1
     Sick Beard (PhAzE) 2016.09.17.1
     Unassigned Devices (dlandon) 2017.10.17

     I have been totally stable with this build, and the only reason I want to upgrade is because Community Applications will no longer update unless I'm on at least version 6.4. I appreciate any advice and experiences with these versions to help me formulate an upgrade path.
  2. I find myself in the position of needing to upgrade my hardware to a new chassis. I have been looking hard at 45 Drives, but they are spendy. I've always loved the look of the XServe design, and it's possible to purchase SEVERAL of them for the price of one empty Storinator case. I realize that it would take 10U (3 XServe RAIDs + 1 XServe) to equal the capacity of a 4U Storinator, but the aesthetics are a different story. I notice cooling is a concern, but that seems to depend on fan placement and channeling. My question is: is it worth spending the time + money to buy some of these old girls and rip their guts out? Do all the status indicators still work for those of you who have embarked on this journey? Any other gotchas I should consider? Thanks!
  3. The biggest advantage of ZFS to my mind is its self-healing properties, protecting against bit rot and similar silent corruption. That is ideal for a hard disk archive. But the downsides have already been mentioned: no mixing of drive sizes, multiple drives per pool means all drives spin up together, etc. The merits of ReiserFS have been discussed in ancient threads on here, but the primary reason it's still in use is its robust journaling. BTRFS is most likely the path forward, though there has been no mention of it beyond the cache drives.
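     To make "self-healing" concrete: ZFS stores a checksum for every block and verifies it on read, repairing from a redundant copy when the check fails. Below is a minimal Python sketch of just the detection half; the block size, hash choice, and file-based approach are illustrative assumptions, not what ZFS actually does internally.

     ```python
     import hashlib

     BLOCK_SIZE = 128 * 1024  # assumed block size; ZFS actually uses variable record sizes

     def block_checksums(path):
         """Compute a checksum for each fixed-size block of a file."""
         sums = []
         with open(path, "rb") as f:
             while True:
                 block = f.read(BLOCK_SIZE)
                 if not block:
                     break
                 sums.append(hashlib.sha256(block).hexdigest())
         return sums

     def find_rotten_blocks(path, known_sums):
         """Return indexes of blocks whose checksum no longer matches.

         ZFS runs this check on every read and heals from a mirror or
         parity copy; without redundancy you can only detect the rot,
         not repair it.
         """
         return [i for i, (old, new) in
                 enumerate(zip(known_sums, block_checksums(path)))
                 if old != new]
     ```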
  4. You do realize that this is pre-1.0 software with only 2 point releases so far? Unfortunately, some people don't understand the concept of pre-releases, alpha testing, development cycles, proof-of-concept, etc. They just want it now. Typically, these are people who don't have the skills or experience necessary to truly appreciate what it takes to do all of this kind of work on your own time, by yourself, and with no compensation beyond gratitude. I look forward to your continued development on this project. It is indeed a lofty goal, though sorely needed. Tom has attempted to reboot the UI several times in the past, but this is much more advanced. Please don't feel discouraged by uninformed posts such as the one above, and realize that many of us respect the time and effort it takes simply to think about designing something as complex as this. When life gets out of your way, I hope you'll be able to dive back into this like you did during the summer. Good luck.
  5. I would agree with you, if I hadn't detected errors on the third pass of 2 drives in the past few years. If I had only done 2 cycles, those drives would have passed. I know there's a point of diminishing returns with repeated cycles, and by the same logic someone could argue that a 4th cycle would catch yet another failure. But after my experiences, 3 passes are worth the wait in my mind.
  6. Very nice... thanks for posting this update.
  7. He listed the case he's using with the parts list -- HARDIGG CASES 413-665-2163 ... and a quick Google for this case shows they are available on e-bay for exactly what he's listed ($184.23), so I suspect that's where he purchased it. http://www.ebay.com/itm/HARDIGG-CASES-413-665-2163-SHIPPING-STORAGE-HARD-CASE-/190709478495 Interesting, because my "quick" Google search came back that 413-665-2163 is the TELEPHONE number for Hardigg. That's why I asked the question. My guess is that it is in fact NOT the part number for the actual item, and that the eBay seller listed it incorrectly. I appreciate your attempt to answer for him, but you of all people should know I do plenty of my own research before I start asking questions. If that is where the OP found the item, then so be it. But I would rather have him tell me, just the same.
  8. More importantly, how will he/she move it? Wouldn't want that thing sitting in my living room lol. This is no joke. With 24 hard drives in the unRAID server, another server, a rackmount UPS, and the weight of the case itself, this will weigh a LOT. Why one large case instead of several smaller ones? I would think when going mobile, that would make it easier to move the whole system. EDIT: Never mind. The cost on these kinds of cases is very prohibitive. Could you provide a link to where you are sourcing yours? I like the idea of a modular setup like this, because it's a completely self-contained, enterprise-quality infrastructure in a half-height rack. I will watch this DIY with interest.
  9. That's definitely a proof of concept. The airflow is completely wrong, however. You need the fans to blow across ALL the drives, not hit the first one flat and disperse around the rest. If you follow that layout exactly, you would need to put a fan on the bottom and top, pulling air from the bottom and exhausting out the top. Whatever you decide to do, keep that in mind.
  10. It IS available in the US, but it is out of stock right now. http://www.u-nas.com/xcart/product.php?productid=17617&cat=249
  11. I agree. Plus, the extra redundancy makes the whole setup more resilient, which is exactly what you want on a dedicated system.
  12. Love my Coolspins. They are also the drives of choice for Backblaze.
  13. There's something to be said for having some extra power on board if you need it. When the 5TB drives come along and you get that upgrade itch, the drives may be more power hungry than your current drives. Then you'd have to upgrade the PSU as well. Just more food for your brain.
  14. http://download.lime-technology.com/download/ EDIT: Lol... Lainie beat me to it.
  15. I hate cross-posting, but this seems relevant to this discussion as well: remember that BackBlaze is currently using (2) 760W PSUs for their 45-drive Storage Pods, plus one more OS drive (see their blog post for the specs of their recommended power supply, the Zippy 760W PSM-5670V). This is in a production environment designed for long-term storage, where customers will not tolerate a lot of "poof" type failures. They are effectively running twice as many drives as we can on one unRAID installation, on about 1520W worth of PSUs. So dividing their requirements in half suggests that one sub-1000W PSU with a beefy 5V rail is sufficient for 20+ drives, the motherboard, and the processor. Clearly they avoided the surprisingly 5V-hungry WD drives. Based on their recommendations, I would submit that the Zippy 860W PSM-5860V (the bigger brother) could probably handle the job of a 24-drive, non-WD-based unRAID server with a moderate processor.
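      To put rough numbers on that halving argument, using only the figures above (the per-drive wattage that falls out is a budget estimate, not a measured draw):

      ```python
      # Back-of-the-envelope PSU sizing from the BackBlaze figures above.
      pod_psu_watts = 2 * 760        # two 760W Zippy PSUs per Storage Pod
      pod_drives = 45 + 1            # 45 data drives plus the OS drive

      watts_per_drive = pod_psu_watts / pod_drives
      print(f"PSU budget per drive: {watts_per_drive:.1f} W")   # ~33 W

      unraid_drives = 24             # a maxed-out unRAID build
      print(f"24-drive budget: {unraid_drives * watts_per_drive:.0f} W")
      # ~793 W, comfortably inside an 860W unit like the PSM-5860V,
      # with headroom left for a moderate motherboard and processor.
      ```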
  16. Yes, you can use the array while it is rebuilding. That's part of the design. I would suggest you not write a lot to the array during the rebuild, as this will slow things down. Plus it always makes me a little nervous.
  17. That is definitely food for thought, Jonathan.
  18. Storage Review has a great breakdown of common high-capacity drives and their power states. It shows just how much more power drives consume at startup than during reads and writes. Staggered spinup would allow for smaller (by a factor of at least 2x) and more efficient PSUs to be used in our systems.
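      A rough model of why, with assumed per-drive numbers (a typical 3.5" drive surges to roughly 25-30W at spinup but draws well under 10W while reading; substitute your own drives' figures):

      ```python
      # Peak PSU load with vs. without staggered spinup (assumed per-drive figures).
      SPINUP_W = 28.0   # assumed surge while the platters spin up
      ACTIVE_W = 7.0    # assumed steady-state read/write draw
      DRIVES = 20

      all_at_once = DRIVES * SPINUP_W                  # every drive surging together
      staggered = SPINUP_W + (DRIVES - 1) * ACTIVE_W   # worst case: one drive spinning
                                                       # up while the rest are busy
      print(f"simultaneous spinup peak: {all_at_once:.0f} W")  # 560 W
      print(f"staggered spinup peak:    {staggered:.0f} W")    # 161 W
      print(f"ratio: {all_at_once / staggered:.1f}x")          # ~3.5x
      ```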
  19. Yes, a few of us contributed to a script over the years. Started with Starcat, then me, then dstroot, then Guzzi. Don't know where that falls in line with this one, time wise. Thread is here, and in my signature.
  20. Wow, this is a very nice little case! Hopefully someone can do a full review on it soon. EDIT: Found a user review - http://forums.overclockers.com.au/showthread.php?t=1084681 Seems like a very tight fit, which means it's likely about as small as you can get with that number of drives. Very efficient use of space, and surprisingly good cooling considering how crammed everything is. Some QC issues with a little plastic and metal warping, but that's standard fare with these kinds of imports. You'd need to use a super-low-profile CPU cooler, like the Noctua he's using. Since you'd have to add offboard SATA channels (like an M1015), you could use a 4-port mini-ITX board and still have enough ports for the job. I'm starting to like this box quite a bit as a replacement for the Microserver, which is sadly losing a lot of mod-friendly features in the next generation.
  21. All of my disks show this until I start the array... After the array is online, they show:
      Partition format: MBR: 4K-aligned
      File system type: reiserfs
      +1... every machine I have upgraded from the "5bX" series to RC has exhibited the exact same behavior: an MBR: 4K-aligned partition with an unknown file system. Since the wiki specifically states that an MBR: unknown partition is the danger, I took the risk initially and started the array. All of the data drives had backups, so I was comfortable I could recover if needed. I've seen this over a half-dozen times now on various systems, and every time, the data drives stay intact when the array is started. My guess is that something in the initial config phase is not correctly reading the filesystem, or is not relaying the correct string value to the GUI, because if you take a look at the drives themselves, you can see the reiserfs partitions. I'm not advocating starting the array if the experts tell you to wait. I'm just relaying my personal experience.
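      For anyone who wants to verify the on-disk filesystem independently of the GUI: the reiserfs superblock sits 64 KiB into the partition, with its magic string 52 bytes into the superblock. A quick read-only sketch (the device path below is only an example; point it at your own data partition):

      ```python
      # Check for a reiserfs superblock directly, bypassing the GUI's detection.
      MAGICS = (b"ReIsErFs", b"ReIsEr2Fs", b"ReIsEr3Fs")  # reiserfs 3.5 / 3.6 / relocated journal

      def is_reiserfs(device):
          with open(device, "rb") as dev:    # opened read-only; nothing is written
              dev.seek(64 * 1024 + 52)       # superblock offset + magic field offset
              magic = dev.read(10)
          return any(magic.startswith(m) for m in MAGICS)

      print(is_reiserfs("/dev/sdb1"))        # example device; needs root to read
      ```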
  22. No worries, Ford. It's a fruitful discussion, because frankly I hadn't considered AMD as a viable option either. But the fact that the Microservers to this point have been using AMD procs with ECC memory should be enough validation that AMD can be a great alternative. We just have to find that magic bullet, i.e., a mini-ITX AMD board with 6 SATA ports, 1 PCIe x16 slot, and IPMI.
  23. Get a couple of cheap DD-WRT-compatible routers and set them up like Gary said. Use an app like inSSIDer to sniff out which bands are saturated in your area; you can hopefully find a channel that isn't too crowded. Remember that radio signals don't confine themselves to the exact channel you set: they bleed off into other frequencies. So try to avoid overlap of any kind with any other signals if possible. You'll see what I mean when you look at the display in the app.
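      To see why channels bleed, here's a toy calculation of 2.4GHz channel spacing; the 20 MHz signal width is the nominal 802.11g figure (802.11b is closer to 22 MHz):

      ```python
      # 2.4 GHz Wi-Fi: "different channel" is not the same as "no overlap".
      # Channel centers sit 5 MHz apart, but a signal is ~20 MHz wide, so
      # channels need to be several numbers apart to stay clear of each other.
      def center_mhz(ch):
          return 2407 + 5 * ch   # ch 1 -> 2412 MHz, ch 6 -> 2437, ch 11 -> 2462

      def overlaps(ch_a, ch_b, width_mhz=20):
          return abs(center_mhz(ch_a) - center_mhz(ch_b)) < width_mhz

      for a, b in [(1, 3), (1, 6), (6, 11)]:
          print(f"ch {a} vs ch {b}: {'overlap' if overlaps(a, b) else 'clear'}")
      # 1 vs 3 overlap; 1 vs 6 and 6 vs 11 clear, hence the classic 1/6/11 plan.
      ```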
  24. With the 3-in-2 cages modded with 80mm fans in my main server, I'm satisfied that I can keep the drives in the low 30s (°C) when building/checking parity. I like the flexibility of not having to open my case to swap drives. Call me lazy. With the S-915, I could put in 2x 3-in-2 cages, and then a 2-in-1 or 4-in-1 SSD cage. That would make a 1 parity + 5 data drive box, with a cache drive pool. With 4TB drives, that's a 20TB box. That's the config I'm thinking of. Regardless, I'm loving the fact that the OEM manufacturers are coming up with so many good options for home NAS solutions!
  25. Anyone see this? There's very limited information other than the dimensions of the different models here. I can't find any internal pics or any reviews, or even a place to source one yet. But the loss of that 5th or 6th drive in the new Gen 8 HP Microservers has made me want to look for an alternative. The S-915 pictured above looks to be my ideal mini-server, although the S35 more closely matches the Microserver in dimensions (10.5 x 8.3 x 10.2 in vs 7.87 x 8.35 x 12.09 in). With a 5-in-3 you can get that 5th drive back.