GreenDolphin

Everything posted by GreenDolphin

  1. I'm about to pull the trigger on Unraid (pending a successful trial over the next few weeks); I actually bought the HW and built the system a few years back, but life got in the way and I made do with an awful maze of external drives. I was very concerned with the way the release notes for 6.10 started, as a strong opponent of cloud computing and the way every Thing vendor is trying to get every customer logged on 100% of the time for every little thing. Apple was the first major offender here with iCloud, but Microsoft has followed suit, and the worst of all is Tesla. Since I might end up posting a lot of info about my system or usage in the Forum, I didn't like the direct association of my server credentials with a public forum account. Anyway, the above quote is an absolutely fine solution for me. Since a Forum signon doesn't require any PII (no real-life name or address, I just rechecked), I'll be maintaining a dummy account for this. Being able to do this wasn't a given, since lots of online forums, especially official company-owned ones, forbid multiple accounts for the same person and sometimes enforce it quite strongly (banning the user outright); I think there would be a lot less pushback if Limetech emphasized that it is possible. Incidentally, Limetech will now also have to come up with a policy for what to do if a forum user seriously violates the forum behavioral guidelines. Banning that user's logon, even temporarily, is no longer possible, because s/he still needs it to access UPC. Sure, from all I've seen (been casually reading the forum for several years, some threads & subjects in depth), this isn't a common problem here -- discussion is remarkably even-keeled, even on "religious" subjects. Still, there has to be both a policy and a way to technically implement it. Re the various questions @eweitzman raised about what type of info would be collected and how, that's a rather complicated question -- basically, anyone who has a forum member's username can scrape all his/her posts and any information posted in them... That's not info Limetech is ever likely to collect itself, of course, but if the DB associating Unraid server addresses and emails gets hacked, there might be bad outcomes. That's why I'd recommend that anyone who doesn't intend to use the UPC-based applications, MyServers and/or future ones, keep their active forum account and UPC account separate. Also, while it is a minor matter, I'd also appreciate an option for not having the signon field on the WebGUI banner: I think it belongs on an Admin page like Management Access anyway, esp. since it's not info the user needs all the time. Again, commercial web storefronts are not a good comparison: there you can't do anything except casual browsing without being logged on, so an indication of whether you are or not is useful. On the main management console of my private server running locally, I'd rather not have information that isn't always pertinent displayed at all times.
  2. I'm about to set up my first Unraid server. I have an 8TB Seagate IronWolf I bought a while back and never used, but in the meantime my data has grown such that I need to buy another drive anyway simply to be able to get data onto the array to begin with. I was going to buy another 8TB IronWolf, but to my surprise, these Exos 16TB drives offer by far the best $/TB of any HD I've seen. Basic Googling doesn't yield any obvious catch (obviously, if it fails mechanically, that's a lot of data, so backups are vital, and parity checks, rebuilds, preclear etc. take a long time). I'd be using the Exos as the parity drive in the new server, which will live right next to my desk as well. @KuniD, did you end up buying the drive? How happy are you with it? Any reliability or performance issues? Noise-wise?
  3. This is offtopic, but IME it is a really bad idea to ever use ISP-provided equipment on the customer side if you're doing anything more complex than connecting a single computer with consumer-type defaults -- and any setup using Unraid fits that bill. If your ISP doesn't allow you to provide your own router, I'd switch providers just because of that (using their physical-layer modem only is OK).
  4. Bloomberg is usually a trustworthy source. Their tech understanding is of course low, but they claim to have done a lot of investigation, and there's a lot of supporting detail given in the article. I can't see so many details being created out of whole cloth -- Bloomberg would leave themselves open to endless lawsuits otherwise. OTOH, both Apple & Amazon have made explicit statements denying the story, and they'll take a credibility hit if it turns out to be true. Someone is lying. If I had to bet, I'd guess the story is mostly true. It also sounds very likely that US authorities would hesitate about taking steps that would harm a large US company (that itself was presumably not complicit). This story will definitely be continued...
  5. In the future, if you decide to move a forum, could you update the title of the announcement thread to reflect this? I just wasted 45min trying to find posts I'd bookmarked, and then the forum itself which seemed to have disappeared. I was sure it was a bug in the forum SW. It's confusing to announce the experimental forum in the Announcement forum, but not the change back to a different forum (esp. since it's at a different location in the hierarchy).
  6. Opinions will vary on this... First, I would personally not use any Threadripper CPU on a production server. The product line is far too new (<6 months since launch) for software written for it to be considered stable, which is a prime goal of any server implementation. Ditto for that matter for any other Ryzen CPU -- the architecture is too new. Second, and you may or may not care about this, is ECC support. Threadripper officially supports ECC, but from discussions I've seen, apparently not all Threadripper motherboards do, and for some that explicitly say "ECC" in their specs, there are no ECC memory modules in their QVL. I don't know what that means in practice (I've seen conflicting reports), but I wouldn't bank on a non-officially-supported data-integrity feature. That said, ECC isn't mandatory for UnRAID (some folks here use it, some don't), although personally I think the extra protection, even for rare events, is worthwhile for the relatively low extra cost. For any actual issues with Ryzen, search the forum for "Ryzen", for example here -- there are a bunch of issues, although they are gradually being fixed. Some of the issues apparently require changes/fixes to the Linux kernel, which isn't up to Lime Tech, and although I'm sure this'll happen, I don't consider Ryzen a "prime time" product yet. Alternatives: If you're willing to consider "pre-owned" CPUs, an older (1st-gen) Intel Xeon E5-2xxx might work for you. These are 8 core / 16 thread CPUs, see for example here (Intel ARK specs), usable in single- or dual-CPU configs. Originally selling for $1400-$1500 each, a bunch of large datacenters got rid of a huge amount about 2 years ago, and their price on the open market went <$100. Even after several years of server farm use, they should still be good for quite a few years of UnRAID use. The price has gone up a bit since then, but they're still available at <$150 on eBay and other places; I'm running an E5-2665 I bought here using a new AsRock mobo. There's a really long thread on the forum on them.
  7. I very much appreciate the heads up -- it helps me determine whether to update now or wait for near-term additional updates -- the latter in my case, as I don't want to update the OS again shortly after updating it previously.
  8. Depends on your landlord and your rental contract. I knew of people both here and in Germany who rented and had rental contracts explicitly forbidding any drilling whatsoever. In the dorms I lived at while studying in the US, it was even forbidden to use the tiny picture nails -- we were supplied with special sticky tack and adhesive hooks, like this. Also, depending on the wall material, it's not necessarily trivial to fill holes so they're completely invisible afterwards. Over here, you'd basically need to repaint a wall after filling.
  9. (Slightly offtopic for a best-deal thread, but I figure this is the best place for it, as anyone who bought one of the Xeon E5 V1s will likely be interested...) Turns out the Sandy Bridge Xeon E5 v1s are affected by Spectre / Meltdown, and Intel does have some kind of microcode update: https://downloadcenter.intel.com/download/27431/Linux-Processor-Microcode-Data-File?product=64597 Anyone know what exactly the update addresses? Assuming there is a relevant fix, what would be the indicated action for an UnRAID user with such a CPU? Would this update likely be included in a Linux kernel update which would eventually trickle down to an UnRAID update? I understand Linux does have a general mechanism allowing a microcode update on boot, unlike other OSes, so it might need to be re-loaded on every boot but would still work.
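In case it's useful to anyone else checking their own box, here's a minimal sketch (assuming the standard Linux /proc/cpuinfo layout, with "processor" and "microcode" fields) for seeing which microcode revision the kernel currently has loaded; compare the value before and after an update:

```python
# Minimal sketch: report the microcode revision the kernel sees for each CPU.
# Assumes the standard Linux /proc/cpuinfo layout ("processor", "model name",
# "microcode" fields); run it on the UnRAID console or over SSH.

def cpuinfo_microcode(path="/proc/cpuinfo"):
    revisions = {}
    current_cpu = None
    with open(path) as f:
        for line in f:
            if ":" not in line:
                continue
            key, value = (part.strip() for part in line.split(":", 1))
            if key == "processor":
                current_cpu = value
            elif key == "microcode" and current_cpu is not None:
                revisions[current_cpu] = value
    return revisions

if __name__ == "__main__":
    for cpu, rev in sorted(cpuinfo_microcode().items(), key=lambda kv: int(kv[0])):
        print(f"CPU {cpu}: microcode revision {rev}")
```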
  10. A person can't have everything. Wireless solutions are never as good as wired ones, either in performance or consistency; the major problem isn't throughput, it's latency. As for energy consumption, the nano router I have, the TL-WR902AC, actually has a USB power supply. While it comes with an AC adapter, it can be powered via any USB port as well, e.g., one on the wired device it's attached to. If a 100mbps connection to the server is enough, that will work fine (and if there's a single PC that connects to the UnRAID server, as in the OP's case, that might well be enough).
  11. That's... a lot of drives, despite being >8cm shorter than the Azza. Looks like it's been discontinued for years, but the Cooler Master Storm Trooper is still being made; a bit more expensive, however -- currently $126 after rebate on Amazon. It does however have 4 included fans, and a 2-drive 2.5" dock in addition to the 9x 5.25" bays. There's even a built-in toolbox to keep all the screws, cables, brackets etc. (-: Bottom line, looks like there are still a few choices.
  12. Looks like this could do the job, if you have the room for it -- it's a full tower. The price is certainly right (-: I'm a bit confused by the airflow options -- with multiple HDs in front, IMO the best way to go is definitely front-to-back for all fans, aligned along the HDs & internal cards, toward the back ports. Many of the pics (also in the Tom's review linked to from their site) seem to assume bottom-to-top airflow. The only thing I'd be concerned about is some of the QA- & build-quality-related complaints in the Newegg & Amazon reviews -- maybe not an issue if the server stays put once it's built. See if you can find a review of someone using it in a NAS build and/or with lots of drives in 5-in-3 cages, to see if there are long-term heat issues. From the look of the 5.25" trays, there will be tabs separating the bays you'll need to bend back or break off. I'd still consider the Antec 900 as well -- there seems to be experience using hot-swap cages with it, and it's currently not that expensive -- $70 after rebate at Newegg.
  13. ++ This is best solved at the network level, not the (UNRaid) OS/server level. If the router jonathanm suggests isn't a fit aesthetically or size-wise, I'd suggest the TP-Link RE210 AC750 Wi-Fi Range Extender; it has wireless bridge capability where you can use its gigabit Ethernet port to connect a wired device (your UNRAID server) to your main router via WiFi. Since it supports dual-band, performance should be good; it's the only tiny-size WiFi access device I know of that supports a gigabit connection and 5GHz. While I haven't used it myself, I have its cousin, the TL-WR902AC nano travel router, which is a fully-featured router with amazing WiFi performance despite being barely larger than a matchbox (it's less suitable for your purpose since its Ethernet ports are 10/100). The RE210 is a bit expensive in the US at the moment, but can be bought from Amazon Germany for ~US$35 delivered to the US.
  14. Sure, but rack-based servers aren't just a cost issue... While a 4020 certainly has a lot of room for drives, from what I hear (pun intended) from people who have one it's quite loud, and wouldn't be suitable for a living area. I live in a small apartment, and my UnRAID server lives in my study (I work from home) <2ft from my head. It's barely audible most of the time.
  15. It does look like the Sharkoon is finally out of production, although both Amazon UK and Amazon Germany show it as "temporarily out of stock" and allow ordering, so they should get more of them. Looking back at my notes while I was researching cases: Another option, if you can find one, may be the discontinued Zalman MS800 or MS800 Plus. It's intended for gaming rigs, but does have 10 5.25" bays (in the Plus config, 3 of them come pre-occupied with 3 hot-swap 3.5" bays, the latter being replaceable with a 5-in-3 cage, so either model supports 15 3.5" drives overall). It's taller & deeper than the T9 (but still considered mid-ATX), and quite a bit fancier -- 4 included fans + room for 2 more, outside fan-speed knob, etc. Amazon lists it as temporarily out of stock, so I'd probably contact them to ask whether there's a chance it'll actually be available. You may also want to keep an eye out for a used Lian-Li PC-A17 (also discontinued, but the same approach as the T9, 9x straight top-to-bottom 5.25" bays). But yes, the situation with this type of case sucks, which is odd given that home NASes are becoming a thing. The 8-bay solutions marketed for NAS simply don't cut it for a decent-size server with >2-3 years of expansion, and rack solutions are a bad fit for someone who doesn't have a basement to hide them in.
  16. I originally thought of getting a purpose-built home-NAS case like the Silverstone CS380, but decided I wanted room for >8 drives for the same reason you do -- I eventually intend to have lots of drives, expanding gradually, and want to be able to add & hot-swap and keep a spare drive as well without powering down; this system should last 5-6 years. It turned out cases with top-to-bottom 5.25" bays had become very rare, let alone for a reasonable price. After a lot of research, I settled on the mid-tower ATX Sharkoon T9 Value case (my system isn't fully set up yet -- all components installed except some of the disks -- but it's all working fine under UnRAID with decent temps, and fairly quiet). The case has 9x 5.25" bays along the entire front, and has a very clean look which isn't too boring (IMO). I currently have one 5-in-3 hot-swap 3.5"-drive cage in it, & a second cage ready. Eventually I'll either add a 3rd such cage, or a 3-in-2 cage for 3.5" drives plus a 2-in-1 cage for 2x 2.5" cache/VM SSD drives, so a total of either 15x 3.5" or 13x 3.5" + 2x 2.5" drives. Currently I have a single 2.5" SSD which is simply screwed to the bottom of the case, non-hot-swap. The case is very well made, much more so than you'd expect for the cost, and I found quite a few people using it for many-drive NAS builds. As for the cost: I'm in Israel and bought mine from Germany. Are you in the US? I don't think the case is directly available there, but it would cost <US$70 delivered to the USA from Amazon UK. A fairly long review of the case. Video of an UnRAID build using the case, and the same system being upgraded a couple of years later with an additional cage. As you can see, it's pretty easy to work on. There are tabs between the 5.25" bays that may need to either be bent back or broken off to fit the cage(s), depending on the ones you use. I have the iStarUSA BPN-DE350SS (in red, which matches my red-accent version of the case), and I bent the tabs using a C-clamp. Took ~20min to bend all 36, IIRC.
  17. Thanks for the replies! RobJ, what concerns me is that the pre- and post-read passes cover a different number of blocks. Even if dd doesn't have an end specified when zeroing, I'd expect the pre-read pass to behave the same, and to have stopped at the same location. Frank1940, my forum search didn't turn up that thread for some reason -- thanks, will repost there! I've already seen at least 2 posts there about similar issues.
  18. Hi, UNRaid noob here. I may be just misunderstanding something, but want to make sure I don't have bad disks... I posted this in the General Support subforum, but it was suggested I repost here. --------- Just completed my first build, and to prep for configuring the array, I ran pre-clear on 2 brand new 8TB drives (Seagate IronWolfs), in parallel; this included pre- and post-read passes and one zeroing cycle. It took ~36 hours, which seems reasonable. After completion, the WebGUI Preclear plugin page's status says "Preclear Finished Successfully!" for both drives, and the post-preclear report (generated by the "eye" icon) also looks fine to me. However, the actual log has the warning "Zeroing: dd command failed, exit code: 1" on the dd command in the zeroing phase, from both drives at the exact same location, before the location where the pre-read ended. The post-read pass also stopped at that same location. Searching the forum, I've seen a couple of threads with the same error message, but it doesn't seem to be the same behavior: In those cases, the pre-clear stopped on the error, which it didn't in my case, or it was a HW issue (cable / bad SATA port) -- the fact that in my case the error happened at the exact same disk location for both disks seems to rule that out. I notice that there was another post a few days ago about the zero-write phase not stopping the pre-clear process despite showing an error; even if the dd command is not told when to stop, I'd expect the pre-read pass to have stopped at the same location. Having pre-read and post-read cover a different number of blocks seems problematic. Another odd thing is that after the pre-clears completed, the Unassigned Devices plugin shows only one of the 2 disks as being pre-cleared in the "FS" (I assume Filesystem?) column, and displays Auto-mount, Share, Script Log and Script as available for that one only. Anything to worry about here? I'd rather not redo the whole pre-clear unless really necessary. Additional details: UNRaid v6.3.1, plugin Pre-clear Disks 0.8.4-beta of 2017.02.16a. Before the pre-clear, I ran the short SMART test, which passed fine (both drives). Attached: pre-preclear SMART tests, post-preclear reports, preclear log, Diagnostics file, screen cap of Unassigned Devices. Any additional info I can provide? I'm holding off on further pre-clears, reboots etc. so I don't clobber it. ST8000VN0022-2EL112_ZA160BZK-20170313-0214.txt ST8000VN0022-2EL112_ZA160HNB-20170313-0220.txt post-preclear report ZA160BZK.txt post-preclear report ZA160HNB.txt redrock-diagnostics-20170315-1548.zip
  19. Hi, UNRaid noob here. I'm probably just misunderstanding something, but want to make sure. Just completed my first build, and to prep for configuring the array, I ran pre-clear on 2 brand new 8TB drives (Seagate IronWolfs). The WebGUI Preclear plugin page's status says "Preclear Finished Successfully!" for both drives, and the post-preclear report (generated by the "eye" icon) also looks good. However, the actual log has the warning "Zeroing: dd command failed, exit code: 1" on the dd command in the zeroing phase, from both drives at the exact same location, before the location where the pre-read ended. The post-read pass also stopped at that same location. Searching the forum, I've seen a couple of threads with the same error message, but it doesn't seem to be the same behavior: In those cases, the pre-clear stopped on the error, which it didn't in my case, or it was a HW issue (cable / bad SATA port) -- the fact that in my case the error happened at the exact same disk location for both disks seems to rule that out. Anything to worry about here? I'd rather not redo the whole pre-clear unless really necessary. Additional details: UNRaid v6.3.1, plugin Pre-clear Disks 0.8.4-beta of 2017.02.16a. Pre-clear was run in parallel for both drives (pre- and post-read passes, one zeroing cycle); it took ~36 hours, which seems reasonable. Before the pre-clear, I ran the short SMART test, which passed fine (both drives). Attached: pre-preclear SMART tests, post-preclear reports, preclear log. Any additional info I can provide? I'm holding off on further pre-clears, reboots etc. so I don't clobber it. ST8000VN0022-2EL112_ZA160BZK-20170313-0214.txt ST8000VN0022-2EL112_ZA160HNB-20170313-0220.txt post-preclear report ZA160BZK.txt post-preclear report ZA160HNB.txt
  20. Sure, this entire discussion is hypothetical... But half the reason I decided (after many years) to DIY a system, rather than buying a Synology or QNAP appliance, is so that I'd have an excuse to do stuff hands-on and learn new things. Thanks for indulging me. Just a minor correction -- according to Noctua specs, that's ~18 dB(A) each without the LNA, and ~13 dB(A) with; since the dB scale is logarithmic, two fans together without the LNA would yield 20.6 dB(A) (see here), and those are the max noise levels, at 1600RPM. Noctuas are apparently pretty amazing -- every review I've read shows them both quieter and cooler than virtually anything else. As I mentioned above, so far I've only done testing outside the case; both with and without the LNA yielded the same results (~48-50C, 900RPM), and the same whether idling in the BIOS or running MEMTEST86 (not sure I understood its display, but I think only 1 core of the 8 is used in the test). I'll wait until I have the system fully assembled and all fans attached to see what temps (and sound levels) I get at various loads -- I'll definitely have questions at that point... I want to achieve baseline idle temps as low as reasonably possible -- that's where the machine will be spending most of its time.
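For anyone curious about that arithmetic, here's a minimal sketch of how independent noise sources add on the logarithmic dB(A) scale. The 17.6 dB(A) per-fan figure is my assumption (the spec rounds to ~18), so plug in whatever the actual datasheet says:

```python
# Illustrative arithmetic only: combining independent (incoherent) noise
# sources on the logarithmic dB(A) scale. Two equal sources add ~3 dB.
import math

def combine_spl(levels_dba):
    """Total sound pressure level, in dB(A), of independent sources."""
    return 10 * math.log10(sum(10 ** (level / 10) for level in levels_dba))

# Assumed per-fan spec of 17.6 dB(A); the post rounds this to ~18 dB(A).
print(f"{combine_spl([17.6, 17.6]):.1f} dB(A)")  # ~20.6 dB(A)
print(f"{combine_spl([18.0, 18.0]):.1f} dB(A)")  # ~21.0 dB(A)
```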
  21. Interesting, thanks... Definitely cheaper than it used to be. The below is strictly hypothetical for now, since I already have the air cooler. However, am always interested in learning new stuff. 1) Neither of the above, AFAICS, fits my specific CPU socket (LGA2011-1 Narrow ILM). Both (the Corsair with an adapter) only fit the LGA2011 Square ILM, which is mechanically different from what I have. I checked their instructions to be sure. 2) That's the minor issue. More significantly, I still don't see how it improves overall cooling capacity for the case, or noise: My existing (free) case exhaust fan is also 120mm, and controlled by the motherboard just like these watercoolers' fan would be. Given those Noctua fans are silent (I can't hear them, at all, from 30cm away at 900RPM), there's no noise advantage. Incidentally, spec-wise the Noctua fans are 18dBA vs. 30dBA for the Corsair and 35 for the Intel. 3) There's no other exhaust area on the case, so unless I drill an additional exhaust grille somewhere else on it for the watercooler's radiator fan, it can only replace the stock case fan, not augment it. That is, to the extent the waterblock moves more heat off the CPU than the Noctua heatsink + 2x 92mm fans (see (5) below), that heat still needs to leave the case; removing it would come at the expense of removing other heat generated inside the case, since the overall case heat-removal capacity is the same. 4) Ergo, the only advantage of watercooling I can see using these solutions is if the Noctua cooler can't deal with the CPU heat (*), and otherwise the case's total heat is within the bounds removable by a 120mm fan. 5) The fact that watercooling has much better cooling potential than air in general isn't relevant; it's far from a given these specific water blocks can remove more heat than the specific (quite large) Noctua heatsink. I note neither one gives any specs in that regard (BTUs removed per minute). Without that, who knows what the actual performance is? 6) The damage if it leaks is serious, with no warning, and the MTBF is a third of the aircooled system's (not trivial given unattended 24/7 operation). If a fan fails, on either a watercooled or aircooled solution, at least the CPU and many other electronic components have thermal-protection shutdown. To really make use of a watercooled solution with my case, if there were a serious heat problem, I'm fairly certain I'd need to mount the radiator & fans outside the case. Anyway, thanks for the ideas! (*) Doubtful IMHO -- this model has been out for 2.5 years, and was specifically designed for Xeon cooling, spec'd for CPUs to 140W TDP, and used by many UNRAID and other home server builds. If there were issues with it, it would have been common knowledge by now.
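To put rough numbers on point (3), here's a back-of-the-envelope sketch of how much heat a single case exhaust fan can carry out. The ~50 CFM airflow and 10C intake-to-exhaust temperature rise are assumed illustrative values, not measurements from my build:

```python
# Rough back-of-the-envelope estimate of the heat (in watts) a case exhaust
# fan can remove. All numeric inputs are illustrative assumptions.
AIR_DENSITY = 1.2              # kg/m^3, roughly sea level at ~20 C
AIR_SPECIFIC_HEAT = 1005       # J/(kg*K)
CFM_TO_M3_PER_S = 0.000471947  # 1 cubic foot per minute in m^3/s

def exhaust_watts(cfm, delta_t_c):
    """Heat carried out of the case for a given airflow and intake-to-exhaust rise."""
    mass_flow = cfm * CFM_TO_M3_PER_S * AIR_DENSITY  # kg of air per second
    return mass_flow * AIR_SPECIFIC_HEAT * delta_t_c

# Assumed: one 120mm fan moving ~50 CFM, exhaust air 10 C warmer than intake.
print(f"{exhaust_watts(50, 10):.0f} W")  # ~285 W, well above a ~115W-TDP CPU plus drives
```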
  22. Not really. Not familiar with watercooling in any detail, but: -- I don't see any inherent reason a watercooled solution would be quieter than an aircooled one. The same heat wattage needs to be dispersed, after all (actually, the watercooled system has extra wattage for pumps, though that's probably pretty low), so the radiator fan has the same potential noise issues as aircooling fans. -- While I haven't run any thermal calculations, I doubt the overall thermal load is large. The CPU's TDP is 115W; while there are multiple disks, given the way UnRAID's array works, only one disk is written to at a time. I won't have any PCIe cards to begin with, and the graphics card I intend to eventually add for an OS X VM is only 15W max. -- The system won't be highly stressed for hours at a time, like an overclocked gaming rig. -- An ATX case should IMO have enough internal volume such that decent fannage will do the job. -- Given how uncommon narrow-ILM LGA2011 is to begin with, I suspect suitable watercooling components for the CPU will be even harder to find than aftermarket air coolers. -- I very much dislike the failure modes of a liquid cooled system... Esp. in combination with a 24/7 server. And there do seem to be a lot of accidents / failures. For a first build (well, the first since the late 1970s)? Prob. not a good idea. -- I doubt it's an accident that watercooling isn't commonly mentioned for large server installations. -- Water cooling has the reputation of being very expensive (anything over ~$100 for the entire cooling system is "very expensive" in the context of this build). -- Given the added system complexity & cost, it doesn't make sense to me to even think of watercooling unless I can't tweak the aircooled system to fix the issues (and I'm pretty sure the only issue I'm likely to have is noise due to the cheap-ish included fans; actual cooling performance should be sufficiently tweakable via the BIOS).
  23. The Z9PA-D8 mainboard was no longer available (at least in Europe) just when I was about to pull the trigger. After some intensive research, I settled on the Asrock Rack EPC602D8A as a replacement; there are pretty much no dual-CPU E5 (LGA2011-1) motherboards in the ATX form factor that are reasonably priced, that I could find. The AsRock is somewhat similar to the single-CPU version of the Asus, but is slightly better for my purposes: 12 native SATA ports rather than 6, and 7 PCIe slots rather than 5. It also has an internal USB3 header, so it supports 4 native USB3 ports rather than 2. It does have IPMI support. What took longest to research is what CPU cooler would work... Both the Asrock & Asus use the narrow version of the Intel ILM LGA2011-1 socket, and in addition have almost no space between the CPU and DIMM sockets. They were designed to use commercial-server-type radiators with no fan and forced-duct cooling, or extremely noisy small-diameter-fan/high-RPM coolers (only available from brands I'd never heard of); almost no common consumer/prosumer coolers will work here. I really wanted a decently quiet solution. After a lot of back and forth with Noctua, SuperMicro (re their SNK-P0050AP4 cooler, which isn't really low-RPM), and Asrock (previously with Asus as well), I settled on the Noctua NH-U9DX i4. Even so, the cooler obstructs access to the two innermost DIMM slots; if I ever have RAM problems and need to replace these two, or even just do a test-via-DIMM-swap, the cooler will need to be removed, thermal paste reapplied etc. BTW, if you end up using the Asus Z9PA-D8 and are interested in using a Noctua cooler, to save you the 3 weeks (!) of correspondence I had with them on fan choice/placement: if you want airflow towards the rear of the case (I/O port panel), the only suitable Noctua cooler is the NH-U9S (assuming no height issue, which there shouldn't be with an ATX case); also, the fan location on one of the coolers would need to be reversed. Let me know if relevant, I'll PM you or post the detailed drawing they sent me. Incidentally, I'm very impressed with Noctua's presales support -- they spent hours obtaining accurate photos of the mainboard, making drawings of various coolers superimposed etc. I only received the final parts of the build a few days ago, so it's not done, but I did get UNRaid to boot with no issues on the very first try, with just the mainboard & power supply, not yet assembled in the case. I'll report back in a few days once it's all assembled. The only issue is that I'm getting warmer idle CPU temps than I'd like (48-50C); however, it could be because only the CPU cooler fans are running, no case fans, so I'll hold off worrying until assembly is complete. Final build list, excluding the array data disks:
-- Case: Sharkoon T9 Value mid-tower ATX <== My first impressions are very favorable. Very well made, and unbelievable it only cost 51 Euros. (*)
-- Hot-swap cages: 2x iStarUSA BPN-DE350SS 5-in-3 trayless. One installed initially, the 2nd will go in once it's needed.
-- Mainboard: Asrock EPC602D8A, ATX
-- CPU: 1x Xeon E5-2665 ("v1")
-- Memory: 64GB, 8x 8GB ECC RDIMMs, Samsung M393B1K70DHO-CKO
-- PSU: EVGA SuperNOVA 750 G2 (750W)
-- Cooling:
---- CPU cooler: 1x Noctua NH-U9DX i4 (using both fans)
---- Case cooling, initially (to be modified as necessary): Exhaust fan: 120mm fan that comes with the case. Intake fans: 2x 120mm fans that come with the case, plus 1x 80mm fan (fixed speed) on the hot-swap cage.
-- Cache drive: 1x 2.5" Samsung EVO 850 500GB SSD
----------
(*) One caveat about this case: All the 5.25" bays have metal tabs between them, so if you want to install a hot-swap cage that doesn't have channels accommodating such tabs, you'll need to flatten the tabs first; I used this G-clamp.
  24. I'll hitch a ride on this thread... I'm in the process of ordering parts for my first UnRAID build, so I also need to make the same decision as the OP. For the cache drive(s): I'll be starting out with one (SSD) drive, but adding a second in a pool later on, so the obvious choice is BTRFS. I'm curious why XFS for the array drives. I recall reading that BTRFS's "native" RAID support has issues (particularly RAID5/RAID6), but UnRAID doesn't use that. Are there stability issues (and if so, which) with the BTRFS version UnRAID uses? Or maybe there aren't as good disk recovery/analysis tools? The BTRFS feature I find particularly attractive is bitrot detection/correction (eventually, if it's not working yet). I expect my array to hold ~15TB to start, and expand continually. I'd rather avoid the hassle of a large-copy project later on just to switch filesystems, and was hoping BTRFS would be reliable enough to start with.
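To illustrate what I mean by bitrot detection: the basic idea is to checksum data at write time and re-verify later, flagging silent changes. BTRFS does this per block inside the filesystem; the file-level sketch below is just my own illustration of the concept, not how BTRFS or UnRAID actually implements it, and the share path in the usage comment is hypothetical:

```python
# Illustrative only: file-level "bitrot detection" by keeping a checksum
# manifest and re-verifying it later. Not how BTRFS works internally --
# BTRFS checksums per block inside the filesystem -- just the concept.
import hashlib
import json
import os

def sha256_of(path, bufsize=1 << 20):
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(bufsize):
            digest.update(chunk)
    return digest.hexdigest()

def build_manifest(root):
    """Map every file under root to its SHA-256 checksum."""
    return {os.path.join(dirpath, name): sha256_of(os.path.join(dirpath, name))
            for dirpath, _, names in os.walk(root) for name in names}

def verify(manifest):
    """Report files whose current checksum no longer matches the manifest."""
    for path, recorded in manifest.items():
        if os.path.exists(path) and sha256_of(path) != recorded:
            print(f"checksum mismatch (bitrot or legitimate change): {path}")

# Hypothetical usage against an UnRAID user share:
#   manifest = build_manifest("/mnt/user/archive")
#   json.dump(manifest, open("manifest.json", "w"))
#   ... months later ...
#   verify(json.load(open("manifest.json")))
```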
  25. Thanks! I was going to start off with a single SSD cache drive anyway... I'll start off with one NVMe device and one of the $20 adapters, and hopefully the cost of the PCIe switch-based ones will go down eventually.