andacey

Members
  • Posts: 17

  1. Hi, I run an OpenVPN client container. On previous versions of Unraid I was able to use the --network='container:[name]' syntax to have any other containers connect to the internet through the VPN. I achieved this by selecting None in the network dropdown and then adding my custom network argument in the extra parameters, and it worked great. What seems to have changed is that selecting None now adds an explicit --network='none' argument, which conflicts with my extra parameters. I can work around it by copying the failed command and running it again from the command line with the --network='none' argument removed, but on the next auto-update those containers become orphaned and I have to set them up again. Is there any way to achieve this via the UI now?
     Update, solved: I found another thread which referenced using docker network create to create a "network" named with the parameter you want to use. So in my case: docker network create container:openvpn. After doing that, I was able to add the containers and select that option as if it were another network. Works great and now, hopefully, they'll persist across auto-updates. (A command sketch follows after this list.)
  2. Haven't run a full parity check since I put the lid on, but preclears are working great, with spikes only into the low 30s. The M.2 card also dropped significantly; it's idling in the mid-20s now instead of the low 30s, so at least a 5-degree drop in idle temps. I figured the lack of airflow was the issue: with the lid off, the big 140mm fan on the back was pulling most of its air from the open top, but with it closed down it's definitely pulling air away from the M.2 as it should. Sadly, better visibility into my drives is also highlighting the state of my old NAS drives. I knew I had some bad sectors and wasn't immediately freaking out over that, but I'm seeing pending sector errors on 2 of the drives (see the SMART sketch after this list), not counting the one that's not working properly at all yet (haven't had time to check the connections). The old 3TB Reds are right around the end of their warranty too, of course, so I may or may not be able to RMA them.
  3. Well, I have one possible issue. I finished copying the data over and installed the remaining drives. Everything showed up fine, but preclear aborted within about 30s for one of the drives. The other 2 are churning away fine. I could tell something was off because that drive was showing only 80MB/s, dropping down to 40MB/s before it aborted. The drive was working fine in my old NAS, including during the final copy with 1 drive removed (I needed it for disk space on Unraid), so I'm pretty confident the drive itself is fine. My guess is that it's a bad SATA connection: I'm using some really thin Seasonic SATA cables and I had to bend them pretty aggressively, so that's my first guess as to where the problem is. (A quick read-speed check is sketched after this list.)
  4. Any word on when we can expect the kernel to move to 4.15 or 4.16? Not wanting to complain, but I have a new build with a Coffee Lake CPU and I'd love to get official support for the graphics.
  5. Good tips. I knew that 47C wasn't too much of a concern, but it did get flagged by the default temperature warnings. My only concern is that this was with the case open, so I'm going to keep an eye on it once I close things up. The other M.2, on the PCI-e adapter, was still running at a nice 21C, which definitely highlighted that that area of the board isn't getting any airflow at the moment.
  6. So far the build is going quite well. I had some delays on the parts, but once everything showed up it wasn't too bad getting the initial parts in and booted. I've copied over nearly the entire contents of my old NAS, filling my 8TB drive in the process. I've now pulled one of the 3TB drives from the NAS and am waiting for it to preclear so that I can add it to the array and complete the copy. Yes, it's a little bit sketchy running the NAS with no redundancy, but nothing irreplaceable is left on it if I do suffer a drive failure before I get the last of it copied over. Once that's done I'll be able to properly install those 4 drives and get myself up to a very nice 20TB array with another 500GB of cache. I know I'm going to have to redo some of my cable management once I install the drives, but with the modular PSU it's pretty easy to unplug the cables from the PSU and reroute them in a more sensible layout. The x4 PCI-e to M.2 card worked flawlessly and showed up fine in the BIOS. Being close to the big vent on the side, it's also staying nice and cool, although I am still running with the case open so I don't have any final temperature data yet. The M.2 on the motherboard is not quite so lucky; I'm going to see how it fares with the case closed, in case that helps some air flow closer to it rather than just rising out of the open top. At one point, after some heavy copying and a mover operation, it got up to 47C, which is a bit toasty given that ambient was only around 21C at the time (a temperature-monitoring sketch follows after this list). If it still spikes in temperature once everything is assembled then I'll probably look into an M.2 heatsink to keep it a bit cooler.
  7. I went for the NH-U12S to get a little bit more height between the motherboard and the heatsink.
  8. Thanks for the offer. Parts should be arriving on Tuesday (sadly, I missed the cutoff to have them delivered for the bank holiday weekend). I ended up switching to the H370 version of the board since I'm not going to be overclocking, and it was slightly cheaper while offering more USB 3.1 ports if I need them. It looks like the layout of the board is nearly identical. Looking at your photos, it looks like you're going for a similar plan to me: I'm going for a Noctua tower cooler and replacing the case fans with Noctua PWM fans. I really like the idea of labelling the drives on the brackets; I'll have to steal that idea. I didn't order the exact model of M.2 to PCI-e adapter I'd linked earlier as it was out of stock; I'm getting a Silverstone version instead, which I'm 90% certain is the identical board but with Silverstone branding. That will let me run 2 500GB 960 EVO M.2 cards for the cache. For the PSU, I was looking at Corsair but then started looking at the size of the PSU and for smaller options. I've opted for a Seasonic 850W Focus Platinum. It's overkill for the power, I know, but I wanted to get a Platinum rated PSU if possible. It's also smaller than the Corsair PSU; it's not much, but I figure every millimeter will count in this case, so short of going SFX it seemed like a good idea.
  9. Sweet, I'd contacted Fractal Design about mounting larger HDDs in the Node 304 and they said they've recently released new adapters and sent me this link: https://www.fractal-design-shop.de/navi.php?a=1116. I've also seen some info suggesting that newer brackets were released for the 304, possibly in newer production models, so I might be good either way, but at least I won't be absolutely stuck if I order the 304 and find out that wasn't accurate. I'm now just down to finalising some details on which PSU to use and then I should be pulling the trigger.
  10. Ha, I know this is going to sound like I'm flip-flopping, but that's because I'm nearly ready to buy and I'm just doing last-minute sanity checks. I did a quick check on the dimensions of the Node 804 and it's way bigger than my current NAS. I knew that I'd have to go bigger, especially to hold more drives, but this was a massive increase in size that likely wouldn't fit where I want to put the NAS. The Node 304 is still bigger but much closer in size and should be fine. That led me to look at board options again, and I realized that the Asrock Z370M-ITX/ac is nearly perfect except for lacking a 2nd M.2 slot. I'd done a bit of searching before, but I just stumbled across this adapter, https://www.scan.co.uk/products/lycom-dt-120-m2-ngff-ssd-host-adapter-card-pcie-30-x4-22x80-22x60-22x42mm-for-pc-mac-linux?v=c, which seems perfectly suited, and really cheap. That's a much better option than going for a more expensive mITX board to get the 2 M.2 slots and then being left having to use the PCIe slot for more SATA ports. The only remaining thing is to double-check whether the WD Red 8TB drives I'm planning on buying can be mounted in the Node 304.
  11. Ooh, that's a thing I could have easily overlooked, thanks! Honestly, mITX is really tempting, and the Node 304 is probably the perfect case for what I want to do in that form-factor, but the difficulty of finding a mITX board that meets my needs is putting me off. I could get exactly what I want if I were willing to sink in the costs for an x299 board, but then I'm spending a lot more money on the hardware. Alternatively, if I stay with Coffee Lake there's an Asus ROG mITX board, but it's about twice the cost of the Asrock mATX board I'm looking at, plus I'd still need to add an expansion card to get enough SATA ports. It seems like my options are mITX and spending way more money one way or another (possibly also adding extra power demands), or going mATX with the only downside being that the Node 804 is much larger than I need.
  12. The CS381 does look tempting. I'm trying to go as small as possible, and that's where mITX has some appeal. I doubt I'm going to go above 6 3.5" drives, and if I use M.2 for the 2 SSDs then I can get a pretty small case, but as I said, the combination of 6 SATA and 2 M.2 in mITX seems to be impossible to find. I realized that the better option might be to go for a board with 2 M.2 slots and then put a SATA expansion card in the slot. I think I'd discounted that option earlier because it would mean buying that expansion card on day 1, since I know I'm going to start with 5 drives for sure (the 4 in my existing NAS plus 1 new 8TB parity drive). I was kind of hoping to avoid buying a 2nd 8TB right away in order to spread out the costs a bit more, and that's where hot-swap bays would be really useful; with these tight cases I'd really like to at least have the bays all wired up so that adding a new drive is a simple matter of adding the drive, not disassembling the whole case. If I was willing to go all-in right at the beginning then the Node 304 could be ideal. It's nice and small and doesn't hold anything more than what I need for my requirements. I realize I'm deliberately sacrificing later expansion, but with 8TB drives available now I really don't see the need to go above 6 drives any time soon. I'm running fine with 4 3TB Reds at the moment, other than sometimes filling them up and having to make some decisions on what gets cleared away. If I add the 2 8TB drives to that mix then the extra 12TB of storage that nets me is going to last a very long time, and I'd have the option to later swap out the 3TB drives as needed.
  13. So, another case of "tell me if I'm crazy". I'm debating between the Fractal Design Node 804 and the Silverstone DS380. The Node lets me run mATX but is a lot bigger. I know about the airflow issues with the DS380, but assuming I use a skirt behind the drives and upgrade the case fans, it seems like a really nice small footprint. Now here's the downside: if I go mITX then I can't for the life of me find a 300-series chipset mITX board with 6 SATA ports and 2 M.2 slots. But then I realized that there is still an unused x16 slot, since I'm only planning on using the onboard graphics. So are any of the PCI-e cards that add an M.2 slot and a couple of SATA ports worth looking at? I'd really like to run 2 M.2 drives for my cache so that I have some data protection there, and if I could get up to 8 SATA ports then at least I'm good if I ever do want to max out the drives in the DS380.
  14. I may have just answered my own question about Coffee Lake. I realized I was misreading some motherboard specs that said they'd disable a SATA port if the M.2 port was in use; on some boards that's only an issue when using a SATA M.2 drive. I did some digging and found the Asrock Z370M Pro4, which has 6 SATA ports and 2 x4 M.2 slots. I'm still reading through the manual now, but am I missing anything? That seems like it'd let me run 2 M.2 SSDs for my cache pool and still connect the 6 3.5" drives I'm planning, with no expansion card needed. Plus, I'd have 12 threads to work with off the i7-8700. Aside from losing ECC, is there any other downside I'm missing here? Obviously, the 2 M.2 drives would be more expensive, but I'm willing to pay the difference for the massive increase in performance.
  15. Guessing it's not mATX then? I've not found any mATX Ryzen boards that can do that many SATA ports. Given that I want graphics I can pass to the VM that's running PMP (probably just the embedded image), it sounds like Intel may be the better solution for the mATX form-factor. If I go for the E3-1245 v6 then I can easily get the 8 ports I want on the board; it's a bit more expensive a build, but I also gain ECC support. Coffee Lake gives me more cores and a cheaper motherboard, but then I lose ECC support and I'd need an expansion card to get the SATA ports I need. Unless there's a mATX board with 2 M.2 slots and 6 SATA ports that don't get disabled by the M.2 slots; then I'd have a great SSD setup and could still run the 6 3.5" disks I'm planning as a starting point.
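
Command sketch for post 1 (the VPN routing workaround). A minimal sketch of the trick described above, assuming the VPN container is literally named openvpn; "someapp" and "someimage" are placeholder names for whatever container you want to route through it:

    # Create a pseudo-network whose name is exactly the --network value we want.
    # Unraid's UI then lists "container:openvpn" as a selectable network type.
    docker network create container:openvpn

    # The equivalent of what the UI generates once that option is selected:
    docker run -d --name someapp --network=container:openvpn someimage

Because the network choice now lives in the container's template rather than in the extra parameters, it should, as noted above, persist across auto-updates.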
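SMART sketch for post 2 (the pending sector errors). A hedged example of checking the relevant attributes with smartctl, assuming the drive in question is /dev/sdb (the device node is a placeholder; adjust to match your system):

    # The raw values of Current_Pending_Sector (197) and Offline_Uncorrectable (198)
    # are the ones to watch; Reallocated_Sector_Ct (5) shows sectors already remapped.
    smartctl -A /dev/sdb | grep -Ei 'pending|uncorrect|reallocated'

    # A long self-test reads every sector and logs any failures, which is
    # useful evidence for an RMA claim:
    smartctl -t long /dev/sdb
    smartctl -l selftest /dev/sdb   # view the result once the test completes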
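Read-speed sketch for post 3 (the drive that aborted preclear). Before reseating the thin SATA cables, a quick check can hint at whether the problem is the drive or the link; this assumes hdparm is available and that the suspect drive is /dev/sdc (placeholder):

    # Buffered sequential read test; a healthy 3TB Red should sustain well
    # over 100 MB/s on the outer tracks, so repeated runs at 40-80 MB/s are suspect.
    hdparm -t /dev/sdc

    # A flaky cable often shows up as UDMA CRC errors or a downshifted link
    # speed (e.g. 3.0 Gbps instead of 6.0 Gbps):
    smartctl -A /dev/sdc | grep -i crc
    dmesg | grep -i 'sata link'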
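Temperature sketch for post 6 (the toasty motherboard M.2). NVMe drives report their own composite temperature through SMART; a sketch assuming the device is /dev/nvme0 (placeholder):

    # One-off reading of the drive's reported temperature:
    smartctl -a /dev/nvme0 | grep -i temperature

    # Poll every 5 seconds during a heavy copy or mover run to catch spikes:
    watch -n 5 "smartctl -a /dev/nvme0 | grep -i temperature"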