BVD

Everything posted by BVD

  1. This one's been gone through a number of times on the forum (I think in this thread actually even) - check your docker settings and reconfirm, and you'll get it sorted 👍
  2. That can be done, but is sub-optimal - since the file wasn't created on the zfs filesystem, the zfs-specific metadata (xattr, etc) isn't applied; the file keeps whatever it had at creation, which may not match the zfs filesystem's settings/config. Worst case you're just leaving performance on the table though (unless you start accessing that image over the network), so not a huge deal.
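     If you want to sanity-check what the dataset will actually do with new files, something like this works (pool/dataset/file names here are just examples):
        # Show the properties that affect how new data gets written to this dataset
        zfs get xattr,atime,compression,recordsize tank/vm-images
        # Copying the image into place (rather than reusing the original file in situ)
        # recreates it under the dataset's current settings
        cp /mnt/old-disks/vm.img /mnt/tank/vm-images/vm.img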
  3. XFS on ZFS zvol should be fine 👍
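     For reference, a minimal sketch of that layout (pool name, zvol name, and size are just placeholders):
        # Create a 100G zvol, put XFS on it, and mount it
        zfs create -V 100G tank/xfs-vol
        mkfs.xfs /dev/zvol/tank/xfs-vol
        mkdir -p /mnt/xfs-vol
        mount /dev/zvol/tank/xfs-vol /mnt/xfs-vol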
  4. Also, why would we want btrfs on top of zfs?
  5. I updated with a response in my thread (again, apologies for the wait!), but figured I'd check in here as well.
     The biggest 'boon' to choosing Epyc over TR Pro is the wealth of motherboard options out there (imo) - the Epyc market is so much bigger that you'll never lack for MB options, available in every shape and size. Lots of niche needs can be filled that way - need a mini-ITX motherboard to fit in a tiny box? Epyc's got 'em. Dual processor 16 DIMM behemoth? Epyc's the way. Need a specific NIC chipset onboard? You'll probably find one on an Epyc MB from some vendor somewhere. However, I feel that's the only real benefit to it, at least personally...
     If any of the sWRX80 MBs suit your needs, I feel like TR Pro is the way to go. You get onboard audio (a review of the audio specifically can be found here, at least for all wrx80 boards minus the newer MSI unit), usually better IO / peripheral connectivity, and generally a better likelihood of the board's individual components/chipsets having been tested with 'consumer' applications/hardware. Since they're workstation boards, they're more likely to be tested with components we (as home users) might consider 'normal use' (consumer graphics cards, etc). Essentially you get the best of both worlds - ECC memory, IPMI, loads of memory channels and PCIe lanes (from the server side), along with all the things one would expect when building a computer for themselves these days (onboard audio, a bunch of USB ports, etc).
     Some other points:
     - You might check out some other cooler options - since you're in the 4u rackmount space, Dynatron or Supermicro's units are worth a look.
     - There's definitely a price gap between the 16GB and 32GB DIMMs, at least on the second-hand market. I've no problem buying used memory personally, and have seen 16GB sticks go for as little as $2.80-3/GB on various swaps and the like, while 32GB DIMMs seem to typically stay closer to the $5-6/GB range. With 8 channels, you can get your 128GB with 16GB DIMMs, and maybe save some $$$ if you're patient.
     - The 'TDP' values listed for TR Pro are all just nonsense imo - I don't think they ever tested them individually, instead opting for "well, this socket supports 280w, so everything's 280w". Consider the listed TDP the 'worst case scenario'.
     - If you're looking to make it through the next 5+ years, I'd ditch the Norco at some point. When they work, they seem to be great, but as the company's dead (along with all the nightmare scenarios I've come across from other users), I'd replace it before it dies; sell it off, and use the recouped $ to help fund a different chassis from a vendor that's still operational. I obviously tend toward Supermicro, as all their stuff is practically lego-like... I still have the first chassis I ever bought from them over 10 years ago - I just upgraded the backplane from what it came with [3Gb/s SATA] to a 12Gb/s SAS unit, and it's still happily chugging along. But Chenbro, Asus, and several others are options as well.
     Just my .02 - best of luck with the build, and keep us posted if you would!
  6. Wow, I'm super delayed in responding, sorry about that! Probably no longer useful, but figured I'd respond anyway for posterity: Idle power consumption runs around ~200w, which is pretty typical for me - that's with a couple disks spun up for folks watching plex (direct streams / no transcoding). The highest I've ever seen it pull was just over 670w - that was during a parity check while playing some games on the 2070 (and a whole bunch of other crap). The TR Pro boards only 'support' ECC insofar as I'm aware, so right now it's 128GB of registered ECC memory. That 763w though... that seems awfully high? That's your max power consumption, I assume? Promise I'll be a little more prompt on the next response lol
  7. I was working on typing up a follow-up to the sr-iov thread I did a while back, and got to wondering about the value of it (e.g. how many people here are using a given NIC type). Mine are all Intel or Chelsio, but I see a lot of broadcom/aquantia/mellanox mentioned in the forum as well, and wondered whether one on broadcom (or even chelsio) would be helpful... For example:
     - Intel
     - Chelsio
     - Mellanox
     - Broadcom
     - Aquantia
     - Realtek
     - Marvell
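     If you're not sure what's actually in your box, something like the below will tell you (the interface name is just an example):
        # List ethernet controllers and their vendors
        lspci | grep -i ethernet
        # Show which driver/firmware a given interface is using
        ethtool -i eth0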
  8. Same benefits with 1Gb - just ensure that whatever intel nic you get is both genuine and has sr-iov support per intel's ark page listing and you'll be fine. They're more expensive, but I typically go straight for new i350 intel oem cards these days. Too much risk of fakes out there on the used market, no matter how careful you are.
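     Once the card's installed, you can confirm SR-IOV is actually exposed - the PCI address and interface name below are examples, adjust for your system:
        # Check for the SR-IOV capability on the NIC
        lspci -s 03:00.0 -vvv | grep -i sr-iov
        # How many virtual functions the driver supports
        cat /sys/class/net/eth0/device/sriov_totalvfs
        # Carve out a couple of VFs
        echo 2 > /sys/class/net/eth0/device/sriov_numvfs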
  9. The biggest advantage is performance. I've been able to get north of 1.1M IOPS out of mine, something that'd be nearly impossible with any btrfs setup. The disadvantage though is that if one simply dives into zfs without doing any tuning (or worse, just applies something they read online without understanding the implications), they're likely to end up worse off than they would've with virtually any other filesystem/deployment.
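     To give a rough idea of what 'tuning' means here - these values are examples only, and the right ones depend entirely on your workload and datasets:
        # Match recordsize to the workload (small for databases, large for media)
        zfs set recordsize=1M tank/media
        zfs set recordsize=16K tank/databases
        # Low-risk settings worth checking
        zfs set compression=lz4 tank/media
        zfs set atime=off tank/media
        # Always confirm what's actually in effect
        zfs get recordsize,compression,atime tank/media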
  10. 4 drives in z2 is always going to be awful. As mentioned above, if only using 4 drives, you should mirror them so you don't have the z2 overhead in a place it doesn't make any sense. Once you get the other 4 drives, then you can redeploy with z2. I'd recommend spending some time doing a bit more research before jumping into zfs though personally - jumping in with both feet is fine, but if unprepared, you'll likely fail to realize the filesystem's full benefits.
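     For illustration (pool/device names are placeholders - use /dev/disk/by-id paths on a real system):
        # 4 disks today: two mirrored pairs, striped
        zpool create tank mirror sda sdb mirror sdc sdd
        # 8 disks later: rebuild the pool as a single raidz2 vdev
        zpool create tank raidz2 sda sdb sdc sdd sde sdf sdg sdh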
  11. Would need to know more about your network for this one - e.g. the ifconfig outputs for each of the interfaces involved, whether you've got them both configured under the same domain, what routes you've configured, how you're attempting to communicate (tried pinging from one to the other and vice versa, or just using 53/853 for dns/DoT, etc), and so on. When using the virtio driver, you're saying "this application is part of this host" as far as the network is concerned - so if your host certs are correct and the like, your container's under that umbrella (with caveats of course). When using this guide though, you're saying "this is a separate server which simply communicates across the same physical port", so you essentially need to treat its configuration like that of a brand new host being added to your network.
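     For reference, this is the kind of output that'd help (addresses below are placeholders):
        # On the host and inside the container/VM respectively
        ip addr show
        ip route
        # Reachability in both directions
        ping -c 3 192.168.1.10
        # And the actual service, if ping is fine
        dig @192.168.1.10 -p 53 example.com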
  12. Your zpool won't be managed within the unraid system/UI - it's not yet a filesystem that's supported for array/cache pools, and will be managed entirely outside of the way you manage your unraid array's storage (via the cli).
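     Day-to-day that just means the standard zfs/zpool tooling from the terminal, e.g. (pool name is an example):
        # Health and layout
        zpool status tank
        # Usage per dataset
        zfs list -o name,used,available,mountpoint
        # Scrub on whatever schedule suits you
        zpool scrub tank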
  13. For the first year, I didn't even use the unraid array. It was just 14 disks (eventually grown to 29), all in zfs. I've been using it since sun released opensolaris way back when, and was just more comfortable with it. Many of us use zfs, and in a myriad of ways. If you can be a bit more specific about your desired implementation, we can probably answer your questions more explicitly though? If you've never used zfs before, just make sure you fully understand the implications before throwing all your data on it 👍
  14. I'd have to defer to the plugin creator for the answer to that one.
  15. Beats the snot out of what I was doing, recompiling the kernel with LXC and doing it all command-line. KUDOS!!! I'm still on 6.9.2, so I have to stick with what I've got for now
  16. This is controlled by the "Enhanced Log Viewer" plugin. I'd recommend requesting the author update their plugin for compatibility with 6.10 in the thread above - keep in mind though, the same author is responsible for the Unassigned Devices, Open Files, Tips and Tweaks, Hotplug USB, and several other plugins, so their desire to focus efforts on the projects with higher benefit/impact may mean limited time for this one (which I totally get!)
  17. Not to be pedantic, and I do really get where you're coming from (I've posted several times in the thread above outlining justifications for this feature), but changing your server's name, especially on a NAS, is kind of a big deal as it has lots of implications - windows even makes you reboot, for example. Even changing a port's IP address I'd consider to be a less disruptive task, as you could simply be changing one of any number of ports' addresses, while the hostname impacts them all across the board.
  18. The problem insofar as I'm aware is/was related to automatically created snapshots of docker image data (each individual layer of each image), to the point that the filesystem would start encountering performance issues. Basically so many filesystems ended up getting created that the system would grow sluggish over time. Not everyone has enough layers/images that they'd encounter this or need to care, but in general, you'd be best suited to have the docker img stored on a zvol anyway imo. Containers are meant to be disposable, and at a minimum, shouldn't have snapshots of them cluttering up your snapshot list.
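     One way to lay that out (sizes/names are examples only) - a sparse zvol with its own filesystem, so docker's layers never turn into zfs datasets/snapshots:
        # Sparse 30G zvol dedicated to docker
        zfs create -s -V 30G tank/docker
        # Format it and mount it wherever your docker image/directory path points
        mkfs.xfs /dev/zvol/tank/docker
        mkdir -p /mnt/docker
        mount /dev/zvol/tank/docker /mnt/docker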
  19. It's dead, at least for now - I'd submit for a refund
  20. Seagate, WD, doesn't really matter to me, as long as it's their enterprise line - currently Seagate Exos X16 and WD Gold 16TB. Best $/GB with a 5 year warranty, and right at 1w each in standby. Get enough drives going in any chassis and it stops mattering whose name is on them - I've had poor luck with every single manufacturer's consumer lines (at least as far as HDDs go).
  21. I'd guess it's well and truly finished... but I'd also guess that he probably shares my concern that publishing such, while still legal, could greatly impact limetech in a negative manner. I'd only just realized a few weeks back that the whole thing wasn't a completely proprietary blob, when I was trying to recompile mdadm to 'behave normally' so I could test out an lvm cache for the unraid array and stop worrying about mover. For those who are truly in need of this, I do hope they find a way to implement it within the OS, but if not, they've always got the option to compile it - if they don't know how, but have a strong enough desire to warrant it, what a great opportunity to learn! ... I could just imagine someone seeing a guide they could copy/paste without ever knowing or understanding the herculean efforts the fellows have put into making this thing, reaping the benefits without paying for it (in either sweat equity, or to the unraid devs). I'm all about open source, but I'm also pragmatic; most don't contribute back to the projects they use, and limetech's already got a business around this with people's families depending on it for income. I'd hate to be the person whose actions resulted in slowing down development due to layoffs, or worse, them having to close up shop altogether.
  22. You can technically already do this by pulling the customized raid6 module from unraid and re-compiling md to use it - then you're just depending on whatever OS you're running to do the VM/container work.
  23. On Hiatus per @jonp, but it's sounding like it'll make a return!
  24. The limitation of IOMMU grouping is always on the side of the motherboard (I usually hesitate to make statements like this as fact, but I think it's pretty close here). The ability to segment down to individual traces is completely dependent upon the PCB design and BIOS implementation, which is on the MB side. The 'downside' to enabling the override is, at least in theory, stability - what we're doing is 'ignoring' what the motherboard says it should support. In reality, if you enable it and everything 'just works', it'll more than likely continue to 'just work' indefinitely (barring any changes to the MB BIOS). Unfortunately IOMMU grouping and how it all actually works is beyond the scope of this tutorial, but I agree it's a subject that could use clarification. A lot of it boils down to the hardware implementation and the option ROM space the MB vendor builds into its design - most consumer boards only have enough space to load the fancy graphical BIOS they ship with, whereas workstation/server boards still tend toward keyboard-driven UIs (leaving more space for other functionality).
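     If you want to see exactly how your particular board carves things up, the standard loop over sysfs lists every device by IOMMU group:
        #!/bin/bash
        # Print each IOMMU group and the devices it contains
        for g in /sys/kernel/iommu_groups/*; do
          echo "IOMMU group ${g##*/}:"
          for d in "$g"/devices/*; do
            echo -e "\t$(lspci -nns "${d##*/}")"
          done
        done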