BVD

Everything posted by BVD

  1. Same benefits with 1Gb - just ensure that whatever Intel NIC you get is both genuine and has SR-IOV support per Intel's ARK listing and you'll be fine (there's a quick sanity check sketched at the bottom of this page). They're more expensive, but I typically go straight for new Intel i350 OEM cards these days - too much risk of fakes out there on the used market, no matter how careful you are.
  2. The biggest advantage is performance. I've been able to get north of 1.1M IOPs out of mine, something that'd be nearly impossible with any btrfs setup. The disadvantage though is that if one simply dives into zfs without doing any tuning (or worse, just applies something they read online without understanding the implications), they're likely to end up worse off than they would've with virtually any other filesystem/deployment.
  3. 4 drives in z2 is always going to be awful. As mentioned above, if only using 4 drives, you should mirror them so you don't have the z2 overhead in a place it doesn't make any sense (there's a rough layout sketch at the bottom of this page). Once you get the other 4 drives, then you can redeploy with z2. I'd recommend spending some time doing a bit more research before jumping into zfs though personally - jumping in with both feet is fine, but if unprepared, you'll likely fail to realize the filesystem's full benefits.
  4. Would need to know more about your network for this one - e.g. the ifconfig output for each of the interfaces involved, whether you've got them both configured under the same domain, what routes you've configured, how you're attempting to communicate (tried pinging from one to the other and vice versa, or just using 53/853 for DNS/DoT, etc), and so on - the basics I'd check first are sketched at the bottom of this page. When using the virtio driver, you're saying "this application is part of this host" as far as the network is concerned - so if your host certs are correct and the like, your container's under that umbrella (with caveats of course). When using this guide though, you're saying "this is a separate server which simply communicates across the same physical port", so you essentially need to treat its configuration like that of a brand new host being added to your network.
  5. Your zpool won't be managed within the Unraid system/UI - ZFS isn't yet a supported filesystem for array/cache pools, so the pool is managed entirely outside of the way you manage your Unraid array's storage, i.e. from the CLI (a few of the basics are sketched at the bottom of this page).
  6. For the first year, I didn't even use the Unraid array. It was just 14 disks (eventually grown to 29), all in ZFS. I've been using it since Sun released OpenSolaris way back when, and was just more comfortable with it. Many of us use ZFS, and in a myriad of ways. If you can be a bit more specific about your desired implementation, we can probably answer your questions more explicitly though? If you've never used ZFS before, just make sure you fully understand the implications before throwing all your data on it 👍
  7. I'd have to defer to the plugin creator for the answer to that one.
  8. Beats the snot out of what I was doing, recompiling the kernel with LXC and doing it all command-line. KUDOS!!! I'm still on 6.9.2, so I have to stick with what I've got for now
  9. This is controlled by the "Enhanced Log Viewer" plugin: I'd recommend requesting the author update their plugin for compatibility with 6.10 in the thread above. Keep in mind though, the same author is responsible for Unassigned Devices, Open Files, Tips and Tweaks, Hotplug USB, and several other plugins, so their desire to focus effort on the projects with higher benefit/impact may mean limited time for this one (which I totally get!)
  10. Not to be pedantic, and I do really get where you're coming from (I've posted several times in the thread above outlining justifications for this feature), but changing your server's name, especially on a NAS, is kind of a big deal as it has lots of implications - Windows even makes you reboot, for example. Even changing a port's IP address I'd consider to be a less disruptive task, as you could simply be changing one of any number of ports' addresses, while the hostname impacts em all across the board.
  11. The problem insofar as I'm aware is/was related to automatically created snapshots of docker image data (each individual layer of each image), to the point that the filesystem would start encountering performance issues. Basically, so many filesystems ended up getting created that the system would grow sluggish over time. Not everyone has enough layers/images that they'd encounter this or need to care, but in general, you'd be best suited to have the docker img stored on a zvol anyway imo (there's a short zvol sketch at the bottom of this page). Containers are meant to be disposable, and at a minimum, shouldn't need snapshots of them cluttering up your snapshot list.
  12. It's dead, at least for now - I'd submit for a refund
  13. Seagate, WD, doesn't really matter to me, as long as it's their enterprise line - currently Seagate Exos X16 and WD Gold 16TB. Best $/GB with a 5 year warranty, and right at 1W each in standby. Get enough drives going in any chassis, and I've had poor luck with every single damned manufacturer's consumer lines (at least as far as HDDs go).
  14. I'd guess it's well and truly finished... but I'd also guess that he probably shares my concern that publishing such, while still legal, could greatly impact limetech in a negative manner. I'd only just realized that the whole thing wasn't a completely proprietary blob a few weeks back when I was trying to recompile mdadm to 'behave normally' so I could test out an lvm cache for the unraid array and stop worrying about mover. For those who are truly in need of this, I do hope they find a way to implement it within the OS, but if not, they've always got the option to compile it - if they don't know how, but have a strong enough desire to warrant it, what a great opportunity to learn! ... I could just imagine someone seeing a guide they could copy/paste without ever knowing or understanding the herculean efforts the fellows have put into making this thing, reaping the benefits but without paying for it (in either sweat equity, or to the unraid devs). I'm all about open source, but I'm also pragmatic; most don't contribute back to the projects they use, and limetech's already got a business around this with people's families depending on it for income. I'd hate to be the person whose actions resulted in slowing down development due to layoffs, or worse, them having to close up shop altogether.
  15. You can technically already do this by pulling the customized raid6 module from unraid and re-compiling md to use it - then you're just depending on whatever OS you're running to do the VM/container work.
  16. On Hiatus per @jonp, but it's sounding like it'll make a return!
  17. The limitation of IOMMU grouping is always on the side of the motherboard (I usually hesitate to make statements like this as fact, but I think it's pretty close here). The ability to segment down to individual traces is completely dependent upon the PCB design and BIOS implementation, which is on the MB side. The 'downside' to enabling the override is, at least in theory, stability - what we're doing is 'ignoring' what the motherboard says it should support. In reality, if you enable it and everything 'just works', it'll more than likely continue to 'just work' indefinitely (barring any changes to the MB BIOS). Unfortunately IOMMU grouping and how it all actually works is beyond the scope of this tutorial, but I agree it's a subject that could use clarification (there's a quick way to inspect your own board's grouping sketched at the bottom of this page). A lot of it boils down to the hardware implementation and the option ROM size the MB vendor builds into its design - most consumer boards only have enough space to load the fancy graphical BIOS they come with, where workstation/server boards still tend towards keyboard-driven UIs (leaving more space for other operations).
  18. Whether or not the PCIe ACS override is needed varies wildly from system to system - any time you hit an issue with IOMMU grouping, it's one of the options available, but unfortunately not a one-size-fits-all. Glad you got it figured out!!
  19. I've a theory on that, but it's probably an unpopular one - ZFS has gotten so much more popular in recent years, with a lot of folks diving in head first. Hearing how it protects data, how easy snapshots and replication make backups, they migrate everything over without first learning what it really is and isn't, what it needs, how to maintain it, and what the tradeoffs are. Then when something eventually goes sideways (neglected scrubs, a power outage during a resilver, changing xattr with data already in the pool, setting sync to disabled for better write performance - any number of things, environmental or user-inflicted), the filesystem is resilient enough that it 'still works', so it's expected that anything on it still should as well... Hell, I've been neck deep in storage my entire career, and using ZFS since Sun was still Sun, and I *STILL* find myself having to undo stupid crap I did in haste on occasion. The fact that you partitioned the disks first wouldn't change any functional behavior in the driver (similar to running ZFS on sparse files, the same code/calls are used). Either 'it's fixed', or I'd simply taxed the driver beyond what it was optimized for at the time - at least that's my feeling anyway.
  20. Are you running 6.10? I've not tested it since 6.8.3, so perhaps there were updates to the docker ZFS driver in between that've made a difference, I'm not sure. I'd say it's more likely though that you're just not 'pushing the limits', so any inefficiencies never really bubble to the surface. I will say though, I likely would've been in a similar situation with btrfs. At the time, I was running some ~160 containers on one system (testing for some work stuff in addition to my home environment), and it got to the point where any time one of the containers was restarted, there were significant latency spikes, even though the hardware layer still had loads of overhead available. I tracked it as far as a long-running TXG (but before attempting to commit the TXG) before I realized I not only didn't have time for this, but was way the hell out of my league lol. Something funky going on in the spacemap, but I'd no idea what. To me though, it makes some sense - why expend the resources to create a snapshot of every single layer of every single container every single time for no real benefit, given the containers are disposable (a comparatively higher transactional expense for both CPU and filesystem), when you can instead just give it a single entrypoint and let the lower level IO scheduler handle it with an image or block device? Honestly for most folks it probably doesn't matter though - a single current-gen NVMe device can handle enough IO these days that even with any number of inefficiencies in configuration, they likely wouldn't notice. And if they do, it'll likely be some higher level code inefficiency that crops up before the storage starts to give em any headaches lol. "Back in my day, we had to pool 80 drives, manually align blocks to page sizes, and add a SLOG just to get enough IOPs that the DBAs didn't burn us at the stake! Don't get me wrong, they were still out for blood, and if you saw them at the coffee machine when you went for your refill, you still turned around and came back later... But at least they just wanted you harmed, and not dead!" 🤣
  21. You've got a couple options:
      1. Create a zvol instead, format it, and keep docker writing there (which is now a block storage device)
      2. Your plan - creating a fileset and using a raw disk image
      In either case, a couple things you can do (spelled out as commands at the bottom of this page):
      * redundant_metadata=most - containers are throwaway, no real reason to have doubly redundant metadata when you can just pull the container back down anyway; wasted resources
      * primarycache=none (or metadata at most) - containers might be (probably imperceptibly, given you're on NVMe and not SATA) slower to initially start, but once they do, the OS is already handling memory caching anyway, leaving ZFS often duplicating efforts (and using memory by doing so)
      * sync=standard - again, containers, who cares if it needs to be re-pulled
      * atime=off - (I do this for everything)
      I've got a whole whack of performance tuning notes lying around for one thing or another - if the forum ever gets markdown support, I'll post em up, maybe they can be helpful to someone else lol
  22. I think you probably meant to post that for the ZFS Master plugin? This one just adds the ZFS driver. Maybe there's a hover-over help menu or something? Not sure. To the question at hand though, it looks like you're using the docker ZFS driver (using folders instead of .img) - personally, I'd recommend against that. The data within these directories is just the docker layers that get rebuilt each time you update a container. Doing it this way just makes managing ZFS a mess, as you end up with all sorts of unnecessary junk volumes and/or snapshots listed out every time you do a zfs list / zfs get / etc (there's a one-liner at the bottom of this page to see just how many). Plus, it creates so danged many volumes that, once you get to a decent number of containers (or if you've automated snapshots created for them), filesystem latencies can get pretty stupid.
  23. After finding that my go file was getting out of hand, I ended up setting up a script that kicks off at first array start instead, which walks through symlinking all the various stuff I had in there - just so I could keep the go file limited to things that actually have to be run at init (or at least 'things that make the most sense at init' lol). At this point, it links my bash and tmux profiles, iTerm shell integration, some cron, several other things - this way at least the UI is relatively tidy lol (there's a rough sketch of it at the bottom of this page). I'd found some areas where just having everything on the NVMe zpool and linking it afterwards seemed to help with initial startup times and the like. Maybe if User Scripts ever gets the ability to pull things in folders I'll change it up, but I spend most of my time in the CLI anyway, so I guess we'll just see what happens 🤷‍♂️
  24. Honestly that'd be more than I'd want in the container, at least myself - you can manually do this a few ways, but if it's important to you, there are definitely a few options out there you can investigate and work through! 👍
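
A quick way to sanity-check the SR-IOV point from the NIC post above - the PCI address (03:00.0) and interface name (eth0) are placeholders for whatever your card shows up as:

    # Does the card advertise the SR-IOV capability at all?
    lspci -vvv -s 03:00.0 | grep -i "Single Root I/O Virtualization"

    # How many virtual functions the driver supports, and carving a few out
    cat /sys/class/net/eth0/device/sriov_totalvfs
    echo 4 > /sys/class/net/eth0/device/sriov_numvfs    # echo 0 to tear them back down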
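
For the 4-vs-8 drive layout question above, roughly what I mean - the pool name and device names are placeholders, and in real life you'd want /dev/disk/by-id paths:

    # 4 drives: two striped 2-way mirrors - no z2 overhead where it makes no sense
    zpool create tank mirror sda sdb mirror sdc sdd

    # once you're at 8 drives, redeploy as a single raidz2 vdev
    zpool create tank raidz2 sda sdb sdc sdd sde sdf sdg sdh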
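
For the networking question above, the basics I'd check first - every address below is made up, substitute your own:

    # On both the host and the container side: what address, what routes?
    ip addr show
    ip route show

    # Basic reachability in each direction
    ping -c 3 192.168.1.1      # the gateway
    ping -c 3 192.168.1.53     # the DNS container

    # Then the actual service, not just ICMP (plain DNS on 53, and is 853 even open for DoT?)
    dig @192.168.1.53 example.com
    nc -zv 192.168.1.53 853    # assuming a netcat with -z is available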
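
Since the pool lives outside the UI, these are the handful of commands you'll actually live in - pool and dataset names are placeholders:

    zpool status -v tank                       # health, errors, resilver progress
    zpool list                                 # capacity per pool
    zfs list -o name,used,avail,mountpoint     # datasets and where they're mounted
    zpool scrub tank                           # kick off a scrub - put this on a schedule
    zfs snapshot tank/data@$(date +%Y%m%d)     # quick point-in-time snapshot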
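
What I mean by putting the docker image data on a zvol - the size, pool name, and mountpoint are just examples:

    # Carve out a block device for docker's image/layer data (-s makes it sparse)
    zfs create -s -V 100G tank/docker
    mkfs.xfs /dev/zvol/tank/docker
    mount /dev/zvol/tank/docker /mnt/docker    # then point docker's data at this mount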
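
To see how your particular board actually groups devices, the standard sysfs walk works fine on Unraid - anything sharing a group has to be passed through (or stubbed) together:

    for g in /sys/kernel/iommu_groups/*; do
        echo "IOMMU group ${g##*/}:"
        for d in "$g"/devices/*; do
            echo "    $(lspci -nns "${d##*/}")"
        done
    done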
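
The fileset properties from the docker post above, spelled out - assuming the fileset is tank/docker, and note the real property name is redundant_metadata (with the underscore):

    zfs set redundant_metadata=most tank/docker
    zfs set primarycache=metadata tank/docker    # or none, if you'd rather leave caching to the OS
    zfs set sync=standard tank/docker
    zfs set atime=off tank/docker

    zfs get redundant_metadata,primarycache,sync,atime tank/docker    # sanity check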
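
If you want to see just how much clutter the docker ZFS storage driver has piled up, something like this works - assuming docker's data-root sits under tank/docker-root:

    zfs list -r -t filesystem,snapshot tank/docker-root | wc -l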
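
Roughly what that first-array-start script looks like - the paths and filenames are examples, not my actual layout:

    #!/bin/bash
    # Kicked off once at first array start (User Scripts plugin) so the go file
    # stays limited to the stuff that genuinely has to run at init.
    SRC=/mnt/nvme/system/dotfiles

    for f in .bash_profile .tmux.conf .iterm2_shell_integration.bash; do
        ln -sf "$SRC/$f" "/root/$f"
    done

    # same idea for anything else that lives on the pool but gets referenced from /root
    ln -sf /mnt/nvme/system/scripts /root/scripts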