BVD

  1. Your zpool won't be managed within the unraid system/UI - zfs isn't yet a filesystem that's supported for array/cache pools, so it'll be managed entirely outside of the way you manage your unraid array's storage, i.e. from the CLI (see the sketch below)
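     For anyone new to it, day-to-day management ends up looking something like this from the terminal (pool name, layout, and devices below are placeholders, not anything unraid-specific):

     ```bash
     # Create a mirrored pool from two whole disks (example devices)
     zpool create tank mirror /dev/sdb /dev/sdc

     # Create a dataset with its own mountpoint for shares
     zfs create -o mountpoint=/mnt/tank/media tank/media

     # Check health/capacity, and scrub periodically for silent corruption
     zpool status tank
     zpool list tank
     zpool scrub tank
     ```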
  2. For the first year, I didn't even use the unraid array - it was just 14 disks (eventually grown to 29), all in zfs. I've been using it since Sun released OpenSolaris way back when, and was just more comfortable with it. Many of us use zfs, and in a myriad of ways. If you can be a bit more specific about your desired implementation, we can probably answer your questions more explicitly though? If you've never used zfs before, just make sure you fully understand the implications before throwing all your data on it 👍
  3. I'd have to defer to the plugin creator for the answer to that one.
  4. Beats the snot out of what I was doing, recompiling the kernel with LXC and doing it all from the command line. KUDOS!!! I'm still on 6.9.2, so I have to stick with what I've got for now.
  5. This is controlled by the "Enhanced Log Viewer" plugin - I'd recommend requesting that the author update it for compatibility with 6.10 in the thread above. Keep in mind though, the same author is responsible for the Unassigned Devices, Open Files, Tips and Tweaks, Hotplug USB, and several other plugins, so their desire to focus efforts on the projects with higher benefit/impact may mean limited time for this one (which I totally get!)
  6. Not to be pedantic, and I do really get where you're coming from (I've posted several times in the thread above outlining justifications for this feature), but changing your server's name, especially on a NAS, is kind of a big deal, as it has lots of implications - Windows even makes you reboot, for example. Even changing a port's IP address I'd consider a less disruptive task, as you could simply be changing one of any number of ports' addresses, while the hostname impacts 'em all across the board.
  7. The problem insofar as I'm aware is/was related to automatically created snapshots of docker image data (each individual layer of each image), to the point that the filesystem would start encountering performance issues - basically, so many filesystems ended up getting created that the system would grow sluggish over time. Not everyone has enough layers/images that they'd encounter this or need to care, but in general, you'd be best suited to have the docker img stored on a zvol anyway imo (rough sketch below). Containers are meant to be disposable, and at a minimum, shouldn't have snapshots of them cluttering up your snapshot list.
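     If it helps, the zvol approach looks something like this (pool name and size here are just placeholders):

     ```bash
     # A zvol is a block device, so docker's image layers live inside it
     # rather than each becoming its own zfs dataset/snapshot
     zfs create -V 50G tank/docker

     # Format the zvol, mount it, then point the docker vDisk/image
     # location at the mount in unraid's docker settings
     mkfs.xfs /dev/zvol/tank/docker
     mkdir -p /mnt/docker
     mount /dev/zvol/tank/docker /mnt/docker
     ```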
  8. It's dead, at least for now - I'd submit for a refund
  9. Seagate, WD, doesn't really matter to me, as long as it's their enterprise line - currently Seagate Exos X16 and WD Gold 16TB. Best $/GB with a 5 year warranty, and right at 1W each in standby. Get enough drives going in any chassis, and you'll see why - I've had poor luck with damned near every single manufacturer's consumer lines (at least as far as HDDs go).
  10. I'd guess it's well and truly finished... but I'd also guess that he probably shares my concern that publishing such, while still legal, could greatly impact limetech in a negative manner. I'd only just realized that the whole thing wasn't a completely proprietary blob a few weeks back, when I was trying to recompile mdadm to 'behave normally' so I could test out an lvm cache for the unraid array and stop worrying about mover. For those who are truly in need of this, I do hope they find a way to implement it within the OS, but if not, they've always got the option to compile it - if they don't know how, but have a strong enough desire to warrant it, what a great opportunity to learn! ... I could just imagine someone seeing a guide they could copy/paste without ever knowing or understanding the herculean efforts the fellows have put into making this thing, reaping the benefits without paying for it (in either sweat equity, or to the unraid devs). I'm all about open source, but I'm also pragmatic; most don't contribute back to the projects they use, and limetech's already got a business around this with people's families depending on it for income. I'd hate to be the person whose actions resulted in slowing down development due to layoffs, or worse, them having to close up shop altogether.
  11. You can technically already do this by pulling the customized raid6 module from unraid and re-compiling md to use it (very rough outline below) - then you're just depending on whatever OS you're running to do the VM/container work.
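      Very roughly, and assuming you've already got the module source plus headers matching your running kernel (the module name below is purely a placeholder):

      ```bash
      # Standard out-of-tree kbuild invocation against the running kernel
      make -C /lib/modules/$(uname -r)/build M=$PWD modules

      # Load the result and confirm it registered (md-unraid.ko is a
      # placeholder - the real name depends on how you set up the build)
      insmod ./md-unraid.ko
      lsmod | grep -i md
      ```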
  12. On Hiatus per @jonp, but it's sounding like it'll make a return!
  13. The limitation of IOMMU grouping is always on the side of the motherboard (I usually hesitate to make statements like this as fact, but I think it's pretty close here). The ability to segment down to individual traces is completely dependent upon the PCB design and BIOS implementation, which is on the MB side. The 'downside' to enabling the override is, at least in theory, stability - what we're doing is 'ignoring' what the motherboard says it should support. In reality, if you enable it and everything 'just works', it'll more than likely continue to 'just work' indefinitely (barring any changes to the MB BIOS). Unfortunately, IOMMU grouping and how it all actually works is beyond the scope of this tutorial, but I agree it's a subject that could use clarification. A lot of it boils down to hardware implementation and the optionROM size the MB vendor builds into its design - on most consumer boards, there's only enough space to load the fancy graphical BIOS they come with, whereas workstation/server boards still tend towards keyboard-driven UIs (leaving more space for other operations). If you want to see how your own board groups things, there's a quick check below.
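      This just walks the standard sysfs layout, nothing unraid-specific:

      ```bash
      # Print each IOMMU group and the PCI devices it contains
      for g in /sys/kernel/iommu_groups/*; do
        echo "IOMMU group ${g##*/}:"
        for d in "$g"/devices/*; do
          echo -e "\t$(lspci -nns "${d##*/}")"
        done
      done
      ```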
  14. Whether or not the PCIe ACS override is needed is wildly system-dependent - any time you hit an issue with IOMMU grouping, it's one of the options available, but unfortunately not a one-size-fits-all fix. Glad you got it figured out!!
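      For reference, the override ends up as a kernel boot parameter on the append line of the flash drive's syslinux config - something like the below (the exact override value depends on which mode you pick in the VM settings):

      ```
      label Unraid OS
        menu default
        kernel /bzimage
        append pcie_acs_override=downstream,multifunction initrd=/bzroot
      ```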