BVD

  1. Another option instead of that (though it works as well): go over to Nerd Pack and manually re-enable/re-install pigz (or just pull it manually from the Slackware repo). I'm not super familiar with the plugin architecture, but if it allowed some checking/recording of packages a plugin needed that were already available pre-install, that'd be cool... Then again, dependency tracking (e.g. users uninstalling in a different order than they installed, unintentionally leaving components behind) could be a freakin' nightmare...
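     For anyone taking the manual route, here's a minimal sketch of pulling pigz straight from a Slackware mirror and installing it by hand - the mirror path and package version below are examples, so browse the repo for the build matching your unRAID/Slackware release:

     # Example only: mirror path and version are assumptions - check the
     # mirror for the build matching your Slackware release.
     wget https://mirrors.slackware.com/slackware/slackware64-current/slackware64/a/pigz-2.8-x86_64-1.txz
     # Install (or upgrade) using Slackware's native tooling:
     upgradepkg --install-new pigz-2.8-x86_64-1.txz
     # Confirm it's back on the PATH:
     pigz --version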
  2. I should also note (in case anyone comes across this later): I believe the CPU isolation issue only applies to OSes which boot directly from ZFS. Since unRAID isolates the CPUs during boot, which happens entirely in memory and before ZFS is loaded, ZFS isn't even aware those CPUs exist. To test this theory, I edited my syslinux config to isolate half of the cores/threads on each socket, rebooted, then kicked off a scrub and added a few CIFS transfers just to make sure the load was significant enough that I'd expect all cores to be touched. Sure enough, those cores sat at a complete standstill - no threads spawned, nada. Maybe I'm just measuring it wrong, but it seems like a non-issue, at least for the VMs I've tested thus far, thankfully!
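     For reference, this is roughly what that syslinux edit looks like - a minimal sketch only, and the CPU list is an example (check "lscpu -e" for the numbering on your own box and adjust to your topology):

     # /boot/syslinux/syslinux.cfg - stock unRAID boot entry with isolcpus added.
     # 4-7,12-15 is a placeholder range; pick the cores/threads you want hidden
     # from the scheduler (and thus from ZFS scrub threads).
     label unRAID OS
       menu default
       kernel /bzimage
       append isolcpus=4-7,12-15 initrd=/bzroot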
  3. I don't know why I never thought about setting up a USB as disk1 - that definitely frees up one 'problem', thanks!

     I'm definitely not expecting ZFS to replace the unRAID array option - I still think the array has significant value, and I could easily see wanting a mixed environment, with ZFS hosting my typical data and the standard unRAID array used for things like FTP, Syncthing, and anything else that doesn't need the same level of safety. For the data that does need it, surviving two disk failures is important to me (mostly because of failures during expansion, when I replace drives with larger ones every few years), and BTRFS unfortunately just isn't there yet, which negates my ability to benefit from what it has to offer. A UPS is great for utility failures, but it won't save the filesystem from watchdog timers (and I just don't have the heart to go through recovering from such things anymore, lol).

     With the current implementation (at least to my understanding), the cache device/pool isn't really a cache, but something more akin to snapraid, right? So if anyone has only a single cache device and that device dies, any changes made since the last time mover ran would be lost. If someone wants to quickly get files to their server in a way they know is protected from a single point of failure, they have to add a mirror to their pool, or not use the cache device at all - at least as my understanding goes (a sketch of the mirror route is below).

     Anyway, the short of it is, I'd just like to know where the thinking is going as far as ZFS support, and what the roadmap looks like overall. If there's already an unRAID feature roadmap posted somewhere and I missed it, my apologies in advance!
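     On the "add a mirror" point, ZFS at least makes that painless - a minimal sketch, assuming a single-device pool named "cache" on /dev/sdb and a matching spare at /dev/sdc (all three names are placeholders):

     # Show the current single-device layout of the pool:
     zpool status cache
     # Attach a second device to the existing one, converting the
     # single-disk vdev into a two-way mirror; ZFS resilvers automatically.
     zpool attach cache /dev/sdb /dev/sdc
     # Watch resilver progress until the mirror reports healthy:
     zpool status -v cache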
  4. How likely are we to get ZFS fully integrated as part of 6.9 once it goes GA? And what level of integration is being considered? For instance, would this include managing ZFS snapshots, replication, volume creation/cloning, zpool creation/management, and so on in the GUI? Or, at the opposite end of the spectrum, is this more just integrating the ZFS plugin that's already available through CA, with everything else staying manual? What would that integration look like? Honestly, that's the only thing holding me back from pulling the trigger on purchasing at this point.

     I picked up unRAID as a trial after hearing about it through various outlets - each and every one of which mentioned using ZFS with it - and figured if it was so prevalent, I may as well check it out... But I've found that since unRAID doesn't recognize *anything* I do with ZFS in the UI (it's all on unassigned drives), I can't fully experience the benefits of the platform, while still having the detriment of needing to burn a chassis drive slot just to start the array before I can use the majority of features.

     Some further justification (as I know full well that fully integrating ZFS into unRAID's management interface is a HUGE task):

     * One of the biggest groups of people that could be attracted to unRAID are FreeNAS users. All of their data is already on ZFS, and migrating data in order to use btrfs (or anything else, for that matter) is enough of a barrier to entry to deter all but the most committed - and if they want to use unRAID's feature set, they really have no option other than to migrate, short of living in the command line and burning a drive (minimum one).
     * Going through the feature request sub-forum, several highly commented requests fall out as an ancillary benefit of ZFS integration - for example (see the sketch after this list):
       - Multiple arrays
       - Max share sizes (set a ZFS quota)
       - Snapshot and backup of a share (datasets/zvols can be snapped, cloned, and sent)
       - (More I'm sure - this is just from a two-minute review of the first page of that sub-forum)
     * Simplification - users currently have to learn about mover, and how to ensure VMs live on their cache drive (typically an unprotected location, which is terrifying to think about) so they're performant enough to be usable.

     I'd be willing to make the leap if I had any idea where the roadmap is on ZFS support - an unRAID array is fine for many use cases, but so many of us just can't (or won't) move away from ZFS, and that means we're really missing out on so much of what unRAID has to offer to simplify managing our homelabs/servers/businesses. We need this to be able to really consider unRAID a viable alternative to our current systems.
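     To make those feature-request examples concrete, here's roughly what each maps to on the ZFS side - a minimal sketch, assuming a pool named "tank" with one dataset per share ("tank/media" here) and a second ZFS box reachable as backup-host; all of those names are placeholders:

     # Max share size: cap the dataset backing a share with a quota.
     zfs set quota=2T tank/media

     # Snapshot a share: instant, atomic, nearly free until data diverges.
     zfs snapshot tank/media@2020-07-01

     # Back up a share: replicate the snapshot to another ZFS host over ssh.
     zfs send tank/media@2020-07-01 | ssh backup-host zfs recv -u backup/media

     # Clone a snapshot into a writable copy (handy for VM images on zvols):
     zfs clone tank/media@2020-07-01 tank/media-testing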
  5. +1 from me as well. It's really the only thing giving me pause about pulling the trigger on actually purchasing once the trial is up - all my VMs/containers/shares are on ZFS anyway, so all having an array does for me is unlock the features that have nothing to do with it. ... Which is probably why this will unfortunately never happen. If there were no array requirement, then with the current design there'd be nothing stopping someone from just making a new bootable USB each time (copying over the config files from the last one) and never ponying up. There'd be nothing prodding folks to actually pay for a license, in the same tangible way at least (imo).
  6. I've got the Intel X540 working, but I have to warn you, it's a complete #$@ to do - you have to re-write the EEPROM, and since my unRAID server is an R720xd and the X540 is its daughter card, it's unpleasant. I'd check out SpaceInvader One's video on this topic (10Gb on macOS) and pick up one of the SolarFlare cards for 20 bucks (shipping included, if you can believe it) instead of using my solution. Dells make everything hell for those of us not using mainstream "everything" (but I just can't get away from 'em )
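     For the brave, the rewrite itself goes through ethtool - this is a rough sketch only: the offset and value below are placeholders, not the real fix, since those are device- and firmware-specific (pull the actual bytes from whatever guide you're following before writing anything):

     # Dump the current EEPROM contents first and keep a backup copy:
     ethtool -e eth0 raw on > x540-eeprom.bak
     # Write a single byte at a given offset. The magic value is NIC-specific;
     # for Intel cards it's commonly the device id followed by the vendor id
     # (0x1528 + 0x8086 for the X540). OFFSET/VALUE are placeholders - a wrong
     # write can brick the port, so triple-check before running this.
     ethtool -E eth0 magic 0x15288086 offset 0xOFFSET value 0xVALUE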
  7. Could we also possibly get an update to the plugin, to reflect the Dell changes you made to the source several months back? I'm using the source to compile my own (as soon as I get my VM relocated), but it'd be helpful to have it included as part of the built-in CA update packages. Thanks again!
  8. FYI, the main IPMI tool plugin from CA is pulling from a repository that only has the older versions of ipmitool/freeipmi/etc. - it looks like the packages live in a separate repository from the source, and for whatever reason the source isn't updating the unRAID-plugins repository. I only figured it out after finding the (awesome) Dell updates that were made earlier this year and wondering why I wasn't seeing them in the MB dropdown list. Thanks for all you do!

     EDIT: Might be this - looks like the libtool archive may've been corrupted in upload:

     root:~# upgradepkg --install-new ./libtool-2.4.6-x86_64-13.txz
     +==============================================================================
     | Installing new package ./libtool-2.4.6-x86_64-13.txz
     +==============================================================================
     Verifying package libtool-2.4.6-x86_64-13.txz.
     xz: (stdin): File format not recognized
     Unable to install ./libtool-2.4.6-x86_64-13.txz: tar archive is corrupt (tar returned error code 2)
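     If anyone wants to confirm it's the archive itself rather than the installer, a couple of quick checks with standard tooling (nothing plugin-specific here):

     # Test the xz container directly; a good .txz passes silently:
     xz -t libtool-2.4.6-x86_64-13.txz
     # Check what the file actually is - a truncated or corrupted upload
     # often identifies as something other than "XZ compressed data":
     file libtool-2.4.6-x86_64-13.txz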