Marshalleq

Members · 968 posts

Everything posted by Marshalleq

  1. I think this is due to their upgrades breaking things. Because I have ZFS snapshots I was able to roll it back and force it to stay on an older version. Not what I really wanted to do, but it saved me the time of fixing it up. So not much of an answer, but it might point you in the right direction.
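     A minimal sketch of that snapshot/rollback workflow, assuming a hypothetical dataset name (tank/appdata) and snapshot label - substitute your own pool layout:

     # take a snapshot before upgrading (dataset and snapshot names are examples only)
     zfs snapshot tank/appdata@pre-upgrade
     # confirm it exists
     zfs list -t snapshot tank/appdata
     # if the upgrade breaks things, roll the dataset back to that point in time
     zfs rollback tank/appdata@pre-upgrade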
  2. I wouldn't be holding it up for that. There's still a ton of use cases. Cache drive mirrors is one, and the functionality that provides for backups, virtual machines and dockers is immense. ZFS is also better at telling you when there is corrupted data, even in a single-drive implementation, and with two drives or more it will repair it for you and let you know. I'd love to be able to convert my docker.img file to ZFS and have another option for a mirrored cache drive than btrfs. Well yes, but again why hold it up because it doesn't fit into Unraid's main array? I'm running a ZFS mirror for my critical data alongside a standard Unraid array and it's amazing. This is no longer an issue; the whole gigabytes-of-RAM-per-terabyte-of-disk formula is completely incorrect and seems to live on as internet legend. It's been possible for a long time to run ZFS on very small amounts of memory. The main thing that trips people up is the ZIL, which slows everything down, eats memory if you do it wrong and should be disabled for most use cases. Which really is ZFS's main adoption problem: high entry criteria due to complex descriptions of what everything does. I mean, they could have just called the ZIL a write cache and then explained why it's different and how it works compared to other caches. Yeah true. Each has primary advantages and a few disadvantages. Unraid's primary advantages are that it lets you use differing size disks and it lets you power down inactive disks because it doesn't write in stripes. Its primary disadvantage is that it will only read from a single disk, which results in quite a lot of performance degradation compared to a standard RAID array. But for the right use case it's extremely effective, e.g. media storage with a lot of streaming. ZFS's advantages are that it's self-healing, has a ton of nice features built in for VMs, dockers and backups, and is relatively fast due to the way it reads and the differing RAID options you can create depending on your needs (like most RAID arrays). Its disadvantages in this case are that it won't spin down single drives, doesn't really let you use differing sized drives, and adding disks (as opposed to increasing disk size) can't easily be done. Whether Unraid allows a single ZFS disk in their Unraid array is up to them, but I think the advantages for certain use cases in other areas are huge. This is why I have both: Unraid for storage of infrequently accessed files, ZFS for critical data, VMs and dockers. Sorry for the long post - but I didn't want ZFS to be misunderstood in this thread!
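     For reference, a rough sketch of the kind of two-disk ZFS mirror and scrub routine described above; the pool name and device paths are placeholders, not anything from my actual setup:

     # create a two-disk mirror pool (device names are examples only)
     zpool create ssdpool mirror /dev/sdb /dev/sdc
     # scrub periodically so ZFS can detect corrupted data and repair it from the good copy
     zpool scrub ssdpool
     zpool status -v ssdpool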
  3. Seems like it's time to get better DVD software - I'm quite confident that's still possible. Or dual boot Windows, or run Linux, or just about anything. Anyway, I'm not trying to argue with your decisions, so I can think of two solutions: 1 - I assume you'll be able to use NFS as an alternative to AFP if you don't wish to use SMB. 2 - If you want to add drivers and things, this is the way to do it.
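     If NFS is the route taken, a minimal example of mounting an Unraid share from a Linux client; the IP address and share name are hypothetical, and the export itself is enabled per share in the Unraid GUI:

     # mount an NFS-exported Unraid share on a Linux client (IP and share name are examples)
     mount -t nfs 192.168.1.10:/mnt/user/Media /mnt/media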
  4. Of course, BTRFS / XFS / ZFS doesn't really matter too much for performance if compared on the same hardware, but if you're putting your VMs on slow HDDs, ZFS helps with that in the form of a cache. My decision to switch to ZFS was due to three failures surrounding BTRFS, which have never happened with the same hardware on XFS or ZFS, and ZFS has the advantage of recovering from corrupted data as well as lost drives. Running VMs on HDDs for me is just because my SSD is not mirrored and the HDDs have more space - so it works well as a solution for me. The L2ARC is amazing for that.
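     A sketch of adding (and later removing) an NVMe device as L2ARC on an existing HDD pool, as described above; the pool and device names are examples only:

     # attach an NVMe device to an existing pool as L2ARC (names are examples)
     zpool add hddpool cache /dev/nvme0n1
     # it can be removed again later without affecting the pool's data
     zpool remove hddpool /dev/nvme0n1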
  5. Hey - yeah it's quite a lot to learn, isn't it! I don't see L2ARC benefitting your VMs if they're already entirely on an SSD and you don't have anything faster other than RAM. However, if you've got a lot of spare RAM, then the good news is you can increase your L1 ARC (which is actually more efficient than L2 anyway) and use that. Essentially the ARC is a clever ZFS-only read cache that performs a lot better than standard read caches. What this means is that things which are on disk and meet the eligibility criteria will be read from RAM (or L2ARC if you have it) instead of from disk. So in that sense, changing your VMs to ZFS on an SSD may get you slightly greater performance once they're up and running. I like the persistent L2ARC idea, because this should then also apply to newly started VMs. For me, I had an Intel 128GB NVMe drive and set up the L2ARC to use that. To get stats on how well the ARC is working you can use arcstat or arc_summary in the console and check the cache hit ratio. Typically I find my L1 ARC to be at 100% and my L2ARC to be at around 70%, which is very decent. My setup is: 1x 1TB Intel enterprise SSD with ZFS (mostly docker and unimportant VMs), 2x 8TB Seagate Enterprise Capacity HDDs in a ZFS RAID 1 pool (various data, VMs and important dockers), 1x 128GB NVMe drive (L2ARC). If you come across something called a ZIL - ignore it; in my experience it suits very specific cases and mostly it will actually slow your system down unless you know what you're doing. I also have znapzend for backups of the things on the SSD since I only have one of them. Znapzend is another plugin that utilises ZFS replication, which is more performant and clever than even rsync. Hope that helps a little!
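     To check the hit ratios mentioned above, and (if you have spare RAM) to raise the ARC ceiling, something like the following works; the 16 GiB value is only an example:

     # show ARC and L2ARC statistics, including hit ratios
     arc_summary
     arcstat 5
     # raise the maximum ARC size to 16 GiB (example value) until the next reboot
     echo 17179869184 > /sys/module/zfs/parameters/zfs_arc_max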
  6. Thanks, out of interest, are the ZFS versions built from master? I was reading that 0.8.4 isn't supported on kernel 5.7 unless it's built from the latest master. Though I can't personally confirm that myself, other than that the 0.8.4 page says it's supported up to kernel 5.6.
  7. OK, so OpenZFS 0.8.4 from here is not supported on kernel 5.7, so you're using the master branch. The master branch is newer but has an older version number? I have to say, that was unexpected, but thanks for clarifying!
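     A quick way to check which ZFS build is actually loaded against the running kernel, along the lines of the output quoted further down this thread:

     # show the running kernel and the loaded ZFS module/userland versions
     uname -r
     cat /sys/module/zfs/version
     zfs --version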
  8. Yeah - I realised that the first time I built your container, I built it on the ZFS plugin, so I need to shift between the two. But as I said before, the primary reason I started using this (and promoted it) was because we could build it ourselves for beta releases and not have to annoy developers to do it for us. I thought this would make it a lot easier, even if we had to do parts of it manually. It's an amazing step towards that, and I guess we may see optimisations come out over time. And for the NVIDIA plugin it's a giant leap over what was available before.
  9. I'm currently using this plugin on 0.8.0-1. I'm not really concerned with 'trusting it' given ZFS itself is not in beta. @steini84, am I reading right that your ZFS version is 0.8.0-1 when the latest stable is 0.8.4? And interestingly your plugin says it's on 0.8.2. This doesn't seem right.

     # zfs --version
     zfs-0.8.0-1
     zfs-kmod-0.8.0-1
     # cat /sys/module/zfs/version
     0.8.0-1
  10. I think it would be a simple matter of pointing your container at their download URL, like you do for all the other things the container uses, right? Yeah, it will be great if they add those things in - I strongly suspect there will always be a need for a community kernel you can compile yourself though; differing versions of things, for example, is one area only your container handles. I expect none of these are going to be fast though. Quite excited to see PAM mentioned in the logs too.
  11. I've been googling this one to see what the joke is - please share! No, it's a Corsair H110i v2 - which did / does get pretty great reviews.
  12. Thanks for pointing that out - and now that you mention it, I've seen that! Perhaps since we're in beta we can convince @limetech to consider naming it something slightly more specific such as balance status, or somehow suppressing it if inactive. I can see that might not be particularly easy though. I assume the 'no stats available' under scrub status is a similar issue.
  13. Also, I have always assumed this is broken and not specific to me - so raising it here as it's probably a good time to do so, but I could be wrong. Specifically: I assume there should not be 'no balance found' on BTRFS, and running the balance via either the GUI or the console does not change this. We've had issues before where two BTRFS devices did not actually create a redundant RAID-1 equivalent (which was noted in the changelog as fixed), but I still find it hard to trust it's working properly if there's no balance found. I also assume the 'no stats available' should be populated with something, but it isn't. Something to fix? Or something I don't understand?
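     For anyone checking the same thing from the console, the commands I assume the GUI wraps are roughly as follows; the mount point is an example:

     # current balance status for the cache pool (mount point is an example)
     btrfs balance status /mnt/cache
     # per-device error counters
     btrfs device stats /mnt/cache
     # kick off a full balance manually
     btrfs balance start --full-balance /mnt/cache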
  14. Great to see this newer kernel finally through, isn't it - well done everyone! So I started using the new k10temp module in this kernel, and I'm now being told my CPU / MB temp is at 94 degrees C under load. From everything I've read, the AMD Threadripper 1950X doesn't really get that hot (I've got a triple-fan water cooler on it, and it runs under load 24x7), so I'm keen to see whether my temps before were wrong, the new ones are wrong, or something in between. Can anyone else comment on their experience with temps on Threadripper? I am using the system temp plugin though - I assume that's still required...? Thanks.
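     For comparison, a minimal way to read the k10temp sensor directly, assuming lm-sensors is available on the system:

     # make sure the k10temp module is loaded, then read it via lm-sensors
     modprobe k10temp
     sensors | grep -A 4 -i k10temp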
  15. Hmmm, so I just went to do this - my docker runs on ZFS and apparently I need to have installed (or otherwise gotten) the new kernel first. Since docker doesn't start without ZFS for me, I can't build the new kernel. I'm sure there's a manual download somewhere, so I'll try to use that - but I have a feeling there's some optimisation potential for the container here somewhere... Possibly I could install the other ZFS plugin in order to build this ZFS kernel, lol. Seems backwards, but it'll work I'm sure - probably easiest, thinking about it.
  16. I see beta 22 has been released here: So when I get a chance, I will install it / compile for it. Won't be able to try for another 4 hours at least though - @ich777 it will be interesting to see if this container handles it already or needs changes to make it work.
  17. The beta version of Unraid is on a 5.x kernel too. It was released because Unraid knew people needed that kernel. The beta is very stable (and their betas usually are), so don't let it scare you too much, though obviously it is a beta. Either way, ZFS is not in beta, so that's going to be pretty safe anyway. I believe they're up to something like beta 12/13 behind the scenes, so there is a jump coming soon - there are new multi-array features and such coming, so it's a fairly big release for them. Anyway, if you want a newer kernel, see here.
  18. Persistent L2ARC! I hadn't noticed that! It will be a great feature - I'm running the non-persistent one on an NVMe drive and it does make a huge difference for VMs and such - actually, combined with a decent L1 ARC it makes VMs and dockers on mechanical HDDs very usable again. ZFS sure is magic.
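     As far as I can tell, persistent L2ARC in the newer OpenZFS builds is governed by a module parameter; a rough sketch of checking and enabling it (parameter name per the OpenZFS docs - worth verifying against your build):

     # check whether L2ARC rebuild (persistence) is enabled; 1 means on
     cat /sys/module/zfs/parameters/l2arc_rebuild_enabled
     # enable it if it reports 0
     echo 1 > /sys/module/zfs/parameters/l2arc_rebuild_enabled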
  19. Hey, welcome - I think QAT support has been built into ZFS since it was released in 2017, according to the ZFS changelog, but it also needs the QAT driver - which as far as I know is not included in the Linux kernel (though you could just try that first). But unlike FreeNAS, things like this are usually a bit easier in Unraid. I'd suggest having a look at the community kernel and having a go at building the driver in there. The dev is quite helpful too, so I'm sure he'll give you some tips. Maybe, just maybe, he'll even include it as an option automatically, since it's probably common to all Intel processors and gives a performance boost.
  20. Hi @efschu, welcome to the forum. As I understand it, your use case requires migration from an existing ZFS pool, currently installed on Proxmox, which is large and impractical to shift into Unraid's 'Unraid' array. I can clearly see this makes perfect sense for various reasons, and on behalf of the Unraid community I feel I must apologise if any of our help seems unhelpful; people here are passionate about Unraid and love to share their experience and opinions, so please don't let that put you off! Actually, I'd say the community is truly one of Unraid's best features! Anyway, getting a bit more technical - I believe the docker and virtual machine 'services' of Unraid are hard-linked to 'array start', purely in the sense of starting those services. I have no idea if that can be changed. That however absolutely does not mean you have to store virtual machines and docker containers on the Unraid array - I've had mine running on the Unraid array, unassigned devices and ZFS, to name a few. Currently, all my virtual machines and dockers are on ZFS, for reasons I'm sure I don't have to tell you about. If there ever were any issues where the file system was suspected, it would be up to you to move it to a supported FS to rule it out. The likelihood of that, however, probably has an extreme number of leading zeros, i.e. 0.00000001%. Just don't ask for ZFS support directly; there are specialist forums for that, as you are no doubt aware. Speaking of ZFS, there are now two implementations of ZFS on Unraid, one as a plugin and one compiled into the kernel. It's also been mentioned by limetech that they are looking at potentially including it officially in the future. Obviously don't count on that, but limetech are typically very good at introducing what the community discusses and votes for on here, in a sort of 'we're not going to tell you' kind of way (i.e. no roadmap, unfortunately). Yes, Unraid has better integration with KVM and so on than Proxmox, but do understand Proxmox (and others) have far more enterprise-style features than Unraid does. The Unraid feature set is perfectly suited to the home / home enthusiast market and they do a fantastic job at it. If Unraid still sounds good, I'd suggest you install the trial onto a USB and see if your ZFS pool will import. Marshalleq
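     For the migration itself, the usual export/import sequence is roughly as below; the pool name is a placeholder for whatever the Proxmox pool is called:

     # on the Proxmox box, cleanly export the pool first
     zpool export tank
     # on Unraid, scan for importable pools, then import by name
     zpool import
     zpool import tank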
  21. Very nice plugin! I think I have some ideas to expand on the ZFS section too.....
  22. I CAN subscribe to ZFS being more polished in FreeNAS from a user interaction perspective (assuming you don't use the console). I CAN'T subscribe to ZFS being more stable on FreeNAS; there is nothing unstable about this plugin at all - in fact I'd say with absolute confidence it's more stable and robust than all other filesystems existing natively on Unraid today. And if you like, you can even run this in the native kernel now anyway, not that it makes a difference to the stability of it. @steini84 has done an amazing job of bringing us a stable and robust option with this plugin and it has saved me a number of times already. I am extremely grateful for it. If you have some evidence of how ZFS on Unraid is not stable, I'd certainly like to know about it so that I can re-assess my options. Thanks, Marshalleq
  23. Yes, that's good advice when playing with kernels (they're in /boot, BTW).
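     Since those kernel files live on the flash drive, a quick backup before swapping them is cheap; the bz* names below are the standard Unraid boot files - copy whichever of them your build replaces:

     # back up the stock Unraid kernel files on the flash drive before replacing them
     mkdir -p /boot/backup
     cp /boot/bzimage /boot/bzroot /boot/bzmodules /boot/bzfirmware /boot/backup/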