Marshalleq

Everything posted by Marshalleq

  1. Nice find on the GPU-P - I hadn't realised we could do that now!
  2. Nice to hear @subivoodoo - yeah, encryption, that will do it! There's a video online somewhere about a guy doing something similar for multiple machines in his house. He configured the Steam data to be on a ZFS pool for all the computers, and the dedup meant he only had to store one copy. Cool idea - I would have thought the performance would be bad, but apparently not. How's your RAM usage? @jortan I've seen you've replied, but I'm not going to read it, sorry - I can see it's just more of the same and I don't see the value for everyone else of having a public argument. I get that differences of opinion get annoying and it feels good to be right, so let's just say you're right. Have a great day and don't stress about it.
  3. Sigh, yes, it absolutely is, the original poster declared that a home scenario was what they were working on and you seem to keep comparing it to disaster scenarios. No-one here is saying don't be careful, don't plan, backup your data or whatever applies, people need to be given some credit, they're not all morons. LOL, if I answer this it's going to get into a flame war, so I'm just going to leave it (and the remainder of the points). The poster has the information and two opinions on it. I have given actual evidence, you have given your experience, which I'm sure is also extensive. They can make their own decision as to whether this works for their lab, or whatever they end up doing. Thanks for the info, have a great day. Marshalleq.
  4. Just saw the reply from @jortan (my previous reply was just foreseeing some of the questions and trying to be helpful). None of my comments were directed at you, just at the misinformation lying around the web - which is what you find when you google and get old documents. Some of the newer stuff is now reflecting the current state, but unfortunately some of it is still being written by people who haven't tried it for quite a while and are repeating out-of-date experiences - special vdevs in particular being the main area of change here. I do believe special vdevs hold all the DDT for the pool; it even says that on the page you linked - except for when the special vdev is full, of course. If you read that page a little deeper, it says the thrashing happens when the special vdev gets full, not 'constantly' as you say above - this is because ZFS will start putting the DDT in the main pool instead of the special vdev once it starts getting full. Of course, that's talking about a busy corporate environment that is worrying about IOPS all the time; for the average person playing around at home (something that @subivoodoo seems to indicate is their scope, i.e. "My aproach is safe some space on clients... and play with IT stuff 😁") this would not be an issue. In any case, I have very high IOPS requirements and I am constantly marvelling at how well it does considering I'm just running RaidZ1 on everything, have dedup on, my special vdev is running on good but older Intel SSDs that are actually quite slow, and it's carrying the mail server, the various web services, automation and undoubtedly a ton of misaligned cluster sizes which are killing it, etc. It's under constant use and really it's incredible. Can we argue that in a corporate environment we could get more performance? Absolutely, but then we wouldn't be running unraid, it wouldn't all be on one box, and a whole bunch of other things. Sorry for the laborious post, but I think it's fair to say that the dedup scaremongering out there needs some balance - again, not directed at you. Marshalleq. PS, I've tried that lancache, ran it for a few years; it works well sometimes, others not so much. Definitely worth a try though.
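     If you want to check how close a special vdev is to spilling over, a quick look at the per-vdev capacity is enough - a minimal sketch, assuming a pool called tank (placeholder name, not my setup):

     ```
     # show per-vdev capacity; the special mirror is listed with its own ALLOC/FREE columns
     zpool list -v tank
     ```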
  5. Some stats on my setup to give you an indication - I run two configs:
     1 - 4x 480G SSDs in RaidZ1 - this hosts docker and virtual machines. I have only 3 VMs at present, totalling about 50G. I have a bunch of dockers, but only the VMs are deduped. There is 653G free and my dedup ratio across the whole pool is 1.11 (i.e. 11%).
     2 - 4x 16TB HDDs in RaidZ1 with a 2x 150G mirror as a special vdev, with small blocks up to 32K enabled. Most of the pool is unique data that cannot be deduped. There is 598G free on the pool and 70G free on the special vdev. I dedup my backups, documents, isos and temp folders, which total about 530G, and I'm getting a 1.14 dedup ratio across the whole pool, which is about 14%.
     I think these numbers are pretty good. I thoroughly tested the memory usage before and after for the RaidZ1 array, as I was unsure whether all or some of it would go to the special vdev - I noticed no difference at all. I did the same for the virtual machines on the array without the special vdev and, while that was less scientific, I also noticed no perceptible difference (I mention this because so many people cry out that dedup uses too much RAM). Now, I do have 96G of RAM in this system; however, before enabling dedup on anything the RAM usage was sitting around 93-96% full, and it didn't change. I think that speaks well to the issue, as I would have had big failures if dedup did use a lot of RAM. I've been running it like this for a long time now with no issues yet. I hope that helps!
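     For anyone wanting to replicate something like config 2, this is roughly the shape of it - a sketch only, with placeholder pool and device names rather than my actual setup:

     ```
     # RaidZ1 data vdev plus a mirrored special vdev (hypothetical device names)
     zpool create tank raidz1 sdb sdc sdd sde special mirror sdf sdg

     # store blocks up to 32K (alongside metadata and the DDT) on the special vdev
     zfs set special_small_blocks=32K tank
     ```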
  6. Yeah, I'm just using image files on unraid. I found great returns on virtual machines, especially when based on the same install ISO - but even on different ones. I found reasonable returns on ISOs and documents. I probably have a few duplicated ISOs with not-so-obvious names, so it saves having to sort that out. I don't think there's much benefit for dockers, but I could be wrong. And that's correct, I don't use ZVOLs. I've tried them and found them at best to be non-advantageous and a lot less flexible. I don't yet understand why anyone would use them really, except maybe for iSCSI targets.
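     In case it helps, dedup can be scoped to just the datasets that benefit rather than the whole pool - a rough sketch with placeholder names, not my exact layout:

     ```
     # enable dedup only where it pays off, e.g. a dataset holding VM images
     zfs set dedup=on ssdpool/vms

     # the resulting dedup ratio is reported pool-wide
     zpool get dedupratio ssdpool
     ```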
  7. I'm using dedup quite successfully. What I've learnt is that most people who say it isn't worth it either haven't looked at it for a while (so are just repeating old stories without checking) or are not applying it to the right type of data. In my case I'm running a special vdev. It works extremely well for the content that can be deduped (such as VMs). I've never noticed any extra memory being used either, as I do believe this is handled by the special vdev. I'm using unraid - I tried TrueNAS SCALE but its containerisation is just awful - hopefully they figure out what market they're aiming for there and fix their strategy in a future version not too far away.
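     If you want to see what the dedup table is actually costing you, ZFS will report it - a minimal check, with tank as a placeholder pool name:

     ```
     # -D adds a DDT summary (entry count, on-disk and in-core size) plus a histogram
     zpool status -D tank
     ```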
  8. I would suggest that you log this upstream as it sounds like a bug.
  9. I just put all mine in /mnt. I am not sure that you can have two mount points for one pool, but ZFS is very powerful, so perhaps that's a feature I've not seen before. You can change the mount point of an existing ZFS pool with zfs set mountpoint=/myspecialfolder mypool. I suspect the reason your drives show up as zfs is that your restore has lost you the Unassigned Devices / Unassigned Devices Plus plugin? Note that is not the same as the unassigned devices heading you have above. At least that's what I think I'm seeing in your screenshot.
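     On the two-mount-points question, the usual ZFS way is child datasets, each with its own mountpoint - a small sketch with placeholder names:

     ```
     # move the whole pool's mountpoint
     zfs set mountpoint=/myspecialfolder mypool

     # or give individual datasets under the same pool their own mountpoints
     zfs create mypool/media
     zfs set mountpoint=/mnt/media mypool/media
     ```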
  10. To be honest I'm not sure I'm following you so much on the SMB and security side. Everything else was very well outlined though. I think you're saying you use a link from ZFS to the unraid share, which to me sounds absolutely horrible. So I only know the way I've done it, which is outlined below. SMB permissions with ZFS are done manually via smb-extra.conf (which is in the /boot/config/smb directory). The unraid SMB GUI does not like anything outside of its own array (I honestly don't know why they'd put in this artificial restriction, but they do), and I've always preferred the console method anyway as it's more powerful. So the point here is that you're using the same SMB system that unraid uses, but you're bypassing their artificial restriction in the GUI. At least this is how I do it; someone else might have a better way. Here's a typical share, then a more advanced one, to help you out:
      [isos]
      path = /mnt/Seagate48T/isos
      comment = ZFS isos Drive
      browseable = yes
      valid users = mrbloggs
      write list = mrbloggs
      vfs objects =

      [pictures]
      path = /mnt/Seagate48T/pictures
      comment = ZFS pictures Drive
      browseable = yes
      read only = no
      writeable = yes
      oplocks = yes
      dos filemode = no
      dos filetime resolution = yes
      dos filetimes = yes
      fake directory create times = yes
      csc policy = manual
      veto oplock files = /*.mdb/*.MDB/*.dbf/*.DBF/
      nt acl support = no
      create mask = 664
      force create mode = 664
      directory mask = 2775
      force directory mode = 2775
      guest ok = no
      vfs objects = fruit streams_xattr recycle
      fruit:resource = file
      fruit:metadata = netatalk
      fruit:locking = none
      fruit:encoding = private
      acl_xattr:ignore system acl
      valid users = mrbloggs
      write list = mrbloggs
      Also, about the access denied: with ZFS on unraid you do have to go through and set nobody.users on each of these shares at the file level, so basically # chown nobody.users /zfs -Rfv. Who knows, perhaps this is all you need to do to get your method to work. Good luck!
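      If you want to sanity-check the share definitions before restarting Samba, testparm can parse a config file directly - this is just a standard Samba tool, nothing unraid-specific, and parsing the include fragment on its own is only a rough syntax check:

      ```
      # parse the file and report any syntax problems (-s skips the "press enter" prompt)
      testparm -s /boot/config/smb/smb-extra.conf
      ```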
  11. Well, I particularly liked the user centric approach of the Time Machine style - but the more options the better!
  12. Does anyone else think this is a very, very cool idea? I'm surprised nobody thought of it before, but perhaps it's just a sign of the reach ZFS has had since it all opened up in the last 12 months. Short description: Time Machine for ZFS (see the original Reddit post and the GitHub link). How hard would it be to make this a plugin? This could be a real killer feature if unraid officially integrated it. It might even stop me complaining so much about how a basic thing like backups is not included in unraid, making me want to run off to TrueNAS SCALE (which is not ready either, TBH). Thoughts? Marshalleq
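      For anyone wondering what sits underneath an idea like this, it's essentially automated ZFS snapshots presented nicely - the below is just a generic illustration of the underlying commands (placeholder dataset name), not what the linked project actually ships:

      ```
      # take a timestamped snapshot of a dataset (run from cron or a script)
      zfs snapshot tank/documents@auto-$(date +%Y%m%d-%H%M)

      # list the snapshots you can browse or roll back to
      zfs list -t snapshot tank/documents

      # earlier file versions are visible read-only under the hidden .zfs directory
      ls /mnt/tank/documents/.zfs/snapshot/
      ```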
  13. So it works well for sabnzbd, qbittorrent and SurfShark. It didn't work so well for the *darr apps. Seemed to break after a few seconds. I'm not sure why, but anyway, those support proxy so just switched to that. Really nice find thank you!
  14. Just watched it - I dislike video instructions a bit (having to go through 8 minutes of pain for two simple lines sets my impatience on fire lol), but his videos are very good. So it's simply attaching one docker image to another. Awesome, thanks for the tip - I agree that's much better. Thanks.
  15. Ah thanks, I wondered if there was another way, I'll take a look! In the meantime I tried qbittorrent, which is now working after quite some challenges with the unraid container and webui implementation - all good now though! Thanks again.
  16. Thanks, so linking those with this, they both support using a proxy? I've been using all in one containers up til now, but some recent changes in the binhex one has made me look elsewhere.
  17. So the container installed easily enough. What are people doing for a torrent application? I'd rather have this on unraid than on my local machine - any votes for best container? Thanks.
  18. I don't think unraid supports ZFS in a cache pool. If it does, I suspect the warning will persist; however, the warning does go away if you use an official Unraid mirror for the cache pool. I used to run that for the same reason, and that's when I started getting BTRFS issues. Regarding the other questions, I do believe I had issues with both mirrors and non-mirrors with BTRFS. I ended up running XFS, but eventually got annoyed and had another attempt at BTRFS, which also failed (I probably did three different spurts over 12 months, each time with issues after a month or so of use). There's probably something specific to the way I was using it, like perhaps I overfilled it a few times, but no filesystem should crap itself just because of how I was using it. There were also problems with Unraid's implementation of BTRFS in the GUI which didn't help - initially denied, then later corrected - to do with how it created the mirror, the metadata and balancing, if I recall correctly. All in all not a good experience. I will never ever use BTRFS again; there is very clearly something wrong with it and you can see those reports if you look around. I often wondered why unraid didn't just offer a standard mirror array, i.e. with mdadm - that would have been better than BTRFS.
  19. My experience was to avoid BTRFS for your cache if you value that data, so I used XFS. At the time ZFS wasn't an option for the cache; I heard a rumour that it might be now, but I still have my doubts as unraid still doesn't officially support it. The thing was, I never had a failed cache drive, but I did have multiple BTRFS filesystem failures once I moved to a BTRFS mirror. That defeats the whole point of a mirror, which was meant to keep the data safe, so it ended up being safer to have a single XFS cache. There are lots of other people who had a similar experience, and lots who say they haven't - or, more likely, just haven't yet had to try fixing the BTRFS filesystem afterward. That was the thing that got me the most: the filesystem was unrepairable. That basically doesn't happen on ZFS. For the video scrub pool, using the ZFS special metadata device is a good idea - you can pretty much run the scrub pool off standard HDDs then, because all the filesystem info for searching and so on is on the SSD - something to consider anyway. My system is 100% ZFS now, and because of that it doesn't really even need a cache of the style unraid has.
  20. Wow this looks really good. Thankyou so much for thinking out of the box a little - makes such a difference to have the various VPN providers on the list!
  21. I think I'm going to have to bail on this container. I really like the extra attention to leaking of tracker info or whatever it was, but it broke my setup, and after a month or two of it not working with no clearer idea of what the issue is, it's actually just easier to go to one that isn't so hard to set up. I run a custom VPN (Surfshark), which may be the difference, but nothing changed on my side - only the docker image changed. I could roll back, but I don't like that either. Perhaps I'm feeling lazy (second beer in just now) - yep, I'm feeling lazy. But I will say that if a change is made and it breaks something for a lot of people, the fix would normally be obvious by now. Will see how the competitor containers are - maybe that will encourage me to come back.
  22. Seems like someone has been having a few trying discussions! Two more: no, ZFS doesn't eat your RAM - some distros just haven't configured the memory limits properly. And yes, dedup is useful, and no, it doesn't eat your RAM if applied properly, especially with the special metadata vdev. We should make a list of the FUD. BTW, my entire system is now ZFS and has been for quite some time. I won't be going back - the benefits are just too huge. The speedup of the system was quite noticeable also.
  23. I found when restoring data from Crashplan there were a few tricks to it where you could speed it up. I think I posted notes about it - let me see. Yep here. I got a 20x speed improvement. There's another tip or two in the comments also.
  24. I came here because I'm getting a lot of performance and stability issues using virtio-net. In particular, even at the console it freezes all the time. I don't know that this is the net driver, but I suspect it is - I've known about it for a long time but have been too lazy to have the discussion. Now I'm trying to set up a VM I need for an Agile board and the web sockets aren't working, I assume for the same reason. I've tried re-installing the VM, and I've tried CentOS and Ubuntu. I see in the latest beta there are now four options - virtio, virtio-net, vmxnet3 and e1000. virtio used to work for me with all the dockers I have, but I don't know - neither option has worked well for a very long time. And virtio-net is actually not just a little slow, it's insanely slow, to the point I can't really use it for anything other than a basic web page. By contrast, running this up in TrueNAS I haven't had this problem. They offer virtio and e1000; I'm running virtio there with no problems, but I'm not running a lot of dockers there - those are now going to be migrated to a VM, as I've come to the conclusion that it's better for me. It avoids all these dramas and makes them a lot more portable. So I'm going to try e1000 and hope that is better - though it's a very old driver I think.
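      For anyone wanting to try the same NIC model switch from the command line, this is roughly it - the VM name agileboard is just a placeholder, and the same edit can be made in the unraid VM XML view:

      ```
      # check which NIC model the VM currently uses
      virsh dumpxml agileboard | grep -A3 "<interface"

      # open the XML and change <model type='virtio-net'/> to e.g. <model type='e1000'/>
      virsh edit agileboard
      ```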
  25. I think the PGID default value is written incorrectly as 99. I assume that's just in the definition and isn't actually the default?