Marshalleq

Members
  • Content Count

    749

Community Reputation

91 Good

About Marshalleq

  • Rank
    Member
  • Birthday October 17

Converted

  • Gender
    Male
  • URL
    https://www.tech-knowhow.com
  • Location
    New Zealand
  • Personal Text
    TT

Recent Profile Visitors

1790 profile views
  1. I've just run this script, and now everything is crawling slow. I guess lancache probably wasn't working before, or it really needs sendfile to function efficiently. I'm going to have to disable it if I can't fix it. I remember this slow speed happening a long time ago, so perhaps it hasn't been working since then at all and this is just how it is. As a guide, I'm on gigabit fibre and my initial download starts off fast, e.g. 60 MB/s, then drops down to kB/s; at one point Steam was telling me it would be completed in 'years' lol. It bounces up again, but n
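For context on why sendfile matters here: lancache serves cached game files through nginx, and nginx's `sendfile` directive lets the kernel stream a file straight to the socket instead of copying it through userspace. A minimal sketch of the relevant directive (the path and surrounding context inside the lancache container are assumptions, not verified against the image):

```nginx
# Hypothetical fragment of an nginx config like the one lancache ships.
# sendfile hands cached files to the kernel directly, which is what this
# post suggests lancache needs to sustain gigabit speeds.
http {
    sendfile    on;     # the setting reportedly reset/disabled here
    tcp_nopush  on;     # batch response headers with sendfile chunks
}
```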
  2. @ConnectivIT @Josh.5 Hey, so can't we just mount the /etc/ folder onto the local server, e.g. /mnt/SSDPool1/docker/lancache/etc? I assume it's resetting the sendfile setting because the container gets updated or something? I haven't tested whether this impacts me or not, but I believe it will, since I have the same kind of setup. While looking into it, I noted there is no persistent store for the lancache configuration, only for logs and data. Thanks.
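A rough sketch of what that extra mount might look like in a compose file, assuming the lancachenet/monolithic image and the host paths mentioned in the post. Note this scopes the mount to the nginx config directory rather than all of /etc (bind-mounting the container's entire /etc would likely break it), and whether the container tolerates or simply regenerates these files at startup is untested:

```yaml
# Sketch only - host paths and the idea of persisting the nginx config
# directory are assumptions from this post, not a verified lancache setup.
services:
  lancache:
    image: lancachenet/monolithic
    volumes:
      - /mnt/SSDPool1/docker/lancache/cache:/data/cache   # persisted by default
      - /mnt/SSDPool1/docker/lancache/logs:/data/logs     # persisted by default
      - /mnt/SSDPool1/docker/lancache/etc:/etc/nginx      # the proposed extra mount
```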
  3. There is no issue with ZFS 2.0.0-1 running the docker image on ZFS; it only appears once you get to the next version and above. I think I saw someone else talking about this lancache issue, and since I run it myself, I should really look into it. Thanks.
  4. Already did that some time back. I don't expect it to come anytime soon given their stance on ZFS.
  5. Yeah, it shouldn't. Nevertheless, I do recall from joly0's logs that it was the loopback process that was stuck at 100% CPU while this issue was happening. Not that I investigated it deeply; that's just what I remember.
  6. Also, I think it would be a good idea to keep zfs > 2.0.1 out of main until this is resolved?
  7. For clarity, I do believe someone posted previously that 2.0.3 still exhibits this issue and that the workaround is to ensure your docker.img file is not placed on a ZFS partition. I'm stuck on 2.0.1 because I no longer have a filesystem other than ZFS (and don't want one). We seem to be at a deadlock, where we require help from someone knowledgeable about the inner workings of docker on Unraid. @ich777 Perhaps with your kernel helper there would be a way to build ZFS support into docker as a workaround, as per Squid's comment below? Potentially we could then get rid of
  8. Probably this thread, or this one.
  9. @Squid Is there a particular developer you could recommend that knows the ins and outs of docker that might be able to steer us on this issue? It would be great to have it sorted out so that @limetech doesn't have this as a blocker for any future zfs implementation. We are all looking forward to that and it would be disappointing to have another reason to delay our chances. 😡
  10. @squid For those who want the quick summary, this thread ends with: "There's different drivers for docker to be on different filesystems. AFAIK, there's only the 2 included in the OS -> btrfs and xfs. You could probably do a feature req to have the driver for zfs included for docker"
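For anyone wondering what "including the zfs driver" would mean in practice: on a stock Docker install (outside Unraid's docker.img scheme), the storage driver is selected in /etc/docker/daemon.json, and the zfs driver requires /var/lib/docker to already sit on a ZFS dataset. A sketch, not an Unraid-supported configuration:

```json
{
    "storage-driver": "zfs"
}
```

After a daemon restart, `docker info --format '{{.Driver}}'` shows which driver is actually in use.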
  11. Good question - I should be more specific. I just found files on the boot drive that I 'assume' are core to docker functioning. My bad for assuming, but I deleted them because I assumed they shouldn't be there, given I point docker at a different drive. That deletion removed my docker config, and it was recreated - which, I 'assume again, sorry', is back on the boot drive. I should check that. So unless it's obvious to you, I assume docker requires some files somewhere in /boot in addition to those specified on the docker config GUI page, which I did double check was c
  12. I've tried that. It doesn't work. Apparently there's something in Unraid that requires ZFS to be compiled in for it to work. Sounded strange, but the explanation looked solid. That response is written either in this thread or in the general forum, probably the forum. Logged under me.
  13. @Joly0 To do what I did for the GitHub ticket, I set up another pool on a spare NVMe drive (figuring a pool would be more Unraid-workable than an unassigned disk). Note it's a single NVMe drive with no other drives in the pool, not even a parity. I pointed docker directly at that new pool - but today I found out that, for some reason, it put a lot of core docker files on my USB boot drive. I even double checked, and it was definitely still configured to point at the NVMe drive. I found this out because I thought it was a hangover from something else and deleted the usb
  14. I actually forgot all about this - have been away for work. My last post when it was working was January 7. I see it's been working well until January 18, upon which it stopped again (that's 7 days ago). @steini84 Is there some way to turn on extra logging to a file? Thanks.
  15. Yes, I don't know what the cause is, but a few days ago I changed to q35-5.0 and that seemed to fix it. Before that, it didn't lock up, but it was painfully slow - though it looked like a lockup if you weren't prepared to wait 20 minutes for your VM to boot. So possibly you could move to 5.0 as well. Edit - 5.0 still had slowness issues, just less than 5.1 - trying 4.2 again. Looking through the changes here, there's not a lot that's changed, so it shouldn't be too hard to pin down.
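For anyone wanting to try the same change by hand rather than through the Unraid VM form, the machine type lives in the VM's libvirt domain XML. A fragment like this pins it to q35-5.0 (the arch and other attributes will vary per VM; this is illustrative, not a complete domain definition):

```xml
<!-- Only the machine attribute matters here; pc-q35-5.1 and pc-q35-4.2
     are the other types discussed in this thread. -->
<os>
  <type arch='x86_64' machine='pc-q35-5.0'>hvm</type>
</os>
```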