
Marshalleq

Members
  • Posts: 968
  • Joined
  • Last visited

Everything posted by Marshalleq

  1. You have a functioning docker 'folder' on a ZFS filesystem, rather than a docker folder on BTRFS / XFS layered on top of ZFS?
  2. Some time ago there was an issue with custom docker networks; you should probably disable that and check. I've updated my test system now and it 'seems' to be OK with the latest ZFS and latest Unraid. If it is, I'll try adding the few docker containers we have in common and see if there's an impact.
  3. Hey, thanks so much for your help Arragon, this is great! So of the 3 dockers Joly0 has in common, I have Nextcloud and MariaDB. There are plenty of others I have in common with Arragon, but I'll leave them out as they seem to be irrelevant. I also have a test box running Unraid that is basically only used for encoding, so it has nothing in common docker-wise with what either of you have listed; I'll upgrade it all to latest shortly and see if it's impacted.
     As far as ZFS setups, my test box has:
       • AMD Threadripper 1950X with 128GB RAM
       • RaidZ1: 3x8TB Seagate HDDs (SATA)
       • Mirror: 4x240GB M.2 Intel SSDs (480GB total usable space, SATA)
       • XFS: 128GB Intel NVMe for testing docker.img
     The other box is similar:
       • Intel Xeon E5-2620 v3 with 96GB ECC RAM
       • RaidZ1: 4x16TB Seagates (SATA)
       • Mirror: 2x150GB + 2x480GB Intels (SATA)
       • Standalone ZFS: 1x960GB (SATA)
     This box has some of the disks on a Dell Perc H310 in IT mode. Both of these boxes have exhibited the issue, and both have docker.img on mirrored ZFS shared with active docker configs. One thing I don't see listed above is fstrim, which I have enabled on all my SSDs. I also have xattr set to sa and I am using the new compression type zstd-1. I have attached the output of zpool get all and one zfs get all from a snapshot, plus a screenshot of my docker settings below. What is your host access to custom networks set to? That's something else I've changed from the default. ZpoolGetAll.rtf ZFSGetAll.rtf
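For anyone wanting to compare settings, the properties mentioned above can be checked and set like this. The pool name is an example only, substitute your own:

```shell
# Hypothetical pool name -- replace SSDPool1 with your own pool/dataset.
POOL=SSDPool1

# Check the properties discussed above in one go
zfs get xattr,compression "$POOL"

# Set them: xattr=sa stores extended attributes in the inode,
# zstd-1 is the lightest zstd compression level
zfs set xattr=sa "$POOL"
zfs set compression=zstd-1 "$POOL"
```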
  4. I wonder if this should be asked on the ZFS forums first, to confirm what kind of activity happens and how often? It may be that, e.g., some kind of metadata check happens every 10 minutes or something. I used to wonder about this too, but I've recently upgraded to 16TB Seagates that have some kind of magic 'nearly the same as spindown' idle mode, so I'm not so concerned now. It's a good question though!
  5. Found the problem - apparently Unraid will do this when you have added a docker container other than via the Unraid GUI. Perhaps you could get around it by adding something else into the docker container via the GUI, but deleting the other docker containers got around the issue. Go figure.
  6. Yeah, so that issue will be over once it's baked in, which is exciting. And yes again, DVD hoarders lol, I remember guys that were like that, whole walls of money spent on CDs and such. The thing I don't get is there's a whole market there in that file and print space (and even as a home market) that Unraid almost completely flips the bird at; maybe they don't want to be compared with FreeNAS / TrueNAS (which is looking pretty exciting at the moment). But Unraid still wins, or at least has the up-and-coming tag, when you add in the community support - Unraid has that area nailed and I doubt anyone will ever be able to compete with it. Actually I don't think I've ever seen its like on any product online, ever. The other thing on my wishlist would be a proper software network stack - it doesn't work well when you start swapping around network cards (to the disbelief of others who looked at my message the day I decided to rant, lol). One day I'll document it properly - when I have the will to go up against the man again.
  7. I've upgraded to 6.9 final on another machine today. I notice some of the dockers have my previous pinning on, however I'm not able to set it on the 4 dockers I'm trying to limit (haven't tried the others yet). What actually happens is: I can select the threads I want and apply them, and it looks like it applies after I press apply, but if I refresh the screen or go out, they're all gone again - back to how it was originally. Perhaps it's some kind of permissions issue; it's behaving like it's read-only but doesn't know it. I can confirm it's working on my main machine on the latest 6.9, so it's likely not 6.9 specific. I can also confirm it is working fine for VMs. Weird.
  8. Limetech wasn't expressing frustration with BTRFS as far as I know; they did however hear some others' frustration with BTRFS, which in one interview was referred to as a 'certain group who remain vocal about the issues', or words to that effect. One has to wonder how many failures a file system has to go through before you will agree it's problematic (there are plenty recorded on these forums), and in my world one failure is one too many. Anyway, to answer your question, I do believe LimeTech said that they are most definitely looking at bringing it in - most definitely not in this latest version, but in the future. No date, no release has been provided and no guarantee either, just a good wholehearted "we're looking at it" - which is very exciting and hopeful sounding. I doubt it's very hard, to be honest. My one hope is that it's not limited to existing within their own Unraid array, as its main benefits would then largely be unavailable - and that things like docker and VMs will run on it. Personally I think the future of Unraid lies in its unraid driver being for mass storage and ZFS for everything else. However, I've migrated completely away from the unraid raid driver; I found it made my system too sluggish. But there are many advantages to Unraid over and above its disk setup, so it's still worth it for those, I think. I do still think about the enterprise features that are missing though. The basic file and print, backup, user accounts, directory management etc. would be a great start. Hopefully some day in the future.
  9. That link takes me to a page which does not have any reference to SMB that I can see - where exactly is the workaround on SMB? Many thanks.
  10. @Arragon what @steini84 is suggesting is to mount your docker to a folder rather than a .img file. I'm not sure if the bit above is new since the previous beta, but it would pay to check. If it works, that could solve the problem and enable us to run later versions of ZFS. The problem I see is we still don't know why .img files won't run on a ZFS filesystem above 2.0.0, and seemingly this is specific to Unraid - no reports from any other system. I'll try it later when I have some time. Also, I'm not sure if that zfs storage driver refers to ZFS as an underlying file system, or within the docker container - I assume the latter.
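As a sketch of how you'd check what's actually in play before and after switching to a folder (the path is an example only, not a verified default):

```shell
# Which storage driver is the Docker daemon actually using?
# On Unraid this is typically btrfs or xfs when backed by docker.img.
docker info --format '{{.Driver}}'

# After switching Settings -> Docker to "directory" mode, the docker
# root would live at a path you choose, e.g. (example path):
#   /mnt/SSDPool1/docker/
# Confirm what filesystem sits underneath that path:
df -T /mnt/SSDPool1/docker/
```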
  11. I've just run this script, and now everything is crawling slow. I guess because lancache probably wasn't working before, or because to function efficiently it really needs sendfile. I'm going to have to disable it if I can't fix it. I remember this slow speed happening a long time ago, so perhaps it hasn't been working since then at all and this is just how it is. As a guide, I'm on gigabit fibre and my initial download starts off fast, e.g. 60 MB/s, then gets down to KB/s - at one point Steam was telling me it would be completed in 'years' lol. It bounces up again, but not to more than about 2 MB/s. I'm also noticing problems with my MS Teams client for some reason; not sure if it's related, but it's a mighty coincidence.
  12. @ConnectivIT @Josh.5 Hey, so can't we just mount the /etc/ folder onto the local server, e.g. /mnt/SSDPool1/docker/lancache/etc? I assume it's resetting the sendfile setting because the container gets updated or something? I haven't tested whether this impacts me or not, but I believe it will, since I have the same kind of setup. And while looking into it, I noted there is no persistent store for lancache configuration, only logs and data. Thanks.
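Something like the following is what I had in mind. Note this is a sketch: the /etc/nginx path inside the container is an assumption on my part (I haven't verified where the image keeps its sendfile config), and the host paths are examples only - the data and logs mounts follow the image's documented volumes:

```shell
# Sketch: bind-mount the nginx config dir so edits (e.g. sendfile)
# survive container updates. Container-side /etc/nginx is an assumption.
docker run -d --name lancache \
  -v /mnt/SSDPool1/docker/lancache/etc:/etc/nginx \
  -v /mnt/SSDPool1/docker/lancache/data:/data/cache \
  -v /mnt/SSDPool1/docker/lancache/logs:/data/logs \
  lancachenet/monolithic:latest
```

The caveat with bind-mounting a config directory is that the host folder must be pre-seeded with the container's default files first, otherwise the mount hides them and nginx won't start.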
  13. There is no issue with ZFS 2.0.0-1 running the docker image on ZFS, it's only when you get to the next version and above. I think I saw someone else talking about this lancache issue and I do run it myself, so should really look into it. Thanks.
  14. Already did that some time back. I don't expect it to come anytime soon given their stance on ZFS.
  15. Yeah, shouldn't. Nevertheless, I do recall in the logs from joly0 that it was the loopback process that was stuck on 100% cpu while this issue was happening. Not that I investigated it deeply, that's just what I remember.
  16. Also, I think it would be a good idea to keep zfs > 2.0.1 out of main until this is resolved?
  17. For clarity, I do believe someone posted previously that 2.0.3 still exhibits this issue, and the workaround is to ensure your docker.img file is not placed on a ZFS partition. I'm stuck on 2.0.1 because I no longer have a filesystem other than ZFS (and don't want one). We seem to be at a deadlock where we require some help from someone knowledgeable about the inner workings of docker on Unraid. @ich777 Perhaps with your kernel helper there would be a way to build ZFS support into docker as a workaround, as per Squid's comment below? Potentially we could then get rid of docker.img entirely, which seems to be the issue with running a loop device on ZFS > 2.0.1. @joly0 Just thinking about your report that the problem process was loopback - I wonder if we could test a number of different loopback devices to see if there's anything in that.
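For anyone wanting to poke at the loopback side, here's a quick way to see which loop device backs docker.img and what filesystem it sits on (paths are examples, substitute your own):

```shell
# List all loop devices with their backing files; docker.img should
# appear in the BACK-FILE column if the docker service is running.
losetup -l

# Confirm the filesystem type underneath the image file itself --
# the workaround above amounts to making sure this is not ZFS.
df -T /mnt/SSDPool1/docker.img
```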
  18. Probably this thread, or this one.
  19. @Squid Is there a particular developer you could recommend that knows the ins and outs of docker that might be able to steer us on this issue? It would be great to have it sorted out so that @limetech doesn't have this as a blocker for any future zfs implementation. We are all looking forward to that and it would be disappointing to have another reason to delay our chances. 😡
  20. @squid For those who want the quick summary, this thread ends with: "There's different drivers for docker to be on different filesystems. AFAIK, there's only the 2 included in the OS -> btrfs and xfs. You could probably do a feature req to have the driver for zfs included for docker"
  21. Good question - I should be more specific. I found files on the boot drive that I 'assume' are core to docker functioning. My bad for assuming, but I deleted them because I thought they shouldn't be there, given I point docker at a different drive. That deletion removed my docker config, which was then recreated - which I 'assume, again sorry' is back on the boot drive; I should check that. So unless it's obvious to you, I assume docker requires some files somewhere under /boot in addition to those specified on the docker config GUI page, which I did double-check was correctly configured NOT to point at the boot drive. It seemed logical that, since my configuration pointed elsewhere and deleting these files broke the docker config, these files were core to docker (on Unraid) functioning.
  22. I've tried that. It doesn't work. Apparently there's something in Unraid that requires ZFS compiled in for it to work. Sounded strange but the explanation looked solid. That response is written either in here or in the general forum, probably the forum. Logged under me.
  23. @Joly0 To do what I did for the GitHub ticket, I set up another pool on a spare NVMe drive (figuring a pool would be more Unraid-workable than an unassigned disk) - noting it's a single NVMe drive with no other drives in the pool, not even a parity. I pointed docker directly at that new pool, but for some reason I found out today that it put a lot of core docker files on my USB boot drive. I even double-checked, and it was definitely still configured to point at the NVMe drive. I found this out because I thought it was a hangover from something else and deleted the USB drive files (which were just in an appdata folder). I honestly don't know what is wrong with Unraid here. Perhaps it's always had these files on the USB and I've just never noticed them before. I'll recreate on an unassigned disk and try again - after a day or so though; otherwise docker is working as long as the docker image is not placed on a ZFS drive, as per your experiment.
  24. First point (and forgive me if I misread - you may know this already, but the way I read it I had to point it out): you can't just change your cache drive to XFS like that. You first have to copy off the data you need (which is probably where your appdata is - it's often there by default), then reformat the drive. If you've already reformatted the drive, you'll have to create a new appdata. Assuming I'm right about the above, it may be that:
     a) You're pointing it at your unformatted XFS drive, in which case you might be able to change it back to BTRFS and be lucky.
     b) It's the Unraid bug (never figured it out myself) where sometimes you just have to type the path into that box manually - for some reason the browser doesn't always work. For reference, mine IS currently working for external devices.
     I'm sorry if this is not helpful, but it's all I can think of right now.
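A rough sketch of the copy-off step from the command line. The paths assume the usual /mnt/cache and /mnt/disk1 layout; adjust to your own system:

```shell
# Copy appdata off the cache drive before reformatting.
# -a preserves permissions/ownership, which matters for container data.
rsync -avh --progress /mnt/cache/appdata/ /mnt/disk1/appdata-backup/

# ...reformat the cache drive to XFS via the GUI, then copy back:
rsync -avh --progress /mnt/disk1/appdata-backup/ /mnt/cache/appdata/
```

Stop the docker service first, otherwise open files in appdata can be copied in an inconsistent state.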
  25. This shouldn't really need to be 'handled' at all. Unraid forces the use of nobody.users, and I can see why, TBH. But if they're going to do that, they should really default all the built-in writes to their chosen permissions format. File sharing has never been the strong suit of Unraid unfortunately, but with a bit of tweaking it can work OK.