S1dney

Members
  • Content Count

    65
  • Joined

  • Last visited

Community Reputation

26 Good

1 Follower

About S1dney

  • Rank
    Advanced Member

  1. What are you trying to accomplish exactly? Docker containers on the default bridge are able to reach each other by IP address; if you want them to be able to reach each other by name, then you have to use a custom network (ref to Docker Docs). As for pihole, I've put mine on the custom br0 bridge and gave it an IP in my own segment, since this allows me to use port 80 on the host and still have pihole available on that same port. No VLAN was required at all for this. Note that I do use VLANs, but that's mainly to be able to reach the host from a docker container: I've created a macvlan driver network on that VLAN, without giving the host an IP in that same VLAN, which allows me to reach the unRAID host on its usual address from the newly created network. A sketch of both networks is below.
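     Roughly what that looks like on the CLI (a minimal, illustrative sketch; the subnets, VLAN ID, network names and container names are examples, not my exact values):

         # Custom macvlan network on the main bridge, so pihole gets its own LAN IP
         docker network create -d macvlan \
           --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
           -o parent=br0 lan_net

         # Second macvlan network on a VLAN sub-interface; the host has no IP in
         # this VLAN, so containers here still reach unRAID on its usual address
         docker network create -d macvlan \
           --subnet=192.168.40.0/24 --gateway=192.168.40.1 \
           -o parent=br0.40 vlan40

         # Attach pihole with a fixed IP in the LAN segment
         docker run -d --name pihole --network lan_net --ip 192.168.1.53 pihole/pihole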
  2. Ok understood, I'll give that a go. Sad that it had to go though. Thanks for the alternative way! Appreciate it.
  3. Hey guys, I was wondering what happened to the docker-compose command? Rebooted my server after 39 days today and the nerdpack apparently got an update, but this left me without docker-compose. Running on 6.8.0-rc7; I did notice some commits on the GitHub repo less than 39 days ago. My docker setup greatly depends on docker-compose, so I'd like to have it back 🙂 Appreciate the effort! Best regards. EDIT: Noticed that it was still listed at "https://github.com/dmacias72/unRAID-NerdPack/tree/master/packages/6.6", but not at "https://github.com/dmacias72/unRAID-NerdPack/tree/master/packages/6.7" and above. So I was able to get it for now by putting version 6.6 in the NerdPackHelper script (see the sketch below for a manual alternative). I imagine this was a mistake? Thanks again!
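     If you'd rather not touch the helper script, manually pulling the package from the 6.6 tree should do the same (a sketch; the filename here is a placeholder, check what's actually listed under packages/6.6):

         # Download the docker-compose package from the 6.6 branch and install it
         # (replace the filename with the one actually present in packages/6.6)
         cd /tmp
         wget https://raw.githubusercontent.com/dmacias72/unRAID-NerdPack/master/packages/6.6/docker-compose-x86_64-1.txz
         installpkg docker-compose-x86_64-1.txz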
  4. Yeah, I know. It's in /mnt/cache/system/, as is the docker.img file. That img file is then mounted (onto /var/lib/docker in docker's case), so that when the docker daemon writes to its /var/lib directory (new container images or some logging data, for example), it writes into the file on the cache instead and survives a reboot. That way of working was causing the writes at docker's end, since creating a symlink to a location on the cache and disabling the loop device makes it stop. I haven't really spent any time on the hypervisor, but it looks like the libvirt image (located at /mnt/cache/system/libvirt/libvirt.img) mounts itself on /etc/libvirt. Looking through the files there, this doesn't seem like a directory that is written to much, as it just contains some XML, conf and non-volatile RAM files. Also, I just noticed your new post, that's crazy!! I must admit that I thought I had pinned this down a bit, but now that users are starting to report these numbers on the hypervisor side as well, I'm really starting to doubt whether docker even has something to do with it. Like I said before, it would be worth checking what happens if you copy all files within /etc/libvirt (while the image is mounted) to a directory on the cache, create a symlink at /etc/libvirt pointing to that directory (/mnt/cache/somedir), and then modify the rc.libvirt file so that start_libvirtd() doesn't check for a mounted image; something like the sketch below. That is what stopped the writes on the docker side. Or… have some of the devs hop on board here, as this is all unsupported and not recommended of course (although a great way to spend some hours).
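     An unsupported sketch of that test (the target directory name is just an example, and I haven't actually run this on the libvirt side, so treat it as a starting point):

         # With libvirt.img still mounted, copy its contents to a plain directory on the cache
         mkdir -p /mnt/cache/system/libvirt_dir
         cp -a /etc/libvirt/. /mnt/cache/system/libvirt_dir/

         # Stop the VM manager, unmount the image, point /etc/libvirt at the cache directory
         /etc/rc.d/rc.libvirt stop
         umount /etc/libvirt
         rm -rf /etc/libvirt
         ln -s /mnt/cache/system/libvirt_dir /etc/libvirt
         # ...then edit start_libvirtd() in /etc/rc.d/rc.libvirt so it no longer
         # requires the mounted image before starting libvirtd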
  5. That looks identical indeed. In contrast, I don't see any dockerd commands in that output. From what I saw when I was troubleshooting this, the writes by loop2 would go up a lot once dockerd commands started to show; I assumed that a container was doing writes at that time, having docker interact with the loop device, which would in turn crank up writes on/in there as well. In my case with docker I was quite certain that it was the loop device's implementation, since bypassing the loop device solved the amount of writes. Now in your case I don't see any loop device, so that makes me wonder if we're on the wrong track here. I'm not really sure how to track exactly where the writes are going, but the loop2 process eating up storage was a good indication for blaming the loop device. I guess you'd have to test with a system that also mounts libvirt's directory directly onto the cache to find out. Thanks for checking though 👍
  6. Some docker containers tend to misbehave occasionally. Stop all of them and start them one by one to see which one increases writes out of proportion; iotop -ao is easy to use for this (see the sketch below). That's interesting. I've always assumed this was a combination of docker with the loop device implementation; libvirt's image is also mounted in the same manner, but that image is usually smaller, so I'd expect it to be written to less. Not sure though. My processor (i3 9100) seems to take a big hit from just running 1 VM, so I never took the VM route: starting so much as a browser inside a Windows VM cranks up all cores to 100%. Tried numerous things like messing with settings and passing the iGPU to the VM, but all the same. Eventually decided to stick with docker. What's the top 10 of disk writers when running iotop -ao for 30 mins?
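     The one-by-one test, roughly (assuming the containers can be driven from the regular docker CLI; the container names are examples):

         # Stop everything, then watch accumulated writes while starting containers one at a time
         docker stop $(docker ps -q)
         iotop -ao            # leave this running in a second terminal

         docker start plex    # start one container, give it a few minutes, check iotop
         docker start nginx   # then the next, and so on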
  7. That's interesting. That must be an option implemented after version 6.8.0-rc7. I'm still running that version, because I needed a newer kernel and didn't feel comfortable with the warnings issued for 6.9 beta 1. While I know the form-based authentication has some security issues, I prefer holding off, since my server is on the local LAN only. Can you send me the contents of 6.8.3's version of the rc.docker file in a PM (let's not go too far off topic on this bug report, so rather PM)? The start_docker() function must have been adjusted to include some settings on the docker daemon before starting it. Got me curious.
  8. It behaves as expected. All your docker containers are downloaded into the docker image (located in the system folder somewhere on the cache; docker.img is the file, I think). After the changes you've made, you're not mounting that anymore, but have docker targeted at a different directory. Docker will create the needed working directories upon service start, meaning that all containers are still inside the docker.img file. I initially re-created them using the templates from the dockerMan GUI; this isn't too much work, and all persistent data should not reside in the docker.img anyway, or you might lose it if the docker.img gets corrupted. I guess you could also copy all data over before implementing the script that mounts the cache directory, but I would recreate the containers if I were you. You should also recreate the docker.img image once you're done with everything, so that when something changes in a future unRAID version which potentially breaks this, you'll notice that you have no containers after a reboot and know the docker.img file is mounted or something else is wrong :-) A quick way to verify which directory docker is actually using is sketched below.
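     A sanity check after the switch (not part of the workaround itself, just to confirm docker targets the new directory):

         # Should print the cache directory you pointed docker at, not a loop2-backed path
         docker info --format '{{.DockerRootDir}}'
         # The loop device should no longer be mounted on /var/lib/docker
         losetup -a
         mount | grep /var/lib/docker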
  9. By this I think you're essentially moving the docker image (and thus the mount on /var/lib/docker) onto the array. So these writes should not go to the cache anymore, I guess. Docker will keep your array up non-stop though, which kind of defeats unRAID's selling point of being able to spin down disks. When you combine this with the Unassigned Devices plugin, you might be able to put it on a single disk for now (I think, haven't used the plugin before) and have the array still fall asleep. Good suggestion for some people who are not into making unsupported CLI/script tweaks, thanks for sharing! Also, as @chanrc is reporting, this really looks to be btrfs related, which is sad, because it's your only option if you want a redundant cache.
  10. That seems to be exactly the issue I was facing indeed. A container that wrote a lot would just bump up writes a lot faster, but in general every write docker makes seemed to get multiplied by who knows what factor. So that could potentially rule out encryption and pooling of disks, and would leave the combination of BTRFS and a loop device. I don't believe XFS was affected by this (I heard/read some users reporting XFS did not have these issues).
  11. You're right, which is why @itimpi's suggestion of having another category in between sounds like a good one. The unRAID file system is not the issue here. To get the writes down you have to take the loop device (loop2) out of the equation by mounting the docker directory in the filesystem directly (e.g. creating a symlink from /var/lib/docker to a location on the disk/cache, sketched below). The culprit seems to be docker in combination with the loop device; reading through some comments it's not entirely clear where (and if) btrfs (with or without encryption and/or pooling) also relates to this problem, but my guess is yes. There have been a lot of reports of certain docker containers (like the official Plex) writing a lot as well, so it's easy to confuse that with this bug, especially since it's possible you're affected by both. To have this solved you'd just need some devs on board: someone who spins up a test machine and is able to understand how this relates to page flushes etc. (sadly this goes beyond my knowledge). Then again, I'm sure the devs have other issues too, which might have more priority at the moment, although priority is usually driven by community calls, right? So this might shift.
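     The core of the workaround, sketched (unsupported; the directory name is an example, and rc.docker needs the start_docker() tweak described earlier so it doesn't insist on the mounted image):

         # Stop docker, then replace the loop-mounted /var/lib/docker with a symlink to the cache
         /etc/rc.d/rc.docker stop
         umount /var/lib/docker
         rm -rf /var/lib/docker
         mkdir -p /mnt/cache/system/docker_dir
         ln -s /mnt/cache/system/docker_dir /var/lib/docker
         /etc/rc.d/rc.docker start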
  12. Agreed! "Urgent" might be too generic, in that it sums up "Server crash", "data loss" and "showstopper" under one label. For now it seems to be a showstopper for a bunch of people, so it's still accurate. If a new category is created, let me know and I'll adjust 👍
  13. Changed Priority to Urgent >> Since I noticed this thread getting more and more attention lately, and more and more people urging it to be urgent instead of minor, I'll raise the priority on this one. Just an FYI: I made/kept it minor initially because I had a workaround that I felt satisfied with. If the Command Line Interface isn't really your thing, or you have any other reason not to tweak the OS in an unsupported way, I can fully understand the frustration. In the end... the community decides priority. Also updated the title to version 6.8.3 as requested. Cheers
  14. Seems you're just hitting this bug, and as far as that goes, this is expected behavior I think. It seems like every write docker does on the cache gets multiplied by 10 (at least). I recall seeing similar behavior: whenever docker starts to write, it hammers big time.
  15. The devs were involved in this topic already, but that has not yielded any results yet; there's no proper solution so far. The only way I've found to get around those excessive writes is by taking the loop device out of the equation. I've described a (non-supported) way to do so, which works well for me. You're running a BTRFS pool if I understand correctly? Unencrypted? I thought this was solely seen on encrypted pools. What you could do is install the "Nerd Pack" from Community Applications, then head over to Settings -> "Nerd Pack" and install "iotop". Then from the command line run "iotop -ao" (shows an accumulated view of all processes that actually read and write); a batch-mode variant for a longer capture is sketched below. See if one particular item (besides loop2) stands out in a big way; this could indicate one container is running wild. I have found topics where certain databases would write tremendously to the cache.
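     To capture an accumulated view over a longer window, iotop can also log in batch mode (flags: -b batch output, -a accumulated, -o only processes doing I/O, -d seconds between samples, -n number of samples):

         # Roughly 30 minutes: 360 samples at a 5 second interval, saved for later inspection
         iotop -b -a -o -d 5 -n 360 > /tmp/iotop_30min.log
         # Afterwards, check the end of the log for the biggest accumulated writers
         tail -n 40 /tmp/iotop_30min.log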