S1dney

Members · Posts: 98

Everything posted by S1dney

  1. Hahahah you went down the rabbit hole on this one didn’t you. I’ve spent a fair amount of time on this as well, but at some point decided those few gigabytes weren’t worth my time (also became a father in the meantime so less time to play 🤣👊🏻). Having said that, aren’t you better off writing a guide on cutting down these writes and posting it in a separate thread? This thread has become so big that it will scare off anyone accidentally coming here via Google or something. It would be a waste of the knowledge you have gained (that others could benefit from) if you ask me. Cheers 🤙🏻
  2. Just add the lines to download docker-compose to the go file and have the user scripts plug-in run a few lines of docker-compose commands after array start? Roughly like the sketch below. I haven’t set it up tho as I manually unlock my encrypted array anyways. Cheers
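     A minimal sketch of what that could look like (the compose path is just a placeholder, not from my setup): one line in /boot/config/go to install docker-compose, plus a User Scripts script scheduled "At Startup of Array":

         # /boot/config/go: install docker-compose at boot (pip3/python3 come from the NerdPack)
         pip3 install docker-compose

         # User Scripts plug-in, scheduled "At Startup of Array"
         #!/bin/bash
         COMPOSE_DIR=/mnt/user/appdata/compose   # placeholder path holding your docker-compose.yml
         cd "$COMPOSE_DIR" || exit 1
         docker-compose up -d                    # bring the stack up detached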
  3. So coming back on this bug report. I have upgraded to 6.9 on March 2nd and also wiped the cache to take advantage of the new partition alignment (I have Samsung EVO's and perhaps a portion of OCD 🤣). Waited a bit to get historic data.
     Pre 6.9:
         TBW on 19-02-2021 23:57:01 --> 15.9 TB, which is 16313.7 GB.
         TBW on 20-02-2021 23:57:01 --> 16.0 TB, which is 16344.9 GB.
         TBW on 21-02-2021 23:57:01 --> 16.0 TB, which is 16382.8 GB.
         TBW on 22-02-2021 23:57:01 --> 16.0 TB, which is 16419.5 GB.
     -> Writes of around 34/35 GB a day on average.
     On 6.9:
         TBW on 05-03-2021 23:57:01 --> 16.6 TB, which is 16947.4 GB.
         TBW on 06-03-2021 23:57:01 --> 16.6 TB, which is 16960.2 GB.
         TBW on 07-03-2021 23:57:01 --> 16.6 TB, which is 16972.8 GB.
         TBW on 08-03-2021 23:57:01 --> 16.6 TB, which is 16985.3 GB.
     -> Writes of around 12/13 GB a day on average.
     So I would say 6.9 (and reformatting) made a very big improvement. I think most of these savings are due to the new partition alignment, as I was already running docker directly on the cache and recall making a few tweaks suggested here (adding mount options, I cannot remember which exactly). Thanks @limetech and all other devs for the work put into this. This bug report is Closed 👍
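     Those TBW lines come from a small logging script; a minimal sketch of what such a script could look like (the device is a placeholder, and it assumes a Samsung EVO exposing the Total_LBAs_Written SMART attribute with 512-byte LBAs; the TB figure in my logs looks like GB/1024):

         #!/bin/bash
         # run this daily (e.g. via cron or the User Scripts plug-in) to log total bytes written
         DEV=/dev/sdb                              # placeholder: your cache SSD
         NOW=$(date '+%d-%m-%Y %H:%M:%S')
         smartctl -A "$DEV" | awk -v now="$NOW" '/Total_LBAs_Written/ {
             gb = $10 * 512 / 1000 / 1000 / 1000   # raw LBA count -> gigabytes
             printf "TBW on %s --> %.1f TB, which is %.1f GB.\n", now, gb / 1024, gb
         }' >> /boot/logs/tbw.log                  # placeholder log location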
  4. Without proper tooling the BTRFS layers will likely inflate and the image data won't be usable. With proper tooling this may work I think, but just wiping is way easier 😂
  5. The GO (and service.d file) modifications are not needed anymore in Unraid 6.9, as they basically created a way to do exactly that from the GUI. The GUI should now allow you to host docker files in their own directory directly on the cache (which is what the workaround did via scripts instead of the GUI). Haven't moved there myself yet as I still see enough issues with rc2 for now. I believe that reformatting the cache is also advisable, because they have made some changes to the partition alignment there as well (you may or may not see lower writes because of it; I'm using Samsung EVO's so I consider wiping and reformatting mine after the upgrade). Cheers!
  6. I've looked at my go file, I still have the pip3 way in place and it's working nicely:

         # Since the NerdPack does not include this anymore we need to download docker-compose ourselves.
         # pip3, python3 and the setuptools are all provided by the NerdPack
         # See: https://forums.unraid.net/index.php?/topic/35866-unRAID-6-NerdPack---CLI-tools-(iftop,-iotop,-screen,-kbd,-etc.&do=findComment&comment=838361)
         pip3 install docker-compose

     The second way via curl also works well, as I equipped my duplicati container with it. You don't have to set up any of the dependencies that pip3 needs, so for simplicity reasons this may be preferable. Cheers!
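     For completeness, the curl way is roughly this (the version number is only an example, pick whichever release you want):

         # download a static docker-compose binary and make it executable
         curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" \
             -o /usr/local/bin/docker-compose
         chmod +x /usr/local/bin/docker-compose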
  7. Best wishes to you too 👍 Whether to use docker-compose or Unraid’s own DockerMan depends on what you want and/or need. The DockerMan templates give you an easy option to redeploy containers from the GUI, and you’ll also be notified when a docker container has an update available. I like Unraid’s tools so I default to these and create most containers from the GUI; however certain scenarios cannot be set up via DockerMan (at least not when I checked), like linking the network interfaces of containers together so that all of a container’s traffic flows through another one (see the sketch below). Therefore I use docker-compose for those, and I also have some containers I’ve built myself that I want to set up with some dependencies. I update them via a bash script I wrote and redeploy them afterwards, as the GUI is not able to modify their properties (which answers your second question: containers deployed outside of the DockerMan GUI do not get a template and cannot be managed/updated, you can start and stop them tho). My advice would be: stay with the Unraid default tooling unless you find a limitation which might make you want to deviate from it.
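     What I mean by linking network interfaces, roughly (container and image names are placeholders, not my actual setup):

         # run a VPN client container first (placeholder image)
         docker run -d --name vpn --cap-add=NET_ADMIN --device /dev/net/tun example/openvpn-client
         # attach a second container to the vpn container's network stack,
         # so all of its traffic leaves through the VPN container
         docker run -d --name downloader --network=container:vpn example/downloader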
  8. I know haha, but I'm waiting for the upgraded kernel; I'm still on the latest build that had the 5+ kernel included.
  9. Wow these releases are PACKED!! Can't wait for this to reach a stable release. Thanks for the improvements 🙏
  10. You're welcome. Hahah well you're basically answering yourself. If you were exposing the services to the outside world it would make sense to send the traffic through a reverse proxy, so you would only have to open up one port. Another use case for a reverse proxy would be hosting two containers on the host's address that require the same port to function (like the common 80 or 443); the reverse proxy would be able to route traffic based on hostnames and allow you to keep using that port for the client application that expects the server to be available on it (see the sketch below). I have also looked at (or actually implemented) the nginx reverse proxy, but decided to just put the container on a different IP and call it a day. My todo list still has Traefik on it hahah, but too much on there atm. Also, I can so much relate to this statement hahah: That's why unraid is so much fun! Cheers man.
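      To illustrate the hostname routing idea (hostnames, IPs and ports are made up, certificates left out for brevity): one nginx reverse proxy owns the port on the host and picks the upstream based on the requested hostname.

          # write a minimal nginx site config via a heredoc (path is a placeholder)
          cat > /mnt/user/appdata/nginx/site.conf <<'EOF'
          server { listen 80; server_name app1.example.com; location / { proxy_pass http://192.168.1.10:8081; } }
          server { listen 80; server_name app2.example.com; location / { proxy_pass http://192.168.1.10:8082; } }
          EOF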
  11. Why don't you put the docker containers on br0 and assign them unique IP addresses, so DNS is able to distinguish based on IP? If you want to route traffic to a different port on the same IP you would have to inspect the DNS name queried and route accordingly, which is where a reverse proxy would come into play. The br0 route is the easiest solution for you (it does not require you to dive into the reverse proxy stuff as a networking n00b); roughly like the example below.
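      Roughly like this (the image name and the IP are placeholders; pick a free address in your LAN's subnet):

          # give the container its own address on the br0 (macvlan) network
          docker run -d --name someapp --network br0 --ip 192.168.1.201 example/someapp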
  12. You're right, sorry about that, went through here before I went through my bug report. I do think that the casual user will benefit more from the loopback approach indeed since it's less complex and requires less maintenance. Have a great day 👍
  13. Surely, you will lose it when you upgrade the containers too, so you'll find out soon enough. Wiping out the directory is essentially recreating the docker image, so that's fine. Also I understand that you're trying to warn people, and I agree with you that for most users taking the loopback approach will work better and cause less confusion. It's great that we can decide this ourselves though; unRAID is so flexible, which is something I like about it.
  14. Agreed, which is why having options for both the loopback image and the folder is best of both worlds. Also if I ever wanted to move the data I would just remove the entire folder and recreate it anyways since it's non-persistent data.
  15. Noticed that serious effort has been put into placing docker in its own directory instead of on the loop device. Very very happy that I can keep using the folder based approach, as it just works very well. Thanks so much for listening to the community @limetech!!
  16. This is great!! I have been running docker on its own (non-exported) share on a btrfs partition since December last year, and I'm very happy with it. I thought that when the docker/btrfs write issue got solved I would have to revert to a docker image again, but being able to keep using my share in a supported way from the GUI is just perfect. I would take the folder approach over a loop device any day! I'll keep an eye on this when it makes it out of beta; for now, keep up the good work, very much appreciated 😄
  17. Very interesting indeed. This got me thinking... I noticed that writing directly onto the BTRFS cache reduced writes by a factor of roughly 10. Now I did feel like this was still on the high side, as it's still writing 40GB a day. What if... this is still amplified by a factor of 10 as well? Could this mean that a BTRFS-formatted image on a BTRFS-formatted partition results in 10x10=100 times write amplification? If I recall correctly someone pointed out a 100x write amplification number earlier in the thread? I think this is well suited for a test 🙂 I've just recreated the loop image, formatted as XFS. I'll check my TB's written in a few minutes and check again after an hour.
      EDIT: Just noticed your comment @tjb_altf4. The default seems to work already, XFS is formatted nicely with the correct options:
          root@Tower:/# docker info
          Client:
           Debug Mode: false
          Server:
           Containers: 21
            Running: 21
            Paused: 0
            Stopped: 0
           Images: 35
           Server Version: 19.03.5
           Storage Driver: overlay2
            Backing Filesystem: xfs
            Supports d_type: true
      According to the docker docs this should be fine, which xfs_info /var/lib/docker seems to confirm:
          root@Tower:/# xfs_info /var/lib/docker
          meta-data=/dev/loop2      isize=512    agcount=4, agsize=1310720 blks
                   =                sectsz=512   attr=2, projid32bit=1
                   =                crc=1        finobt=1, sparse=1, rmapbt=0
                   =                reflink=1
          data     =                bsize=4096   blocks=5242880, imaxpct=25
                   =                sunit=0      swidth=0 blks
          naming   =version 2       bsize=4096   ascii-ci=0, ftype=1
          log      =internal log    bsize=4096   blocks=2560, version=2
                   =                sectsz=512   sunit=0 blks, lazy-count=1
          realtime =none            extsz=4096   blocks=0, rtextents=0
      I'm just testing it out, cause I'm curious if it matters.
      EDIT2, after 1 hour of runtime:
          TBW on 28-06-2020 13:44:11 --> 11.8 TB, which is 12049.6 GB.
          TBW on 28-06-2020 14:44:38 --> 11.8 TB, which is 12057.8 GB.
      1 hour of running on an XFS-formatted loop device equals 8.2GB written, which would translate into 196.8GB a day. It would most likely be a bit more due to backup tasks at night. That's still on the high side compared to running directly on the BTRFS filesystem, which results in 40GB a day. In December 2019 I was seeing 400GB a day though (running without modifications), and my docker count has increased a bit, so 200 is better. Haven't tried any other options, like the mount options specified. I expect those will bring the writes down regardless of the loop device being used, since they're applied to the entire BTRFS mount, so the amplification with the loop device is likely to occur with them as well. Still kind of sad though, I would have expected to see a very minor write amplification instead of 5 times. Guess that theory of 10x10 doesn't check out then.. Rolling back to my previous state, as I'd take 40GB over 200 any day 😄
      EDIT3: Decided to remount the cache with space_cache=v2 set; running directly on the cache this gave me 9GB of writes in the last 9 hours. When the new unRAID version drops I'll reformat my cache with the new alignment settings. For now that space_cache=v2 setting does its magic
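      The remount from EDIT3 is basically a one-liner (assuming the cache pool is mounted at /mnt/cache; it does not survive a reboot, so I'd reapply it from a script at array start):

          mount -o remount,space_cache=v2 /mnt/cache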
  18. Thank you both for this, communication is very much appreciated, as well as your efforts! Most of us know how busy you all have been, so don’t worry about it 🙂 I have not read anyone reporting this on HDD’s (I’ve read all comments actively). @TexasUnraid has been shifting data around a lot; have you tried regular hard drives with btrfs by any chance? @jonp hope you’ll find a good challenge here, and also, happy birthday in advance! 🥳
  19. You cannot fix this by changing a couple of docker containers, cause docker containers are not the root cause of the problem; a busy container will just show the problem more. The only "fixes" that have been working for others were:
      1) Formatting to XFS (works always)
      2) Remounting the BTRFS cache with the nospace_cache option, see @johnnie.black's https://forums.unraid.net/bug-reports/stable-releases/683-docker-image-huge-amount-of-unnecessary-writes-on-cache-r733/?do=findComment&comment=9431 (seems to work for everyone so far)
      3) Putting docker directly onto the cache (some have reported no decrease in writes, although others have; this is the one I'm using and it's working for me)
      I may have missed one, but suggestion 2 is your quickest option here; roughly the command below.
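      Suggestion 2 boils down to something like this (assuming the cache is mounted at /mnt/cache; see johnnie.black's post linked above for the details):

          mount -o remount,nospace_cache /mnt/cache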
  20. I must say that I was reluctant to believe these statements; I had been testing writing stuff to the BTRFS cache devices in the beginning and could not notice the write amplification there. Now going back to the fact that my SMART data still shows my drives writing 40GB a day, on second thought this does seem quite a lot.
          TBW on 14-06-2020 23:57:01 --> 12.1 TB, which is 12370.2 GB.
          TBW on 15-06-2020 23:57:01 --> 12.1 TB, which is 12392.6 GB.
          TBW on 16-06-2020 23:57:01 --> 12.1 TB, which is 12431.4 GB.
          TBW on 17-06-2020 23:57:01 --> 12.2 TB, which is 12469.0 GB.
          TBW on 18-06-2020 23:57:01 --> 12.2 TB, which is 12507.4 GB.
          TBW on 19-06-2020 23:57:01 --> 12.3 TB, which is 12547.5 GB.
      I'm not really complaining though, cause these writes are negligible on 300TBW-warranty drives. However.... since docker lives directly on the BTRFS mountpoint this might as well be lower, since my containers aren't that busy. Still considerably lower than the 300/400GB daily writes while still using the docker.img file:
          TBW on 11-11-2019 23:59:02 --> 3.8 TB, which is 3941.2 GB.
          TBW on 12-11-2019 23:59:01 --> 4.2 TB, which is 4272.1 GB.
          TBW on 13-11-2019 23:59:01 --> 4.5 TB, which is 4632.5 GB.
          TBW on 14-11-2019 23:59:01 --> 4.9 TB, which is 5044.0 GB.
          TBW on 15-11-2019 23:59:01 --> 5.2 TB, which is 5351.3 GB.
          TBW on 16-11-2019 23:59:01 --> 5.3 TB, which is 5468.8 GB.
          TBW on 17-11-2019 23:59:01 --> 5.5 TB, which is 5646.1 GB.
  21. I think @tjb_altf4 sums it up quite well. Not sure about the CoW setting though, I remember having played with that initially (it's a while ago). Every write is indeed amplified, and it definitely seems related to the loop device implementation, cause mounting directly on the btrfs mountpoint makes it stop. Also it doesn't appear to be just a reporting issue, as the drive's lifespan decreases. Nevertheless a very very very big salute on the work here, I can only imagine the amount of work that went in! 🤯 People tend to complain when a bug isn't fixed in 2 days, but forget the amount of work that is being done behind the scenes. I don't have a test server to test this on, so I'm waiting patiently for the RC or stable builds. 🍺🍺🍺🍺 EDIT: Just noticed @johnnie.black's comments on the thread, that it might be solved by something that has been changed in this build. As I'm skipping this one I'm curious about the results other people are seeing 🙂
  22. There is no other write-up on the subject, as far as I know; I improvised in this one to find a solution that would not destroy my SSD's. Well, no docker container should contain persistent data. Persistent data should always be mapped to a location on the host. Updating a docker container destroys it too, and if set up correctly this doesn't cause you to lose data, by design. You're right though, if this gets fixed in a newer release you would have to redownload all image files, but because of docker's nature this will only take some time to download (depending on your line speed) and (again, if set up correctly) will not cause you to lose any data. The user scripts plugin is indeed able to run scripts when the array comes up; you don't have to press any button or so, it runs like the name says on array startup (I guess straight after). Taking the approach from page two makes this persistent though, and reverting back to default in a future upgrade would just require you to remove the added lines from the /boot/config/go file, after which docker will mount its image file again.
  23. Well the instructions from page 2 (you meant these right?) are meant to make the changes persistent by editing the go file and injecting a file from flash into the OS on boot. If you want a solution that is temporary you should follow the steps to create a directory on the cache and edit the start_docker() function in the /etc/rc.d/rc.docker file. Then stop docker from the docker tab and start it again. Docker will now write directly onto the btrfs mountpoint (the gist of it is sketched below). One thing here though is that you end up with no docker containers. To recreate every one of them, go to add container and select the containers from the user-templates one by one (assuming all your containers were created via the GUI). This downloads the images again and all the persistent mappings are restored. If you want to revert, simply reboot and voila. An easy way to find out how the investigation you did stands against this; kind of curious myself too 🙂 Cheers.
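      Not the exact rc.docker edit, but the gist of the temporary approach is something like this (paths as used elsewhere in this thread; a reboot reverts all of it):

          # stop docker first (Docker tab in the GUI, or the rc script)
          /etc/rc.d/rc.docker stop
          # create a directory for docker on the btrfs cache and point /var/lib/docker at it
          mkdir -p /mnt/cache/docker
          mv /var/lib/docker /var/lib/docker.bak 2>/dev/null   # keep whatever was there aside
          ln -s /mnt/cache/docker /var/lib/docker
          # start docker again; it now writes directly onto the cache filesystem
          /etc/rc.d/rc.docker start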
  24. Totally understandable. 3TB on my two Samsung Evo 860 1TB drives is negligible though, since their warranty only voids after 300TB written hahaha. They should last 100 years 😛 It would feel strange however that moving to an XFS volume would still have btrfs-transacti "do stuff". If I interpret my quick Google query correctly this is snapshotting at work, which is a btrfs process, not an XFS one. Also, indeed, only mess with unRAID's config if you're confident about doing it 🙂 Great thing about unRAID in these cases is that a reboot sets you back to default, with a default go file and no other scripts running, at least on the OS side.
  25. There you go! 10 minutes into "iotop -ao", btrfs-transacti produced nearly 60MB of writes. My /var/lib/docker is symlinked to /mnt/cache/docker, so all writes that would otherwise go into the image go straight onto the btrfs mountpoint. Note that running iotop initially gave me gigabytes of data. After several months I'm still one happy redundant btrfs camper 😁
      Total DISK READ :       0.00 B/s | Total DISK WRITE :       0.00 B/s
      Actual DISK READ:       0.00 B/s | Actual DISK WRITE:       0.00 B/s
        TID  PRIO  USER    DISK READ  DISK WRITE  SWAPIN     IO>    COMMAND
      25615  be/4  root     0.00 B     13.98 M    0.00 %  0.25 %  dockerd -p /var/run/dockerd.pid --log-opt max-size=50m --log-opt max-file=1 --storage-driver=btrfs --log-level=error
      25612  be/4  root     0.00 B     12.67 M    0.00 %  0.23 %  dockerd -p /var/run/dockerd.pid --log-opt max-size=50m --log-opt max-file=1 --storage-driver=btrfs --log-level=error
      23465  be/4  root     0.00 B     12.51 M    0.00 %  0.21 %  dockerd -p /var/run/dockerd.pid --log-opt max-size=50m --log-opt max-file=1 --storage-driver=btrfs --log-level=error
      24974  be/4  root     0.00 B     11.56 M    0.00 %  0.21 %  dockerd -p /var/run/dockerd.pid --log-opt max-size=50m --log-opt max-file=1 --storage-driver=btrfs --log-level=error
      25613  be/4  root     0.00 B      8.93 M    0.00 %  0.16 %  dockerd -p /var/run/dockerd.pid --log-opt max-size=50m --log-opt max-file=1 --storage-driver=btrfs --log-level=error
      23449  be/4  root     0.00 B      8.39 M    0.00 %  0.15 %  dockerd -p /var/run/dockerd.pid --log-opt max-size=50m --log-opt max-file=1 --storage-driver=btrfs --log-level=error
      24682  be/4  root     0.00 B      7.35 M    0.00 %  0.12 %  dockerd -p /var/run/dockerd.pid --log-opt max-size=50m --log-opt max-file=1 --storage-driver=btrfs --log-level=error
      25959  be/4  root     0.00 B      7.48 M    0.00 %  0.12 %  dockerd -p /var/run/dockerd.pid --log-opt max-size=50m --log-opt max-file=1 --storage-driver=btrfs --log-level=error
      23468  be/4  root     0.00 B      6.84 M    0.00 %  0.12 %  dockerd -p /var/run/dockerd.pid --log-opt max-size=50m --log-opt max-file=1 --storage-driver=btrfs --log-level=error
       6192  be/4  root     0.00 B      6.37 M    0.00 %  0.12 %  dockerd -p /var/run/dockerd.pid --log-opt max-size=50m --log-opt max-file=1 --storage-driver=btrfs --log-level=error
      25616  be/4  root     0.00 B      4.96 M    0.00 %  0.10 %  dockerd -p /var/run/dockerd.pid --log-opt max-size=50m --log-opt max-file=1 --storage-driver=btrfs --log-level=error
      16693  be/4  65534    0.00 B      6.95 M    0.00 %  0.09 %  sqlservr
      23464  be/4  root     0.00 B      4.44 M    0.00 %  0.09 %  dockerd -p /var/run/dockerd.pid --log-opt max-size=50m --log-opt max-file=1 --storage-driver=btrfs --log-level=error
      23441  be/4  root     0.00 B      2.51 M    0.00 %  0.05 %  dockerd -p /var/run/dockerd.pid --log-opt max-size=50m --log-opt max-file=1 --storage-driver=btrfs --log-level=error
       5523  be/4  root     0.00 B     58.58 M    0.00 %  0.04 %  [btrfs-transacti]
      26247  be/4  root     0.00 B      2.19 M    0.00 %  0.03 %  containerd --config /var/run/docker/containerd/containerd.toml --log-level error
      27636  be/4  root     0.00 B   1952.00 K    0.00 %  0.03 %  containerd --config /var/run/docker/containerd/containerd.toml --log-level error
      25702  be/4  root     0.00 B      2.00 M    0.00 %  0.03 %  containerd --config /var/run/docker/containerd/containerd.toml --log-level error
      26773  be/4  root     0.00 B   1460.00 K    0.00 %  0.02 %  containerd --config /var/run/docker/containerd/containerd.toml --log-level error
      29889  be/4  root     0.00 B   1272.00 K    0.00 %  0.02 %  containerd --config /var/run/docker/containerd/containerd.toml --log-level error
       7181  be/4  root     0.00 B   1192.00 K    0.00 %  0.02 %  containerd --config /var/run/docker/containerd/containerd.toml --log-level error
      30421  be/4  root     0.00 B    992.00 K    0.00 %  0.01 %  containerd --config /var/run/docker/containerd/containerd.toml --log-level error
      14571  be/4  root     0.00 B      2.86 M    0.00 %  0.01 %  [kworker/u8:1-btrfs-endio-write]
      26949  be/4  root     0.00 B      2.88 M    0.00 %  0.01 %  [kworker/u8:4-btrfs-endio-write]
      12185  be/4  root     0.00 B   1600.00 K    0.00 %  0.01 %  [kworker/u8:2-bond0]
      32373  be/4  root     0.00 B   2016.00 K    0.00 %  0.01 %  [kworker/u8:5-btrfs-endio-write]
      24913  be/4  root     0.00 B    588.00 K    0.00 %  0.01 %  containerd --config /var/run/docker/containerd/containerd.toml --log-level error
      30128  be/4  root     0.00 B      2.38 M    0.00 %  0.01 %  [kworker/u8:3-btrfs-endio-write]