S1dney

Members
  • Content Count: 89
  • Joined
  • Last visited

Community Reputation: 36 Good

1 Follower

About S1dney
  • Rank: Advanced Member

  1. You're welcome. Hahah, well, you're basically answering yourself. If you were exposing the services to the outside world it would make sense to send the traffic through a reverse proxy, so you would only have to open up one port. Another use case for a reverse proxy is hosting two containers on the host's address that need the same port to function (like the common 80 or 443); the reverse proxy can route traffic based on hostnames and let each client application reach its server on the port it expects (a rough nginx-style sketch sits after this list). I have also looked at (actually, implemented) the nginx reverse proxy, but decided to just put the container on a different IP and call it a day. My todo list still has Traefik on it hahah, but there's too much on there atm. Also, I can so much relate to this statement hahah: That's why unRAID is so much fun! Cheers man.
  2. Why don't you put the docker containers on br0 and assign each its own IP address, so DNS can distinguish them by IP? If you wanted to route traffic to a different port on the same IP you would have to inspect the DNS name queried and route accordingly, which is where a reverse proxy would come into play. The easiest solution for you (one that does not require you to dive into the reverse proxy stuff as a networking n00b) is the one-IP-per-container route; a rough command-line sketch sits after this list.
  3. You're right, sorry about that, I went through here before I went through my bug report. I do think that the casual user will benefit more from the loopback approach indeed, since it's less complex and requires less maintenance. Have a great day 👍
  4. Surely, you will lose it when you upgrade the containers too, so you'll find out soon enough. Wiping out the directory is essentially recreating the docker image, so that's fine. Also, I understand that you're trying to warn people, and I agree with you that for most users taking the loopback approach will work better and cause less confusion. It's great that we can decide this ourselves though; unRAID is so flexible, which is something I like about it.
  5. Agreed, which is why having the option for both the loopback image and the folder is the best of both worlds. Also, if I ever wanted to move the data I would just remove the entire folder and recreate it anyway, since it's non-persistent data.
  6. Noticed that serious effort has been put into placing docker in its own directory instead of on the loop device. Very, very happy that I can keep using the folder-based approach, as it just works very well. Thanks so much for listening to the community @limetech!!
  7. This is great!! I have been running docker on its own (non-exported) share on a btrfs partition since December last year, and I'm very happy with it. I thought that when the docker/btrfs write issue got solved I would have to revert to a docker image again, but being able to keep using my share in a supported way from the GUI is just perfect. I would take the folder approach over a loop device any day! I'll keep an eye on this when it makes it out of beta; for now, keep up the good work, very much appreciated 😄
  8. Very interesting indeed. This got me thinking... I noticed that writing directly onto the BTRFS cache reduced writes by a factor of roughly 10. I did feel like this was still on the high side, as it's still writing 40GB a day. What if... this is amplified by another factor of 10? Could this mean that a BTRFS-formatted image on a BTRFS-formatted partition results in 10x10=100 times write amplification? If I recall correctly someone pointed out a 100x write amplification number earlier in the thread. I think this is well suited for a test 🙂 I've just recreated the loop image formatted as XFS. I'll note my TBs written in a few minutes and check again after an hour.
     EDIT: Just noticed your comment @tjb_altf4. The default seems to work already, XFS is formatted with the correct options:
     root@Tower:/# docker info
     Client:
      Debug Mode: false
     Server:
      Containers: 21
       Running: 21
       Paused: 0
       Stopped: 0
      Images: 35
      Server Version: 19.03.5
      Storage Driver: overlay2
       Backing Filesystem: xfs
       Supports d_type: true
     According to the docker docs this should be fine, which xfs_info /var/lib/docker seems to confirm:
     root@Tower:/# xfs_info /var/lib/docker
     meta-data=/dev/loop2            isize=512    agcount=4, agsize=1310720 blks
              =                      sectsz=512   attr=2, projid32bit=1
              =                      crc=1        finobt=1, sparse=1, rmapbt=0
              =                      reflink=1
     data     =                      bsize=4096   blocks=5242880, imaxpct=25
              =                      sunit=0      swidth=0 blks
     naming   =version 2             bsize=4096   ascii-ci=0, ftype=1
     log      =internal log          bsize=4096   blocks=2560, version=2
              =                      sectsz=512   sunit=0 blks, lazy-count=1
     realtime =none                  extsz=4096   blocks=0, rtextents=0
     I'm just testing it out, because I'm curious whether it matters.
     EDIT2, after 1 hour of runtime:
     TBW on 28-06-2020 13:44:11 --> 11.8 TB, which is 12049.6 GB.
     TBW on 28-06-2020 14:44:38 --> 11.8 TB, which is 12057.8 GB.
     One hour of running on an XFS-formatted loop device equals 8.2GB written, which would translate into 196.8GB a day. It would most likely be a bit more due to backup tasks at night. That's still on the high side compared to running directly on the BTRFS filesystem, which results in 40GB a day. In December 2019 I was seeing 400GB a day though (running without modifications), and my container count has increased a bit, so 200 is better. I haven't tried any of the other options, like the mount options specified. I expect those will bring the writes down regardless of whether the loop device is used, since they apply to the entire BTRFS mount, so the amplification from the loop device is likely to occur with them as well. Still kind of sad though, I would have expected a very minor write amplification instead of 5 times. Guess that theory of 10x10 doesn't check out then... Rolling back to my previous state, as I'd take 40GB over 200 any day 😄
     EDIT3: Decided to remount the cache with space_cache=v2 set; running directly on the cache this gave me 9GB of writes in the last 9 hours (see the remount sketch after this list). When the new unRAID version drops I'll reformat my cache with the new alignment settings. For now that space_cache=v2 setting does its magic.
  9. Thank you both for this, the communication is very much appreciated, as well as your efforts! Most of us know how busy you all have been, so don't worry about it 🙂 I have not read anyone reporting this on HDDs (I've read all comments actively). @TexasUnraid has been shifting data around a lot; have you tried regular hard drives with btrfs by any chance? @jonp hope you'll find a good challenge here, and also, happy birthday in advance! 🥳
  10. You cannot fix this by changing a couple of docker containers, because docker containers are not the root cause of the problem; a busy container will just show the problem more. The only "fixes" that have been working for others were:
     1) Formatting to XFS (always works)
     2) Remounting the BTRFS cache with the nospace_cache option, see @johnnie.black's post: https://forums.unraid.net/bug-reports/stable-releases/683-docker-image-huge-amount-of-unnecessary-writes-on-cache-r733/?do=findComment&comment=9431 (seems to work for everyone so far; see the remount sketch after this list)
     3) Putting docker directly onto the cache (some have reported no decrease in writes, although others have; this is the one I'm using and it's working for me)
     I may have missed one, but suggestion 2 is your quickest option here.
  11. I must say that I was reluctant to believe these statements; I tested writing stuff to the BTRFS cache devices in the beginning and could not notice the write amplification there. Going back to the fact that my SMART data still shows my drives writing 40GB a day, that does seem quite a lot on second thought (a sketch of one way to log numbers like these sits after this list).
     TBW on 14-06-2020 23:57:01 --> 12.1 TB, which is 12370.2 GB.
     TBW on 15-06-2020 23:57:01 --> 12.1 TB, which is 12392.6 GB.
     TBW on 16-06-2020 23:57:01 --> 12.1 TB, which is 12431.4 GB.
     TBW on 17-06-2020 23:57:01 --> 12.2 TB, which is 12469.0 GB.
     TBW on 18-06-2020 23:57:01 --> 12.2 TB, which is 12507.4 GB.
     TBW on 19-06-2020 23:57:01 --> 12.3 TB, which is 12547.5 GB.
     I'm not really complaining though, because these writes are negligible on 300TBW-warranty drives. However... since docker lives directly on the BTRFS mountpoint, this might as well be lower, because my containers aren't that busy. It's still considerably lower than the 300/400GB daily writes while still using the docker.img file:
     TBW on 11-11-2019 23:59:02 --> 3.8 TB, which is 3941.2 GB.
     TBW on 12-11-2019 23:59:01 --> 4.2 TB, which is 4272.1 GB.
     TBW on 13-11-2019 23:59:01 --> 4.5 TB, which is 4632.5 GB.
     TBW on 14-11-2019 23:59:01 --> 4.9 TB, which is 5044.0 GB.
     TBW on 15-11-2019 23:59:01 --> 5.2 TB, which is 5351.3 GB.
     TBW on 16-11-2019 23:59:01 --> 5.3 TB, which is 5468.8 GB.
     TBW on 17-11-2019 23:59:01 --> 5.5 TB, which is 5646.1 GB.
  12. I think @tjb_altf4 sums it up quite well. Not sure about the CoW setting though, I remember having played with that initially (it's a while ago). Every write is indeed amplified; it definitely seems related to the loop device implementation, because mounting directly on the btrfs mountpoint makes it stop. Also, it doesn't appear to be just a reporting issue, as the drive's lifespan decreases. Nevertheless a very, very, very big salute for the work here, I can only imagine the amount of work that went in! 🤯 People tend to complain when a bug isn't fixed in 2 days, but forget the amount of work that is being done behind the scenes. I don't have a test server to try this on, so I'm waiting patiently for the RC or stable builds. 🍺🍺🍺🍺 EDIT: Just noticed @johnnie.black's comments on the thread, that it might be solved by something that has been changed in this build. As I'm skipping this one, I'm curious about the results other people are seeing 🙂
  13. There is no other write-up on the subject, as far as I know; I improvised in this one to find a solution that would not destroy my SSDs. Well, no docker container should contain persistent data. Persistent data should always be mapped to a location on the host. Updating a docker container destroys it too, and if set up correctly this doesn't cause you to lose data, by design. You're right though, if this gets fixed in a newer release you would have to redownload all image files, but because of docker's nature this will only take some time to download (depending on your line speed) and (again, if set up correctly) will not cause you to lose any data. The user scripts plugin is indeed able to run scripts when the array comes up; you don't have to press any button or so, it runs, like the name says, on array startup (I guess straight after). Taking the approach from page two makes this persistent though (a rough go-file sketch sits after this list), and reverting back to default in a future upgrade would just require you to remove the added lines from the /boot/config/go file, after which docker will mount its image file again.
  14. Well, the instructions from page 2 (you meant these, right?) are meant to make the changes persistent by editing the go file and injecting a file from flash into the OS on boot. If you want a solution that is temporary, you should follow the steps to create a directory on the cache and edit the start_docker() function in the /etc/rc.d/rc.docker file (a hedged sketch of that edit sits after this list). Then stop docker from the docker tab and start it again. Docker will now write directly into the btrfs mountpoint. One thing here though is that you end up with no docker containers. To recreate every one of them, go to add container and select the containers from the user templates one by one (assuming all your containers were created via the GUI). This downloads the images again and all the persistent mappings are restored. If you want to revert, simply reboot and voila. An easy way to see how the investigation you did stacks up against this; kind of curious too 🙂 Cheers.
  15. Totally understandable. 3TB on my two Samsung Evo 860 1TB drives is negligible though, since their warranty only voids after 300TB written hahaha. They should last 100 years 😛 It would feel strange, however, if moving to an XFS volume would still have btrfs-transacti "do stuff". If I interpret my quick Google query correctly, that is snapshotting at work, which is a btrfs process, not an XFS one. Also, indeed, only mess with unRAID's config if you're confident about doing it 🙂 The great thing about unRAID in these cases is that a reboot sets you back to default, with a default go file and no other scripts running, at least on the OS side.
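
On the reverse proxy idea from post 1: a minimal, hypothetical nginx sketch of hostname-based routing, assuming two containers reachable at 192.168.1.50 and 192.168.1.51 on port 80. The hostnames, addresses and config path are made up for illustration, not taken from the post.

    # write a small nginx config that routes by hostname: both names share
    # port 80 on the proxy host, each is proxied to its own container
    cat > /etc/nginx/conf.d/two-apps.conf <<'EOF'
    server {
        listen 80;
        server_name app-one.example.com;
        location / { proxy_pass http://192.168.1.50:80; }
    }
    server {
        listen 80;
        server_name app-two.example.com;
        location / { proxy_pass http://192.168.1.51:80; }
    }
    EOF
    nginx -s reload    # tell a running nginx to pick up the new config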
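On post 2 (one IP per container on br0): unRAID normally sets this up from the container's edit page, but a rough docker-CLI equivalent looks like the lines below. The br0 network name is how unRAID exposes the custom bridge; the image names and LAN addresses are placeholders.

    # each container gets its own LAN address on br0, so both can listen on
    # the same port (e.g. 443) without clashing; image names are placeholders
    docker run -d --name app-one --network br0 --ip 192.168.1.50 my-registry/app-one:latest
    docker run -d --name app-two --network br0 --ip 192.168.1.51 my-registry/app-two:latest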
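The remount options mentioned in posts 8 and 10, as I understand them. /mnt/cache is unRAID's usual cache mountpoint, adjust if yours differs, and pick only one of the two space-cache options.

    # option from post 10: disable the (v1) free space cache entirely
    mount -o remount,nospace_cache /mnt/cache

    # option from post 8 (EDIT3): switch to the v2 free space tree instead
    mount -o remount,space_cache=v2 /mnt/cache

    # both are runtime-only; they need to be reapplied after a reboot
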
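The TBW lines in posts 8 and 11 could be produced by something like the script below. This is only a guess at how to log them, assuming an SSD that reports Total_LBAs_Written (SMART attribute 241) in 512-byte sectors; the device path and log location are placeholders. Run it once a day from cron or the user scripts plugin to build up a daily log.

    #!/bin/bash
    # log total bytes written for one SSD, in the same format as the posts above
    DEVICE=/dev/sdb                # placeholder: pick your cache SSD
    LOG=/boot/logs/tbw.log         # placeholder log location on the flash drive

    # raw value of SMART attribute 241 (Total_LBAs_Written)
    LBAS=$(smartctl -A "$DEVICE" | awk '/Total_LBAs_Written/ {print $10}')

    # convert 512-byte sectors to GB and TB (decimal units)
    GB=$(echo "scale=1; $LBAS * 512 / 1000 / 1000 / 1000" | bc)
    TB=$(echo "scale=1; $GB / 1000" | bc)

    echo "TBW on $(date '+%d-%m-%Y %H:%M:%S') --> $TB TB, which is $GB GB." >> "$LOG"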
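For the temporary change in post 14: the exact start_docker() edit isn't reproduced here, but the gist is to point docker at a plain directory on the cache instead of mounting the docker.img loop device. A minimal sketch, with the paths and the dockerd invocation as assumptions (the real rc.docker does considerably more):

    # inside /etc/rc.d/rc.docker, a simplified start_docker()
    start_docker(){
      # instead of mounting the docker.img loop device on /var/lib/docker,
      # bind-mount a plain directory that lives on the btrfs cache
      mkdir -p /mnt/cache/docker
      mountpoint -q /var/lib/docker || mount --bind /mnt/cache/docker /var/lib/docker

      # start the daemon (the stock script passes more options than this)
      /usr/bin/dockerd -p /var/run/docker.pid >/dev/null 2>&1 &
    }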
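And for the persistent route from post 13 (the page-2 approach): the idea, as I read it, is to keep a modified copy of rc.docker on the flash drive and copy it over the stock one at boot from /boot/config/go. The file location under /boot/config is an assumption.

    # lines appended to /boot/config/go (remove them again to revert to stock)
    cp /boot/config/custom/rc.docker /etc/rc.d/rc.docker
    chmod +x /etc/rc.d/rc.docker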