S1dney

Members

  • Content Count: 93
  • Joined
  • Last visited

Community Reputation: 39 Good

1 Follower

About S1dney

  • Rank: Newbie


  1. I've looked at my go file, I still have the pip3 way in place and it's working nicely:
     # Since the NerdPack does not include this anymore we need to download docker-compose ourselves.
     # pip3, python3 and the setuptools are all provided by the NerdPack
     # See: https://forums.unraid.net/index.php?/topic/35866-unRAID-6-NerdPack---CLI-tools-(iftop,-iotop,-screen,-kbd,-etc.&do=findComment&comment=838361)
     pip3 install docker-compose
     The second way via curl also works well, as I equipped my duplicati container with it. You don't have to set up any dependencies that pip3 needs, s…
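For comparison, a minimal sketch of the curl route; the pinned version (1.29.2) and install path are assumptions, not from the post:

    # Download a pinned docker-compose release binary (version is an assumption)
    curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
    chmod +x /usr/local/bin/docker-compose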
  2. Best wishes to you too 👍 Whether to use docker-compose or Unraid’s own DockerMan depends on what you want and/or need. The DockerMan templates give you an easy option to redeploy containers from the GUI, and you’ll also be notified when a docker container has an update available. I like Unraid’s tools so I default to these and create most containers from the GUI; however, certain scenarios cannot be set up via DockerMan (at least not when I checked), like linking the network interfaces of containers together (so that all the container’s traffic flows through one of them).
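A hypothetical sketch of that interface-linking scenario via the docker CLI; the image and container names are made up for illustration:

    # Run a VPN client container (hypothetical image) that will own the network stack
    docker run -d --name vpn --cap-add=NET_ADMIN example/vpn-client
    # Attach a second container to vpn's network stack; all of its traffic
    # now flows through the vpn container
    docker run -d --name app --network container:vpn example/app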
  3. I know haha, but I'm waiting for the upgraded kernel; I'm still on the latest build that had the 5+ kernel included.
  4. Wow these releases are PACKED!! Can't wait for this to reach a stable release. Thanks for the improvements 🙏
  5. You're welcome. Hahah well you're basically answering yourself. If you were exposing the services to the outside world it would make sense to send the traffic through a reverse proxy, so you would only have to open up one port. Another use case for a reverse proxy would be hosting two containers on the host's address that require the same port to function (like the common 80 or 443); the reverse proxy would be able to route traffic to those ports based on hostnames and allow you to use that port for the client application that expects the server to be available on that…
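A hedged sketch of that hostname-based routing in an nginx reverse proxy; the hostnames and backend addresses are assumptions:

    # Both sites share the proxy's port 80; nginx picks a backend by hostname
    server {
        listen 80;
        server_name app1.example.com;
        location / { proxy_pass http://192.168.1.50:8080; }
    }
    server {
        listen 80;
        server_name app2.example.com;
        location / { proxy_pass http://192.168.1.51:8080; }
    }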
  6. Why don't you put the docker containers on br0 and assign them unique IP addresses, so DNS is able to distinguish them by IP? If you want to route traffic to a different port on the same IP you would have to inspect the DNS address queried and route accordingly, which is where a reverse proxy would come into play. The easiest solution for you (that does not require you to dive into the reverse proxy stuff as a networking n00b)…
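A hypothetical one-liner along those lines, assuming Unraid's br0 custom network and a free LAN address:

    # Give the container its own routable IP on the LAN (address is an assumption)
    docker run -d --name pihole --network br0 --ip 192.168.1.60 pihole/pihole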
  7. You're right, sorry about that, I went through here before I went through my bug report. I do think the casual user will benefit more from the loopback approach indeed, since it's less complex and requires less maintenance. Have a great day 👍
  8. Surely, you will lose it when you upgrade the containers too, so you'll find out soon enough. Wiping out the directory is essentially recreating the docker image, so that's fine. Also, I understand that you're trying to warn people, and I agree with you that for most users taking the loopback approach will work better and cause less confusion. It's great that we can decide this ourselves though; unRAID is so flexible, which is something I like about it.
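A rough sketch of what "wiping out the directory" amounts to on a folder-based setup; the rc script and path are assumptions based on Unraid's Slackware layout:

    /etc/rc.d/rc.docker stop    # stop the docker daemon first
    rm -rf /mnt/cache/docker/*  # throws away every image layer (assumed path)
    /etc/rc.d/rc.docker start   # images are re-pulled on the next deploy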
  9. Agreed, which is why having options for both the loopback image and the folder is the best of both worlds. Also, if I ever wanted to move the data I would just remove the entire folder and recreate it anyway, since it's non-persistent data.
  10. Noticed that serious efforts have been taken to place docker in its own directory instead of the loop device. Very very happy that I can keep using the folder based approach, as it just works very well. Thanks so much for listening to the community @limetech!!
  11. This is great!! I have been running docker on its own (non-exported) share on a btrfs partition since December last year, and I'm very happy with it. I thought that when the docker/btrfs write issue was solved I would have to revert to a docker image again, but being able to keep using my share in a supported way from the GUI is just perfect. I would take the folder approach over a loop device any day! I'll keep an eye on this when it makes it out of beta; for now, keep up the good work, very much appreciated 😄
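On a generic Docker host, pointing the daemon at a folder like this is the data-root setting; a minimal /etc/docker/daemon.json sketch (the path is an assumption, and on Unraid the GUI manages this for you):

    { "data-root": "/mnt/cache/docker" }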
  12. Very interesting indeed. This got me thinking... I noticed that writing directly onto the BTRFS cache reduced writes by a factor of roughly 10. I did feel like this was still on the high side, as it's still writing 40GB a day. What if... this is still amplified by a factor of 10 as well? Could this mean that a BTRFS-formatted image on a BTRFS-formatted partition results in 10x10=100 times write amplification? If I recall correctly someone pointed out a 100x write amplification number earlier in the thread? I think this is well suited for a tes…
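Spelling out the arithmetic behind that guess (all numbers are the rough figures above, not new measurements):

    # ~40 GB/day observed after removing the image layer (rough figure)
    echo $((40 / 10))    # => 4 GB/day of actual data, if btrfs still amplifies ~10x
    # Stacked layers would multiply: image-on-btrfs (10x) * btrfs-on-ssd (10x)
    echo $((10 * 10))    # => 100x combined write amplification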
  13. Thank you both for this, the communication is very much appreciated, as well as your efforts! Most of us know how busy you all have been, so don’t worry about it 🙂 I have not read anyone reporting this on HDDs (I read all comments actively). @TexasUnraid has been shifting data around a lot; have you tried regular hard drives with btrfs by any chance? @jonp hope you’ll find a good challenge here, and also, happy birthday in advance! 🥳
  14. You cannot fix this by changing a couple of docker containers, because docker containers are not the root cause of the problem; a busy container will just show the problem more. The only "fixes" that have been working for others were:
      1) Formatting to XFS (always works)
      2) Remounting the BTRFS cache with the nospace_cache option, see @johnnie.black's https://forums.unraid.net/bug-reports/stable-releases/683-docker-image-huge-amount-of-unnecessary-writes-on-cache-r733/?do=findComment&comment=9431 (seems to work for everyone so far)
      3) Putting docker directly onto t…
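A sketch of fix 2); the mount point is an assumption (Unraid's cache pool typically lives at /mnt/cache):

    # Disable btrfs's free-space cache on the running pool
    mount -o remount,nospace_cache /mnt/cache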
  15. I must say that I was reluctant to believe these statements; I had been testing writing stuff to the BTRFS cache devices in the beginning and could not notice the write amplification there. Going back to the fact that my SMART data still shows my drives writing 40GB a day, this does seem like quite a lot on second thought.
      TBW on 14-06-2020 23:57:01 --> 12.1 TB, which is 12370.2 GB.
      TBW on 15-06-2020 23:57:01 --> 12.1 TB, which is 12392.6 GB.
      TBW on 16-06-2020 23:57:01 --> 12.1 TB, which is 12431.4 GB.
      TBW on 17-06-2020 23:57:01 --> 12.2 TB, which is 12469.0 GB.
      TBW on…
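The daily deltas behind that 40GB figure follow from the quoted TBW readings; a small awk sketch over the post's numbers:

    awk 'BEGIN {
      n = split("12370.2 12392.6 12431.4 12469.0", r, " ")  # TBW readings in GB
      for (i = 2; i <= n; i++)
        printf "Day %d: %.1f GB written\n", i - 1, r[i] - r[i-1]
    }'
    # => Day 1: 22.4 GB, Day 2: 38.8 GB, Day 3: 37.6 GB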