S1dney

Everything posted by S1dney

  1. No, you will indeed do a lot of this from the terminal, as you can't manage it from the unRAID dockerman (GUI). And you can go as deep as you want with unRAID; it doesn't really hide anything from you, which is what I love about it.
  2. You don't really need to write a compose file yourself either. Just download/curl the build script, run the commands, and when you run ./bitwarden.sh start it will issue the compose commands needed to start everything. The override settings will allow you to override the defaults.
  3. You need to find the NerdPack plugin; it has docker-compose included, because you are right that Bitwarden uses docker-compose to start everything up. Then with that installed, just follow the steps mentioned in the Bitwarden docs:
     - Step 1 as documented.
     - Step 2 is already done by running unRAID (docker on board) and installing docker-compose via the NerdPack.
     - Create a directory in step 3 (I have mine on /mnt/user/appdata/bitwarden since appdata resides on the cache only). Note that I did not create a bitwarden user at the time; I may do that later for additional security, and this has been running for 3 years+ lol.
     - Get your key in step 4.
     - Install Bitwarden in step 5 by curling the build script into the directory you created and then invoking a few commands (rough sketch below).
     - Add the overrides I gave you in the mentioned post in step 6 so it will actually be able to start on unRAID.
     - Then start it and enjoy Bitwarden. GL!
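     For reference, steps 3 and 5 roughly look like this; the script URL below is a placeholder, grab the exact curl line from the Bitwarden install docs:

         # Step 3: directory on the cache-only appdata share
         mkdir -p /mnt/user/appdata/bitwarden
         cd /mnt/user/appdata/bitwarden
         # Step 5: fetch the build script (replace the placeholder with the URL from the docs)
         curl -Lso bitwarden.sh <script-url-from-the-docs>
         chmod +x bitwarden.sh
         ./bitwarden.sh install   # prompts for your domain and the installation key from step 4
         ./bitwarden.sh start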
  4. I think I still have Bitwarden running using the override file specified in my comment: I don't recall touching it since, and it has been working very well for me. You cannot choose another SQL version; that is what the template from Bitwarden pulls for you. Good luck!
  5. Hmm, did not get a notification on these replies. Guess I wasn’t following my own topic yet lol. Following now. Good suggestions tho! I’ll check those out and report back later on. Cheers!
  6. Hey y'all! I'm using a custom-built bash script to update all my self-built docker containers every once in a while. It still requires me to log in to the GUI and press the "Update All" button on the Docker tab though. I'm running these updates from the terminal, so I would rather have the code trigger this as well. I have been searching through the php files etc. to see how this is done, but no luck yet. I also have a script that allows my UPS to shut down unRAID cleanly, which basically uses the CSRF token to control it:

         echo 'Stopping unraid array.'
         CSRF=$(grep -oP 'csrf_token="\K[^"]+' /var/local/emhttp/var.ini)
         # Send this command to the background so we can check the status of the unmount on the array (the last step unRAID takes)
         # We send everything to /dev/null because we're not interested in nohup's output
         nohup curl -k --data "startState=STARTED&file=&csrf_token=${CSRF}&cmdStop=Stop" https://localhost:5443/update.htm >/dev/null 2>&1 &

     I assume that something similar must be possible for the Update All button? Anyone able to assist here? Appreciate the efforts! Best regards, Sidney
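     In case it helps anyone picking this up: the endpoint and form fields for "Update All" are unverified guesswork on my part. The reliable way to find them is to press the button once with the browser's dev tools network tab open and copy the request it sends, e.g.:

         # Hypothetical sketch only -- replace <endpoint> and <fields> with whatever
         # the network tab shows for the "Update All" request
         CSRF=$(grep -oP 'csrf_token="\K[^"]+' /var/local/emhttp/var.ini)
         curl -k --data "csrf_token=${CSRF}&<fields>" https://localhost:5443/<endpoint>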
  7. Hahahah, you went down the rabbit hole on this one, didn't you. I've spent a fair amount of time on this as well, but at some point decided those few gigabytes weren't worth my time (I also became a father in the meantime, so less time to play 🤣👊🏻). That said, aren't you better off writing a guide on cutting down these writes and posting it in a separate thread? This thread has become so big that it will scare off anyone accidentally coming here via Google or something. It would be a waste of the knowledge you have gained (that others could benefit from) if you ask me. Cheers 🤙🏻
  8. Just add those lines to download docker-compose in the go file and have the User Scripts plugin run a few docker-compose commands after array start? Something like the sketch below. I haven't set it up myself though, as I manually unlock my encrypted array anyway. Cheers
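     A minimal sketch of what I mean; the pip3 line is from my own go file (pip3/python3 come from the NerdPack), and the compose project path is just an example:

         # /boot/config/go -- make docker-compose available at boot
         pip3 install docker-compose

         # User Scripts plugin, scheduled "At Startup of Array":
         #!/bin/bash
         cd /mnt/user/appdata/myproject && docker-compose up -d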
  9. So, coming back on this bug report. I upgraded to 6.9 on March 2nd and also wiped the cache to take advantage of the new partition alignment (I have Samsung EVO's and perhaps a portion of OCD 🤣). Waited a bit to get historic data.

     Pre 6.9:
         TBW on 19-02-2021 23:57:01 --> 15.9 TB, which is 16313.7 GB.
         TBW on 20-02-2021 23:57:01 --> 16.0 TB, which is 16344.9 GB.
         TBW on 21-02-2021 23:57:01 --> 16.0 TB, which is 16382.8 GB.
         TBW on 22-02-2021 23:57:01 --> 16.0 TB, which is 16419.5 GB.
     -> Writes average around 34/35 GB a day.

     6.9:
         TBW on 05-03-2021 23:57:01 --> 16.6 TB, which is 16947.4 GB.
         TBW on 06-03-2021 23:57:01 --> 16.6 TB, which is 16960.2 GB.
         TBW on 07-03-2021 23:57:01 --> 16.6 TB, which is 16972.8 GB.
         TBW on 08-03-2021 23:57:01 --> 16.6 TB, which is 16985.3 GB.
     -> Writes average around 12/13 GB a day.

     So I would say 6.9 (and reformatting) made a very big improvement. I think most of these savings are due to the new partition alignment, as I was already running docker directly on the cache and recall I made a few tweaks suggested here (adding mount options, I cannot remember which exactly). Thanks @limetech and all the other devs for the work put into this. This bug report is Closed 👍
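     For context, those TBW lines come from a daily script reading SMART data. A rough sketch of how you could log the same thing (not my exact script; the attribute name and 512-byte unit hold for my Samsung EVO's, other drives may differ):

         # Read Total_LBAs_Written and convert to GB (Samsung reports 512-byte units)
         LBAS=$(smartctl -A /dev/sdX | awk '/Total_LBAs_Written/ {print $10}')
         GB=$(echo "scale=1; $LBAS * 512 / 1024 / 1024 / 1024" | bc)
         echo "TBW on $(date '+%d-%m-%Y %H:%M:%S') --> ${GB} GB."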
  10. Without proper tooling the BTRFS layers will likely inflate and the image data won't be usable. With proper tooling this may work, I think, but just wiping is way easier 😂
  11. The go (and service.d file) modifications are not needed anymore in unRAID 6.9, as they basically created a way to do exactly that from the GUI. The GUI should now allow you to host the docker files in their own directory directly on the cache (which is what the workaround did via scripts instead of the GUI). I haven't moved there myself yet, as I still see enough issues with rc2 for now. I believe that reformatting the cache is also advisable, because they have made some changes to the partition alignment there as well (you may or may not get fewer writes because of it; I'm using Samsung EVO's, so I'm considering wiping and reformatting mine after the upgrade). Cheers!
  12. I've looked at my go file; I still have the pip3 way in place and it's working nicely:

         # Since the NerdPack does not include this anymore we need to download docker-compose ourselves.
         # pip3, python3 and the setuptools are all provided by the NerdPack
         # See: https://forums.unraid.net/index.php?/topic/35866-unRAID-6-NerdPack---CLI-tools-(iftop,-iotop,-screen,-kbd,-etc.&do=findComment&comment=838361)
         pip3 install docker-compose

     The second way, via curl, also works well, as I equipped my duplicati container with it. You don't have to set up any dependencies that pip3 needs, so for simplicity reasons this may be preferable. Cheers!
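     For completeness, the curl way is the static-binary install from the docker-compose docs; a sketch (the version shown is just an example, pin whatever release is current for you, and note that on unRAID the root filesystem lives in RAM, so this would need re-running at boot):

         # Download a static docker-compose binary instead of installing via pip3
         curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
         chmod +x /usr/local/bin/docker-compose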
  13. Best wishes to you too 👍 Whether to use docker-compose or Unraid's own DockerMan depends on what you want and/or need. The DockerMan templates give you an easy option to redeploy containers from the GUI, and you'll also be notified when a container has an update available. I like Unraid's tools, so I default to them and create most containers from the GUI. However, certain scenarios cannot be set up via DockerMan (at least not when I last checked), like linking the network interfaces of containers together so that all of one container's traffic flows through another (sketched below). For those I use docker-compose, and I also have some containers I've built myself that I want to set up with some dependencies; I update them via a bash script I wrote and redeploy them afterwards, as the GUI is not able to modify their properties. Which answers your second question: containers deployed outside of the DockerMan GUI do not get a template and cannot be managed/updated there, though you can start and stop them. My advice would be: stay with the Unraid default tooling unless you hit a limitation that makes you want to deviate from it.
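     The interface linking I mean is docker's container network mode; a minimal sketch with made-up names and images (compose's network_mode: "container:vpn" does the same):

         # Route all of "torrent"'s traffic through the "vpn" container's network stack
         docker run -d --name vpn --cap-add=NET_ADMIN example/vpn-image
         docker run -d --name torrent --network=container:vpn example/torrent-image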
  14. I know haha, but I'm waiting for the upgraded kernel; I'm still on the latest build that had the 5+ kernel included.
  15. Wow these releases are PACKED!! Can't wait for this to reach a stable release. Thanks for the improvements 🙏
  16. You're welcome. Hahah, well, you're basically answering yourself. If you were exposing the services to the outside world, it would make sense to send the traffic through a reverse proxy, so you would only have to open up one port. Another use case for a reverse proxy is hosting two containers on the host's address that require the same port to function (like the common 80 or 443); the reverse proxy can route traffic to them based on hostnames and still let a client application reach each service on the port it expects. I have also looked at (or actually implemented) the nginx reverse proxy, but decided to just put the container on a different IP and call it a day. My todo list still has Traefik on it hahah, but there's too much on there atm. Also, I can so much relate to this statement hahah: That's why unRAID is so much fun! Cheers man.
  17. Why don't you put the docker containers on br0 and assign them unique IP addresses, so DNS is able to distinguish them by IP? If you wanted to route traffic to different ports on the same IP, you would have to inspect the DNS name queried and route accordingly, which is where a reverse proxy would come into play. Giving each container its own IP is the easiest solution for you, as it does not require you to dive into the reverse proxy stuff as a networking n00b; a sketch below.
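     A sketch of what that looks like on the CLI; the DockerMan GUI exposes the same via Network Type: br0 plus a fixed IP (address and image here are made up):

         # Give the container its own address on the br0 network
         docker run -d --name someapp --network=br0 --ip=192.168.1.50 example/someapp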
  18. You're right, sorry about that, went through here before I went through my bug report. I do think that the casual user will benefit more from the loopback approach indeed since it's less complex and requires less maintenance. Have a great day 👍
  19. Surely, you will also lose it when you upgrade the containers, so you'll find out soon enough. Wiping out the directory is essentially recreating the docker image, so that's fine. Also, I understand that you're trying to warn people, and I agree that for most users the loopback approach will work better and cause less confusion. It's great that we can decide this ourselves though; unRAID is so flexible, which is something I like about it.
  20. Agreed, which is why having options for both the loopback image and the folder is best of both worlds. Also if I ever wanted to move the data I would just remove the entire folder and recreate it anyways since it's non-persistent data.
  21. Noticed that serious efforts have been taken to place docker in its own directory instead of the loop device. Very very happy that I can keep using the folder based approach, as it just works very well. Thanks so much for listening to the community @limetech!!
  22. This is great!! I have been running docker on its own (non-exported) share on a btrfs partition since December last year, and I'm very happy with it. I thought that when the docker/btrfs write issue was solved I would have to revert to a docker image again, but being able to keep using my share in a supported way from the GUI is just perfect. I would take the folder approach over a loop device any day! I'll keep an eye on this as it makes it out of beta; for now, keep up the good work, very much appreciated 😄
  23. Very interesting indeed. This got me thinking... I noticed that writing directly onto the BTRFS cache reduced writes by a factor of roughly 10. Now I did feel like this was still on the high side, as it's still writing 40GB a day. What if... this is still amplified by a factor of 10 as well? Could this mean that a BTRFS-formatted image on a BTRFS-formatted partition results in 10x10=100 times write amplification? If I recall correctly, someone pointed out a 100x write amplification number earlier in the thread? I think this is well suited for a test 🙂 I've just recreated the loop image formatted as XFS. I'll check my TB's written in a few minutes and again after an hour.

     EDIT: Just noticed your comment @tjb_altf4. The default seems to work already; XFS is formatted nicely with the correct options:

         root@Tower:/# docker info
         Client:
          Debug Mode: false
         Server:
          Containers: 21
           Running: 21
           Paused: 0
           Stopped: 0
          Images: 35
          Server Version: 19.03.5
          Storage Driver: overlay2
           Backing Filesystem: xfs
           Supports d_type: true

     According to the docker docs this should be fine, which xfs_info /var/lib/docker seems to confirm:

         root@Tower:/# xfs_info /var/lib/docker
         meta-data=/dev/loop2    isize=512    agcount=4, agsize=1310720 blks
                  =              sectsz=512   attr=2, projid32bit=1
                  =              crc=1        finobt=1, sparse=1, rmapbt=0
                  =              reflink=1
         data     =              bsize=4096   blocks=5242880, imaxpct=25
                  =              sunit=0      swidth=0 blks
         naming   =version 2     bsize=4096   ascii-ci=0, ftype=1
         log      =internal log  bsize=4096   blocks=2560, version=2
                  =              sectsz=512   sunit=0 blks, lazy-count=1
         realtime =none          extsz=4096   blocks=0, rtextents=0

     I'm just testing it out, because I'm curious whether it matters.

     EDIT2, after 1 hour of runtime:

         TBW on 28-06-2020 13:44:11 --> 11.8 TB, which is 12049.6 GB.
         TBW on 28-06-2020 14:44:38 --> 11.8 TB, which is 12057.8 GB.

     One hour of running on an XFS-formatted loop device equals 8.2GB written, which would translate into 196.8GB a day. It would most likely be a bit more due to backup tasks at night. That's still on the high side compared to running directly on the BTRFS filesystem, which results in 40GB a day. In December 2019 I was seeing 400GB a day though (running without modifications), and my docker count has increased a bit, so 200 is better. I haven't tried any other options, like the mount options specified. I expect those will bring the writes down regardless of the loop device being used, since they apply to the entire BTRFS mount, so the amplification from the loop device is likely to occur with them as well. Still kind of sad though; I would have expected to see a very minor write amplification instead of 5 times. Guess that theory of 10x10 doesn't check out then... Rolling back to my previous state, as I'd take 40GB over 200 any day 😄

     EDIT3: Decided to remount the cache with space_cache=v2 set; running directly on the cache this gave me 9GB of writes in the last 9 hours. When the new unRAID version drops I'll reformat my cache with the new alignment settings. For now that space_cache=v2 setting does its magic
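     For anyone wanting to try the EDIT3 setting, the remount is a one-liner; a sketch assuming the pool is mounted at unRAID's usual /mnt/cache (it won't survive a reboot, so it would need to go in the go file or a script):

         mount -o remount,space_cache=v2 /mnt/cache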
  24. Thank you both for this; the communication is very much appreciated, as well as your efforts! Most of us know how busy you all have been, so don't worry about it 🙂 I have not read anyone reporting this on HDD's (I read all comments actively). @TexasUnraid has been shifting data around a lot; have you tried regular hard drives with btrfs by any chance? @jonp hope you'll find a good challenge here, and also, happy birthday in advance! 🥳
  25. You cannot fix this by changing a couple of docker containers, because docker containers are not the root cause of the problem; a busy container will just show the problem more. The only "fixes" that have worked for others are:
     1) Formatting to XFS (works always)
     2) Remounting the BTRFS cache with the nospace_cache option, see @johnnie.black's https://forums.unraid.net/bug-reports/stable-releases/683-docker-image-huge-amount-of-unnecessary-writes-on-cache-r733/?do=findComment&comment=9431 (seems to work for everyone so far)
     3) Putting docker directly onto the cache (some have reported no decrease in writes, although some have; this is the one I'm using and it's working for me)
     I may have missed one, but suggestion 2 is your quickest option here; see the sketch below.
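     A sketch of option 2, assuming the cache is mounted at unRAID's usual /mnt/cache (like the other mount options it needs re-applying after a reboot, e.g. from the go file):

         # Remount the pool without the v1 free-space cache
         mount -o remount,nospace_cache /mnt/cache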