S1dney

Everything posted by S1dney

  1. On second thought, you're right. I know that DockerMan/GUI is the way to interact with docker on unRAID, but I didn't know (yet) how to pull a docker container that could not be located from the Apps (or DockerHub within Apps) section. Therefore I thought to include it in case anyone ends up here with the same questions. I've edited my post because adding a container from scratch using "Add container" might even be better; this is the way it's meant to be used I guess 😅. I always found every container in the Apps or DockerHub section, so I never had the need to look into this more. Thanks for the time taken to elaborate/help though
  2. Hahaha, saw this after I hit apply. Thanks, I will also check it out! Until some developer wants to take ownership of publishing the MSSQL container, my way described above also works, I guess
  3. Well, I don't think that's a stupid question at all. I would assume someone has to take ownership of maintaining it, like the ones @binhex is maintaining? (Not sure how it works exactly, but thanks anyway for your work!) Now @BoxOfSnoo's comment got me thinking. Modifying the user-template can be done more easily than copying the XML files to the flash drive via the command line. So in case anyone else wants to download the image from the new Microsoft Container Registry, I would take these steps:
- Search for mssql in the Community Apps (make sure that searching the DockerHub is enabled in the settings), and click "Click Here To Get More Results From DockerHub".
- Install the mssql server linux container (although it can be any container of choice, it doesn't have to be mssql either).
- Make sure the "Advanced" view is selected; this shows you from which Repository and Docker Hub URL you're downloading the image.
- Change the following values:
  - Name: the name of your container, obviously
  - Repository: mcr.microsoft.com/mssql/server
  - Docker Hub URL: https://hub.docker.com/_/microsoft-mssql-server
  - Icon URL: if you want a custom icon
  - Description: something descriptive (although not really necessary)
- Then add the required Ports, Paths and Variables from the GUI.
After clicking Apply, the container is created from the new Microsoft registry and the template is written to flash, without even using the command line. In my case it wrote my-TEST-MSSQL-BasedOnMCR.xml to /boot/config/plugins/dockerMan/templates-user/. Easier than messing around with XMLs on flash. EDIT: Downloading a new container isn't even necessary, as you can just click "Add Container" from the docker tab. Making sure the "Advanced" button is selected gives you the exact same options.
  4. Did you download the MSSQL image from the Apps section? That only gives you the image from the Microsoft Hub, which is not being updated anymore and is more than a year old (see post). By manually constructing the user-template XML you download the newest image from the Microsoft Container Registry by pointing it to the repo/image, which Community Apps isn't able to find (yet). Nice to see this is resolved, a good addition to dockerland on unRAID 🙂
  5. Good to hear! You're welcome. You would indeed map a volume to the container pointing to a location on the host. This is where it crashes according to multiple sources though, so I would be curious how that goes for you. You would add a PATH from the DockerMan page and map /var/opt/mssql/ or /var/opt/mssql/data to a path on your host to make the data persistent. Cheers!
  6. I think you're right on this one. I only picked that container as an example. According to the Community Apps GitHub, the Apps page does a lookup on the Docker Hub with this command: shell_exec("curl -s -X GET 'https://registry.hub.docker.com/v1/search?q=$filter&page=$pageNumber'"); I first noticed that the Apps search bar filters special characters, so I thought I'd try to hack the URL itself. Something like:
- 'https://registry.hub.docker.com/v1/search?q=mcr.microsoft.com'
- 'https://registry.hub.docker.com/v1/search?q=mssql/server'
I tried all kinds of variations, but nothing works. The problem appears to be not with Community Apps however, but with Microsoft moving its containers to their own registry. See here: https://dbafromthecold.com/2019/02/22/displaying-the-tags-within-the-sql-server-docker-repository/ I think that to get this to work you need some PHP skills, as it probably requires building/modifying the Community Apps search_dockerhub function. See lines 318 to 372 here: https://github.com/Squidly271/community.applications/blob/master/source/community.applications/usr/local/emhttp/plugins/community.applications/include/exec.php
Now there might be another way. Download an official image you CAN download; I've downloaded pihole/pihole for example. This gives you a template to work with: cat /boot/config/plugins/dockerMan/templates-user/my-pihole.xml You can copy it and modify the values you need. After all settings are changed you can deploy the template from the docker tab -> Add Container -> User-Templates.
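Since mcr.microsoft.com speaks the standard Docker Registry v2 HTTP API, the available tags can also be listed directly with curl, which is what the dbafromthecold post above describes. A minimal sketch; the URL layout follows the v2 spec, and the crude sed-based JSON parsing is my own and only suits this flat tags/list payload:

```shell
#!/bin/bash
# Sketch: list tags from a Docker Registry v2 endpoint (like MCR) with curl.
# The tags/list URL shape comes from the registry v2 spec.

# Extract the "tags" array from a registry v2 tags/list JSON response.
extract_tags() {
  # crude JSON parsing with sed/tr; fine for this flat payload only
  sed -n 's/.*"tags":\[\([^]]*\)\].*/\1/p' | tr -d '"' | tr ',' '\n'
}

# Live usage (network required):
# curl -s https://mcr.microsoft.com/v2/mssql/server/tags/list | extract_tags
```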
I've created a short MSSQL template using the XML below, and it pulls the image just fine:

root@Tower:/# cat /boot/config/plugins/dockerMan/templates-user/my-mssql.xml
<?xml version="1.0"?>
<Container version="2">
  <Name>mssql</Name>
  <Repository>mcr.microsoft.com/mssql/server</Repository>
  <Registry>https://hub.docker.com/_/microsoft-mssql-server</Registry>
  <Network>bridge</Network>
  <MyIP/>
  <Shell>sh</Shell>
  <Privileged>false</Privileged>
  <Support>https://hub.docker.com/_/microsoft-mssql-server</Support>
  <Project/>
  <Overview>The official MSSQL container from the new mcr.microsoft.com repository. Converted By Community Applications Always verify this template (and values) against the dockerhub support page for the container</Overview>
  <Category/>
  <WebUI/>
  <TemplateURL/>
  <Icon>/plugins/dynamix.docker.manager/images/question.png</Icon>
  <ExtraParams/>
  <PostArgs/>
  <CPUset/>
  <DateInstalled>1575321381</DateInstalled>
  <DonateText/>
  <DonateLink/>
  <Description>The official MSSQL container from the new mcr.microsoft.com repository. Converted By Community Applications Always verify this template (and values) against the dockerhub support page for the container</Description>
  <Networking>
    <Mode>bridge</Mode>
    <Publish>
      <Port>
        <HostPort>1433</HostPort>
        <ContainerPort>1433</ContainerPort>
        <Protocol>tcp</Protocol>
      </Port>
    </Publish>
  </Networking>
  <Data/>
  <Environment/>
  <Labels/>
  <Config Name="TCP_1433" Target="1433" Default="" Mode="tcp" Description="TCP port for SQL Communication" Type="Port" Display="always" Required="false" Mask="false">1433</Config>
</Container>

When you deploy your container from this template you can modify all other settings (such as paths and ports) from the DockerMan GUI, which is a lot easier. Note that this will also save the changes to the template, so this appears to be a one-time action. Be aware though that you need to feed additional stuff (like accepting the EULA) to the container to get it to work, but this should give you a starting point.
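For the "additional stuff" mentioned above: the image's Docker Hub page documents the ACCEPT_EULA and SA_PASSWORD environment variables, which in DockerMan you would add as Variables. A hedged sketch that only builds (and prints) the equivalent docker run command so it can be reviewed first; the password is a placeholder:

```shell
#!/bin/bash
# Sketch: the extra settings the mssql container needs, expressed as a
# plain docker run command for clarity. In DockerMan these would be added
# as Variables and a Port instead. The SA password here is a placeholder.

build_run_cmd() {
  local sa_password="$1"
  printf '%s' "docker run -d --name mssql \
    -e ACCEPT_EULA=Y \
    -e SA_PASSWORD=${sa_password} \
    -p 1433:1433 \
    mcr.microsoft.com/mssql/server"
}

# Echo the command instead of running it, so it can be reviewed first:
build_run_cmd 'YourStrongPassw0rd'
```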
Also another thing to be aware of: https://github.com/Microsoft/mssql-docker/issues/407. Mentioned in the thread: "Unfortunately, we've had quite a few issues reported with Unraid and it's not something we've tested or supported so far. You can see a list of the issues with Unraid in this query: https://github.com/Microsoft/mssql-docker/issues?utf8=✓&q=is%3Aissue+is%3Aopen+unRAID" macOS has a known bug (also listed in the MS docs) which appears similar. Maybe unRAID on Slackware is hitting the same? Strangely though, my MSSQL Bitwarden container just works for me... so it should be possible, right? I don't use the MSSQL docker container for anything other than Bitwarden at the moment. I'm just curious.
  7. The one in the screenshot is actually the docker from Microsoft's official Docker Hub: (Expand the search to DockerHub) (Search for Microsoft's official image; I guess you should be able to view by Publisher to get a better listing) Now it will be converted into a DockerMan template, which is what @trurl is referring to. You just need to figure out which specific image name you're pulling and which repo it came from. If you use docker pull, DockerMan has no template to work with. Hope this clears it up.
  8. Why not? If you search via the Docker Hub you can just insert your own paths and variables. You would add Paths to make it write data to your host's appdata... Not really sure what else you're searching for. The Microsoft article actually describes this behavior: the link describing data persistence points you to the -v options, which is what adding a path lets DockerMan feed to docker under the hood. Running docker pull bypasses DockerMan, so it'll show up as an orphaned container when it's off.
  9. Ah, that’s great to hear. Thanks for completing that statement. I might be browsing that part of the forums one of these days 😃
  10. The docker container is downloaded to a subdirectory in /var/lib/docker, which is actually a loop device. The loop device is the docker.img image being mounted to that location. The docker image itself is located in /mnt/user/system/docker, which is most likely on your cache (if present). Any changes will therefore be stored inside docker.img (on the array/cache), but will probably be lost upon upgrading the container (because it gets overwritten) or rebuilding the docker.img file. That's why you would normally mount local folders to folders in the container, to have data persist. If you want to make it an actual docker, you'd download it from the community applications (https://synaptiklabs.com/posts/setup-unraid-to-pull-from-docker-hub/). You can download every container from the Docker Hub directly (it takes one extra click in the Apps section), but certain ones that are more easily found there have been made into a template (if I'm not mistaken) by fellow unRAID enthusiasts. These ask you to fill in certain paths instead of you having to look up the options to specify on the Docker Hub. I usually prefer using the one directly from the developer's docker repository, but that's personal. Cheers!
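To see the loop setup described above for yourself, you can inspect the mount table on a live box. A small sketch; the live commands are commented out because their output depends on your system, and the helper just pattern-matches a /proc/mounts-style line:

```shell
#!/bin/bash
# Sketch: confirm /var/lib/docker is backed by a loop-mounted image.
# Live commands (paths as described above, output format may vary):
#   losetup -a                          # list loop devices and backing files
#   grep /var/lib/docker /proc/mounts   # show the mount entry

# Helper: given one /proc/mounts-style line, print the device
# only if it is a loop device.
loop_device_of() {
  printf '%s\n' "$1" | awk '$1 ~ /^\/dev\/loop/ { print $1 }'
}

loop_device_of '/dev/loop2 /var/lib/docker btrfs rw 0 0'
```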
  11. I agree that I felt it to be a real waste, but the product is working, so no need to mark it urgent and alert the dev team as they're busy with the 6.8 release 🍻 I have found a fix for my own host though. I was planning to automate it, but I'm working on backing up my array data offsite on a regular basis, so that takes precedence now. However, a short while ago I wrote a very simple script that monitors the TBW of the cache disks so I could get a grip on the numbers:
TBW on 02-11-2019 23:59:01 --> 3.0 TB # Started measuring here
TBW on 03-11-2019 23:59:01 --> 3.0 TB
TBW on 04-11-2019 23:59:01 --> 3.1 TB, which is 3147.1 GB.
TBW on 05-11-2019 23:59:01 --> 3.3 TB, which is 3385.6 GB.
TBW on 06-11-2019 23:59:01 --> 3.4 TB, which is 3442.8 GB.
TBW on 07-11-2019 23:59:01 --> 3.4 TB, which is 3520.2 GB.
TBW on 08-11-2019 23:59:01 --> 3.5 TB, which is 3562.2 GB.
TBW on 09-11-2019 23:59:01 --> 3.5 TB, which is 3618.7 GB.
TBW on 10-11-2019 23:59:01 --> 3.6 TB, which is 3721.0 GB.
TBW on 11-11-2019 23:59:02 --> 3.8 TB, which is 3941.2 GB. # All containers were on here for a few days, nearly 400 GB daily!
TBW on 12-11-2019 23:59:01 --> 4.2 TB, which is 4272.1 GB.
TBW on 13-11-2019 23:59:01 --> 4.5 TB, which is 4632.5 GB.
TBW on 14-11-2019 23:59:01 --> 4.9 TB, which is 5044.0 GB.
TBW on 15-11-2019 23:59:01 --> 5.2 TB, which is 5351.3 GB. # Turned off most containers here, to save writes
TBW on 16-11-2019 23:59:01 --> 5.3 TB, which is 5468.8 GB.
TBW on 17-11-2019 23:59:01 --> 5.5 TB, which is 5646.1 GB.
TBW on 18-11-2019 23:59:01 --> 5.6 TB, which is 5695.8 GB.
TBW on 19-11-2019 23:59:02 --> 5.6 TB, which is 5738.7 GB. # Applied fix here, all containers are running
TBW on 20-11-2019 23:59:01 --> 5.6 TB, which is 5757.7 GB.
TBW on 21-11-2019 23:59:01 --> 5.6 TB, which is 5778.7 GB.
TBW on 22-11-2019 23:59:01 --> 5.7 TB, which is 5800.2 GB.
TBW on 23-11-2019 23:59:01 --> 5.7 TB, which is 5822.8 GB.
I'm now at 22GB of writes on a daily basis, which is perfect!
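The TBW script itself isn't shown above, but a minimal sketch of one could look like this, assuming a drive whose SMART attribute 241 (Total_LBAs_Written) counts 512-byte units, as the Samsung 860 EVO reports; verify the attribute semantics against your own disks:

```shell
#!/bin/bash
# Sketch of a TBW logger. Assumes smartmontools is installed and that
# SMART attribute 241 (Total_LBAs_Written) is in 512-byte units; both
# are assumptions to verify for your specific drive model.

# Convert a raw Total_LBAs_Written value to gigabytes written.
lba_to_gb() {
  awk -v lba="$1" 'BEGIN { printf "%.1f", lba * 512 / (1024^3) }'
}

log_tbw() {
  local disk="$1"   # e.g. /dev/sdb
  local lba
  lba=$(smartctl -A "$disk" | awk '/Total_LBAs_Written/ { print $NF }')
  echo "TBW on $(date '+%d-%m-%Y %H:%M:%S') --> $(lba_to_gb "$lba") GB"
}

# Example (requires a real device; run from cron for a daily log):
# log_tbw /dev/sdb >> /boot/logs/tbw.log
```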
In short, what I did was:
- Created a share named docker (which has to be cache-only, or this will break after the mover kicks in!)
- Modified the start_docker() function in /etc/rc.d/rc.docker to:

# Start docker
start_docker(){
  if is_docker_running; then
    echo "$DOCKER is already running"
    return 1
  fi
  if mountpoint $DOCKER_ROOT &>/dev/null; then
    echo "Image is mounted, will attempt to unmount it next."
    umount $DOCKER_ROOT 1>/dev/null 2>&1
    if [[ $? -ne 0 ]]; then
      echo "Image still mounted at $DOCKER_ROOT, cancelling because this needs to be a symlink!"
      exit 1
    else
      echo "Image unmounted successfully."
    fi
  fi
  # In order to have a soft link created, we need to remove the /var/lib/docker directory or creating a soft link will fail
  if [[ -d $DOCKER_ROOT ]]; then
    echo "Docker directory exists, removing it so we can use it for the soft link."
    rm -rf $DOCKER_ROOT
    if [[ -d $DOCKER_ROOT ]]; then
      echo "$DOCKER_ROOT still exists! Creating a soft link will fail, thus refusing to start docker."
      exit 1
    else
      echo "Removed $DOCKER_ROOT. Moving on."
    fi
  fi
  # Now that we know that the docker image isn't mounted, we want to make sure the symlink is active
  if [[ -L $DOCKER_ROOT && -d $DOCKER_ROOT ]]; then
    echo "$DOCKER_ROOT is a soft link, docker is allowed to start"
  else
    echo "$DOCKER_ROOT is not a soft link, will try to create it."
    ln -s /mnt/cache/docker /var/lib 1>/dev/null 2>&1
    if [[ $? -ne 0 ]]; then
      echo "Soft link could not be created, refusing to start docker!"
      exit 1
    else
      echo "Soft link created."
    fi
  fi
  echo "starting $BASE ..."
  if [[ -x $DOCKER ]]; then
    # If there is an old PID file (no docker running), clean it up:
    if [[ -r $DOCKER_PIDFILE ]]; then
      if ! ps axc | grep docker 1>/dev/null 2>&1; then
        echo "Cleaning up old $DOCKER_PIDFILE."
        rm -f $DOCKER_PIDFILE
      fi
    fi
    nohup $UNSHARE --propagation slave -- $DOCKER -p $DOCKER_PIDFILE $DOCKER_OPTS >>$DOCKER_LOG 2>&1 &
  fi
}

This basically checks for a mount of the docker.img file and unmounts it.
Then it removes the directory /var/lib/docker and replaces it with a soft link to /mnt/cache/docker (it has to be /mnt/cache though, because /mnt/user/docker is not a BTRFS filesystem, so the BTRFS driver option passed to dockerd will complain). Afterwards docker is started as normal and everything works just as it does within the image; you can still control everything from DockerMan (GUI). All actions are echoed into /var/log/syslog, so you're able to trace what start_docker() does there. The rc.docker file will be back to default upon server reboot by design, so my next task is to automate this, but as I said, I have other priorities at the moment. The automation script will (most likely) also contain sed commands to modify the DockerSettings.page to not include btrfs commands, because sometimes the docker settings page loads very slowly, as those commands don't work on /var/lib/docker now that it's no longer a mountpoint to a btrfs filesystem. Hope this helps you out until that time. I know I was quite frustrated, so I wanted to give you a head start.
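The automation could boil down to a small script called from the flash drive's go file that re-applies the patch after each boot. A hypothetical sketch; the marker string and the single sed edit are illustrative stand-ins for the full function replacement described above, demonstrated here against a scratch copy rather than the live file:

```shell
#!/bin/bash
# Sketch of boot-time automation for the rc.docker patch. The marker
# PATCHED_FOR_SYMLINK and the one-line sed edit are hypothetical
# placeholders; the real patch would swap in the whole start_docker().

patch_rc_docker() {
  local rc_file="$1"    # normally /etc/rc.d/rc.docker
  # only patch once per boot
  if grep -q 'PATCHED_FOR_SYMLINK' "$rc_file"; then
    echo "already patched"
    return 0
  fi
  # illustrative edit: tag the file so repeated runs are no-ops
  sed -i 's|^# Start docker|# Start docker (PATCHED_FOR_SYMLINK)|' "$rc_file"
}

# Demonstrate against a scratch copy, not the live file:
tmp=$(mktemp)
printf '%s\n' '# Start docker' 'start_docker(){' '}' > "$tmp"
patch_rc_docker "$tmp"
```

Calling something like this from /boot/config/go would survive the reboot-resets, since /etc is rebuilt from the flash drive on every boot.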
  12. Ok man, good luck! For now I start things up manually, but when my scripts for backups etc. are done, I'll be writing a script that runs when the array is started and starts all necessary docker containers. That will use a mixture of docker start and docker-compose commands, or maybe I'll just put everything in a compose file; haven't decided yet. Cheers!
  13. Well, in my case it runs perfectly fine with just the ulimit changes. I checked my config file for Bitwarden and it seems to map to appdata also (as this is where my bwdata directory resides; ../mssql maps it to /mnt/appdata/bitwarden/mssql). See the attached screenshot. And mounting it to a random other directory on appdata? So create /mnt/appdata/test and mount it in there? Mine seems to be on version 1.32.0; it might also be worth checking that specific version. Cheers!
  14. Oh wow, I actually noticed this during my tests on the excessive amounts of writes by the docker container. I noticed metadata and system were in RAID1 on Debian (which I created as a test), but not on my unRAID box. I was actually planning on raising a topic once my docker issue was solved, since I wasn't 100% sure whether or not it was a bug. Thank you very much for pointing this out! I will start the conversion tonight.
  15. That was what I initially thought also, but the exact same containers running outside of the docker.img, directly on the cache, are using only 1% of the writes compared to what the same containers running in the docker.img do. I wasn't able to track any files that were heavily modified either (details in the report :-) ), so this supports that statement as well. Cheers!
  16. For those coming here for the same, I've done quite some testing lately and was able to pinpoint it to the loopdevice / use of an image file on top of the cache. If interested, all the details are in the report. I hope this will get some support from the dev team. Thanks for replying here though.
  17. EDIT (March 9th 2021): Solved in 6.9 and up. Reformatting the cache to the new partition alignment and hosting docker directly on a cache-only directory brought writes down to a bare minimum. ### Hey Guys, First of all, I know that you're all very busy getting version 6.8 out there, something I'm very much waiting on as well. I'm seeing great progress, so thanks so much for that! Furthermore, I won't expect this to be on top of the priority list, but I'm hoping someone on the developer team is willing to invest some time (perhaps after the release). Hardware and software involved: 2 x 1TB Samsung EVO 860, set up with LUKS encryption in a BTRFS RAID1 pool. ### TLDR (but I'd suggest reading on anyway 😀) The image file mounted as a loop device is causing massive writes on the cache, potentially wearing out SSDs quite rapidly. This appears to be happening only on encrypted caches formatted with BTRFS (maybe only in a RAID1 setup, but I'm not sure). Hosting the Docker files directory on /mnt/cache instead of using the loopdevice seems to fix this problem. A possible idea for implementation is proposed at the bottom. Grateful for any help provided! ### I have written a topic in the general support section (see link below), but I have done a lot of research lately and think I have gathered enough evidence pointing to a bug; I also was able to build (kind of) a workaround for my situation. More details below. So to see what was actually hammering on the cache I started doing all the obvious, like using a lot of find commands to trace files that were written to every few minutes, and I also used the file activity plugin. Neither was able to trace down any writes that would explain 400 GB worth of writes a day for just a few containers that aren't even that active. Digging further, I moved the docker.img to /mnt/cache/system/docker/docker.img, so directly on the BTRFS RAID1 mountpoint. I wanted to check whether the unRAID FS layer was causing the loop2 device to write this heavily.
No luck either. This gave me a situation I was able to reproduce on a virtual machine though, so I started with a recent Debian install (I know, it's not Slackware, but I had to start somewhere ☺️). I created some vDisks, encrypted them with LUKS, bundled them in a BTRFS RAID1 setup, created the loopdevice on the BTRFS mountpoint (same as on the cache) and mounted it on /var/lib/docker. I made sure I had the NoCow flag set on the IMG file like unRAID does. Strangely this did not show any excessive writes; iotop shows really healthy values for the same workload (I migrated the docker content over to the VM). After my Debian troubleshooting I went back to the unRAID server, wondering whether the loopdevice was created weirdly, so I took the exact same steps to create a new image and pointed the settings from the GUI there. Still the same write issues. Finally I decided to take the whole image out of the equation and took the following steps:
- Stopped docker from the WebGUI so unRAID would properly unmount the loop device.
- Modified /etc/rc.d/rc.docker to not check whether /var/lib/docker was a mountpoint
- Created a share on the cache for the docker files
- Created a softlink from /mnt/cache/docker to /var/lib/docker
- Started docker using "/etc/rc.d/rc.docker start"
- Started my Bitwarden containers.
Looking into the stats with "iotop -ao" I did not see any excessive writing taking place anymore. I had the containers running for about 3 hours and maybe got 1GB of writes total (note that on the loopdevice this gave me 2.5GB every 10 minutes!). Now don't get me wrong, I understand why the loopdevice was implemented. Dockerd is started with options to make it run with the BTRFS driver, and since the image file is formatted with the BTRFS filesystem this works on every setup; it doesn't even matter whether the host runs XFS, EXT4 or BTRFS, it will just work.
In my case I had to point the softlink to /mnt/cache, because pointing it to /mnt/user would not allow me to start using the BTRFS driver (obviously the unRAID filesystem isn't BTRFS). Also, the WebGUI has commands to scrub the filesystem inside the container; all of it is based on the assumption everyone is running docker on BTRFS (which of course they are because of the image 😁). I must say that my approach also broke when I changed something in the shares: certain services get a restart, causing docker to be turned off for some reason. No big issue, since it wasn't meant to be a long-term solution, just to see whether the loopdevice was causing the issue, which I think my tests did point out. Now I'm at the point where I would definitely need some developer help. I'm currently keeping nearly all docker containers off all day, because 300-400GB worth of writes a day is just a BIG waste of expensive flash storage, especially since I've pointed out that it's not needed at all. It does defeat the purpose of my NAS and SSD cache though, since its main purpose was hosting docker containers while allowing the HDs to spin down. Again, I'm hoping someone on the dev team acknowledges this problem and is willing to invest. I did get quite a few hits on the forums and reddit without anyone actually pointing out the root cause of the issue. I'm missing the technical know-how to troubleshoot the loopdevice issues on a lower level, but I have been thinking of possible ways to implement a workaround, like adjusting the Docker Settings page to switch off the use of a vDisk and, if all requirements are met (pointing to /mnt/cache and BTRFS formatted), start docker on a share on the /mnt/cache partition instead of using the vDisk. In this way you would still keep all advantages of the docker.img file (cross filesystem type) and users who don't care about writes could still use it, but you'd be massively helping out others who are concerned about these writes.
I'm not attaching diagnostic files since they would probably not point out what's needed. Also, if this should have been in feature requests, I'm sorry, but I feel that, since the solution is misbehaving in terms of writes, this could also be placed in the bug report section. Thanks though for this great product, I've been using it with a lot of joy so far! I'm just hoping we can solve this one so I can keep all my dockers running without the cache wearing out quickly. Cheers!
  18. Exactly, something I'm keen on knowing as well. It just seems strange that tooling (like the file activity plugin monitoring the cache) isn't showing these writes and/or the writes appear to vanish into thin air. If something's really hammering on the cache, I would expect it to at least show up? That leaves the writes that BTRFS does internally...
  19. Thanks again for the replies. I actually tried putting two SATA drives in RAID on the motherboard (to have RAID without unRAID knowing it), but the logical drive could not be seen from the OS. Using RAID on the mobo is a crappy idea to start with, is what I'm reading online, but it was worth a try. Now I have reformatted the drives to unencrypted BTRFS, but with some dockers running I'm seeing similar results. I'm now encrypting them again, so I can't say anything about the longer term (the containers had been started for a while when I checked, though). One thing I did notice from the iotop -ao output is that every time the loop2 process climbs fast in writes, a line like the one below is shown: -> dockerd -p /var/run/dockerd.pid --log-opt max-size=50m --log-opt max-file=1 --storage-driver=btrfs --log-level=error I might need to figure out more about docker under the hood to give this any meaning, but I have the feeling this might indeed indicate excessive logging. I will dig further into this; maybe I'll find the cause. Cheers!
  20. I understand. But how would I track it down if neither the file activity plugin, nor lsof seem to point out a candidate? I appreciate the efforts!! 🍻
  21. Ah, you're right, forgot to include all the details. I am running encryption on the cache and also have a cache pool with two SSDs. Initially I thought this was only happening with mysql running, but it seems to come randomly; at some moments the writes go up a lot faster than at other moments. If something is writing to the docker image, I should be able to trace it, right? 3 TB written in one month for just Bitwarden and three other small containers seems like a bug? Unfortunately I'm missing the kind of low-level expertise this issue might require. PS: I might try disabling encryption later on, but since it's my vault data on there, I'd rather not. Thanks again!
  22. Hey Guys, So recently I noticed a big number of writes to my cache, while I really don't do a lot on there except for running a few docker containers. I have found some other threads that did not really get a conclusive answer, but I'm hoping posting here will point me in the (or a) direction. The process loop2 seems to be the one responsible for the writes; the advanced docker settings point out this maps to the docker image. The screenshot below shows 2GB written in just a couple of minutes: Now lsof on the pid doesn't really give a lot of open files on that PID. I used the file activity plugin, enabling the cache directory by modifying the config file, but this also shows only minor file activity on the cache. On another thread someone suggested that mysql databases (and more specifically the mode they're in) are to blame, but when only Bitwarden is active I also see the loop2 process go bananas (Bitwarden uses mssql instead of mysql). I know SSDs nowadays have increased warranties covering a decent amount of TBW, but it still feels like a big waste (and maybe even a bug). Has anyone found out what's actually causing this load (and how to stop it)? Or has an idea how to find out what the hell loop2 is writing to, because it really seems like it's just writing into thin air... 😵 Any help is greatly appreciated. Cheers!
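One way to quantify what loop2 writes without iotop is to sample /proc/diskstats, where field 10 of each line is sectors written in 512-byte units (per the kernel's I/O statistics documentation). A small sketch; the live sampling commands are commented out since their values depend on the running system:

```shell
#!/bin/bash
# Sketch: measure writes for a specific device (e.g. loop2) by sampling
# /proc/diskstats. Field 3 is the device name, field 10 is sectors
# written (512-byte units).

sectors_written() {
  # $1 = device name; reads diskstats-formatted lines on stdin
  awk -v dev="$1" '$3 == dev { print $10 }'
}

gb_from_sectors() {
  awk -v s="$1" 'BEGIN { printf "%.2f", s * 512 / (1024^3) }'
}

# Live usage: sample twice and take the difference
# before=$(sectors_written loop2 < /proc/diskstats)
# sleep 600
# after=$(sectors_written loop2 < /proc/diskstats)
# echo "loop2 wrote $(gb_from_sectors $((after - before))) GB in 10 minutes"
```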
  23. Yeah I know. This seems like a really nasty tradeoff. Security vs SSD's wearing out quick. I was quite happy with the encryption on all disks, but feel that it's a real waste of the SSD's to write TB's of data to them each month.
  24. +1 on this. I'm seeing similar issues (high writes on the SSD cache). However since my docker containers are hosting my bitwarden vault, I'm not really keen on unencrypting it. Bug in BTRFS? Or something we can tweak?
  25. I see you have it figured out in the meantime. All other parameters are managed by the Compose file in my case, so they didn't really apply. Great to see you have it up and running, and thanks for sharing back your own findings; they might definitely help someone in the future. Cheers man!