S1dney

Members
  • Content Count

    31
  • Joined

  • Last visited

Community Reputation

8 Neutral

About S1dney

  • Rank
    Advanced Member

  1. Hahah thanks, trial and error this time though. You're right, it was a long night; I will adjust it right away. I meant the mountpoint of the cache indeed, instead of a user share. Yeah, I was aware of this. I wasn't affected by the bug though, so I haven't dug into it too deeply. I might try switching back to the /mnt/user shares a few versions from now, but until then this is a perfect workaround for me.
  2. Well, isn't that great: of course the Bitwarden version is crashing. I just have to wait until they decide to upgrade the SQL container to a newer version. I did notice the MCR version of SQL doesn't require the ulimits override anymore, so that's nice. I've collected my Bitwarden MSSQL logging and rolled back to 6.7.2, and everything's fine now (sad, because I liked the new features). Thanks for confirming this.

EDIT: Now that was an additional four hours well spent. Apparently the combination of the newer Docker build with the MSSQL container trips over mounting volumes relative to /mnt/user/appdata (i.e. the unRAID user filesystem). The Docker version shipped with unRAID 6.7.2 (that was 18.x?) doesn't seem to care, but on 19.x this suddenly becomes a problem. The reason your container worked (I assume) is that you mounted directly onto the btrfs (or whichever filesystem you're using) mountpoint on /mnt/cache; Docker seems to like that better. Although this would break if the mover kicks in, I don't really mind, because my appdata is cache-only anyway. I've updated the docker-compose.override.yml file to map the volumes differently, and the container is running fine now.

```yaml
#########################################################################
# Override file for the auto-generated docker-compose.yml file provided
# by Bitwarden.
#
# This file mounts the volumes inside the MSSQL container directly onto
# the btrfs mountpoint instead of a relative path in the unRAID
# filesystem, as the MSSQL container fails otherwise.
#
# The file also sets ulimits on the same MSSQL container, because with
# the ulimit stack size set to unlimited system-wide (as is the case on
# unRAID), the container refuses to start.
#########################################################################
version: '3'

services:
  mssql:
    volumes:
      - /mnt/cache/appdata/bitwarden/bwdata/mssql/data:/var/opt/mssql/data
      - /mnt/cache/appdata/bitwarden/bwdata/logs/mssql:/var/opt/mssql/log
      - /mnt/cache/appdata/bitwarden/bwdata/mssql/backups:/etc/bitwarden/mssql/backups
    ulimits:
      stack:
        soft: "8192000"
        hard: "8192000"
```

Cheers! 🍻
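Since the override above only stays valid while appdata lives on the cache, a quick sanity check before trusting the /mnt/cache paths can help. This is a hypothetical sketch: the `.cfg` path and the `shareUseCache` key are assumptions based on the usual unRAID share config layout on flash.

```shell
#!/bin/bash
# Hypothetical guard before relying on /mnt/cache/... volume mappings:
# the share must be cache-only, or the mover can relocate files and
# break the container. Config path and key name are assumptions.
check_cache_only() {
  local cfg=$1
  if grep -q 'shareUseCache="only"' "$cfg" 2>/dev/null; then
    echo "cache-only: /mnt/cache mappings are safe"
  else
    echo "WARNING: share may be moved off the cache"
  fi
}

# Example (path is an assumption):
# check_cache_only /boot/config/shares/appdata.cfg
```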
  3. Oh darn, so much irony. I updated to 6.8 rc7 yesterday and this seems to have killed my Bitwarden MSSQL container. It's stuck in a restart loop, and the errors aren't really getting me anywhere. Are any of you actually using the MSSQL container already running a 6.8 build?
  4. On second thought, you're right. I know that DockerMan/the GUI is the way to interact with Docker on unRAID, but I didn't know (yet) how to pull a container that could not be located from the Apps (or DockerHub within Apps) section. Therefore I thought I'd include it in case anyone ends up here with the same questions. I've edited my post because adding a container from scratch using "Add Container" might even be better; this is the way it's meant to be used, I guess 😅. I had always found every container in the Apps or DockerHub section, so I never had the need to look into this further. Thanks for the time taken to elaborate/help though.
  5. Hahaha, saw this after I hit Apply. Thanks, I will check it out too! Until some developer wants to take ownership of publishing the MSSQL container, the way described above also works, I guess.
  6. Well, I don't think that's a stupid question at all. I would assume someone has to take ownership of maintaining it, like the ones @binhex is maintaining? (Not sure how it works exactly, but thanks anyway for your work!) Now @BoxOfSnoo's comment got me thinking: modifying the user template can be done more easily than copying the XML files to the flash drive via the command line. So in case anyone else wants to download the image from the new Microsoft Container Registry, I would take these steps:
     1. Search for mssql in the Community Apps (make sure that searching the DockerHub is enabled in the settings) and click "Click Here To Get More Results From DockerHub".
     2. Install the mssql-server-linux container (although it can be any container of choice; it doesn't have to be MSSQL either).
     3. Make sure the "Advanced" view is selected; this shows which Repository and Docker Hub URL you're downloading the image from.
     4. Change the following values:
        - Name: the name of your container, obviously.
        - Repository: mcr.microsoft.com/mssql/server
        - Docker Hub URL: https://hub.docker.com/_/microsoft-mssql-server
        - Icon URL: if you want a custom icon.
        - Description: something descriptive (although not really necessary).
     5. Then add the required Ports, Paths and Variables from the GUI.

After clicking Apply, the container is created from the new Microsoft registry and the template is written to flash, without even using the command line. In my case it wrote my-TEST-MSSQL-BasedOnMCR.xml to /boot/config/plugins/dockerMan/templates-user/. Easier than messing around with XMLs on flash.

EDIT: Downloading a new container isn't even necessary, as you can just click "Add Container" on the Docker tab. Making sure the "Advanced" button is selected gives you the exact same options.
  7. Did you download the MSSQL image from the Apps section? That only gives you the image from the Microsoft Docker Hub, which is not being updated anymore and is more than a year old (see post). By manually constructing the user-template XML you download the newest image from the Microsoft Container Registry by pointing it at that repo/image, which Community Apps isn't able to find (yet). Nice to see this is resolved; a good addition to dockerland on unRAID 🙂
  8. Good to hear! You're welcome. You would indeed map a volume to the container pointing to a location on the host. This is where it crashes according to multiple sources though, so I would be curious how that goes for you. You would add a Path from the DockerMan page and map /var/opt/mssql/ or /var/opt/mssql/data to a path on your host to make the data persistent. Cheers!
  9. I think you're right on this one. I only picked that container as an example. According to the Community Apps GitHub, the Apps page does a lookup on the Docker Hub with this command:

```php
shell_exec("curl -s -X GET 'https://registry.hub.docker.com/v1/search?q=$filter&page=$pageNumber'");
```

I first noticed that the Apps search bar filters special characters, so I thought I'd try to hack the URL itself, something like:
- https://registry.hub.docker.com/v1/search?q=mcr.microsoft.com
- https://registry.hub.docker.com/v1/search?q=mssql/server

I tried all kinds of variations, but nothing works. The problem appears to be not with Community Apps, however, but with Microsoft moving its containers to their own registry. See here: https://dbafromthecold.com/2019/02/22/displaying-the-tags-within-the-sql-server-docker-repository/ I think that to get this to work you need some PHP skills, as it probably requires building/modifying the Community Apps search_dockerhub function. See lines 318 through 372 here: https://github.com/Squidly271/community.applications/blob/master/source/community.applications/usr/local/emhttp/plugins/community.applications/include/exec.php

Now there might be another way. Download an official image you CAN download; I've downloaded pihole/pihole for example. This gives you a template to work with:

```shell
cat /boot/config/plugins/dockerMan/templates-user/my-pihole.xml
```

You can copy it and modify the values you need. After all settings are changed, you can deploy the template from the Docker tab -> Add Container -> User Templates.

I've created a short MSSQL template using the example below, and it pulls the image just fine (root@Tower:/# cat /boot/config/plugins/dockerMan/templates-user/my-mssql.xml):

```xml
<?xml version="1.0"?>
<Container version="2">
  <Name>mssql</Name>
  <Repository>mcr.microsoft.com/mssql/server</Repository>
  <Registry>https://hub.docker.com/_/microsoft-mssql-server</Registry>
  <Network>bridge</Network>
  <MyIP/>
  <Shell>sh</Shell>
  <Privileged>false</Privileged>
  <Support>https://hub.docker.com/_/microsoft-mssql-server</Support>
  <Project/>
  <Overview>The official MSSQL container from the new mcr.microsoft.com repository. Converted By Community Applications Always verify this template (and values) against the dockerhub support page for the container</Overview>
  <Category/>
  <WebUI/>
  <TemplateURL/>
  <Icon>/plugins/dynamix.docker.manager/images/question.png</Icon>
  <ExtraParams/>
  <PostArgs/>
  <CPUset/>
  <DateInstalled>1575321381</DateInstalled>
  <DonateText/>
  <DonateLink/>
  <Description>The official MSSQL container from the new mcr.microsoft.com repository. Converted By Community Applications Always verify this template (and values) against the dockerhub support page for the container</Description>
  <Networking>
    <Mode>bridge</Mode>
    <Publish>
      <Port>
        <HostPort>1433</HostPort>
        <ContainerPort>1433</ContainerPort>
        <Protocol>tcp</Protocol>
      </Port>
    </Publish>
  </Networking>
  <Data/>
  <Environment/>
  <Labels/>
  <Config Name="TCP_1433" Target="1433" Default="" Mode="tcp" Description="TCP port for SQL Communication" Type="Port" Display="always" Required="false" Mask="false">1433</Config>
</Container>
```

When you deploy your container from this template you can modify all other settings (such as paths and ports) from the DockerMan GUI, which is a lot easier. Note that this also saves the changes back to the template, so it appears to be a one-time action. Be aware though that you need to feed additional stuff (like accepting the EULA) to the container to get it to work, but this should give you a starting point.
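For the EULA acceptance and SA password, the same `<Config>` mechanism used for the port above should work; a sketch, assuming DockerMan accepts `Type="Variable"` entries in this form (ACCEPT_EULA and SA_PASSWORD are the environment variables the MSSQL container documents; leave the password value to be filled in at deploy time):

```xml
<Config Name="ACCEPT_EULA" Target="ACCEPT_EULA" Default="Y" Mode="" Description="Accept the MSSQL EULA" Type="Variable" Display="always" Required="true" Mask="false">Y</Config>
<Config Name="SA_PASSWORD" Target="SA_PASSWORD" Default="" Mode="" Description="SA user password" Type="Variable" Display="always" Required="true" Mask="true"></Config>
```

These lines would sit inside the `<Container>` element next to the existing TCP_1433 entry; verify the attribute set against a template DockerMan itself generated before relying on it.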
Also, another thing to be aware of: https://github.com/Microsoft/mssql-docker/issues/407. Mentioned in the thread: "Unfortunately, we've had quite a few issues reported with Unraid and it's not something we've tested or supported so far. You can see a list of the issues with Unraid in this query: https://github.com/Microsoft/mssql-docker/issues?utf8=✓&q=is%3Aissue+is%3Aopen+unRAID" macOS has a known bug (listed in the MS docs as well) which appears similar. Maybe unRAID on Slackware is hitting the same? Strangely though, my MSSQL Bitwarden container just works... so it should be possible, right? I don't use the MSSQL Docker container for anything other than Bitwarden at the moment. I'm just curious.
  10. The one in the screenshot is actually the container from Microsoft's official Docker Hub (expand the search to DockerHub and search for Microsoft's official image; I guess you should be able to filter on Publisher to get a better listing). It will then be converted into the DockerMan template that @trurl is referring to. You just need to figure out which specific image name you're pulling and which repo it came from. If you use docker pull directly, DockerMan has no template to work with. Hope this clears it up.
  11. Why not? If you found it via the Docker Hub search, you can just insert your own paths and variables. You would add Paths to make it write data to your host's appdata. Not really sure what else you're looking for; the Microsoft article actually describes this behavior: the section on data persistence points you to the -v options, which is exactly what adding a Path makes DockerMan feed to Docker under the hood. Running docker pull bypasses DockerMan, so the container will show up as orphaned when it's off.
  12. Ah, that’s great to hear. Thanks for completing that statement. I might be browsing that part of the forums one of these days 😃
  13. The docker container is downloaded to a subdirectory of /var/lib/docker, which is actually a loop device: the docker.img image mounted at that location. The image file itself is located in /mnt/user/system/docker, which is most likely on your cache (if present). Any changes will therefore be stored inside docker.img (on the array/cache), but will probably be lost upon upgrading the container (because it gets overwritten) or rebuilding the docker.img file. That's why you would normally mount local folders to folders in the container, to have data persist. If you want to make it an actual managed container, you'd download it through Community Applications (https://synaptiklabs.com/posts/setup-unraid-to-pull-from-docker-hub/). You can download every container from the Docker Hub directly (it takes one extra click in the Apps section), but certain ones that are more easily found there have been turned into templates (if I'm not mistaken) by fellow unRAID enthusiasts. Those ask you to create certain paths instead of you having to search the Docker Hub page for options to specify. I usually prefer using the one directly from the developer's Docker repository, but that's personal. Cheers!
  14. I agree that it felt like a real waste, but the product is working, so no need to mark it urgent and alert the dev team while they're busy with the 6.8 release 🍻 I have found a fix for my own host though. I was planning to automate it, but I'm working on backing up my array data offsite on a regular basis, so that takes precedence for now. A short while ago I wrote a very simple script that monitors the TBW of the cache disks, so I could get a grip on the numbers:

```
TBW on 02-11-2019 23:59:01 --> 3.0 TB                       # Started measuring here
TBW on 03-11-2019 23:59:01 --> 3.0 TB
TBW on 04-11-2019 23:59:01 --> 3.1 TB, which is 3147.1 GB.
TBW on 05-11-2019 23:59:01 --> 3.3 TB, which is 3385.6 GB.
TBW on 06-11-2019 23:59:01 --> 3.4 TB, which is 3442.8 GB.
TBW on 07-11-2019 23:59:01 --> 3.4 TB, which is 3520.2 GB.
TBW on 08-11-2019 23:59:01 --> 3.5 TB, which is 3562.2 GB.
TBW on 09-11-2019 23:59:01 --> 3.5 TB, which is 3618.7 GB.
TBW on 10-11-2019 23:59:01 --> 3.6 TB, which is 3721.0 GB.
TBW on 11-11-2019 23:59:02 --> 3.8 TB, which is 3941.2 GB.  # All containers were on for a few days here, nearly 400 GB daily!
TBW on 12-11-2019 23:59:01 --> 4.2 TB, which is 4272.1 GB.
TBW on 13-11-2019 23:59:01 --> 4.5 TB, which is 4632.5 GB.
TBW on 14-11-2019 23:59:01 --> 4.9 TB, which is 5044.0 GB.
TBW on 15-11-2019 23:59:01 --> 5.2 TB, which is 5351.3 GB.  # Turned off most containers here, to save writes
TBW on 16-11-2019 23:59:01 --> 5.3 TB, which is 5468.8 GB.
TBW on 17-11-2019 23:59:01 --> 5.5 TB, which is 5646.1 GB.
TBW on 18-11-2019 23:59:01 --> 5.6 TB, which is 5695.8 GB.
TBW on 19-11-2019 23:59:02 --> 5.6 TB, which is 5738.7 GB.  # Applied fix here, all containers are running
TBW on 20-11-2019 23:59:01 --> 5.6 TB, which is 5757.7 GB.
TBW on 21-11-2019 23:59:01 --> 5.6 TB, which is 5778.7 GB.
TBW on 22-11-2019 23:59:01 --> 5.7 TB, which is 5800.2 GB.
TBW on 23-11-2019 23:59:01 --> 5.7 TB, which is 5822.8 GB.
```

I'm now at 22 GB of writes on a daily basis, which is perfect!
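The logger itself isn't shown, but the idea can be sketched with smartctl. A hypothetical version, assuming an NVMe cache device (the NVMe spec defines one "Data Units Written" unit as 512,000 bytes); the device path, log path, and exact smartctl output format are assumptions, and SATA SSDs report different attributes:

```shell
#!/bin/bash
# Hypothetical daily TBW logger (e.g. run from cron or the User Scripts
# plugin). Device and log paths are examples.

# Pull the raw "Data Units Written" counter out of `smartctl -A` output.
parse_units() {
  awk -F: '/Data Units Written/ {gsub(/[ ,]/, "", $2); print $2+0}'
}

# One NVMe data unit is 512,000 bytes; convert to GB and TB for the log.
format_tbw() {
  awk -v u="$1" 'BEGIN {
    gb = u * 512000 / 1e9
    printf "%.1f TB, which is %.1f GB.", gb / 1000, gb
  }'
}

log_tbw() {
  local units
  units=$(smartctl -A /dev/nvme0 | parse_units)
  echo "TBW on $(date '+%d-%m-%Y %H:%M:%S') --> $(format_tbw "$units")" >> /var/log/tbw.log
}

# log_tbw   # uncomment when scheduling on the actual host
```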
In short, what I did was:
- Created a share named docker (which has to be cache-only, or this will break after the mover kicks in!)
- Modified the start_docker() function in /etc/rc.d/rc.docker to:

```shell
# Start docker
start_docker(){
  if is_docker_running; then
    echo "$DOCKER is already running"
    return 1
  fi
  if mountpoint $DOCKER_ROOT &>/dev/null; then
    echo "Image is mounted, will attempt to unmount it next."
    umount $DOCKER_ROOT 1>/dev/null 2>&1
    if [[ $? -ne 0 ]]; then
      echo "Image still mounted at $DOCKER_ROOT, cancelling cause this needs to be a symlink!"
      exit 1
    else
      echo "Image unmounted successfully."
    fi
  fi
  # In order to have a soft link created, we need to remove the
  # /var/lib/docker directory, or creating the soft link will fail
  if [[ -d $DOCKER_ROOT ]]; then
    echo "Docker directory exists, removing it so we can use it for the soft link."
    rm -rf $DOCKER_ROOT
    if [[ -d $DOCKER_ROOT ]]; then
      echo "$DOCKER_ROOT still exists! Creating a soft link will fail, thus refusing to start docker."
      exit 1
    else
      echo "Removed $DOCKER_ROOT. Moving on."
    fi
  fi
  # Now that we know the docker image isn't mounted, make sure the symlink is active
  if [[ -L $DOCKER_ROOT && -d $DOCKER_ROOT ]]; then
    echo "$DOCKER_ROOT is a soft link, docker is allowed to start"
  else
    echo "$DOCKER_ROOT is not a soft link, will try to create it."
    ln -s /mnt/cache/docker /var/lib 1>/dev/null 2>&1
    if [[ $? -ne 0 ]]; then
      echo "Soft link could not be created, refusing to start docker!"
      exit 1
    else
      echo "Soft link created."
    fi
  fi
  echo "starting $BASE ..."
  if [[ -x $DOCKER ]]; then
    # If there is an old PID file (no docker running), clean it up:
    if [[ -r $DOCKER_PIDFILE ]]; then
      if ! ps axc | grep docker 1>/dev/null 2>&1; then
        echo "Cleaning up old $DOCKER_PIDFILE."
        rm -f $DOCKER_PIDFILE
      fi
    fi
    nohup $UNSHARE --propagation slave -- $DOCKER -p $DOCKER_PIDFILE $DOCKER_OPTS >>$DOCKER_LOG 2>&1 &
  fi
}
```

This basically checks for a mount of the docker.img file and unmounts it. Then it removes the directory /var/lib/docker and replaces it with a soft link to /mnt/cache/docker (it has to be /mnt/cache, because /mnt/user/docker is not a btrfs filesystem, so the btrfs driver option passed to dockerd would complain). Afterwards Docker starts as normal and everything works just as it did inside the image; you can still control everything from DockerMan (the GUI). All actions are echoed to /var/log/syslog, so you can trace what start_docker() does there. The rc.docker file will be back to default upon server reboot by design, so my next task is to automate this, but as I said, I have other priorities at the moment. The automation script will (most likely) also contain sed commands to modify DockerSettings.page so it no longer includes btrfs commands, because the Docker settings page sometimes loads very slowly: those commands don't work on /var/lib/docker now that it's no longer a mountpoint of a btrfs filesystem. Hope this helps you out until then. I know I was quite frustrated, so I wanted to give you a head start.
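Since rc.docker reverts on every reboot, the re-apply step could be scripted from the go file on flash. A minimal sketch under the assumption that a patched copy of rc.docker is kept on the flash drive; both file paths are placeholders, not the author's actual layout:

```shell
#!/bin/bash
# Hypothetical boot-time automation (e.g. called from /boot/config/go):
# install a patched rc.docker from flash unless the symlink logic is
# already present. Source and destination paths are placeholders.
apply_rc_docker_patch() {
  local src=$1 dst=$2
  # The symlink line is a reasonable marker for "already patched".
  if grep -q 'ln -s /mnt/cache/docker' "$dst" 2>/dev/null; then
    echo "rc.docker already patched"
  else
    cp "$src" "$dst" && chmod 755 "$dst" && echo "patched rc.docker installed"
  fi
}

# Example invocation from the go file (paths are placeholders):
# apply_rc_docker_patch /boot/config/custom/rc.docker.patched /etc/rc.d/rc.docker
```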
  15. Ok man, good luck! For now I start things up manually, but when my scripts for backups etc. are done, I'll be writing a script that runs when the array is started and starts all the necessary containers. That will use a mixture of docker start and docker-compose commands, or maybe I'll just put everything in a compose file; haven't decided yet. Cheers!
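The mixed approach described above can be sketched as a small wrapper; the container names and compose file path are hypothetical, and a DRY_RUN switch is added so the script can be exercised without Docker present:

```shell
#!/bin/bash
# Hypothetical array-start script (e.g. via the User Scripts plugin):
# start standalone containers with `docker start`, then bring up the
# compose-managed Bitwarden stack. Names and paths are examples.

# With DRY_RUN set, print each command instead of executing it.
run() { if [ -n "$DRY_RUN" ]; then echo "$@"; else "$@"; fi; }

start_containers() {
  for c in mssql pihole; do        # example container names
    run docker start "$c"
  done
  run docker-compose -f /mnt/cache/appdata/bitwarden/bwdata/docker/docker-compose.yml up -d
}

# start_containers   # uncomment when installing as an array-start script
```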