benediktleb

Members
  • Posts: 9
  • Joined
  • Last visited

  1. Is there any danger/possibility that this would damage dockers? I updated my MariaDB docker yesterday and somehow managed to corrupt the database (probably because another docker was still accessing it). Now I'm very cautious about automatically shutting down dockers.
  2. Thank you! I'll shut the others down first, and then the MariaDB docker. Do you use unRAID's UI to shut it down, or do you use the command line of that particular docker?
  3. I updated the docker yesterday and that led to the install being corrupted (couldn't log in anymore, ibdata1 was pretty much fried). I didn't have any MySQL backups (which is stupid, I know ;-)), but luckily I also only set up my server last weekend, so I didn't lose much that I can't quickly recreate. My question is: what's the best way of updating this docker? Should I first shut down all the other dockers that use MariaDB before applying the update? (One possible update sequence is sketched after this list.) I will have /appdata/ backups from now on, but that of course doesn't prevent an update from causing corruption in the first place. Thanks so much in advance. All the best, Benedikt
  4. Hi all, I've read through the docker FAQ and forum guides, but I'm still a little confused as to how to properly pass information to a docker. I want to install Radicale, to self-host CalDAV/CardDAV, and to do so I enabled the Docker Hub integration in Community Applications. My docker of choice is tomsquest/docker-radicale. I can get it to run by passing a port to it. So far so good. However, this is only the very basic installation, which does not allow remote access (even from another machine on the same network; it's limited to localhost). That is the minimal instruction (see Docker Hub). I want to pass on some of the production-grade instructions instead:
     docker run -d --name radicale \
       -p 127.0.0.1:5232:5232 \
       --read-only \
       --init \
       --security-opt="no-new-privileges:true" \
       --cap-drop ALL \
       --cap-add CHOWN \
       --cap-add SETUID \
       --cap-add SETGID \
       --cap-add KILL \
       --pids-limit 50 \
       --memory 256M \
       --health-cmd="curl --fail http://localhost:5232 || exit 1" \
       --health-interval=30s \
       --health-retries=3 \
       -v ~/radicale/data:/data \
       -v ~/radicale/config:/config:ro \
       tomsquest/docker-radicale
     I'm confused as to how I should decide what to pass as a variable, what as a key, etc. (one possible mapping is sketched after this list). I also tried to map /mnt/user/appdata/radicale/data/ (host path) to /radicale/data/ (container path), but it did nothing other than create the directories. The docker is running, but no differently than before (when I only passed the port through). I'm sorry to be yet another person trying to understand this. I watched Spaceinvader One's video on it, too, but it explained the very basics and relied only on CA. Great video nonetheless, but sadly it didn't help me much here. Once I figure this out, I'm happy to submit the template to CA and also maintain it. It'll be a learning curve, for sure, but a good calendar/contacts sync server is missing from CA (running a Nextcloud instance for this alone is too much for me). Thanks so much. All the best, Benedikt
  5. Hi Marc, just what I needed, thank you so much! I was able to get the docker to boot, although it only worked when I left the server name blank (it wouldn't take my server's IP). But that's fine. What I'm having problems with, I think, is permissions: the WebDAV WebUI (and also a connected client) shows no files, even though the host folder mapped to "Container Path: /var/lib/dav" is populated with many files and folders. I already ran the Docker Permission tool, but that didn't help (a quick ownership check is sketched after this list). Did you run into the same problem by any chance? I'm looking into it now, but wanted to leave this comment here already. Thanks so much again! All the best, Benedikt
  6. Works like a charm. Could have thought of that myself, too, what a simple answer. Thanks so much!
  7. Hi all, thanks for this great plugin. I have WireGuard set up with the peer type "Remote access to server". This works well, but I need some help with my configuration.
     What I want to do: I want to connect all my devices to the NAS using the same IP address, regardless of whether I'm on my local network or not, and regardless of whether WireGuard is on (when I am on the local network).
     Current problem: My unRAID is on 192.168.1.116 (local network). The standard "local tunnel address" was somewhere in the 10.xx.xx.xx range. This created the following problem: I have my NAS connected via Samba under 192.168.1.116 (local address). Once I leave the house and turn on WireGuard, that address can no longer be used, and I need to add another server entry using the 10.xx.xx.xx address instead of 192.168.1.116. That's of course not what I want. So I changed the "local tunnel address" to 192.168.1.116 (and the tunnel to 192.168.1.1/24), which lets me connect via WireGuard using the "local IP", BUT once I'm back on my local network and WireGuard is still activated, I cannot access the NAS. This makes sense, but I don't know what the solution could be.
     Question: Can anyone help me with my setup? I want to connect to the NAS using the same IP, regardless of whether I'm on my local network or not (and, when I'm at home, regardless of whether WireGuard is running). One possible approach is sketched after this list. Help is much appreciated, thanks so much! All the best, Benedikt
  8. Thank you so much! I know this thread is old, but I'm sure people will eventually have the problem on an Asrock J4125B board, too. Removing the dash worked.
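
For the MariaDB update question in item 3: a minimal sketch of one possible stop/dump/update/restart sequence, assuming the database container is named mariadb and the dependent containers are named nextcloud and bookstack (all placeholders), and that MYSQL_ROOT_PASSWORD is set as an environment variable in the container. This is an illustration of the idea, not a verified procedure for any particular template.

   DEPENDENTS="nextcloud bookstack"                  # placeholders: containers that write to the DB
   DB="mariadb"                                      # placeholder: name of the MariaDB container
   BACKUP="/mnt/user/backups/mariadb-$(date +%F).sql"

   # 1. Stop everything that still talks to the database.
   docker stop $DEPENDENTS

   # 2. Take a logical dump while MariaDB is idle but still running.
   docker exec "$DB" sh -c 'exec mysqldump --all-databases -uroot -p"$MYSQL_ROOT_PASSWORD"' > "$BACKUP"

   # 3. Stop the database cleanly, then apply the image update
   #    (on unRAID, the "apply update" button on the Docker tab handles the pull/recreate).
   docker stop "$DB"

   # 4. Once the updated container is back up and healthy, restart the dependents.
   docker start $DEPENDENTS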
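For the Radicale template question in item 4: a rough sketch of how the production-grade flags from the post could be split across an unRAID container template (Repository, Port, Path, and the advanced "Extra Parameters" field). The field layout and host paths are assumptions rather than a verified template; the one point that follows from plain Docker semantics is that binding the published port to 127.0.0.1 is exactly what restricts access to the unRAID host itself.

   # Possible mapping onto unRAID template fields (field names are assumptions):
   #
   #   Repository:       tomsquest/docker-radicale
   #   Port:             host 5232 -> container 5232
   #                     (drop the 127.0.0.1: prefix from the docker run example,
   #                      otherwise only the unRAID host itself can reach the port)
   #   Path:             /mnt/user/appdata/radicale/data   -> /data
   #   Path:             /mnt/user/appdata/radicale/config -> /config  (read-only)
   #   Extra Parameters: --read-only --init --security-opt="no-new-privileges:true"
   #                     --cap-drop ALL --cap-add CHOWN --cap-add SETUID
   #                     --cap-add SETGID --cap-add KILL --pids-limit 50 --memory 256M
   #                     (the --health-* flags could go here as well)
   #
   # The docker run command unRAID would assemble from such a template is roughly:
   docker run -d --name radicale \
     -p 5232:5232 \
     --read-only --init \
     --security-opt="no-new-privileges:true" \
     --cap-drop ALL --cap-add CHOWN --cap-add SETUID --cap-add SETGID --cap-add KILL \
     --pids-limit 50 --memory 256M \
     -v /mnt/user/appdata/radicale/data:/data \
     -v /mnt/user/appdata/radicale/config:/config:ro \
     tomsquest/docker-radicale

Note that the container-side paths are /data and /config (taken from the image's own docker run example above), rather than /radicale/data, which would explain why that mapping only created empty directories.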
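For the empty WebDAV listing in item 5: a short sketch of how one might check whether it really is an ownership problem, assuming a hypothetical host path /mnt/user/appdata/webdav mapped to the container path /var/lib/dav mentioned in the post; the container name and UID/GID values are placeholders.

   # Compare what the host has with what the container actually sees.
   ls -lan /mnt/user/appdata/webdav                      # placeholder host path
   docker exec <container-name> ls -lan /var/lib/dav     # are the same files visible inside?

   # If the files are there but owned by a UID the web server can't read,
   # chown them on the host to whatever UID/GID the container runs as, e.g.:
   chown -R 33:33 /mnt/user/appdata/webdav               # 33 = www-data in many Debian-based images

If the listing inside the container comes back empty, the path mapping rather than the ownership is the more likely culprit.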
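For the WireGuard question in item 7: the approach usually suggested is to put the tunnel back on its own 10.x subnet and instead use the "Remote access to LAN" peer type (rather than "Remote access to server"), so the client's AllowedIPs include the LAN subnet and the NAS stays reachable at 192.168.1.116 through the tunnel. Below is a minimal sketch of what the resulting client-side config might look like; keys, endpoint, and tunnel subnet are placeholders. Whether the tunnel can simply stay up while on the home Wi-Fi depends on the client (some WireGuard apps can deactivate on demand on a trusted network), so treat this as a starting point rather than a verified setup.

   # Example client-side tunnel config (all values are placeholders):
   cat > wg-nas.conf << 'EOF'
   [Interface]
   Address = 10.253.0.2/32                  # client tunnel address, on its own subnet
   PrivateKey = <client-private-key>

   [Peer]
   PublicKey = <server-public-key>
   Endpoint = your-ddns-name.example:51820
   # Include the LAN subnet so the NAS stays reachable at its normal address:
   AllowedIPs = 10.253.0.1/32, 192.168.1.0/24
   EOF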