spatial_awareness

Members · Posts: 4

  1. Pulled it down; the logging isn't as frantic, and these files aren't growing at the same rate anymore. I still don't understand what to set the variables to, or where to adjust them; I checked here. (A log-rotation sketch follows after this list.) I still see weird entries on the tails of the .json log in the container, and now in the supervisord log too. (I have a script that deletes the supervisord.XXX log files every hour, which is why we don't see that directory grow.)

     root@tank:/mnt/user/appdata/binhex-delugevpn-movies# docker image ls
     REPOSITORY              TAG      IMAGE ID       CREATED       SIZE
     binhex/arch-delugevpn   latest   1b4be30790e3   2 hours ago   1.34GB   < ----
     limetech/plex           latest   622fc6d98c10   2 weeks ago   514MB
     binhex/arch-delugevpn   <none>   edb42194fcfd   2 weeks ago   1.34GB

     root@tank:/mnt/user/appdata/binhex-delugevpn-movies# docker container ps
     CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
     9dbeb52ff92c binhex/arch-delugevpn "/usr/bin/tini -- /b…" 14 minutes ago Up 14 minutes 0.0.0.0:8113->8112/tcp, 0.0.0.0:8119->8118/tcp, 0.0.0.0:58847->58846/tcp, 0.0.0.0:58947->58946/tcp, 0.0.0.0:58947->58946/udp binhex-delugevpn-tv
     c52761545533 binhex/arch-delugevpn "/usr/bin/tini -- /b…" 17 minutes ago Up 5 minutes 0.0.0.0:8111->8112/tcp, 0.0.0.0:8117->8118/tcp, 0.0.0.0:58845->58846/tcp, 0.0.0.0:58945->58946/tcp, 0.0.0.0:58945->58946/udp binhex-delugevpn-movies < ---
     b02e3b1f816f limetech/plex "/sbin/my_init" 11 days ago Up 3 days PlexMediaServer

     root@tank:/mnt/user/appdata/binhex-delugevpn-movies# ls *log
     91dd38ef8c6a1cefa8e07f26b7966360663eeef56e5a00f13f5e9b8e886a650b-json.log*  deluge-web.log  deluged.log  supervisord.log

     root@tank:/mnt/user/appdata/binhex-delugevpn-movies# tail *.log && tail /var/lib/docker/containers/c52761545533f86c6044179e673d51a1184385e37d7d12901349fcb6e3726b87/c52761545533f86c6044179e673d51a1184385e37d7d12901349fcb6e3726b87-json.log

     ==> 91dd38ef8c6a1cefa8e07f26b7966360663eeef56e5a00f13f5e9b8e886a650b-json.log <==
     {"log":"rW\n","stream":"stdout","time":"2018-12-24T03:45:13.11357649Z"}
     {"log":"2018-12-23 22:45:13,113 DEBG fd 16 closed, stopped monitoring \u003cPOutputDispatcher at 22958395464448 for \u003cSubprocess at 22958395463512 with name watchdog-script in state STOPPING\u003e (stderr)\u003e\n","stream":"stdout","time":"2018-12-24T03:45:13.113860971Z"}
     {"log":"2018-12-23 22:45:13,113 DEBG fd 11 closed, stopped monitoring \u003cPOutputDispatcher at 22958395464160 for \u003cSubprocess at 22958395463512 with name watchdog-script in state STOPPING\u003e (stdout)\u003e\n","stream":"stdout","time":"2018-12-24T03:45:13.114088118Z"}
     {"log":"2018-12-23 22:45:13,114 INFO stopped: watchdog-script (terminated by SIGTERM)\n","stream":"stdout","time":"2018-12-24T03:45:13.114323505Z"}
     {"log":"2018-12-23 22:45:13,114 DEBG received SIGCLD indicating a child quit\n","stream":"stdout","time":"2018-12-24T03:45:13.114558589Z"}
     {"log":"2018-12-23 22:45:13,114 DEBG killing start-script (pid 138) with signal SIGTERM\n","stream":"stdout","time":"2018-12-24T03:45:13.114910293Z"}
     {"log":"2018-12-23 22:45:13,115 DEBG fd 8 closed, stopped monitoring \u003cPOutputDispatcher at 22958395463584 for \u003cSubprocess at 22958395463368 with name start-script in state STOPPING\u003e (stdout)\u003e\n","stream":"stdout","time":"2018-12-24T03:45:13.115406481Z"}
     {"log":"2018-12-23 22:45:13,115 DEBG fd 10 closed, stopped monitoring \u003cPOutputDispatcher at 22958395463872 for \u003cSubprocess at 22958395463368 with name start-script in state STOPPING\u003e (stderr)\u003e\n","stream":"stdout","time":"2018-12-24T03:45:13.115557003Z"}
     {"log":"2018-12-23 22:45:13,115 INFO stopped: start-script (terminated by SIGTERM)\n","stream":"stdout","time":"2018-12-24T03:45:13.115660248Z"}
     {"log":"2018-12-23 22:45:13,115 DEBG received SIGCLD indicating a child quit\n","stream":"stdout","time":"2018-12-24T03:45:13.115793026Z"}

     ==> deluge-web.log <==
     [INFO ] 12:33:59 configmanager:70 Setting config directory to: /config
     [INFO ] 12:33:59 ui:124 Deluge ui 1.3.15
     [INFO ] 12:33:59 ui:127 Starting web ui..
     [INFO ] 12:33:59 server:666 Starting server in PID 835.
     [INFO ] 12:33:59 server:679 Serving on 0.0.0.0:8112 view at http://0.0.0.0:8112
     [INFO ] 12:33:59 client:217 Connecting to daemon at localhost:58846..
     [INFO ] 12:33:59 client:121 Connected to daemon at 127.0.0.1:58846..

     ==> deluged.log <==
     [INFO ] 12:46:37 torrentmanager:800 Successfully loaded fastresume file: /config/state/torrents.fastresume
     [INFO ] 12:46:37 torrentmanager:846 Saving the fastresume at: /config/state/torrents.fastresume
     [INFO ] 12:47:08 torrentmanager:800 Successfully loaded fastresume file: /config/state/torrents.fastresume
     [INFO ] 12:47:08 torrentmanager:846 Saving the fastresume at: /config/state/torrents.fastresume
     [INFO ] 12:47:17 torrentmanager:756 Saving the state at: /config/state/torrents.state
     [INFO ] 12:48:10 torrentmanager:800 Successfully loaded fastresume file: /config/state/torrents.fastresume
     [INFO ] 12:48:10 torrentmanager:846 Saving the fastresume at: /config/state/torrents.fastresume
     [INFO ] 12:49:47 torrentmanager:800 Successfully loaded fastresume file: /config/state/torrents.fastresume
     [INFO ] 12:49:47 torrentmanager:846 Saving the fastresume at: /config/state/torrents.fastresume
     [INFO ] 12:50:37 torrentmanager:756 Saving the state at: /config/state/torrents.state

     ==> supervisord.log <==
     2019-01-03 12:51:17,334 DEBG 'start-script' stdout output:
     Rw
     2019-01-03 12:51:17,334 DEBG 'start-script' stdout output:
     RwRwRwRwRwRwrWrWrWrWrWrW
     2019-01-03 12:51:17,335 DEBG 'start-script' stdout output:
     rWrWrWrWRwrWrWrWr
     2019-01-03 12:51:17,335 DEBG 'start-script' stdout output:
     W
     2019-01-03 12:51:17,338 DEBG 'start-script' stdout output:
     RwRw

     {"log":"2019-01-03 12:51:17,334 DEBG 'start-script' stdout output:\n","stream":"stdout","time":"2019-01-03T17:51:17.334700502Z"}
     {"log":"Rw\n","stream":"stdout","time":"2019-01-03T17:51:17.334716594Z"}
     {"log":"2019-01-03 12:51:17,334 DEBG 'start-script' stdout output:\n","stream":"stdout","time":"2019-01-03T17:51:17.334981836Z"}
     {"log":"RwRwRwRwRwRwrWrWrWrWrWrW\n","stream":"stdout","time":"2019-01-03T17:51:17.334992136Z"}
     {"log":"2019-01-03 12:51:17,335 DEBG 'start-script' stdout output:\n","stream":"stdout","time":"2019-01-03T17:51:17.335304911Z"}
     {"log":"rWrWrWrWRwrWrWrWr\n","stream":"stdout","time":"2019-01-03T17:51:17.335319143Z"}
     {"log":"2019-01-03 12:51:17,335 DEBG 'start-script' stdout output:\n","stream":"stdout","time":"2019-01-03T17:51:17.335495444Z"}
     {"log":"W\n","stream":"stdout","time":"2019-01-03T17:51:17.335506838Z"}
     {"log":"2019-01-03 12:51:17,338 DEBG 'start-script' stdout output:\n","stream":"stdout","time":"2019-01-03T17:51:17.338673675Z"}
     {"log":"RwRw\n","stream":"stdout","time":"2019-01-03T17:51:17.338696095Z"}

     root@tank:/var/lib/docker/containers/c52761545533f86c6044179e673d51a1184385e37d7d12901349fcb6e3726b87# ls -lh
     total 836M
     -rw-r----- 1 root root 836M Jan 3 18:34 c52761545533f86c6044179e673d51a1184385e37d7d12901349fcb6e3726b87-json.log   < ----
     drwx------ 1 root root    0 Jan 3 12:10 checkpoints/
     -rw------- 1 root root 4.0K Jan 3 12:33 config.v2.json
     -rw-r--r-- 1 root root 1.5K Jan 3 12:33 hostconfig.json
     -rw-r--r-- 1 root root   13 Jan 3 12:33 hostname
     -rw-r--r-- 1 root root  223 Jan 3 12:33 hosts
     drwx------ 1 root root    6 Jan 3 12:10 mounts/
     -rw-r--r-- 1 root root  176 Jan 3 12:33 resolv.conf
     -rw-r--r-- 1 root root   71 Jan 3 12:33 resolv.conf.hash

     root@tank:/var/lib/docker/containers/c52761545533f86c6044179e673d51a1184385e37d7d12901349fcb6e3726b87# cd /mnt/user/appdata/binhex-delugevpn-movies/
     root@tank:/mnt/user/appdata/binhex-delugevpn-movies# ls -lh *.log*
     -rwxrwxrwx 1 root   root  416K Dec 23 23:09 91dd38ef8c6a1cefa8e07f26b7966360663eeef56e5a00f13f5e9b8e886a650b-json.log*
     -rw-rw-rw- 1 nobody users  450 Jan 3 12:33 deluge-web.log
     -rw-rw-rw- 1 nobody users 157K Jan 3 18:34 deluged.log
     -rw-r--r-- 1 root   root  3.7M Jan 3 18:35 supervisord.log
     -rw-r--r-- 1 root   root   11M Jan 3 18:30 supervisord.log.1
     -rw-r--r-- 1 root   root   11M Jan 3 18:17 supervisord.log.2
     -rw-r--r-- 1 root   root   11M Jan 3 18:04 supervisord.log.3
     -rw-r--r-- 1 root   root   11M Jan 3 17:52 supervisord.log.4
  2. root@tank:/var/lib/docker/containers/c5cb99ad5b0690aa3a26d92382da049710ad08f8641c76f0432891a00cff80bc# ls -lh
     total 8.8G
     -rw-r----- 1 root root 8.8G Jan 3 10:00 c5cb99ad5b0690aa3a26d92382da049710ad08f8641c76f0432891a00cff80bc-json.log   < ----
     drwx------ 1 root root    0 Dec 31 14:20 checkpoints/
     -rw------- 1 root root 4.0K Jan 2 14:22 config.v2.json
     -rw-r--r-- 1 root root 1.5K Jan 2 14:22 hostconfig.json
     -rw-r--r-- 1 root root   13 Jan 2 14:22 hostname
     -rw-r--r-- 1 root root  224 Jan 2 14:22 hosts
     drwx------ 1 root root    6 Dec 31 14:20 mounts/
     -rw-r--r-- 1 root root  176 Jan 2 14:22 resolv.conf
     -rw-r--r-- 1 root root   71 Jan 2 14:22 resolv.conf.hash

     root@tank:/var/lib/docker/containers/c5cb99ad5b0690aa3a26d92382da049710ad08f8641c76f0432891a00cff80bc# docker ps
     CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
     c5cb99ad5b06 binhex/arch-delugevpn "/usr/bin/tini -- /b…" 2 days ago Up 20 hours 0.0.0.0:8111->8112/tcp, 0.0.0.0:8117->8118/tcp, 0.0.0.0:58845->58846/tcp, 0.0.0.0:58945->58946/tcp, 0.0.0.0:58945->58946/udp binhex-delugevpn-movies < ----
     b02e3b1f816f limetech/plex "/sbin/my_init" 10 days ago Up 2 days PlexMediaServer
     6e1a05fe9fe2 binhex/arch-delugevpn "/usr/bin/tini -- /b…" 10 days ago Up 2 days 0.0.0.0:8113->8112/tcp, 0.0.0.0:8119->8118/tcp, 0.0.0.0:58847->58846/tcp, 0.0.0.0:58947->58946/tcp, 0.0.0.0:58947->58946/udp binhex-delugevpn-tv

     (A stop-gap for emptying a log this size is sketched after this list.)
  3. Hi Binhex, I have two instances of this container going, one for movies and another for TV shows.

     PROBLEM DESCRIPTION: The Movies instance writes data to the logs inside the container until docker.img is filled. I have a few hundred torrents going, since I perma-seed. I've attached the .json log; most of the messages aren't useful. Deluge itself seems to be working fine and stuff is being downloaded, it just spams the logs until docker.img fills and the container crashes. (A sketch for measuring the log's growth rate follows after this list.)
  4. This is a known issue with Docker on BTRFS, which is a supported filesystem. I hit this after adding and deleting a couple dozen containers. I can confirm it takes about 10 minutes to resolve: disable Docker, delete the docker.img file, re-enable Docker, and let the apps reinstall. It's not unique to unRAID; it happens because the command Docker runs to clean up a container's subvolume fails. Technically this isn't a Docker problem, since BTRFS should do its job and delete the subvolumes, but BTRFS is a cutting-edge filesystem, and it's the filesystem Docker requires here. Anyway, on another OS you'd have to follow a bunch of steps (see above for examples, and the cleanup sketch after this list), but here in the world of unRAID, with container and install orchestration, it's a few clicks. Hopefully BTRFS fixes this.
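
For the log-rotation variables mentioned in post 1: on a stock Docker host, rotation for the json-file driver is set per container at creation time. A minimal sketch, not a drop-in for the containers above; the --log-opt flags are standard Docker options, while the size and count values, and the omitted port/volume/env flags, are placeholders to adapt:

     # Recreate the container with its json-file log capped at
     # 3 rotated files of 10 MB each (the values are illustrative).
     docker run -d --name binhex-delugevpn-movies \
       --log-driver json-file \
       --log-opt max-size=10m \
       --log-opt max-file=3 \
       binhex/arch-delugevpn   # plus the existing -p/-v/-e flags

The same cap can be made the default for newly created containers by setting "log-opts": {"max-size": "10m", "max-file": "3"} under the json-file driver in /etc/docker/daemon.json; on unRAID, where the daemon is managed for you, check the Docker settings for the equivalent knob.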
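
To see how fast a given container's log is actually growing (the complaint in posts 1 and 3), the log path can be pulled from Docker and sampled. A minimal sketch, assuming GNU stat and the container name used above:

     # Estimate the json log's growth rate over one minute.
     f=$(docker inspect --format '{{.LogPath}}' binhex-delugevpn-movies)
     s1=$(stat -c%s "$f")
     sleep 60
     s2=$(stat -c%s "$f")
     echo "log grew $(( (s2 - s1) / 1024 )) KiB in 60s"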
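
As a stop-gap for an already huge file like the 8.8G log in post 2, the json log can be emptied in place without stopping the container; unlike rm, truncate keeps the inode Docker holds open. A sketch, reusing the container ID from post 2:

     # Zero the log file in place; Docker keeps appending afterwards.
     truncate -s 0 /var/lib/docker/containers/c5cb99ad5b0690aa3a26d92382da049710ad08f8641c76f0432891a00cff80bc/c5cb99ad5b0690aa3a26d92382da049710ad08f8641c76f0432891a00cff80bc-json.log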
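
On the BTRFS side of post 4, the "bunch of steps" on another OS usually comes down to deleting the leftover subvolumes by hand, since a plain rm -rf on them fails, which is most likely the failing command alluded to in the post. A sketch, assuming the default btrfs storage-driver layout under /var/lib/docker; note it destroys every container filesystem, so it only makes sense when rebuilding from scratch anyway:

     # Stop the Docker daemon first, then list and delete what the
     # btrfs driver left behind.
     btrfs subvolume list /var/lib/docker
     for sv in /var/lib/docker/btrfs/subvolumes/*; do
       btrfs subvolume delete "$sv"
     done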