Mr_Jay84

Members
  • Posts: 141
  • Joined
  • Last visited

Mr_Jay84's Achievements

Apprentice (3/14)

13 Reputation

  1. The resolver part is for the Docker network. I did make a mistake though, as $upstream_app should have had the container ID; fixed now (see the proxy sketch after this list). Amending server_name and public_baseurl resulted in a non-functional server. I changed the bind address at line 290 to the Docker IP, but there's no change in the described behaviour. homeserver.log and .db continue to fill up. Very strange.
  2. Having some issues with federation here guys and looking for some advice. Issue 1: I can browse public rooms in Element, however joining them takes a good five minutes, at which point I usually get a "failed to join room" notification, and then it strangely joins the room anyway. Leaving also takes five minutes but it does eventually leave. Sending a message takes about the same time. There's obviously a federation issue here, as homeserver.log is full of federation errors. I have attached the various logs. Issue 2: homeserver.log and homeserver.db fill up dramatically, around 30M an hour; is there any way of limiting this? (See the log-rotation sketch after this list.) Attached: homeserver.log, homeserver.yaml, matrix.subdomain.conf
  3. I've removed the Parity Check Tuning plugin as of now. I've got a rather large collection of containers; it's mainly the database and PVR ones that are large. It's always been around 70G and is fairly constant. I had the issue you described last year, and that usually just stops the docker service from running. In this case the UI completely crashes and I can't even reset by SSH; I need to use IPMI to reset the machine. I'll start going through the containers just to check the paths anyway (see the usage commands after this list). Here's the command output:

     root@Ultron:~# df -h
     Filesystem                 Size  Used  Avail Use% Mounted on
     rootfs                      63G  1.7G    62G   3% /
     tmpfs                       32M  5.9M    27M  19% /run
     /dev/sda1                  7.2G  948M   6.3G  13% /boot
     overlay                     63G  1.7G    62G   3% /lib/modules
     overlay                     63G  1.7G    62G   3% /lib/firmware
     devtmpfs                    63G     0    63G   0% /dev
     tmpfs                       63G  264K    63G   1% /dev/shm
     cgroup_root                8.0M     0   8.0M   0% /sys/fs/cgroup
     tmpfs                      128M  3.4M   125M   3% /var/log
     tmpfs                      1.0M     0   1.0M   0% /mnt/disks
     tmpfs                      1.0M     0   1.0M   0% /mnt/remotes
     /dev/md1                   5.5T  2.0T   3.5T  37% /mnt/disk1
     /dev/md2                   5.5T   40G   5.5T   1% /mnt/disk2
     /dev/sdb1                  224G  187G    38G  84% /mnt/cache-docker
     /dev/sdf1                   11T  5.2T   5.8T  48% /mnt/cache-downloads
     /dev/sdi1                  932G  264G   669G  29% /mnt/cache-files
     /dev/sdd1                  932G  6.6G   925G   1% /mnt/cache-media
     /dev/sde1                  932G  6.6G   925G   1% /mnt/cache-tv
     shfs                        11T  2.1T   8.9T  19% /mnt/user0
     shfs                        11T  2.1T   8.9T  19% /mnt/user
     /dev/loop2                 100G   69G    30G  70% /var/lib/docker
     /dev/loop3                 1.0G  6.1M   903M   1% /etc/libvirt
     //200.200.1.244/GusSync     43T   37T   6.3T  86% /mnt/remotes/200.200.1.244_GusSync
     //200.200.1.244/Mel Drive   43T   37T   6.3T  86% /mnt/remotes/200.200.1.244_Mel Drive
     //200.200.1.244/Public      43T   37T   6.3T  86% /mnt/remotes/200.200.1.244_Public
     //200.200.1.243/Public      43T   37T   6.3T  86% /mnt/remotes/200.200.1.243_Public
     root@Ultron:~#
  4. Yeah I'm aware but that's not the issue at hand mate.
  5. It's been on the internal network for years, never got around to switching everything over.
  6. Happened again today, randomly, at roughly 2021-09-21T18:49:01+01:00 according to the Ultron syslog. ultron-diagnostics-20210921-1950.zip
  7. Did you ever find a solution? I have the same issue. Some containers are correct, others aren't, regardless of whether there's a TZ variable or not (see the quick checks after this list).
  8. I still have a working template if you want, mate. Place this in your flash drive directory /boot/config/plugins/dockerMan/templates-user/ (see the copy command after this list). my-rutorrent.xml
  9. I'm a bit lost here mate, as the "Username" and "Token" aren't there. Do these translate to "Client ID" and "Client Secret" under the OAuth2 page?
  10. I had to reinstall the docker as for some reason the forced update didn't work. Now selected and functioning, thanks again mate! Excellent work!
  11. Yeah I thought as much. I'll keep them spun up for now and post back when another crash happens. Thanks for responding so far mate.
  12. Yeah I know about that issue. Some of these disks (a particular model) don't like being spun down, but that's only a recent change; this strange crash was happening before I set them to spin down. I have a post on the HD subject.
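
For the proxy side of post 1, this is roughly the shape the location block in matrix.subdomain.conf needs so that nginx resolves the upstream through Docker's DNS at request time. The container name "synapse" and port 8008 are assumptions; use whatever name the Synapse container actually has on the proxy's Docker network (federation traffic on 8448 needs the same treatment in its own server block).

    # sketch of the relevant part of matrix.subdomain.conf
    location / {
        resolver 127.0.0.11 valid=30s;     # Docker's embedded DNS on user-defined networks
        set $upstream_app synapse;         # container name on the same Docker network as the proxy
        set $upstream_port 8008;           # Synapse listener port
        proxy_pass http://$upstream_app:$upstream_port;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }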
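
On the "30M an hour" question in post 2: the size of homeserver.log is governed by the log config that homeserver.yaml points at via log_config (usually a <servername>.log.config file alongside it). Below is a sketch of a size-capped handler; it is a fragment only, and the filename, sizes and the "precise" formatter name follow the stock generated config, so adjust them to the actual setup. The homeserver.db growth is the database itself and can't be rotated the same way.

    # fragment of the Synapse log config (YAML) - size-based rotation instead of the default timed rotation
    handlers:
      file:
        class: logging.handlers.RotatingFileHandler
        formatter: precise
        filename: /data/homeserver.log
        maxBytes: 10485760      # rotate at roughly 10 MB
        backupCount: 3          # keep three rotated files, discard older ones
        encoding: utf8
    root:
      level: WARNING            # INFO (the default) is far chattier, especially while federation errors are flowing
      handlers: [file]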
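
For post 3, before walking through every container path by hand, two commands give a quick breakdown of what is actually occupying the 100G docker.img (/var/lib/docker is the default unRAID mount point, as shown in the df output above):

    docker system df -v                               # per-image, per-container and per-volume disk usage
    du -h -d1 /var/lib/docker/containers | sort -h    # unusually large directories here usually mean runaway container logs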
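
For the timezone issue in post 7, a couple of quick checks from the unRAID terminal show whether the TZ variable is actually reaching a given container (the container name below is a placeholder):

    docker exec <container-name> date            # what the container thinks the time is
    date                                         # compare with the host
    docker exec <container-name> env | grep TZ   # confirm the variable made it into the container's environment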
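
And for the template in post 8, copying it into place from a terminal looks like this (assuming my-rutorrent.xml has been downloaded to the current working directory); it should then appear in the template dropdown when adding a container on the Docker tab:

    cp my-rutorrent.xml /boot/config/plugins/dockerMan/templates-user/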