
Kees Fluitman (Members · 44 posts)

Everything posted by Kees Fluitman

  1. I'm guessing that, since this plugin hasn't been updated in the last few months, it is most likely starting to break?
  2. I completely cleaned my SATA cache pool and NVMe cache drive and reformatted the pool. After deleting the docker image and rebuilding, I've been error-free for the last 10 hours. Let's see how long it holds; I'll keep this topic updated.
  3. Am I correct that it does not back up the appdata of containers that are not currently created? For instance, I have docker-compose files for stacks that I don't always use, but I do want their appdata to be part of the most recent backup. I guess it would be much smarter to have a script run at night that stops docker, copies the complete appdata folder to my backup location, and restarts docker?
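A minimal sketch of that nightly idea, assuming Unraid's stock rc.docker service script; the source and destination paths are made up, so adjust them to your own shares:

```shell
#!/bin/bash
# Nightly appdata backup sketch (hypothetical paths).
APPDATA_SRC="/mnt/cache/appdata"
BACKUP_DST="/mnt/user/backups/appdata-$(date +%F)"

# Stop the Docker service so appdata files are quiescent during the copy.
/etc/rc.d/rc.docker stop

# Copy everything, preserving owners, modes, and symlinks.
mkdir -p "$BACKUP_DST"
cp -a "$APPDATA_SRC/." "$BACKUP_DST"

# Bring Docker back up.
/etc/rc.d/rc.docker start
```

Scheduled via the User Scripts plugin (or cron), this also covers appdata of stacks whose containers don't currently exist, since it copies the whole folder rather than asking Docker what is running.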
  4. I've fixed the networks, I think, but I do run a macvlan; I need it for my AdGuard Home instance. I followed the troubleshooting guide to turn off bridging. I changed motherboard and CPU, since I got several crashes and couldn't even boot with more than 16GB of memory. It runs smoothly now, but I still got a crash just before (that was probably due to the odd macvlan settings I'd changed to test; now it should be fine). I also need to run a parity check, since all the crashes have mixed up the parity. But before that, I saw checksum and scrub errors on my cache pool:
     May 4 15:57:38 server kernel: BTRFS warning (device loop2): checksum error at logical 9563754496 on dev /dev/loop2, physical 10645884928, root 414, inode 996, offset 12288, length 4096, links 1 (path: usr/lib/apt/methods/gpgv)
     May 4 15:57:38 server kernel: BTRFS warning (device loop2): checksum error at logical 9563754496 on dev /dev/loop2, physical 10645884928, root 413, inode 996, offset 12288, length 4096, links 1 (path: usr/lib/apt/methods/gpgv)
     May 4 15:57:38 server kernel: BTRFS warning (device loop2): checksum error at logical 9563754496 on dev /dev/loop2, physical 10645884928, root 412, inode 996, offset 12288, length 4096, links 1 (path: usr/lib/apt/methods/gpgv)
     These errors are probably due to the crash, but I wonder what these files belong to. Maybe docker or the VMs? Then I'd recreate the docker image. I just want to avoid further crashes so I can run the parity check. I think it's from docker: I started the array, started the VMs, nothing... then I started docker and got the errors again:
     May 4 16:44:49 server kernel: BTRFS error (device loop3): tree first key mismatch detected, bytenr=778682368 parent_transid=253 key expected=(5040,84,18446612688220248328) has=(5040,84,3585323317)
     But dmesg did not show corresponding error messages this time, so that's a good thing, I guess?
     Now dmesg does show occasional similar errors:
     [Sat May 4 17:28:11 2024] BTRFS error (device loop3): tree first key mismatch detected, bytenr=778682368 parent_transid=253 key expected=(5040,84,18446612688220248328) has=(5040,84,3585323317)
     Would you suggest rebuilding my docker image?
  5. I probably have a motherboard problem. Crashes kept returning, up to the point where booting failed. CPU stress tests were fine, memtest was fine, but I could only boot again and run those tests after dropping down to just 2x8GB memory modules. So I'll replace the mainboard and see. The error below is the one that returns most frequently at the moment; I don't know if it's an issue. I also had some hard drive metadata errors, but those were solved after moving my files properly across the drives (cache pools and array).
     May 1 19:54:25 server kernel: WARNING: CPU: 3 PID: 0 at net/netfilter/nf_nat_core.c:594 nf_nat_setup_info+0x8c/0x7d1 [nf_nat]
     server-diagnostics-20240501-1956.zip
  6. It was set to host. It doesn't work either way, bridge or host... Also, nothing is listening on port 8200 on my unraid server when the container is running...
     [2024/04/08 21:59:54] scanner.c:820: warn: Scanning /media finished (6871 files)!
     [2024/04/08 21:59:54] playlist.c:135: warn: Parsing playlists...
     [2024/04/08 21:59:54] playlist.c:269: warn: Finished parsing playlists.
     [2024/04/08 22:01:37] minidlna.c:1134: warn: Starting MiniDLNA version 1.3.3.
     [2024/04/08 22:01:37] minidlna.c:394: warn: Creating new database at /config/files.db
     [2024/04/08 22:01:37] minissdp.c:132: error: bind(udp): Address already in use
     [2024/04/08 22:01:37] getifaddr.c:110: error: Network interface eth0 not found
     [2024/04/08 22:01:37] minissdp.c:848: error: connect("/var/run/minissdpd.sock"): No such file or directory
     [2024/04/08 22:01:37] minidlna.c:1170: fatal: Failed to connect to MiniSSDPd. EXITING
     [2024/04/08 22:01:37] scanner.c:731: warn: Scanning /media
     [2024/04/08 22:03:10] minidlna.c:1134: warn: Starting MiniDLNA version 1.3.3.
     [2024/04/08 22:03:10] minidlna.c:394: warn: Creating new database at /config/files.db
     [2024/04/08 22:03:10] minissdp.c:132: error: bind(udp): Address already in use
     [2024/04/08 22:03:10] minissdp.c:84: error: setsockopt(udp, IP_ADD_MEMBERSHIP): Bad file descriptor
     [2024/04/08 22:03:10] minissdp.c:198: warn: Failed to add multicast membership for address 192.168.1.165
     [2024/04/08 22:03:10] minissdp.c:84: error: setsockopt(udp, IP_ADD_MEMBERSHIP): Bad file descriptor
     [2024/04/08 22:03:10] minissdp.c:198: warn: Failed to add multicast membership for address 10.253.0.1
     [2024/04/08 22:03:10] minissdp.c:84: error: setsockopt(udp, IP_ADD_MEMBERSHIP): Bad file descriptor
     [2024/04/08 22:03:10] minissdp.c:198: warn: Failed to add multicast membership for address 192.168.1.223
     [2024/04/08 22:03:10] minissdp.c:84: error: setsockopt(udp, IP_ADD_MEMBERSHIP): Bad file descriptor
     [2024/04/08 22:03:10] minissdp.c:198: warn: Failed to add multicast membership for address 192.168.122.1
     [2024/04/08 22:03:10] minissdp.c:84: error: setsockopt(udp, IP_ADD_MEMBERSHIP): Bad file descriptor
     [2024/04/08 22:03:10] minissdp.c:198: warn: Failed to add multicast membership for address 100.91.251.7
     [2024/04/08 22:03:10] minissdp.c:84: error: setsockopt(udp, IP_ADD_MEMBERSHIP): Bad file descriptor
     [2024/04/08 22:03:10] minissdp.c:198: warn: Failed to add multicast membership for address 172.23.0.1
     [2024/04/08 22:03:10] minissdp.c:84: error: setsockopt(udp, IP_ADD_MEMBERSHIP): Bad file descriptor
     [2024/04/08 22:03:10] minissdp.c:198: warn: Failed to add multicast membership for address 172.22.0.1
     [2024/04/08 22:03:10] minissdp.c:84: error: setsockopt(udp, IP_ADD_MEMBERSHIP): Bad file descriptor
     [2024/04/08 22:03:10] minissdp.c:198: warn: Failed to add multicast membership for address 172.18.0.1
     [2024/04/08 22:03:10] scanner.c:731: warn: Scanning /media
     [2024/04/08 22:03:11] minissdp.c:816: error: sendto(udp_shutdown=7): Required key not available
     [2024/04/08 22:03:11] minissdp.c:816: error: sendto(udp_shutdown=7): Required key not available
     [2024/04/08 22:03:11] minissdp.c:324: error: sendto(udp_notify=7, 10.253.0.1): Required key not available
     (the udp_notify line above repeats 12 times)
     [2024/04/08 22:03:12] minissdp.c:848: error: connect("/var/run/minissdpd.sock"): No such file or directory
     [2024/04/08 22:03:12] minidlna.c:1170: fatal: Failed to connect to MiniSSDPd.
     When I set the network interface to br0, I don't get that error anymore, but I still get this:
     [2024/04/08 22:09:18] minissdp.c:132: error: bind(udp): Address already in use
     Ah, I see. I think it's because jellyfin is already using port 1900. I'll check.
  7. Why can't I get it to work? I simply ran the container; I can't get to 8200, nor can VLC or my Denon HEOS app find it. I tried adding port 8200 manually, but to no avail. Logs:
     2024-04-08 21:39:05.019086 [info] Host is running unRAID
     2024-04-08 21:39:05.045991 [info] System information Linux server 6.1.79-Unraid #1 SMP PREEMPT_DYNAMIC Fri Mar 29 13:34:03 PDT 2024 x86_64 GNU/Linux
     2024-04-08 21:39:05.081450 [info] PUID defined as '99'
     2024-04-08 21:39:05.139614 [info] PGID defined as '100'
     2024-04-08 21:39:05.217453 [info] UMASK defined as '000'
     2024-04-08 21:39:05.249474 [info] Permissions already set for '/config'
     2024-04-08 21:39:05.285858 [info] Deleting files in /tmp (non recursive)...
     2024-04-08 21:39:05.320576 [info] SCAN_ON_BOOT defined as 'yes'
     2024-04-08 21:39:05.355345 [info] SCHEDULE_SCAN_DAYS defined as '06'
     2024-04-08 21:39:05.384852 [info] SCHEDULE_SCAN_HOURS defined as '02'
     2024-04-08 21:39:05.422548 [info] Starting Supervisor...
     2024-04-08 21:39:05,735 INFO Included extra file "/etc/supervisor/conf.d/minidlna.conf" during parsing
     2024-04-08 21:39:05,735 INFO Set uid to user 0 succeeded
     2024-04-08 21:39:05,738 INFO supervisord started with pid 7
     2024-04-08 21:39:06,742 INFO spawned: 'crond' with pid 62
     2024-04-08 21:39:06,744 INFO spawned: 'start' with pid 63
     2024-04-08 21:39:06,745 INFO reaped unknown pid 8 (exit status 0)
     2024-04-08 21:39:06,902 DEBG fd 11 closed, stopped monitoring <POutputDispatcher at 22398605550800 for <Subprocess at 22398605392464 with name start in state STARTING> (stdout)>
     2024-04-08 21:39:06,903 DEBG fd 15 closed, stopped monitoring <POutputDispatcher at 22398603333648 for <Subprocess at 22398605392464 with name start in state STARTING> (stderr)>
     2024-04-08 21:39:06,903 INFO success: start entered RUNNING state, process has stayed up for > than 0 seconds (startsecs)
     2024-04-08 21:39:06,903 INFO exited: start (exit status 0; expected)
     2024-04-08 21:39:06,903 DEBG received SIGCHLD indicating a child quit
     2024-04-08 21:39:07,904 INFO success: crond entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
  8. Crashes suddenly started again. The only cause I can think of is a new docker stack I've started running (possibly too high a load on memory or CPU?), or the update... Maybe the hardware (motherboard/CPU) is slowly showing its age. Specs: Asus X99-A, Intel Core i7-5820K CPU @ 3.30GHz, 48 GiB DDR4 memory. Syslog shows nothing; I've added it. I'll shut down my logging stack (Prometheus, Grafana, etc.) and see if it doesn't happen under less load. syslog-till-crash.txt
  9. I agree, for most stacks. But sometimes I have single-app stacks in the compose manager that I would like to auto-update (they are less critical). But I'll manage by just doing it manually weekly, I guess.
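Until the plugin offers this natively, a weekly User Scripts job could sketch the same thing with the standard Compose CLI. The stack directory below is an assumption; point it at wherever your compose files actually live:

```shell
#!/bin/bash
# Hypothetical folder holding one subdirectory per stack.
STACKS_DIR="/boot/config/plugins/compose.manager/projects"

for stack in "$STACKS_DIR"/*/; do
  # Skip folders that don't contain a compose file.
  [ -f "$stack/docker-compose.yml" ] || continue
  # Pull newer images, then recreate only the containers that changed.
  docker compose -f "$stack/docker-compose.yml" pull
  docker compose -f "$stack/docker-compose.yml" up -d
done
```

Since `up -d` only recreates containers whose image or config changed, running this weekly is close to what an auto-update toggle would do.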
  10. Is it possible to auto-update stacks in compose.manager?
  11. I've got 52 running containers with 107GB in use. It isn't growing, except while running updates. Most containers that use a lot of space are the ones that already have large images. How would you honestly stay below 20GB? Isn't the complete installation of the image, executables included, also inside the docker image?
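To see where that space actually goes, the stock Docker CLI has a few useful commands (nothing Unraid-specific here):

```shell
# Disk usage broken down by images, containers, local volumes, build cache
docker system df

# List images, largest first
docker image ls --format '{{.Size}}\t{{.Repository}}:{{.Tag}}' | sort -hr | head

# Reclaim space from unused images and dangling build cache
docker image prune -a
docker builder prune
```

Image layers do live inside Unraid's docker.img loopback file, so with 52 containers built on large base images a 20GB image file is genuinely unrealistic; sizing it to actual usage plus headroom is the practical answer.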
  12. After the update I had no more issues whatsoever; it's been online for 7 days now, so I'll turn off "mirror syslog to flash drive" again. I only saw a few of these errors in my logs:
  13. Wow, thanks man. I was banging my head against the wall over this. I've got a script set up on array start now:
      #!/bin/bash
      DAEMON_CONFIG="/etc/docker/daemon.json"
      NEW_CONFIG='{ "log-driver": "loki", "log-opts": { "loki-url": "http://localhost:3100/loki/api/v1/push", "loki-batch-size": "400" } }'
      update_config() {
        jq --argjson newConfig "$NEW_CONFIG" '. += $newConfig' "$DAEMON_CONFIG" > tmp.$$ && mv tmp.$$ "$DAEMON_CONFIG"
      }
      if [ -f "$DAEMON_CONFIG" ]; then
        update_config
      else
        echo "$NEW_CONFIG" > "$DAEMON_CONFIG"
      fi
      /etc/rc.d/rc.docker stop
      while pgrep -x docker > /dev/null; do sleep 1; done
      /etc/rc.d/rc.docker start
      Let's see how it works over time. At least docker is starting again :). The next issue is that all containers need to be recreated before they actually run their logs through Loki. Maybe if I make sure Loki gets created first, it will work properly.
  14. I could stress my server more to see if the crash occurs, but as of the update yesterday, it has been very stable, with enough free memory, no slowdowns of GUI or apps, etc. I will turn on my monitoring stack and observe how more stress is handled by my server. It's been running clean for 24 hours now though.
  15. I will do that right now. There are other signs of struggle before the crash comes as well: a general slowdown of applications, etc. But not always, either. I will keep this topic up to date. After the update just now, it seems good. I'm sending syslogs to an external server and mirroring the files to flash. One thing I noticed this morning was that htop showed a lot more activity from CrowdSec than usual. I also had more frequent crashes when I had my diagnostics stack (Grafana, Loki, etc.) running, but I had already turned that off until I figured out a more efficient way to handle diagnosis. I find they use quite a lot of resources for just diagnosis/observation. They pushed my RAM to the limit (no more free space, everything used by cache)... but as far as I know, Linux just uses cache freely, so having 2GB less cache than usual isn't all that important when you have 47GB in total. So if I understand correctly, mirroring will put the syslog not only in RAM but also on the flash drive, so you can actually debug it better?
  16. It crashed again shortly after. I got this from my monitor, but otherwise the server is unresponsive to any input from my keyboard. I must say the crashes started after the 6.12.6 upgrade, so now I'll try upgrading to 6.12.8 and see. I added the image. The logs again show nothing right after I closed the SSH session this morning... really weird, to be honest.
  17. This is all I've got for now. It seems it crashed in the middle of the night (after I logged out late). I don't know if the error is related to the crash, and I don't know why there are so few entries either. I'll connect a monitor to see if I have something on screen the next time it crashes, to compare with my syslog. The device in the last crash is But that error comes back frequently. syslog-127.0.0.1.log
  18. Hi guys, I've been getting some regular crashes lately, mostly when my server is a bit loaded with applications. The specific error is a kernel panic. The thing is, since logs are going to the USB drive, I have no way to see what happened or what caused it. Any tips on how to diagnose it better? I've attempted to go through the logs, but to no avail. If you have any tips on whether I should log to some external source, or whether the diagnostics would help here, I'll post them. I added a diagnostics file from a couple of weeks ago, when it also happened; maybe it will help give you a glance at my system. - memtest passed. - I've set syslog to a separate share, so let's see once a crash happens. server-diagnostics-20240115-1739.zip
  19. Yes, perfect. I did it that way as well.
      root@server:/boot/config# cat /boot/config/go
      #!/bin/bash
      # Start the Management Utility
      /usr/local/sbin/emhttp &
      cat /boot/config/kees_aliases >> /etc/profile
  20. Hey. I've started noticing it didn't work anywhere; any modal was throwing errors. The web dev console showed 507 responses, I believe: the server didn't have sufficient storage. I don't know why WebDAV, though, or if they mean memory... anyway. So I figured let's dive deeper, and multiple things started to act weird. I then looked at my memory allocations and usage, and it seemed there was no more free memory. At least, no real "free" memory; there was memory allocated for cache/buffer, but even that was starting to shrink. At its peak, 32GB of memory was used and the rest was cache/buffer (47GB total). So I shut down the system and restarted, and now it's back to normal (25GB used, 11GB cache, and 8-10GB free). I do already see a slow increase in my Grafana chart, but I guess that could also be Grafana itself at the moment? This eating of my memory started when I began setting up all my monitoring (Grafana, Prometheus, Telegraf, Loki, etc.) to play around, so I'll keep a close eye on that.
  21. Hey guys. I suddenly have an issue where, when I click anything (compose up, or update stack), it opens the modal window and instantly disconnects, so nothing happens. Any ideas? I can't update or run most of my stacks now. I'll do it manually for now, but it's weird.
  22. So, to be more exact:
      - I create a script somewhere on my flash drive (for instance /boot/config/aliases.sh)
      - I set this script to launch in the /boot/config/go file
      - all done?
      Or your way? You copy the profile?
  23. Did you find out if it's possible? I'm a bit tired of unraid having these limitations :P.
  24. Hey, old topic, but I was just seeing the same thing. My AdGuard Home is catching tons of requests that are repeated over time. They are all local requests, so they get an NXDOMAIN response anyway! Apart from the fact that I'm going to point Unraid at a standard DNS address, I was still wondering: why does it make those requests at all? They are all DNS addresses of known open source projects, like pihole, traktarr, etc., most of which I don't even have installed! So why make them? It literally looks like about 50 requests at once, to all known open source projects from A to Z.
  25. Add "--user telegraf:$(stat -c '%g' /var/run/docker.sock)" to the post arguments.
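Spelled out as a full docker run invocation (the read-only socket mount and the plain `telegraf` image tag are assumptions; only the `--user` trick comes from the post above):

```shell
# Run telegraf as the telegraf user but with the group that owns
# docker.sock, resolved at launch time, so the container can read the
# socket without running as root.
docker run -d --name telegraf \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  --user "telegraf:$(stat -c '%g' /var/run/docker.sock)" \
  telegraf
```

The `stat -c '%g'` substitution avoids hard-coding a GID that could differ between hosts or change after an update.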