kaymer327


  1. Understood... I'll go a different route then - I'll schedule this as a user script that runs 5 minutes before my scheduled backup. Problem solved, thanks! (A sketch of that scheduled script follows after these posts.)
  2. So, referencing my previous post: setting the "Path To Custom Pre-start Script:" didn't work as expected...

     [03.02.2023 05:00:45] Backing Up
     /usr/bin/tar: ./binhex-qbittorrentvpn/qBittorrent/config/ipc-socket: socket ignored
     [03.02.2023 05:02:48] Verifying Backup
     ./binhex-qbittorrentvpn/qBittorrent/config/qBittorrent.conf: Mod time differs
     ./binhex-qbittorrentvpn/qBittorrent/data/logs/qbittorrent.log: Mod time differs
     ./binhex-qbittorrentvpn/qBittorrent/data/logs/qbittorrent.log: Size differs
     [03.02.2023 05:04:52] tar verify failed!
     [03.02.2023 05:04:52] done
     [03.02.2023 05:04:52] Executing custom pre-start script /mnt/user/data/scripts/backup-pre-start

     My script kicked off successfully and killed the target processes as expected... just a bit too late. It looks like "pre-start" means pre-start of the dockers, not pre-start of the backup, as I was expecting/assuming. I couldn't find any documentation confirming which it's supposed to be, but from the code it looks like the script runs right before the dockers are restarted: https://github.com/Commifreak/ca.backup2/blob/master/source/ca.backup2/usr/local/emhttp/plugins/ca.backup2/scripts/backup.php#L323 I haven't touched PHP in a minute and don't have a good way to test, so I'd be hesitant to submit a PR, but is there any way to get a pre-backup-start script option added in?
  3. I was running into the common backup error related to dockers and came to this thread to do some research. I found a lot of people who weren't stopping all of their dockers - that's not my case, as I am in fact stopping all dockers... So I poked around a bit more and did some testing. For the record, here are the errors I ran into:

     [27.01.2023 05:04:31] Verifying Backup
     ./binhex-qbittorrentvpn/qBittorrent/config/qBittorrent.conf: Mod time differs
     ./binhex-qbittorrentvpn/qBittorrent/data/logs/qbittorrent.log: Mod time differs
     ./binhex-qbittorrentvpn/qBittorrent/data/logs/qbittorrent.log: Size differs
     [27.01.2023 05:06:36] tar verify failed!

     My last successful backup (scheduled weekly, Fridays, 5 AM) was Dec 9th. While manually running a backup and confirming via the web UI that all dockers stayed down (they did), it still failed. So I ran it again and ran ps -ef | grep -i qb while the backup was running... and found this guy:

     root 19499 1 0 2022 ? 00:00:02 /usr/bin/ttyd -d0 -t disableLeaveAlert true -t theme {'background':'black'} -t fontSize 15 -t fontFamily monospace -i /var/tmp/binhex-qbittorrentvpn.sock docker exec -it binhex-qbittorrentvpn bash

     Bingpot! I killed that process, re-ran the backup, and I've got successful backups again. The process comes from opening the console to that docker from the UI, and I'm able to reproduce the hanging process: typing "exit" in the console window just re-opens a new shell, and closing the browser window seems to be the only way to "close" it - but the process remains. The following command will kill all running docker exec commands - it may be too greedy depending on your specific needs, but it's good enough for what I do (see the preview sketch after these posts):

     ps -ef | grep -i "docker exec -it" | grep -v grep | awk '{print $2}' | xargs kill

     I will likely be putting this command in a pre-start script. Thought I'd post here in case anyone is having similar problems.
  4. Completely understood - and re-reading my comment, I see it can be taken as me being a jerk (not my intent); I was just making the point that it could very well be a bug. Apologies for being short with my words. But in researching my problem, I found various other users and threads (as noted), and everyone else attempting to help seems to jump straight to "it's a hardware problem" or "it's just you". Multiple users all having hardware problems within days of a new release that magically go away when reverting to the old release? That seems more like it could be a bug and shouldn't be dismissed so quickly. From what I found, users aren't submitting bug reports; they're going to Reddit and/or other areas of the forums, so it might not be on everyone's radar just yet. Speaking of Reddit - @medolino2009 - do you have any dockers with custom IPs? Another user in a Reddit thread noted that they have a docker with a custom IP - I do as well (unifi docker, custom 192.168.1.x static IP to use for my "inform" address). It might be related to the other bug report listed here: https://forums.unraid.net/bug-reports/stable-releases/690691-kernel-panic-due-to-netfilter-nf_nat_setup_info-docker-host-access-macvlan-r1356/
  5. If it doesn't sound like a bug in the current release, why suggest rolling back to the previous release? Multiple users are reporting kernel panics on 6.9.x only - 6.8.x is stable - both here in the forums and on Reddit:

     https://forums.unraid.net/topic/103544-690-random-crashesrestarts-since-upgrading
     https://www.reddit.com/r/unRAID/comments/m5n0za/kernel_panic_help_happens_every_few_days/
     https://www.reddit.com/r/unRAID/comments/m3ncd5/weird_crash_and_lost_all_appdata/
     https://www.reddit.com/r/unRAID/comments/m3n0wv/nextcloud_docker_not_working_after_crash/
  6. I had a kernel panic last night as well. I was stable on 6.8.3 since it was released. My flash drive went bad around the time of/during the 6.9.1 upgrade - I had a replacement waiting since I've had that flash drive for a while. I replaced it with one of spaceinvaderone's recommendations (SAMSUNG BAR Plus), and all was fine for about a week. Syslog doesn't seem to have any details in it from prior to the restart to recover from the panic. Any suggestions for ensuring that logging is retained in such a scenario? A syslog server (see the receiver sketch below)? While I understand that hardware can and does go bad, the catalyst here was the 6.9.1 upgrade. Other users are reporting similar experiences (6.8.x stable, 6.9.x kernel panics) here in the forums as well as on Reddit:

     https://www.reddit.com/r/unRAID/comments/m5n0za/kernel_panic_help_happens_every_few_days/
     https://www.reddit.com/r/unRAID/comments/m3ncd5/weird_crash_and_lost_all_appdata/
     https://www.reddit.com/r/unRAID/comments/m3n0wv/nextcloud_docker_not_working_after_crash/

     With multiple users reporting the same thing, it seems more like a software issue than a hardware one. Thanks for any assistance!
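
A minimal sketch of the scheduled user script described in post 1, assuming the User Scripts plugin with a custom cron schedule; the 55 4 * * 5 schedule (five minutes before the Friday 5 AM backup mentioned in post 3) and the script layout are illustrative, while the process pattern is the one already used in post 3:

    #!/bin/bash
    # Run via the User Scripts plugin with a custom cron of: 55 4 * * 5
    # (five minutes before the scheduled Friday 5 AM appdata backup)
    # Kill any lingering "docker exec -it" sessions left behind by the web console
    # so the backup's tar verify step doesn't trip over them.
    pids=$(ps -ef | grep -i "docker exec -it" | grep -v grep | awk '{print $2}')
    if [ -n "$pids" ]; then
        echo "$pids" | xargs kill
    fi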
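Since the one-liner in post 3 can be greedier than intended, a hedged alternative (same pattern, not from the original posts) is to preview the matches before killing anything; pgrep -af and pkill -f both match against the full command line:

    # List the PID and full command line of every process that would match:
    pgrep -af "docker exec -it"
    # If the matches look right, kill them:
    pkill -f "docker exec -it"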
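On the logging question in post 6: Unraid's Settings > Syslog Server page can mirror the syslog to the flash drive or forward it to a remote syslog server, either of which keeps the log around after a panic and reboot. If another always-on Linux machine is available, a minimal rsyslog receiver sketch for it could look like the following; the file path, port, and source IP are illustrative assumptions:

    # /etc/rsyslog.d/10-unraid.conf on the receiving machine (path illustrative)
    module(load="imudp")                # accept syslog over UDP
    input(type="imudp" port="514")      # match this port in the Unraid remote syslog setting
    # Write everything from the Unraid host (example IP) to its own file:
    if $fromhost-ip == '192.168.1.10' then /var/log/unraid.log
    & stop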