Posts posted by Anym001

  1. Oct 28 01:10:32 Sirius usbhid-ups[19839]: WARNING: send_to_all: write 34 bytes to socket 15 failed (ret=-1), disconnecting: Broken pipe
Oct 28 01:10:42 Sirius usbhid-ups[32495]: upsnotify: failed to notify about state 2: no notification tech defined, will not spam more about it
Oct 28 01:10:43 Sirius upsd[32517]: upsnotify: failed to notify about state 2: no notification tech defined, will not spam more about it
Oct 28 01:10:43 Sirius upsmon[32521]: upsnotify: failed to notify about state 2: no notification tech defined, will not spam more about it


    I have recently been getting the following error messages in the syslog. Does anyone have an idea what this could mean?

  2. On 9/24/2023 at 10:27 PM, mgutt said:

    1.) The more often a device's consumption fluctuates, the more often DB entries are created. Logical in principle, but I would not have thought that the Shelly 3EM alone, which measures the overall power at the electricity meter, accounts for 39% (14+13+12) of all DB entries. How am I supposed to reduce that?!

     

    Like you, I added up the 3 phases of the Shelly 3EM, but additionally disabled these 3 phases in the Recorder so that they are no longer written to the database. (See -> sensor.shellypro3em_*_active_power)

     

    - trigger:
        - platform: time_pattern
          seconds: "/10"
      sensor:
      - name: ShellyPro3EM aktuelle Leistung Gesamt
        unique_id: shellypro3em_aktuelle_leistung_gesamt
        unit_of_measurement: 'W'
        state: '{{ (states("sensor.shellypro3em_0cb815fcbbd4_phase_a_active_power") | float(0)
               + states("sensor.shellypro3em_0cb815fcbbd4_phase_b_active_power") | float(0)
               + states("sensor.shellypro3em_0cb815fcbbd4_phase_c_active_power") | float(0)) | round(2) }}'
        device_class: power
        state_class: measurement

     

    Recorder settings: 

    db_url: !secret mariadb_url
    purge_keep_days: 30
    commit_interval: 60
    exclude:
      domains:
        - device_tracker
        - sun
      entity_globs:
        - sensor.count*
        - sensor.date*
        - sensor.time*
        - sensor.internet_time*
        - sensor.abfalltermine_*
        # sonos
        - switch.*_surround_music_full_volume*
        - switch.*_surround_enabled*
        - switch.*_subwoofer_enabled*
        - switch.*_speech_enhancement*
        - switch.*_night_sound*
        - switch.*_loudness*
        - switch.*_crossfade*
        - number.*_treble*
        - number.*_bass*
        - number.*_gain*
        - number.*_surround_level*
        - number.*_sub_gain*
        - number.*_audio_delay*
        - binary_sensor.*_microphone*
        # shelly
        - sensor.shellypro3em_*_active_power
        - sensor.*_power_factor
        - sensor.*_voltage
        - sensor.*_current
        - sensor.*_device_temperature*
        - binary_sensor.*_overheating
        - binary_sensor.*_overpowering
        - binary_sensor.*_overvoltage
        # aqara weather and window sensor
        - button.lumi*_identify*
        # aqara smart plug
        - sensor.lumi*_rms_voltage*
        - sensor.lumi*_power_factor*
        - sensor.lumi*_device_temperature*
        - button.lumi*_identify*
      entities:
        # shelly
        - sensor.steckdose_server_power

     

     

     

    On 9/24/2023 at 10:27 PM, mgutt said:

    2.) Custom sensors/entities are not removed from the DB even though they no longer exist. E.g. everything starting with "stromzahler" does not exist any more at all. I have already tried to delete these entities via Developer Tools > Services > Recorder: Purge Entities > Entity Globs to remove > "- sensor.stromzahler*", but they stubbornly remain in the database 🤔

     

    For me it works via Developer Tools > Services > Recorder: Purge Entities > select entity.

    I simply select all the desired entities manually and then click "Call Service". 

    Then wait 5-10 minutes and restart. That usually helps. 
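
    For reference, the same purge can also be triggered as a YAML service call instead of selecting entities by hand. A minimal sketch, assuming Home Assistant's recorder.purge_entities service and reusing the glob from mgutt's example (run from Developer Tools > Services in YAML mode):

    # Sketch: purge all recorder data for entities matching the glob.
    service: recorder.purge_entities
    data:
      entity_globs:
        - sensor.stromzahler*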

  3. The following script is run on my system via the User Scripts plugin at array start:

     

    #!/bin/bash
    znapzend --logto=/var/log/znapzend.log --daemonize


    And these are the settings for the backup of the appdata share:

    IMG_0169.jpeg


    The command for that is linked in my post above.
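
    For anyone who does not want to follow the link: a minimal sketch of what such a znapzendzetup configuration could look like. The pool/dataset names and retention plans below are placeholders, not my actual values:

    # Sketch: recursive plan for the appdata dataset.
    # Keep hourly snapshots for 1 day locally and daily snapshots for 1 month
    # on the backup destination.
    znapzendzetup create --recursive SRC '1d=>1h' cache/appdata \
      DST:a '1m=>1d' backuppool/backups/appdata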

  4. @Markus-Berlin

    Redis is not strictly necessary in principle, but I also use it for file locking because the performance feels noticeably better that way.

    In my config.php it looks like this:

     

      'memcache.local' => '\\OC\\Memcache\\APCu',
      'memcache.distributed' => '\\OC\\Memcache\\Redis',
      'memcache.locking' => '\\OC\\Memcache\\Redis',
      'redis' => 
      array (
        'host' => 'Redis',
        'port' => 6379,
      ),

     

    The official documentation and Nextcloud's recommendations on this:

    https://docs.nextcloud.com/server/latest/admin_manual/configuration_server/caching_configuration.html
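
    If you prefer not to edit config.php by hand, the same values can also be written with occ. A sketch only; the container name "Nextcloud" and the www-data user are assumptions based on the official image, linuxserver-based containers use a different user and an occ wrapper:

    # Sketch: set the caching/locking backends via occ inside the Nextcloud container.
    docker exec -u www-data Nextcloud php occ config:system:set memcache.local --value '\OC\Memcache\APCu'
    docker exec -u www-data Nextcloud php occ config:system:set memcache.distributed --value '\OC\Memcache\Redis'
    docker exec -u www-data Nextcloud php occ config:system:set memcache.locking --value '\OC\Memcache\Redis'
    docker exec -u www-data Nextcloud php occ config:system:set redis host --value 'Redis'
    docker exec -u www-data Nextcloud php occ config:system:set redis port --value 6379 --type integer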

  5. It does indeed have something to do with Docker credentials. 
     

    I tried the following command (after first creating an API token on GitHub with my account):

    echo <GITHUB_TOKEN> | docker login ghcr.io -u <GITHUB_USERNAME> --password-stdin


    After that, credentials are stored in the file /root/.docker/config.json. 
    They are still available after a reboot. (Shouldn't they actually be gone after a reboot?)
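
    For context, after that login the file basically just stores the registry credentials base64-encoded, roughly like this (a sketch; the auth value is "<GITHUB_USERNAME>:<GITHUB_TOKEN>" base64-encoded):

    {
      "auths": {
        "ghcr.io": {
          "auth": "<base64 of GITHUB_USERNAME:GITHUB_TOKEN>"
        }
      }
    }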

     

    With that I was then able to pull the container mentioned above. 

     

    What is not entirely clear to me is why Unraid does not set credentials after a reboot, so that pulling containers from ghcr.io is possible by default.

     

    EDIT:

    I have now deleted the file /root/.docker/config.json completely. 
    That also allowed me to pull the container. 

  6. Hello, 

    unfortunately I have the problem that I cannot pull any containers from ghcr.io. 
    When pulling I get the following message. 

    IMG_0127.jpeg


    As a test, I then installed an LXC environment on Unraid, installed Docker inside it and pulled the same image. That worked, so I can rule out DNS or firewall problems. 

    IMG_0125.jpeg

     

    What I have also tried:

    • Disabled all Powertop features in the go file 
    • Checked User Scripts
    • Docker logout followed by a reboot (suspected Docker registry credentials)
    • No custom Docker daemon.json file in use 
    • No custom changes made to docker.cfg

     

    Unfortunately I'm at my wits' end. 
     

    Maybe someone here has an idea what it could be?

  7. 26 minutes ago, mgutt said:
    1 hour ago, Anym001 said:

    You mean /mnt/user, right?

    If you have switched Docker to /mnt/cache and want to change it back to /mnt/user. That's how it was meant.

     

    Ah okay, I understand.

     

    So basically everywhere here, right?

    1. Switch all containers to /mnt/user in their templates

    2. Switch to /mnt/user in the Docker settings (for both image and directory mode)

    3. Change any scripts to /mnt/user

    4. Switch to /mnt/user in the VM templates

    5. Switch to /mnt/user in the VM settings

     

    Hope I haven't forgotten anything. (See the search sketch below.)
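
    One way to double-check is to search for leftover /mnt/cache references. A minimal sketch, assuming the usual Unraid config locations; adjust the paths if your setup differs:

    # Sketch: find remaining hard-coded /mnt/cache paths after switching back to /mnt/user.
    grep -rl '/mnt/cache' /boot/config/plugins/dockerMan/templates-user/   # Docker templates
    grep -rl '/mnt/cache' /etc/libvirt/qemu/*.xml                          # VM definitions
    grep -rl '/mnt/cache' /boot/config/plugins/user.scripts/scripts/       # User Scripts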

  8. On 4/28/2023 at 12:50 AM, mgutt said:

    As if that weren't enough, there is another big change: Exclusive Shares. If you create a pool without secondary storage, this pool is no longer mounted via Unraid's typical FUSE file system, which is known to cost a lot of CPU power; instead a simple bind mount is used, which attaches the drive to the operating system without overhead:

    image.png.5cf6985a6dd3e6383d314ed5abebb1ee.png

     

    Do I understand correctly that you no longer have to use /mnt/cache/appdata/whatever to get better performance, but /mnt/user/appdata/whatever delivers the same result? (Keyword: performance advantage for e.g. Nextcloud or MariaDB through a direct mount)

  9. 1 hour ago, KluthR said:

    They are not exact matches, so this is working as expected. I don't resolve any path matching syntax, that's tar's task. Only if a mapping is listed exactly 1:1 inside the exclusions do I not even pass it to tar.

     

    Many thanks for the information and the prompt adjustments.

  10. 10 hours ago, KluthR said:
    21 hours ago, Anym001 said:

    I have now tried my backup with the current beta version.

    Could you update as well and upload a new debug log? I saw an issue inside your log and want to confirm it's fixed.

     

    Here is the debug.log with the updated version.

     

    Does it make sense to output the following information as info instead of warning?

    [25.04.2023 06:54:10][swag][warning] Exclusion "/mnt/cache/appdata/Jellyfin/config/log" matches a container volume - ignoring volume/exclusion pair
    [25.04.2023 06:54:11][swag][warning] Exclusion "/mnt/cache/appdata/nextcloud/log" matches a container volume - ignoring volume/exclusion pair
    [25.04.2023 06:54:12][swag][warning] Exclusion "/mnt/cache/appdata/vaultwarden/vaultwarden.log" matches a container volume - ignoring volume/exclusion pair

     

    The excludes of the "debian-bullseye" and "PhotoPrism" containers are not considered. 

    [25.04.2023 06:54:22][Debian-Bullseye][debug] Container got excludes! 
    /mnt/cache/appdata/other/debian-bullseye/
    [25.04.2023 06:54:22][Debian-Bullseye][info] Calculated volumes to back up: /mnt/cache/appdata/debian-bullseye, /mnt/cache/appdata/other/debian-bullseye/debian.sh
    [25.04.2023 07:07:10][PhotoPrism][debug] Container got excludes! 
    /mnt/cache/appdata/other/PhotoPrism/
    [25.04.2023 07:07:10][PhotoPrism][info] Calculated volumes to back up: /mnt/cache/appdata/photoprism/config, /mnt/cache/appdata/other/PhotoPrism/.ppignore

     

    ab.debug.log

  11. I have now tried my backup with the current beta version.
    The error message with the swag container no longer appears, but I get other warnings in the log.

    Debug log attached

     

    [24.04.2023 09:03:29][info] 👋 WELCOME TO APPDATA.BACKUP!! :D
    [24.04.2023 09:03:29][info] Backing up to: /mnt/user/backup/appdatabackup/ab_20230424_090329
    [24.04.2023 09:03:29][info] Saving container XML files...
    [24.04.2023 09:03:29][info] Method: Stop all container before continuing.
    [24.04.2023 09:03:29][info] Stopping vaultwarden...
    [24.04.2023 09:03:30][info] done! (took 1 seconds)
    [24.04.2023 09:03:30][info] No stopping needed for unmanic: Not started!
    [24.04.2023 09:03:30][info] Stopping Thunderbird...
    [24.04.2023 09:03:32][info] done! (took 2 seconds)
    [24.04.2023 09:03:32][info] No stopping needed for tdarr_node: Not started!
    [24.04.2023 09:03:32][info] No stopping needed for tdarr: Not started!
    [24.04.2023 09:03:32][info] Stopping Sonarr...
    [24.04.2023 09:03:34][info] done! (took 2 seconds)
    [24.04.2023 09:03:34][info] Stopping sabnzbd...
    [24.04.2023 09:03:38][info] done! (took 4 seconds)
    [24.04.2023 09:03:38][info] Stopping Radarr...
    [24.04.2023 09:03:41][info] done! (took 3 seconds)
    [24.04.2023 09:03:41][info] Stopping Portfolio-Performance...
    [24.04.2023 09:03:46][info] done! (took 5 seconds)
    [24.04.2023 09:03:46][info] Stopping PhotoPrism...
    [24.04.2023 09:03:46][info] done! (took 0 seconds)
    [24.04.2023 09:03:46][info] Stopping paperless-ngx...
    [24.04.2023 09:03:50][info] done! (took 4 seconds)
    [24.04.2023 09:03:50][info] Stopping Nextcloud...
    [24.04.2023 09:03:51][info] done! (took 1 seconds)
    [24.04.2023 09:03:51][info] No stopping needed for MusicBrainz-Picard: Not started!
    [24.04.2023 09:03:51][info] No stopping needed for MKVToolNix: Not started!
    [24.04.2023 09:03:51][info] No stopping needed for MakeMKV: Not started!
    [24.04.2023 09:03:51][info] No stopping needed for Krusader: Not started!
    [24.04.2023 09:03:51][info] No stopping needed for jellyseerr: Not started!
    [24.04.2023 09:03:51][info] Stopping Jellyfin...
    [24.04.2023 09:03:51][info] done! (took 0 seconds)
    [24.04.2023 09:03:51][info] No stopping needed for jDownloader2: Not started!
    [24.04.2023 09:03:51][info] Stopping Imaginary...
    [24.04.2023 09:03:52][info] done! (took 1 seconds)
    [24.04.2023 09:03:52][info] Stopping heimdall...
    [24.04.2023 09:03:56][info] done! (took 4 seconds)
    [24.04.2023 09:03:56][info] No stopping needed for HandBrake: Not started!
    [24.04.2023 09:03:56][info] Stopping Duplicacy...
    [24.04.2023 09:03:56][info] done! (took 0 seconds)
    [24.04.2023 09:03:56][info] Stopping Debian-Bullseye...
    [24.04.2023 09:03:57][info] done! (took 1 seconds)
    [24.04.2023 09:03:57][info] Stopping Collabora...
    [24.04.2023 09:03:58][info] done! (took 1 seconds)
    [24.04.2023 09:03:58][info] Stopping calibre-web...
    [24.04.2023 09:04:02][info] done! (took 4 seconds)
    [24.04.2023 09:04:02][info] No stopping needed for calibre: Not started!
    [24.04.2023 09:04:02][info] Stopping Authelia...
    [24.04.2023 09:04:02][info] done! (took 0 seconds)
    [24.04.2023 09:04:02][info] No stopping needed for adminer: Not started!
    [24.04.2023 09:04:02][info] Stopping swag...
    [24.04.2023 09:04:07][info] done! (took 5 seconds)
    [24.04.2023 09:04:07][info] Stopping Redis...
    [24.04.2023 09:04:08][info] done! (took 1 seconds)
    [24.04.2023 09:04:08][info] Stopping mariadb...
    [24.04.2023 09:04:12][info] done! (took 4 seconds)
    [24.04.2023 09:04:12][info] Starting backup for containers
    [24.04.2023 09:04:12][info] Backing up mariadb...
    [24.04.2023 09:07:50][info] Backup created without issues
    [24.04.2023 09:07:50][info] Verifying backup...
    [24.04.2023 09:08:25][info] Redis does not have any volume to back up! Skipping
    [24.04.2023 09:08:25][warning] Exclusion "/mnt/cache/appdata/Jellyfin/config/log" matches a container volume - ignoring volume/exclusion pair
    [24.04.2023 09:08:26][warning] Exclusion "/mnt/cache/appdata/nextcloud/log" matches a container volume - ignoring volume/exclusion pair
    [24.04.2023 09:08:28][warning] Exclusion "/mnt/cache/appdata/vaultwarden/vaultwarden.log" matches a container volume - ignoring volume/exclusion pair
    [24.04.2023 09:08:29][info] Backing up swag...
    [24.04.2023 09:08:34][info] Backup created without issues
    [24.04.2023 09:08:34][info] Verifying backup...
    [24.04.2023 09:08:35][info] adminer does not have any volume to back up! Skipping
    [24.04.2023 09:08:35][info] Backing up Authelia...
    [24.04.2023 09:08:35][info] Backup created without issues
    [24.04.2023 09:08:35][info] Verifying backup...
    [24.04.2023 09:08:35][warning] '/mnt/cache/Ebooks/Bücher_Import' is within mapped volume '/mnt/cache/Ebooks/Bücher'! Ignoring!
    [24.04.2023 09:08:36][info] Backing up calibre...
    [24.04.2023 09:08:38][info] Backup created without issues
    [24.04.2023 09:08:38][info] Verifying backup...
    [24.04.2023 09:08:39][info] Backing up calibre-web...
    [24.04.2023 09:08:39][info] Backup created without issues
    [24.04.2023 09:08:39][info] Verifying backup...
    [24.04.2023 09:08:39][info] Collabora does not have any volume to back up! Skipping
    [24.04.2023 09:08:39][warning] Removing container mapping "/mnt/cache/appdata" because it is a source path!
    [24.04.2023 09:08:40][info] Backing up Debian-Bullseye...
    [24.04.2023 09:10:43][info] Backup created without issues
    [24.04.2023 09:10:43][info] Verifying backup...
    [24.04.2023 09:11:09][info] Backing up Duplicacy...
    [24.04.2023 09:11:22][info] Backup created without issues
    [24.04.2023 09:11:22][info] Verifying backup...
    [24.04.2023 09:11:25][warning] '/mnt/cache/Temp_Cache/Handbrake/Input' is within mapped volume '/mnt/cache/Temp_Cache/Handbrake'! Ignoring!
    [24.04.2023 09:11:26][warning] '/mnt/cache/Temp_Cache/Handbrake/Output' is within mapped volume '/mnt/cache/Temp_Cache/Handbrake'! Ignoring!
    [24.04.2023 09:11:27][info] Backing up HandBrake...
    [24.04.2023 09:11:27][info] Backup created without issues
    [24.04.2023 09:11:27][info] Verifying backup...
    [24.04.2023 09:11:27][info] Backing up heimdall...
    [24.04.2023 09:11:28][info] Backup created without issues
    [24.04.2023 09:11:28][info] Verifying backup...
    [24.04.2023 09:11:28][info] Imaginary does not have any volume to back up! Skipping
    [24.04.2023 09:11:28][info] Backing up jDownloader2...
    [24.04.2023 09:11:39][info] Backup created without issues
    [24.04.2023 09:11:39][info] Verifying backup...
    [24.04.2023 09:11:41][warning] '/mnt/user/Videos_Privat' is within mapped volume '/mnt/user/Videos'! Ignoring!
    [24.04.2023 09:11:42][warning] '/mnt/user/Filme/Filme Kids' is within mapped volume '/mnt/user/Filme/Filme'! Ignoring!
    [24.04.2023 09:11:44][warning] '/mnt/user/Serien/Serien Kids' is within mapped volume '/mnt/user/Serien/Serien'! Ignoring!
    [24.04.2023 09:11:45][info] Backing up Jellyfin...
    [24.04.2023 09:19:07][info] Backup created without issues
    [24.04.2023 09:19:07][info] Verifying backup...
    [24.04.2023 09:20:52][info] Backing up jellyseerr...
    [24.04.2023 09:20:52][info] Backup created without issues
    [24.04.2023 09:20:52][info] Verifying backup...
    [24.04.2023 09:20:52][warning] '/mnt/cache/appdata/krusader' is within mapped volume '/mnt'! Ignoring!
    [24.04.2023 09:20:53][info] Krusader does not have any volume to back up! Skipping
    [24.04.2023 09:20:53][info] Backing up MakeMKV...
    [24.04.2023 09:20:53][info] Backup created without issues
    [24.04.2023 09:20:53][info] Verifying backup...
    [24.04.2023 09:20:53][info] Backing up MKVToolNix...
    [24.04.2023 09:20:53][info] Backup created without issues
    [24.04.2023 09:20:53][info] Verifying backup...
    [24.04.2023 09:20:53][info] Backing up MusicBrainz-Picard...
    [24.04.2023 09:20:58][info] Backup created without issues
    [24.04.2023 09:20:58][info] Verifying backup...
    [24.04.2023 09:20:59][info] Backing up Nextcloud...
    [24.04.2023 09:21:25][info] Backup created without issues
    [24.04.2023 09:21:25][info] Verifying backup...
    [24.04.2023 09:21:30][info] Backing up paperless-ngx...
    [24.04.2023 09:21:32][info] Backup created without issues
    [24.04.2023 09:21:32][info] Verifying backup...
    [24.04.2023 09:21:32][info] Backing up PhotoPrism...
    [24.04.2023 09:21:32][info] Backup created without issues
    [24.04.2023 09:21:32][info] Verifying backup...
    [24.04.2023 09:21:33][info] Backing up Portfolio-Performance...
    [24.04.2023 09:21:45][info] Backup created without issues
    [24.04.2023 09:21:45][info] Verifying backup...
    [24.04.2023 09:21:47][warning] '/mnt/user/Filme/Filme Kids' is within mapped volume '/mnt/user/Filme/Filme'! Ignoring!
    [24.04.2023 09:21:49][info] Backing up Radarr...
    [24.04.2023 09:22:52][info] Backup created without issues
    [24.04.2023 09:22:52][info] Verifying backup...
    [24.04.2023 09:23:05][info] Backing up sabnzbd...
    [24.04.2023 09:23:43][info] Backup created without issues
    [24.04.2023 09:23:43][info] Verifying backup...
    [24.04.2023 09:23:52][warning] '/mnt/user/Serien/Serien Kids' is within mapped volume '/mnt/user/Serien/Serien'! Ignoring!
    [24.04.2023 09:23:53][info] Backing up Sonarr...
    [24.04.2023 09:24:00][info] Backup created without issues
    [24.04.2023 09:24:00][info] Verifying backup...
    [24.04.2023 09:24:01][info] Backing up tdarr...
    [24.04.2023 09:24:22][info] Backup created without issues
    [24.04.2023 09:24:22][info] Verifying backup...
    [24.04.2023 09:24:28][info] Backing up tdarr_node...
    [24.04.2023 09:24:28][info] Backup created without issues
    [24.04.2023 09:24:28][info] Verifying backup...
    [24.04.2023 09:24:28][info] Backing up Thunderbird...
    [24.04.2023 09:25:16][info] Backup created without issues
    [24.04.2023 09:25:16][info] Verifying backup...
    [24.04.2023 09:25:24][info] Backing up unmanic...
    [24.04.2023 09:25:49][info] Backup created without issues
    [24.04.2023 09:25:49][info] Verifying backup...
    [24.04.2023 09:25:55][info] Backing up vaultwarden...
    [24.04.2023 09:25:56][info] Backup created without issues
    [24.04.2023 09:25:56][info] Verifying backup...
    [24.04.2023 09:25:56][info] Set containers to previous state
    [24.04.2023 09:25:56][info] Starting mariadb... (try #1)
    [24.04.2023 09:25:58][info] Starting Redis... (try #1)
    [24.04.2023 09:26:01][info] Starting swag... (try #1)
    [24.04.2023 09:26:03][info] adminer is being ignored, because it was not started before (or should not be started).
    [24.04.2023 09:26:03][info] Starting Authelia... (try #1)
    [24.04.2023 09:26:03][info] The container has a delay set, waiting 5 seconds before carrying on
    [24.04.2023 09:26:08][info] calibre is being ignored, because it was not started before (or should not be started).
    [24.04.2023 09:26:08][info] Starting calibre-web... (try #1)
    [24.04.2023 09:26:11][info] Starting Collabora... (try #1)
    [24.04.2023 09:26:13][info] Starting Debian-Bullseye... (try #1)
    [24.04.2023 09:26:15][info] Starting Duplicacy... (try #1)
    [24.04.2023 09:26:17][info] HandBrake is being ignored, because it was not started before (or should not be started).
    [24.04.2023 09:26:17][info] Starting heimdall... (try #1)
    [24.04.2023 09:26:20][info] Starting Imaginary... (try #1)
    [24.04.2023 09:26:22][info] jDownloader2 is being ignored, because it was not started before (or should not be started).
    [24.04.2023 09:26:22][info] Starting Jellyfin... (try #1)
    [24.04.2023 09:26:24][info] jellyseerr is being ignored, because it was not started before (or should not be started).
    [24.04.2023 09:26:24][info] Krusader is being ignored, because it was not started before (or should not be started).
    [24.04.2023 09:26:24][info] MakeMKV is being ignored, because it was not started before (or should not be started).
    [24.04.2023 09:26:24][info] MKVToolNix is being ignored, because it was not started before (or should not be started).
    [24.04.2023 09:26:24][info] MusicBrainz-Picard is being ignored, because it was not started before (or should not be started).
    [24.04.2023 09:26:24][info] Starting Nextcloud... (try #1)
    [24.04.2023 09:26:27][info] Starting paperless-ngx... (try #1)
    [24.04.2023 09:26:29][info] Starting PhotoPrism... (try #1)
    [24.04.2023 09:26:32][info] Starting Portfolio-Performance... (try #1)
    [24.04.2023 09:26:34][info] Starting Radarr... (try #1)
    [24.04.2023 09:26:36][info] Starting sabnzbd... (try #1)
    [24.04.2023 09:26:38][info] Starting Sonarr... (try #1)
    [24.04.2023 09:26:41][info] tdarr is being ignored, because it was not started before (or should not be started).
    [24.04.2023 09:26:41][info] tdarr_node is being ignored, because it was not started before (or should not be started).
    [24.04.2023 09:26:41][info] Starting Thunderbird... (try #1)
    [24.04.2023 09:26:43][info] unmanic is being ignored, because it was not started before (or should not be started).
    [24.04.2023 09:26:43][info] Starting vaultwarden... (try #1)
    [24.04.2023 09:26:45][info] Backing up the flash drive.
    [24.04.2023 09:27:09][info] Flash backup created!
    [24.04.2023 09:27:09][info] Checking retention...
    [24.04.2023 09:27:09][info] DONE! Thanks for using this plugin and have a safe day ;)
    [24.04.2023 09:27:09][info] ❤️

     

    ab.debug.log

  12. 22 hours ago, KluthR said:
    23 hours ago, Anym001 said:

    Sorry, I posted the wrong debug log above.

    Ah, I see.

    swag backs up contents of Jellyfin, maybe you want to exclude them (/mnt/cache/appdata/Jellyfin/log) as well as the other container mappings (vaultwarden?).

     

    I have now tried to exclude the affected mappings. 
    Unfortunately I still get an error message. 
    The tar verification failed again. (A minimal reproduction of the failure is sketched at the end of this post.)

    Debug log attached

     

    image.thumb.png.459b4ae1f30aa44019dc0c57eb392ea2.png

     

     

    [17.04.2023 20:57:45][debug] Backup swag - Container Volumeinfo: Array
    (
        [0] => /mnt/cache/appdata/swag:/config:rw
        [1] => /mnt/cache/appdata/Jellyfin/config/log/:/var/log/jellyfin/:ro
        [2] => /mnt/cache/appdata/nextcloud/log/:/var/log/nextcloud/:ro
        [3] => /mnt/cache/appdata/vaultwarden/vaultwarden.log:/var/log/vaultwarden/vaultwarden.log:ro
    )
    
    [17.04.2023 20:57:45][debug] Should NOT backup ext volumes, sanitizing them...
    [17.04.2023 20:57:45][debug] Volume '/mnt/cache/appdata/swag' IS within AppdataPath '/mnt/cache/appdata'!
    [17.04.2023 20:57:45][debug] Volume '/mnt/cache/appdata/Jellyfin/config/log' IS within AppdataPath '/mnt/cache/appdata'!
    [17.04.2023 20:57:45][debug] Volume '/mnt/cache/appdata/nextcloud/log' IS within AppdataPath '/mnt/cache/appdata'!
    [17.04.2023 20:57:45][debug] Volume '/mnt/cache/appdata/vaultwarden/vaultwarden.log' IS within AppdataPath '/mnt/cache/appdata'!
    [17.04.2023 20:57:45][debug] Final volumes: /mnt/cache/appdata/swag, /mnt/cache/appdata/Jellyfin/config/log, /mnt/cache/appdata/nextcloud/log, /mnt/cache/appdata/vaultwarden/vaultwarden.log
    [17.04.2023 20:57:45][debug] Target archive: /mnt/user/backup/appdatabackup/ab_20230417_205409/swag.tar.gz
    [17.04.2023 20:57:45][debug] Container got excludes! 
    /mnt/cache/appdata/Jellyfin/config/log
    /mnt/cache/appdata/nextcloud/log
    /mnt/cache/appdata/vaultwarden/vaultwarden.log
    [17.04.2023 20:57:45][debug] Generated tar command: --exclude '/mnt/cache/appdata/vaultwarden/vaultwarden.log' --exclude '/mnt/cache/appdata/nextcloud/log' --exclude '/mnt/cache/appdata/Jellyfin/config/log' -c -P -z -f '/mnt/user/backup/appdatabackup/ab_20230417_205409/swag.tar.gz' '/mnt/cache/appdata/swag' '/mnt/cache/appdata/Jellyfin/config/log' '/mnt/cache/appdata/nextcloud/log' '/mnt/cache/appdata/vaultwarden/vaultwarden.log'
    [17.04.2023 20:57:45][info] Backing up swag...
    [17.04.2023 20:57:51][debug] Tar out: 
    [17.04.2023 20:57:51][info] Backup created without issues
    [17.04.2023 20:57:51][info] Verifying backup...
    [17.04.2023 20:57:51][debug] Final verify command: --exclude '/mnt/cache/appdata/vaultwarden/vaultwarden.log' --exclude '/mnt/cache/appdata/nextcloud/log' --exclude '/mnt/cache/appdata/Jellyfin/config/log' --diff -f '/mnt/user/backup/appdatabackup/ab_20230417_205409/swag.tar.gz' '/mnt/cache/appdata/swag' '/mnt/cache/appdata/Jellyfin/config/log' '/mnt/cache/appdata/nextcloud/log' '/mnt/cache/appdata/vaultwarden/vaultwarden.log'
    [17.04.2023 20:57:52][debug] Tar out: tar: Removing leading `/' from member names; tar: /mnt/cache/appdata/Jellyfin/config/log: Not found in archive; tar: /mnt/cache/appdata/nextcloud/log: Not found in archive; tar: /mnt/cache/appdata/vaultwarden/vaultwarden.log: Not found in archive; tar: Exiting with failure status due to previous errors
    [17.04.2023 20:57:52][error] tar verification failed! More output available inside debuglog, maybe.
    [17.04.2023 20:57:52][debug] lsof(/mnt/cache/appdata/swag)
    Array
    (
    )
    
    [17.04.2023 20:57:52][debug] lsof(/mnt/cache/appdata/Jellyfin/config/log)
    Array
    (
    )
    
    [17.04.2023 20:57:52][debug] lsof(/mnt/cache/appdata/nextcloud/log)
    Array
    (
    )
    
    [17.04.2023 20:57:52][debug] lsof(/mnt/cache/appdata/vaultwarden/vaultwarden.log)
    Array
    (
    )
    
    [17.04.2023 20:57:52][debug] AFTER verify: Array
    (
        [Image] => linuxserver/swag:latest
        [ImageId] => a51e6e60de2c
        [Name] => swag
        [Status] => Exited (0) 3 minutes ago
        [Running] => 
        [Paused] => 
        [Cmd] => /init
        [Id] => ff1513d2b608
        [Volumes] => Array
            (
                [0] => /mnt/cache/appdata/swag:/config:rw
                [1] => /mnt/cache/appdata/Jellyfin/config/log/:/var/log/jellyfin/:ro
                [2] => /mnt/cache/appdata/nextcloud/log/:/var/log/nextcloud/:ro
                [3] => /mnt/cache/appdata/vaultwarden/vaultwarden.log:/var/log/vaultwarden/vaultwarden.log:ro
            )
    
        [Created] => 5 hours ago
        [NetworkMode] => proxynet
        [CPUset] => 
        [BaseImage] => 
        [Icon] => https://raw.githubusercontent.com/linuxserver/docker-templates/master/linuxserver.io/img/linuxserver-ls-logo.png
        [Url] => https://[IP]:[PORT:443]
        [Shell] => 
        [Ports] => Array
            (
                [0] => Array
                    (
                        [IP] => 
                        [PrivatePort] => 443
                        [PublicPort] => 4443
                        [NAT] => 1
                        [Type] => tcp
                    )
    
                [1] => Array
                    (
                        [IP] => 
                        [PrivatePort] => 80
                        [PublicPort] => 4480
                        [NAT] => 1
                        [Type] => tcp
                    )
    
            )
    
    )

     

     

    ab.debug.log
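
    For what it's worth, the verification failure can be reproduced outside the plugin: when a path is both passed to tar as an input and matched by --exclude, it never ends up in the archive, so the later --diff run reports it as "Not found in archive" and exits non-zero. A minimal sketch with throwaway paths:

    # Sketch: reproduce the "Not found in archive" verification failure.
    mkdir -p /tmp/ab-demo/app/log
    touch /tmp/ab-demo/app/app.conf /tmp/ab-demo/app/log/app.log

    # Create: the excluded path is also listed as an input, so it is simply skipped.
    tar --exclude '/tmp/ab-demo/app/log' -c -P -z -f /tmp/ab-demo/demo.tar.gz \
      /tmp/ab-demo/app /tmp/ab-demo/app/log

    # Verify: tar cannot find the excluded input in the archive and exits with failure.
    tar --exclude '/tmp/ab-demo/app/log' --diff -P -f /tmp/ab-demo/demo.tar.gz \
      /tmp/ab-demo/app /tmp/ab-demo/app/log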

  13. 11 hours ago, KluthR said:

    @Anym001 Duplicacy and jellyfin both have volumes within volumes (cache on each). This works for Docker but messes with the tar verification. The plugin script does not handle this currently. It will in a future version.

     

    Sorry, I posted the wrong debug log above.

    Here is the current debug log, where the problems with Duplicacy and Jellyfin have been fixed.
    The error for swag remains.

    ab.debug.log

  14. 41 minutes ago, KluthR said:

    @Anym001 Duplicacy and jellyfin both have volumes within volumes (cache on each).

     

    I solved this problem for the Jellyfin and Duplicacy containers but not for the swag one.

    Is there any solution for this or should I wait for the future version?
