T0a

Everything posted by T0a

  1. I added the progress flag to my borgmatic command (borgmatic create --verbosity 1 --progress --stats) today and noticed that the logs available via the WebUI are incomplete compared to the docker logs from the command line. The left side shows the logs via the built-in web UI log viewer. The right side shows the logs via `docker logs -f <id>`. I am not sure whether this is an issue with how Unraid fetches the docker logs for the container. Has anybody experienced something similar in the past, or can explain that behavior?
  2. Thanks @dlandon. I cannot resume my preclear session from yesterday after a server shutdown. Any ideas? Is this expected behavior? Are there any workarounds? I don't want to start the preclear from the beginning. Update: My mistake. You need to click "start preclear" and then the plugin asks to resume. I initially thought "start preclear" would trigger a new preclear run. I expected the UI to show my paused session in the devices table. Log: May 3 16:22:27 Preclear resumed on all devices.
  3. I just tested this plugin with a new disk attached via USB to my Unraid server (6.9.2). I started the preclear process and paused it when the status reached 65%. Unfortunately, no "*.resume" file was created on "/boot":
May 02 17:49:33 preclear_disk_WD-WX92DA1DAR18_12570: Preclear Disk Version: 1.0.25
May 02 17:49:33 preclear_disk_WD-WX92DA1DAR18_12570: Disk size: 4000787030016
May 02 17:49:33 preclear_disk_WD-WX92DA1DAR18_12570: Disk blocks: 976754646
May 02 17:49:33 preclear_disk_WD-WX92DA1DAR18_12570: Blocks (512 bytes): 7814037168
May 02 17:49:33 preclear_disk_WD-WX92DA1DAR18_12570: Block size: 4096
May 02 17:49:33 preclear_disk_WD-WX92DA1DAR18_12570: Start sector: 0
May 02 17:49:36 preclear_disk_WD-WX92DA1DAR18_12570: Zeroing: zeroing the disk started (1/5) ...
May 02 17:49:36 preclear_disk_WD-WX92DA1DAR18_12570: Zeroing: emptying the MBR.
May 02 19:13:16 preclear_disk_WD-WX92DA1DAR18_12570: Zeroing: progress - 25% zeroed @ 192 MB/s
May 02 20:45:26 preclear_disk_WD-WX92DA1DAR18_12570: Zeroing: progress - 50% zeroed @ 167 MB/s
May 02 21:49:18 preclear_disk_WD-WX92DA1DAR18_12570: Pause requested
May 02 21:49:18 preclear_disk_WD-WX92DA1DAR18_12570: cp: cannot create regular file '/boot/preclear_reports/WD-WX92DA1DAR18.resume': No such file or directory
May 02 21:49:18 preclear_disk_WD-WX92DA1DAR18_12570: Paused
After manually creating the folder "/boot/preclear_reports", starting the preclear process, and pausing it again, the file "WD-WX92DA1DAR18.resume" is written to disk as expected:
root@server:/boot# ls /boot/preclear_reports/
WD-WX92DA1DAR18.resume
root@server:/boot# cat /boot/preclear_reports/WD-WX92DA1DAR18.resume
# parsed arguments
verify_disk_mbr=''
erase_preclear='n'
short_test=''
read_stress='y'
erase_disk='n'
notify_freq='1'
format_html=''
verify_zeroed=''
write_disk_mbr=''
write_size=''
skip_preread='y'
read_size=''
notify_channel='4'
no_prompt='y'
cycles='1'
skip_postread=''
read_blocks=''
# current operation
current_op='zero'
current_pos='2621668589568'
current_timer='14431'
current_cycle='1'
# previous operations
preread_average=''
preread_speed=''
write_average=''
write_speed=''
postread_average=''
postread_speed=''
# current elapsed time
main_elapsed_time='14442'
cycle_elapsed_time='14441'
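The failure mode above (a missing report directory on the flash drive) can be sketched generically. This is a hedged illustration, not the plugin's actual code: REPORT_DIR, the serial "WD-TEST", and the key/value content are placeholders standing in for what the plugin writes; on Unraid the real location is /boot/preclear_reports.

```shell
# Workaround sketch: the preclear script fails with "No such file or
# directory" when the report folder is missing, so create it before pausing.
# REPORT_DIR is a stand-in for /boot/preclear_reports so this runs anywhere.
REPORT_DIR="${REPORT_DIR:-/tmp/preclear_reports}"
mkdir -p "$REPORT_DIR"                               # the actual fix
# Simulate the plugin persisting its resume state as key=value pairs:
printf "current_op='zero'\n" > "$REPORT_DIR/WD-TEST.resume"
cat "$REPORT_DIR/WD-TEST.resume"
```

With the directory in place, the pause/resume cycle works as shown in the log above.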
  4. Maybe WatchTower will serve your needs. Personally, I don't like automatic updates for my containers, as I prefer to check the application after an update. Keep in mind that even minor updates might break the setup. Maybe Diun will serve your needs instead. I do not plan to remove the paperless-ng container any time soon. Please re-read the introduction post:
  5. That is because both instances share the same session identifier (cookie). Apply the environment variable PAPERLESS_COOKIE_PREFIX=second to one of them. Updated the FAQ.
  6. Migrating from paperless to paperless-ng: https://paperless-ng.readthedocs.io/en/latest/setup.html?highlight=migrate#migration-to-paperless-ng
Migrating from paperless to paperless-ngx: https://paperless-ngx.readthedocs.io/en/latest/setup.html?highlight=migrate#migrating-from-paperless
Migrating from paperless-ng to paperless-ngx: https://paperless-ngx.readthedocs.io/en/latest/setup.html?highlight=migrate#migrating-from-paperless-ng
I will add those links to the first post. Please do a full backup first!
  7. T0a

    Paperless-ng

    I have already prepared a paperless-ngx Unraid template, which is waiting to be accepted. The two templates paperless-ng and paperless-ngx will then coexist for a while, so everyone can decide whether and when they want to migrate. Until it is clear how paperless-ngx develops with its new dev community, my recommendation is still to rely on paperless-ng for a production system. Jonas has not commented on the status of paperless-ng and has been absent for months. It is possible that he will return at some point and development will continue. Update: The paperless-ngx template is now available.
  8. Overview: Dedicated support thread for the Docker template paperless-ngx provided via the selfhosters/unRAID-CA-templates repository.
Project Page: https://github.com/paperless-ngx/paperless-ngx
Demo: https://demo.paperless-ngx.com/
Documentation: https://docs.paperless-ngx.com
Registry: https://github.com/paperless-ngx/paperless-ngx/pkgs/container/paperless-ngx
Changelog: https://docs.paperless-ngx.com/changelog/
This is the official paperless-ngx Docker support thread. Feel free to ask questions, share your experience with paperless-ngx, or describe your paperless setup at home. I try to update this main post regularly based on your feedback. From here on, I will use the terms paperless and paperless-ngx interchangeably.
1. What is paperless-ngx and how does it differ from paperless-ng?
Paperless-ngx forked from paperless-ng to continue the great work and to distribute the responsibility of supporting and advancing the project among a team of people. The paperless-ng project hasn't received many updates and bug fixes in the past months, and even pull requests have not been merged for some time now. For now, the paperless-ng and paperless-ngx Unraid templates will coexist in the community application store. That allows existing users to keep relying on the mature paperless-ng for their production environment and switch to paperless-ngx once they feel comfortable. Consider joining us! Discussion of this transition can be found in issues #1599 and #1632.
2. How to Install
2.1 New Installation
Download and install a Redis container from the community application store (CA).
Download and configure the paperless-ngx container from the CA. Make sure you point the container to your Redis instance. Use your actual IP and not localhost, because the reference is resolved inside the container. In case you need to pass a password to Redis, use the connection string redis://:[PASSWORD]@[IP]:6379 instead.
At the moment Redis doesn't support users and only provides authentication against a global password. You can pass anything as a username, including the empty string as in my example here. To configure a password for your Redis container, set 'redis-server --requirepass "your-secret"' as post arguments on the Redis docker container. Also make sure not to use any special characters; otherwise, the connection string might not be readable by paperless.
Create a user account after the container is created, e.g. from Unraid's Docker UI: click the paperless-ngx icon, choose Console, enter the command "python manage.py createsuperuser" in the prompt, and follow the instructions. Alternatively, set 'PAPERLESS_ADMIN_USER' and 'PAPERLESS_ADMIN_PASSWORD' in your paperless-ngx docker template (docs). With the latter approach, keep in mind that the password protecting your sensitive documents is easier for others to find.
2.2 Migration from paperless-ng
Paperless-ngx is meant to be a drop-in replacement for paperless-ng, and thus upgrading should be trivial for most users, especially when using docker. However, as with any major change, it is recommended to take a full backup first!
Migrating from paperless to paperless-ngx: https://docs.paperless-ngx.com/setup/#migrating-from-paperless
Migrating from paperless-ng to paperless-ngx: https://docs.paperless-ngx.com/setup/#migrating-from-paperless-ng
Migrating from paperless to paperless-ng: https://paperless-ng.readthedocs.io/en/latest/setup.html?highlight=migrate#migration-to-paperless-ng
3. My personal paperless workflow
I use the iOS app ScannerPro to scan my documents and upload them via the app to a WebDAV target on my Unraid server. The WebDAV target is mounted in the container as the consume directory. I use the pre and post hooks to execute webhooks in order to check via Home Assistant whether processing failed for an uploaded document. Home Assistant then sends notifications about the import status to my phone.
This way I can throw away the physical document without worrying about it not being imported. What does your workflow look like? Feel free to share it in this thread. Here you can also find the officially recommended workflow for managing your documents with paperless-ngx.
4. FAQ
4.1 Why does the consumer not pick up my files?
The consumer service uses inotify to detect new documents in the consume folder. This subsystem, however, does not support NFS shares. You can disable inotify and use a time-based polling mechanism instead (check out the 'PAPERLESS_CONSUMER_POLLING' variable; if set to a value n greater than 0, inotify is disabled and the directory is polled every n seconds).
4.2 How to customize paperless-ngx?
Paperless-ngx supports many more environment variables than the Unraid template initially offers. You can find them in the documentation here. Make sure to have a proper backup before playing around with the environment variables.
4.3 What scanner do you use for paperless-ngx at home?
A list of scanners used by our community: iPhone with ScannerPro app; one-time purchase (@T0a). More will be added when you share your scanner. Paperless-ngx also maintains a list of recommended scanners. Feel free to open a pull request over there to add your recommended scanner to the documentation too.
4.4 Can I use paperless-ngx on a mobile device?
Mobile support in paperless-ngx is almost there; some layouts don't work yet on small screens. There is also a mobile app in a pretty early development stage, though it is currently only available in the Android store.
4.5 What is the future of the original paperless-ng template in Unraid?
At some point, I will probably remove the paperless-ng template and close its support thread.
4.6 How to configure PostgreSQL as a database?
See this post on how to configure PostgreSQL in the template. The official documentation gives the further migration steps needed.
4.7 When running two instances of paperless, I cannot stay logged in to both
That is because both instances share the same session identifier (cookie). Apply the environment variable PAPERLESS_COOKIE_PREFIX to one of them (documentation).
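The special-character caveat in section 2.1 can also be sidestepped by percent-encoding the password before building the connection string. A minimal sketch, assuming python3 is available on the host; the IP 192.168.1.10 and the password 'p@ss/word' are placeholders, and the resulting string would go into the template's Redis setting (PAPERLESS_REDIS):

```shell
# Sketch (not part of paperless-ngx itself): percent-encode a Redis password
# so characters like '@' or '/' do not break URL parsing.
PASSWORD='p@ss/word'   # placeholder password with special characters
ENCODED=$(python3 -c 'import sys, urllib.parse; print(urllib.parse.quote(sys.argv[1], safe=""))' "$PASSWORD")
REDIS_URL="redis://:${ENCODED}@192.168.1.10:6379"
echo "$REDIS_URL"   # redis://:p%40ss%2Fword@192.168.1.10:6379
```

Remember that the plain (unencoded) password is still what you pass to 'redis-server --requirepass'.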
  9. I am aware of the fork and involved as well. Once the first dust has settled and the fork is moving forward in a good, healthy way, I will offer it to the Unraid community, possibly as a separate container though. Right now, they are organizing the project and are still in the migration phase. Let's give them time and let the fork mature. Until then, I would recommend not exposing paperless-ng to the Internet directly, because security-related fixes in third-party dependencies are not merged anymore. You should not expose it anyway, since it is not hardened in any way!
  10. I can recommend Borg, or rather Borgmatic, as a backup option for the Hetzner Storagebox. Have a look at this article, for example: https://dominicpratt.de/hetzner-storagebox-mit-borgmatic/ There is also a Docker container for Borgmatic in the Community Applications. I use this setup very successfully with my Storagebox and Unraid server.
  11. Sorry for bringing up this old thread. Is there a reason why the apps from smdion's repository are still available via CA? The repository has appeared deprecated for a while now, and the two remaining templates are no longer up to date. For example, 'cadvisor' still pulls its images from the deprecated Docker Hub provider. For the latter, I created a pull request to the selfhosters template repository. @Squid May I suggest removing smdion's 'cadvisor' template from CA or blacklisting his repository? Let me know what you think.
  12. I would like to add authentication for the node-exporter and the Prometheus metrics endpoint. What I did so far:
scrape_configs:
  - job_name: 'prometheus'
    scrape_interval: 15s
    static_configs:
      - targets: ["localhost:9090"]
      - targets: ["<UNRAID_IP>:9100"]
    basic_auth:
      username: myuser
      password: mypassword
I then updated the data source with the basic auth option in Grafana, given the credentials listed above. However, it seems like Grafana does not receive any metrics after that. Do I have to add the basic auth credentials to the node-exporter too? If so, how can I pass the web.yml with the field basic_auth_users and the bcrypted password to the node-exporter plugin? Follow-up question: How can I pass a web.yml to the Prometheus container configuring basic authentication for connections to the Prometheus expression browser and HTTP API (reference)? Would love to see an "Authentication" section in one of the entry posts.
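For reference, the web.yml mentioned above follows the Prometheus exporter-toolkit web configuration format. A hedged sketch; the user name and hash are placeholders, and this assumes a node-exporter/Prometheus build recent enough to accept --web.config.file:

```yaml
# web.yml - passed via --web.config.file=/path/to/web.yml
# The value is a bcrypt hash of the password; generate one with e.g.:
#   htpasswd -nBC 10 "" | tr -d ':\n'
basic_auth_users:
  myuser: $2y$10$REPLACE_WITH_BCRYPT_HASH
```

Note that the scrape config's basic_auth block keeps the plaintext password, while web.yml on the scraped endpoint holds only the bcrypt hash of that same password.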
  13. Ah, thanks for the heads-up! I will try it and proceed with caution. Do you know whether the problems with the 2.x image are known to the company? Hopefully, they will stabilize the image then.
  14. Hi @ich777 👋, I just installed your CheckMK Raw container and noticed the version provided is 1.x. Any chance to get the container updated, or to get another container in CA with version 2.x? Is there a reason you ship your container with version 1.x? Thank you very much in advance.
  15. How is this related to cdr/code-server? Is the gitpod-io implementation superior in any regard?
  16. An alternative would be to choose a board without IPMI and use PIKVM instead.
  17. In Heimdall, go to Users and then try the link in the column 'app.apps.autologin_url'. Using that link should log your user in automatically without asking for a password. Note: anybody acquiring that link is able to gain admin privileges with it! It is like handing out your password. Hope that helps.
  18. @mgutt Thank you for the hint! I added 'powertop --auto-tune' a while ago to my Go file, but couldn't find the time to measure, analyze, and give feedback. Here we go now:
Introduction: The system contains all parts listed in 1 with 1.1 as an upgrade. The Samsung 860 EVO M.2 with its controller is marked as a passthrough device. For the measurement, no device is connected to the Inatek KT4006, no other USB devices are attached to the host, and no display is plugged in. The iGPU is blacklisted since it is used for VM passthrough. I use VMs as my daily driver with an external display connected. I need the Inatek KT4006 because I couldn't get my audio interface, the Presonos AudioBox USB, working reliably otherwise.
BIOS Settings: The mainboard does not come with many power-saving settings in the BIOS. I have disabled audio, enabled Intel Speed Shift, enabled C-States, and set the C-State Limit from AUTO to C10.
Boot parameters: kernel /bzimage append video=efifb:off,vesafb:off modprobe.blacklist=i2c_i801,i2c_smbus,snd_hda_intel,snd_hda_codec_hdmi,i915,drm,drm_kms_helper,i2c_algo_bit initrd=/bzroot
Measurements:
1. Unraid idle, all disks spun up, powertop --auto-tune: 26W
2. Unraid idle, only cache spun up, powertop --auto-tune: 19.6W
3. Unraid idle, only cache spun up, powertop --auto-tune, Ubuntu VM idle: 19.9W with peaks up to 21W
4. Unraid idle, only cache spun up, powertop --auto-tune, Ubuntu VM shut down: 18W
My initial measurement was 15W without the Inatek KT4006 and Samsung 860 EVO M.2, using an i3-8100 and no powertop optimizations. Considering that I've added two new devices, an increase of 3W (with powertop optimization included) seems reasonable. However, I wonder why the hardware is not capable of reaching lower C-states. Maybe the USB add-on card prevents lower states.
  19. I can highly recommend that product, the software, and the developer behind it. Maxim is a great guy and a hell of a developer and engineer. Thanks for sharing here!
  20. No, no further parameters besides the post arguments given above. I tried the updated command and can now map all volumes and containers. Thank you very much!
  21. I have now looked into the case in more detail. The volume that cannot be mapped belongs to Redis. You can read about the reason for the file here.
2021-08-28 12:46:51.018237367 +0200 /var/lib/docker/volumes/23b5360adabd40ddfef437d4e5702c7e07af7f578b3b2a538378c2ea8e61a964/_data/dump.rdb
========================================================================================================
Redis /var/lib/docker/(btrfs|overlay2)/.../57b5a1df778859ae545e595b285381e7f585eb547a6f731016c832cbd4072ce8
db390ed2556d /var/lib/docker/containers/db390ed2556d6cbdb41f7400615ece4cd5bf749a4309edeb67544c06efedb9ee
I start the Redis container (from jj9987) with 'redis-server --requirepass "secret"' as post arguments. The container is required by paperless-ng. I am currently checking the Redis documentation to see whether and how the interval for the snapshot feature can be adjusted. In the default configuration, the file is written every 30 seconds. For paperless-ng it is, IMHO, not necessary to persist data in Redis permanently.
Edit: This command disables the feature: redis-server --requirepass "secret" --save ''
You will probably also have to delete the file "dump.rdb"; otherwise Redis loads stale data on every start. I am not yet aware of the implications for paperless-ng, though. Via "--save" you can also configure the frequency. For example, "--save 60 1000" saves the dataset every 60 seconds if at least 1000 keys have changed. According to this article, the snapshot is only taken that frequently when Redis processes a lot of data. Currently, however, I am not actively processing any data through paperless-ng. I suspect the scheduling of the worker tasks is the cause.
  22. Seems like newer versions of paperless-ng rely on Python 3.9 now. Please try 'python manage.py createsuperuser'. I will adapt the first post accordingly. @jseeman Seems like it. I already created a pull request to make paperless-ng work with the new Gotenberg 7 API. I am waiting for the maintainers to review and merge, and will let you know once it is available.
  23. Hi @mgutt, thank you very much for your detailed analysis and the steps to reduce write operations. Unfortunately, there are a few containers that I cannot identify with your commands. In the attached example I only find one container. I am using BTRFS.
$ find /var/lib/docker -type f -print0 | xargs -0 stat --format '%Y :%y %n' | sort -nr | cut -d: -f2- | head -n30
2021-08-26 14:22:10.173941747 +0200 /var/lib/docker/containers/b08e6e2d18fbb8ac7743f05c9587936cf216d07c02fb077bf36ec6ec666ddfa4/b08e6e2d18fbb8ac7743f05c9587936cf216d07c02fb077bf36ec6ec666ddfa4-json.log
2021-08-26 14:22:10.075941360 +0200 /var/lib/docker/volumes/11ad4a2d8c58a5baf36528dafe15c4d71b3b47db02676219ec4a35c4247131fc/_data/dump.rdb
2021-08-26 14:20:43.434600397 +0200 /var/lib/docker/containers/3a619f64c1353174cecf913582ffb3360a3d972521640c39d17e924f99e88d62/3a619f64c1353174cecf913582ffb3360a3d972521640c39d17e924f99e88d62-json.log
2021-08-26 14:05:40.423066270 +0200 /var/lib/docker/containers/1de8d6e23f5a1cb3c1cfe497156c557deb3c1b187d7916ab71eb2116885cd685/1de8d6e23f5a1cb3c1cfe497156c557deb3c1b187d7916ab71eb2116885cd685-json.log
2021-08-26 14:05:39.344062070 +0200 /var/lib/docker/btrfs/subvolumes/2c44fb572165fdf52a19fc518924de003750fcdbfc4074505a14cb8997477f1b/run/tomcat/tomcat9.pid
2021-08-26 14:05:31.841032871 +0200 /var/lib/docker/containerd/daemon/io.containerd.metadata.v1.bolt/meta.db
2021-08-26 14:05:31.786032657 +0200 /var/lib/docker/btrfs/subvolumes/2c44fb572165fdf52a19fc518924de003750fcdbfc4074505a14cb8997477f1b/etc/mysql/my.cnf
$ csv="CONTAINER ID;NAME;SUBVOLUME\n"; for f in /var/lib/docker/image/*/layerdb/mounts/*/mount-id; do sub=$(cat $f); id=$(dirname $f | xargs basename | cut -c 1-12); csv+="$id;" csv+=$(docker ps --format "{{.Names}}" -f "id=$id")";" csv+="/var/lib/docker/.../$sub\n"; done; echo -e $csv | column -t -s';'
CONTAINER ID  NAME             SUBVOLUME
0837668ab9aa  ha-dockermon     /var/lib/docker/.../c453891a316ec4eb019e3f422822522ab02eab5206d8057fb683148f9f9ee73e
1de8d6e23f5a  ApacheGuacamole  /var/lib/docker/.../2c44fb572165fdf52a19fc518924de003750fcdbfc4074505a14cb8997477f1b
327b99639003  mediabox-webdav  /var/lib/docker/.../060bacaf2b0a5ba9da609170d23f3aa784c740013a868a7144109b5074679e8d
3332bcd89455                   /var/lib/docker/.../9b2cfb3fa23e8f6e223b3e706663fcbbeebf84b8597d706c8ce23f643fc42274
3a619f64c135  paperless-ng     /var/lib/docker/.../1c3de7a697750b83951615239d40a196fdfe8a3c1f14dd987ee39b28a72ee6f3
658a3cdbc4ec  pyload           /var/lib/docker/.../9ee162566420a1490a21a1365665b9d72bd0282f89a384950867d0142cee4cb1
85d30bcb7ebc  PhotoPrism       /var/lib/docker/.../afe5ba079e28dafb855219569836437f013add1ea5f4ae77972c78ac72d42c04
a486d6c7fd29  syncthing        /var/lib/docker/.../42ec5e12317b47750f9c6776952678028b20416bbe868f5e152deb10bc239b56
abfd165b7c2d  cyberchef        /var/lib/docker/.../8437459cd2b973a976954aaa87bbe82be907bf48c106cafb2b3ea540cc460b6f
b08e6e2d18fb  Redis            /var/lib/docker/.../230a683c4c170076532d72dd3c44fe5864ffafc3b354c3936831f86d239ca22c
b6d9783d0efa  borgmatic        /var/lib/docker/.../00348b7c4c31d7a8c62c1d3f887db40be599d7bce20a121afd9a07c532d6c98d
c0e15cf5cfee                   /var/lib/docker/.../202335a2b00810c0292518af7cf2985341e3953e6c5cbc86f206a8bff52a9c70
e6027b587016                   /var/lib/docker/.../65e4ae75354b4593e84f8555444e9168420984d11d76365ee1ad06b6dcc4acfb
$ for f in /var/lib/docker/image/btrfs/layerdb/mounts/*/mount-id; do echo $(dirname $f | xargs basename | cut -c 1-12)' (Container-ID) > '$(cat $f)' (BTRFS subvolume ID)'; done
0837668ab9aa (Container-ID) > c453891a316ec4eb019e3f422822522ab02eab5206d8057fb683148f9f9ee73e (BTRFS subvolume ID)
1de8d6e23f5a (Container-ID) > 2c44fb572165fdf52a19fc518924de003750fcdbfc4074505a14cb8997477f1b (BTRFS subvolume ID)
327b99639003 (Container-ID) > 060bacaf2b0a5ba9da609170d23f3aa784c740013a868a7144109b5074679e8d (BTRFS subvolume ID)
3332bcd89455 (Container-ID) > 9b2cfb3fa23e8f6e223b3e706663fcbbeebf84b8597d706c8ce23f643fc42274 (BTRFS subvolume ID)
3a619f64c135 (Container-ID) > 1c3de7a697750b83951615239d40a196fdfe8a3c1f14dd987ee39b28a72ee6f3 (BTRFS subvolume ID)
658a3cdbc4ec (Container-ID) > 9ee162566420a1490a21a1365665b9d72bd0282f89a384950867d0142cee4cb1 (BTRFS subvolume ID)
85d30bcb7ebc (Container-ID) > afe5ba079e28dafb855219569836437f013add1ea5f4ae77972c78ac72d42c04 (BTRFS subvolume ID)
a486d6c7fd29 (Container-ID) > 42ec5e12317b47750f9c6776952678028b20416bbe868f5e152deb10bc239b56 (BTRFS subvolume ID)
abfd165b7c2d (Container-ID) > 8437459cd2b973a976954aaa87bbe82be907bf48c106cafb2b3ea540cc460b6f (BTRFS subvolume ID)
b08e6e2d18fb (Container-ID) > 230a683c4c170076532d72dd3c44fe5864ffafc3b354c3936831f86d239ca22c (BTRFS subvolume ID)
b6d9783d0efa (Container-ID) > 00348b7c4c31d7a8c62c1d3f887db40be599d7bce20a121afd9a07c532d6c98d (BTRFS subvolume ID)
c0e15cf5cfee (Container-ID) > 202335a2b00810c0292518af7cf2985341e3953e6c5cbc86f206a8bff52a9c70 (BTRFS subvolume ID)
e6027b587016 (Container-ID) > 65e4ae75354b4593e84f8555444e9168420984d11d76365ee1ad06b6dcc4acfb (BTRFS subvolume ID)
PS: Could it be that you forgot to add the command for BTRFS to the guide?
  24. I switched from the 'docker.img' to a docker bind-mount directory on a separate docker share and restored all of my docker containers via this plugin for the first time. Great experience; this feature works like a charm! After restoring and checking on the containers, the docker tab still reported 'Update available' for some of them. I then clicked "check for updates" and everything was reported as 'up to date'. It seems the plugin does not refresh the version status of the containers after restoring. Maybe this is intended behavior; it was not a problem for me. Just wanted to let you know and take the chance to say thank you!