sir_storealot

Everything posted by sir_storealot

  1. Just one strange thing that I noticed: my backup reported an error due to the missing docker path (as described above), but the post-run script still got called with TRUE as the 3rd argument, indicating a successful backup. Isn't that odd? (A minimal sketch of how I check that flag is below.)
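     For reference, a minimal sketch of such a post-run script — assuming only what is described above, namely that the 3rd argument carries the success flag; the exact spelling of the value is from my own logs:

         #!/bin/bash
         # Appdata Backup post-run script -- minimal sketch.
         # Per this thread, the 3rd argument signals overall success.
         if [ "$3" = "true" ] || [ "$3" = "TRUE" ]; then
             echo "backup reported success"
         else
             echo "backup reported FAILURE" >&2
             # hook in a notification or healthcheck ping here
         fi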
  2. You are missing the point, this is not about backing up "valuable data". It's about getting rid of the error message (i.e. ignoring the volumes) and allowing the backup to finish without throwing an error. The volumes ARE disposable (at least in my case). Well... just because you cannot think of a use case doesn't mean it does not exist for others. You can read up on the general advantages here: https://docs.docker.com/storage/volumes/
  3. Ah interesting, I'm facing the same issue. Would be nice if volume mounts (mappings with no leading slash) were ignored. Not true at all. Volume mappings work fine with the docker compose plugin, and in fact once created (with the plugin or the command line), also with the vanilla Unraid interface. Volumes are the preferred Docker way of storage, and they are very useful on Unraid in certain use cases. (See the sketch below for the difference between the two mapping styles.)
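     To illustrate the distinction (a minimal sketch; the container name "demo" and the paths are hypothetical):

         #!/bin/bash
         # "mydata" (no leading slash) is a named volume, managed by
         # Docker under /var/lib/docker/volumes/; the /mnt/... path
         # is a bind mount onto the host filesystem.
         docker volume create mydata
         docker run -d --name demo \
           -v mydata:/config \
           -v /mnt/user/appdata/demo:/data \
           alpine sleep infinity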
  4. I totally feel you. Hope this will get some good answers on the Unraid side, would be interested myself. However, as far as the IoT devices are concerned, did you think about assigning them to their own Wi-Fi network with client isolation and only NAT'd outside access (kind of like a DMZ)?
  5. Awesome, thank you so much! Makes my scripts a lot cleaner!
  6. @SirCadian glad it is working fine for you! You actually inspired me to do a little experiment regarding deduplication efficiency of the Appdata Backup output.
     Test scenario: Appdata Backup full flash + VM meta (1 VM) + docker config backup (13 containers), AB compression disabled. This is then backed up to an external repository using (a) restic and (b) kopia to compare the two, including best-speed deflate compression.
     AB backup 1 taken at t0 (source dir size 1552 MB)
     AB backup 2 taken at t1 = t0 + 24h (source dir size 1548 MB)
     Normal system/container usage during the 24h period, nothing special. Note the 2nd backup is slightly smaller; maybe some logs inside containers got deleted, a database got trimmed, or whatever.
     Results:
     Repo size (MB) after 1st backup: 1038 kopia / 988 restic
     Repo size (MB) after 2nd backup: 1291 kopia (+253 MB) / 1086 restic (+98 MB)
     Results, variant "untarred": I also did a test (with fresh repositories) untarring the appdata backup tars (and then deleting them), to see if backing up the raw files improves deduplication, and it did:
     kopia untarred size increase t0->t1: 154 MB (99 MB less)
     restic untarred size increase t0->t1: 75 MB (23 MB less)
     My takeaway: as long as the Appdata Backup compression is turned off, deduplication will work OK. For best deduplication, the files need to be untarred. Could be an interesting addition for the AB plugin to add an option to just copy files instead of tarring.
     Notes: flash backup was still compressed all the time, did not want to mess with this. (The commands I used are sketched below.)
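     For anyone wanting to reproduce this, the gist of the commands — a sketch, with hypothetical repository paths and source dir, passwords simplified for brevity:

         #!/bin/bash
         SRC=/mnt/user/backups/appdata_backup

         # restic (uses its own built-in zstd compression)
         export RESTIC_PASSWORD=changeme
         restic -r /mnt/disks/ext/restic-repo init
         restic -r /mnt/disks/ext/restic-repo backup "$SRC"

         # kopia, with best-speed deflate compression
         # (kopia prompts for a repository password on create/connect)
         kopia repository create filesystem --path /mnt/disks/ext/kopia-repo
         kopia policy set --global --compression=deflate-best-speed
         kopia snapshot create "$SRC"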
  7. You make some excellent points, and I agree 100% with your risk assessment. However, I believe I already tightened all the "other screws" you mention a long time ago, with docker containers from non-official sources (hard to judge what really is "reputable", it's always a grey area) now being the 2nd biggest exposure I can see, which is why I am so nervous about it. The biggest exposure in my eyes are Unraid plugins... but it's so hard to live without them. As always, a trade-off needs to be made between security and convenience... @jlh Thanks for your additional in-depth explanation. Really great to see you guys are on top of this!
  8. Thank you for your response! So if I understand you correctly, as a mitigation, or even a future best practice implemented on the user side, we could try to run the containers with the --user=nobody option and check if they still work? This way the attack surface could be reduced significantly; I would be quite happy with this approach. (Something like the sketch below.)
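     A minimal sketch of what I mean — image name and paths are hypothetical; on Unraid, nobody is uid 99 and the users group is gid 100, and numeric IDs are safer than the name in case the image has no "nobody" entry in /etc/passwd:

         #!/bin/bash
         # Run the container as nobody:users instead of root.
         docker run -d --name myservice --user 99:100 \
           -v /mnt/user/appdata/myservice:/config \
           some/image:latest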
  9. I don't get your example. For my backup container, I map the directories I need backed up as read-only volumes into the container (see the sketch below). I definitely would NOT want my backup container to have unrestricted r/w access to everything.
     Edit: on second thought, I think I understand what you mean; the backup user would have to have the right permissions to read the files that have been mapped into the container. Makes sense.
     According to the description, this attack also allows someone exploiting a vulnerability in a non-malicious container to escape the container, which is the much bigger concern for me (and probably for other users running various services processing external data) - see: https://github.com/opencontainers/runc/security/advisories/GHSA-xr7r-f8xq-vfvv
     "Strictly speaking, while attack 3a is the most severe from a CVSS perspective, attacks 2 and 3b are arguably more dangerous in practice because they allow for a breakout from inside a container as opposed to requiring a user execute a malicious image."
     This opens up the attack surface SIGNIFICANTLY in comparison to just malicious containers. I will try to manually reduce my container privileges as suggested, seems to be a good idea anyway. Thanks a lot for explaining the bigger implications!
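     For reference, how I map the sources read-only — names and paths are hypothetical; the :ro suffix makes the mount read-only inside the container, so the backup tool cannot modify the source:

         #!/bin/bash
         docker run -d --name backup \
           -v /mnt/user/documents:/data/documents:ro \
           -v /mnt/user/photos:/data/photos:ro \
           my/backup-image:latest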
  10. Not a Plex issue, sounds like filesystem corruption. Check here for more info: https://docs.unraid.net/legacy/FAQ/check-disk-filesystems/ (legacy docs link, but there does not seem to be a current version)
  11. Interesting link, thanks for sharing! I always assumed the best deduplication would be plain files, then an uncompressed tarball, but from the thread it seems it should not matter much. Good to know, as I am lazy and just use uncompressed tar, and let my backup tool handle dedup/compression. Please share your insights if you get around to making some actual first-hand tests of untarred vs. tarred!
  12. "One of the best practices while running Docker Container is to run processes with a non-root user." (dockerlabs) I think this incident raises the following questions. Are there any specific reasons why root user is required to run docker in Unraid? Are there any plans to change to a non-root user in the future? Is it possible to change the configuration by hand to run as non-root user? Any recommendations on how to best achieve this? Would appreciate an answer @limetech @SpencerJ Thank you!
  13. Shouldn't deduplicating backup tools handle this situation just fine using rolling hashes (e.g. Rabin-Karp)? As long as the file is not compressed I would not expect issues; did you actually test with compression=off? Maybe not the most elegant solution, but why not just use the post-run script to extract all the tars (to the AB backup dir, or to your backup dropdir, doesn't matter), then delete the original tars? (Rough sketch below.)
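     Rough sketch of such a post-run script — the assumption that the backup directory arrives as the 1st argument is hypothetical, check the plugin docs for the real arguments:

         #!/bin/bash
         BACKUP_DIR="$1"
         # Unpack every tar next to itself, then drop the tar so the
         # dedup tool only ever sees plain files.
         for tarfile in "$BACKUP_DIR"/*.tar*; do
             [ -e "$tarfile" ] || continue
             dest="${tarfile%%.tar*}"
             mkdir -p "$dest"
             tar -xf "$tarfile" -C "$dest" && rm -f "$tarfile"
         done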
  14. Awesome, thanks! It would be really nice to have a return code I could use for my alerting and healthchecks. Would it be too much to ask to include something like this at the end:

         if ($errorOccured) {
             exit(1);
         } else {
             exit(0);
         }

     Sorry, don't know much PHP, but you catch my drift I hope. 😅 I guess all the "goto end" stuff would need to be pointed there as well.
  15. Hi, the backup plugin is great, however I would like to trigger it externally and not use the built-in scheduler. Is this a safe/good way to do it?

         php /usr/local/emhttp/plugins/appdata.backup/scripts/backup.php > /dev/null 2>&1

     Or is there any better way?
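     In case it helps anyone, this is how I would wrap the call — a sketch, assuming the script one day returns a non-zero exit code on failure as requested above; the notify call is Unraid's stock notification script:

         #!/bin/bash
         php /usr/local/emhttp/plugins/appdata.backup/scripts/backup.php
         if [ $? -ne 0 ]; then
             /usr/local/emhttp/webGui/scripts/notify \
               -s "Appdata Backup failed" -i "alert"
         fi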
  16. Hi, not sure if you still need help, but try running:

         docker restart plex

     ...in the root shell window in Unraid. If this works, add it to a script and you're done.
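     A sketch of such a script — checking Plex's unauthenticated /identity endpoint first is my own idea, and the container name "plex" and port 32400 are assumptions, adjust to your setup:

         #!/bin/bash
         # Restart Plex only if its web interface stops answering.
         if ! curl -fs http://localhost:32400/identity > /dev/null; then
             docker restart plex
         fi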
  17. Same question. Any updates/experiences you guys can share?
  18. btrfs is good enough, unless you use RAID 5/6. So in the context of Unraid, imho it's totally fine and sufficient.
  19. I agree this should be standard. I am now converting my btrfs shares to subvolumes manually, unnecessary pain... (sketch of my approach below)
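     The manual conversion boils down to this — a sketch, share path hypothetical; do it with the share idle:

         #!/bin/bash
         SHARE=/mnt/cache/myshare
         mv "$SHARE" "$SHARE.old"
         btrfs subvolume create "$SHARE"
         # reflink copy is near-instant on btrfs and shares extents
         cp -a --reflink=always "$SHARE.old/." "$SHARE/"
         rm -rf "$SHARE.old"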
  20. Doing a search for "unbalance" in community apps yields two results from the same repository, one called "unBALANCE" last updated 2021, and one called "unbalanced" with a recent update. However, the old one says "popularity #14". Are these two versions of the same thing, or just something that needs cleaning up? Which one should I use?
  21. I was directed here by the "Fix Common Problems" plugin, and it is completely unclear to me why the included driver would be an issue, and what exactly this "community driver" is. Is this like a newer version? Or a hacked version of the original driver with improvements? I think the 1st post could benefit from some basic information explaining the situation.
  22. @EDACerton This plugin is simply amazing. Thank you so much, truly unbelievable how smoothly everything works! 🤩 I have a suggestion though: it would be nice to have a quick tailnet status card on the Unraid dashboard, showing connection status, exit node status, number of clients, etc. Of course, no idea how difficult that is, and it works perfectly without it, but my inner geek would really like this 😂
  23. Hi, here is what worked for me:
     - Go to your Unraid settings, then Tailscale. In the first tab you will see your account name, with "viewing" next to it.
     - Click on "viewing", then "sign in to confirm identity" (it could be that the client you are using needs to be connected to Tailscale too, but I believe it's not necessary).
     - Once you re-authenticate, you will be taken to a new settings page, where you can change exit node and routing stuff.
     There is no need to go into the CLI at all, you can do it all from the GUI. Hope this helps!
  24. Hi, thank you for your pointer! In fact, that happened to me a month or so ago, and caused exactly the issues you described. I have since resolved the situation, but I'm still seeing the same issue, so it makes me wonder if running out of disk space maybe "broke" something, leading to this error persisting even though disk space is not an issue anymore. Will try to investigate further in this area!
  25. I can't seem to solve this. Any idea how to find out what Deluge is doing?