T0rqueWr3nch

Members
  • Posts: 88
  • URL: engineerworkshop.com
  • Personal Text: The Engineer's Workshop: https://engineerworkshop.com

  1. You can keep drilling down:
         du -sh /mnt/cache/system/*
         du -sh /mnt/cache/appdata/*
     "Normal" is subjective, but a 70 GB jump (probably less than that in reality) is significant. Did you install any new Docker containers? The most likely culprit is a misconfigured Docker container (missing a mount, whether on your end or the image maintainer's) filling up the Docker vDisk, excessive logging, or both.
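     If you want the largest offenders surfaced first, a rough sketch (sort -rh needs GNU coreutils; the docker.img path is the Unraid default, so adjust to your setup):
         # Largest directories first across both locations:
         du -sh /mnt/cache/system/* /mnt/cache/appdata/* 2>/dev/null | sort -rh | head -20
         # Then the Docker vDisk itself and per-container log usage:
         ls -lh /mnt/cache/system/docker/docker.img
         du -sh /var/lib/docker/containers/*/ 2>/dev/null | sort -rh | head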
  2. So Docker is down now? That's a different issue. Turn off Docker under Settings (and apply). Reboot Unraid. Make sure the unassigned disk has mounted. Re-enable Docker.
  3. @jlh, you know, it's funny: I use podman all the time at work (hence the request for rootless Docker), but I never considered that there might be a preexisting podman package for Slackware. It turns out there is; I've attached the feature request for reference. What's nice is that by using podman, we could sidestep having to migrate over to rootless Docker entirely. Those who wish to run Docker as root by default can continue to do so; those who want a bit more security can switch to podman. It even supports namespace mapping.
  4. This is a good recommendation. I never considered that a podman package might actually be available for Slackware: SlackBuilds.org - podman. It even supports uid mapping. This would also avoid any conflict with switching over to rootless Docker, since we would no longer have to. Those who wish to keep running Docker as root can continue to do so without breaking backward compatibility; those who want a little more security could switch over to podman. With some relatively light development, the Community Apps web GUI frontend could even be configured to allow the optional use of podman (in the future; not necessary for this request).
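     To make the uid-mapping point concrete, a hedged sketch assuming the SlackBuilds podman package is installed (the image and names are just examples):
         # Rootless container; podman maps container UIDs into your subordinate range:
         podman run --rm -d --name web -p 8080:80 nginx
         # Show the host UID vs. the in-container UID for each process:
         podman top web huser,user
         # Or keep bind-mounted files owned by your own UID inside the container:
         podman run --rm -it --userns=keep-id -v "$PWD":/data alpine ls -ln /data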
  5. Sorry, I realized I missed your main question. Have you run a trim recently? Try that first:
         fstrim -va
     It's also possible (for the reasons in my previous post) that you have deleted files being held open by a process. These files are seen by df but not by du. There are a couple of commands you can try to figure out which:
         lsof +aL1 /unassigned/ud-docker
         lsof -n | grep -i deleted
     Of course, you can also just try restarting Unraid if you haven't already.
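     Once lsof names the owning process, a hedged sketch for reclaiming the space without a full reboot (the PID and FD below are placeholders taken from lsof's output):
         # Restarting the owning service closes the deleted file and frees the space;
         # truncating it via /proc also works and avoids the restart:
         : > /proc/1234/fd/7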
  6. rsync can work over SSH to the second server (borgmatic can as well). If you're going to buy two hard drives, I'd probably just stick with Unraid for the second server in case you need to expand. The additional cost of a basic Unraid license is marginal, and it's for a good cause! With this kind of redundancy, do you have plans for offsite (the 3-2-1 principle)?
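     A rough sketch of the rsync-over-SSH approach (the user, host, and share paths are placeholders):
         # Dry-run first to see what would transfer:
         rsync -avhn --delete -e ssh /mnt/user/backups/ backup@tower2:/mnt/user/backups/
         # Then run it for real:
         rsync -avh --delete -e ssh /mnt/user/backups/ backup@tower2:/mnt/user/backups/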
  7. I don't want to take away business from Unraid, and I have never used the Unraid My Servers (which I think is now Unraid Connect?) functionality, so if there are reasons to recommend that route, I hope others will chime in here. In the meantime, I'll give you some keywords to research. How big is the data set we're talking about here? Enough to fit on a single drive? If so, as a beginner, I'd recommend rsync with the Unassigned Devices plugin. Another option, and possibly the best one to use long-term, is borgmatic, but the configuration may be a bit intimidating (especially if you're new), so you don't have to start there.
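     If you do eventually go the borgmatic route, a very rough sketch (borgmatic's CLI has changed across versions, so treat these as approximate and check borgmatic --help for yours):
         generate-borgmatic-config              # writes a commented starter config
         borgmatic init --encryption repokey    # one-time repository setup
         borgmatic --verbosity 1                # run a backup per the config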
  8. The Unraid dashboard reports based on "df -H". It's better for monitoring whether you're going to run out of space; as such, the more useful column there is the "Free" column. I don't use Krusader, but I bet it's using du to calculate disk usage. The two use different techniques: df reports what the file system says it has; du loops over the directories and adds up the file sizes. Because of this, df is fast and du is slow (you might notice this when you use Krusader to calculate disk utilization). As for why they differ: du doesn't run on the whole file system; if the process running du doesn't have sufficient permissions, it can't see everything (since it's manually looping over the directories); and if files are deleted but still held open by another process (which happens a lot with log files), du can't see them but df "can" (in the sense that they're still reported by the file system). Keep in mind we're also looking at GB vs GiB here. Oversimplified tl;dr: df (and the Unraid stats page) is good for telling you whether you're going to run out of room (free disk); du is better for telling you what is using that storage.
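     The unit difference alone is easy to see side by side (/mnt/cache is just an example path):
         df -H /mnt/cache    # powers of 1000 (GB): what the Unraid dashboard shows
         df -h /mnt/cache    # powers of 1024 (GiB): what most tools report
         du -sh /mnt/cache   # walks the tree: slow, and blind to deleted-but-open files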
  9. Another option, albeit a clunky one, is to not use Docker in Unraid and instead run a VM where you can run rootless Docker with user namespace mapping; just let Unraid be a NAS. This would probably be overkill for most containers (and users), and you'd lose some of the advantages of running Docker locally on Unraid. (I still think rootless Docker in Unraid deserves further consideration; as previously stated, sure, there are disadvantages, but there's a reason it's an industry standard.) @sir_storealot, consider your threat model. Are you most concerned about ransomware or data exfiltration? What are the potential attack vectors an attacker could use against you? Do you have SMB shares with guest access and no passwords? Is your Unraid server exposed to the internet? Are you running containers with an elevated risk of attack? (If so, don't run sketchy images.) I only mention these things because I think you should always be concerned about security, but I don't think you necessarily need to worry. The higher-likelihood risks to your data on Unraid come more from things external to Unraid (such as encryption of network shares, or people exposing their Unraid web GUI to the internet) than from Unraid itself. That being said, I'm glad this vulnerability is being fixed. Thanks, everyone.
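     For the VM route at the top of that post, a hedged sketch assuming a Debian/Ubuntu guest (the package names and setup script come from Docker's rootless install path):
         curl -fsSL https://get.docker.com | sh                 # install the engine
         sudo apt-get install -y uidmap docker-ce-rootless-extras
         dockerd-rootless-setuptool.sh install                  # per-user, non-root daemon
         export DOCKER_HOST=unix:///run/user/$(id -u)/docker.sock
         docker run --rm hello-world                            # no root daemon involved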
  10. Hi @sir_storealot, I know your question wasn't directed at me, but I imagine the biggest problem is that Unraid is built on Slackware. Great for making a lightweight server; not as much fun from a sysadmin/distro-maintainer perspective, as it's a barren Linux distribution where almost everything has to be built by hand. There is the --user option, which can be passed in the Extra Parameters section of the container template. It can be used to force the container processes to run as a non-root user (some containers actually do this in their Dockerfile; I believe many/most Linuxserver images do, hence their special PUID/PGID options, which make them unique). The problem with --user is that it forces the processes inside the container to run under this UID (i.e., this isn't user namespace mapping), which doesn't work with all Docker images (think things like postgresql, which requires root inside the container, IIRC). I did a very cursory overview to scope out what it would take to pull this request off (rootless Docker), and I think it's achievable, especially given that Slackware has more features than I expected (it looks like we do have uid mapping, which was quite a pleasant surprise). If Unraid is looking for a part-time dev/DevOps/security admin, just let me know! 😆
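      To make the --user behavior concrete, a sketch shown as a plain docker run (99:100 is the nobody:users pairing Unraid conventionally uses, i.e. the same IDs behind PUID/PGID):
          docker run --rm --user 99:100 alpine id
          # -> uid=99 gid=100(users): every process in the container runs as 99:100,
          #    so images whose entrypoints genuinely need root will fail to start.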
  11. On one hand, it's good that the output reveals nothing, which is probably to be expected since you currently aren't running out of memory... on the other hand, we're now in an ambiguous state as to whether you're still compromised, since persistence is always a concern. Good that you've never exposed SSH. And you've never exposed your Unraid web GUI to the internet, correct? What are the other port forwards for? The logs show this happened on the 28th: did you have anything (Docker containers, plugins, etc.) then that you don't have now?
  12. Run the following command and give us the output:
          ps -auxf | grep -v grep | grep -i xmrig
      This is what Fix Common Problems is looking for. Kudos to @Squid for thinking to include this. We need to go into damage-control mode and figure out whether they've established persistence and how. Did you ever expose your Unraid server to the internet? Ever port forwarded SSH?
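      Some quick persistence checks in the meantime (paths assume a stock Unraid/Slackware install; anything unexpected in these is worth posting):
          cat /boot/config/go                  # user commands run at every boot
          ls -la /var/spool/cron/crontabs/     # per-user crontabs on Slackware
          ps auxf                              # eyeball the full process tree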
  13. I assume you were forwarding port 22 on your router? You've turned that off now, correct? (I think that's what you're saying, but I want to double-check.) The line that was most concerning to me was this one:
          Jan 13 15:44:30 sshd[28347]: Connection closed by authenticating user root 170.64.214.0 port 46188 [preauth]
      But the preauth tag makes me think this SSH connection was never fully established and did not gain access to your system. You didn't have any other ports open on your router, did you? If you want reassurance, I think you're most likely okay. Complex passwords help a lot. Logs don't capture everything, especially outbound traffic, but I don't want to worry you needlessly, so we'll leave it at that.
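      If you're curious how hard the port was being hammered, a sketch for tallying attempts by source IP (the log path and the awk field position assume Unraid's syslog and the exact message format above):
          grep "Connection closed by authenticating user" /var/log/syslog \
            | awk '{print $(NF-3)}' | sort | uniq -c | sort -rn | head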
  14. Not a stupid question at all; in fact, it's a great one. In keeping with the Docker philosophy, it's best to think of things as "services", where your Wordpress container and all of its supporting containers (i.e. your database) constitute one service. This is clearer if you're using docker compose, where bringing up Wordpress brings up the Wordpress app container itself plus its associated db container. (By the way, I recommend docker compose, even on Unraid.) Best practice is subjective, but I recommend a separate db per instance: it makes db backups and db container management easier. This configuration also gives you another advantage: with docker compose, the db container doesn't need external network access at all and can communicate with the Wordpress app over an internal Docker network defined locally within the compose file. The Docker network provides DNS resolution. By the way, instead of Wordpress, have you considered Ghost? Far fewer security issues.
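      A minimal sketch of that layout, written as a shell heredoc so it's copy-pasteable (image names and credentials are placeholders; note the closing EOF must start its line unindented, and YAML tolerates the uniform indentation the heredoc writes into the file):
          mkdir -p ~/wordpress && cd ~/wordpress
          cat > docker-compose.yml <<'EOF'
          services:
            wordpress:
              image: wordpress
              ports:
                - "8080:80"
              environment:
                WORDPRESS_DB_HOST: db          # resolved by Docker's internal DNS
                WORDPRESS_DB_USER: wp
                WORDPRESS_DB_PASSWORD: changeme
                WORDPRESS_DB_NAME: wordpress
              networks: [frontend, backend]
            db:
              image: mariadb
              environment:
                MARIADB_DATABASE: wordpress
                MARIADB_USER: wp
                MARIADB_PASSWORD: changeme
                MARIADB_RANDOM_ROOT_PASSWORD: "1"
              networks: [backend]              # the db only sees the internal network
          networks:
            frontend:
            backend:
              internal: true                   # no outside access on this network
EOF
          docker compose up -d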