T0rqueWr3nch

Everything posted by T0rqueWr3nch

  1. You can keep drilling down:
       du -sh /mnt/cache/system/*
       du -sh /mnt/cache/appdata/*
     "Normal" is subjective. For you to jump 70 gigs (though probably less than that in reality) is significant. Did you install any new Docker containers? The most likely culprit is a misconfigured Docker container (a mount missed either by you or by the image maintainer) filling up the Docker vDisk, excessive logging, or both.
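     For illustration (the paths are just examples), one way to keep drilling down is to size one directory level at a time and sort it:
       # size of each top-level directory on the cache, largest last
       du -h -d1 /mnt/cache 2>/dev/null | sort -h
       # then repeat one level deeper on whatever stands out
       du -h -d1 /mnt/cache/appdata 2>/dev/null | sort -h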
  2. So Docker is down now? That's a different issue. Turn off Docker under Settings (and apply). Reboot Unraid. Make sure the unassigned disk has mounted. Re-enable Docker.
  3. @jlh, you know, it's funny: I use podman all the time at work (hence the request for rootless Docker), but I never considered that there might be a preexisting podman package for Slackware. It turns out there is. I've attached the feature request for reference. What's nice is that by using podman, we could actually sidestep having to deal with migrating over to rootless Docker. Those who wish to run Docker as root by default can continue to do so; those who want a bit more security can switch to podman. It even supports namespace mapping.
  4. This is a good recommendation. I never considered that a podman package might actually be available for Slackware: SlackBuilds.org - podman. It even supports uid mapping. This would also avoid any conflict with switching over to rootless Docker, since we would no longer have to. Those who wish to keep running Docker as root can continue to do so without breaking backward compatibility; those who want a little more security could switch over to podman. With some relatively light development, the web GUI frontend of Community Apps could even be configured to allow the optional use of podman (in the future; not necessary for this request).
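      A rough sketch of what this could look like for an unprivileged user, assuming the SlackBuilds podman package and its dependencies are installed (the user and image names are just examples):
        # subordinate uid/gid ranges are what make rootless uid mapping work
        grep myuser /etc/subuid /etc/subgid
        # run a container without root; host uids are mapped into the container's user namespace
        podman run --rm -d --name web -p 8080:80 docker.io/library/nginx:alpine
        # compare the host uid with the uid seen inside the container
        podman top web huser user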
  5. Sorry, I realized I missed your main question. Have you run a trim recently? Try that first:
       fstrim -va
     It's also possible (for the reasons in my previous post) that you have deleted files being held open by a process. These files are seen by df but not by du. There are a couple of commands you can try here to figure out which:
       lsof +aL1 /unassigned/ud-docker
       lsof -n | grep -i deleted
     Of course, you can also just try restarting Unraid if you haven't already.
  6. rsync can work over SSH to the second server (borgmatic can as well). If you're going to buy two hard drives, I'd probably just stick with Unraid for the second server in case you need to expand. The additional cost of a basic Unraid license is marginal and for a good cause! With this kind of redundancy, do you have plans for offsite (3-2-1 principle)?
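     As a rough example of the rsync-over-SSH route (hostnames and paths are placeholders), a one-liner like this in a User Script or cron job is usually all it takes:
       # mirror a share to the second server over SSH; -a preserves attributes, --delete mirrors deletions
       rsync -avh --delete -e ssh /mnt/user/photos/ backupserver:/mnt/user/backups/photos/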
  7. I don't want to take away business from Unraid, and I have never used the Unraid My Servers (which I think is now Unraid Connect?) functionality, so if there are reasons to recommend that route, I hope others will chime in here. In the meantime, I'll give you some keywords to research. How big is the data set we're talking about here? Enough to fit on a single drive? If so, as a beginner, I'd recommend rsync with the Unassigned Devices plugin. Another option, and possibly the best one to use long-term, is borgmatic, but the configuration may be a bit intimidating (especially if you're new), so you don't have to start there.
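     For the Unassigned Devices route, a minimal sketch (the mount point and share names are just examples) would be:
       # copy a share to a disk mounted by Unassigned Devices; add --delete once you trust the result
       rsync -avh /mnt/user/documents/ /mnt/disks/backup_drive/documents/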
  8. The Unraid dashboard reports based on "df -H". It's better for monitoring if you're going to run out of space; as such, the more useful column there is the "Free" column. I don't use Krusader, but I bet it's using du to calculate disk usage. They use different techniques for reporting: df reports what the file system says it has; du loops over the directories and adds up the file sizes. Because of this, df is fast and du is slow (you might notice this when you use Krusader to report disk utilization). As for why they're different: du doesn't run on the whole file system; if the process running du doesn't have sufficient permissions, it can't see everything (since it's manually looping over the directories); and if files are deleted but still open by another process (which happens a lot with log files), du can't see them but df "can" (in the sense that they're still reported by the file system), etc. Keep in mind we're also looking at GB vs GiB here. Oversimplified tl;dr: df (and the Unraid stats page) are good for telling you if you're going to run out of room (free disk); du is better for telling you what is using that storage.
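     To see the difference for yourself (the path is just an example):
       df -H /mnt/cache      # what the file system reports: size, used, free
       du -sh /mnt/cache     # what adding up the visible files comes to
       # deleted-but-still-open files count toward df but not du:
       lsof +aL1 /mnt/cache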
  9. Another option, albeit a clunky one, is to not use Docker in Unraid and instead run a VM where you can run rootless Docker with user namespace mapping; just let Unraid be a NAS. This option would probably be overkill for most containers (and users), and you'd really be losing out on some of the advantages of running Docker locally on Unraid. (I still think rootless Docker in Unraid deserves further consideration; as previously stated, sure, there are disadvantages, but there's a reason it's an industry standard.) @sir_storealot Consider your threat model. Are you most concerned about ransomware or data exfiltration? What are the potential attack vectors an attacker could use to get at you? Do you have SMB shares with guest access and no passwords? Do you have your Unraid server exposed to the internet? Are you running containers that have an elevated risk of attack? (In which case, don't run sketchy images.) I only mention these things because I think you should always be concerned about security, but I don't think you necessarily need to worry. The higher-likelihood risks to your data on Unraid come more from things external to Unraid (such as encryption of network shares or people exposing their Unraid webGUI to the internet) than locally on Unraid itself. That being said, I'm glad this vulnerability is being fixed. Thanks everyone.
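     If anyone wants to experiment with that VM route, a rough sketch (assuming a Debian/Ubuntu guest with the standard Docker packages already installed) looks something like this:
       # rootless mode needs subordinate uid/gid ranges and a user session bus
       sudo apt-get install -y uidmap dbus-user-session
       # the setup tool ships with the docker-ce-rootless-extras package
       dockerd-rootless-setuptool.sh install
       systemctl --user enable --now docker
       export DOCKER_HOST=unix:///run/user/$(id -u)/docker.sock
       docker run --rm hello-world   # runs without root on the host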
  10. Hi @sir_storealot, I know your question wasn't directed at me, but I imagine the biggest problem is that Unraid is built on Slackware. Great for making a lightweight server, but not as much fun from a sysadmin/distro-maintainer perspective, as it's a barren Linux distribution. Almost everything has to be built by hand. There is the --user option, which can be passed in the Extra Parameters section of the container template. That option can be used to force the container processes to run as a non-root user (some containers actually do this in their Dockerfile; I believe many/most Linuxserver images do, hence their special PUID/PGID options, which make them unique). The problem with the --user option is that it forces the process inside the container to also run under this UID (i.e., this isn't user namespace mapping), which doesn't work with all Docker images (think of things like postgresql, which requires root inside the container, IIRC). I did a very cursory overview to scope out what it would take to pull this request (rootless Docker) off, and I think it's achievable, especially given that Slackware has more features than I thought it would (it looks like we do have uid mapping, which was quite a pleasant surprise). If Unraid is looking for a part-time dev/DevOps/security admin, just let me know!😆
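      As a concrete illustration of that --user behavior (99:100 is the usual nobody:users pair on Unraid; whether a given image tolerates it varies):
        # the same flag the template's "Extra Parameters" field passes through to docker run;
        # every process in the container now runs as uid 99 / gid 100 instead of root
        docker run --rm --user 99:100 alpine id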
  11. On one hand, it's good that the output reveals nothing, which is probably to be expected since you currently aren't running out of memory... on the other hand, we're now in an ambiguous state as to whether you're still compromised, since persistence is always a concern. Good that you've never exposed SSH. And you've never exposed your Unraid Web GUI to the internet, correct? What are the other forwards to? The logs show this happened on the 28th: did you have anything (Docker containers, plugins, etc.) then that you don't have now?
  12. Run the following command and give us the output:
        ps -auxf | grep -v grep | grep -i xmrig
      This is what Fix Common Problems is looking for. Kudos @Squid for thinking to include this. We need to go into damage control mode and figure out if they've established persistence and how. Did you ever expose your Unraid server to the internet? Ever port forwarded to SSH?
  13. I assume you were forwarding port 22 on your router? You've turned that off now, correct? (I think that's what you're saying, but I want to double-check.) The line that was most concerning to me was this one:
        Jan 13 15:44:30 sshd[28347]: Connection closed by authenticating user root 170.64.214.0 port 46188 [preauth]
      But the preauth tag makes me think this SSH connection was never fully established and did not gain access to your system. You didn't have any other ports open on your router, did you? If you want reassurance, I think you're most likely okay. Complex passwords help a lot. Logs don't capture everything, especially outbound traffic, but I don't want to worry you needlessly, so we'll leave it at that.
  14. Not a stupid question at all; in fact, it's a great one. In keeping with the Docker philosophy, it's best to think about things as "services", where your Wordpress container and all of its supporting containers (i.e. your database) constitute one service. This is clearer if you're using docker compose, where, when you want to bring up Wordpress, it brings up the Wordpress app container itself plus its associated db container. (By the way, I recommend docker compose, even on Unraid.) Best practice is subjective, but I recommend a separate db per instance. It makes db backups and db container management easier. This configuration also gives you another advantage: with docker compose, the db container doesn't need external network access at all and can communicate with the Wordpress app within an internal Docker network defined locally within the docker compose file. The Docker network provides DNS resolution. By the way, instead of Wordpress, have you considered Ghost? Far fewer security issues.
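      A minimal docker-compose.yml sketch of that layout (image tags, names, and passwords are placeholders; appdata volume mounts omitted for brevity):
        services:
          wordpress:
            image: wordpress:latest
            ports:
              - "8080:80"
            environment:
              WORDPRESS_DB_HOST: db
              WORDPRESS_DB_USER: wordpress
              WORDPRESS_DB_PASSWORD: changeme
              WORDPRESS_DB_NAME: wordpress
            networks: [frontend, backend]
          db:
            image: mariadb:10.11
            environment:
              MARIADB_ROOT_PASSWORD: changeme
              MARIADB_USER: wordpress
              MARIADB_PASSWORD: changeme
              MARIADB_DATABASE: wordpress
            networks: [backend]   # no published ports; only reachable as "db" inside the stack
        networks:
          frontend: {}
          backend:
            internal: true        # containers on this network get no outside network access
      Bring it up with "docker compose up -d"; the wordpress container resolves the hostname "db" via Docker's internal DNS.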
  15. Nope, it's a separate vulnerability in glibc, which is also severe given its ubiquity (and that of the affected function therein).
  16. I evaluated this yesterday on my own system and came to the following conclusion: we are currently affected on Unraid, both by the runc and BuildKit vulnerabilities and by the separate Docker Engine vulnerability ("All users on versions older than 23.0 could be impacted."). The runc and BuildKit vulnerabilities are the most severe. They are especially bad considering that Unraid runs Docker as root; container escape means that the attacker would have full root filesystem access on the host. Even worse, it's possible that an attacker could further obtain full root command execution on the Unraid host. As for disabling container updates, that depends on your risk profile, but yes, that could be a potential risk mitigation, especially given that this vulnerability is now public. Depending on where you source your images, it's possible (albeit unlikely) that a malicious image is pulled. Given the layered nature of building Docker images (where it's common to build images on top of other images), there's also the possibility of a supply chain attack, where an image maintainer unknowingly pulls a layer that has itself been compromised.
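      If you want to check what your system is actually running (assuming Docker is up; the runc binary and buildx plugin may or may not be present depending on the release):
        docker version --format 'engine: {{.Server.Version}}'
        docker buildx version   # BuildKit frontend, if the buildx plugin is installed
        runc --version          # the runc container runtime itself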
  17. I just did a very quick pass through the code. The CSS seems fairly limited, so my guess (and that's really all it is) is that this pink checkbox is an artifact of the overall Unraid theme you have applied. If you want to feel better about it: in your mismatch logs, are you seeing hits on files that are on that disk? If so, then your integrity checks are working and I wouldn't concern myself about it (unless @bonienl says otherwise).
  18. Hello! I'm glad you asked this question, as I recently contributed to implementing this functionality in Dynamix File Integrity (DFI). With the recent plugin update at the beginning of this month, DFI now supports auto-hashing for cache-to-disk moves. This means that the checksums are generated after the Mover has transferred the files to the main array. Before this update, automatic hashing was limited; it only occurred on main array disk writes. This would include new files being written directly to the main disk array or modifications being made to files already present on the array. Therefore, checksums weren't automatically generated for files moved from the cache pool, requiring users to trigger a build manually. With this update, users should hopefully notice that they no longer need to manually trigger builds when automatic checksums are enabled. If you have any other questions, feel free to ask!
  19. We're really only watching for a close_write, so the fact that you're seeing this implies that VLC is opening the file as if it intended to write to it. My bet is that it's related to this: VLC drop linux FS-Cache when playing (#27848) · Issues · VideoLAN / VLC · GitLab, specifically the line quoted there. Try updating VLC and let us know how that goes. Update (5/23/23): It doesn't appear this will be fixed until 3.0.19 (the current release is 3.0.18), so you may have to wait for VLC to release 3.0.19 or seek out an alpha/beta version.
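      If you're curious, you can watch for the same events yourself (assuming inotifywait from inotify-tools is available; the path is just an example). A player that only reads a file should never trigger one:
        # print every close_write event under a share as it happens
        inotifywait -m -r -e close_write /mnt/user/movies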
  20. Pull requests in:
      Implement auto-hashing for cache-to-disk moves by Torqu3Wr3nch · Pull Request #79 · bergware/dynamix (github.com)
      Fix Automatic Checksums - Don't exclude everything when nothing is set to be excluded. by Torqu3Wr3nch · Pull Request #81 · bergware/dynamix (github.com)
  21. @bonienl I ran into the same problem when I attempted to set up the "automatic" functionality of this plugin today. I believe I have figured out the problem (I actually found two problems; one I've already submitted the PR for), and I have the automatic functionality working as expected on my Unraid development server as we speak. I just wanted to do some further testing before I submit the additional PR. tl;dr: Focus on your 6.12 release. You can take this off your mind. I'll submit the PR for your review. We appreciate your work!
  22. You're right:
        # Perform the clearing
        logger -tclear_array_drive "Clear an unRAID array data drive v$version"
        echo -e "\rUnmounting Disk ${d:9} ..."
        logger -tclear_array_drive "Unmounting Disk ${d:9} (command: umount $d ) ..."
        umount $d
        echo -e "Clearing Disk ${d:9} ..."
      Thank you!
  23. While preparing to shrink my array, I had a question which I didn't see documented on the wiki or in @RobJ's zeroing script post. If we are zeroing the data disk, shouldn't we disable the mover? Otherwise, don't we run the risk of data loss if the mover runs and moves data to the disk being zeroed?
  24. Can you help me run through the following scenarios?
      1) Parity 1 fails: This one seems obvious. The data drives are still intact, so let the parity 2 rebuild run through to completion.
      2) Parity 2 fails: Parity never existed on it in the first place, so who cares.
      But the loss of a data drive seems less obvious to me. What happens if a data drive fails? Does the parity 2 rebuild continue to run, or does it stop automatically? If it doesn't stop automatically, aren't we now calculating parity 2 with a corrupt disk and thus corrupting parity 2? Do we stop the parity 2 calculation? Do we go back into New Config and remove Parity 2 from circulation, check the "Parity Already Valid" box, restart the array, stop the array again, power down, pull the failed data drive, install a new data drive, power on, assign the new data drive, and rebuild the data drive? Sorry for the barrage of questions; I really value your input.