T0rqueWr3nch

Members
  • Posts: 60
  • URL: engineerworkshop.com
  • Personal Text: The Engineer's Workshop: https://engineerworkshop.com


T0rqueWr3nch's Achievements

Rookie (2/14) · 33 Reputation

  1. OS updates (IIRC) come from AWS. Best of luck keeping track of those. Do you have a specific concern with allowing Unraid to initiate the outgoing connection and then allowing the established return traffic?
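     For reference, "allowing the established return traffic" usually means a stateful rule pair along these lines (iptables syntax shown purely as an illustration; the interface name is an assumption, so adapt it to whatever firewall you're running):

         # Let Unraid initiate outbound connections
         iptables -A OUTPUT -o eth0 -j ACCEPT
         # Allow back in only the replies to connections Unraid itself opened
         iptables -A INPUT -i eth0 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
         # Drop all other unsolicited inbound traffic
         iptables -A INPUT -i eth0 -j DROP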
  2. Hey everyone, I recently had a permissions issue where an rsync backup to an NFS share on Unraid failed. It ended up being due to an ACL on that particular directory (in fact, on the whole share). I won't rule out that something outside of Unraid itself set it while accessing that directory, but is anyone else seeing something similar? It seems to be new since the latest RC, but that could just be a coincidence. A quick way to check your own shares is sketched below. Thanks in advance!
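     If anyone wants to check their own shares, the standard acl tools make it quick (the path here is just an example):

         # A trailing "+" in the mode bits means an ACL is set on the directory
         ls -ld /mnt/user/backups
         # Print the full ACL
         getfacl /mnt/user/backups
         # Strip all extended ACL entries, leaving only the plain mode bits
         setfacl -b /mnt/user/backups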
  3. It's an excellent point. Unfortunately, I don't think there's any way around checking in layers, since none of these checks is absolutely foolproof; there are just too many ways the library can be hidden or obfuscated. That's why I recommend following it all up with the scan at the end, even if the other tests come back negative. If it's positive, you know you're affected; if it's negative, you can at least be reasonably confident that you aren't. You're also then not relying entirely on a developer/Docker maintainer who may not even be aware of the dependencies they're using themselves. Can I get your opinion on something? I'm considering replacing step 1 ("the quick and easy way") with an even quicker and easier way that has a few more automated checks. The problem is that it uses a remote script:

         wget https://raw.githubusercontent.com/rubo77/log4j_checker_beta/main/log4j_checker_beta.sh -q -O - | bash

     I trust the remote script (it's clearly visible what it's doing, and it's basically doing the same checks, but automated, and it includes your Java install check), but what is your opinion on recommending this to other Unraid users? Would you yourself be comfortable with it?
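     For anyone who'd rather not pipe a remote script straight into bash, the more cautious equivalent is to download it, read it, and only then run it:

         # Fetch to a local file instead of piping to bash
         wget https://raw.githubusercontent.com/rubo77/log4j_checker_beta/main/log4j_checker_beta.sh -O log4j_checker_beta.sh
         # Review what the script actually does
         less log4j_checker_beta.sh
         # Run it once satisfied
         bash log4j_checker_beta.sh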
  4. It's a good starting point, so I should probably at least mention it, thanks. By itself, though, it isn't completely accurate and thus isn't sufficient to check for the vulnerability:
  5. For what it's worth, I wrote a quick guide on testing your Unraid server and Docker containers against the log4j/log4shell vulnerability if you want to independently verify: Log4j for Dummies: How to Determine if Your Server (or Docker Container) Is Affected by the Log4Shell Vulnerability. The guide is still very much a work in progress as the situation evolves, but when tested against a container with known vulnerabilities, it flagged it as such. In the interest of getting a guide into y'all's hands as soon as possible, I prioritized testing and writing the guide. As a result, I have not yet created any templates for Community Apps (let me know if anyone would like to collaborate with me), so you'll have to deploy the scanner container manually. Any feedback or additions anyone would recommend for the guide would be very welcome. -TorqueWrench
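     To give a feel for what a manual deployment looks like (Trivy is used here only as an example scanner; it is not necessarily the one the guide uses), a one-off scan of a local image can be run as:

         # Mount the Docker socket so the scanner can see local images;
         # replace <your-image:tag> with the container image to check
         docker run --rm -v /var/run/docker.sock:/var/run/docker.sock \
             aquasec/trivy:latest image <your-image:tag>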
  6. Nope, I haven't noticed any ill effects so far. Unraid's umask seems to be set wide open (running umask returns 0000), so I really don't understand what could've changed. Reviewing this stuff does make me wonder again whether I should be doing more for ransomware protection...
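     For anyone following along, the umask masks permission bits off newly created files, so a wide-open umask means new files come out group- and world-writable:

         umask             # prints 0000 on Unraid: nothing is masked off
         touch /tmp/demo
         ls -l /tmp/demo   # -rw-rw-rw-: the full 666 mode requested by touch survives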
  7. That's what I ended up doing right after I posted that! Glad to have the independent verification, thanks! Honestly, I'm not sure why this hasn't always been a problem: rclone says its default umask is "2", but other sources say this is really 0022, which is what I'm seeing. I wonder what changed with rc2. Could it be related to this: "Fixed issue in User Share file system where permissions were not being honored." (Unraid OS version 6.10.0-rc2 available - Prereleases - Unraid)? While troubleshooting this, I also used it as an opportunity to update how I mount rclone: I passed the uid and gid arguments to mount as "nobody" and "users" (UID 99, GID 100). I might go back to just mounting as root again. Thoughts?
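     For reference, this is roughly what the mount looks like with ownership and umask pinned explicitly (the remote name and mount point are just examples; the umask value is my assumption for keeping write bits intact):

         # --uid 99 / --gid 100 map to Unraid's nobody:users
         # --umask 000 stops rclone from stripping group/other write bits
         rclone mount gdrive: /mnt/user/mount_rclone/gdrive \
             --uid 99 --gid 100 --umask 000 \
             --allow-other --daemon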
  8. I'm having the same problem: "Group" and "Other" are both missing write permissions. It started with the update to rc2. Unfortunately, this didn't work for me. I assume you added these arguments to the rclone mount command? This feels like a umask issue...
  9. I don't know what to tell you; it doesn't work for at least two of us... I'm not sure what setup differences account for this, but that script works fine for us.
  10. The script definitely works; I've been using it since June, when I ran across this issue. In contrast, I can personally attest that fusermount -uz does not, as also reported by another user here:
  11. I forked the repo and submitted a pull request to fix the unmount script. You can see my proposed unmount script here: https://github.com/Torqu3Wr3nch/unraid_rclone_mount/blob/umountFix/rclone_unmount
  12. Configuration stored in /etc/rc.d/rc.nfsd changed with Unraid 6.10.0-rc1. nfs_config() should be updated to:

         nfs_config() (
             set -euo pipefail
             # Pin the statd and lockd ports in the RPC defaults file
             sed -i '
                 s/^#RPC_STATD_PORT=.*/RPC_STATD_PORT='$STATD_PORT'/;
                 s/^#LOCKD_TCP_PORT=.*/LOCKD_TCP_PORT='$LOCKD_PORT'/;
                 s/^#LOCKD_UDP_PORT=.*/LOCKD_UDP_PORT='$LOCKD_PORT'/;
             ' ${DEFAULT_RPC}
             # Force rpc.mountd onto a fixed port (rc.nfsd's layout changed in 6.10.0-rc1)
             sed -i '
                 s/^\s\{4\}\/usr\/sbin\/rpc\.mountd$/ \/usr\/sbin\/rpc\.mountd -p '$MOUNTD_PORT'/;
                 /if \[ \-x \/usr\/sbin\/rpc.mountd \]/ i RPC_MOUNTD_PORT='$MOUNTD_PORT';
             ' ${RC_NFSD}
             # Restart RPC first, then NFS, so the pinned ports take effect
             /etc/rc.d/rc.rpc restart
             sleep 1
             /etc/rc.d/rc.nfsd restart
         )

     The above should cover 6.10.0-rc1 while still keeping compatibility with prior versions.
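     After the restarts, rpcinfo will confirm the services actually landed on the pinned ports:

         # statd shows up as "status", lockd as "nlockmgr"
         rpcinfo -p | grep -E 'status|nlockmgr|mountd'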
  13. So, just a follow-up on question 1 at least: I DO NOT recommend using /mnt/user0/mount_rclone. I wanted my cache to be a real cache (i.e., to use the Unraid cache drive), but I also wanted to be able to move it to disk if I need to clear up space, so instead I went with /mnt/user/mount_rclone with the mount_rclone share set to use the cache. As for question 2, I still haven't thoroughly looked into why the upload script is necessary when using rclone mount. I believe it's because we're using mergerfs: when we write new files to the mergerfs directory, we're physically writing to the LocalFileShare mount and not to mount_rclone itself, so the upload script is necessary to make sure any new files get uploaded (see the sketch below). Pre-existing files, if modified, are, I'm willing to bet, actually modified within the rclone mount cache and handled directly by rclone mount itself.
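     A minimal sketch of that write path (mount points are examples; the branch order and create policy are the important parts):

         # First branch is local disk, second is the rclone mount.
         # category.create=ff sends every new file to the first branch (local),
         # which is why new files sit locally until the rclone move upload runs.
         mergerfs /mnt/user/local:/mnt/user/mount_rclone /mnt/user/merged \
             -o category.create=ff,cache.files=partial,dropcacheonclose=true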
  14. A few more questions of my own: 1. RcloneCacheShare="/mnt/user0/mount_rclone" - Is there a reason this "Rclone cache" isn't using the cache drive and is using spinning rust directly instead? Should this be /mnt/cache/mount_rclone? I saw a similar question asked in the past 99 pages but never saw a response. 2. If we're using VFS caching with rclone mount, why do we need the rclone upload (rclone move) script? I have noticed that sometimes when I make a change it's transferred immediately (even though the upload script hasn't run yet), and other times the upload script seems to have to do the work. Any idea why? Thanks.
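     For context, the rclone side of the cache is steered by flags like these (the remote name is an example; the cache path shown is the cache-pool variant being asked about):

         # Keep rclone's VFS cache on the cache pool rather than the array
         rclone mount gdrive: /mnt/user/mount_rclone/gdrive \
             --cache-dir /mnt/cache/mount_rclone \
             --vfs-cache-mode full \
             --vfs-cache-max-size 100G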
  15. I'm wondering the same thing, and I'm wondering how much of this is dogma. I also don't know why it would need the root /mnt/user instead of the more conservative /mnt/user/merger_fs (which I would still not be a fan of, since I assume most of us are going to be compartmentalizing that directory further). My only guess as to why there might be a difference is that rclone can share a connection if the root directory is mounted, instead of having to re-establish a connection to the cloud service for each individual subdirectory request, but I readily admit ignorance of rclone's mechanisms.