T0rqueWr3nch

Members

  • Posts: 52
  • Joined
  • Last visited

Converted

  • URL: engineerworkshop.com
  • Personal Text: The Engineer's Workshop: https://engineerworkshop.com


T0rqueWr3nch's Achievements

Rookie (2/14)

Reputation: 29

  1. Don't know what to tell you; it doesn't work for at least two of us... Not sure what setup differences account for this, but that script works fine for us.
  2. The script definitely works; I've been using it since June, when I ran across this issue. In contrast, I can personally attest that fusermount -uz does not, as also reported by another user here:
  3. I forked the repo and submitted a pull request to fix the unmount script. You can see my proposed unmount script here: https://github.com/Torqu3Wr3nch/unraid_rclone_mount/blob/umountFix/rclone_unmount (a generic sketch of this kind of unmount logic is included after this list).
  4. The configuration stored in /etc/rc.d/rc.nfsd changed with Unraid 6.10.0-rc1. nfs_config() should be updated to:

         nfs_config() (
             set -euo pipefail
             # Pin statd and lockd to fixed ports in the RPC defaults file.
             sed -i '
                 s/^#RPC_STATD_PORT=.*/RPC_STATD_PORT='$STATD_PORT'/;
                 s/^#LOCKD_TCP_PORT=.*/LOCKD_TCP_PORT='$LOCKD_PORT'/;
                 s/^#LOCKD_UDP_PORT=.*/LOCKD_UDP_PORT='$LOCKD_PORT'/;
             ' ${DEFAULT_RPC}
             # Pin mountd to a fixed port in rc.nfsd (whose layout changed in 6.10.0-rc1).
             sed -i '
                 s/^\s\{4\}\/usr\/sbin\/rpc\.mountd$/    \/usr\/sbin\/rpc.mountd -p '$MOUNTD_PORT'/;
                 /if \[ \-x \/usr\/sbin\/rpc.mountd \]/ i RPC_MOUNTD_PORT='$MOUNTD_PORT';
             ' ${RC_NFSD}
             /etc/rc.d/rc.rpc restart
             sleep 1
             /etc/rc.d/rc.nfsd restart
         )

     The above covers 6.10.0-rc1 while still keeping compatibility with prior versions.
  5. So, just a follow-up for at least question 1: I DO NOT recommend using /mnt/user0/mount_rclone. I wanted my cache to be a real cache (i.e. to use the Unraid cache drive), but I also wanted to be able to move it to disk if I need to clear up space, so instead I went with /mnt/user/mount_rclone with the mount_rclone share set to use the cache.

     As for question 2, I still haven't thoroughly looked into why the upload script is necessary when using rclone mount. I believe the reason is that we're using mergerfs, and when we write new files to the mergerfs directory we're physically writing to the LocalFileShare mount, not to mount_rclone itself. The upload script is therefore necessary to make sure any new files get uploaded. Any pre-existing files, if modified, are (I'm willing to bet) actually modified within the rclone mount cache and handled directly by rclone mount itself. A sketch of this layout is included after this list.
  6. Another few questions myself:

     1. RcloneCacheShare="/mnt/user0/mount_rclone" - is there a reason this "rclone cache" isn't using the cache and is using spinning rust directly instead? Should this be /mnt/cache/mount_rclone? I saw a similar question asked in the past 99 pages, but never saw a response.

     2. If we're using VFS caching with rclone mount, why do we need the rclone upload (rclone move) script? I have noticed that sometimes when I make a change it's transferred immediately (even though the upload script hasn't run yet), and other times the upload script seems to have to do the work. Any idea why? Thanks.
  7. I'm wondering the same thing, and I'm wondering how much of this is dogma. I also don't know why it would need the root /mnt/user instead of the more conservative /mnt/user/merger_fs (which I would still not be a fan of, since I assume most of us are going to be compartmentalizing that directory further). My only guess as to why there might be a difference is if rclone can share a connection when the root directory is mounted, instead of having to re-establish a connection to the cloud service for each individual subdirectory request, but I readily admit ignorance of rclone's mechanisms.
  8. It's been that way for as long as I can remember, but then again I've been on some form of 6.9rc for as long as I can remember...ha.
  9. Keep advocating for Unraid R&D to implement NFS v4. That's your best bet. (Or go iSCSI).
  10. Hi @Gordon Shumway, sorry for the delay; I haven't been as active here in the forums lately as I should've been. Could you run nfsstat -m on your Ubuntu client and paste the results for me? -TorqueWrench
  11. Same. I've never had this issue before, though admittedly it could just be a coincidence. Just to confirm, @DarkMan83: since I don't think this is necessarily an "SMB" issue, when you navigate to "Shares" in the GUI, are your shares missing there as well? In my case the locally-mounted "shares" themselves were gone. My working theory is that this is what's going on for most of the "SMB issues" being reported. I think many users only interact with unRAID through its exported SMB shares, so the issue manifests itself as an "SMB problem" even though the underlying cause is that the local mounts themselves are gone. Just my theory so far. Regardless, the flag on this post seems misprioritized; it doesn't seem like a "minor" issue. -TorqueWrench
  12. I can confirm this issue (or at least a related one), though mine really has nothing to do with SMB itself. Today, my locally-mounted "shares" disappeared completely. Here's the blow-by-blow: while accessing a previously running container (Grafana), I was getting errors in the browser. Stopping and starting the container resulted in the error "Execution error - server error". Then I realized that none of my Docker containers were working. Attempting to restart Docker itself, I noticed this:

      And, sure enough, navigating to "Shares" in the GUI, I don't have any mounted shares:

      The only thing that looks interesting in the syslog is this, which occurred at the time I see my server going offline in Grafana:

          shfs: shfs: ../lib/fuse.c:1451: unlink_node: Assertion `node->nlookup > 1' failed.

      Research: this looks similar to a problem reported last year with the same error message and symptoms:

      Let me know if you would like me to open up a new issue. (A quick diagnostic sketch for this symptom is included after this list.) -TorqueWrench
  13. I would have loved to blame it on your individual Docker containers, but I agree, those don't seem like extravagant containers. PMS is definitely a clunker, and a lot of database containers also seem to be particularly bad about cache writes; MongoDB was horrendous for me. Since you seem to still be experiencing this issue, could I get you to run docker stats? I'm curious whether the Block I/O column identifies a particular container (an example invocation is included after this list). -TorqueWrench
  14. Very strange. I had the exact opposite experience with the latest beta update to 6.9.0-beta22: my cache writes are way down to a much more reasonable ~500 kB/s, and it's still holding from this morning. It's weird that we have such discrepancies.
  15. I did this morning. While it's still very early, I think this may finally be fixed. Screenshots here: https://forums.engineerworkshop.com/t/unraid-6-9-0-beta22-update-fixes-and-improvements/215. I am seeing a drop from ~8 MB/s to ~500 kB/s after the upgrade, with a similar server load (basically idle) and the same Docker containers running. Hopefully the trend holds. (A simple way to watch the cache write rate outside the GUI is sketched after this list.) -TorqueWrench
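
A note on posts 1-3: the actual fix is the pull request linked above. Purely for illustration, here is a minimal sketch of the kind of unmount logic being discussed, assuming a hypothetical mount point of /mnt/user/mount_rclone, rather than relying on fusermount -uz alone:

    #!/bin/bash
    # Hypothetical mount point - the real path comes from the mount script's settings.
    MOUNT_POINT="/mnt/user/mount_rclone"

    if mountpoint -q "$MOUNT_POINT"; then
        # Try a clean unmount first.
        if ! umount "$MOUNT_POINT"; then
            # Fall back to a lazy unmount if the filesystem is still busy.
            umount -l "$MOUNT_POINT"
        fi
    else
        echo "$MOUNT_POINT is not mounted; nothing to do."
    fi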
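
On posts 5-7, a minimal sketch of the mergerfs-plus-upload layout being described, to show why new writes land on the local branch and still need an upload pass. The paths, the remote name gdrive:, and the mergerfs/rclone options are illustrative assumptions, not the exact settings from the referenced scripts:

    #!/bin/bash
    # Illustrative paths - substitute your own shares.
    LOCAL_SHARE="/mnt/user/local/gdrive"           # LocalFileShare: where new writes physically land
    RCLONE_MOUNT="/mnt/user/mount_rclone/gdrive"   # read-through view of the cloud remote
    MERGED="/mnt/user/mount_mergerfs/gdrive"       # combined view that containers actually use

    # mergerfs overlays the local branch on top of the rclone mount; category.create=ff
    # ("first found") sends new files to LOCAL_SHARE, which is why the upload script is
    # still needed even though rclone mount has its own VFS cache for existing files.
    mergerfs "$LOCAL_SHARE:$RCLONE_MOUNT" "$MERGED" \
        -o rw,use_ino,func.getattr=newest,category.create=ff

    # The "upload" pass: move anything that has accumulated locally up to the remote.
    rclone move "$LOCAL_SHARE" gdrive: --min-age 15m --transfers 4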
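
On posts 11-12, a quick diagnostic sketch for the "shares disappeared" symptom: it only checks whether the user-share filesystem is still mounted and greps the syslog for the shfs assertion quoted above. The syslog path is the standard Unraid location and is an assumption for other setups:

    #!/bin/bash
    # Is the user-share filesystem (shfs) still mounted?
    mountpoint /mnt/user || echo "/mnt/user is gone - shfs has likely crashed"

    # Look for the fuse assertion that shows up when shfs dies.
    grep -i "shfs" /var/log/syslog | grep -i "assertion"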
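
On post 13, an example docker stats invocation that narrows the output to the relevant columns; the exact format string is just one reasonable choice:

    # One-shot snapshot; check the BLOCK I/O column to see which container is
    # responsible for the bulk of the writes.
    docker stats --no-stream --format "table {{.Name}}\t{{.BlockIO}}\t{{.CPUPerc}}\t{{.MemUsage}}"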
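
On posts 14-15, a rough way to watch the cache write rate being compared there by sampling /proc/diskstats; the device name is an assumption, so substitute your actual cache device:

    #!/bin/bash
    CACHE_DEV="nvme0n1"   # assumed cache device name - adjust for your pool

    # Field 3 of /proc/diskstats is the device name; field 10 is sectors written
    # (512-byte sectors). Print kB written per 10-second interval.
    prev=$(awk -v d="$CACHE_DEV" '$3==d {print $10}' /proc/diskstats)
    while sleep 10; do
        cur=$(awk -v d="$CACHE_DEV" '$3==d {print $10}' /proc/diskstats)
        echo "$(( (cur - prev) * 512 / 1024 )) kB written in the last 10 seconds"
        prev=$cur
    done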