Everything posted by M1kep_

  1. This still leads to new files being written to the cache while the mover is running. I believe historically we were able to run the mover and it would move the files to the primary storage no matter what. But now we can only run the mover when cache is set as primary, since we can't set cache to secondary.
  2. @DZMM Have you ever seen mergerfs straight up crash? Today we've had it happen twice: the mergerfs process crashes with no logs (that I'm aware of), while the rclone mount and the various other scripts keep functioning as expected. Do you know of any way to review errors or crash reasons for mergerfs?
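I don't know of any crash logging built into mergerfs itself, but on a standard Linux/Unraid box the usual places to look after a silent death are the kernel ring buffer (dmesg records segfaults and OOM kills) and the syslog. A minimal sketch, assuming syslog lives at /var/log/syslog; the helper name is my own:

```shell
#!/bin/sh
# Hypothetical helper: scan a syslog-style file for lines that often
# accompany a silent mergerfs death (segfault, OOM kill, FUSE errors).
scan_log_for_mergerfs() {
    logfile="$1"
    grep -iE 'mergerfs|segfault|out of memory|fuse' "$logfile" | tail -n 20
}

# On the live system (paths are assumptions, adjust as needed):
#   dmesg | grep -iE 'mergerfs|segfault|out of memory'
#   scan_log_for_mergerfs /var/log/syslog
```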
  3. If the mount wasn't killed cleanly, the mount_running lock file is most likely still in place. I'd also confirm with a ps piped to grep that the mount really isn't running:

     ps aux | grep rclone

     If the mount script really isn't running, then you should run the rclone_unmount script, as it will clean up the necessary lock files. The deletion commands used by the script are:

     find /mnt/user/appdata/other/rclone/remotes -name dockers_started* -delete
     find /mnt/user/appdata/other/rclone/remotes -name mount_running* -delete
     find /mnt/user/appdata/other/rclone/remotes -name upload_running* -delete
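The check-then-clean sequence above can be sketched as a small function. The lock-file directory matches the path used in the find commands; the function name and the pgrep pattern for the guard are my own assumptions:

```shell
#!/bin/sh
# Sketch of the lock-file cleanup performed by the rclone_unmount script.
# Only run this after confirming no rclone mount is actually running.
clean_rclone_locks() {
    lockdir="$1"   # e.g. /mnt/user/appdata/other/rclone/remotes
    find "$lockdir" -name 'dockers_started*' -delete
    find "$lockdir" -name 'mount_running*' -delete
    find "$lockdir" -name 'upload_running*' -delete
}

# Example guard before cleaning (the pgrep pattern is an assumption):
#   pgrep -f "rclone mount" >/dev/null || \
#       clean_rclone_locks /mnt/user/appdata/other/rclone/remotes
```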
  4. I've been having this issue too. It goes away if I manually kill the rclone script, the rcloneorig process, and the mergerfs process. Not really sure why, or how safe it is to just kill those processes.
  5. Okay, I've pretty much finished going through the whole thing and building out my setup plan. Two more questions: I've found that the /user0/ mount writes to the disks instead of the cache. With that being said, what are the cache settings for your mount_rclone share in Unraid? With "Prefer", I feel like the mover would constantly try to move the data off the array and onto the cache. Also, does having the cache inside the mount_rclone folder cause it to be uploaded to Google as well, since that's a mounted folder? Is this needed, or is it an artifact of your specific setup that can be replaced with "Ignore"?
  6. Awesome! Yeah, the vfs local caching is really awesome. It really bridges the gap between hosting in the cloud and hosting locally, especially for files that have been accessed recently/often. I'll keep an eye on the GitHub to see if any changes show up. If you don't get a chance to update, providing at least your mount flags would be very useful. I'll probably work on getting this migrated over this weekend. Thanks again for the help.
  7. I want to make sure I'm understanding this correctly. Steps I intend to take for shares:

     - Create User Shares named gDriveStaging (LocalFileShare), gDrive (RcloneMountShare), and mergerFS (MergerfsMountShare), and set them to "Prefer" cache
     - Set the variables in the scripts to match
     - Rclone: with encryption, RcloneRemoteName should be the crypt remote
     - When the array is stopped, Unraid will handle the unmount as it's a User Share
     - When the array is started, the mount script will just need to be run again

     Random questions:

     - Is there a particular reason why vfs-cache-mode is writes instead of full? Any gotchas when using full? (I assume a cache-dir would need to be set so it doesn't try to fill up the root fs.)
     - How do the service accounts work? It looks like the counter is updated, but the next service account will only be used at the next upload interval? Do the service accounts help with the 750GB upload limit?
     - It is mentioned that mergerFS works better when everything is mapped like /user --maps to--> /mnt/user. Would there be any issues with the following setup?

     Current setup:
     - Google Drive path: /mnt/disks/gdrive_secure/
     - Radarr: /ShareName ---> /mnt/disks/gdrive_secure/ShareName
     - Sonarr: /ShareName/TV ---> /mnt/disks/gdrive_secure/ShareName/TV

     Proposed setup:
     - MergerFS path: /mnt/user/gDrive
     - Radarr: /ShareName ---> /mnt/user/gDrive/ShareName
     - Sonarr: /ShareName/TV ---> /mnt/user/gDrive/ShareName/TV

     The goal is to avoid having to remap all of the directories and rescan everything, but if mergerFS is going to make that a pain, rescanning/remapping isn't the end of the world.
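On the vfs-cache-mode full question: rclone does have flags to bound the local cache so it can't fill up the root fs, namely --cache-dir, --vfs-cache-max-size, and --vfs-cache-max-age. A hedged sketch of what such a mount might look like; the remote name, mountpoint, and size limits are placeholders, not values from this guide:

```shell
# Sketch only -- do not run as-is. Remote name, mountpoint, and cache
# limits below are placeholder assumptions.
rclone mount gdrive_crypt: /mnt/user/mount_rclone/gdrive \
    --vfs-cache-mode full \
    --cache-dir /mnt/user/rclone_cache \
    --vfs-cache-max-size 100G \
    --vfs-cache-max-age 24h \
    --allow-other \
    --daemon
```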