privateer Posted March 9, 2021
Using the current set of scripts, what factors affect the load time for files from gdrive?
Roudy Posted March 10, 2021
On 3/6/2021 at 6:41 AM, neeiro said: Quick Question - Is it possible to have 2 unraid servers using the same google account at the same time or will it cause problems? Also would you just use the same config file/scripts on each?
I do this with a Windows machine and an unRAID server. Just generate another OAuth client and service account for the second box so you can avoid hitting API limits, and make the second box read-only. Also, don't set up a cache on the second box: if you change or update a file that is already in the cache, you'll get duplicates if they have different extensions.
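As a minimal sketch, a read-only second box's mount might look like the following. The remote name and mount point are hypothetical, not from this thread; `--read-only` is the relevant rclone flag, and note there are no VFS cache flags, per the advice above:

```shell
# Hypothetical fragment for a second, read-only box.
# "gcrypt" and the mount point are placeholder assumptions.
# --read-only stops this box from ever writing to the remote,
# and the VFS cache flags are deliberately omitted.
rclone mount \
  --allow-other \
  --read-only \
  --dir-cache-time 720h \
  gcrypt: /mnt/remotes/gcrypt &
```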
privateer Posted March 10, 2021
My setup is a combination of local + cloud storage using this setup, with Unraid as the OS. Recently I've run into my CPU maxing out because of my Unraid workload plus the number of transcodes. Instead of upgrading the chips or adding a graphics card, I chose the less expensive option of grabbing a dedicated Plex box that transcodes with Quick Sync. The main reason was cost: the box was far cheaper ($80) and its power usage is low, so the total cost is significantly lower. It also lets you run other things on the box if you choose. The box runs Ubuntu with Plex on bare metal; I mounted my local unraid drives and mounted the gdrives. I haven't maxed out my transcodes yet, but it looks like the box can support 15+ (I'd bet 20+), and I only allow transcodes on 1080p content. For people using a setup similar to mine, I think this is a good solution. Just wanted to let everyone know this is an option!
Chaos_Therum Posted March 22, 2021
I'm running into some strange issues. I recently migrated from unionfs to mergerfs, and so far file-browser responsiveness seems far better. But I'm hitting issues with files appearing corrupt, or my software just not seeing them. From what I can tell the files are not actually corrupt: they open and play fine, but things like metadata aren't showing up properly. For example, when I tell MediaMonkey to scan my music collection, it picks up maybe 10 to 15 files at a time and then reports files as unavailable even though I can play them. I'm assuming this is some sort of timeout issue, but I didn't have any problems like this with unionfs, just folders that wouldn't delete. I'm also getting odd permissions errors that go away after refreshing the folder; my main system is Windows. Has anyone else run into issues like this? I've looked around but haven't found anything. I'm not sure what other info to provide, so please let me know what else you'd need to know.
Tuftuf Posted March 27, 2021 (edited)
I'm setting up another system and changing how my paths are arranged. The main question: are people using the cache setting? I've read on other forums that the cache mount shouldn't be needed, and hasn't been for a long time, since ranged GETs were added. Do I need this cache mount? Can I just remove the 3 lines defining it? /mnt/storage is an SSD cache pool.
EDIT - I have changed /mnt/remotes/rclonefs to be on the SSD. I was going to place the rclone mount in /mnt/remotes, as I expected it to be a read-only, remotely mounted filesystem.
RcloneMountShare="/mnt/storage/firefly/rclonefs" # where your rclone remote will be located without trailing slash e.g. /mnt/user/mount_rclone
RcloneMountDirCacheTime="720h" # rclone dir cache time
LocalFilesShare="/mnt/storage/firefly/localfs" # location of the local files and MountFolders you want to upload without trailing slash to rclone e.g. /mnt/user/local. Enter 'ignore' to disable
RcloneCacheShare="/mnt/storage/firefly/rclone_cache" # location of rclone cache files without trailing slash e.g. /mnt/user0/mount_rclone
RcloneCacheMaxSize="250G" # Maximum size of rclone cache
RcloneCacheMaxAge="336h" # Maximum age of cache files
MergerfsMountShare="/mnt/storage/cloudfs" # location without trailing slash e.g. /mnt/user/mount_mergerfs. Enter 'ignore' to disable
Edited March 27, 2021 by Tuftuf
Neo_x Posted March 27, 2021
On 3/10/2021 at 8:18 PM, privateer said: My setup is a combination of local + cloud storage using this setup. [...] Just wanted to let everyone know this is an option!
I might be interested in this -> which solution is $80?! I'm also currently running Plex on unraid, but no amount of explaining can convince the users not to transcode... sigh. Upgrading the CPU will only get me so far, and I have no slots available for an nvidia card (and I believe installing one would limit bandwidth on the other slots).
Tuftuf Posted March 29, 2021
@DZMM I moved over from plexguide to your script over a year ago. The old version of the script, without cache settings, works as expected. If I use the new version with cache defined, I get an extra folder created within my mount point with the same name as my mount point. Am I missing something, or is the configuration below valid? The paths have all changed because I moved to a new system. I'm not certain whether I want the cache setting or not, but I dislike the new script not working correctly for me; I've read before that the cache backend was not being maintained within the rclone code. I've also always been mounting mine as gdrive & tdrive. Looking at it again recently, I see I never use the gdrive sections and they don't seem to be required.
0.96.4
# REQUIRED SETTINGS
RcloneRemoteName="tcrypt" # Name of rclone remote mount WITHOUT ':'. NOTE: Choose your encrypted remote for sensitive data
LocalFilesShare="/mnt/storage/firefly/localfs" # location of the local files you want to upload without trailing slash to rclone e.g. /mnt/user/local
RcloneMountShare="/mnt/storage/firefly/rclonefs" # where your rclone remote will be located without trailing slash e.g. /mnt/user/mount_rclone
MergerfsMountShare="/mnt/storage/cloudfs" # location without trailing slash e.g. /mnt/user/mount_mergerfs. Enter 'ignore' to disable
DockerStart="" # list of dockers, separated by space, to start once mergerfs mount verified. Remember to disable AUTOSTART in docker settings page
MountFolders=\{"movies,tv"\} # comma separated list of folders to create within the mount
0.96.9.2
# REQUIRED SETTINGS
RcloneRemoteName="tcrypt" # Name of rclone remote mount WITHOUT ':'. NOTE: Choose your encrypted remote for sensitive data
RcloneMountShare="/mnt/storage/firefly/rclonefs" # where your rclone remote will be located without trailing slash e.g. /mnt/user/mount_rclone
RcloneMountDirCacheTime="720h" # rclone dir cache time
LocalFilesShare="/mnt/storage/firefly/localfs" # location of the local files and MountFolders you want to upload without trailing slash to rclone e.g. /mnt/user/local. Enter 'ignore' to disable
RcloneCacheShare="/mnt/storage/firefly/rclone_cache" # location of rclone cache files without trailing slash e.g. /mnt/user0/mount_rclone
RcloneCacheMaxSize="250G" # Maximum size of rclone cache
RcloneCacheMaxAge="336h" # Maximum age of cache files
MergerfsMountShare="/mnt/storage/cloudfs" # location without trailing slash e.g. /mnt/user/mount_mergerfs. Enter 'ignore' to disable
DockerStart="" # list of dockers, separated by space, to start once mergerfs mount verified. Remember to disable AUTOSTART for dockers added in docker settings page
MountFolders=\{"movies,tv"\} # comma separated list of folders to create within the mount
I have gdrive & gcrypt; I carried the config over but recently noticed I don't use them or even mount them. OK to remove? Do you use gdrive or just team drives (now shared drives)? I'm missing scope = drive, but it's the default option (just checked).
[gdrive]
client_id = clientid@google
client_secret = AAAAAAAAAAAAAAAAA
type = drive
token = {"access_token":""}
[gcrypt]
type = crypt
remote = gdrive:/encrypt
filename_encryption = standard
directory_name_encryption = true
password = PASS1
password2 = PASS2
[tdrive]
client_id = clientid@google
client_secret = AAAAAAAAAAAAAAAAAAAA
type = drive
token = {""}
team_drive = AAAAAAAAAAAAAAAAAAA
[tcrypt]
type = crypt
remote = tdrive:/encrypt
filename_encryption = standard
directory_name_encryption = true
password = PASS3
password2 = PASS4
privateer Posted March 31, 2021
On 3/27/2021 at 7:18 AM, Neo_x said: i might be interested in this -> which solution is $80?!?! [...]
Look for a cheap laptop, desktop, or thin client with an Intel 7th-gen or newer chip in it. I bought an older HP ProDesk. Install Linux and Plex, then mount the unraid and cloud drives.
yoyotueur Posted March 31, 2021 (edited)
Hello guys, I'm trying to get the rclone_mount script running, but it fails at the connectivity check. I can see in the script that it tries to ping google, so it should pass if my server is online. I get the following:
Script location: /tmp/user.scripts/tmpScripts/rclone_mount/script
Note that closing this window will abort the execution of this script
31.03.2021 23:24:51 INFO: Creating local folders.
31.03.2021 23:24:51 INFO: Creating MergerFS folders.
31.03.2021 23:24:51 INFO: *** Starting mount of remote test_drive
31.03.2021 23:24:51 INFO: Checking if this script is already running.
31.03.2021 23:24:51 INFO: Script not running - proceeding.
31.03.2021 23:24:51 INFO: *** Checking if online
31.03.2021 23:24:54 FAIL: *** No connectivity. Will try again on next run
Could you help me please?
Edit: I changed the ping destination from google.com to 8.8.8.8 and it did the trick.
Edited April 1, 2021 by yoyotueur
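For reference, the connectivity check in the mount script looks roughly like the fragment below. If DNS resolution is the problem (which the fix here suggests), pinging a plain IP such as 8.8.8.8 instead of google.com sidesteps it. This is a sketch of the edited fragment, not the full script:

```shell
# Sketch of the script's connectivity check with the hostname swapped
# for a plain IP, so the test no longer depends on DNS resolution.
ping -q -c2 8.8.8.8 > /dev/null   # -q quiet, -c number of pings to perform
if [ $? -eq 0 ]; then             # ping returns exit status 0 if successful
    echo "PASSED: *** Internet online"
else
    echo "FAIL: *** No connectivity. Will try again on next run"
    exit 1
fi
```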
Roudy Posted April 3, 2021
On 3/27/2021 at 5:07 AM, Tuftuf said: The main question here is, are people using the cache setting? I'm reading on other forums and places that the cache setting shouldn't be needed and hasn't been for a long time, since the ranged gets were added. Do I need this cache mount? Can I just remove the 3 lines defining it?
It really just depends on what you use the rclone drive for. If you send downloads to it that are only there temporarily until they're moved to their respective folders, I'd say a cache is almost necessary. If it's just for media and your downloads are local, you may not need it. I'd personally keep it either way: depending on what you upload, recent data is more likely to be accessed, and it would be served locally from the cache instead of your rclone drive being queried, reducing the number of API calls.
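The three cache settings under discussion end up as rclone VFS cache flags on the mount. A sketch of how they map (values are the example defaults from the configs earlier in the thread, not recommendations):

```shell
# How the script's cache settings translate into rclone mount flags.
# Paths and sizes are the thread's example values, not recommendations.
rclone mount \
  --cache-dir=/mnt/storage/firefly/rclone_cache/cache/tcrypt \
  --vfs-cache-mode full \
  --vfs-cache-max-size 250G \
  --vfs-cache-max-age 336h \
  tcrypt: /mnt/storage/firefly/rclonefs/tcrypt &
```

Removing the cache would mean dropping these flags (or setting `--vfs-cache-mode off`), at the cost of re-fetching ranges from the remote on every read.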
Roudy Posted April 3, 2021
On 3/29/2021 at 3:57 AM, Tuftuf said: @DZMM I moved over from plexguide to your script over a year ago. [...] I have gdrive & gcrypt, I carried the config over but recently noticed I don't use them or even mount them. Ok to remove? Do you use gdrive or just team drives (now shared drives)?
From what you posted, it looks like you have the same rclone setup for both, except one uses the "team_drive" setting. You don't reference gdrive or gcrypt in the settings you posted, so they aren't being accessed or used. I'd say you're free to remove them from your rclone config if you don't use them for any other purpose. I have a similar setup, but I use the separate drive mappings for separate things.
USSHauler Posted April 7, 2021
Hi everyone - if anyone has the time to help a fellow hoarder out, I'd appreciate it. For some time I've been using Plex, Sonarr, Radarr, Ombi, and NZBGet for my media needs, but I'm growing tired of buying drives. I'd really like to convert everything over to Google Workspace (Enterprise Standard, I believe) and encrypt all my data. I'm interested in using only the latest and greatest method on my unRAID server, so I'm reaching out for help. I'm not a newbie when it comes to servers and computers, but I am when it comes to rclone, mergerfs, and encryption. If anyone can help me out I'd be extremely grateful. My discord username is USS Hauler #5050.
privateer Posted April 11, 2021
On 4/7/2021 at 10:59 AM, USSHauler said: Hi everyone, if anyone has the time to help a fellow hoarder out I would appreciate it. [...]
See the first post.
privateer Posted April 11, 2021
On 3/5/2021 at 9:12 AM, axeman said: That's just how I have it - because of circumstance, really. UnRaid and Emby (Sonarr too) were on different VMs for ages. I just added the scripts to UnRaid and updated the existing instances to point to the mounts on UnRaid. I didn't have to do anything else. I also have non-cloud shares that I still need UnRaid for - so to me, having everything storage-related on the UnRaid server (local and cloud), and presentation and gathering on a separate machine, is a good separation of concerns.
Quick follow-up here - everything is running well on my end. My Plex box mounts my unraid shares using autofs, and mounts the gdrive shares direct from the cloud using rclone, which I run as a service. For your setup, when you say "point to unraid shares", how are you mounting the shares physically located on unraid as well as the gdrive shares you have mounted on unraid? For the latter, are you mounting a copy of an already-mounted cloud drive? Sorry if that's confusing, but instead of dismissing what you've done I'd like to know exactly what it is.
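For anyone wanting to replicate the autofs part of this, an automount map for unraid shares might look something like the following. The map file names, share names, and the choice of NFS are all assumptions; autofs works equally well over CIFS:

```shell
# Hypothetical autofs configuration for automounting unraid shares
# on a separate Linux box. File names and shares are placeholders.

# /etc/auto.master.d/tower.autofs
#   /mnt/tower  /etc/auto.tower  --timeout=60

# /etc/auto.tower -- one line per unraid share, mounted over NFS
#   local   -fstype=nfs,rw   tower:/mnt/user/local
#   videos  -fstype=nfs,ro   tower:/mnt/user/videos
```

With that in place, accessing /mnt/tower/videos triggers the mount on demand and it is unmounted again after the timeout.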
axeman Posted April 11, 2021
3 hours ago, privateer said: Quick follow up here - everything is running well on my end. [...] I'd like to know exactly what it is.
Okay - I have the script set up somewhat as intended:
Tower/local - this is where the stuff that will get uploaded goes.
Tower/videos - all my other "non-cloud" videos (kids' movies etc.) that need to be available even if the cloud is down due to an ISP issue.
Tower/rclone - this is where all my gdrive mounts are directly mounted. I don't touch this, except maybe to see what's local vs. cloud.
Tower/mergerfs - combines Tower/local, Tower/videos, and Tower/rclone.
So the Emby server library has paths presented as Tower/mergerfs/videos/TV or Tower/mergerfs/videos/kids.
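Expressed as a mergerfs invocation, that layout would be roughly the following. The options mirror the mount script quoted later in this thread; the concrete branch paths are illustrative guesses at where Tower/local, Tower/videos, and Tower/rclone live on disk:

```shell
# Sketch of a mergerfs mount combining the three branches described above.
# Branch paths are illustrative; options are taken from the thread's script.
# category.create=ff means new files land on the first branch (local),
# so writes never go straight to the cloud mount.
mergerfs /mnt/user/local:/mnt/user/videos:/mnt/user/rclone /mnt/user/mergerfs \
  -o rw,async_read=false,use_ino,allow_other,func.getattr=newest,category.create=ff,cache.files=partial,dropcacheonclose=true
```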
privateer Posted April 12, 2021
On 4/11/2021 at 12:15 PM, axeman said: Okay - so I have the script setup somewhat as intended. [...]
And your Emby software runs on a physically separate device where you mount tower/mergerfs or something? Do you just use autofs for this?
axeman Posted April 12, 2021
3 minutes ago, privateer said: And your emby software is being run on a physically separated device and you mount tower/mergerfs there or something? Do you just use AutoFS for this?
Just tower/mergerfs. The only(?) downside is that Emby also creates the metadata there (I have it configured to save metadata to folders), so all those small files count toward the 400K-file teamdrive limit. If it gets to be too much, I can always create a local metadata folder on the Emby server and let it store metadata there. But right now it's not a huge problem.
privateer Posted April 12, 2021
And no issues mounting the mergerfs folder instead of mounting everything natively? I guess I should experiment with this as well.
Bjur Posted April 13, 2021
16 hours ago, axeman said: Just tower/mergerfs. The only? downside, is that emby also creates the metadata there [...]
You can do what I do: use exclusions to keep metadata and subs from being uploaded.
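A hedged sketch of what such exclusions could look like on the upload command. The paths and the exact extension list are assumptions - match them to whatever your media server actually writes alongside your files:

```shell
# Hypothetical exclusion filters for the upload command, keeping
# metadata, artwork and subtitle files local instead of uploading them.
# Source path and remote name are placeholders from earlier examples.
rclone move /mnt/user/local/tcrypt tcrypt: \
  --exclude "*.nfo" \
  --exclude "*.srt" \
  --exclude "*.sub" \
  --exclude "*.jpg" \
  --exclude "*.png" \
  --exclude "metadata/**"
```

Besides staying under the 400K teamdrive file limit, this also keeps thousands of tiny uploads from eating into the daily API quota.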
privateer Posted April 14, 2021
On 4/12/2021 at 1:14 PM, axeman said: Just tower/mergerfs. The only? downside, is that emby also creates the metadata there [...]
How do you mount the tower/mergerfs folder on your non-unraid machine? I use autofs to mount the local unraid drives on my separate machine, but the tower/mergerfs folder just loads in blank. I suspect I need different mount options, but everything I've tried has failed. Mind describing what you do and sharing your command?
axeman Posted April 14, 2021
25 minutes ago, privateer said: How do you mount the tower/mergerfs folder on your non-unraid machine? [...]
My Emby server is on a Windows machine, and it accesses the mergerfs share like any other unraid share. Zero difference: \\tower\mergerfs\Videos etc.
privateer Posted April 14, 2021
Ok, so you aren't actually mounting anything - thanks for clarifying. I'll update here if I ever figure out how to mount the mergerfs folder on another Linux box. Or if anyone else knows how, let me know!
axeman Posted April 14, 2021
26 minutes ago, privateer said: Ok so you aren't actually mounting anything. [...]
I do have it mapped on my Windows machine as a drive letter. Not sure if that's the same - but again, no issues.
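For the Linux-box case privateer asks about, the usual approach is the same as the Windows one: mount the share unraid already exports over SMB (or NFS), rather than trying to mount mergerfs itself remotely. A hedged fstab sketch - hostname, credentials file, and uid/gid are all placeholders:

```shell
# Hypothetical /etc/fstab entry mounting unraid's exported mergerfs
# share over SMB; "tower", the credentials file and uid/gid values
# are placeholders to adapt to your own setup.
#
#   //tower/mergerfs  /mnt/mergerfs  cifs  credentials=/root/.smbcreds,uid=1000,gid=1000,iocharset=utf8,_netdev  0  0

# Or as a one-off from the shell:
#   mount -t cifs //tower/mergerfs /mnt/mergerfs -o credentials=/root/.smbcreds,uid=1000,gid=1000
```

This is the Linux equivalent of axeman's mapped drive letter: the remote box only ever sees one unified share, and unraid does the mergerfs work.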
Stephan296 Posted April 15, 2021 (edited)
I have the problem that Sonarr can't import media because it doesn't have the rights. When I use unraid to set new permissions it's fixed for a couple of minutes, then I get the same error again. Sonarr is the only container with this problem. I run the container with PGID 100 and PUID 99, and I have /mnt/user mounted as /user in the container. Does anybody have a solution?
Edited April 15, 2021 by Stephan296
animeking Posted April 17, 2021
Hello - when I followed the guide I got everything working except the mount_unionfs folder; there is nothing in it. Is there a step I'm missing for mount_unionfs to work?
#!/bin/bash
######################
#### Mount Script ####
######################
## Version 0.96.9.2 ##
######################

####### EDIT ONLY THESE SETTINGS #######

# INSTRUCTIONS
# 1. Change the name of the rclone remote and shares to match your setup
# 2. NOTE: enter RcloneRemoteName WITHOUT ':'
# 3. Optional: include custom command and bind mount settings
# 4. Optional: include extra folders in mergerfs mount

# REQUIRED SETTINGS
RcloneRemoteName="PlexUnion" # Name of rclone remote mount WITHOUT ':'. NOTE: Choose your encrypted remote for sensitive data
RcloneMountShare="/mnt/user/mount_rclone" # where your rclone remote will be located without trailing slash e.g. /mnt/user/mount_rclone
RcloneMountDirCacheTime="720h" # rclone dir cache time
LocalFilesShare="/mnt/user/local" # location of the local files and MountFolders you want to upload without trailing slash to rclone e.g. /mnt/user/local. Enter 'ignore' to disable
RcloneCacheShare="/mnt/user0/mount_rclone" # location of rclone cache files without trailing slash e.g. /mnt/user0/mount_rclone
RcloneCacheMaxSize="400G" # Maximum size of rclone cache
RcloneCacheMaxAge="336h" # Maximum age of cache files
MergerfsMountShare="/mnt/user/mount_mergerfs" # location without trailing slash e.g. /mnt/user/mount_mergerfs. Enter 'ignore' to disable
DockerStart="Cloudflare NginxProxyManager organizr sabnzbd binhex-nzbhydra2 jackett EmbyServer jellyfin bazarr lidarr deemix tautulli deluge PASTA Plex2 Plex-Media-Server xteve DizqueTV2 overseerr ErsatzTV qbittorrent sonarr binhex-sonarr radarr autoscan amvd amd ama" # list of dockers, separated by space, to start once mergerfs mount verified. Remember to disable AUTOSTART for dockers added in docker settings page
MountFolders=\{"downloads/complete,downloads/intermediate,music,movies,tv"\} # comma separated list of folders to create within the mount

# Note: Again - remember to NOT use ':' in your remote name above

# OPTIONAL SETTINGS

# Add extra paths to mergerfs mount in addition to LocalFilesShare
LocalFilesShare2="ignore" # without trailing slash e.g. /mnt/user/other__remote_mount/or_other_local_folder. Enter 'ignore' to disable
LocalFilesShare3="ignore"
LocalFilesShare4="ignore"

# Add extra commands or filters
Command1="--rc"
Command2=""
Command3=""
Command4=""
Command5=""
Command6=""
Command7=""
Command8=""

CreateBindMount="N" # Y/N. Choose whether to bind traffic to a particular network adapter
RCloneMountIP="192.168.1.252" # My unraid IP is 172.30.12.2 so I create another similar IP address
NetworkAdapter="eth0" # choose your network adapter. eth0 recommended
VirtualIPNumber="2" # creates eth0:x e.g. eth0:1. I create a unique virtual IP address for each mount & upload so I can monitor and traffic shape for each of them

####### END SETTINGS #######

###############################################################################
##### DO NOT EDIT ANYTHING BELOW UNLESS YOU KNOW WHAT YOU ARE DOING #######
###############################################################################

####### Preparing mount location variables #######
RcloneMountLocation="$RcloneMountShare/$RcloneRemoteName" # Location for rclone mount
LocalFilesLocation="$LocalFilesShare/$RcloneRemoteName" # Location for local files to be merged with rclone mount
MergerFSMountLocation="$MergerfsMountShare/$RcloneRemoteName" # Rclone data folder location

####### create directories for rclone mount and mergerfs mounts #######
mkdir -p /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName # for script files
mkdir -p $RcloneCacheShare/cache/$RcloneRemoteName # for cache files
if [[ $LocalFilesShare == 'ignore' ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: Not creating local folders as requested."
    LocalFilesLocation="/tmp/$RcloneRemoteName"
    eval mkdir -p $LocalFilesLocation
else
    echo "$(date "+%d.%m.%Y %T") INFO: Creating local folders."
    eval mkdir -p $LocalFilesLocation/"$MountFolders"
fi
mkdir -p $RcloneMountLocation

if [[ $MergerfsMountShare == 'ignore' ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: Not creating MergerFS folders as requested."
else
    echo "$(date "+%d.%m.%Y %T") INFO: Creating MergerFS folders."
    mkdir -p $MergerFSMountLocation
fi

####### Check if script is already running #######
echo "$(date "+%d.%m.%Y %T") INFO: *** Starting mount of remote ${RcloneRemoteName}"
echo "$(date "+%d.%m.%Y %T") INFO: Checking if this script is already running."
if [[ -f "/mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running" ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: Exiting script as already running."
    exit
else
    echo "$(date "+%d.%m.%Y %T") INFO: Script not running - proceeding."
    touch /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
fi

####### Checking have connectivity #######
echo "$(date "+%d.%m.%Y %T") INFO: *** Checking if online"
ping -q -c2 google.com > /dev/null # -q quiet, -c number of pings to perform
if [ $? -eq 0 ]; then # ping returns exit status 0 if successful
    echo "$(date "+%d.%m.%Y %T") PASSED: *** Internet online"
else
    echo "$(date "+%d.%m.%Y %T") FAIL: *** No connectivity. Will try again on next run"
    rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
    exit
fi

####### Create Rclone Mount #######

# Check If Rclone Mount Already Created
if [[ -f "$RcloneMountLocation/mountcheck" ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: Success ${RcloneRemoteName} remote is already mounted."
else
    echo "$(date "+%d.%m.%Y %T") INFO: Mount not running. Will now mount ${RcloneRemoteName} remote."
    # Creating mountcheck file in case it doesn't already exist
    echo "$(date "+%d.%m.%Y %T") INFO: Recreating mountcheck file for ${RcloneRemoteName} remote."
    touch mountcheck
    rclone copy mountcheck $RcloneRemoteName: -vv --no-traverse
    # Check bind option
    if [[ $CreateBindMount == 'Y' ]]; then
        echo "$(date "+%d.%m.%Y %T") INFO: *** Checking if IP address ${RCloneMountIP} already created for remote ${RcloneRemoteName}"
        ping -q -c2 $RCloneMountIP > /dev/null # -q quiet, -c number of pings to perform
        if [ $? -eq 0 ]; then # ping returns exit status 0 if successful
            echo "$(date "+%d.%m.%Y %T") INFO: *** IP address ${RCloneMountIP} already created for remote ${RcloneRemoteName}"
        else
            echo "$(date "+%d.%m.%Y %T") INFO: *** Creating IP address ${RCloneMountIP} for remote ${RcloneRemoteName}"
            ip addr add $RCloneMountIP/24 dev $NetworkAdapter label $NetworkAdapter:$VirtualIPNumber
        fi
        echo "$(date "+%d.%m.%Y %T") INFO: *** Created bind mount ${RCloneMountIP} for remote ${RcloneRemoteName}"
    else
        RCloneMountIP=""
        echo "$(date "+%d.%m.%Y %T") INFO: *** Creating mount for remote ${RcloneRemoteName}"
    fi
    # create rclone mount
    rclone mount \
    $Command1 $Command2 $Command3 $Command4 $Command5 $Command6 $Command7 $Command8 \
    --allow-other \
    --dir-cache-time $RcloneMountDirCacheTime \
    --log-level INFO \
    --poll-interval 15s \
    --cache-dir=$RcloneCacheShare/cache/$RcloneRemoteName \
    --vfs-cache-mode full \
    --vfs-cache-max-size $RcloneCacheMaxSize \
    --vfs-cache-max-age $RcloneCacheMaxAge \
    --bind=$RCloneMountIP \
    $RcloneRemoteName: $RcloneMountLocation &

    # Check if Mount Successful
    echo "$(date "+%d.%m.%Y %T") INFO: sleeping for 5 seconds"
    # slight pause to give mount time to finalise
    sleep 5
    echo "$(date "+%d.%m.%Y %T") INFO: continuing..."
    if [[ -f "$RcloneMountLocation/mountcheck" ]]; then
        echo "$(date "+%d.%m.%Y %T") INFO: Successful mount of ${RcloneRemoteName} mount."
    else
        echo "$(date "+%d.%m.%Y %T") CRITICAL: ${RcloneRemoteName} mount failed - please check for problems. Stopping dockers"
        docker stop $DockerStart
        rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
        exit
    fi
fi

####### Start MergerFS Mount #######

if [[ $MergerfsMountShare == 'ignore' ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: Not creating mergerfs mount as requested."
else
    if [[ -f "$MergerFSMountLocation/mountcheck" ]]; then
        echo "$(date "+%d.%m.%Y %T") INFO: Check successful, ${RcloneRemoteName} mergerfs mount in place."
    else
        # check if mergerfs already installed
        if [[ -f "/bin/mergerfs" ]]; then
            echo "$(date "+%d.%m.%Y %T") INFO: Mergerfs already installed, proceeding to create mergerfs mount"
        else
            # Build mergerfs binary
            echo "$(date "+%d.%m.%Y %T") INFO: Mergerfs not installed - installing now."
            mkdir -p /mnt/user/appdata/other/rclone/mergerfs
            docker run -v /mnt/user/appdata/other/rclone/mergerfs:/build --rm trapexit/mergerfs-static-build
            mv /mnt/user/appdata/other/rclone/mergerfs/mergerfs /bin
            # check if mergerfs install successful
            echo "$(date "+%d.%m.%Y %T") INFO: *sleeping for 5 seconds"
            sleep 5
            if [[ -f "/bin/mergerfs" ]]; then
                echo "$(date "+%d.%m.%Y %T") INFO: Mergerfs installed successfully, proceeding to create mergerfs mount."
            else
                echo "$(date "+%d.%m.%Y %T") ERROR: Mergerfs not installed successfully. Please check for errors. Exiting."
                rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
                exit
            fi
        fi
        # Create mergerfs mount
        echo "$(date "+%d.%m.%Y %T") INFO: Creating ${RcloneRemoteName} mergerfs mount."
        # Extra Mergerfs folders
        if [[ $LocalFilesShare2 != 'ignore' ]]; then
            echo "$(date "+%d.%m.%Y %T") INFO: Adding ${LocalFilesShare2} to ${RcloneRemoteName} mergerfs mount."
            LocalFilesShare2=":$LocalFilesShare2"
        else
            LocalFilesShare2=""
        fi
        if [[ $LocalFilesShare3 != 'ignore' ]]; then
            echo "$(date "+%d.%m.%Y %T") INFO: Adding ${LocalFilesShare3} to ${RcloneRemoteName} mergerfs mount."
            LocalFilesShare3=":$LocalFilesShare3"
        else
            LocalFilesShare3=""
        fi
        if [[ $LocalFilesShare4 != 'ignore' ]]; then
            echo "$(date "+%d.%m.%Y %T") INFO: Adding ${LocalFilesShare4} to ${RcloneRemoteName} mergerfs mount."
            LocalFilesShare4=":$LocalFilesShare4"
        else
            LocalFilesShare4=""
        fi
        # make sure mergerfs mount point is empty
        mv $MergerFSMountLocation $LocalFilesLocation
        mkdir -p $MergerFSMountLocation
        # mergerfs mount command
        mergerfs $LocalFilesLocation:$RcloneMountLocation$LocalFilesShare2$LocalFilesShare3$LocalFilesShare4 $MergerFSMountLocation -o rw,async_read=false,use_ino,allow_other,func.getattr=newest,category.action=all,category.create=ff,cache.files=partial,dropcacheonclose=true
        # check if mergerfs mount successful
        echo "$(date "+%d.%m.%Y %T") INFO: Checking if ${RcloneRemoteName} mergerfs mount created."
        if [[ -f "$MergerFSMountLocation/mountcheck" ]]; then
            echo "$(date "+%d.%m.%Y %T") INFO: Check successful, ${RcloneRemoteName} mergerfs mount created."
        else
            echo "$(date "+%d.%m.%Y %T") CRITICAL: ${RcloneRemoteName} mergerfs mount failed. Stopping dockers."
            docker stop $DockerStart
            rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
            exit
        fi
    fi
fi

####### Starting Dockers That Need Mergerfs Mount To Work Properly #######

# only start dockers once
if [[ -f "/mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/dockers_started" ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: dockers already started."
else
    # Check CA Appdata plugin not backing up or restoring
    if [ -f "/tmp/ca.backup2/tempFiles/backupInProgress" ] || [ -f "/tmp/ca.backup2/tempFiles/restoreInProgress" ] ; then
        echo "$(date "+%d.%m.%Y %T") INFO: Appdata Backup plugin running - not starting dockers."
    else
        touch /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/dockers_started
        echo "$(date "+%d.%m.%Y %T") INFO: Starting dockers."
docker start $DockerStart fi fi rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running echo "$(date "+%d.%m.%Y %T") INFO: Script complete" exit Quote Link to comment
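For anyone adapting the script: its entire notion of "mount healthy" is the presence of a `mountcheck` sentinel file seen through the mount point. Below is a minimal standalone sketch of that pattern, using a hypothetical `/tmp/demo_mount` path in place of the real rclone/mergerfs mount locations, so you can see the check in isolation:

```shell
#!/bin/bash
# Hypothetical stand-in for the real mount point; substitute your own
# $RcloneMountLocation or $MergerFSMountLocation when adapting this.
MountLocation="/tmp/demo_mount"

# In the real script, mountcheck is copied to the remote with
# "rclone copy mountcheck $RcloneRemoteName:"; here we just create it
# locally so the test below has something to find.
mkdir -p "$MountLocation"
touch "$MountLocation/mountcheck"

# Same success/failure test the mount script uses after mounting
if [[ -f "$MountLocation/mountcheck" ]]; then
    echo "INFO: mount check passed"
else
    echo "CRITICAL: mount check failed"
    exit 1
fi
```

Because `rclone mount` is started in the background with `&`, the `sleep 5` before this check matters: on a slow remote the sentinel may simply not be visible yet, so lengthening that pause is worth trying before debugging the mount command itself.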