DZMM Posted November 9, 2020 Author Share Posted November 9, 2020 @Lucka glad you got them all working without any hiccups. When they run smoothly in the background it is really good, if you have enough bandwidth. It's saved me thousands of pounds in storage and a fair chunk in electricity costs from fewer HDDs spinning. Using traktarr is a good addition
MowMdown Posted November 9, 2020 Share Posted November 9, 2020 @axeman, no, if you pay close attention to the path for that move command, I'm using "/mnt/user/media" not "/mnt/disks/media" (I don't mount to /user/). My rclone mount is under "/mnt/disks/media", so it does not interfere with the move. I'm essentially moving the files from /mnt/user/media to the "crypt:media" mount, but as far as unraid is concerned the file isn't actually moving, since no matter where I put the file it always shows up in /mnt/disks/media.
axeman Posted November 10, 2020 Share Posted November 10, 2020 10 hours ago, MowMdown said: @axeman, no, if you pay close attention to the path for that move command, I'm using "/mnt/user/media" not "/mnt/disks/media" (I don't mount to /user/). My rclone mount is under "/mnt/disks/media", so it does not interfere with the move. I'm essentially moving the files from /mnt/user/media to the "crypt:media" mount, but as far as unraid is concerned the file isn't actually moving, since no matter where I put the file it always shows up in /mnt/disks/media.
Thanks, I noticed that - but I don't really understand the difference. I thought perhaps you had just typed it as an example of your script.
DZMM Posted November 12, 2020 Author Share Posted November 12, 2020 (edited) On 9/8/2020 at 4:41 PM, DZMM said:
Can I get some help testing please. v1.53 of rclone (remember you have to remove and reinstall the plugin to update it) now supports better caching, where files can be cached locally. I'll add a variable in for setting the cache location once it's all working, but for now can a few people try these settings in the mount script:
# create rclone mount
rclone mount \
--allow-other \
--dir-cache-time 720h \
--log-level INFO \
--poll-interval 15s \
--cache-dir=/mnt/user/downloads/rclone/tdrive_vfs/cache \
--vfs-cache-mode full \
--vfs-cache-max-size 500G \
--vfs-cache-max-age 336h \
--bind=$RCloneMountIP \
$RcloneRemoteName: $RcloneMountLocation &
Set the cache-dir to wherever is convenient. The settings above will keep up to 500GB of files downloaded from gdrive for up to 2 weeks, with the oldest removed first when full. I think this will work well with my kids, who keep stopping and starting the same file, or when plex is indexing or doing other operations. However, I don't think it will help majorly with playback for my setup, unless a user tries to open the same file within a few hours. Dunno. There's another new setting --vfs-read-ahead that could potentially help with forward skipping/smoother playback by downloading more data ahead of the current stream position, which we can play with as well.
Edit: poll-interval shortens the default 1m, so should hopefully add a bit more butter to updates.
Edit 2: Initial launch times are much faster even before the cache kicks in!!
I've just updated the mount script to support local file caching. In my experience this has vastly improved playback and reduced transfers, and it is definitely worth an upgrade. To utilise it you need to be on rclone v1.53+. The new toggles to set are in the REQUIRED SETTINGS block:
RcloneCacheShare="/mnt/user0/mount_rclone" # location of rclone cache files without trailing slash e.g. /mnt/user0/mount_rclone
RcloneCacheMaxSize="400G" # Maximum size of rclone cache
RcloneCacheMaxAge="336h" # Maximum age of cache files
I use /user0 as my location as I have 7 teamdrives mounted, so I don't have enough space on my SSD. Choose wherever works for you. https://github.com/BinsonBuzz/unraid_rclone_mount/blob/latest---mergerfs-support/rclone_mount Edited November 12, 2020 by DZMM
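As a sketch of how the three new toggles feed the mount, the variables expand into rclone's vfs caching flags (the values and the /cache subfolder here are illustrative, not a prescription):

```shell
# Illustrative values - point RcloneCacheShare at whatever disk has room
RcloneCacheShare="/mnt/user0/mount_rclone"   # cache location, no trailing slash
RcloneCacheMaxSize="400G"                    # prune oldest cached files past this size
RcloneCacheMaxAge="336h"                     # evict anything older than two weeks

# The toggles expand into the vfs caching flags passed to rclone mount
CacheFlags="--cache-dir=${RcloneCacheShare}/cache \
--vfs-cache-mode full \
--vfs-cache-max-size ${RcloneCacheMaxSize} \
--vfs-cache-max-age ${RcloneCacheMaxAge}"
echo "${CacheFlags}"
```

With --vfs-cache-mode full, the cache holds whole downloaded files, which is why an SSD-sized share can fill quickly and a larger array location like /mnt/user0 can make sense.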
InCaseOf Posted November 13, 2020 Share Posted November 13, 2020 (edited) Hey! So I have a problem, most likely because I'm new to unraid lol. I got the upload script to work as expected: files are passed to gdrive with encryption, then removed from local. My problem is with my mount script, I assume. In the console, running rclone lsd gdrive: or rclone lsd gdrive_crypt: returns all of my folders as expected. However, when I run my mount script the folder is empty.
Here is my config:
[gdrive]
type = drive
client_id = *****
client_secret = *****
scope = drive
token = {"access_token":*****"}
[gdrive_crypt]
type = crypt
remote = gdrive:crypt
filename_encryption = standard
directory_name_encryption = true
password = ****
password2 = ******
and here is my mount script:
RcloneRemoteName="gdrive_crypt"
RcloneMountShare="/mnt/user/the_stuff/gdrive"
RcloneMountDirCacheTime="720h"
LocalFilesShare="/mnt/user/the_stuff/local"
RcloneCacheShare="/mnt/user0/the_stuff/gdrive"
RcloneCacheMaxSize="400G"
RcloneCacheMaxAge="336h"
MergerfsMountShare="/mnt/user/mount_mergerfs"
DockerStart="nzbget plex sonarr radarr ombi"
Here is its output:
Script location: /tmp/user.scripts/tmpScripts/rclone_mount_plugin/script
Note that closing this window will abort the execution of this script
12.11.2020 19:00:00 INFO: Creating local folders.
12.11.2020 19:00:00 INFO: Creating MergerFS folders.
12.11.2020 19:00:00 INFO: *** Starting mount of remote gdrive_crypt
12.11.2020 19:00:00 INFO: Checking if this script is already running.
12.11.2020 19:00:00 INFO: Script not running - proceeding.
12.11.2020 19:00:00 INFO: *** Checking if online
12.11.2020 19:00:02 PASSED: *** Internet online
12.11.2020 19:00:02 INFO: Success gdrive_crypt remote is already mounted.
12.11.2020 19:00:02 INFO: Check successful, gdrive_crypt mergerfs mount in place.
12.11.2020 19:00:02 INFO: Starting dockers.
Error response from daemon: No such container: nzbget
Error response from daemon: No such container: plex
Error response from daemon: No such container: sonarr
Error response from daemon: No such container: radarr
Error response from daemon: No such container: ombi
Error: failed to start containers: nzbget, plex, sonarr, radarr, ombi
12.11.2020 19:00:02 INFO: Script complete
and here is my upload script:
# REQUIRED SETTINGS
RcloneCommand="move" # choose your rclone command e.g. move, copy, sync
RcloneRemoteName="gdrive_crypt" # Name of rclone remote mount WITHOUT ':'.
RcloneUploadRemoteName="gdrive_crypt" # If you have a second remote created for uploads put it here. Otherwise use the same remote as RcloneRemoteName.
LocalFilesShare="/mnt/user/the_stuff/local" # location of the local files without trailing slash you want rclone to use
RcloneMountShare="/mnt/user/the_stuff/gdrive" # where your rclone mount is located without trailing slash e.g. /mnt/user/mount_rclone
MinimumAge="15m" # sync files suffix ms|s|m|h|d|w|M|y
ModSort="ascending" # "ascending" oldest files first, "descending" newest files first
It outputs this:
12.11.2020 19:23:06 INFO: *** Rclone move selected. Files will be moved from /mnt/user/the_stuff/local/gdrive_crypt for gdrive_crypt ***
12.11.2020 19:23:06 INFO: *** Starting rclone_upload script for gdrive_crypt ***
12.11.2020 19:23:06 INFO: Script not running - proceeding.
12.11.2020 19:23:06 INFO: Checking if rclone installed successfully.
12.11.2020 19:23:06 INFO: rclone installed successfully - proceeding with upload.
12.11.2020 19:23:06 INFO: Uploading using upload remote gdrive_crypt
12.11.2020 19:23:06 INFO: *** Using rclone move - will add --delete-empty-src-dirs to upload.
2020/11/12 19:23:06 DEBUG : --min-age 15m0s to 2020-11-12 19:08:06.429286925 -0800 PST m=-899.987925388
2020/11/12 19:23:06 DEBUG : rclone: Version "v1.53.2" starting with parameters ["rcloneorig" "--config" "/boot/config/plugins/rclone/.rclone.conf" "move" "/mnt/user/the_stuff/local/gdrive_crypt" "gdrive_crypt:" "--user-agent=gdrive_crypt" "-vv" "--buffer-size" "512M" "--drive-chunk-size" "512M" "--tpslimit" "8" "--checkers" "8" "--transfers" "4" "--order-by" "modtime,ascending" "--min-age" "15m" "--exclude" "downloads/**" "--exclude" "*fuse_hidden*" "--exclude" "*_HIDDEN" "--exclude" ".recycle**" "--exclude" ".Recycle.Bin/**" "--exclude" "*.backup~*" "--exclude" "*.partial~*" "--drive-stop-on-upload-limit" "--bwlimit" "01:00,off 08:00,15M 16:00,12M" "--bind=" "--delete-empty-src-dirs"]
2020/11/12 19:23:06 DEBUG : Creating backend with remote "/mnt/user/the_stuff/local/gdrive_crypt"
2020/11/12 19:23:06 DEBUG : Using config file from "/boot/config/plugins/rclone/.rclone.conf"
2020/11/12 19:23:06 INFO : Starting bandwidth limiter at 12MBytes/s
2020/11/12 19:23:06 INFO : Starting HTTP transaction limiter: max 8 transactions/s with burst 1
2020/11/12 19:23:06 DEBUG : Creating backend with remote "gdrive_crypt:"
2020/11/12 19:23:06 DEBUG : Creating backend with remote "gdrive:crypt"
2020/11/12 19:23:06 DEBUG : Google drive root 'crypt': root_folder_id = "****" - save this in the config to speed up startup
2020/11/12 19:23:06 DEBUG : downloads: Excluded
2020/11/12 19:23:07 DEBUG : Encrypted drive 'gdrive_crypt:': Waiting for checks to finish
2020/11/12 19:23:07 DEBUG : Encrypted drive 'gdrive_crypt:': Waiting for transfers to finish
2020/11/12 19:23:07 DEBUG : tv: Removing directory
2020/11/12 19:23:07 DEBUG : movies: Removing directory
2020/11/12 19:23:07 DEBUG : Local file system at /mnt/user/the_stuff/local/gdrive_crypt: deleted 2 directories
2020/11/12 19:23:07 INFO : There was nothing to transfer
2020/11/12 19:23:07 INFO : Transferred: 0 / 0 Bytes, -, 0 Bytes/s, ETA -
Elapsed time: 0.7s
2020/11/12 19:23:07 DEBUG : 7 go routines active
12.11.2020 19:23:07 INFO: Not utilising service accounts.
12.11.2020 19:23:07 INFO: Script complete
Going to the path /mnt/user/the_stuff/gdrive, it contains /cache and /gdrive_crypt, and both are empty. I know in spaceinvader's tutorial his mount script ran in the background, but when I try to do that it runs for a second then stops. However, if I run it once it says it mounted, and then if I try again it says that it's already mounted. Please let me know if you need me to provide more info. Thanks a lot for the help, and thanks for the scripts! Edited November 13, 2020 by InCaseOf
axeman Posted November 14, 2020 Share Posted November 14, 2020 (edited) On 11/9/2020 at 9:38 AM, MowMdown said: @axeman, no, if you pay close attention to the path for that move command, I'm using "/mnt/user/media" not "/mnt/disks/media" (I don't mount to /user/). My rclone mount is under "/mnt/disks/media", so it does not interfere with the move. I'm essentially moving the files from /mnt/user/media to the "crypt:media" mount, but as far as unraid is concerned the file isn't actually moving, since no matter where I put the file it always shows up in /mnt/disks/media.
So I'm having trouble creating files on the union location. It's strange, because if I go directly to the media_vfs mount I can create files, but I can't on the media one. I even tried installing Unassigned Devices and updating the paths to /disks/ instead of /user/. What could I be doing wrong? Thanks for your time.
Edit: this shows up in the script log:
/test2.txt: WriteFileHandle: Can't open for write without O_TRUNC on existing file without --vfs-cache-mode >= writes
/test2.txt: WriteFileHandle: Can't open for write without O_TRUNC on existing file without --vfs-cache-mode >= writes
An rclone forum post said to set the cache mode... but I see we are already doing that in the mount script. It seems to happen regardless of the :nc modifier. Edited November 14, 2020 by axeman
MowMdown Posted November 14, 2020 Share Posted November 14, 2020 When writing to the union mount directory "media" (the non-"vfs" one), it shouldn't be touching the cache, because it should only write to the local drive, your first upstream in the union setup. Sounds like maybe you should check the spelling/case of that first path. You might also need to add the -vv flag to the mount command so you can verbosely debug the issue further.
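A minimal sketch of that suggestion, with placeholder remote and mount point rather than the script's real values: append -vv to the existing mount command, and while there confirm the mount carries at least --vfs-cache-mode writes, since that is what the O_TRUNC error message complains about.

```shell
# Placeholder remote/mountpoint - merge these flags into your existing mount line
MountCmd="rclone mount media_vfs: /mnt/disks/media_vfs \
--allow-other \
--vfs-cache-mode writes \
-vv"
echo "${MountCmd}"
```

Running the mount in the foreground with -vv then logs every file open and which branch of the union it hits, which should show where the write attempt is being rejected.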
live4ever Posted November 15, 2020 Share Posted November 15, 2020 @DZMM After a server reboot I can't seem to get the rclone_mount script (same one as in my last post) to work. All I'm seeing is:
————-
Script Starting Nov 14, 2020 23:51.50
Full logs for this script are available at /tmp/user.scripts/tmpScripts/rclone_mount/log.txt
14.11.2020 23:51:50 INFO: Creating local folders.
14.11.2020 23:51:50 INFO: *** Starting mount of remote gdrive_media_vfs
14.11.2020 23:51:50 INFO: Checking if this script is already running.
14.11.2020 23:51:50 INFO: Exiting script as already running.
Script Finished Nov 14, 2020 23:51.50
Full logs for this script are available at /tmp/user.scripts/tmpScripts/rclone_mount/log.txt
_______
Thanks
M1kep_ Posted November 15, 2020 Share Posted November 15, 2020 1 hour ago, live4ever said: @DZMM After a server reboot I can't seem to get the rclone_mount script (same one as in my last post) to work. All I'm seeing is:
Script Starting Nov 14, 2020 23:51.50
Full logs for this script are available at /tmp/user.scripts/tmpScripts/rclone_mount/log.txt
14.11.2020 23:51:50 INFO: Creating local folders.
14.11.2020 23:51:50 INFO: *** Starting mount of remote gdrive_media_vfs
14.11.2020 23:51:50 INFO: Checking if this script is already running.
14.11.2020 23:51:50 INFO: Exiting script as already running.
Script Finished Nov 14, 2020 23:51.50
Full logs for this script are available at /tmp/user.scripts/tmpScripts/rclone_mount/log.txt
Thanks
If the mount wasn't killed nicely, there is most likely still a mount_running file in place. I'd also confirm with ps piped to grep that the mount is indeed not running properly:
ps aux | grep rclone
If the mount script really isn't running, then you should run the rclone_unmount script, as that will clean up the necessary lock files. The deletion commands used by the script are:
find /mnt/user/appdata/other/rclone/remotes -name dockers_started* -delete
find /mnt/user/appdata/other/rclone/remotes -name mount_running* -delete
find /mnt/user/appdata/other/rclone/remotes -name upload_running* -delete
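A hypothetical helper wrapping those three deletions behind a process check (the pgrep guard and function name are my additions, not part of the scripts; the remotes path is the scripts' default):

```shell
#!/bin/bash
# Hypothetical cleanup helper; REMOTES defaults to the lock-file path the scripts use
REMOTES="${REMOTES:-/mnt/user/appdata/other/rclone/remotes}"

cleanup_locks() {
    # Guard: if an rclone mount process is alive, the lock files are doing their job
    if pgrep -f "rclone mount" > /dev/null; then
        echo "mount still running - leaving lock files in place"
        return 0
    fi
    # No live mount, so the lock files are stale - same deletions the unmount script runs
    find "$REMOTES" -name "dockers_started*" -delete 2>/dev/null
    find "$REMOTES" -name "mount_running*" -delete 2>/dev/null
    find "$REMOTES" -name "upload_running*" -delete 2>/dev/null
    echo "stale lock files removed"
}

cleanup_locks
```

Running the provided rclone_unmount script remains the supported route; this only illustrates the stale-lock logic it applies.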
M1kep_ Posted November 15, 2020 Share Posted November 15, 2020 @DZMM Have you ever seen mergerfs straight up crash? Today we've had it happen twice: the mergerfs process crashes with no logs (that I'm aware of). The rclone mount and various other scripts are still functioning as expected. Do you know if there is some way to review errors or crash reasons for mergerfs?
DZMM Posted November 15, 2020 Author Share Posted November 15, 2020 1 hour ago, M1kep_ said: @DZMM Have you ever seen mergerfs straight up crash? Today we've had it happen twice: the mergerfs process crashes with no logs (that I'm aware of). The rclone mount and various other scripts are still functioning as expected. Do you know if there is some way to review errors or crash reasons for mergerfs?
Not sure. Sometimes it stops working (rarely) and I have to do a quick tidy-up, e.g. my dockers might not have stopped in time and have managed to physically add files to /mount_mergerfs, which I then have to move manually to /local so I can re-mount.
Bolagnaise Posted November 15, 2020 Share Posted November 15, 2020 (edited) I seem to be getting some episodes showing up as duplicates in plex, in the same file location, since switching to mergerfs. Kind of confused as to why it would be happening to only some shows. Sonarr and plex are both mapped to /user, which is mapped to /mnt/user. Edited November 15, 2020 by Bolagnaise
DZMM Posted November 15, 2020 Author Share Posted November 15, 2020 1 hour ago, Bolagnaise said: I seem to be getting some episodes showing up as duplicates in plex, in the same file location, since switching to mergerfs. Kind of confused as to why it would be happening to only some shows. Sonarr and plex are both mapped to /user, which is mapped to /mnt/user.
Have you looked at the actual path to see if there are two files there? I'm not sure if it's an rclone/script or sonarr/radarr issue, but this happens to me sometimes as well, e.g. the same file like in your scenario, or two versions of the same show/movie. If I spot them and can be bothered, I tidy up, but I have so much content now that if it plays I don't do anything. The one thing I am anal about is fixing Plex posters, as I hate the ones with lots of text on them that it seems to default to! Also movie ratings, as I like to have mine consistent because I use them to filter my kids' libraries, e.g. they can only see GB/U, GB/PG, and GB/12.
live4ever Posted November 15, 2020 Share Posted November 15, 2020 8 hours ago, M1kep_ said: If the mount wasn't killed nicely, there is most likely still a mount_running file in place. I'd also confirm with ps piped to grep that the mount is indeed not running properly:
ps aux | grep rclone
If the mount script really isn't running, then you should run the rclone_unmount script, as that will clean up the necessary lock files. The deletion commands used by the script are:
find /mnt/user/appdata/other/rclone/remotes -name dockers_started* -delete
find /mnt/user/appdata/other/rclone/remotes -name mount_running* -delete
find /mnt/user/appdata/other/rclone/remotes -name upload_running* -delete
@M1kep_ Thanks - the command:
root@Tower:~# find /mnt/user/appdata/other/rclone/remotes -name mount_running*
/mnt/user/appdata/other/rclone/remotes/gdrive_media_vfs/mount_running
find /mnt/user/appdata/other/rclone/remotes -name mount_running* -delete
allowed the rclone_mount script to start and download/install the mergerfs software.
Bolagnaise Posted November 16, 2020 Share Posted November 16, 2020 (edited) 16 hours ago, DZMM said: Have you looked at the actual path to see if there are two files there? I'm not sure if it's an rclone/script or sonarr/radarr issue, but this happens to me sometimes as well, e.g. the same file like in your scenario, or two versions of the same show/movie. If I spot them and can be bothered, I tidy up, but I have so much content now that if it plays I don't do anything. The one thing I am anal about is fixing Plex posters, as I hate the ones with lots of text on them that it seems to default to! Also movie ratings, as I like to have mine consistent because I use them to filter my kids' libraries, e.g. they can only see GB/U, GB/PG, and GB/12.
There aren't two versions, and if I delete one of them it deletes both of them, as the other file becomes unavailable and no longer plays. I tried the plex dance, but they continue to come back as duplicates even after..... Edited November 16, 2020 by Bolagnaise
axeman Posted November 17, 2020 Share Posted November 17, 2020 On 11/14/2020 at 10:12 AM, MowMdown said: When writing to the union mount directory "media" (the non-"vfs" one), it shouldn't be touching the cache, because it should only write to the local drive, your first upstream in the union setup. Sounds like maybe you should check the spelling/case of that first path. You might also need to add the -vv flag to the mount command so you can verbosely debug the issue further.
This might be a me thing - since I need the rclone mounts available to my Windows machines, I have them in /mnt/user. If I try to copy a file into the unioned mount from inside of unraid via MC, it works exactly as you'd want: the file goes right to the local share. However, if I do the same from a Windows machine, it fails. Interestingly, this doesn't seem to be a problem on an Android device (using Solid Explorer), nor does it happen with the @DZMM mergerfs-based scripts.
eqjunkie829 Posted November 17, 2020 Share Posted November 17, 2020 On 11/12/2020 at 2:39 AM, DZMM said: I've just updated the mount script to support local file caching. In my experience this has vastly improved playback and reduced transfers, and it is definitely worth an upgrade. To utilise it you need to be on rclone v1.53+. The new toggles to set are in the REQUIRED SETTINGS block:
RcloneCacheShare="/mnt/user0/mount_rclone" # location of rclone cache files without trailing slash e.g. /mnt/user0/mount_rclone
RcloneCacheMaxSize="400G" # Maximum size of rclone cache
RcloneCacheMaxAge="336h" # Maximum age of cache files
I use /user0 as my location as I have 7 teamdrives mounted, so I don't have enough space on my SSD. Choose wherever works for you. https://github.com/BinsonBuzz/unraid_rclone_mount/blob/latest---mergerfs-support/rclone_mount
I've used rclone with mergerfs on my seedbox with no issues; however, this is my first time using rclone on unraid and it's not working properly. Using one of your most recent scripts with local caching, rclone was downloading everything from my google drive, and it's now sitting in /mnt/user0/mount_rclone/cache/gsuite/vfs/gsuite/Plex Folder/. I'm not sure what to adjust in the script to stop it from doing that. Any guidance is appreciated!
DZMM Posted November 17, 2020 Author Share Posted November 17, 2020 20 minutes ago, eqjunkie829 said: I've used rclone with mergerfs on my seedbox with no issues; however, this is my first time using rclone on unraid and it's not working properly. Using one of your most recent scripts with local caching, rclone was downloading everything from my google drive, and it's now sitting in /mnt/user0/mount_rclone/cache/gsuite/vfs/gsuite/Plex Folder/. I'm not sure what to adjust in the script to stop it from doing that. Any guidance is appreciated!
Are you sure it's downloading everything - maybe Plex is analysing files as part of scheduled maintenance? If you want to reduce the size of the cache, reduce RcloneCacheMaxSize="400G".
eqjunkie829 Posted November 17, 2020 Share Posted November 17, 2020 5 minutes ago, DZMM said: Are you sure it's downloading everything - maybe Plex is analysing files as part of scheduled maintenance? If you want to reduce the size of the cache, reduce RcloneCacheMaxSize="400G".
Well, the folder on my unraid box didn't exist a few days ago, as it was created by running the scripts you have on github. Also, I tracked the data usage on my unifi router and it shows about 3TB of data received through the google api over the weekend. I'm really sure the script caused it to start downloading my whole google drive to the local cache, as I only installed rclone a week ago and have only used your script so far.
eqjunkie829 Posted November 17, 2020 Share Posted November 17, 2020 Is there a way for me to modify the Mount Script Version 0.96.9.1 to disable caching completely?
DZMM Posted November 18, 2020 Author Share Posted November 18, 2020 3 minutes ago, eqjunkie829 said: Is there a way for me to modify the Mount Script Version 0.96.9.1 to disable caching completely?
Roll back. I can't explain the behaviour you're seeing, as I've been running the latest rclone version for a while without any problems.
eqjunkie829 Posted November 18, 2020 Share Posted November 18, 2020 23 minutes ago, DZMM said: Roll back. I can't explain the behaviour you're seeing, as I've been running the latest rclone version for a while without any problems.
Do you have a copy of the script prior to caching being added? I just started with rclone this week and don't have an old version of it.
DZMM Posted November 18, 2020 Author Share Posted November 18, 2020 17 minutes ago, eqjunkie829 said: Do you have a copy of the script prior to caching being added? I just started with rclone this week and don't have an old version of it.
https://github.com/BinsonBuzz/unraid_rclone_mount/commits/latest---mergerfs-support/rclone_mount
crazyhorse90210 Posted November 18, 2020 Share Posted November 18, 2020 For some reason I had a problem with line 241 in the new script, where you check if CA Appdata is backing up or restoring. I had to remove the outside square brackets on the conditional in order for it to work... weird:
# Check CA Appdata plugin not backing up or restoring
if [[ -f "/tmp/ca.backup2/tempFiles/backupInProgress" ] || [ -f "/tmp/ca.backup2/tempFiles/restoreInProgress" ]]; then
I kept getting this error:
/tmp/user.scripts/tmpScripts/rclone_mount/script: line 241: syntax error in conditional expression
/tmp/user.scripts/tmpScripts/rclone_mount/script: line 241: syntax error near `]'
/tmp/user.scripts/tmpScripts/rclone_mount/script: line 241: ` if [[ -f "/tmp/ca.backup2/tempFiles/backupInProgress" ] || [ -f "/tmp/ca.backup2/tempFiles/restoreInProgress" ]] ; then'
Script Finished Nov 17, 2020 16:41.15
So I removed the outside brackets and it worked with no error:
# Check CA Appdata plugin not backing up or restoring
if [ -f "/tmp/ca.backup2/tempFiles/backupInProgress" ] || [ -f "/tmp/ca.backup2/tempFiles/restoreInProgress" ]; then
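For background (generic bash behaviour, not specific to these scripts): [ ] and [[ ]] can't be mixed in one expression. Single brackets are separate commands joined by the shell's ||, while double brackets take || inside a single pair. A minimal demonstration of the two valid forms:

```shell
#!/bin/bash
tmp="$(mktemp)"   # one file that exists, plus one path that doesn't

# Valid: each [ ... ] is its own test command, joined by the shell's ||
if [ -f "$tmp" ] || [ -f "/no/such/file" ]; then
    echo "single brackets: ok"
fi

# Also valid: a single [[ ... ]] pair with || inside it
if [[ -f "$tmp" || -f "/no/such/file" ]]; then
    echo "double brackets: ok"
fi

# Invalid - the syntax error above: [[ ... ] || [ ... ]] mixes the two forms
rm "$tmp"
```

Either valid form works here; the broken line in the script opened with [[ but closed each condition with a single ], which is what the "syntax error near `]'" message points at.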
DZMM Posted November 18, 2020 Author Share Posted November 18, 2020 7 hours ago, crazyhorse90210 said: For some reason I had a problem with line 241 in the new script, where you check if CA Appdata is backing up or restoring. I had to remove the outside square brackets on the conditional in order for it to work... weird:
# Check CA Appdata plugin not backing up or restoring
if [[ -f "/tmp/ca.backup2/tempFiles/backupInProgress" ] || [ -f "/tmp/ca.backup2/tempFiles/restoreInProgress" ]]; then
I kept getting this error:
/tmp/user.scripts/tmpScripts/rclone_mount/script: line 241: syntax error in conditional expression
/tmp/user.scripts/tmpScripts/rclone_mount/script: line 241: syntax error near `]'
/tmp/user.scripts/tmpScripts/rclone_mount/script: line 241: ` if [[ -f "/tmp/ca.backup2/tempFiles/backupInProgress" ] || [ -f "/tmp/ca.backup2/tempFiles/restoreInProgress" ]] ; then'
Script Finished Nov 17, 2020 16:41.15
So I removed the outside brackets and it worked with no error:
# Check CA Appdata plugin not backing up or restoring
if [ -f "/tmp/ca.backup2/tempFiles/backupInProgress" ] || [ -f "/tmp/ca.backup2/tempFiles/restoreInProgress" ]; then
Thanks - I've just added it.