DZMM Posted April 11, 2020 Author

3 hours ago, Stinkpickle said:

I am looking for a clean way to migrate my Movie collection without causing issues. I don't have enough free space on my array to duplicate my Movie directory into the Google Drive mergerfs share. I was going to point the Local directory for these scripts at my current Media share on Unraid. Would that be an OK way to upload my files without moving them? I noticed the directory structure gets the RcloneRemoteName embedded twice. My current mount/upload scripts look like this:

#!/bin/bash
######################
#### Mount Script ####
######################
### Version 0.96.6 ###
######################

####### EDIT ONLY THESE SETTINGS #######

# INSTRUCTIONS
# 1. Change the name of the rclone remote and shares to match your setup
# 2. NOTE: enter RcloneRemoteName WITHOUT ':'
# 3. Optional: include custom command and bind mount settings
# 4. Optional: include extra folders in mergerfs mount

# REQUIRED SETTINGS
RcloneRemoteName="sgdrive_media" # Name of rclone remote mount WITHOUT ':'. NOTE: Choose your encrypted remote for sensitive data
RcloneMountShare="/mnt/user/sgdrive_media_rclone" # where your rclone remote will be located without trailing slash e.g. /mnt/user/mount_rclone
MergerfsMountShare="/mnt/user/sgdrive_media" # location without trailing slash e.g. /mnt/user/mount_mergerfs. Enter 'ignore' to disable
DockerStart="PlexMediaServer NZBGet sonarr radarr radarr4k ombi tautulli" # list of dockers, separated by space, to start once mergerfs mount verified. Remember to disable AUTOSTART for dockers added in docker settings page
LocalFilesShare="/mnt/user/sgdrive_media_local" # location of the local files and MountFolders you want to upload without trailing slash to rclone e.g. /mnt/user/local. Enter 'ignore' to disable
MountFolders={"Movies,Movies-UHD,Music,Other,Personal,TV,TV-Kids"} # comma separated list of folders to create within the mount

#!/bin/bash
######################
### Upload Script ####
######################
### Version 0.95.6 ###
######################

####### EDIT ONLY THESE SETTINGS #######

# INSTRUCTIONS
# 1. Edit the settings below to match your setup
# 2. NOTE: enter RcloneRemoteName WITHOUT ':'
# 3. Optional: Add additional commands or filters
# 4. Optional: Use bind mount settings for potential traffic shaping/monitoring
# 5. Optional: Use service accounts in your upload remote
# 6. Optional: Use backup directory for rclone sync jobs

# REQUIRED SETTINGS
RcloneCommand="move" # choose your rclone command e.g. move, copy, sync
RcloneRemoteName="sgdrive_media" # Name of rclone remote mount WITHOUT ':'.
RcloneUploadRemoteName="sgdrive_media" # If you have a second remote created for uploads put it here. Otherwise use the same remote as RcloneRemoteName.
LocalFilesShare="/mnt/user/sgdrive_media_local" # location of the local files without trailing slash you want rclone to use
RcloneMountShare="/mnt/user/sgdrive_media_rclone" # where your rclone mount is located without trailing slash e.g. /mnt/user/mount_rclone
MinimumAge="15m" # sync files suffix ms|s|m|h|d|w|M|y
ModSort="ascending" # "ascending" oldest files first, "descending" newest files first

And I noticed this resulted in the following share directories:

/mnt/user/sgdrive_media_local/sgdrive_media
/mnt/user/sgdrive_media_rclone/sgdrive_media
/mnt/user/sgdrive_media/sgdrive_media

My current Media share is /mnt/user/Media/. My concern is that if I point my local directory at that, it's going to look for /mnt/user/Media/sgdrive_media/, which does not exist / is empty. What is the best way to accomplish this? Thank you.

Unless I'm missing something, just do LocalFilesShare="/mnt/user/Media/" and then those files will get moved to Google.
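For reference, the doubled folder the poster is seeing comes from the scripts joining each share path with RcloneRemoteName. A minimal sketch of that derivation, using the values from the settings above; the exact join is inferred from the directories the poster observed, so treat it as illustrative rather than a quote of the script:

```shell
#!/bin/bash
# Sketch: how the mount/upload scripts derive their working directories.
# Settings values are taken from the post above; the <share>/<remote>
# join is inferred from the directories the poster observed.
RcloneRemoteName="sgdrive_media"
LocalFilesShare="/mnt/user/sgdrive_media_local"
RcloneMountShare="/mnt/user/sgdrive_media_rclone"
MergerfsMountShare="/mnt/user/sgdrive_media"

# Each share gets the remote name appended as a subfolder
LocalFilesLocation="$LocalFilesShare/$RcloneRemoteName"
RcloneMountLocation="$RcloneMountShare/$RcloneRemoteName"
MergerFSMountLocation="$MergerfsMountShare/$RcloneRemoteName"

echo "$LocalFilesLocation"     # /mnt/user/sgdrive_media_local/sgdrive_media
echo "$RcloneMountLocation"    # /mnt/user/sgdrive_media_rclone/sgdrive_media
echo "$MergerFSMountLocation"  # /mnt/user/sgdrive_media/sgdrive_media
```

This is exactly why pointing LocalFilesShare at /mnt/user/Media would make the scripts look in /mnt/user/Media/sgdrive_media, which is the concern raised in the next post.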
Stinkpickle Posted April 11, 2020

9 minutes ago, DZMM said:

Unless I'm missing something, just do LocalFilesShare="/mnt/user/Media/" and then those files will get moved to google

My concern was the "sgdrive_media" folder residing in the LocalFilesShare folder: wouldn't the script be looking at /mnt/user/Media/sgdrive_media?
DZMM Posted April 12, 2020 Author (edited)

On 4/12/2020 at 12:34 AM, Stinkpickle said:

My concern was the "sgdrive_media" folder residing in the localshare folder, would the script not be looking at /mnt/user/Media/sgdrive_media?

Oh I see. I would do the following:

1. Set LocalFilesShare2="/mnt/user/Media" in the mount script, so that files from /mnt/user/Media just appear in /mnt/user/sgdrive_media/sgdrive_media
2. Make sure Plex has "empty trash automatically" turned off
3. In Plex, add the new /mnt/user/sgdrive_media or /mnt/user/sgdrive_media_mergerfs folders and remove the old /mnt/user/Media
4. Also update the sonarr, radarr, etc. locations
5. Do a full Plex scan; it will update all the file locations and keep all your existing metadata
6. Once the scan is complete and everything looks OK, manually empty the trash
7. Move the files from /mnt/user/Media to /mnt/user/sgdrive_media_local
8. Remove /mnt/user/Media from LocalFilesShare2, as you shouldn't need it now; use /mnt/user/sgdrive_media_local going forwards

Edit: Personally, I would also use this as an opportunity to change MergerfsMountShare="/mnt/user/sgdrive_media" to MergerfsMountShare="/mnt/user/sgdrive_media_mergerfs" to make things easier to follow.

Edit2: Fixed #3

Edited April 13, 2020 by DZMM
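Pulled together, the mount-script settings edits from the steps above would look like the sketch below. Only the changed lines are shown; the rest of the settings stay as posted earlier in the thread, and the note about setting LocalFilesShare2 back to 'ignore' afterwards is an assumption based on the disable convention the other share settings use:

```shell
# Sketch of the edited mount-script settings only; everything else
# stays as posted earlier in the thread.
LocalFilesShare2="/mnt/user/Media"  # temporary extra branch merged into the mergerfs mount (step 1)
MergerfsMountShare="/mnt/user/sgdrive_media_mergerfs"  # optional rename suggested in the edit
```

Once the migration finishes (step 8), LocalFilesShare2 would presumably be set back to 'ignore', leaving /mnt/user/sgdrive_media_local as the only local branch.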
oc3lot Posted April 12, 2020

I'm just setting up these scripts now and am running into permission issues with mount_mergerfs and nzbget. I have set up my nzbget docker container to include the following path mapping: /user --> /mnt/user/

Within NZBGet, I have set the paths to:

MainDir - /user/mount_mergerfs/google/downloads
DestDir - ${MainDir}/complete
InterDir - ${MainDir}/intermediate
NzbDir - ${MainDir}/nzb
QueueDir - ${MainDir}/queue
TempDir - ${MainDir}/tmp

It seems to be having problems with the NzbDir, QueueDir and TempDir directories. I've run Fix Permissions and gone in and run chmod, but still no fix. Anybody have any suggestions?
oc3lot Posted April 12, 2020

Looks like I fixed my own issue. If I understand correctly, ONLY Plex needs to be set to /user --> /mnt/user. Once I reset nzbget to /data --> /mnt/user, it worked fine.
Stinkpickle Posted April 12, 2020

22 hours ago, DZMM said:

Oh I see. I would do the following: …

Thank you, this got me going. Do you happen to know why none of this traffic shows in the beta rclone Web UI? I was looking for a way to track transfer progress.
DZMM Posted April 12, 2020 Author

1 hour ago, Stinkpickle said:

Thank you, this got me going. Do you happen to know why none of this traffic shows on the beta rclone webui? Was looking for a way to track transfer progress.

no
DZMM Posted April 13, 2020 Author

@Stinkpickle I updated #3, as you should always use the mergerfs mount in dockers.
Stinkpickle Posted April 13, 2020

7 hours ago, DZMM said:

@Stinkpickle I updated #3 as you should always use the mergerfs mount in dockers

Thank you
ScottinOkla Posted April 14, 2020

I found this thread after chasing some rabbits; the rabbit I was chasing to land here was AutoRClone. I then read the 10+ pages since it was first mentioned, but I am not sure if this will help with my current situation.

First, I am an Unraid user, so this is a good starting point. My seedbox grabs files from usenet and shoves them up into an unlimited Google Drive. The seedbox is running plexguide, sonarr, radarr and sab.

The problem I would like to tackle is to first copy or sync the data from my unlimited gdrive to a team drive, or multiple team drives. I could create additional team drives for each media type, and utilize encryption. Yesterday I was successful in getting a simple rclone copy running from the gdrive to the team drive. The problem is that it worked for exactly 100 seconds before I got banned for too many API hits on the Drive API. Hopefully cycling through some service accounts will help me with this.

It seems like all of the pieces are here, but they may not be assembled in the right order for my needs, as from what I understand this is for uploading local files to a gdrive/tdrive. Could this be adapted to sync files from an unlimited gdrive to an encrypted tdrive?
DZMM Posted April 14, 2020 Author (edited)

11 minutes ago, ScottinOkla said:

Could this be adapted to sync files from an unlimited gdrive to an encrypted tdrive?

Yes. Use my scripts to:

1. In your rclone config, make sure server_side_across_configs is set to true
2. Use the mount script to mount your gdrive folder
3. Create your service accounts. I'd add about 100 if you're doing server-side copies, as they blow through the 750GB in no time, as you discovered - 15 to 16 are needed per Gbps transferring full pelt per day
4. In the upload script, set your team drive as the destination and the gdrive from #2 as the source

Edited April 14, 2020 by DZMM
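A hedged sketch of how those four steps might come together. The remote names and the team drive ID are placeholders, and while server_side_across_configs, --drive-server-side-across-configs and --drive-service-account-file are standard rclone drive-backend options, verify everything against your own config before running:

```
# rclone.conf excerpt (remote names are illustrative)
[gdrive]
type = drive
server_side_across_configs = true

[tdrive]
type = drive
team_drive = <your team drive ID>
server_side_across_configs = true

# one-off equivalent of the upload-script settings: tdrive as the
# destination and gdrive (the remote mounted in step 2) as the source,
# supplying a service account to work around the 750GB/day quota
rclone copy gdrive: tdrive: \
    --drive-server-side-across-configs \
    --drive-service-account-file=/path/to/sa_0001.json -v
```

One caveat worth checking: a copy can only stay server-side if the data doesn't need re-encrypting in transit, so a crypt remote layered on top of tdrive would need matching crypt settings for the transfer to avoid flowing through your client.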
drogg Posted April 14, 2020

I _think_ I'm getting the hang of this, but my upload script seems to be stuck. I've checked its status every few hours using 'rclone about gdrive_media_vfs:' and nothing has uploaded in a while. When I try to run the upload script, it says "14.04.2020 14:47:30 INFO: Exiting as script already running." so it's not currently uploading anything. Any idea as to what I'm doing wrong here? Thanks for the incredible work on this!
DZMM Posted April 14, 2020 Author

@drogg If you are certain there's not an upload instance running, go into /mnt/user/appdata/other/rclone and your upload remote folder and manually delete the upload running checker file, or run the cleanup script. Basically, the upload script didn't exit properly at some point.
drogg Posted April 14, 2020

5 minutes ago, DZMM said:

@drogg if you are certain there's not an upload instance running … manually delete the upload running checker file, or run the cleanup script.

I ran the script and it's still happening. Are there any logs I can attach that would help diagnose?
Stinkpickle Posted April 14, 2020

54 minutes ago, DZMM said:

@drogg if you are certain there's not an upload instance running … manually delete the upload running checker file, or run the cleanup script.

I am not sure the current cleanup script is deleting the proper check file. This is the cleanup script I have:

#!/bin/bash
#######################
### Cleanup Script ####
#######################
#### Version 0.9.1 ####
#######################

echo "$(date "+%d.%m.%Y %T") INFO: *** Starting rclone_cleanup script ***"

####### Cleanup Tracking Files #######
echo "$(date "+%d.%m.%Y %T") INFO: *** Removing Tracking Files ***"
find /mnt/user/appdata/other/rclone/remotes -name dockers_started -delete
find /mnt/user/appdata/other/rclone/remotes -name mount_running -delete
find /mnt/user/appdata/other/rclone/remotes -name upload_running -delete
echo "$(date "+%d.%m.%Y %T") INFO: *** Finished Cleanup! ***"
exit

And it appears the upload check file is called "/mnt/user/appdata/other/rclone/remotes/%remotename%/upload_running_daily_upload"; that has held up my upload script from running previously. Just a thought.
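If the tracking file does carry a suffix like upload_running_daily_upload, the exact-name find in the script above won't match it. One possible fix is a wildcard pattern; here is a self-contained sketch demonstrating the difference in a temp directory (the real cleanup script would target /mnt/user/appdata/other/rclone/remotes instead):

```shell
#!/bin/bash
# Demonstrate why an exact-name find misses suffixed checker files,
# and how a wildcard catches them. Uses a temp dir for illustration.
tmp=$(mktemp -d)
touch "$tmp/upload_running" "$tmp/upload_running_daily_upload"

# Exact-name match: deletes only the literally-named file
find "$tmp" -name upload_running -delete
after_exact=$(ls -A "$tmp")

# Wildcard match: removes any remaining upload_running* variants
find "$tmp" -name 'upload_running*' -delete
after_glob=$(ls -A "$tmp")

echo "after exact match: $after_exact"   # upload_running_daily_upload
echo "after wildcard:    ${after_glob:-<empty>}"
rm -rf "$tmp"
```

Swapping `-name upload_running` for `-name 'upload_running*'` in the cleanup script would therefore catch the suffixed file too, assuming the naming described above is accurate.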
drogg Posted April 14, 2020

1 hour ago, Stinkpickle said:

I am not sure the current cleanup script is deleting the proper check file. … it appears the upload check file is called "/mnt/user/appdata/other/rclone/remotes/%remotename%/upload_running_daily_upload" …

Is there something I can do to fix it?
DZMM Posted April 14, 2020 Author

2 hours ago, Stinkpickle said:

upload check file is called "/mnt/user/appdata/other/rclone/remotes/%remotename%/upload_running_daily_upload"

What do you have as RcloneRemoteName and RcloneUploadRemoteName in your scripts? You shouldn't be getting %remotename%
drogg Posted April 15, 2020

I modified the script to read "upload_running_daily_upload". The upload script is "running" according to User Scripts, but I'm not sure if anything is actually being uploaded.
JonathanM Posted April 15, 2020

1 hour ago, drogg said:

I'm not sure if anything is actually being uploaded.

Try this:

rclone listremotes

then

rclone size gdrive_media_vfs:

assuming listremotes shows gdrive_media_vfs: as a valid entry. A few minutes or hours later, run the size command again and compare.
drogg Posted April 15, 2020

I think I've found my issue but I'm not sure how to fix it. I get this error on line 3 in the debug:

15.04.2020 10:46:03 INFO: *** Rclone move selected. Files will be moved from /mnt/user/local/gdrive_media_vfs for gdrive_media_vfs ***
15.04.2020 10:46:03 INFO: *** Starting rclone_upload script for gdrive_media_vfs ***
15.04.2020 10:46:03 INFO: Script not running - proceeding.
15.04.2020 10:46:03 INFO: Checking if rclone installed successfully.
15.04.2020 10:46:03 INFO: rclone installed successfully - proceeding with upload.
15.04.2020 10:46:03 INFO: Uploading using upload remote gdrive_media_vfs
15.04.2020 10:46:03 INFO: *** Using rclone move - will add --delete-empty-src-dirs to upload.
====== RCLONE DEBUG ======
/usr/sbin/rclone: line 3: 28517 Killed rcloneorig --config $config "$@"
==========================
15.04.2020 10:47:42 INFO: Not utilising service accounts.
15.04.2020 10:47:42 INFO: Log files scrubbed
15.04.2020 10:47:42 INFO: Script complete
Script Finished Wed, 15 Apr 2020 10:47:42 -0400

Full logs for this script are available at /tmp/user.scripts/tmpScripts/Upload Mount/log.txt
DZMM Posted April 15, 2020 Author

1 hour ago, drogg said:

I think I've found my issue but I'm not sure how to fix it. I get this error on line 3 in the debug.

Out of curiosity, can you try this version of the upload script please: https://github.com/BinsonBuzz/unraid_rclone_mount/blob/84c00906b5019a3ad873a636ad0f003e267608b8/rclone_upload

It's the version before the last change, which added Discord notifications. I haven't tested that change myself as I don't use Discord, although the pull request looked OK.
drogg Posted April 15, 2020

6 minutes ago, DZMM said:

Out of curiosity, can you try this version of the upload script please …

This appears to be working for me. I'll update in a couple of hours to confirm, but it's giving me an actual log of uploads, which is not something I was getting with the previous script!
DZMM Posted April 15, 2020 Author

3 minutes ago, drogg said:

This appears to be working for me. …

Thanks. @jonathanm could you try as well please?

@watchmeexplode5 I might roll back the Discord update as I think it's causing some problems; not sure why, as I don't run it myself.
JonathanM Posted April 15, 2020

39 minutes ago, DZMM said:

@jonathanm could you try as well please?

I'm fine with the 0.95.6 version; I was just trying to help determine whether or not there was actual upload activity despite the errors. My upload log is working with .6. I don't have any Discord configured.
watchmeexplode5 Posted April 15, 2020 (edited)

@DZMM I was thinking the same thing. Probably best; I've noticed it's been causing some users headaches (it's especially difficult to track down errors when the -verbose output isn't pushed to the script logs). A roll-back sounds like the best option 👍

I've also got a new update I've been playing with (gotta keep busy with something during this COVID-19 quarantine 😑). I'll push it when I've made more progress and it's more stable. I'll push it as a new branch so users can still choose the stable version of your scripts.

Edited April 16, 2020 by watchmeexplode5