
Guide: How To Use Rclone To Mount Cloud Drives And Play Files

1542 posts in this topic

28 minutes ago, testdasi said:

You shouldn't be running the upload script on an hourly schedule with such a slow connection, to be honest.

At least don't run it on a schedule until everything has been uploaded.

Actually any schedule is fine - that's why the checker file is there to stop additional upload jobs starting if there's an instance already running.  If you don't make user script changes while an instance is running, the logs should still keep updating to show you what's happening.

20 minutes ago, DZMM said:

Actually any schedule is fine - that's why the checker file is there to stop additional upload jobs starting if there's an instance already running.  If you don't make user script changes while an instance is running, the logs should still keep updating to show you what's happening.

I could have explained it more clearly.

The rclone upload job can be terminated silently (out-of-memory is the most frequent problem I have seen), in which case the upload script bailing out because of a supposedly ongoing upload is actually an indication that something is amiss.

This would be missed by users if the script is run on a regular schedule during the initial massive upload. Most users don't check the upload log (or network stats etc.) and just assume things are chugging along in the background.
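For anyone curious how the checker-file pattern works, here is a minimal sketch (not DZMM's actual script; the lock path and messages are placeholders) that also covers the silent-kill case by treating a lock left behind by a dead process as stale instead of blocking forever:

```shell
#!/bin/bash
# Sketch of a checker-file / lock-file pattern: refuse to start if another
# instance is running, but recover if the previous run was killed silently
# (e.g. OOM) and left its lock behind. LOCKFILE path is a placeholder.
LOCKFILE="${LOCKFILE:-/tmp/upload_running}"

if [ -f "$LOCKFILE" ]; then
    pid=$(cat "$LOCKFILE")
    if kill -0 "$pid" 2>/dev/null; then
        echo "INFO: Upload already running (pid $pid), exiting."
        exit 0
    fi
    # The pid in the lock is no longer alive: previous run died silently.
    echo "INFO: Stale lock for pid $pid found, removing."
    rm -f "$LOCKFILE"
fi

echo $$ > "$LOCKFILE"            # record our pid so the next run can check it
trap 'rm -f "$LOCKFILE"' EXIT    # clean up on any normal or signalled exit

# ... the actual rclone move/copy would run here ...
echo "INFO: Upload finished."
```

This is why a stale "upload already running" message on every scheduled run is worth investigating rather than ignoring.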


Thanks for putting this together, it saves me days of uploads. I have tried to access my encrypted files from a Windows PC using a copy of the rclone config file, but I am unable to find the media folders or files. I was originally using PlexGuide, so I used the same SAs in my upload script. I am using copy instead of move, as I would like to keep a copy on both Unraid and my gdrive.


I use VLANs at home, and this caused all the traffic to leave via the management address even after binding rclone to an interface within the upload script. The fix was to add a second routing table, with routes for the IP I assigned to it.

 

The subnet is 192.168.100.0/24

The gateway is 192.168.100.1

The IP assigned to the rclone upload is 192.168.100.90

 

echo "1 rt2" >> /etc/iproute2/rt_tables
ip route add 192.168.100.0/24 dev br0.100 src 192.168.100.90 table rt2
ip route add default via 192.168.100.1 dev br0.100 table rt2
ip rule add from 192.168.100.90/32 table rt2
ip rule add to 192.168.100.90/32 table rt2
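One thing worth noting: the `echo "1 rt2" >> /etc/iproute2/rt_tables` line above appends a duplicate entry every time it runs. A hedged, idempotent variant (the function name is mine, not part of the original script):

```shell
# Idempotent variant of the rt_tables line above: only append the "1 rt2"
# entry if it is not already present, so re-running the script after a
# reboot or on a schedule doesn't pile up duplicate table entries.
add_rt_table() {
    local tables="$1"    # normally /etc/iproute2/rt_tables
    grep -qE '^1[[:space:]]+rt2$' "$tables" || echo "1 rt2" >> "$tables"
}
```

On the real system you would call `add_rt_table /etc/iproute2/rt_tables` in place of the plain echo.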

 

Edited by Tuftuf


So, is everyone else's server getting hammered with everyone staying at home?  

9 hours ago, DZMM said:

So, is everyone else's server getting hammered with everyone staying at home?  

Yep! I'm down in New Zealand, we're in lock down for 4 weeks at least, apart from essential services. Not even supposed to drive outside your local area.


I might finally get a chance to migrate to mergerfs and the new mount scripts :)

On 3/19/2020 at 11:45 AM, JohnJay829 said:

Thanks for putting this together, it saves me days of uploads. I have tried to access my encrypted files from a Windows PC using a copy of the rclone config file, but I am unable to find the media folders or files. I was originally using PlexGuide, so I used the same SAs in my upload script. I am using copy instead of move, as I would like to keep a copy on both Unraid and my gdrive.

I was able to get this going by making a new config file.

 

 

 

Now I am able to see my encrypted files in Rclone Browser on Windows. Although I can see the files, I am only able to upload without the SAs; when I try using them I get an error:

26.03.2020 12:40:22 INFO: *** Rclone move selected. Files will be moved from /mnt/user/MeJoMediaServer/googledrive_encrypted for googledrive_encrypted ***
26.03.2020 12:40:22 INFO: Checking if rclone installed successfully.
26.03.2020 12:40:22 INFO: rclone installed successfully - proceeding with upload.
26.03.2020 12:40:22 INFO: Counter file found for googledrive_encrypted.
26.03.2020 12:40:22 INFO: Adjusted service_account_file for upload remote googledrive_encrypted to GDSA5.json based on counter 5.
26.03.2020 12:40:22 INFO: *** Not using rclone move - will remove --delete-empty-src-dirs to upload.
====== RCLONE DEBUG ======
Transferred: 0 / 0 Bytes, -, 0 Bytes/s, ETA -
Errors: 2 (retrying may help)
Elapsed time: 3.7s
==========================
26.03.2020 12:40:28 INFO: Created counter_6 for next upload run.
26.03.2020 12:40:28 INFO: Log files scrubbed
rm: cannot remove '/mnt/user/appdata/other/rclone/remotes/googledrive_encrypted/upload_running_daily_upload': No such file or directory
26.03.2020 12:40:28 INFO: Script complete
Script Finished Thu, 26 Mar 2020 12:40:28 -0400

Full logs for this script are available at /tmp/user.scripts/tmpScripts/Rclone Upload/log.txt

 

The full log is where I do see the actual errors.
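If you want to rule out a missing or corrupted service-account file before digging into the script, a quick sanity check along these lines may help (the directory and the GDSA*.json naming are assumptions based on this thread, not part of the official scripts):

```shell
# Hypothetical sanity check for the service-account files the upload
# script rotates through: report which GDSA*.json files in a directory
# parse as valid JSON. A moved, renamed, or truncated SA file is a
# common cause of upload errors like the one in the log above.
check_sa_files() {
    local f
    for f in "$1"/GDSA*.json; do
        [ -e "$f" ] || { echo "No SA files found in $1"; return 1; }
        if python3 -m json.tool "$f" >/dev/null 2>&1; then
            echo "OK: $f"
        else
            echo "BAD: $f"
        fi
    done
}
```

Point it at wherever your SA files live, e.g. `check_sa_files /mnt/user/appdata/other/rclone/service_accounts` (path assumed; adjust to your setup).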


@DZMM Just wanted to drop a thank you note. Finally finished my 26TB upload. 100% cloud based now. Had 12 simultaneous streams last night and no issues. You're a legend.

10 hours ago, veritas2884 said:

@DZMM Just wanted to drop a thank you note. Finally finished my 26TB upload. 100% cloud based now. Had 12 simultaneous streams last night and no issues. You're a legend.

You're welcome.  What are you going to do with your empty HDDs?  I sold mine, even my parity drive as I don't need it now.

 

12 simultaneous streams is good going - I think I've only hit 10 once over Christmas.

 

 

13 hours ago, DZMM said:

You're welcome.  What are you going to do with your empty HDDs?  I sold mine, even my parity drive as I don't need it now.

 

12 simultaneous streams is good going - I think I've only hit 10 once over Christmas.

 

 

I didn't think about selling them. We are currently on a state-wide lockdown, so I don't think I'll get to a FedEx any time soon. I might also scrap the parity drive to speed up writes.

 

I do have a new issue that popped up. I wanted to start adding 4K TV content, so I created a 4KTV folder in my local media folder. This folder then showed up inside my mergerfs mount, as expected. However, Sonarr and Radarr cannot add the location: I can select it, but it just doesn't show up as a location after hitting add. It seems like any new folder I create isn't addressable by my docker containers, even though it is inside the master media folder that is mapped into their containers with read/write access. The odd thing is, I had a folder of some of my daughter's dance recitals I had shared on Plex for family to see a while ago, and I was able to move the files out of there and get Sonarr to add the 4K TV content to that folder.

 

So, TL;DR: it seems like some kind of permissions issue with new folders created inside the mergerfs structure. Is there a recommended way to create new folder locations so that they are addressable by containers?

 

 

**UPDATE** I was able to chmod 777 the folder inside the mergerFS mount via the terminal and now I can add it to Sonarr. Is this expected behavior or am I creating new folders for new content incorrectly?  

Edited by veritas2884

5 hours ago, veritas2884 said:

**UPDATE** I was able to chmod 777 the folder inside the mergerFS mount via the terminal and now I can add it to Sonarr. Is this expected behavior or am I creating new folders for new content incorrectly? 

Something's wrong, as folders created either in /local or /mount_mergerfs should behave like normal folders, i.e. Radarr/Sonarr etc. adding/upgrading/removing whenever they want.

 

Some apps like Krusader and Plex need to be started after the mount, but that's the only problem I'm aware of. What are your docker mappings and rclone mount options?
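In the meantime, a sketch of how new folders could be created with docker-friendly permissions from the start, rather than chmod 777 after the fact (the helper name, the ownership, and the example path are my assumptions; Unraid dockers typically run as nobody:users):

```shell
#!/bin/bash
# Hypothetical helper: create a new media folder inside the local branch
# of the mergerfs pool with Unraid-friendly permissions (775) and the
# nobody:users ownership dockers usually run as, so Sonarr/Radarr can
# write to it without a manual chmod afterwards.
make_media_dir() {
    local dir="$1"
    mkdir -p "$dir"
    chmod 775 "$dir"
    # chown needs root; warn instead of failing when run unprivileged.
    chown nobody:users "$dir" 2>/dev/null || \
        echo "warning: could not chown $dir (not root, or no 'nobody' user)"
}

# Example (path assumed, adjust to your share layout):
#   make_media_dir /mnt/user/local/gdrive_media_vfs/4KTV
```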


The upload script doesn't show any progress output for me until the transfer is complete; it just sits at "====== RCLONE DEBUG ======".

Any ideas? I'd like to be able to see the live progress.

On 4/1/2020 at 10:59 AM, remedy said:

The upload script doesn't show any progress output for me until the transfer is complete; it just sits at "====== RCLONE DEBUG ======".

Any ideas? I'd like to be able to see the live progress.

The reason for this is the Discord notifications. If you don't care about Discord notifications, then you can remove "--stats 9999m" and change "-vP" to "-vvv" or "-P".

4 hours ago, senpaibox said:

The reason for this is the Discord notifications. If you don't care about Discord notifications, then you can remove "--stats 9999m" and change "-vP" to "-vvv" or "-P".

Hmm, still not working. It prints everything after the transfer is complete, but until then it still sits at the rclone debug line. I removed "--stats 9999m" and tried "-vvv" and "-P", same result.

Is there no way to get it to output progress periodically during the actual upload? I tried "-v" with "--stats 5m" instead and that didn't work either.


Just thought I would share this little script. It could probably be integrated with DZMM's scripts, but I'm not using all of his scripts.

When a mount drops, the mount script should automatically pick it up, but when this is not possible the dockers will just keep filling the merger/union folder, making the remount impossible (you get an error that the mount point is not empty). To make sure all dockers which use the union are stopped, I made the following script. Just run it every minute as well. When the mount is back, your mount script should start your dockers again.

Just make sure you change the folder paths to your situation and put in your own dockers.

#!/bin/bash

if [[ -f "/mnt/user/mount_rclone/Tdrive/mountcheck" ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: Mount connected."
else
    touch /mnt/user/appdata/other/rclone/mount_disconnected
    echo "$(date "+%d.%m.%Y %T") INFO: Mount disconnected, stopping dockers."
    docker stop plex nzbget
    rm /mnt/user/appdata/other/rclone/dockers_started
fi
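In case your mount script doesn't already handle the restart, a hypothetical companion for the other direction could look like this (the function name is mine; the file paths and docker names just mirror the example above, adjust them to your setup):

```shell
#!/bin/bash
# Hypothetical companion to the stop script above: once the mountcheck
# file is visible again and the mount_disconnected flag is still set,
# restart the dockers and clear the flag.
restart_dockers() {
    local mountcheck="$1" flag="$2"
    shift 2
    if [[ -f "$mountcheck" && -f "$flag" ]]; then
        echo "$(date "+%d.%m.%Y %T") INFO: Mount back, starting dockers."
        docker start "$@"
        rm "$flag"
    fi
}

# Example, run on the same one-minute schedule:
#   restart_dockers /mnt/user/mount_rclone/Tdrive/mountcheck \
#       /mnt/user/appdata/other/rclone/mount_disconnected plex nzbget
```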

 

23 hours ago, remedy said:

Hmm, still not working. It prints everything after the transfer is complete, but until then it still sits at the rclone debug line. I removed "--stats 9999m" and tried "-vvv" and "-P", same result.

Is there no way to get it to output progress periodically during the actual upload? I tried "-v" with "--stats 5m" instead and that didn't work either.

Hmm, it works for me, but I am using the scripts on a VPS rather than Unraid. I know User Scripts recently got an update, but I'm not sure if that is what is stopping scripts from displaying live progress.

