Guide: How To Use Rclone To Mount Cloud Drives And Play Files


DZMM


17 minutes ago, DZMM said:

Are you using the user scripts plugin to run the mount script?

Yes

 


#!/bin/bash

#######  Check if script is already running  ##########

if [[ -f "/mnt/user/appdata/other/rclone/rclone_mount_running" ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: Exiting script already running."
    exit
else
    touch /mnt/user/appdata/other/rclone/rclone_mount_running
fi

#######  End Check if script already running  ##########

#######  Start rclone gdrive mount  ##########

# check if gdrive mount already created

if [[ -f "/mnt/user/mount_rclone/google_vfs/mountcheck" ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: Check rclone vfs already mounted."
else
    echo "$(date "+%d.%m.%Y %T") INFO: mounting rclone vfs."

    # create directories for rclone mount and unionfs mount
    mkdir -p /mnt/user/appdata/other/rclone
    mkdir -p /mnt/user/mount_unionfs/google_vfs
    mkdir -p /mnt/user/rclone_upload/google_vfs

    rclone mount --allow-other --buffer-size 256M --dir-cache-time 72h --drive-chunk-size 512M --fast-list --log-level INFO --vfs-read-chunk-size 128M --vfs-read-chunk-size-limit off gdrive_media_vfs: /mnt/user/mount_rclone/google_vfs &

    # check if mount successful - slight pause to give mount time to finalise
    sleep 5

    if [[ -f "/mnt/user/mount_rclone/google_vfs/mountcheck" ]]; then
        echo "$(date "+%d.%m.%Y %T") INFO: Check rclone gdrive vfs mount success."
    else
        echo "$(date "+%d.%m.%Y %T") CRITICAL: rclone gdrive vfs mount failed - please check for problems."
        rm /mnt/user/appdata/other/rclone/rclone_mount_running
        exit
    fi
fi

#######  End rclone gdrive mount  ##########

#######  Start unionfs mount   ##########

if [[ -f "/mnt/user/mount_unionfs/google_vfs/mountcheck" ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: Check successful, unionfs already mounted."
else
    unionfs -o cow,allow_other,direct_io,auto_cache,sync_read /mnt/user/rclone_upload/google_vfs=RW:/mnt/user/mount_rclone/google_vfs=RO /mnt/user/mount_unionfs/google_vfs

    if [[ -f "/mnt/user/mount_unionfs/google_vfs/mountcheck" ]]; then
        echo "$(date "+%d.%m.%Y %T") INFO: Check successful, unionfs mounted."
    else
        echo "$(date "+%d.%m.%Y %T") CRITICAL: unionfs Remount failed."
        rm /mnt/user/appdata/other/rclone/rclone_mount_running
        exit
    fi
fi

#######  End Mount unionfs   ##########

############### starting dockers that need unionfs mount ######################

# only start dockers once

if [[ -f "/mnt/user/appdata/other/rclone/dockers_started" ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: dockers already started"
else
    touch /mnt/user/appdata/other/rclone/dockers_started
    echo "$(date "+%d.%m.%Y %T") INFO: Starting dockers."
    docker start plex
    docker start tautulli
    docker start radarr
    docker start sonarr
fi

############### end dockers that need unionfs mount ######################

exit
9 hours ago, yendi said:

If I manually create the path, I can run the command but it hangs in the ssh window (i never get the prompt again) so I assume that there is something fishy. Permissions issues?

It looks like you've created /mnt/user/mount_rclone/google_vfs so all should be good.  Are you running the script in the background?

1 minute ago, DZMM said:

It looks like you've created /mnt/user/mount_rclone/google_vfs so all should be good.  Are you running the script in the background?

When I created the folders manually, I ran the rclone command directly in an SSH window. When I hit enter, the prompt works, but I get no message and no way to input anything else. It is as if it were blocked.

Is this normal behavior?

Thanks

4 hours ago, yendi said:

When I created the folders manually, I ran the rclone command directly in an SSH window. When I hit enter, the prompt works, but I get no message and no way to input anything else. It is as if it were blocked.

Is this normal behavior?

Thanks

No.  If you can, reboot your server and run the script in the background using the user scripts plugin

2 hours ago, yendi said:

When I created the folders manually, I ran the rclone command directly in an SSH window. When I hit enter, the prompt works, but I get no message and no way to input anything else. It is as if it were blocked.

Is this normal behavior?

Thanks

I have run the scripts from SSH many times before and have never had any problem, so I can tell you for a fact that, if done correctly, there is no difference between CA User Scripts and SSH.

The problem is "if done correctly" - there are too many variables in what you did that might cause it not to work. Hence, running it in CA User Scripts in the background is the best solution.

21 hours ago, markrudling said:

Radarr or Sonarr may be updating the modified date on the file. To do this, the entire file is pulled locally, the time is updated, and then the file is uploaded again. I turned this functionality off in Radarr/Sonarr.

 

To answer your question though, enable verbose logging in rclone

I've checked both, and in Radarr it was off. In Sonarr I can't find this setting.

 

Where/what's the best way to enable verbose logging? It feels like it's downloading something 24/7...

10 minutes ago, nuhll said:

I've checked both, and in Radarr it was off. In Sonarr I can't find this setting.

 

Where/what's the best way to enable verbose logging? It feels like it's downloading something 24/7...

In Sonarr, under Media Management with Advanced Settings shown, it's "Change File Date" under File Management.

 

In your rclone mount script, change --log-level INFO to --log-level DEBUG
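For reference, the change amounts to swapping one flag in the mount command from the script above. A sketch of the modified line - note the --log-file path here is an arbitrary example of where you might send the output so it survives for later review, not something from the guide:

```shell
# Swap --log-level INFO for DEBUG; optionally add --log-file so the debug
# output lands somewhere you can tail, instead of scrolling past in the console.
# The log path below is an example only.
rclone mount --allow-other --buffer-size 256M --dir-cache-time 72h \
  --drive-chunk-size 512M --fast-list \
  --log-level DEBUG \
  --log-file /mnt/user/appdata/other/rclone/rclone.log \
  --vfs-read-chunk-size 128M --vfs-read-chunk-size-limit off \
  gdrive_media_vfs: /mnt/user/mount_rclone/google_vfs &
```

Remember to switch back to INFO once you've found the problem; DEBUG output grows quickly.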

23 hours ago, nuhll said:

So everything works perfectly; I'm just not sure if there is a problem, because I'm seeing high downloads when Plex, Radarr, Sonarr and NZBGet are doing nothing.

 

Is there any way to check what rclone is doing? (I've looked via SSH and it's definitely coming from rclone.)

It could be Plex - I would definitely turn off thumbnail creation if you have that on in your library settings.   Maybe also turn off 'extensive media analysis' in scheduled tasks - others suggest this, but I've had no problems leaving it on and actually think turning it off is a bad idea.

 

If you look in status/alerts in plex you'll see what plex is doing - it's probably analysing files or creating thumbnails

35 minutes ago, DZMM said:

It could be Plex - I would definitely turn off thumbnail creation if you have that on in your library settings.   Maybe also turn off 'extensive media analysis' in scheduled tasks - others suggest this, but I've had no problems leaving it on and actually think turning it off is a bad idea.

 

If you look in status/alerts in plex you'll see what plex is doing - it's probably analysing files or creating thumbnails

Normally it was that, but this time I don't think so (it doesn't show activity).

 

Verbose doesn't really seem to help; these are the last lines:

2019/07/25 13:51:11 DEBUG : /: Lookup: name=".unionfs"
2019/07/25 13:51:11 DEBUG : /: >Lookup: node=, err=no such file or directory
2019/07/25 13:51:11 DEBUG : Filme/: Lookup: name="homemovie (1992)"
2019/07/25 13:51:11 DEBUG : Filme/: >Lookup: node=, err=no such file or directory
2019/07/25 13:51:11 DEBUG : /: Lookup: name=".unionfs"
2019/07/25 13:51:11 DEBUG : /: >Lookup: node=, err=no such file or directory
2019/07/25 13:51:11 DEBUG : /: Lookup: name=".unionfs"
2019/07/25 13:51:11 DEBUG : /: >Lookup: node=, err=no such file or directory
2019/07/25 13:51:11 DEBUG : /: Lookup: name=".unionfs"
2019/07/25 13:51:11 DEBUG : /: >Lookup: node=, err=no such file or directory
2019/07/25 13:51:11 DEBUG : Filme/: Lookup: name="homemovie 2 (2019)"
2019/07/25 13:51:11 DEBUG : Filme/: >Lookup: node=, err=no such file or directory
2019/07/25 13:51:11 DEBUG : /: Lookup: name=".unionfs"
2019/07/25 13:51:11 DEBUG : /: >Lookup: node=, err=no such file or directory

 

Currently downloading at 30-40 Mbit/s.

 

Plex is doing nothing, Radarr and Sonarr don't seem to do anything, only NZBGet is downloading, but it's downloading to cache (not to an rclone directory)? Also, I've set Plex to schedule its tasks between 9:00 and 14:00 (which would be over by now).

 

nethogs shows rcloneorig downloading... 

 

I don't really have that big of a problem with it; it just seems wrong. (Maybe some sort of loop?)

 

But wtf, if the mount is not downloading, where is it coming from?


@nuhll: Questions:

  1. Do you find a copy of a file in the "RW" location of your unionfs mount? (Your unionfs should contain 1 RW + 1 RO - it's in the command.)
  2. If yes to (1), have you tried to access the file in the RW location directly, i.e. bypassing unionfs? If not, do a comparison of the file in the RW location vs the original in the cloud - something as simple as trying to play each copy of the file. I'm trying to eliminate the worst-case scenario - that you have a cryptovirus trying to encrypt your data.
  3. If no to (1), then at least you know it's not something too serious. It's simply a matter of turning off things one at a time to see what is causing it.
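Step (2) can also be done as a byte-for-byte comparison instead of playing both copies. A minimal sketch - the helper name and the file paths are illustrative, not from the thread; substitute a file that actually appears in both your RW branch and the rclone mount:

```shell
#!/bin/bash
# Compare a file in the unionfs RW branch against the copy visible through
# the rclone mount. cmp -s is silent and exits 0 only if the two files are
# byte-identical.
same_file() {
    cmp -s "$1" "$2"
}

# Hypothetical usage - adjust paths to a real file present in both locations:
# if same_file "/mnt/user/rclone_upload/google_vfs/Filme/example.mkv" \
#              "/mnt/user/mount_rclone/google_vfs/Filme/example.mkv"; then
#     echo "copies match"
# else
#     echo "copies differ - investigate before trusting either copy"
# fi
```

If the copies differ, check which one still plays before deleting anything.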

 

7 hours ago, DZMM said:

No.  If you can, reboot your server and run the script in the background using the user scripts plugin

I rebooted, ran it in the background, and I get the same error:

25.07.2019 16:52:39 INFO: mounting rclone vfs.
2019/07/25 16:52:40 Fatal error: Can not open: /mnt/user/mount_rclone/google_vfs: open /mnt/user/mount_rclone/google_vfs: no such file or directory
25.07.2019 16:52:44 CRITICAL: rclone gdrive vfs mount failed - please check for problems.

Could you please double check that I am doing it right:

  1. Installed the rclone beta and added the config (screenshot attached)
  2. Installed unionfs-fuse from NerdPack (screenshot attached)
  3. Copied all the scripts into User Scripts
  4. Pasted these commands into SSH:
    1. "touch mountcheck"

    2. "rclone copy mountcheck gdrive_media_vfs: -vv --no-traverse"

  5. Started the mount script in the background


 

--> Am I missing something? I started everything all over again with the same result...

 

Thanks ! 



That usually means the folder wasn't created for some reason and/or your rclone mount command uses a different path.

 

There should be a mkdir line in the script, e.g.:

mkdir -p /mnt/user/mount_rclone/google_vfs

Do you find an mkdir line in your mount script?

 

Instead of posting screenshots of the scripts, it's better to copy-paste your script into a post (remember to use the forum's code functionality - the button that looks like </> - so it's easier to check).
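The mkdir question can also be answered without screenshots by checking the script file directly. A sketch, assuming the usual location where the User Scripts plugin stores scripts on Unraid - the script folder name "rclone_mount" is an example, so adjust the path if yours differs:

```shell
#!/bin/bash
# Return success if the given script file contains the expected mkdir line
# for the rclone mount point.
has_mount_mkdir() {
    grep -q "mkdir -p /mnt/user/mount_rclone" "$1"
}

# Hypothetical usage (adjust the script name to match your own):
# has_mount_mkdir /boot/config/plugins/user.scripts/scripts/rclone_mount/script \
#     && echo "mkdir line present" || echo "mkdir line missing"
```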

 

21 hours ago, testdasi said:

@nuhll: Questions:

  1. Do you find a copy of a file in the "RW" location of your unionfs mount? (Your unionfs should contain 1 RW + 1 RO - it's in the command.)
  2. If yes to (1), have you tried to access the file in the RW location directly, i.e. bypassing unionfs? If not, do a comparison of the file in the RW location vs the original in the cloud - something as simple as trying to play each copy of the file. I'm trying to eliminate the worst-case scenario - that you have a cryptovirus trying to encrypt your data.
  3. If no to (1), then at least you know it's not something too serious. It's simply a matter of turning off things one at a time to see what is causing it.

 

Sorry! It seems like the Unraid log window just froze xD

 

It seems like Lidarr is creating around 5 Mbit/s of downloads.

I get 100 Mbit/s if Sonarr or Radarr is doing something, so I guess it's okay.

 

Thanks everyone!


I've changed it now to --log-level INFO.

 

--log-level LEVEL

This sets the log level for rclone. The default log level is NOTICE.

DEBUG is equivalent to -vv. It outputs lots of debug info - useful for bug reports and really finding out what rclone is doing.

INFO is equivalent to -v. It outputs information about each transfer and prints stats once a minute by default.

NOTICE is the default log level if no logging flags are supplied. It outputs very little when things are working normally. It outputs warnings and significant events.

ERROR is equivalent to -q. It only outputs error messages.

18 hours ago, testdasi said:

That usually means the folder wasn't created for some reason and/or your rclone mount command uses a different path.

 

There should be a mkdir line in the script, e.g.:


mkdir -p /mnt/user/mount_rclone/google_vfs

Do you find an mkdir line in your mount script?

  

Instead of posting screenshots of the scripts, it's better to copy-paste your script into a post (remember to use the forum's code functionality - the button that looks like </> - so it's easier to check).

  

I use the exact script from GitHub; I posted a code insert of it at the top of this page.

1 hour ago, yendi said:

I use the exact script from GitHub; I posted a code insert of it at the top of this page.

Do you have a /mnt/user/mount_rclone/google_vfs folder?  If not, create one.  I think this will solve your problem.

 

I've added:

 

mkdir -p /mnt/user/mount_rclone/google_vfs

 

to the mount script.  I think there was a reason why it's not there, but I can't remember why, so I'm adding it until someone tells me it causes a problem.

 

23 hours ago, DZMM said:

Do you have a /mnt/user/mount_rclone/google_vfs folder?  If not, create one.  I think this will solve your problem.

 

I've added:

 


mkdir -p /mnt/user/mount_rclone/google_vfs

 

to the mount script.  I think there was a reason why it's not there, but I can't remember why, so I'm adding it until someone tells me it causes a problem.

 

This seems to have solved the problem... So trivial! Thanks :D

As I have 40TB+ of files, is there a way to make something like a "symbolic link" of my movie and TV show folders so they are uploaded in the background (without deleting them after upload)? That way I can keep a copy of everything local until the full upload is done, and switch over at once at the end.

Thanks

30 minutes ago, Kaizac said:

@DZMM how would your cleanup script work for a mount you've only connected to mount_rclone (backup mount for example which isn't used in mount_unionfs)? I can't alter your script as I'm not 100% sure whether some lines are necessary.

It wouldn't be needed if you're not using unionfs.

27 minutes ago, yendi said:

This seems to have solved the problem... So trivial! Thanks :D

 

good

 

27 minutes ago, yendi said:

As I have 40Gb+ of files, is there a way to make something like a "symbolic link" of my movie and TV show folders so they are uploaded in the background (without deleting them after upload)? That way I can keep a copy of everything local until the full upload is done, and switch over at once at the end.

Thanks

I'm not sure why you'd want to do this.  If you want to test first, just manually copy a tv show or movie or two to see what happens.

 

If you really want to test the full 40GB, then in the upload script just change rclone move to rclone sync
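The change DZMM describes, sketched as comments against a trimmed version of the upload command (flags shortened for readability - keep your full flag set). One caveat worth knowing: rclone sync also deletes remote files that no longer exist locally, whereas rclone copy uploads without deleting anything on either side:

```shell
# Before (the guide's upload script): moves files, deleting the local copy
# after a successful upload.
# rclone move /mnt/user/rclone_upload/google_vfs gdrive_media_vfs: -vv --min-age 30m

# After: local copies stay in place until you choose to remove them.
# rclone sync /mnt/user/rclone_upload/google_vfs gdrive_media_vfs: -vv --min-age 30m
```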

11 minutes ago, DZMM said:

good

 

I'm not sure why you'd want to do this.  If you want to test first, just manually copy a tv show or movie or two to see what happens.

 

If you really want to test the full 40GB, then in the upload script just change rclone move to rclone sync

I made a typo, it's 40TB.

If my TV show path is /mnt/user/Media/TV shows, would this command do the trick?


rclone sync "/mnt/user/Media/TV shows" gdrive_media_vfs: -vv --drive-chunk-size 512M --checkers 3 --fast-list --transfers 2 --exclude .unionfs/** --exclude *fuse_hidden* --exclude *_HIDDEN --exclude .recycle** --exclude *.backup~* --exclude *.partial~* --bwlimit 9500k --tpslimit 3 --min-age 30m

I put quotes around the path as there is a space, and removed "--delete-empty-src-dirs". Am I correct?

Thanks

1 hour ago, yendi said:

I made a typo, it's 40TB.

If my TV show path is /mnt/user/Media/TV shows, would this command do the trick?

I put quotes around the path as there is a space, and removed "--delete-empty-src-dirs". Am I correct?

Thanks

Looks right.  Re the space, that's why I always use dashes or underscores in names to make my life easier.

4 hours ago, DZMM said:

It wouldn't be needed if you're not using unionfs.

Thanks, you're right. My backup share just uses a lot of files. How do you do this yourself? All the pictures and small files you want to keep safe don't amount to much in size, but they do in number of files.

 

Some way to auto-zip them would be best, I think - just like CA Backup does, but for your own shares.
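A per-share "auto-zip" in that spirit can be sketched in a few lines: bundle a share's many small files into one dated tarball inside the upload folder, so the upload script moves a single large object instead of thousands of tiny ones. The function name and paths below are examples, not from the guide:

```shell
#!/bin/bash
# Bundle one share into a dated .tar.gz inside a destination folder.
# Many small files become a single large upload for the rclone upload script.
zip_share() {
    local share="$1" dest="$2"
    mkdir -p "$dest"
    # -C changes into the share's parent so the archive contains a clean
    # top-level folder named after the share.
    tar -czf "$dest/$(basename "$share")_$(date +%Y%m%d).tar.gz" \
        -C "$(dirname "$share")" "$(basename "$share")"
}

# Hypothetical usage - adjust both paths to your own shares:
# zip_share /mnt/user/Pictures /mnt/user/rclone_upload/google_vfs/backups
```

Run from User Scripts on a schedule, this would behave roughly like CA Backup but for an arbitrary share; note it re-archives everything on each run, so it suits shares that change slowly.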
