DZMM

Guide: How To Use Rclone To Mount Cloud Drives And Play Files


You might run into memory problems; see some recent posts in this thread on the topic.


Let's say I host a 1GB file and stream it, e.g. via FTP, Plex, or httpd, and another download (file access) of the same file starts. Would rclone be clever and use cached data, or start a new download from Google?

 

For example, are 100 downloads of a 1GB file 100 individual streams, or can the cache be reused?

Edited by nuhll


I think each stream is handled uniquely.


This is why I put an 8-day delay on the upload script: keeping freshly released files local prevents multiple concurrent downloads of the same new file.
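A delay like this can also be expressed directly in the upload command via rclone's --min-age filter, so only files older than the cutoff get moved. A minimal sketch, assuming this guide's paths and remote name, with an 8-day cutoff:

```shell
#!/bin/bash
# Sketch only: move files to Google Drive, skipping anything newer than 8 days,
# so fresh releases stay local and are streamed without re-downloading.
# "gdrive_media_vfs:" and the local path follow this guide's naming; adjust to taste.
rclone move /mnt/user/rclone_upload/google_vfs gdrive_media_vfs: \
    --min-age 8d \
    --delete-empty-src-dirs \
    -v
```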


DISREGARD. I'll leave this here in case anyone else has the same issue: I re-copied the script from GitHub, re-ran it, and the issue is gone.

 

 

I have had an issue ever since I got this working: the unionfs cleanup script throws this error every time something is deleted from the Plex server or via Sonarr/Radarr, and after a rescan the file is back. Am I missing something?

 

Script location: /tmp/user.scripts/tmpScripts/rclone_cleanup/script
30.08.2019 19:12:16 INFO: starting unionfs cleanup.
rm: cannot remove '/mnt/user/mount_rclone/google_vfs/mnt/user/mount_unionfs/google_vfs/.unionfs/Movies/Aquaman (2018)/Aquaman (2018) Remux-2160p.mkv': No such file or directory

 

 

 

Here's the script, as copied from GitHub:

 

#!/bin/bash

#######  Check if script already running  ##########
if [[ -f "/mnt/user/appdata/other/rclone/rclone_cleanup" ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: Exiting as script already running."
    exit
else
    touch /mnt/user/appdata/other/rclone/rclone_cleanup
fi
#######  End Check if script already running  ##########

################### Clean-up UnionFS Folder  #########################
echo "$(date "+%d.%m.%Y %T") INFO: starting unionfs cleanup."

find /mnt/user/mount_unionfs/google_vfs/.unionfs -name '*_HIDDEN~' | while read line; do
    oldPath=${line#/mnt/user/mount_unionfs/google_vfs/.unionfs}
    newPath=/mnt/user/mount_rclone/google_vfs${oldPath%_HIDDEN~}
    rm "$newPath"
    rm "$line"
done
find "/mnt/user/mount_unionfs/google_vfs/.unionfs" -mindepth 1 -type d -empty -delete

rm /mnt/user/appdata/other/rclone/rclone_cleanup
exit
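The path rewriting in the loop leans on two bash parameter expansions: ${var#prefix} strips a leading prefix and ${var%suffix} strips a trailing suffix. A small standalone sketch (the movie path is made up):

```shell
#!/bin/bash
# unionfs-fuse marks deletions with a *_HIDDEN~ file under .unionfs;
# the cleanup script maps that marker back to the real file on the rclone mount.
line='/mnt/user/mount_unionfs/google_vfs/.unionfs/Movies/Example (2019)/Example (2019).mkv_HIDDEN~'

# Drop the union-mount prefix, leaving the path relative to the media root
oldPath=${line#/mnt/user/mount_unionfs/google_vfs/.unionfs}

# Re-root it under the rclone mount and drop the _HIDDEN~ marker suffix
newPath=/mnt/user/mount_rclone/google_vfs${oldPath%_HIDDEN~}

echo "$newPath"
# prints /mnt/user/mount_rclone/google_vfs/Movies/Example (2019)/Example (2019).mkv
```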

Edited by Bolagnaise


I tried following the SpaceInvader One seedbox guide on YouTube and using it with this script, but I have some problems that I hope someone has a solution to.

After a torrent is finished and unpacked on the seedbox, I use Syncthing to transfer the file back to the unRAID server. I mapped the folder "rclone_upload/google_vfs" in the Syncthing container and added the path "/Media/Movies" inside the container, as it matches my Google Drive paths. The first time, everything works great and all files are uploaded to the correct Google Drive folder. But then the script removes the folders, and Syncthing breaks because the "folder markers" are deleted.

If I remove "--delete-empty-src-dirs" from the rclone_upload script, nothing gets deleted and the folders fill up. Does anyone have a solution to this problem?

4 hours ago, Cliff said:

I tried following the SpaceInvader One seedbox guide on YouTube and using it with this script, but I have some problems that I hope someone has a solution to.

After a torrent is finished and unpacked on the seedbox, I use Syncthing to transfer the file back to the unRAID server. I mapped the folder "rclone_upload/google_vfs" in the Syncthing container and added the path "/Media/Movies" inside the container, as it matches my Google Drive paths. The first time, everything works great and all files are uploaded to the correct Google Drive folder. But then the script removes the folders, and Syncthing breaks because the "folder markers" are deleted.

If I remove "--delete-empty-src-dirs" from the rclone_upload script, nothing gets deleted and the folders fill up. Does anyone have a solution to this problem?

Map to /mount_unionfs/google_vfs instead


OK, so I can write files directly to that folder and they get transferred to Google Drive? Sorry for being slow, but in that case why is the rclone_upload folder needed?

 

And hopefully one last question: when installing the dockers for Sonarr/Radarr, I should provide folder mappings for "tv" and "downloads". From the first page I understand that the tv folder should be mapped, in my case, to "/mnt/user/mount_unionfs/google_vfs/Media/Tv".

 

But do I need a "downloads" mapping?

Edited by Cliff


Read the first post for how unionfs works: local files added to mount_unionfs are uploaded to Google via rclone_upload.
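For reference, the layering that makes this work is the unionfs-fuse mount itself: the local upload folder is the writable branch and the rclone mount sits read-only underneath, so new files land locally first and the upload script moves them to Google later. A sketch using this guide's paths (unionfs-fuse branch syntax; adjust if your setup differs):

```shell
# Writable local branch (=RW) layered over the read-only rclone mount (=RO);
# anything written into mount_unionfs actually lands in rclone_upload,
# while reads fall through to the rclone mount when no local copy exists.
unionfs -o cow,allow_other \
    /mnt/user/rclone_upload/google_vfs=RW:/mnt/user/mount_rclone/google_vfs=RO \
    /mnt/user/mount_unionfs/google_vfs
```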

On 8/16/2019 at 8:37 AM, DZMM said:

That's interesting: are you playing any high-bitrate movies or 4K?

I've had some issues streaming 4K remuxes, and have pulled them onto the local drives from gdrive. Any thoughts on the amount of memory/CPU I need (I'm on symmetrical gigabit)? The streams are incredibly choppy and they freeze.

 

I didn't see anything that looked like it was getting jammed up, but it's been a while since I tested.

1 hour ago, privateer said:

I've had some issues streaming 4K remuxes, and have pulled them onto the local drives from gdrive. Any thoughts on the amount of memory/CPU I need (I'm on symmetrical gigabit)? The streams are incredibly choppy and they freeze.

 

I didn't see anything that looked like it was getting jammed up, but it's been a while since I tested.

CPU requirements (I think) shouldn't be much different from playing locally, so if they play locally they should work remotely. Bandwidth clearly isn't an issue. How much RAM do you have? Are they choppy right from the beginning, or only after a while? You could try increasing --vfs-read-chunk-size from 128M to 256M; it will slow your start times, but it might help.

On 9/6/2019 at 4:14 PM, DZMM said:

CPU requirements (I think) shouldn't be much different from playing locally, so if they play locally they should work remotely. Bandwidth clearly isn't an issue. How much RAM do you have? Are they choppy right from the beginning, or only after a while? You could try increasing --vfs-read-chunk-size from 128M to 256M; it will slow your start times, but it might help.

They tend to start and then freeze right from the beginning, with no improvement; I can't make it more than a few minutes. I have the same copy locally to make sure everything is fine with the file itself. Right now I have 16GB of RAM.

3 hours ago, privateer said:

They tend to start and then freeze right from the beginning, with no improvement; I can't make it more than a few minutes. I have the same copy locally to make sure everything is fine with the file itself. Right now I have 16GB of RAM.

Hmm, I think you're low on RAM. Try lowering the buffer.
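For reference, the buffer is controlled by rclone's --buffer-size mount flag (the snippet originally attached to this post wasn't preserved). A hedged sketch of what lowering it might look like, using this guide's paths and remote name; the exact values are illustrative:

```shell
# Lower --buffer-size so several concurrent streams don't exhaust 16GB of RAM;
# rclone can hold up to --buffer-size of read-ahead per open file.
rclone mount gdrive_media_vfs: /mnt/user/mount_rclone/google_vfs \
    --allow-other \
    --buffer-size 100M \
    --vfs-read-chunk-size 128M \
    --vfs-read-chunk-size-limit off \
    -v
```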

 

 


I want to say I appreciate these awesome scripts! They have been working flawlessly for me so far.

Now for my question: can I restructure the directories? I know to change each script and the corresponding directory path. Would this cause any problems?

mkdir -p /mnt/user/appdata/other/rclone
mkdir -p /mnt/user/mount_rclone/google_vfs
mkdir -p /mnt/user/mount_unionfs/google_vfs
mkdir -p /mnt/user/rclone_upload/google_vfs
CHANGE TO
mkdir -p /mnt/user/appdata/other/rclone
mkdir -p /mnt/user/plexdrive/media_rclone/media
mkdir -p /mnt/user/plexdrive/media_unionfs/media
mkdir -p /mnt/user/plexdrive/media_upload/media

 

4 hours ago, senpaibox said:

I want to say I appreciate these awesome scripts! They have been working flawlessly for me so far.

Now for my question: can I restructure the directories? I know to change each script and the corresponding directory path. Would this cause any problems?


mkdir -p /mnt/user/appdata/other/rclone
mkdir -p /mnt/user/mount_rclone/google_vfs
mkdir -p /mnt/user/mount_unionfs/google_vfs
mkdir -p /mnt/user/rclone_upload/google_vfs
CHANGE TO
mkdir -p /mnt/user/appdata/other/rclone
mkdir -p /mnt/user/plexdrive/media_rclone/media
mkdir -p /mnt/user/plexdrive/media_unionfs/media
mkdir -p /mnt/user/plexdrive/media_upload/media

 

No real issues moving the paths if you're careful and update the scripts in the right places. Changing your docker mappings as well, though, will mean Plex, Radarr, etc. will need to rescan, but that's still manageable.


Guys, I need your help: I have been struggling for two weeks and I'm so desperate that I'm considering going back to HDDs, as I cannot figure out what is happening. It drives me nuts!

 

I have been unable to reliably stream media for about two weeks: Plex gives me an infinite spinning wheel, with no error, when I start a file. It's not one particular file; anything can be affected.

 

I have the same issue when mounting the share on Windows and trying to start a file: either Explorer becomes unresponsive or VLC just gets stuck.

 

The issue does NOT happen with local files (ones that have not yet been uploaded).

 

I tried the following:

  • Update all dockers
  • Fix docker permissions
  • Downgrade to 6.6.7
  • Reinstall rclone-beta
  • Reinstall Unionfs
  • Change buffer size
  • Parity check

 

 

I'm pretty sure it is not linked to Plex, as I have the same issue on Windows, so I'm feeling hopeless...

 

Maybe you guys can help me? @DZMM

 

Thanks

 

Edited by yendi

4 minutes ago, yendi said:

Guys, I need your help: I have been struggling for two weeks and I'm so desperate that I'm considering going back to HDDs, as I cannot figure out what is happening. It drives me nuts!

 

I have been unable to reliably stream media for about two weeks: Plex gives me an infinite spinning wheel, with no error, when I start a file. It's not one particular file; anything can be affected.

 

I have the same issue when mounting the share on Windows and trying to start a file: either Explorer becomes unresponsive or VLC just gets stuck.

 

I tried the following:

  • Update all dockers
  • Fix docker permissions
  • Downgrade to 6.6.7
  • Reinstall rclone-beta
  • Reinstall Unionfs
  • Change buffer size
  • Parity check

 

 

I'm pretty sure it is not linked to Plex, as I have the same issue on Windows, so I'm feeling hopeless...

 

Maybe you guys can help me? @DZMM

 

Thanks

 

Your API is probably temporarily banned. I've had it happen many times lately. What you can do is give each of your main dockers its own mount and API. I've done this for Plex, Bazarr, Radarr and the rest, so I have four unionfs mounts pointing to the same folders but through different APIs.

 

What you need to do is make a new API and new rclone mounts. You can use the same local folders in your unionfs, just with the different mount, and then in the docker settings point each docker to its separate union folder.
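A sketch of what that might look like from the command line, assuming a team drive; the remote name, IDs, secrets, and paths are placeholders, not the guide's actual values. Each docker then gets its own remote (and thus its own API quota) pointing at the same team drive:

```shell
# Hypothetical second remote with its own Google API client, pointing at the
# same team drive as the main mount (IDs/secrets/paths are placeholders).
rclone config create tdrive_plex drive \
    client_id "YOUR_PLEX_CLIENT_ID" \
    client_secret "YOUR_PLEX_CLIENT_SECRET" \
    team_drive "YOUR_TEAM_DRIVE_ID"

# Mount it separately, then build a second unionfs over the SAME local folder
rclone mount tdrive_plex: /mnt/user/mount_rclone/tdrive_plex --allow-other &
unionfs -o cow,allow_other \
    /mnt/user/rclone_upload/google_vfs=RW:/mnt/user/mount_rclone/tdrive_plex=RO \
    /mnt/user/mount_unionfs/tdrive_plex
```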

 

If you don't understand what I mean, let me know!

Edited by Kaizac


Where can I see the error message for a banned API? Is there a setting I need to change to see this?

Just now, yendi said:

Where can I see the error message for a banned API? Is there a setting I need to change to see this?

You can't, as far as I know. But if you play a file in Windows it will give an error after a while; then you know you're banned.


OK, let's try this... Could you explain to me how to make a new API and everything, please? I have NO idea how to do that... Thanks mate, much appreciated!

Just now, yendi said:

OK, let's try this... Could you explain to me how to make a new API and everything, please? I have NO idea how to do that... Thanks mate, much appreciated!

You never did that before? Didn't you fill in a client ID and password while creating your rclone mount?

Just now, Kaizac said:

You never did that before? Didn't you fill in a client ID and password while creating your rclone mount?

Hmm, yes I have, but if I redo everything will I lose all my files?

1 minute ago, yendi said:

Hmm, yes I have, but if I redo everything will I lose all my files?

Oh wait, are you using gdrive or a team drive?

2 minutes ago, yendi said:

gdrive

Yeah, then making new APIs won't work, because you can't connect more email accounts to the gdrive. I'm using a Tdrive.

 

The only thing you can do is migrate to a Tdrive first. Or shut down your dockers, stop the ones which might be generating a lot of API hits, and start a process of elimination.
