Guide: How To Use Rclone To Mount Cloud Drives And Play Files


DZMM


OK, I was prompted to add "--allow-non-empty" so I did that. But it has been running for 30 minutes now and I don't see any files in any of the mount_folders. If I look at the Drive API page in the Google console, there are a couple of hundred requests that increment very slowly, roughly one every 5 seconds.

 

The only thing I've noticed that has changed is that I now have a crypt folder on my Google Drive with a single file in it.
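For reference, --allow-non-empty just lets rclone mount on top of a folder that already has something in it; it sits alongside the other flags in the mount line. A minimal sketch - the remote name and paths below are assumptions for illustration, not necessarily the guide's exact script:

# remote name and mount path are placeholders - substitute your own
rclone mount gdrive_media_vfs: /mnt/user/mount_rclone/google_vfs \
  --allow-other \
  --allow-non-empty \
  --buffer-size 128M \
  --dir-cache-time 72h &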

Link to comment
On 7/31/2019 at 2:09 PM, DZMM said:

You could try reducing the number of uploads and checkers in the upload script to reduce RAM usage. An extra 8GB might do the trick, but you might need to change some of the buffer settings in the mount command if you anticipate having a lot of concurrent streams.
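For anyone following along, the "uploads and checkers" referred to above map to rclone's --transfers and --checkers flags on the upload command. A minimal sketch of the idea - the local path and remote name are assumptions, not the guide's exact script, and the values are only illustrative:

# lower concurrency to cut RAM use during uploads
rclone move /mnt/user/local/google_vfs gdrive_media_vfs: \
  --transfers 2 \
  --checkers 3 \
  --min-age 30m \
  --bwlimit 9M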

I think there is an issue with rclone: I watched the dismount happen live and saw this:

rclone had multiple processes using 100% of available memory. Then it crashes and I have to run the unmount script before I can remount.

 

[attached screenshot: 2019-08-01 21_28_29 - mount_rclone]

 

Is there any possible explanation?

Thanks

Link to comment
On 8/1/2019 at 9:37 PM, yendi said:

I think there is an issue with rclone: I watched the dismount happen live and saw this:

rclone had multiple processes using 100% of available memory. Then it crashes and I have to run the unmount script before I can remount.

 

[attached screenshot: 2019-08-01 21_28_29 - mount_rclone]

 

Is there any possible explanation?

Thanks

So with the help of the rclone guys I might have found the issue:

 

I have 12 GB of RAM, and with an upload running plus all the services on unRAID I am using about 8.5 GB of it.

When Plex is generating thumbnails it seems to consume all the remaining RAM for the job: some RAM for Plex itself, plus --buffer-size 256M multiplied by the number of open files. Apparently it opens 4-5 files simultaneously.

 

I lowered the buffer-size value to 128M and I have not seen the issue in the last 24 hours.

 

Hope it helps someone who faces this issue!
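Rough numbers for anyone sizing their own box (using the 4-5 open files observed above): at --buffer-size 256M, five open files can pin around 5 x 256 MB, roughly 1.25 GB, in rclone read buffers alone, on top of Plex itself; at 128M that drops to about 640 MB. The flag goes straight into the mount command, e.g. (remote name and path are placeholders):

rclone mount gdrive_media_vfs: /mnt/user/mount_rclone/google_vfs \
  --allow-other \
  --buffer-size 128M &   # per-open-file read buffer, was 256M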

Link to comment

I have a strange problem: if I use rclone mount to mount a Google Drive folder, the mount gets created correctly and I see all the files on my Google Drive. I can read and write to the share without problems using a file browser on unRAID, but whenever I add the mount to a Docker container I can't write to it. Does anyone know how to solve this?

Link to comment
7 hours ago, sauso said:

Sounds like you aren't mounting it as RW.  I don't have any issues.

 

Post a screenshot of the folder mapping.

But I can read and write to it with no problems when using a file browser on unRAID; it only fails when I try to write to the share from inside a Docker container.

Link to comment
1 hour ago, Cliff said:

The "LadingZone" path is a local path on the drive and it works. But the google drive path does not seam to allow writes. I can see 0 byte tmp-files and empty folders being created on my google drive but no actual files gets created.

[attached screenshots: unraid1.PNG, unraid2.PNG - Docker path mappings]

You are mounting rclone at /mnt/disks - for host path 2, have you selected RW/Slave?
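For anyone doing this outside the unRAID template: the "RW/Slave" access mode corresponds to the rw,slave volume options on the docker command line. Slave propagation matters because the rclone FUSE mount is created on the host after the container's bind mount exists; without it the path can show up empty inside the container. A sketch with assumed paths and image name, for illustration only:

# host path, container path and image are placeholders
docker run -d --name syncthing \
  -v /mnt/disks/google:/google:rw,slave \
  linuxserver/syncthing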

Link to comment
9 hours ago, Kaizac said:

Which docker are you using? And what happens when you restart the docker - is it working then?

I am using the Syncthing docker container. I have tried restarting both the docker container and the server multiple times without change. I also tried moving the mount point between different disks and tried modifying the mount script. But I noticed a strange thing today: I changed the remote folder to a new folder on Google Drive, and this time folders were created again, along with some small files like JPEG thumbnails and .nfo files. But no "big" files were created.

Link to comment
1 hour ago, Cliff said:

I am using the Syncthing docker container. I have tried restarting both the docker container and the server multiple times without change. I also tried moving the mount point between different disks and tried modifying the mount script. But I noticed a strange thing today: I changed the remote folder to a new folder on Google Drive, and this time folders were created again, along with some small files like JPEG thumbnails and .nfo files. But no "big" files were created.

So how is your rclone set up? Can you post your rclone config? Just remove the tokens and such.

Link to comment
15 minutes ago, Kaizac said:

So how is your rclone set up? Can you post your rclone config? Just remove the tokens and such.

Here is my rclone config. But as I think I said earlier, I can read/write to the mount without problems if I use the Cloud Commander app.

[attached screenshot: unraid.PNG - rclone config]

Edited by Cliff
Link to comment
39 minutes ago, Cliff said:

Here is my rclone config. But as I think I said earlier, I can read/write to the mount without problems if I use the Cloud Commander app.

[attached screenshot: unraid.PNG - rclone config]

Have you tried mounting at /mnt/user instead of /mnt/disks? I've had problems with /mnt/disks in the past.

 

Also, what about other dockers - do they work ok?

Link to comment
1 minute ago, DZMM said:

Have you tried mounting at /mnt/user instead of /mnt/disks? I've had problems with /mnt/disks in the past.

 

Also, what about other dockers - do they work ok?

Yes, I have tried /mnt/disks too. The Cloud Commander docker works and is able to read/write to the mount, but I don't know how that works, as that container mounts "/" and then you browse using the web UI.

Link to comment
10 minutes ago, Cliff said:

Yes, I have tried /mnt/disks too. The Cloud Commander docker works and is able to read/write to the mount, but I don't know how that works, as that container mounts "/" and then you browse using the web UI.

Have you tried another docker like Radarr to see if you can write files there? Syncthing doesn't work with browsing through the UI, so you have to put in exact paths. So in your case you start with /google.

Link to comment
20 minutes ago, Kaizac said:

Have you tried another docker like Radarr to see if you can write files there? Syncthing doesn't work with browsing through the UI, so you have to put in exact paths. So in your case you start with /google.

Yes, I have tried mounting /google and even /google/<some other folder on the drive>. As the Google Drive is already mounted, I can browse and select the folder from the Docker settings. And when syncing I see that folders and files are created on Google Drive, but the problem is that only folders and very small files of 0 to ~5 KB are created.

Link to comment
Just now, Cliff said:

Yes, I have tried mounting /google and even /google/<some other folder on the drive>. As the Google Drive is already mounted, I can browse and select the folder from the Docker settings. And when syncing I see that folders and files are created on Google Drive, but the problem is that only folders and very small files of 0 to ~5 KB are created.

Then it's a Syncthing issue as far as I can tell. You should test with another docker.

Link to comment
4 hours ago, Cliff said:

I am using the Syncthing docker container. I have tried restarting both the docker container and the server multiple times without change. I also tried moving the mount point between different disks and tried modifying the mount script. But I noticed a strange thing today: I changed the remote folder to a new folder on Google Drive, and this time folders were created again, along with some small files like JPEG thumbnails and .nfo files. But no "big" files were created.

Oh dear, don't use Syncthing directly on the mount. It's a rather long explanation as to why it doesn't work but just trust me, it won't work.

Link to comment

Sometimes files start to disappear from the local unionfs mount even though they are still in the cloud. I have to unmount/remount unionfs for things to show back up. Any way to fix this?

 

I changed the buffer size to 32 MB - will see how it goes.
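The usual workaround when the unionfs view goes stale is a lazy unmount of just the union layer, followed by re-running the unionfs line from the mount script. A sketch of that idea - all paths and branch names here are assumptions, so use the same branches as your own script:

# paths are placeholders - match them to your mount script
fusermount -uz /mnt/user/mount_unionfs/google_vfs
unionfs -o cow,allow_other \
  /mnt/user/rclone_upload/google_vfs=RW:/mnt/user/mount_rclone/google_vfs=RO \
  /mnt/user/mount_unionfs/google_vfs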

Edited by mestep
Link to comment
On 8/4/2019 at 5:31 AM, yendi said:

So with the help of the rclone guys I might have found the issue:

 

I have 12 GB of RAM, and with an upload running plus all the services on unRAID I am using about 8.5 GB of it.

When Plex is generating thumbnails it seems to consume all the remaining RAM for the job: some RAM for Plex itself, plus --buffer-size 256M multiplied by the number of open files. Apparently it opens 4-5 files simultaneously.

 

I lowered the buffer-size value to 128M and I have not seen the issue in the last 24 hours.

 

Hope it helps someone who faces this issue!

Thank you so much. I only have 10 GB of RAM currently and was seeing rclone crashes in unRAID and out-of-memory issues. I'm upgrading to 16 GB tomorrow.

Link to comment
