Guide: How To Use Rclone To Mount Cloud Drives And Play Files


DZMM


1 hour ago, bugster said:

Question: Every time the script is run I get the following error on multiple Movies/TV

 

 

I'm not even trying to upload the ones listed, but somehow they show up in the log. Why is it doing this?

You have a duplicate directory in your destination, not your source - rclone picks this up. Gdrive allows duplicate directories or files with the same name. If you're worried or anal about file management, you can investigate and manually delete the dupes from gdrive, or check out the rclone dedupe commands, which are really good and can clean up server-side dupes.
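For reference, a sketch of those dedupe commands (the remote name gdrive: is an assumption - substitute your own remote):

```shell
# Preview what dedupe would change without touching anything
rclone dedupe --dry-run gdrive:

# Non-interactive: merge duplicate directories and keep the newest
# copy of any duplicated files, server-side
rclone dedupe newest gdrive:
```

Run the dry-run first and check the log before letting it delete anything.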

 

Again though, dupes on gdrive's side don't affect the mount, as they aren't loaded.

Link to comment

I think this is obvious, but wanted to check - should I just create a duplicate version of the script and set it to run on array start, and then also have a second one run every, say, 10 minutes as your git outlines? Reason I'm asking: I restarted the server earlier today - the current script is set up with a run-every-10-min cron like your example, but after the restart the server doesn't have anything mounted, and it appears it hasn't for most of the day. Just curious if I'm missing something, or if I'm facing a different issue. 

Link to comment
12 hours ago, 1activegeek said:

I think this is obvious, but wanted to check - should I just create a duplicate version of the script and set it to run on array start, and then also have a second one run every, say, 10 minutes as your git outlines? Reason I'm asking: I restarted the server earlier today - the current script is set up with a run-every-10-min cron like your example, but after the restart the server doesn't have anything mounted, and it appears it hasn't for most of the day. Just curious if I'm missing something, or if I'm facing a different issue. 

Have you checked the log to see why it's not mounting?

Link to comment

@axeman yup, you can create the service json files from any machine.

 

It's a little less error prone if you do it on Linux but I've done it on Windows without issues.

 

You can also do it on unraid easily. Make sure you have python installed via nerd tools. If you run into issues, like not having pip, just ssh to your box and run:

"python3 get-pip.py"
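For completeness, get-pip.py needs to be fetched first - a minimal sketch, assuming curl is available on the box:

```shell
# Download the official pip bootstrap script, then run it with python3
curl -sSL https://bootstrap.pypa.io/get-pip.py -o get-pip.py
python3 get-pip.py
```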

Link to comment
14 minutes ago, watchmeexplode5 said:

@axeman yup, you can create the service json files from any machine.

 

It's a little less error prone if you do it on Linux but I've done it on Windows without issues.

 

You can also do it on unraid easily. Make sure you have python installed via nerd tools. If you run into issues, like not having pip, just ssh to your box and run:

"python3 get-pip.py"

Thanks ... I ended up doing it in a VM. I have the accounts created.. honestly, I feel like a script kiddie on this one. I normally understand what I'm doing, but with this, I'm just copy/pasta. 

 

Slowly but surely, getting there. 

Link to comment

Okay - so here's my first actual implementation question.

 

Quote

Place auto-generated service accounts into /mnt/user/appdata/other/rclone/service_accounts/

 

I don't have the appdata folder ... is this because I don't have Docker enabled? I ran my service accounts outside of UnRaid. I'm pretty sure the rclone installation went OK, because if I type rclone at the command prompt, I get the usage screen.

 

I installed CA user scripts and Rclone via CA Apps.

 

Should I just create the path above? 

 

Follow-up: when I created the service accounts, did that already create a unique client_id, or do I need to create another one? 

Edited by axeman
client_id
Link to comment

@axeman

The mount script creates the appdata/other/rclone folder. Feel free to create it yourself.

 

For your mount remote's id and secret, use the API access credentials you likely already created for that - they're used to mount the drive. (Theoretically you could auth with a sa.json, but let's not over-complicate the setup 😝).

 

No need for additional ids/secrets for the SAs. When an upload is run, your .json service files will be referenced for credentials and the client id/secret will be ignored (i.e. you upload as a new unique user that has a clean quota limit -- 750GB per service account per day).
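To illustrate (paths and remote name here are assumptions based on the typical setup in this guide), the upload side boils down to pointing rclone at one of the generated .json files:

```shell
# Each service-account json is its own identity with a fresh 750GB/day quota;
# the upload script swaps the file passed here between runs to rotate accounts.
rclone move /mnt/user/local/gcrypt gcrypt: \
  --drive-service-account-file=/mnt/user/appdata/other/rclone/service_accounts/sa_gdrive1.json
```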

 

You can also define a custom location for the accounts if you want. It's just cleaner to keep it all in the other/rclone folder. 

Edited by watchmeexplode5
Link to comment
On 5/7/2020 at 5:27 PM, DZMM said:

That wouldn't work for me. It can literally freeze and not shut down, and I need to do a hard shutdown. It's the main reason why I don't reboot often - e.g. last month I got a disk error from a hard shutdown, which was painful to fix.

 

Sometimes it's seamless - I actually think there's something in 6.8.x that causes the problem.

 

On 5/8/2020 at 8:55 AM, DZMM said:

But

 

- the checker files are created and removed by a script. Just because they aren't there, doesn't mean rclone isn't running

- sleeping for a combined 20s isn't going to fix hangs

 

 

Do you know a better solution for an unmount? Sometimes I need to shut down the array - for example, this weekend I will upgrade my hw setup - so it would be nice to have a script for a clean unmount.

Link to comment

Hi, a question: if I wish to overwrite an older version with a new video, would it be best to move it to the local folder first (which would be empty, because the video is already uploaded and appears in mergerfs) and run the upload script to let it overwrite, or should I move it directly to the mergerfs folder and run the upload script?

 

I don't want copies of the video - I just want the older version already in the cloud to be overwritten.

Link to comment
1 hour ago, Bjur said:

Hi, a question: if I wish to overwrite an older version with a new video, would it be best to move it to the local folder first (which would be empty, because the video is already uploaded and appears in mergerfs) and run the upload script to let it overwrite, or should I move it directly to the mergerfs folder and run the upload script?

 

I don't want copies of the video - I just want the older version already in the cloud to be overwritten.

You're overthinking things. Just treat the mergerfs folder like a normal folder where you add, move, edit etc. files, and don't worry about what happens in the background - let rclone, via the scripts, deal with everything.

Link to comment

Okay, thanks for the information. And I guess the same goes for /mnt/local in regards to moving to that first? I just followed watchmeexplode5's advice to move downloaded stuff into local first.

I just remembered you writing a post earlier about files not being overwritten correctly, which caused multiple copies, so I wanted to make sure.

Link to comment
1 hour ago, Bjur said:

Okay, thanks for the information. And I guess the same goes for /mnt/local in regards to moving to that first? I just followed watchmeexplode5's advice to move downloaded stuff into local first.

I just remembered you writing a post earlier about files not being overwritten correctly, which caused multiple copies, so I wanted to make sure.

Not sure of the context of @watchmeexplode5's advice, but it's best to forget /mnt/user/local exists and focus all file management activity on /mnt/user/mount_mergerfs, as there's less chance of something going wrong.

Link to comment

@francrouge,

Short answer: supports hard links, is actively developed and fixed, and makes it easier to clean up the products of the merged dir (fewer scripts for the end-user). Typically agreed to be better for our general rclone case.

 

Complex answer from trapexit's GitHub: UnionFS is more like aufs than mergerfs in that it offers overlay / CoW features. If you're just looking to create a union of drives and want flexibility in file/directory placement then mergerfs offers that whereas unionfs is more for overlaying RW filesystems over RO ones.
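For context, a minimal sketch of the kind of mergerfs mount this guide builds (paths are assumptions; the actual mount script passes more options):

```shell
# Local branch listed first: new files land on local disk, and mergerfs serves
# the local copy when a path exists in both branches.
mergerfs /mnt/user/local/gcrypt:/mnt/user/mount_rclone/gcrypt \
  /mnt/user/mount_mergerfs/gcrypt \
  -o rw,use_ino,allow_other,func.getattr=newest,category.create=ff
```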

Edited by watchmeexplode5
Link to comment
@francrouge,
Short answer: supports hard links, is actively developed and fixed, and makes it easier to clean up the products of the merged dir (fewer scripts for the end-user). Typically agreed to be better for our general rclone case.
 
Complex answer from trapexit's GitHub: UnionFS is more like aufs than mergerfs in that it offers overlay / CoW features. If you're just looking to create a union of drives and want flexibility in file/directory placement then mergerfs offers that whereas unionfs is more for overlaying RW filesystems over RO ones.
Thx, I will try it then.

Sent from my Pixel 2 XL using Tapatalk

Link to comment
13 hours ago, watchmeexplode5 said:

@DZMM @Bjur. I often unpack and write to the local mount due to a minor decrease in performance on a fuse filesystem. But that decrease is very minor. It's easiest to follow DZMM's advice and do most of your work in the mergerfs mount. 

 

@DZMM @watchmeexplode5 Thanks for the answer. So should I create my download folder in the root of /mnt/mergerfs/, since I have 2 separate mounts, like I did on local? I don't think it would make sense to put it in movies and afterwards move the completed files to the other drive if it's not a movie.

Can you follow?

Link to comment

I love the idea of this plugin and the simplicity to get it all setup and running smoothly. I greatly appreciate the work that has gone into it!

 

One quick question: I have the upload script copying (vs moving) my libraries to a team share now. If I want to give remote drive playback a trial, could I just delete a sample file from the /mnt/user/local/gcrypt folder? This should leave the file in the mergerfs mount, and then playback would occur via the team drive, right?

I am thinking of writing a simple age-off script where files older than 30 days are removed locally (via /mnt/user/local), then living only in the team share. Thanks again!

Edited by pgbtech
Typo
Link to comment
53 minutes ago, pgbtech said:

If I want to give the remote drive playback a trial, could I just delete a sample file from the /mnt/user/local/gcrypt folder?

Yes. Mergerfs looks at the local location first, so if you want to play the cloud copy, you need to delete the local copy.

 

53 minutes ago, pgbtech said:

I am thinking of writing a simple age-off script where files older than 30 days are removed locally (via /mnt/user/local), then living only in the team share. Thanks again!

The script already does this - just set the upload script to 'move' and then the MinimumAge to 30d.
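For anyone who still wants a standalone age-off, the core is a single find invocation - a sketch only, with the path assumed from the post above and a hypothetical function name:

```shell
#!/bin/bash
# age_off_list DIR DAYS -> print files under DIR last modified more than
# DAYS days ago. Inspect the list before deleting anything.
age_off_list() {
  find "$1" -type f -mtime +"$2" -print
}

# Example usage (dry-run style - pipe to rm only once you trust the list):
# age_off_list /mnt/user/local/gcrypt 30
```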

Edited by DZMM
Link to comment

Hi everyone.

 

I am looking for some assistance. I have very slow Plex scan speeds via SMB from another computer; Plex running in docker on my unraid machine is acceptable.

 

I have quite a few users with slow connections, so I have a second i7 machine running Windows 10 and Plex that I send them to, leaving the Unraid server with some headroom to do everything else it does.

 

The Win PC has a read-only mapped network drive to the gdrive folder in the mount_mergerfs share from unraid. Browsing this share can be slow; sometimes it's fairly fast. Copying from this share can be fast, but it is very intermittent. Most of the time I get the full 200meg copy speed though, so this is acceptable.

 

When running the scan, network activity in the Win PC is as expected, fairly low. However, network activity on Unraid and my router is going nuts. What seems to be happening is that Plex on the windows PC is scanning the directory, asking for just a bit of the file, and rclone/unraid is attempting to serve much more of the file, meaning each file takes a long time to scan.

 

I have tested the Win PC with RaiDrive and mounted a drive, and the scans through there are VERY fast and only 1-3meg of my line is used.

 

I think windows and unraid are not playing well in this configuration.

 

Can anyone offer some settings or advice? My mount settings are stock.

 

 

Link to comment

Using:

### Upload Script ####
######################
### Version 0.95.5 ###

 

Script starts fine but doesn't complete; I get this in the readout:

/usr/sbin/rclone: line 3: 18008 Killed rcloneorig --config $config "$@"
21.05.2020 15:53:14 INFO: Created counter_20 for next upload run.
21.05.2020 15:53:14 INFO: Script complete
Script Finished May 21, 2020 15:53.14

Link to comment
