Guide: How To Use Rclone To Mount Cloud Drives And Play Files


DZMM


29 minutes ago, Kaizac said:

Does old media also not play? If not, you're probably API banned. You can try it with a second mount based on another client ID and account.

Local media works fine on Plex.

How can I know if I'm banned?

Just tried to download a file from gdrive and I'm getting an API ban warning...

It's probably because I did a full scan of the library...

Can you tell me which folder I should map in Plex, please:

mount_rclone or mount_unionfs

thx

Edited by francrouge
2 hours ago, testdasi said:

It sounds like you mapped Sonarr to the upload folder.

Sonarr should be mapped to the unionfs folder (and let unionfs control the use of the upload folder).

 

 

 

 

No, definitely mapped to the union folder. Sonarr could still see all of the episodes in a series when I scanned an episode within Sonarr. If Sonarr were pointed at the upload folder it wouldn't see any of that.

It definitely seems permissions-related. The "does not have access" error can also be generated by not being able to write to the destination (union) folder.

3 hours ago, francrouge said:

Local media works fine on Plex.

How can I know if I'm banned?

Just tried to download a file from gdrive and I'm getting an API ban warning...

It's probably because I did a full scan of the library...

Can you tell me which folder I should map in Plex, please:

mount_rclone or mount_unionfs

thx

Mount unionfs. And doing heavy Plex scans gets you a ban, often lifted around 0:00 AM.
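For the "how can I know if I'm banned" question above, the usual tell is 403 "downloadQuotaExceeded" errors in the rclone mount log. A minimal sketch of the check, with a made-up log path and log line for the demo (your mount script's real log location will differ):

```shell
# Made-up log file and sample line; a banned account typically floods the
# mount log with 403 downloadQuotaExceeded errors like this one.
LOG=/tmp/rclone_mount_demo.log
echo 'ERROR : film.mkv: googleapi: Error 403: The download quota for this file has been exceeded., downloadQuotaExceeded' > "$LOG"

if grep -q downloadQuotaExceeded "$LOG"; then
    echo "Looks API banned - wait for the daily quota reset"
fi
```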

14 hours ago, jude said:

No, definitely mapped to the union folder. Sonarr could still see all of the episodes in a series when I scanned an episode within Sonarr. If Sonarr were pointed at the upload folder it wouldn't see any of that.

It definitely seems permissions-related. The "does not have access" error can also be generated by not being able to write to the destination (union) folder.

Hmm... that's strange. Maybe try unmounting everything and redoing the permissions on all the local folders. Reboot and then rerun the script.

 

 

16 hours ago, francrouge said:

Local media works fine on Plex.

How can I know if I'm banned?

Just tried to download a file from gdrive and I'm getting an API ban warning...

It's probably because I did a full scan of the library...

Can you tell me which folder I should map in Plex, please:

mount_rclone or mount_unionfs

thx

 

A few points:

  • Unless you are doing a full Plex library scan from scratch (i.e. a blank database), it should NOT get you an API ban. I do it all the time and have never run into any issue, so I think something else is happening. The usual suspect is some kind of subtitle-finder docker; they were known to break the limit.
  • Avoiding an accidental ban like yours is why it's recommended to use a Team Drive (now aka "Shared Drive" - new name, same features) instead. That would allow you to have a "plan B".
  • In terms of mapping, you can decide for yourself.
    • mount_rclone = excludes things not yet uploaded to gdrive (i.e. ONLY files actually on the gdrive right now)
    • mount_unionfs = gdrive + things not yet uploaded to gdrive
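The difference between the two mappings can be illustrated with plain directories (throwaway /tmp paths standing in for the real mounts; unionfs performs this merge live):

```shell
# Stand-ins for the real mount points (hypothetical demo paths)
mkdir -p /tmp/union_demo/mount_rclone /tmp/union_demo/rclone_upload
touch /tmp/union_demo/mount_rclone/already_on_gdrive.mkv    # file already uploaded
touch /tmp/union_demo/rclone_upload/waiting_to_upload.mkv   # local, not yet moved

ls /tmp/union_demo/mount_rclone                               # mount_rclone view: gdrive only
ls /tmp/union_demo/mount_rclone /tmp/union_demo/rclone_upload # merged mount_unionfs view: both
```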
4 minutes ago, testdasi said:

A few points:

  • Unless you are doing a full Plex library scan from scratch (i.e. a blank database), it should NOT get you an API ban. I do it all the time and have never run into any issue, so I think something else is happening. The usual suspect is some kind of subtitle-finder docker; they were known to break the limit.
  • Avoiding an accidental ban like yours is why it's recommended to use a Team Drive (now aka "Shared Drive" - new name, same features) instead. That would allow you to have a "plan B".
  • In terms of mapping, you can decide for yourself.
    • mount_rclone = excludes things not yet uploaded to gdrive (i.e. ONLY files actually on the gdrive right now)
    • mount_unionfs = gdrive + things not yet uploaded to gdrive

Don't forget Plex also has Sub-Zero for downloading subtitles, which can also cause a spike in activity.

Edited by Kaizac
3 hours ago, testdasi said:

Hmm... that's strange. Maybe try unmounting everything and redoing the permissions on all the local folders. Reboot and then rerun the script.

 

 

Should I run "Docker Safe New Perms"?

 

Just having a look at the rclone_upload folder created by the mount script, and it has the following permissions:

 

drwxrwxrwx  3 root   root 

 

Shouldn't the script create the upload folder with nobody/users permissions?
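A hedged sketch of the permissions reset suggested earlier, run here on a throwaway directory (the real target would be /mnt/user/rclone_upload, and the chown needs root):

```shell
DIR=/tmp/perms_demo                 # stand-in for /mnt/user/rclone_upload
mkdir -p "$DIR"
chmod -R 777 "$DIR"                 # the drwxrwxrwx shown above
# chown -R nobody:users "$DIR"      # unRAID's default owner (uid 99 / gid 100); needs root
stat -c '%a' "$DIR"                 # prints the octal mode
```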

Edited by jude
update
23 hours ago, Bluecube said:

I have an issue uploading files. Whenever the upload script is run, it complains that rclone isn't installed and halts. When I look in /user/mount_rclone, where previously the google_vfs directory was mounted containing a mount check file, there is now only the following - ?google_vfs. On using mc to click on this file, an error message appears stating "Cannot change directory".

 

I'm completely at a loss as to how to progress. All scripts are unmodified apart from the Mount script where the docker launches are removed.

 

Anyone?

18 minutes ago, Bluecube said:

 

Anyone?

It could be that rclone is crashing. Does rclone start correctly and let you browse the unionfs and rclone folders?

 

I had to lower the buffer flag to --buffer-size 128M or rclone would run out of memory while Plex was scanning the directory.

 

It's worth trying.

17 hours ago, jude said:

It could be that rclone is crashing. Does rclone start correctly and let you browse the unionfs and rclone folders?

 

I had to lower the buffer flag to --buffer-size 128M or rclone would run out of memory while Plex was scanning the directory.

 

It's worth trying.

I'll give that a whirl later, but I doubt it'll make a difference as I've got 32GB of memory with about 60% free.

 

Just a thought - I have the entries for the dockers REM'd out so they don't load, and I have my dockers set to auto-start when the array starts. Could this be causing a conflict with the rclone mount script?

On 10/31/2019 at 5:29 AM, testdasi said:

Hmm... that's strange. Maybe try unmounting everything and redoing the permissions on all the local folders. Reboot and then rerun the script.

 

 

 

A few points:

  • Unless you are doing a full Plex library scan from scratch (i.e. a blank database), it should NOT get you an API ban. I do it all the time and have never run into any issue, so I think something else is happening. The usual suspect is some kind of subtitle-finder docker; they were known to break the limit.
  • Avoiding an accidental ban like yours is why it's recommended to use a Team Drive (now aka "Shared Drive" - new name, same features) instead. That would allow you to have a "plan B".
  • In terms of mapping, you can decide for yourself.
    • mount_rclone = excludes things not yet uploaded to gdrive (i.e. ONLY files actually on the gdrive right now)
    • mount_unionfs = gdrive + things not yet uploaded to gdrive

Thanks a lot for the info.

35 minutes ago, Bolagnaise said:

@DZMM I'm sure the answer is in here somewhere, but I'll ask anyway. I created a 4K movies folder in unionfs, but I accidentally did it before the mount was active. Now every time I do a server reboot, that folder is there (completely empty) and I have to manually delete it to get the mount to start. How do I fix this?

Make sure all traces of the folder in mount_unionfs, and probably rclone_upload, are deleted before you mount.

4 minutes ago, DZMM said:

Make sure all traces of the folder in mount_unionfs, and probably rclone_upload, are deleted before you mount.

I found it; you put me on the right path. It was a shares issue: I had a 4K movies folder left over on disk 2 of my array, which was appearing in the union mount before the script ran.


So I finally figured out how to bandwidth-limit the upload without messing up the mount.

 

 

1. Specify the "server" (rclone listening for remote-control commands):

 

rclone move --rc --rc-addr=192.168.86.2:5573  --rc-user=test --rc-pass=test /mnt/user/Archiv/Filme gdrive_media_vfs:Filme -vv --max-size 12G --drive-chunk-size 128M --checkers 4 --fast-list --transfers 4 --exclude .unionfs/** --exclude *fuse_hidden* --exclude *_HIDDEN --exclude .recycle** --exclude *.backup~* --exclude *.partial~* --delete-empty-src-dirs --fast-list --tpslimit 3 --min-age 1y && rclone move --rc-addr=192.168.86.2:5573 --rc-web-gui --rc-user=test --rc-pass=test --rc-web-gui-update --stats=24h /mnt/user/Archiv/Serien gdrive_media_vfs:Serien -vv --max-size 12G --drive-chunk-size 128M --checkers 4 --fast-list --transfers 4 --exclude .unionfs/** --exclude *fuse_hidden* --exclude *_HIDDEN --exclude .recycle** --exclude *.backup~* --exclude *.partial~* --delete-empty-src-dirs --fast-list --tpslimit 3 --min-age 1y && rclone move --rc-addr=192.168.86.2:5573 --rc-web-gui --rc-user=test --rc-pass=test --rc-web-gui-update --stats=24h /mnt/user/Archiv/Musik gdrive_media_vfs:Musik -vv --drive-chunk-size 8M --checkers 4 --fast-list --transfers 6 --exclude .unionfs/** --exclude *fuse_hidden* --exclude *_HIDDEN --exclude .recycle** --exclude *.backup~* --exclude *.partial~* --delete-empty-src-dirs --fast-list --tpslimit 3 --min-age 1y
 

2. Make a script along these lines; I made mine run this command if a ping is successful (i.e. my PC is on):

 

### limit rclone bandwidth
rclone rc --rc-addr=192.168.86.2:5573 --rc-user=test --rc-pass=test core/bwlimit rate=50k
###
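A sketch of that ping-gated wrapper (the PC's address is made up; the rc address and credentials are the ones from the post; the rclone call is printed rather than executed, so the sketch is safe to run anywhere):

```shell
#!/bin/bash
PC=192.168.86.100                     # hypothetical address of the PC to watch
if ping -c 1 -W 1 "$PC" >/dev/null 2>&1; then
    RATE=50k                          # PC is on: throttle the upload
else
    RATE=off                          # PC is off: lift the limit
fi
# printed instead of executed for safety; drop the echo to run it for real
echo "rclone rc --rc-addr=192.168.86.2:5573 --rc-user=test --rc-pass=test core/bwlimit rate=$RATE"
```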

 

 

Now I have a question: is there a way to compress

 

rclone move --rc --rc-addr=192.168.86.2:5573  --rc-user=test --rc-pass=test /mnt/user/Archiv/Filme gdrive_media_vfs:Filme -vv --max-size 12G --drive-chunk-size 128M --checkers 4 --fast-list --transfers 4 --exclude .unionfs/** --exclude *fuse_hidden* --exclude *_HIDDEN --exclude .recycle** --exclude *.backup~* --exclude *.partial~* --delete-empty-src-dirs --fast-list --tpslimit 3 --min-age 1y

 

&&

 

rclone move --rc-addr=192.168.86.2:5573 --rc-user=test --rc-pass=test /mnt/user/Archiv/Serien gdrive_media_vfs:Serien -vv --max-size 12G --drive-chunk-size 128M --checkers 4 --fast-list --transfers 4 --exclude .unionfs/** --exclude *fuse_hidden* --exclude *_HIDDEN --exclude .recycle** --exclude *.backup~* --exclude *.partial~* --delete-empty-src-dirs --fast-list --tpslimit 3 --min-age 1y

 

&&

 

rclone move --rc-addr=192.168.86.2:5573 --rc-user=test --rc-pass=test /mnt/user/Archiv/Musik gdrive_media_vfs:Musik -vv --drive-chunk-size 8M --checkers 4 --fast-list --transfers 6 --exclude .unionfs/** --exclude *fuse_hidden* --exclude *_HIDDEN --exclude .recycle** --exclude *.backup~* --exclude *.partial~* --delete-empty-src-dirs --fast-list --tpslimit 3 --min-age 1y

 


(3x move commands, each started after the previous one finishes) into one move command? (So, can you move 3 directories via one command?)

 

Does rclone move ... /mnt/user/Archiv/Musik gdrive_media_vfs:Musik /mnt/user/Archiv/Serien gdrive_media_vfs:Serien /mnt/user/Archiv/Filme gdrive_media_vfs:Filme ...

 

work?
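For what it's worth, rclone move accepts exactly one source and one destination, so the usual way to fold the three commands into one script is a loop. A sketch (commands printed rather than run, and the per-folder flags from above still need adding back):

```shell
# One loop instead of three chained moves; flags truncated to the shared ones.
for dir in Filme Serien Musik; do
    CMD="rclone move /mnt/user/Archiv/$dir gdrive_media_vfs:$dir --tpslimit 3 --min-age 1y"
    echo "$CMD"    # printed instead of executed in this sketch
done
```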

Edited by nuhll
  • 2 weeks later...

@DZMM I have a couple of questions that I can't seem to find an answer for. 

 

1) For the media I currently have, where do I move it to:

  A)/mnt/user/mount_unionfs/google_vfs/contentfolder (ie: movies)

  or

  B)/mnt/user/rclone_upload/contentfolder 

Without knowing better, I created a symlink of one of my content folders to /mnt/user/rclone_upload, as it wasn't clear on GitHub what exactly to do with current media, and I wanted to start moving things off the server onto the cloud. What should I do?

 

2) I would like to use Team Drive instead of Gdrive, and have made an edit in rclone_upload script

/mnt/user/rclone_upload/google_vfs/ gdrive_media_vfs:

to

/mnt/user/rclone_upload/google_vfs/ tdrive_media_vfs:

Is this correct and is it advisable to use Team Drives over GDrive?

 

3) For Sonarr, for example, the following:

/tv <-> /mnt/user/mount_unionfs/google_vfs/tv

So after NZBGet runs the nzbToMedia script to trigger Sonarr, Sonarr will move the files to the above location, and then rclone_upload will move them to the cloud?

 

 

Edited by Roken
tagged OP.
3 hours ago, Roken said:

@DZMM I have a couple of questions that I can't seem to find an answer for. 

 

1) For the media I currently have, where do I move it to:


  A)/mnt/user/mount_unionfs/google_vfs/contentfolder (ie: movies)

  or

  B)/mnt/user/rclone_upload/contentfolder 

Without knowing better, I created a symlink of one of my content folders to /mnt/user/rclone_upload, as it wasn't clear on GitHub what exactly to do with current media, and I wanted to start moving things off the server onto the cloud. What should I do?

 

2) I would like to use Team Drive instead of Gdrive, and have made an edit in rclone_upload script


/mnt/user/rclone_upload/google_vfs/ gdrive_media_vfs:

to


/mnt/user/rclone_upload/google_vfs/ tdrive_media_vfs:

Is this correct and is it advisable to use Team Drives over GDrive?

 

3) For Sonarr, for example, the following:


/tv <-> /mnt/user/mount_unionfs/google_vfs/tv

So after NZBGet runs the nzbToMedia script to trigger Sonarr, Sonarr will move the files to the above location, and then rclone_upload will move them to the cloud?

 

 

You've got it all right.

 

1. Either - content added to mount_unionfs goes into rclone_upload to be transferred to gdrive. Best practice is probably to use mount_unionfs, in case you're overwriting an existing file, and to make sure apps stop/don't work if the mount goes down.

 

2. Team Drives are useful if:

 

(i) You have good upload. I've added multiple users to mine, as each user gets 750GB/day, and my upload script rotates the account in use, so I can remove the bwlimit as well as increase the daily upload limit to 750GB x number of users.

 

(ii) You want to share a Team Drive, e.g. for backup purposes, or as a different way to share content other than via Plex.

 

3. Yes, all docker/app paths should use m_u paths.

 


Can you share your upload script that rotates the account in use?  Maybe name it rclone_upload_rotate? 

Is there any speed difference when streaming from tdrive or gdrive? I'm wondering if I should just wait until it's all uploaded and then manually move it all over to gdrive, since I already started moving to tdrive.

 

Also, you may want to add a section on GitHub about creating a symlink from a user's current content to m_u, for clarity.

 

*edit. Looking inside /mount_unionfs/ I don't see any files in any folders, but after some digging I see the files in /mount_rclone (which are on the Team Drive). Is something messed up? Content is starting to drop from Plex as I've pointed all its folders to m_u.

 

I've changed the mount script to 

rclone mount --allow-other --buffer-size 256M --dir-cache-time 72h --drive-chunk-size 512M --fast-list --log-level INFO --vfs-read-chunk-size 128M --vfs-read-chunk-size-limit off tdrive_media_vfs: /mnt/user/mount_rclone/google_vfs &

 

Edited by Roken
1 hour ago, Roken said:

Is there any speed difference when streaming from tdrive or gdrive? I'm wondering if I should just wait until it's all uploaded and then manually move it all over to gdrive, since I already started moving to tdrive.

No. I would just stick with the Team Drive as it gives you more options going forward.

 

1 hour ago, Roken said:

Also, you may want to add a section on GitHub about creating a symlink from a user's current content to m_u, for clarity.

 

*edit. Looking inside /mount_unionfs/ I don't see any files in any folders, but after some digging I see the files in /mount_rclone (which are on the Team Drive). Is something messed up? Content is starting to drop from Plex as I've pointed all its folders to m_u.

Probably the symlink - I don't know how unionfs handles these. I'd just move the files to rclone_upload, or add the current folder location to the unionfs mount - you can include more folders in the union.

 

1 hour ago, Roken said:

Can you share your upload script that rotates the account in use?  Maybe name it rclone_upload_rotate? 

I'm not putting it on GitHub as it goes beyond basic usage, but here it is. I put a bwlimit of 70M on as I don't want to max out my 1G line. There's probably a clever way to shorten the script and automatically increment the counter, but I'm not a coder, so I can't do that:

 

#!/bin/bash

#######  Check if script already running  ##########

if [[ -f "/mnt/user/appdata/other/rclone/rclone_upload" ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: Exiting as script already running."
    exit
else
    touch /mnt/user/appdata/other/rclone/rclone_upload
fi

#######  End check if script already running  ##########

#######  Check if rclone installed  ##########

if [[ -f "/mnt/user/mount_rclone/tdrive_vfs/mountcheck" ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: rclone installed successfully - proceeding with upload."
else
    echo "$(date "+%d.%m.%Y %T") INFO: rclone not installed - will try again later."
    rm /mnt/user/appdata/other/rclone/rclone_upload
    exit
fi

#######  End check if rclone installed  ##########

##Run 1

if [[ -f "/mnt/user/appdata/other/rclone/counter_one" ]]; then
    echo "$(date "+%d.%m.%Y %T") found counter_one"
    rclone move /mnt/user/rclone_upload/tdrive_vfs tdrive_user1_vfs: --user-agent="unRAID" -vv --buffer-size 512M --drive-chunk-size 512M --checkers 8 --transfers 6 --exclude .unionfs/** --exclude *fuse_hidden* --exclude *_HIDDEN --exclude .recycle** --exclude *.backup~* --exclude *.partial~* --bwlimit 70M --tpslimit 8 --min-age 10m --user-list
    rm /mnt/user/appdata/other/rclone/counter_one
    echo "$(date "+%d.%m.%Y %T") creating counter_two"
    touch /mnt/user/appdata/other/rclone/counter_two
    rm /mnt/user/appdata/other/rclone/rclone_upload
    exit
else
    echo "$(date "+%d.%m.%Y %T") skipping run 1"
fi

##Run 2

if [[ -f "/mnt/user/appdata/other/rclone/counter_two" ]]; then
    echo "$(date "+%d.%m.%Y %T") found counter_two"
    rclone move /mnt/user/rclone_upload/tdrive_vfs tdrive_user2_vfs: --user-agent="unRAID" -vv --buffer-size 512M --drive-chunk-size 512M --checkers 8 --transfers 6 --exclude .unionfs/** --exclude *fuse_hidden* --exclude *_HIDDEN --exclude .recycle** --exclude *.backup~* --exclude *.partial~* --bwlimit 70M --tpslimit 8 --min-age 10m --user-list
    rm /mnt/user/appdata/other/rclone/counter_two
    echo "$(date "+%d.%m.%Y %T") creating counter_three"
    touch /mnt/user/appdata/other/rclone/counter_three
    rm /mnt/user/appdata/other/rclone/rclone_upload
    exit
else
    echo "$(date "+%d.%m.%Y %T") skipping run 2"
fi

##Run 3

if [[ -f "/mnt/user/appdata/other/rclone/counter_three" ]]; then
    echo "$(date "+%d.%m.%Y %T") found counter_three"
    rclone move /mnt/user/rclone_upload/tdrive_vfs tdrive_user3_vfs: --user-agent="unRAID" -vv --buffer-size 512M --drive-chunk-size 512M --checkers 8 --transfers 6 --exclude .unionfs/** --exclude *fuse_hidden* --exclude *_HIDDEN --exclude .recycle** --exclude *.backup~* --exclude *.partial~* --bwlimit 70M --tpslimit 8 --min-age 10m --user-list
    rm /mnt/user/appdata/other/rclone/counter_three
    echo "$(date "+%d.%m.%Y %T") creating counter_four"
    touch /mnt/user/appdata/other/rclone/counter_four
    rm /mnt/user/appdata/other/rclone/rclone_upload
    exit
else
    echo "$(date "+%d.%m.%Y %T") skipping run 3"
fi

##Run 4

if [[ -f "/mnt/user/appdata/other/rclone/counter_four" ]]; then
    echo "$(date "+%d.%m.%Y %T") found counter_four"
    rclone move /mnt/user/rclone_upload/tdrive_vfs tdrive_user4_vfs: --user-agent="unRAID" -vv --buffer-size 512M --drive-chunk-size 512M --checkers 8 --transfers 6 --exclude .unionfs/** --exclude *fuse_hidden* --exclude *_HIDDEN --exclude .recycle** --exclude *.backup~* --exclude *.partial~* --bwlimit 70M --tpslimit 8 --min-age 10m --user-list
    rm /mnt/user/appdata/other/rclone/counter_four
    echo "$(date "+%d.%m.%Y %T") creating counter_five"
    touch /mnt/user/appdata/other/rclone/counter_five
    rm /mnt/user/appdata/other/rclone/rclone_upload
    exit
else
    echo "$(date "+%d.%m.%Y %T") skipping run 4"
fi

##Run 5

if [[ -f "/mnt/user/appdata/other/rclone/counter_five" ]]; then
    echo "$(date "+%d.%m.%Y %T") found counter_five"
    rclone move /mnt/user/rclone_upload/tdrive_vfs tdrive_user5_vfs: --user-agent="unRAID" -vv --buffer-size 512M --drive-chunk-size 512M --checkers 8 --transfers 6 --exclude .unionfs/** --exclude *fuse_hidden* --exclude *_HIDDEN --exclude .recycle** --exclude *.backup~* --exclude *.partial~* --bwlimit 70M --tpslimit 8 --min-age 10m --user-list
    rm /mnt/user/appdata/other/rclone/counter_five
    echo "$(date "+%d.%m.%Y %T") creating counter_six"
    touch /mnt/user/appdata/other/rclone/counter_six
    rm /mnt/user/appdata/other/rclone/rclone_upload
    exit
else
    echo "$(date "+%d.%m.%Y %T") skipping run 5"
fi

##Run 6

if [[ -f "/mnt/user/appdata/other/rclone/counter_six" ]]; then
    echo "$(date "+%d.%m.%Y %T") found counter_six"
    rclone move /mnt/user/rclone_upload/tdrive_vfs tdrive_user6_vfs: --user-agent="unRAID" -vv --buffer-size 512M --drive-chunk-size 512M --checkers 8 --transfers 6 --exclude .unionfs/** --exclude *fuse_hidden* --exclude *_HIDDEN --exclude .recycle** --exclude *.backup~* --exclude *.partial~* --bwlimit 70M --tpslimit 8 --min-age 10m --user-list
    rm /mnt/user/appdata/other/rclone/counter_six
    echo "$(date "+%d.%m.%Y %T") creating counter_seven"
    touch /mnt/user/appdata/other/rclone/counter_seven
    rm /mnt/user/appdata/other/rclone/rclone_upload
    exit
else
    echo "$(date "+%d.%m.%Y %T") skipping run 6"
fi

##Run 7

if [[ -f "/mnt/user/appdata/other/rclone/counter_seven" ]]; then
    echo "$(date "+%d.%m.%Y %T") found counter_seven"
    rclone move /mnt/user/rclone_upload/tdrive_vfs tdrive_user7_vfs: --user-agent="unRAID" -vv --buffer-size 512M --drive-chunk-size 512M --checkers 8 --transfers 6 --exclude .unionfs/** --exclude *fuse_hidden* --exclude *_HIDDEN --exclude .recycle** --exclude *.backup~* --exclude *.partial~* --bwlimit 70M --tpslimit 8 --min-age 10m --user-list
    rm /mnt/user/appdata/other/rclone/counter_seven
    echo "$(date "+%d.%m.%Y %T") creating counter_eight"
    touch /mnt/user/appdata/other/rclone/counter_eight
    rm /mnt/user/appdata/other/rclone/rclone_upload
    exit
else
    echo "$(date "+%d.%m.%Y %T") skipping run 7"
fi

##Run 8

if [[ -f "/mnt/user/appdata/other/rclone/counter_eight" ]]; then
    echo "$(date "+%d.%m.%Y %T") found counter_eight"
    rclone move /mnt/user/rclone_upload/tdrive_vfs tdrive_user8_vfs: --user-agent="unRAID" -vv --buffer-size 512M --drive-chunk-size 512M --checkers 8 --transfers 6 --exclude .unionfs/** --exclude *fuse_hidden* --exclude *_HIDDEN --exclude .recycle** --exclude *.backup~* --exclude *.partial~* --bwlimit 70M --tpslimit 8 --min-age 10m --user-list
    rm /mnt/user/appdata/other/rclone/counter_eight
    echo "$(date "+%d.%m.%Y %T") creating counter_nine"
    touch /mnt/user/appdata/other/rclone/counter_nine
    rm /mnt/user/appdata/other/rclone/rclone_upload
    exit
else
    echo "$(date "+%d.%m.%Y %T") skipping run 8"
fi

##Run 9

if [[ -f "/mnt/user/appdata/other/rclone/counter_nine" ]]; then
    echo "$(date "+%d.%m.%Y %T") found counter_nine"
    rclone move /mnt/user/rclone_upload/tdrive_vfs tdrive_user9_vfs: --user-agent="unRAID" -vv --buffer-size 512M --drive-chunk-size 512M --checkers 8 --transfers 6 --exclude .unionfs/** --exclude *fuse_hidden* --exclude *_HIDDEN --exclude .recycle** --exclude *.backup~* --exclude *.partial~* --bwlimit 70M --tpslimit 8 --min-age 10m --user-list
    rm /mnt/user/appdata/other/rclone/counter_nine
    echo "$(date "+%d.%m.%Y %T") creating counter_ten"
    touch /mnt/user/appdata/other/rclone/counter_ten
    rm /mnt/user/appdata/other/rclone/rclone_upload
    exit
else
    echo "$(date "+%d.%m.%Y %T") skipping run 9"
fi

##Run 10

if [[ -f "/mnt/user/appdata/other/rclone/counter_ten" ]]; then
    echo "$(date "+%d.%m.%Y %T") found counter_ten"
    rclone move /mnt/user/rclone_upload/tdrive_vfs tdrive_user10_vfs: --user-agent="unRAID" -vv --buffer-size 512M --drive-chunk-size 512M --checkers 8 --transfers 6 --exclude .unionfs/** --exclude *fuse_hidden* --exclude *_HIDDEN --exclude .recycle** --exclude *.backup~* --exclude *.partial~* --bwlimit 70M --tpslimit 8 --min-age 10m --user-list
    rm /mnt/user/appdata/other/rclone/counter_ten
    echo "$(date "+%d.%m.%Y %T") creating counter_one"
    touch /mnt/user/appdata/other/rclone/counter_one
    rm /mnt/user/appdata/other/rclone/rclone_upload
    exit
else
    echo "$(date "+%d.%m.%Y %T") skipping run 10"
fi

# added in case all counters are missing and the script makes it this far
# (every matched run exits above, so reaching here means no counter existed)

echo "$(date "+%d.%m.%Y %T") creating counter_one as missed counters"
touch /mnt/user/appdata/other/rclone/counter_one

rm /mnt/user/appdata/other/rclone/rclone_upload

exit
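As a footnote to the "probably a clever way to shorten the script" remark, here is one hedged sketch: a single counter file holding the next account number replaces the ten counter_* files. Demo paths live under /tmp, and the move is printed rather than run, with the flags elided:

```shell
#!/bin/bash
RC=/tmp/rotate_demo                 # stand-in for /mnt/user/appdata/other/rclone
mkdir -p "$RC"; rm -f "$RC/counter"

N=$(cat "$RC/counter" 2>/dev/null || echo 1)    # default to account 1
# printed instead of executed in this sketch; reuse the flags from the script above
echo "rclone move /mnt/user/rclone_upload/tdrive_vfs tdrive_user${N}_vfs: ..."

echo $(( N % 10 + 1 )) > "$RC/counter"          # advance 1..10, then wrap back to 1
```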

 

Edited by DZMM

Everything is still working fine. Today I looked at the uploader logs and found many, many movies that got ignored because of "size differ", e.g.:

 

2019/11/24 18:16:02 DEBUG : some (2006)/some 2006.mkv: Sizes differ (src 18491673236 vs dst 2663771332)

 

Shouldn't your cleanup script handle this?

 

Do I really need to go over all these files (more than 100) and delete them manually? How does this happen? I started deleting these (the files in dst)... but then I watched one of the movies and it plays just fine, so the source is just a better-quality release...

 

I thought this script handled the case where a new file with the same name gets added?!


So I thought: maybe add --ignore-size

 

but then

 

<lists every movie>

2019/11/27 13:05:54 NOTICE: imagine all your movies in this list (2013): Not deleting as dry run is set
2019/11/27 13:05:54 NOTICE: imagine all your movies in this list 2 (1987): Not deleting as dry run is set
2019/11/27 13:05:54 DEBUG : Local file system at /mnt/user/Archiv/Filme: deleted 1273 directories
2019/11/27 13:05:54 INFO  : 
Transferred:             0 / 0 Bytes, -, 0 Bytes/s, ETA -
Errors:                 0
Checks:               707 / 707, 100%
Transferred:            0 / 0, -
Elapsed time:       100ms

2019/11/27 13:05:54 DEBUG : 12 go routines active
2019/11/27 13:05:54 DEBUG : rclone: Version "v1.50.2" finishing with parameters ["rcloneorig" "--config" "/boot/config/plugins/rclone/.rclone.conf" "move" "--rc" "--rc-addr=192.168.86.2:5574" "--rc-user=test" "--rc-pass=test" "/mnt/user/Archiv/Filme" "gdrive_media_vfs:Filme" "-vv" "--drive-chunk-size" "128M" "--checkers" "4" "--fast-list" "--transfers" "4" "--exclude" ".unionfs/**" "--exclude" "*fuse_hidden*" "--exclude" "*_HIDDEN" "--exclude" ".recycle**" "--exclude" "*.backup~*" "--exclude" "*.partial~*" "--delete-empty-src-dirs" "--fast-list" "--tpslimit" "3" "--min-age" "1y" "--ignore-size" "--dry-run"]

 

 

This makes no sense to me!?

 


Here they say it will overwrite files:

https://forum.rclone.org/t/move-overwrite/1141

 

But it doesn't? Am I missing something!?

Edited by nuhll

Hmm, seems like the rclone GUI keeps dying on me.

When I point a web browser at the address I get "404 page not found." It was working fine for about 4 days and then broke.

*Resolved: I needed to restart rclone rcd prior to the upload script, so I altered it to start when the array starts and then kill the process when mounting.

Edited by Roken
  • 3 weeks later...

So I finally got it working; I just need some help with Radarr/Sonarr and rclone_upload.

 

Radarr path: /Movies <-> /mnt/user/mount_unionfs/google_vfs/Movies/
Sonarr path: /TV <-> /mnt/user/mount_unionfs/google_vfs/TV/
rclone upload: rclone move /mnt/user/rclone_upload/google_vfs/ gdrive_media_vfs:

 

My question is, how do Sonarr and Radarr know when to grab the files from this location? Is there something I'm missing?

 

Thanks

19 hours ago, kagzz said:

Yes, I know sir, I'm a noob :'( , so it doesn't work for me

 

Whilst Space Invader's video and @DZMM's guide are great, there is an element of know-how required. I would not consider this a beginner-level thing. You have a very real possibility of deleting everything on your array with a rogue command. Even I managed to delete 4TB of local data the other day with a missing /.

 

If you follow the guide to the letter it will work, but if you miss one thing it might not, so be careful. Also, there are only a handful of people using this process, so support is pretty much limited to this thread. That being said, I'm happy to help where I can.

 

I'm also working on an rclone script to move data from my cache to the array based on modified date. It's pretty much a copy-paste of this script, but I want granular control over what is on the SSD cache rather than just using Mover to move everything once a month.

Edited by sauso
