Guide: How To Use Rclone To Mount Cloud Drives And Play Files


DZMM


Just now, Kaizac said:

What is your desired file structure/topology? Do you want to upload everything to the cloud eventually, or do you have part of your library you want to keep local?

 

 

I want everything that's older than 1 year to be uploaded.

 

So I would set up a cron job which moves files older than 1 year to the upload directory.
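As a rough sketch (paths are just examples from this thread and assume the pre-union layout; adjust to your own shares), such a cron job could be something like:

# move files untouched for a year into the rclone upload folder, keeping the directory structure
cd /mnt/user/Archiv || exit
find . -type f -mtime +365 -print0 | while IFS= read -r -d '' f; do
    mkdir -p "/mnt/user/rclone_upload/google_vfs/$(dirname "$f")"
    mv "$f" "/mnt/user/rclone_upload/google_vfs/$f"
done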

 

Then the upload script can upload files older than 30 minutes to the cloud.

 

If it's possible, I'd really like to keep the structure I have. If that is not possible:

 

The question is whether Radarr and Sonarr cope when files move after 1 year

 

from

 

/mnt/user/Archiv/Movies/Movie/movie.mkv

 

to

 

/mnt/whatevercloud/Movies/Movie/movie.mkv

 

 

Link to comment

Sonarr and Radarr will understand if you point them to your mount_unionfs folders (the virtual folder, so to speak). The files can move between the folders the union is made of, but it will appear as if they didn't move. So you will have to rescan Sonarr and Radarr once on your unionfs folders and then you should be good. Make sure you also add the unionfs folders as RW-Slave to your docker templates, as DZMM mentioned.

 

So, about your file structure, this is what I would do. You need the following folders:

- The mount_unionfs where your local and cloud folders are combined

- Your rclone_upload folder where the files are added to the queue for uploading

- Your mount_rclone folder where your cloud files are accessible after mounting

- Your local folder, where files that should not be uploaded are stored

 

In your case I think you should make your Archiv folder your unionfs folder. You're probably used to accessing this folder for your files, so you can keep doing that. Make sure you change the scripts accordingly if you choose to go this route.

You should also move all your files out of your Archiv folder into your local folder (making it its own share might be useful) so that your Archiv folder is empty.

 

Then make sure the other 3 folders (rclone_upload, mount_rclone and local) have the same file structure below the top level. This ensures that unionfs picks up the different folders and combines them as one. It also lets you create the union at the top level instead of having to do it at the sublevel (i.e. Movies/Series/etc.).
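As an illustration (share names and the Movies/Series sub-folders are just examples based on this thread - adjust to your own), creating the mirrored top-level structure could look like this:

# give every branch the same top-level folders; the union mount point itself stays empty
for branch in /mnt/user/LocalStorage /mnt/user/rclone_upload/google_vfs /mnt/user/mount_rclone/google_vfs; do
    mkdir -p "$branch/Movies" "$branch/Series"
done
mkdir -p /mnt/user/Archiv   # the empty folder that will become the unionfs mount point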

 

For the unionfs command I would use the following:

unionfs -o cow,allow_other,direct_io,auto_cache,sync_read /mnt/user/LocalStorage=RW:/mnt/user/mount_rclone/google_vfs=RO:/mnt/user/rclone_upload/google_vfs=RO /mnt/user/Archiv

Be aware that you need to change LocalStorage to the share/folder you are using for your local files that should not be uploaded. Sonarr and Radarr will also move downloaded files to this folder.
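If you want to sanity-check the union before pointing your dockers at it, something along these lines should do (same mount point as in the example above):

ls /mnt/user/Archiv               # should show the combined contents of all three branches
fusermount -uz /mnt/user/Archiv   # lazily unmount the union again if something looks wrong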

 

If you are unsure about this (although you shouldn't break anything doing it), you can try it on different folders first. Make sure you disable your Sonarr and Radarr folders when first using the above command; once the union has succeeded and is to your liking, put them back on. With the setup I gave you they may not even need to rescan and will just see the same files as before, since you kept your folder structure intact.

Link to comment

So.

" The mount_unionfs where your local and cloud folders are combined"

Can I access that via SMB?

 

"Your rclone_upload folder where the files are added to the queue for uploading"

Does it keep the file structure intact? Like, if I add /Movies/Moviename/moviename.mkv, will it end up at unionfs/Movies/Moviename/moviename.mkv?

 

I still don't understand how to use 2 different directories. For Plex, DZMM helped me understand how it should work. But for Radarr and so on, you can't add multiple directories!?

 

Edit:

OK, I understand now, I guess.

 

You want me to change the paths in Plex, Radarr and Sonarr to the unionfs share (where local and remote are combined).

 

That makes sense.

 

I'll come back to this; first I bought the G Suite account and need to reset the API key and such things. In a few days I have nearly 1.5 weeks of vacation, then I will complete all of this.

 

Can I add you on Discord or something?

Edited by nuhll
Link to comment
Quote

So.

" The mount_unionfs where your local and cloud folders are combined"

Can I access that via SMB?

I don't know what you mean by SMB. You can access it by browsing your network, going to your Unraid server and then Shares, if that's what you mean. Otherwise you can share it over SMB through the settings on the share it's on.

Quote

 

"Your rclone_upload folder where the files are added to the queue for uploading"

Does it keep the file structure intact? Like, if I add /Movies/Moviename/moviename.mkv, will it end up at unionfs/Movies/Moviename/moviename.mkv?

 

The file stays at unionfs/Movies/Moviename/moviename.mkv if you look at the union folder. If you go to the separate folders, you will see it move from Local to your Upload folder. Unionfs makes it so that you don't see a difference in where the file is located, as long as it's part of the union.
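To illustrate with the folder names used earlier in this thread (purely an example):

# the same file, seen through the union and through the underlying branches
ls /mnt/user/Archiv/Movies/Moviename/                    # moviename.mkv is always visible here
ls /mnt/user/LocalStorage/Movies/Moviename/              # holds it while it is still local-only
ls /mnt/user/rclone_upload/google_vfs/Movies/Moviename/  # holds it while it is queued for upload
ls /mnt/user/mount_rclone/google_vfs/Movies/Moviename/   # holds it once rclone has moved it to the cloud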

 

Quote

I still don't understand how to use 2 different directories. For Plex, DZMM helped me understand how it should work. But for Radarr and so on, you can't add multiple directories!?

You don't need to add multiple directories. For Radarr you add your unionfs/Archiv/Movies folder and for Sonarr unionfs/Archiv/Series. With the unionfs command I provided, the system will know to write to the Local branch when moving files. The other branches are read-only (RO).


 

Quote

 

You want me to change the paths in Plex, Radarr and Sonarr to the unionfs share (where local and remote are combined).


 

Yes, in your case I would make your Archiv folder your unionfs folder so you don't need to change paths, I think (I don't know your file structure fully). But if you want to start from DZMM's tutorial, then using mount_unionfs is easier, and you will need to change your docker templates to the folders within mount_unionfs.

 

Quote

Can I add you on Discord or something?

Sure, just send me a PM when you get stuck and need assistance and I'll give you my Discord username (I don't want to put it publicly here).

Link to comment

So, I bought a G Suite account, and it says unlimited storage, you're right. :)

 

But you can't migrate your standard account to it... (how DUMB is that)

 

What's the correct way to remove the old Google Drive account and add the new one? (OK, adding the new one is just like described on page 1, I guess.)

 

BTW, I've been testing that upload start/stop script for a few days and it works perfectly. If any of the IPs are online it stops the upload, and if all IPs are unreachable it starts the upload :)

 

Maybe I'll find a way to stop (or throttle) NZBGet as well... :)

Edited by nuhll
Link to comment
18 minutes ago, nuhll said:

What's the correct way to remove the old Google Drive account and add the new one?

The safest way is to create a new remote, gdrive_new:, and add this to your scripts.
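A rough outline of that swap (the remote name gdrive_new is just the example above; any crypt remote from the first post would be re-created on top of it in the same way):

rclone config            # choose "n) New remote", name it gdrive_new, type "drive", authorise with the new account
rclone lsd gdrive_new:   # quick sanity check that the new remote is reachable
# then point the crypt remote and the mount/upload scripts at gdrive_new: instead of the old remote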

 

18 minutes ago, nuhll said:

BTW, I've been testing that upload start/stop script for a few days and it works perfectly. If any of the IPs are online it stops the upload, and if all IPs are unreachable it starts the upload

IMO the script sounds like a bad idea.  Uploads don't resume, so if you've spent 2 hours reaching 99% of a file and then stop, you'll have to start all over again.  I'd recommend using --bwlimit to schedule peak and off-peak upload speeds, or just upload overnight.  If you have to police your upload, i.e. you can't comfortably upload 500-750GB/day, then the service becomes less useful.
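For reference, rclone's --bwlimit flag accepts a timetable, so a single long-running upload can throttle itself during the day instead of being killed - a minimal sketch (times and rates are only examples):

rclone move /mnt/user/rclone_upload/google_vfs/ gdrive_media_vfs: \
    --bwlimit "07:00,1M 23:00,off" \
    --min-age 30m -vv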

 

Edit: or find a way to traffic shape the rclone upload, e.g. with pfSense.

Edited by DZMM
Link to comment

It does resume; I tested this already (if I interpreted the output correctly). I'm not sure whether it only resumes per chunk (so set the chunk size lower on slow connections), but it doesn't start from scratch every time. And even if it didn't, it's better to start fresh 3 or 4 times a day than to bandwidth-limit all the time... or at specific times... on my line a chunk takes 4 or 5 minutes (I haven't lowered it so far).

 

Also, normally you don't turn your PC on and off all the time..., so let's say the upload gets cut 3 or 4 times a day (if you have 4 PCs).

 

Can I just remove it under Plugins > rclone and start from scratch?

 

I already tried prioritising traffic, but that's all rubbish and a waste of time. I have a very special internet setup... (2x LTE & 2x DSL bundled)

Edited by nuhll
Link to comment
23 minutes ago, nuhll said:

It does resume; I tested this already (if I interpreted the output correctly). I'm not sure whether it only resumes per chunk (so set the chunk size lower on slow connections), but it doesn't start from scratch every time. And even if it didn't, it's better to start fresh 3 or 4 times a day than to bandwidth-limit all the time... or at specific times... on my line a chunk takes 4 or 5 minutes (I haven't lowered it so far).

 

I wouldn't rely on resume working with gdrive, as it doesn't work yet: https://github.com/ncw/rclone/issues/87

Link to comment

Yeah, even if it doesn't resume (but I saw it not starting at 0% when aborting/restarting), it's the best way, I think. We are a two-family home, so I can't be bothered to start and stop it every time someone is home... I think if a file has 4 chunks and I upload 2, it will start with the 3rd chunk next time (that's how I think it works, at least).

Edited by nuhll
Link to comment

I see this post is really taking off, good job @DZMM. I have had zero issues with my setup so far. The only limiting factor right now is that my upload speed from Spectrum is terrible: 400Mbps down, 20Mbps up. I was thinking of getting the gig line, but the price is terrible and I can't justify spending that for 940 down, 40 up. So I had to get a VPS to take care of that (which is cheaper than the gig line on top of what I have anyway).

Did you update your pfSense to 2.4.4_1? I've been reading about issues people are having with it and I'm kind of waiting a bit longer.

Edited by slimshizn
Link to comment
6 hours ago, slimshizn said:

I see this post is really taking off, good job @DZMM. I have had zero issues with my setup so far. The only limiting factor right now is that my upload speed from Spectrum is terrible: 400Mbps down, 20Mbps up. I was thinking of getting the gig line, but the price is terrible and I can't justify spending that for 940 down, 40 up. So I had to get a VPS to take care of that (which is cheaper than the gig line on top of what I have anyway).

Did you update your pfSense to 2.4.4_1? I've been reading about issues people are having with it and I'm kind of waiting a bit longer.

What's your VPS spec and how did you set it up?  I might have to do this next year as I might be moving from 300/300 to an area where the upload is only about 30.

 

Going to read up on 2.4.4_1 now

Link to comment

I fixed my small upload (10Mbit) by buying 2 "slow" lines, so I doubled my upload speed. Of course, it costs double. "Load" balancing can be done in the network and works well. (The question now is whether 2 slow lines are cheaper than 1 gig line.)

 

If you have a DSL line you can do that without any new cables, because in a DSL cable there are 4 pairs and for 1 DSL connection you only need 2, so 2 are free.

Edited by nuhll
Link to comment
3 minutes ago, slimshizn said:

https://www.wholesaleinternet.net/ and I'm spending about 60 a month. I'll send more info later; I have to work all day today.

That's a lot - check out https://www.hetzner.com/sb or https://github.com/Admin9705/PlexGuide.com-The-Awesome-Plex-Server/wiki/Recommend-Hosting-Servers

 

@nuhll that's a bad way to do it IMO.  How much are you spending?  You could probably spend the same on a dedicated server from the links above and get gigabit speeds and processing power to run your plex server without using any of your home bandwidth

Link to comment

@DZMM I've been looking for a write-up like this.  Thank you!  Quick question: would it make sense to store any of the directories on my SSD cache drive?  I can modify the scripts, just wondering if there is a benefit, and if so, which directories.

 

EDIT:

 

/mnt/user/appdata/other/rclone 

 

Looks like it's only used to create the file that the script uses to determine if rclone is running

rclone_mount_running

 

What I'm trying to do is avoid having the HDDs running all the time.

 

I suppose I wouldn't even need to modify the script if I just manually create the share for the appropriate directory(ies) and set it to cache-only?

 

The first time I tried to run the script, it failed because the /mnt/user/appdata/other/rclone directory didn't exist.

 

Just need to move

 

mkdir -p /mnt/user/appdata/other/rclone

 

above 

 

if [[ -f "/mnt/user/appdata/other/rclone/rclone_mount_running" ]]; then

echo "$(date "+%d.%m.%Y %T") INFO: Exiting script already running."

 

 

Edited by bender1
Link to comment
7 hours ago, DZMM said:

That's a lot - check out https://www.hetzner.com/sb or https://github.com/Admin9705/PlexGuide.com-The-Awesome-Plex-Server/wiki/Recommend-Hosting-Servers

 

 @nuhll that's a bad way to do it IMO.  How much are you spending?  You could probably spend the same on a dedicated server from the links above and get gigabit speeds and processing power to run your plex server without using any of your home bandwidth

Before all this, I didn't know that this would be possible with Google Drive. And renting several TBs of space would definitely not be cheaper. ;)

Also, I already own a root server (at Hetzner), but I want to move all of that back home.

 

Besides this, double DSL lines also give double the download speed. All my neighbours have 6-16Mbit down / 2Mbit up; I have managed to bundle 2x DSL and 2x LTE into 150Mbit down and 20Mbit up... for under 80€ (which is okay for 2 families).

Edited by nuhll
Link to comment
3 hours ago, bender1 said:

What I'm trying to do is avoid having the HDDs running all the time.

If you set your appdata, mount_rclone and mount_unionfs shares to cache-only, that should do the trick.

 

3 hours ago, bender1 said:

The first time I tried to run the script, it failed because the /mnt/user/appdata/other/rclone directory didn't exist.

I thought touch created the necessary directory tree? If not, glad you figured out how to fix it.
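(For what it's worth, touch on its own doesn't create missing parent directories, which would explain the failure - a quick illustration:)

touch /mnt/user/appdata/other/rclone/rclone_mount_running
# -> "No such file or directory" if the folder is missing
mkdir -p /mnt/user/appdata/other/rclone                      # create the directory tree first
touch /mnt/user/appdata/other/rclone/rclone_mount_running    # now succeeds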

Link to comment
14 hours ago, DZMM said:

That's a lot - check out https://www.hetzner.com/sb or https://github.com/Admin9705/PlexGuide.com-The-Awesome-Plex-Server/wiki/Recommend-Hosting-Servers

 

@nuhll that's a bad way to do it IMO.  How much are you spending?  You could probably spend the same on a dedicated server from the links above and get gigabit speeds and processing power to run your plex server without using any of your home bandwidth

Honestly it's not a lot, considering Spectrum wants 120 more a month for gig speeds with SLOW upload. I bought a decent VPS and it's doing its job very well.

Link to comment

Does anyone know a good bash/shell script editor with syntax highlighting and else/if/fi colouring? I've installed Notepad++, but it's frustrating how complicated it is, and the shell highlighting doesn't highlight else/if...

 

I rearranged my upload start/stop script... but I've got some else/if/fi statements in the wrong order...

 

Quote

 

#!/bin/bash
#rm /mnt/user/downloads/pingtest
#touch /mnt/user/downloads/pingtest

### hosts
host=192.168.86.1
host2=192.168.86.48
host3=192.168.86.154
### hosts

### Debug - can be removed
current_date_time="$(date "+%Y-%m-%d %H:%M:%S")"
echo "$current_date_time" >> /mnt/user/downloads/pingtest

echo "host 1" >> /mnt/user/downloads/pingtest
ping -c 1 -W1 -q $host > /dev/null
echo "$?" >> /mnt/user/downloads/pingtest

echo "host 2" >> /mnt/user/downloads/pingtest
ping -c 1 -W1 -q $host2 > /dev/null
echo "$?" >> /mnt/user/downloads/pingtest

echo "host 3" >> /mnt/user/downloads/pingtest
ping -c 1 -W1 -q $host3 > /dev/null
echo "$?" >> /mnt/user/downloads/pingtest
### Debug - can be removed

### Ping 3 hosts - if any of them answers, someone is home
if ping -c 1 -W1 -q $host > /dev/null || ping -c 1 -W1 -q $host2 > /dev/null || ping -c 1 -W1 -q $host3 > /dev/null; then

    #######  someone is home: stop the upload and limit NZBGet (only once)  ##########
    if [[ -f "/mnt/user/appdata/other/nzbget/upload_stoppen" ]]; then
        logger "$(date "+%d.%m.%Y %T") NZBGet already limited / rclone already killed."
        exit
    fi
    touch /mnt/user/appdata/other/nzbget/upload_stoppen
    rm -f /mnt/user/appdata/other/nzbget/upload_starten
    logger "ping successful, stopping upload"
    killall -9 rclone
    docker exec nzbget /app/nzbget/nzbget -c /config/nzbget.conf -R 10000

else

    #######  nobody home: lift the NZBGet limit (only once) and run the upload  ##########
    rm -f /mnt/user/appdata/other/nzbget/upload_stoppen
    if [[ ! -f "/mnt/user/appdata/other/nzbget/upload_starten" ]]; then
        touch /mnt/user/appdata/other/nzbget/upload_starten
        logger "ping failed, starting upload"
        docker exec nzbget /app/nzbget/nzbget -c /config/nzbget.conf -R 0
    else
        logger "$(date "+%d.%m.%Y %T") NZBGet already unlimited."
    fi

    ######  check if the upload script is already running  ##########
    if [[ -f "/mnt/user/appdata/other/rclone/rclone_upload" ]]; then
        echo "$(date "+%d.%m.%Y %T") INFO: Exiting as script already running."
        exit
    fi
    touch /mnt/user/appdata/other/rclone/rclone_upload

    ######  check if the rclone mount is up  ##########
    if [[ -f "/mnt/user/mount_rclone/google_vfs/mountcheck" ]]; then
        echo "$(date "+%d.%m.%Y %T") INFO: rclone installed successfully - proceeding with upload."
    else
        echo "$(date "+%d.%m.%Y %T") INFO: rclone not installed - will try again later."
        rm /mnt/user/appdata/other/rclone/rclone_upload
        exit
    fi

    # move files
    rclone move /mnt/user/rclone_upload/google_vfs/ gdrive_media_vfs: -vv --drive-chunk-size 512M --checkers 3 --fast-list --transfers 2 --exclude .unionfs/** --exclude *fuse_hidden* --exclude *_HIDDEN --exclude .recycle** --exclude *.backup~* --exclude *.partial~* --delete-empty-src-dirs --tpslimit 3 --min-age 1m

    # remove dummy file
    rm /mnt/user/appdata/other/rclone/rclone_upload
fi
 

 

 

 

I took the 'if file exists' checks from your scripts, to prevent unnecessary commands to NZBGet and so on - but the upload part should always run (it checks itself whether it needs to run).

 

 

Edited by nuhll
Link to comment

I just made a very useful change to my scripts that has solved my problem with the 750GB/day upload limit, which was creating bottlenecks on my local server as I couldn't upload fast enough to keep up with new pending content.

 

I've added a Team Drive remote to my setup, which allows me to upload another 750GB/day in addition to the 750GB/day to my existing remote.  This is because the 750GB/day limit is per account - by sharing the Team Drive created by my Google Apps account with another Google account, I can upload more.  Theoretically I could repeat this for n extra accounts (each one would need its own token for the team drive remote), but 1 is enough for me.

 

Steps:

  1. Create a new team drive with your main Google Apps account
  2. Share it with a 2nd Google account
  3. Create new team drive remotes (see first post) - remember to get the token from the account in #2, not the account in #1, otherwise you won't get the 2nd upload quota
  4. Amend the mount script (see first post) to mount the new tdrive and change the unionfs mount from a 2-way union to a 3-way union including the tdrive (see the sketch after this list)
  5. New upload script to upload to the tdrive - my first upload script moves files from the array, and the 2nd from the cache. Another way to 'load-balance' the uploads would be to run one script against disks 1-3 and the other against disks 4-x
  6. Add a tdrive line to the cleanup script
  7. Add a tdrive line to the unmount script
  8. Optionally repeat if you need more upload capacity, e.g. change the 3-way union to a 4-way
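For step 4, the resulting 3-way union might look roughly like this (folder and remote names are only examples based on this thread, not the exact lines from the first post):

unionfs -o cow,allow_other,direct_io,auto_cache,sync_read \
    /mnt/user/local/google_vfs=RW:/mnt/user/mount_rclone/google_vfs=RO:/mnt/user/mount_rclone/tdrive_vfs=RO \
    /mnt/user/mount_unionfs/google_vfs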
Edited by DZMM
Link to comment

@DZMM nice find. I started out using the Team Drive, but I found that software like Duplicati doesn't work well with Team Drives, so I moved back to the personal Gdrive. What I found is that you can just move the files from the Team Drive to the personal Gdrive through the web GUI of Gdrive. It takes no time and doesn't impact your quota as far as I've noticed. If you keep the same file structure as the personal Gdrive and you make sure rclone uses the same password and salt, it will be understood by rclone and you can browse the files again.
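For anyone trying this, the key point is that both crypt remotes share the same password and password2 (the salt) in rclone.conf - roughly along these lines (remote names and the placeholder values are only examples):

[gdrive_media_vfs]
type = crypt
remote = gdrive:crypt
filename_encryption = standard
password = <same obscured password>
password2 = <same obscured salt>

[tdrive_media_vfs]
type = crypt
remote = tdrive:crypt
filename_encryption = standard
password = <same obscured password>
password2 = <same obscured salt>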

Link to comment
2 minutes ago, Kaizac said:

@DZMM nice find. I started out using the Team Drive, but I found that software like Duplicati doesn't work well with Team Drives, so I moved back to the personal Gdrive. What I found is that you can just move the files from the Team Drive to the personal Gdrive through the web GUI of Gdrive. It takes no time and doesn't impact your quota as far as I've noticed. If you keep the same file structure as the personal Gdrive and you make sure rclone uses the same password and salt, it will be understood by rclone and you can browse the files again.

Unfortunately I can't move files as the encryption is different - I think because I'm using different tokens. I've asked on the rclone forums if there's a way around this, because I'd like to periodically move all files to one location.

Link to comment
16 minutes ago, DZMM said:

Unfortunately I can't move files as the encryption is different - I think because I'm using different tokens. I've asked on the rclone forums if there's a way around this, because I'd like to periodically move all files to one location.

Just to be sure: you use your own password and your own defined salt? It seems weird that it would be down to the token, as changing the API key on the mount doesn't make encrypted files unreadable...

Link to comment
35 minutes ago, Kaizac said:

Just to be sure: you use your own password and your own defined salt? It seems weird that it would be down to the token, as changing the API key on the mount doesn't make encrypted files unreadable...

I use the same passwords for both mounts, and different client IDs (I tried the same ID, which didn't fix the problem), but different tokens because I'm using two different Google accounts. I'm not sure what rclone uses to encrypt, but I'm guessing the tokens are the reason, or the actual remote name.

 

I'm hoping there is a fix. But it's not a big problem as everything 'works' - it would just be nice to have the option of moving files between remotes.

 

It's made such a big difference today having another 750GB, or 70Mbps, of upload, as the single 750GB limit has been a pain in the ass for the last couple of months; I've had to manually manage how fast my rclone_upload backlog grew, whereas now it's finally shrinking. I think I'm going to create a 3rd team drive tomorrow.

Link to comment
