Guide: How To Use Rclone To Mount Cloud Drives And Play Files


DZMM

Recommended Posts

6 hours ago, DZMM said:

Is anyone who is using rotating service accounts getting slow upload speeds?  Mine have dropped to around 100 KiB/s even if I rotate accounts....

 

Hmm - not here yet, sir.

Quote

Transferred: 45.895 GiB / 45.895 GiB, 100%, 47.895 MiB/s, ETA 0s

 

Is this the same problem that others reported with specific servers being slow? 

Link to comment

Hi, first of all I wanted to thank you for your work. I'm a newbie to Unraid, scripts, etc., but I would like to try using your script to mount my Team Drives.
I'll explain what I would like to do:
On Google Drive I have several Team Drives.
Of these, only the "media_storage" drive is the one I would like to use in Plex with mergerfs for streaming, merged with my local library.
For this drive I would also like to periodically upload the files in a local folder and remove them after uploading.

The other drives, on the other hand, will not be used in Plex, so I don't need mergerfs for them - only mounts and mirror synchronisation with the source.

 



1) How should I set up the script? Do I have to create one for each Team Drive?
2) In the mount script, what is meant by LocalFilesShare? Is it the folder of local files that will be merged into the share?
3) In the upload script, for a mirror copy I should select sync, right?

Sorry for the stupid questions.
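(On question 3: a mirror-style upload for the drives that only need synchronisation would typically be a plain rclone sync rather than a move; a minimal sketch, with the remote and path names as assumptions:

rclone sync /mnt/user/local/other_tdrive other_tdrive: --checkers 8 --transfers 4 -v
# sync makes the remote an exact mirror of the source (including deletions);
# move is the variant that removes local files after uploading them.)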

Link to comment

Hey everyone, wondering if anyone else has had a similar issue. Everything is set up and running, and I have my content downloading into a folder I created inside the mergerfs mount. I use NZBGet as the download client and everything works, including hardlinks. It just downloads at a very slow rate, approximately 8 MB/s.

Now if I change no settings in NZBGet other than the path, pointing it to a basic user share I created, it downloads at about my full bandwidth, approx. 50-60 MB/s.

It seems like the only variable is mergerfs, but I'm not sure why it's affecting the speed.

I had both set to use the cache.

 

Thanks in advance.

 

 

Link to comment

Having a bit of trouble with getting my files to show up in the right places. @MowMdown has been helping me and has been extraordinarily helpful and patient.

I have rclone gdrive, crypt, and union configured.
After getting my mount and upload scripts set up, I did a very simple test with a couple of movies:
I modified the Plex docker /movies/ host path to /mnt/disks/media/movies/.
I added a new Plex library pointing to the /movies/ docker path.
I added the upload script for movies.
I set the movies folder to mnt/user/media/movies.
I copied 1 movie to mnt/user/media/movies.

I ran the movies upload script.
It moved the one movie (folder and file)
from mnt/disks/media
to mnt/disks/media_vfs.
The movie folder and file are not in /mnt/disks/media/movies;
they are displayed in /mnt/disks/media_vfs/movies.

Plex does not see the movie in the library path: /mnt/disks/media/movies
Plex DOES see the movie in the library path: /mnt/disks/media/movies_vfs

I can play the movie there (movies_vfs) fine.

Could something be wrong with the union configuration?

[union]
type = union
upstreams = /mnt/user/media /mnt/user/media_vfs:nc
action_policy = ff
create_policy = ff
search_policy = all

 

Link to comment
5 hours ago, daquint said:

Having a bit of trouble with getting my files to show up in the right places. [...]

Could something be wrong with the union configuration?

[union]
type = union
upstreams = /mnt/user/media /mnt/user/media_vfs:nc
action_policy = ff
create_policy = ff
search_policy = all

 

MowMdown helped me solve this - union config error:
upstreams = /mnt/user/media /mnt/user/media_vfs:nc
changed to
upstreams = /mnt/user/media /mnt/disks/media_vfs:nc

All is working!
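For anyone following along, the full working [union] section then reads (only the second upstream changed from the original post):

[union]
type = union
upstreams = /mnt/user/media /mnt/disks/media_vfs:nc
action_policy = ff
create_policy = ff
search_policy = all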

Link to comment

@DZMM Hi,
I'm hitting a snag when I try to mount my gdrive:


Failed to create file system for "gdrive:": drive: failed when making oauth client: error opening service account credentials file: open /mnt/user/appdata/other/rclone/service_accounts/sa_gdrive.json: no such file or directory

but I do have my service accounts in that path.

 

Am I missing something here? I had it working, did a restart, and now this.
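If anyone else hits this, a quick sanity check is to confirm the file really exists at the path in the error and that the remote still points at it - a minimal sketch (the remote name gdrive and the path are taken from the error above):

ls -l /mnt/user/appdata/other/rclone/service_accounts/sa_gdrive.json
rclone config show gdrive | grep service_account_file
# if the upload script rotates numbered service account files, also check that
# every file name it expects is actually present in the folder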

Link to comment

A quick question: are people updating the rclone plugin, or leaving it on an older version? I haven't updated it since I started using it, because I'm afraid it might get more unstable.

I've experienced a couple of times now that one of my shares suddenly disappears, but a reboot of Unraid solves it.

Perhaps an updated Rclone plugin is more stable?

Link to comment
On 8/7/2022 at 10:08 PM, Bjur said:

A quick question: are people updating the rclone plugin, or leaving it on an older version? I haven't updated it since I started using it, because I'm afraid it might get more unstable.

I've experienced a couple of times now that one of my shares suddenly disappears, but a reboot of Unraid solves it.

Perhaps an updated Rclone plugin is more stable?

Turns out I'm getting this out-of-memory error.

Anyone know the reason?

I can't get one of my upload scripts to work. It says the script is already running and exits. It has always worked, but suddenly it doesn't.

Any help?

 

 

 

 

Aug 9 21:28:20 Unraid kernel: [ 6608] 0 6608 2061 643 49152 0 0 awk

Aug 9 21:28:20 Unraid kernel: [ 6616] 0 6616 1060 578 45056 0 0 pgrep

Aug 9 21:28:20 Unraid kernel: [ 6624] 0 6624 616 184 40960 0 0 sleep

Aug 9 21:28:20 Unraid kernel: oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=/,mems_allowed=0,global_oom,task_memcg=/,task=rcloneorig,pid=6563,uid=0

Aug 9 21:28:20 Unraid kernel: Out of memory: Killed process 6563 (rcloneorig) total-vm:14764432kB, anon-rss:12917756kB, file-rss:4kB, shmem-rss:33508kB, UID:0 pgtables:27584kB oom_score_adj:0

Aug 9 21:29:41 Unraid emhttpd: read SMART /dev/sdb

Aug 9 21:58:24 Unraid webGUI: Successful login user root

Link to comment

Hello everyone, and first of all thanks for this great guide and the work on this topic.

I tried to install the script as well, but ran into problems. Everything seems to work, but when I try to upload the data (from the local or mergerfs drive, it doesn't matter) it starts a process and then deletes everything. Nothing is uploaded or moved - just deleted.

 

Script location: /tmp/user.scripts/tmpScripts/RCLONE Upload Script/script
Note that closing this window will abort the execution of this script
16.08.2022 15:34:20 INFO: *** Rclone move selected. Files will be moved from /mnt/user/local/drivecrypted for drivecrypted ***
16.08.2022 15:34:20 INFO: *** Starting rclone_upload script for drivecrypted ***
16.08.2022 15:34:20 INFO: Script not running - proceeding.
16.08.2022 15:34:20 INFO: Checking if rclone installed successfully.
16.08.2022 15:34:20 INFO: rclone installed successfully - proceeding with upload.
16.08.2022 15:34:20 INFO: Uploading using upload remote drivecrypted
16.08.2022 15:34:20 INFO: *** Using rclone move - will add --delete-empty-src-dirs to upload.
2022/08/16 15:34:20 INFO : Starting bandwidth limiter at 30Mi Byte/s
2022/08/16 15:34:20 INFO : Starting transaction limiter: max 8 transactions/s with burst 1
2022/08/16 15:34:20 DEBUG : --min-age 15m0s to 2022-08-16 15:19:20.331489773 +0200 CEST m=-899.967578627
2022/08/16 15:34:20 DEBUG : rclone: Version "v1.59.1" starting with parameters ["rcloneorig" "--config" "/boot/config/plugins/rclone/.rclone.conf" "move" "/mnt/user/local/drivecrypted" "drivecrypted:" "--user-agent=drivecrypted" "-vv" "--buffer-size" "512M" "--drive-chunk-size" "512M" "--tpslimit" "8" "--checkers" "8" "--transfers" "4" "--order-by" "modtime,ascending" "--min-age" "15m" "--exclude" "downloads/**" "--exclude" "*fuse_hidden*" "--exclude" "*_HIDDEN" "--exclude" ".recycle**" "--exclude" ".Recycle.Bin/**" "--exclude" "*.backup~*" "--exclude" "*.partial~*" "--drive-stop-on-upload-limit" "--bwlimit" "01:00,off 08:00,30M 16:00,30M" "--bind=" "--delete-empty-src-dirs"]
2022/08/16 15:34:20 DEBUG : Creating backend with remote "/mnt/user/local/drivecrypted"
2022/08/16 15:34:20 DEBUG : Using config file from "/boot/config/plugins/rclone/.rclone.conf"
2022/08/16 15:34:20 DEBUG : Creating backend with remote "drivecrypted:"
2022/08/16 15:34:20 DEBUG : Creating backend with remote "drivecrypt"
2022/08/16 15:34:20 DEBUG : fs cache: renaming cache item "drivecrypt" to be canonical "/drivecrypt"
2022/08/16 15:34:20 DEBUG : downloads: Excluded
2022/08/16 15:34:20 DEBUG : Encrypted drive 'drivecrypted:': Waiting for checks to finish
2022/08/16 15:34:20 DEBUG : Encrypted drive 'drivecrypted:': Waiting for transfers to finish
2022/08/16 15:34:22 DEBUG : movies/Dolby_City_Redux_Lossless-thedigitaltheater.mkv: md5 = 91056f9150b7523bdc784e0da01cd411 OK
2022/08/16 15:34:22 INFO : movies/Dolby_City_Redux_Lossless-thedigitaltheater.mkv: Copied (new)
2022/08/16 15:34:22 INFO : movies/Dolby_City_Redux_Lossless-thedigitaltheater.mkv: Deleted
2022/08/16 15:34:22 INFO : folder1: Removing directory
2022/08/16 15:34:22 INFO : folder2: Removing directory
2022/08/16 15:34:22 INFO : folder3: Removing directory
2022/08/16 15:34:22 INFO : folder4: Removing directory
2022/08/16 15:34:22 INFO : folder5: Removing directory
2022/08/16 15:34:22 INFO : folder6: Removing directory
2022/08/16 15:34:22 DEBUG : Local file system at /mnt/user/local/drivecrypted: deleted 6 directories
2022/08/16 15:34:22 INFO :
Transferred: 53.330 MiB / 53.330 MiB, 100%, 32.537 MiB/s, ETA 0s
Checks: 2 / 2, 100%
Deleted: 1 (files), 6 (dirs)
Renamed: 1
Transferred: 1 / 1, 100%
Elapsed time: 1.9s

2022/08/16 15:34:22 DEBUG : 7 go routines active
16.08.2022 15:34:22 INFO: Not utilising service accounts.
16.08.2022 15:34:22 INFO: Script complete

 

I cannot spot the fault. Has anyone else had similar problems? Thanks in advance.

 

Link to comment
22 minutes ago, Paff said:

I tried to install the script as well, but ran into problems. Everything seems to work, but when I try to upload the data (from the local or mergerfs drive, it doesn't matter) it starts a process and then deletes everything. Nothing is uploaded or moved - just deleted. [full log as above]

I cannot spot the fault. Has anyone else had similar problems?

 

Got it fixed - my config was wrong. Sometimes it can be that easy: in the crypt section of the config I had forgotten the colon on the wrapped remote (remote" -> :crypt").
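For reference: without the colon, rclone treats the value of remote as a local folder, which is what the log above shows ("drivecrypt" being made canonical as "/drivecrypt") and why the files looked deleted rather than uploaded. A working crypt section would look roughly like this - the wrapped-remote and folder names are assumptions, and the passwords are placeholders:

[drivecrypted]
type = crypt
remote = drivecrypt:crypt
password = <obscured>
password2 = <obscured>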

Hope that helps if someone has the same hiccups.

Thanks! 

BR 

Link to comment
On 7/25/2022 at 10:24 AM, DZMM said:

Is anyone who is using rotating service accounts getting slow upload speeds?  Mine have dropped to around 100 KiB/s even if I rotate accounts....

Still having issues? Because for me it's working fine using your upload script. I have 80-100 rotating SAs, though.

 

On 8/14/2022 at 11:00 AM, Sildenafil said:

I can't stop the array because of the script; it ends up in a loop trying to unmount /mnt/user.
I use the mount script without mergerfs.

Any advice on how to set the script to execute when the array stops? The one on GitHub creates this problem for me.

I've never been able to stop the array once I started using rclone mounts. I think the constant connections are preventing it. You could try shutting down your dockers first and make sure there are no file transfers going on. But I just reboot if I need the array down.
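If you want to experiment before resorting to a reboot, a rough sketch of the order that can work - container names and mount paths here are assumptions:

docker stop plex sonarr radarr nzbget     # stop anything holding files open on the mounts
fusermount -uz /mnt/user/mount_mergerfs/gdrive_media_vfs
fusermount -uz /mnt/user/mount_rclone/gdrive_media_vfs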

 

On 8/17/2022 at 11:43 AM, francrouge said:

Hi guys

 

I was wondering if any of you map your download docker directly onto gdrive and seed from it.

Questions:

#1 Do you encrypt the files?

#2 Do you use hardlinks?

Any tutorial or additional info on how to use it?

 

 

thx

 

I do use direct mounts for certain processes - for example, my Nextcloud photo backups go straight into the Team Drive (I would not recommend using a personal Google Drive anymore, only Team Drives). I always use encrypted mounts, but depending on what you are storing you might not mind that it's unencrypted.

 

I use the normal mounting commands, although I currently don't use the caching ability that Rclone offers.

But for download dockers and such, I think you need to check whether the download client downloads in increments and uploads those, or first stores the file locally and then sends the whole file to the Gdrive/Team Drive. If it's writing small increments directly to your mount, I suspect it could be a problem for API hits. And I don't like the risk of file corruption this could potentially introduce.

 

Seeding directly from your Google Drive/Tdrive is for sure going to cause problems with your API hits - too many small downloads will ruin them. If you want to experiment with that, I suggest you use a separate service account and create a mount specifically for that docker/purpose to test. I have separate rclone mounts for some dockers, or combinations of dockers, that can create a lot of API hits, and I keep them separate from my Plex mount so they don't interfere with each other.
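As a rough illustration of that split: two remotes can point at the same Team Drive but authenticate with different service accounts, so each mount draws on its own per-account quota. A sketch with assumed remote names, file names, and a placeholder shared-drive ID:

[tdrive_plex]
type = drive
scope = drive
service_account_file = /mnt/user/appdata/rclone/service_accounts/sa_tdrive_plex.json
team_drive = <shared drive ID>

[tdrive_seed]
type = drive
scope = drive
service_account_file = /mnt/user/appdata/rclone/service_accounts/sa_tdrive_seed.json
team_drive = <shared drive ID>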

Link to comment
On 8/20/2022 at 4:23 AM, Kaizac said:

Still having issues? Because for me it's working fine using your upload script. I have 80-100 rotating SAs, though. [...]

Seeding directly from your Google Drive/Tdrive is for sure going to cause problems with your API hits. [...] If you want to experiment with that, I suggest you use a separate service account and create a mount specifically for that docker/purpose to test.

Do you have any documentation on service accounts? I found some, but it's not up to date, also for Team Drives. Thanks.

Link to comment
On 8/21/2022 at 7:48 AM, maxse said:

Hi folks, 

Does anyone know how this exact thing can be done on a Synology NAS?

Don't mean to be off-topic but I love unraid and the community (had it since 2014 or so). 

I can't seem to find any guides, just some random posts on Reddit from people with Synology saying they got it working, but they don't provide any instructions on how they did it.

I just got a Synology and it would be awesome if I could set it up this way.

 

Really appreciate any help or pointing me in the right direction.

Did you really search? The first thing that comes up when I google it is this:

https://anto.online/guides/backup-data-using-rclone-synology-nas-cron-job/

 

Once you've got rclone installed - and I assume you know your way around the terminal - you can follow any rclone guide and configure your mount through "rclone config". If there are specific steps you're stuck at, then we need more information to help you.

 

On 8/21/2022 at 7:34 PM, francrouge said:

Do you have any documentation on service accounts? I found some, but it's not up to date, also for Team Drives. Thanks.

Probably the same as what you found. If you follow the guide from DZMM and use the AutoRclone generator you should have a group of 100 service accounts that have access to your Team Drives. Then just put them in a folder, and while configuring your mounts remove all the client ID info and such and just point to the service account file. For example: "/mnt/user/appdata/rclone/service_accounts/sa_tdrive_plex.json". This way I have multiple mounts for the same Team Drive, but based on different service accounts.

 

This way, when you hit an API quota you can swap your mergerfs folder, for example from "/mnt/user/mount_rclone/Tdrive_Plex" to "/mnt/user/mount_rclone/Tdrive_Plex_Backup". The dockers won't notice and your API quota is reset again.
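The swap described here amounts to remounting the mergerfs folder with the backup rclone mount as its cloud upstream; a minimal sketch, assuming DZMM-style paths and typical mergerfs options, with both rclone remotes already mounted:

fusermount -uz /mnt/user/mount_mergerfs/Tdrive_Plex
mergerfs /mnt/user/local/Tdrive_Plex:/mnt/user/mount_rclone/Tdrive_Plex_Backup \
  /mnt/user/mount_mergerfs/Tdrive_Plex \
  -o rw,use_ino,allow_other,func.getattr=newest,category.action=all,category.create=ff,cache.files=partial,dropcacheonclose=true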

Link to comment
4 minutes ago, maxse said:

Yes, of course I searched - for a few days. I saw the backup guides but could not find anything on how to actually set it up for streaming and Radarr/Sonarr the way you guys did here.

Is it really the same script once I install rclone on Synology? Can I just copy/paste the scripts as long as I use the same share names? Because that would be amazing.

I know that Synology uses different ways to point to shares and the path syntax is different. So I'm worried that I may not be able to figure it out, and I wasn't sure if you guys would be able to help me on the Synology since it's not Unraid anymore.

All the rclone mount commands are the same - they are not system-specific. However, the paths and the use of mergerfs can differ.

 

I found this for mergerfs. https://github.com/trapexit/mergerfs/wiki/Installing-Mergerfs-on-a-Synology-NAS.

If you have that working you just need to get the right paths in the scripts.

 

But in the beginning, just use simple versions of the scripts. Run the commands and then see if it's working. DZMM's scripts are quite complex if you want to translate them to another system.
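Stripped right down, the scripts boil down to an rclone mount with a mergerfs mount layered on top; a rough sketch with Synology-style paths and the remote name as assumptions:

mkdir -p /volume1/local/gdrive_media_vfs /volume1/mount_rclone/gdrive_media_vfs /volume1/mount_mergerfs/gdrive_media_vfs
# mount the (crypt) remote
rclone mount gdrive_media_vfs: /volume1/mount_rclone/gdrive_media_vfs \
  --allow-other --dir-cache-time 5000h --vfs-cache-mode full --daemon
# merge the local folder (where new files land) with the cloud mount
mergerfs /volume1/local/gdrive_media_vfs:/volume1/mount_rclone/gdrive_media_vfs \
  /volume1/mount_mergerfs/gdrive_media_vfs -o rw,allow_other,category.create=ff

Once that works by hand, the same paths can be dropped into the full scripts.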

Link to comment
18 hours ago, Kaizac said:

All the rclone mount commands are the same - they are not system-specific. However, the paths and the use of mergerfs can differ.

 

I agree. The only "hard" bits on other systems are installing rclone and mergerfs. But once you've done that, the scripts should work on pretty much any platform if you change the paths and can set up a cron job.

 

E.g. I now use a seedbox for my nzbget and rutorrent downloads, and I then move completed files to Google Drive using my rclone scripts, with the *arrs running locally sending jobs to the seedbox and then managing the completed files on gdrive - i.e. I don't really have a "local" anymore, as no files exist locally, but the scripts can still handle this.
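The seedbox side of that workflow is essentially one rclone move per run; a minimal sketch, with the source path and remote name as assumptions (the flags mirror the ones visible in the upload logs earlier in the thread):

rclone move /home/user/downloads/completed gdrive_media_vfs: \
  --min-age 15m --order-by modtime,ascending \
  --delete-empty-src-dirs --drive-stop-on-upload-limit -v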

Edited by DZMM
Link to comment
5 hours ago, DZMM said:

I agree. The only "hard" bits on other systems are installing rclone and mergerfs. But once you've done that, the scripts should work on pretty much any platform if you change the paths and can set up a cron job.

 

E.g. I now use a seedbox for my nzbget and rutorrent downloads, and I then move completed files to Google Drive using my rclone scripts, with the *arrs running locally sending jobs to the seedbox and then managing the completed files on gdrive - i.e. I don't really have a "local" anymore, as no files exist locally, but the scripts can still handle this.

hi DZMM 

 

So, just to be sure: you transfer the files from your seedbox directly to Gdrive to encrypt them?

 

thx a lot

Link to comment
5 hours ago, francrouge said:

hi DZMM 

 

So, just to be sure: you transfer the files from your seedbox directly to Gdrive to encrypt them?

 

thx a lot

Yes - seedbox ---> gdrive, and the Unraid server organises, renames, deletes files etc. on gdrive - no use of local bandwidth or storage except to play files as normal. One day I might move Plex to a cloud server, but that's one for the future (or maybe sooner than expected if electricity prices keep going up!)

Edited by DZMM
Link to comment
52 minutes ago, DZMM said:

Yes - seedbox ---> gdrive, and the Unraid server organises, renames, deletes files etc. on gdrive - no use of local bandwidth or storage except to play files as normal. One day I might move Plex to a cloud server, but that's one for the future (or maybe sooner than expected if electricity prices keep going up!)

Yeah, we have noticed a jump in electricity prices here too over the last couple of years. I'm doing some serious monitoring of my setup with Grafana to see what I can do to reduce that overall.

Link to comment

Wow, my mind is still blown by all this, lol.

I think I'm just going to stick with Unraid; it's too confusing to learn the paths on Synology, and I don't want to spend all that time learning and then have no place to troubleshoot.

 

Quick question: will this work with an education account? I currently use it for backups with rclone, but I haven't seen anyone use it for streaming like this. Will this work, or do some more features need to be enabled on an enterprise Gdrive that I can't use on an edu account?

 

And lastly, if I name my folders the same way it's basically just a copy/paste of the scripts, correct?

 

Can someone please post a screenshot of the paths to set in the *arr apps and SAB? I remember having an issue years ago and followed SpaceInvader One's guide, but since the paths are going to be different now, I want to make sure the apps all know where to look...

 

*edit*

Also, can someone explain the BW limits and how the time schedule works? I don't understand it exactly. For example, if I don't want it time-based, but just want to upload at 20 MB/s until 750GB is reached, starting at say 2AM - how would I set the parameters?
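(For reference on the bwlimit question: the --bwlimit value in the upload script is a timetable of "HH:MM,limit" pairs, and "off" in a timetable means unlimited, not paused, so the easiest way to get "start at 2AM" is to schedule the upload script itself for 2AM and cap speed and volume with flags. A sketch with assumed paths and standard rclone flag names:

# constant 20 MiB/s whenever the job runs, stop once 750GB has gone up,
# and also stop early if Google reports the daily upload quota
rclone move /mnt/user/local/gdrive_media_vfs gdrive_media_vfs: \
  --bwlimit 20M --max-transfer 750G --drive-stop-on-upload-limit -v)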

Edited by maxse
Link to comment
