Posts posted by takeover
-
23 hours ago, remedy said:
hmm, still not working. It prints everything after the transfer is complete, but until then it still sits at the rclone debug line. I removed "--stats 9999m" and tried "-vvv" and "-P", same result.
Is there no way to get it to output the upload progress periodically during the actual upload? I tried "-v" with "--stats 5m" instead and that didn't work either.
Hmm, it works for me, but I am running the scripts on a VPS rather than Unraid. I know the User Scripts plugin recently got an update, but I'm not sure if that is what's stopping the scripts from displaying live progress.
-
On 4/1/2020 at 10:59 AM, remedy said:
the upload script doesn't show any progress output for me until the transfer is complete, it just sits at "====== RCLONE DEBUG ======"
any ideas? i'd like to be able to see the live progress.
The reason for this is the Discord notifications. If you don't care about Discord notifications, you can remove "--stats 9999m" and change "-vP" to "-vvv" or "-P".
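For example, stripped down to the relevant parts, an upload command with live progress might look like this (the remote name gdrive_media_vfs: is an assumption based on the guide's naming; adjust to yours):
# Print a live progress display, updating every 30 seconds, instead of
# suppressing stats for the Discord notification log:
rclone move /mnt/user/rclone_upload/google_vfs gdrive_media_vfs: -P --stats 30s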
-
4 minutes ago, Tuftuf said:
Thanks, I've seen that now, but I'm still getting stuck at almost the first step.
Tried a working client id/key to test and created a new one.
Completed the remote auth and provided response.
I've selected the correct team drive once it was listed.
But verifying the mount fails.
root@Firefly:~# rclone lsd tdrive
2020/03/06 22:14:06 ERROR : : error listing: directory not found
2020/03/06 22:14:06 Failed to lsd with 2 errors: last error was: directory not found
Did you forget the colon?
rclone lsd tdrive:
-
If you want to add the root folder ID manually, you can find it by opening the folder in Google Drive in your web browser; the folder ID is the string at the end of the URL. Or you can skip it, and when rclone asks if it is a Team Drive, select yes and it will let you pick your Team Drive from a list. Either way, the result is the same.
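For illustration, the resulting entry in your rclone config might look something like this (a sketch; the remote name, IDs, and secrets are placeholders):
# ~/.config/rclone/rclone.conf
[tdrive]
type = drive
client_id = YOUR_CLIENT_ID
client_secret = YOUR_CLIENT_SECRET
scope = drive
# Set automatically if you pick a Team Drive during "rclone config":
team_drive = YOUR_TEAM_DRIVE_ID
# Only needed if you point the remote at a specific folder instead:
# root_folder_id = FOLDER_ID_FROM_THE_URL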
-
-
4 hours ago, Roken said:
I have a 250GB SSD cache drive that is capable of saturating my bandwidth when downloading.
I'm currently using the mount location of /mnt/user/... for NZBGet, Sonarr etc, but because /mnt/user/ is located on my spinners, downloads cap at around 16 MB/s (I have gigabit). Is it advisable to switch from /mnt/user to /mnt/cache on a 250GB SSD to speed up downloading?
I have never had to use /mnt/cache to max out my download speed. I have set my downloads share to use the SSD cache ('Use cache disk: Yes' in the share settings) and have also run without cache, and my download speed didn't change either way.
These are my mappings:
/user > /mnt/user
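As an illustration of that mapping, a container would be started with something like this (a minimal sketch assuming the linuxserver.io NZBGet image; name and port are its defaults):
# Map the whole /mnt/user share tree to /user inside the container:
docker run -d --name=nzbget -p 6789:6789 -v /mnt/user:/user linuxserver/nzbget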
-
42 minutes ago, DZMM said:
Nice way to get a 1G connection. Which provider did you go with?
Got the deal on Black Friday with ChicagoVPS. I am also using a Google Cloud VPS, which is free for the first year thanks to the $300 trial credit.
-
So I got the idea from someone here to use a VPS... and I managed to successfully set this up and it is working great! I'm using the scripts to upload to my Team Drive, freeing up my home bandwidth!
The VPS I use is 1 core, 1GB RAM, 50GB storage, a 1Gbit connection, and 15TB/month of traffic for $35 a year. The setup is to just install Docker and keep the stack minimal with letsencrypt, NZBGet, and Sonarr (a sketch follows after the list below).
Pros
- the scripts work great with a few tweaks to get the best performance
- frees up your bandwidth, or helps you upload even more
Cons
- requires basic knowledge of Linux, the terminal, and Docker
- your provider's restrictions
- storage space for the initial download (you can't download anything bigger than your drive supports)
- the extra cost of this setup
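A minimal sketch of that stack using the linuxserver.io images (volume paths are assumptions, ports are the images' defaults; letsencrypt is omitted since it needs domain-specific configuration):
# NZBGet for downloads:
docker run -d --name=nzbget -p 6789:6789 \
  -v /opt/downloads:/downloads linuxserver/nzbget
# Sonarr, sharing the downloads folder:
docker run -d --name=sonarr -p 8989:8989 \
  -v /opt/tv:/tv -v /opt/downloads:/downloads linuxserver/sonarr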
-
Found this script for getting around the limit, if anyone is interested: https://github.com/xyou365/AutoRclone/
-
That's fantastic news! I look forward to testing them!
-
1 hour ago, Spladge said:
Perhaps mention what those variables are if they need to be used / changed?
There is also an rclone docker from hotio available that I am meaning to try. May make scripting easier.
I set out what the variables do at the top of each script. As for the hotio docker, it serves a different purpose than the scripts do.
-
So I modified the scripts for myself and a few of my friends, but I wanted to see if anyone can spot anything wrong with them or ways to improve them. I also want to say thanks a lot for these scripts; they have been so useful. Also, I am a novice when it comes to coding, so keep that in mind. Thanks!
-
I want to say I appreciate these awesome scripts! They have been working flawlessly for me so far.
Now for my question: can I restructure the directories? I know to change the corresponding directory path in each script. Would this cause any problems?
mkdir -p /mnt/user/appdata/other/rclone
mkdir -p /mnt/user/mount_rclone/google_vfs
mkdir -p /mnt/user/mount_unionfs/google_vfs
mkdir -p /mnt/user/rclone_upload/google_vfs
CHANGE TO
mkdir -p /mnt/user/appdata/other/rclone
mkdir -p /mnt/user/plexdrive/media_rclone/media
mkdir -p /mnt/user/plexdrive/media_unionfs/media
mkdir -p /mnt/user/plexdrive/media_upload/media
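For what it's worth, one way to apply that rename across the scripts in bulk (a sketch assuming they sit in one folder as .sh files; back them up first):
# Replace each old path with its new counterpart in every script:
sed -i 's|/mnt/user/mount_rclone/google_vfs|/mnt/user/plexdrive/media_rclone/media|g' *.sh
sed -i 's|/mnt/user/mount_unionfs/google_vfs|/mnt/user/plexdrive/media_unionfs/media|g' *.sh
sed -i 's|/mnt/user/rclone_upload/google_vfs|/mnt/user/plexdrive/media_upload/media|g' *.sh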
-
So you just need to do the next step, which is to create an encrypted remote. I would also recommend setting up Service Accounts if you plan on exceeding 750GB/day. Otherwise, here is an example of what your rclone config should look like if you want an encrypted Plex folder:
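A sketch of such a config, assuming a Google Drive remote wrapped by a crypt remote (all names, IDs, and values are placeholders; rclone stores the passwords in obscured form, not plain text):
[gdrive]
type = drive
client_id = YOUR_CLIENT_ID
client_secret = YOUR_CLIENT_SECRET
scope = drive
team_drive = YOUR_TEAM_DRIVE_ID

# Encrypted view of a folder on the drive remote; point your mount here:
[gdrive_media_vfs]
type = crypt
remote = gdrive:crypt
filename_encryption = standard
directory_name_encryption = true
password = YOUR_OBSCURED_PASSWORD
password2 = YOUR_OBSCURED_SALT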