Kaizac

Everything posted by Kaizac

  1. You don't have pip installed on your server. Get it through nerdpack.
  2. @DZMM did you try --vfs-cache-poll-interval duration? Or the normal poll-interval? https://forum.rclone.org/t/cant-get-poll-interval-working-with-union-remote/13353
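     Both flags exist on current rclone, but they do different things: --poll-interval controls how often the backend asks Google Drive for changes, while --vfs-cache-poll-interval controls how often the VFS cache is swept for stale objects. A mount fragment showing where they sit (the remote name and paths are examples from this thread, and per that forum link, change polling may not propagate through a union remote):

```shell
rclone mount tdrive_crypt: /mnt/user/mount_rclone/Tdrive \
    --allow-other \
    --poll-interval 15s \
    --vfs-cache-mode writes \
    --vfs-cache-poll-interval 1m
```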
  3. More likely would be that they enforce the 5-user requirement to actually have unlimited. And after that they might raise prices. Whether either scenario is worth it is personal for each person. And I think they will give a grace period if things do change drastically. I'm using my drive both for my work-related storage and for personal use. Don't forget there are many universities and data-driven companies who store TBs of data each day. We're pretty much a drop in the bucket for Google. Same with mobile providers: I have an unlimited plan, extra expensive, but most months I don't even use 1 GB (especially now, being constantly at home). And then other days I rake in 30 GB per day because I'm streaming on holiday or working without wifi. I did start with cleaning up my media though. I was storing media I will never watch, but because it got downloaded by my automations it got in. It gives too much of that Netflix effect: scrolling indefinitely and never watching an actual movie or show.
  4. Thanks for the explanation, I've got my Matrix Synapse server running! The only problem I have is creating an admin account. When I use your code (or 0.0.0.0 adjusted to my Matrix docker's IP) I get a console full of errors. Could you elaborate on this process and what I should fill in for the questions? It's a lot of connection refused errors and now max retries errors. See below:

     Traceback (most recent call last):
       File "/usr/local/lib/python3.7/dist-packages/urllib3/connection.py", line 157, in _new_conn
         (self._dns_host, self.port), self.timeout, **extra_kw
       File "/usr/local/lib/python3.7/dist-packages/urllib3/util/connection.py", line 84, in create_connection
         raise err
       File "/usr/local/lib/python3.7/dist-packages/urllib3/util/connection.py", line 74, in create_connection
         sock.connect(sa)
     ConnectionRefusedError: [Errno 111] Connection refused

     During handling of the above exception, another exception occurred:

     Traceback (most recent call last):
       File "/usr/local/lib/python3.7/dist-packages/urllib3/connectionpool.py", line 672, in urlopen
         chunked=chunked,
       File "/usr/local/lib/python3.7/dist-packages/urllib3/connectionpool.py", line 387, in _make_request
         conn.request(method, url, **httplib_request_kw)
       File "/usr/lib/python3.7/http/client.py", line 1244, in request
         self._send_request(method, url, body, headers, encode_chunked)
       File "/usr/lib/python3.7/http/client.py", line 1290, in _send_request
         self.endheaders(body, encode_chunked=encode_chunked)
       File "/usr/lib/python3.7/http/client.py", line 1239, in endheaders
         self._send_output(message_body, encode_chunked=encode_chunked)
       File "/usr/lib/python3.7/http/client.py", line 1026, in _send_output
         self.send(msg)
       File "/usr/lib/python3.7/http/client.py", line 966, in send
         self.connect()
       File "/usr/local/lib/python3.7/dist-packages/urllib3/connection.py", line 184, in connect
         conn = self._new_conn()
       File "/usr/local/lib/python3.7/dist-packages/urllib3/connection.py", line 169, in _new_conn
         self, "Failed to establish a new connection: %s" % e
     urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x14bb337d2320>: Failed to establish a new connection: [Errno 111] Connection refused

     During handling of the above exception, another exception occurred:

     Traceback (most recent call last):
       File "/usr/local/lib/python3.7/dist-packages/requests/adapters.py", line 449, in send
         timeout=timeout
       File "/usr/local/lib/python3.7/dist-packages/urllib3/connectionpool.py", line 720, in urlopen
         method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]
       File "/usr/local/lib/python3.7/dist-packages/urllib3/util/retry.py", line 436, in increment
         raise MaxRetryError(_pool, url, error or ResponseError(cause))
     urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='0.0.0.0', port=8008): Max retries exceeded with url: /_matrix/client/r0/admin/register (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x14bb337d2320>: Failed to establish a new connection: [Errno 111] Connection refused'))

     During handling of the above exception, another exception occurred:

     Traceback (most recent call last):
       File "/usr/local/bin/register_new_matrix_user", line 22, in <module>
         main()
       File "/usr/local/lib/python3.7/dist-packages/synapse/_scripts/register_new_matrix_user.py", line 225, in main
         args.user, args.password, args.server_url, secret, admin, args.user_type
       File "/usr/local/lib/python3.7/dist-packages/synapse/_scripts/register_new_matrix_user.py", line 138, in register_new_user
         user, password, server_location, shared_secret, bool(admin), user_type
       File "/usr/local/lib/python3.7/dist-packages/synapse/_scripts/register_new_matrix_user.py", line 47, in request_registration
         r = requests.get(url, verify=False)
       File "/usr/local/lib/python3.7/dist-packages/requests/api.py", line 76, in get
         return request('get', url, params=params, **kwargs)
       File "/usr/local/lib/python3.7/dist-packages/requests/api.py", line 61, in request
         return session.request(method=method, url=url, **kwargs)
       File "/usr/local/lib/python3.7/dist-packages/requests/sessions.py", line 530, in request
         resp = self.send(prep, **send_kwargs)
       File "/usr/local/lib/python3.7/dist-packages/requests/sessions.py", line 643, in send
         r = adapter.send(request, **kwargs)
       File "/usr/local/lib/python3.7/dist-packages/requests/adapters.py", line 516, in send
         raise ConnectionError(e, request=request)
     requests.exceptions.ConnectionError: HTTPConnectionPool(host='0.0.0.0', port=8008): Max retries exceeded with url: /_matrix/client/r0/admin/register (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x14bb337d2320>: Failed to establish a new connection: [Errno 111] Connection refused'))
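     For what it's worth, "Connection refused" here means nothing was reachable at 0.0.0.0:8008 from where the script ran. A common way around this is to run Synapse's registration tool inside the container itself, against the port Synapse actually listens on. This is an illustrative command fragment, not from the guide above; the container name "matrix" and the config path /data/homeserver.yaml are assumptions to adjust:

```shell
docker exec -it matrix register_new_matrix_user \
    -u admin -p 'choose-a-password' -a \
    -c /data/homeserver.yaml http://localhost:8008
```

The -a flag makes the new user an admin, and -c points at the homeserver config so the tool can read the registration shared secret.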
  5. @xthursdayx thanks for this docker! What happens when I leave the DB IP on 127.0.0.1? It seems to work so far, but I didn't have a SQLite DB installed AFAIK. Is that created within the docker?
  6. Just thought I would share this little script. It can probably be integrated with DZMM's scripts, but I'm not using all of his scripts. When a mount drops, the mount script should automatically pick it up again, but when this is not possible the dockers will just continue to fill the merger/union folder, making the remount impossible (you get the error that the mount point is not empty). To make sure all dockers which are using the union stop, I made the following script. Just run it every minute as well. When the mount is back again, your mount script should start your dockers again. Just make sure you change the folder paths to your situation and put in your own dockers.

     #!/bin/bash
     if [[ -f "/mnt/user/mount_rclone/Tdrive/mountcheck" ]]; then
         echo "$(date "+%d.%m.%Y %T") INFO: Mount connected."
     else
         touch /mnt/user/appdata/other/rclone/mount_disconnected
         echo "$(date "+%d.%m.%Y %T") INFO: Mount disconnected, stopping dockers."
         docker stop plex nzbget
         rm /mnt/user/appdata/other/rclone/dockers_started
     fi
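     The reverse direction can be sketched the same way: a script that notices the mount has returned and brings the containers back up. The mountcheck path and the mount_disconnected/dockers_started flag files mirror the stop script above; the start_if_restored helper and the container names are illustrative assumptions, not part of DZMM's scripts (the docker start line is left commented so you uncomment it on the real server):

```shell
#!/bin/bash
# Companion sketch: restart the dockers once the mount has returned.
# Run it every minute too. Paths and container names are examples.

start_if_restored() {
    local mountcheck="$1" flag_dir="$2"
    # Only act when the mount is back AND we previously flagged a disconnect.
    if [[ -f "$mountcheck" && -f "$flag_dir/mount_disconnected" ]]; then
        echo "$(date "+%d.%m.%Y %T") INFO: Mount restored, starting dockers."
        # docker start plex nzbget   # uncomment on the real server
        rm "$flag_dir/mount_disconnected"
        touch "$flag_dir/dockers_started"
    fi
}

start_if_restored "/mnt/user/mount_rclone/Tdrive/mountcheck" "/mnt/user/appdata/other/rclone"
```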
  7. So it's working. At 12mbs it's not going to be fast, so you'll have to wait until it's finished.
  8. Go to that upload.log file; it should show what is happening. It's in appdata/other/rclone.
  9. I think you made a spelling error somewhere. In your earlier posts you wrote gdrive_vsf instead of vfs
  10. Yeah upload running is the checker file for uploads. Delete it and you should be able to upload
  11. Delete the checker files in the appdata other rclone folder. Something like upload running
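     Clearing those checker files can be done in one line; the directory and file names below are assumptions based on this thread (upload_running is mentioned above, mount_running is a guess at a sibling flag), so adjust them to your appdata layout:

```shell
#!/bin/bash
# Sketch: clear leftover checker files so the mount/upload scripts can run again.
# Directory and file names are assumptions from this thread; adjust to your setup.
RCLONE_DIR="${RCLONE_DIR:-/mnt/user/appdata/other/rclone}"
rm -f "$RCLONE_DIR/upload_running" "$RCLONE_DIR/mount_running"
echo "Checker files cleared from $RCLONE_DIR"
```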
  12. Ok, maybe you should phrase that differently then. I assumed correctly this time, but what I was reading was that I had to disable autostart of the Docker daemon on the Docker settings page. You mean the Docker overview page and the autostart for those specific dockers, not the daemon. Regarding the SA rotation for uploading: does it rotate automatically when the 750GB is maxed, or does it just move to the next SA when a new upload is started because of the timing? I.e. it's only suitable for continuous downloading/uploading and not for uploading a backlog at full gigabit speed?
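     If the rotation works by swapping which SA file the remote's service_account_file points at, the step can be sketched like this. The file layout (sa_tdrive1.json … sa_tdriveN.json plus an "active" copy), the counter file, and the rotate_sa helper are all illustrative assumptions, not DZMM's actual script:

```shell
#!/bin/bash
# Sketch of service-account rotation for uploads.
# Assumes SA files named sa_tdrive1.json .. sa_tdriveN.json in one folder,
# with rclone's config pointing at the "active" copy. Names are assumptions.

rotate_sa() {
    local sa_dir="$1" active="$2" max="$3"
    local counter_file="$sa_dir/counter"
    local count next
    count=$(cat "$counter_file" 2>/dev/null || echo 0)   # last SA used, 0 if none
    next=$(( count % max + 1 ))                          # wrap around after max
    cp "$sa_dir/sa_tdrive$next.json" "$active"           # swap in the next SA
    echo "$next" > "$counter_file"
}

# e.g. before each upload run:
# rotate_sa /mnt/user/appdata/other/rclone/service_accounts_tdrive \
#           /mnt/user/appdata/other/rclone/service_accounts_tdrive/sa_tdrive.json 100
```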
  13. @DZMM in your mounting script you have the following: "Remember to disable AUTOSTART in docker settings page". Are you talking about disabling autostart for the specific dockers or for the whole docker module?
  14. Because you probably had 5 projects. I had 28, so I got 2800 SAs hahaha. Anyway, I discovered it was a remote to my Gdrive (not team drive) that was giving the errors. Everything has been mounted fine now. I'll use my own mount script since I have 10 remotes, so using 10 scripts seems excessive. Maybe I can find a way to convert your script into a multiple-remote script.
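     On the multiple-remote idea: one low-tech option is to loop a single mount routine over the remote names instead of keeping 10 copies of the script. This is a sketch fragment, not DZMM's script; the remote names, mount root, and flags are examples to adapt:

```shell
#!/bin/bash
# Mount several rclone remotes from one script (names are examples).
for remote in tdrive_crypt gdrive_crypt; do
    mkdir -p "/mnt/user/mount_rclone/$remote"
    rclone mount "$remote:" "/mnt/user/mount_rclone/$remote" \
        --allow-other --daemon
done
```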
  15. @DZMM, for the AutoRclone part, did you let the script create a new project? And did you change anything in your Gsuite developer/admin console to have it work? I read this on the rclone page, but that seems to be too much work for 100 SAs:

     1. Create a service account for example.com
        To create a service account and obtain its credentials, go to the Google Developer Console. You must have a project - create one if you don't. Then go to "IAM & admin" -> "Service Accounts". Use the "Create Credentials" button. Fill in "Service account name" with something that identifies your client. "Role" can be empty. Tick "Furnish a new private key" - select "Key type JSON". Tick "Enable G Suite Domain-wide Delegation". This option makes "impersonation" possible, as documented here: Delegating domain-wide authority to the service account. These credentials are what rclone will use for authentication. If you ever need to remove access, press the "Delete service account key" button.
     2. Allowing API access to example.com Google Drive
        Go to example.com's admin console. Go into "Security" (or use the search bar). Select "Show more" and then "Advanced settings". Select "Manage API client access" in the "Authentication" section. In the "Client Name" field enter the service account's "Client ID" - this can be found in the Developer Console under "IAM & Admin" -> "Service Accounts", then "View Client ID" for the newly created service account. It is a ~21 character numerical string. In the next field, "One or More API Scopes", enter https://www.googleapis.com/auth/drive to grant access to Google Drive specifically.
  16. Yeah, the SAs are created, also the new project. The SAs are added to a group which is added as a member to the team drive. When going into Google's dev console I don't see an OAuth module though; not sure if it's needed. My rclone config looks like this:

     [tdrive]
     type = drive
     scope = drive
     service_account_file = /mnt/user/appdata/other/rclone/service_accounts_tdrive/sa_tdrive.json
     team_drive = XX
     server_side_across_configs = true

     [tdrive_crypt]
     type = crypt
     remote = tdrive:Archief
     filename_encryption = standard
     directory_name_encryption = true
     password = XX
     password2 = XX

     It really starts to annoy me that it's so complicated.
  17. I'm getting the following error when mounting my remotes:

     INFO : Google drive root 'Archief': Failed to get StartPageToken: Get "https://www.googleapis.com/drive/v3/changes/startPageToken?alt=json&prettyPrint=false&supportsAllDrives=true": oauth2: cannot fetch token: 401 Unauthorized
     Response: {
       "error": "deleted_client",
       "error_description": "The OAuth client was deleted."
     }

     Do you also get that? And is there an easy way to use your mount script for multiple remotes?
  18. Did you configure the path and file to the JSON through rclone config, or did you just add the line to the rclone config after setting it up? When I try it the rclone config way through SSH it says:

     Failed to configure team drive: config team drive failed to create oauth client: error opening service account credentials file: open sa_tdrive.json: no such file or directory
  19. Ok, so the remote you set up with one of the SAs you created, number 1 of 100 for example. And then for uploading you rotate between the 100 SAs in the service accounts folder? Am I understanding it correctly then? And if I want to have another remote to separate my Bazarr traffic, do I then create a new project or do I just use a different SA? I'm not sure at what level the API ban is registered.
  20. So how does rclone know to use the service accounts when streaming media, then?
  21. That doesn't answer my question unfortunately. In your readme you mention this: So it seems in your example you don't configure your client id and password. But then later on you mention you do need it.
  22. I've tried finding the final consensus in this topic, but it's becoming a bit too large for easy searching. I've created 100 service accounts now and added them to my team drives. How should I now set up my rclone remote? I should only need 2, right (1 drive and 1 crypt of that drive)? And should I set it up with its own client ID/secret when using SAs? According to your GitHub it seems like I just create a remote with rclone's own ID and secret, so no defining on my side.
  23. I have no idea how you manage to do all those 4 steps. Care to share some parts of those scripts/merger commands?