
Kaizac

Everything posted by Kaizac

  1. You're erroring on this part:

         ####### check if rclone installed ##########
         echo "$(date "+%d.%m.%Y %T") INFO: Checking if rclone installed successfully."
         if [[ -f "$RcloneMountLocation/mountcheck" ]]; then
             echo "$(date "+%d.%m.%Y %T") INFO: rclone installed successfully - proceeding with upload."
         else
             echo "$(date "+%d.%m.%Y %T") INFO: rclone not installed - will try again later."
             rm /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/upload_running$JobName
             exit
         fi

     So it can't find the $RcloneMountLocation/mountcheck file. RcloneMountLocation is the same as your RcloneShare, so I would start tracing back from there: check whether you can find that file and whether all the $ variables in this script are filled in correctly.
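     A minimal sketch of that trace, pasteable into a console (the RcloneMountLocation value below is only an example - substitute your own):

```shell
#!/bin/bash
# Check whether the mountcheck file the upload script looks for actually exists.
# NOTE: this path is an example - use your own RcloneMountLocation/RcloneShare value.
RcloneMountLocation="/mnt/user/mount_rclone/gdrive"

if [[ -f "$RcloneMountLocation/mountcheck" ]]; then
    status="mounted"
    echo "mountcheck found - the mount is up, so the upload script should proceed"
else
    status="missing"
    echo "mountcheck missing - verify RcloneShare/RcloneMountLocation and that the mount script created it"
fi
```

     If it prints "mountcheck missing", the problem is in the mount, not the upload script.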
  2. Maybe you can separate the two use cases? Get a Hetzner server for your Plex and a seedbox for your torrents?
  3. You are renaming them to gdrive_upload.json through the renaming DZMM mentions. So if you want them to be called sa_gdrive.json, you have to define that in your rename script.

     Dry run:

         n=1; for f in *.json; do echo mv "$f" "sa_gdrive_upload$((n++)).json"; done

     Mass rename:

         n=1; for f in *.json; do mv "$f" "sa_gdrive_upload$((n++)).json"; done

     Don't just copy and paste the code from GitHub, but also try to understand what it is doing. Otherwise you have no idea where to troubleshoot and end up breaking your setup.
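     To see what the loop actually does, you can try it in a throwaway directory first (the filenames below are just placeholders):

```shell
# Demo of the mass-rename loop in a temp directory with dummy files.
dir=$(mktemp -d)
cd "$dir" || exit 1
touch a.json b.json c.json

# The glob is expanded once, before the loop starts, so the freshly
# renamed sa_gdrive_upload*.json files are not picked up again.
n=1
for f in *.json; do
    mv "$f" "sa_gdrive_upload$((n++)).json"
done

ls "$dir"
```

     Run the echo (dry-run) version first on your real service account files; only switch to the mv version once the printed commands look right.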
  4. Did you read the github of DZMM? https://github.com/BinsonBuzz/unraid_rclone_mount
  5. Then just open the console and bootstrap pip with "python3 -m ensurepip".
  6. You don't have pip installed on your server. Get it through nerdpack.
  7. @DZMM did you try --vfs-cache-poll-interval duration? Or the normal poll-interval? https://forum.rclone.org/t/cant-get-poll-interval-working-with-union-remote/13353
  8. More likely would be that they enforce the 5-user requirement to actually have unlimited. And after that they might raise prices. Whether either scenario is worth it is personal for each person. And I think they will give a grace period if things do drastically change. I'm using my drive both for my work-related storage and personally. Don't forget there are many universities and data-driven companies who store TBs of data each day. We're pretty much a drop in the bucket for Google. Same with mobile providers: I have an unlimited plan, extra expensive, but most months I don't even use 1 GB (especially now, being constantly at home). And then other days I rake in 30 GB per day because I'm streaming on holiday or working without wifi. I did start with cleaning up my media, though. I was storing media I will never watch, but because it got downloaded by my automations it got in. It gives too much of that Netflix effect: scrolling indefinitely and never watching an actual movie or show.
  9. Thanks for the explanation, I've got my Matrix Synapse server running! The only problem I have is creating an admin account. When I use your code (or 0.0.0.0 adjusted to my Matrix docker's IP) I get a console full of errors. Could you elaborate on this process and what I should fill in for the questions? It's a lot of connection refused errors and now max retries errors. See below:

         Traceback (most recent call last):
           File "/usr/local/lib/python3.7/dist-packages/urllib3/connection.py", line 157, in _new_conn
             (self._dns_host, self.port), self.timeout, **extra_kw
           File "/usr/local/lib/python3.7/dist-packages/urllib3/util/connection.py", line 84, in create_connection
             raise err
           File "/usr/local/lib/python3.7/dist-packages/urllib3/util/connection.py", line 74, in create_connection
             sock.connect(sa)
         ConnectionRefusedError: [Errno 111] Connection refused

         During handling of the above exception, another exception occurred:

         Traceback (most recent call last):
           File "/usr/local/lib/python3.7/dist-packages/urllib3/connectionpool.py", line 672, in urlopen
             chunked=chunked,
           File "/usr/local/lib/python3.7/dist-packages/urllib3/connectionpool.py", line 387, in _make_request
             conn.request(method, url, **httplib_request_kw)
           File "/usr/lib/python3.7/http/client.py", line 1244, in request
             self._send_request(method, url, body, headers, encode_chunked)
           File "/usr/lib/python3.7/http/client.py", line 1290, in _send_request
             self.endheaders(body, encode_chunked=encode_chunked)
           File "/usr/lib/python3.7/http/client.py", line 1239, in endheaders
             self._send_output(message_body, encode_chunked=encode_chunked)
           File "/usr/lib/python3.7/http/client.py", line 1026, in _send_output
             self.send(msg)
           File "/usr/lib/python3.7/http/client.py", line 966, in send
             self.connect()
           File "/usr/local/lib/python3.7/dist-packages/urllib3/connection.py", line 184, in connect
             conn = self._new_conn()
           File "/usr/local/lib/python3.7/dist-packages/urllib3/connection.py", line 169, in _new_conn
             self, "Failed to establish a new connection: %s" % e
         urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x14bb337d2320>: Failed to establish a new connection: [Errno 111] Connection refused

         During handling of the above exception, another exception occurred:

         Traceback (most recent call last):
           File "/usr/local/lib/python3.7/dist-packages/requests/adapters.py", line 449, in send
             timeout=timeout
           File "/usr/local/lib/python3.7/dist-packages/urllib3/connectionpool.py", line 720, in urlopen
             method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]
           File "/usr/local/lib/python3.7/dist-packages/urllib3/util/retry.py", line 436, in increment
             raise MaxRetryError(_pool, url, error or ResponseError(cause))
         urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='0.0.0.0', port=8008): Max retries exceeded with url: /_matrix/client/r0/admin/register (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x14bb337d2320>: Failed to establish a new connection: [Errno 111] Connection refused'))

         During handling of the above exception, another exception occurred:

         Traceback (most recent call last):
           File "/usr/local/bin/register_new_matrix_user", line 22, in <module>
             main()
           File "/usr/local/lib/python3.7/dist-packages/synapse/_scripts/register_new_matrix_user.py", line 225, in main
             args.user, args.password, args.server_url, secret, admin, args.user_type
           File "/usr/local/lib/python3.7/dist-packages/synapse/_scripts/register_new_matrix_user.py", line 138, in register_new_user
             user, password, server_location, shared_secret, bool(admin), user_type
           File "/usr/local/lib/python3.7/dist-packages/synapse/_scripts/register_new_matrix_user.py", line 47, in request_registration
             r = requests.get(url, verify=False)
           File "/usr/local/lib/python3.7/dist-packages/requests/api.py", line 76, in get
             return request('get', url, params=params, **kwargs)
           File "/usr/local/lib/python3.7/dist-packages/requests/api.py", line 61, in request
             return session.request(method=method, url=url, **kwargs)
           File "/usr/local/lib/python3.7/dist-packages/requests/sessions.py", line 530, in request
             resp = self.send(prep, **send_kwargs)
           File "/usr/local/lib/python3.7/dist-packages/requests/sessions.py", line 643, in send
             r = adapter.send(request, **kwargs)
           File "/usr/local/lib/python3.7/dist-packages/requests/adapters.py", line 516, in send
             raise ConnectionError(e, request=request)
         requests.exceptions.ConnectionError: HTTPConnectionPool(host='0.0.0.0', port=8008): Max retries exceeded with url: /_matrix/client/r0/admin/register (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x14bb337d2320>: Failed to establish a new connection: [Errno 111] Connection refused'))
  10. @xthursdayx thanks for this docker! What happens when I leave the DB IP on 127.0.0.1? It seems to work so far, but I didn't have a SQLite DB installed AFAIK. Is that created within the docker?
  11. Just thought I would share this little script. It can probably be integrated with DZMM's scripts, but I'm not using all his scripts. When a mount drops, the script should automatically pick it up, but when this is not possible the dockers will just continue to fill the merger/union folder, making the remount impossible (you get the error that the mount is not empty). To make sure all dockers stop which are using the union, I made the following script. Just run it every minute as well. When the mount is back again, it should start your dockers again from your mount script. Just make sure you change the folder paths to your situation and put in your dockers.

         #!/bin/bash
         if [[ -f "/mnt/user/mount_rclone/Tdrive/mountcheck" ]]; then
             echo "$(date "+%d.%m.%Y %T") INFO: Mount connected."
         else
             touch /mnt/user/appdata/other/rclone/mount_disconnected
             echo "$(date "+%d.%m.%Y %T") INFO: Mount disconnected, stopping dockers."
             docker stop plex nzbget
             rm /mnt/user/appdata/other/rclone/dockers_started
         fi
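     The restart side in the mount script could mirror it like this. A sketch only: the paths and container names are the same examples as above, and the docker call is guarded so the script is a no-op on a machine without docker:

```shell
#!/bin/bash
# Sketch: restart dockers once the mount is back.
# NOTE: paths and container names are examples - adjust to your setup.
MountCheck="/mnt/user/mount_rclone/Tdrive/mountcheck"
Marker="/mnt/user/appdata/other/rclone/mount_disconnected"

if [[ -f "$MountCheck" && -f "$Marker" ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: Mount is back, starting dockers."
    command -v docker >/dev/null && docker start plex nzbget
    rm "$Marker"
    result="restarted"
else
    echo "$(date "+%d.%m.%Y %T") INFO: Nothing to do."
    result="noop"
fi
```

     The mount_disconnected marker file is what ties the two scripts together: the stop script creates it, the restart side removes it.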
  12. So it's working. At 12mbs it's not going to be fast, so you'll have to wait until it's finished.
  13. Go to that upload.log file; it should show what is happening. It's in appdata/other/rclone.
  14. I think you made a spelling error somewhere. In your earlier posts you wrote gdrive_vsf instead of vfs
  15. Yeah, upload_running is the checker file for uploads. Delete it and you should be able to upload.
  16. Delete the checker files in the appdata/other/rclone folder. Something like upload_running.
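     A quick demo of the idea, using a throwaway folder standing in for appdata/other/rclone (the filenames are examples):

```shell
# Create a fake checker folder with a leftover upload_running file.
CheckerDir=$(mktemp -d)
touch "$CheckerDir/upload_running" "$CheckerDir/mount_running"

# Remove only the upload checker file(s) so the next upload run can start;
# the mount checker file is left alone.
find "$CheckerDir" -name 'upload_running*' -delete

ls "$CheckerDir"
```

     On a real setup you would point CheckerDir at your own appdata/other/rclone path and check what's in it with ls before deleting anything.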
  17. OK, maybe you should write that down differently then. I assumed correctly, but what I was reading was that I had to disable the autostart of the docker daemon on the Docker settings page. You mean the docker overview page and the autostart for those specific dockers, not the daemon. Regarding the SA rotation for uploading: does it rotate automatically when 750 GB is maxed, or does it just move up to the next SA when a new upload is started because of timing? I.e. is it only suitable for continuous downloading/uploading and not for uploading a backlog at full gigabit speed?
  18. @DZMM in your mounting script you have the following: "Remember to disable AUTOSTART in docker settings page". Are you talking about disabling autostart for the specific dockers or for the whole docker module?
  19. Because you had 5 projects, probably. I had 28, so I got 2800 SAs hahaha. Anyway, I discovered it was a remote to my Gdrive (not team drive) that was giving the errors. Everything has been mounted fine now. I will use my own mount script since I have 10 remotes, so using 10 scripts seems excessive. Maybe I can find a way to convert your script into a multiple-remote script.
  20. @DZMM, for the AutoRclone part, did you let the script create a new project? And did you change anything in your Gsuite developer/admin console to make it work? I read this on the rclone page, but that seems to be too much work for 100 SAs:

     1. Create a service account for example.com
     To create a service account and obtain its credentials, go to the Google Developer Console. You must have a project - create one if you don't. Then go to "IAM & admin" -> "Service Accounts". Use the "Create Credentials" button. Fill in "Service account name" with something that identifies your client. "Role" can be empty. Tick "Furnish a new private key" - select "Key type JSON". Tick "Enable G Suite Domain-wide Delegation". This option makes "impersonation" possible, as documented here: Delegating domain-wide authority to the service account. These credentials are what rclone will use for authentication. If you ever need to remove access, press the "Delete service account key" button.

     2. Allowing API access to example.com Google Drive
     Go to example.com's admin console. Go into "Security" (or use the search bar). Select "Show more" and then "Advanced settings". Select "Manage API client access" in the "Authentication" section. In the "Client Name" field enter the service account's "Client ID" - this can be found in the Developer Console under "IAM & Admin" -> "Service Accounts", then "View Client ID" for the newly created service account. It is a ~21 character numerical string. In the next field, "One or More API Scopes", enter https://www.googleapis.com/auth/drive to grant access to Google Drive specifically.
  21. Yeah, SAs are created, also the new project. The SAs are added to a group which is added as a member to the team drive. When going into the Google dev console I don't see an OAuth module though; not sure if it's needed. My rclone config looks like this:

         [tdrive]
         type = drive
         scope = drive
         service_account_file = /mnt/user/appdata/other/rclone/service_accounts_tdrive/sa_tdrive.json
         team_drive = XX
         server_side_across_configs = true

         [tdrive_crypt]
         type = crypt
         remote = tdrive:Archief
         filename_encryption = standard
         directory_name_encryption = true
         password = XX
         password2 = XX

     It really starts to annoy me that it's so complicated.