
Kaizac

Members
  • Content Count: 278
  • Joined
  • Days Won: 1

Kaizac last won the day on March 5 2019

Kaizac had the most liked content!

Community Reputation: 29 (Good)

About Kaizac

  • Rank: Advanced Member

  1. Maybe you can separate the two use cases? Get a Hetzner server for your Plex and a seedbox for your torrents?
  2. You are renaming them to gdrive_upload.json through the renaming step DZMM mentions. So if you want them to be called sa_gdrive.json, you have to define that in your rename script (see the commented sketch after this list).
     Dry run: n=1; for f in *.json; do echo mv "$f" "sa_gdrive_upload$((n++)).json"; done
     Mass rename: n=1; for f in *.json; do mv "$f" "sa_gdrive_upload$((n++)).json"; done
     Don't just copy and paste the code from the GitHub repos, but also try to understand what it is doing. Otherwise you have no idea where to troubleshoot and you end up breaking your setup.
  3. Did you read DZMM's GitHub? https://github.com/BinsonBuzz/unraid_rclone_mount
  4. Then just open the console and type "python3 -m ensurepip"
  5. You don't have pip installed on your server. Get it through nerdpack.
  6. @DZMM did you try --vfs-cache-poll-interval duration? Or the normal --poll-interval? https://forum.rclone.org/t/cant-get-poll-interval-working-with-union-remote/13353 (see the mount-flag sketch after this list)
  7. More likely would be that they enforce the 5-user requirement to actually get unlimited, and after that they might raise prices. Whether either scenario is worth it is a personal call, and I think they will give a grace period if things do drastically change. I'm using my drive both for work-related storage and for personal files. Don't forget there are many universities and data-driven companies who store TBs of data each day; we're pretty much a drop in the bucket for Google. Same with mobile providers: I have an unlimited plan, extra expensive, but most months I don't even use 1 GB (especially now, being constantly at home), and then other days I rake in 30 GB per day because I'm streaming on holiday or working without wifi. I did start with cleaning up my media though. I was storing media I will never watch, but because it got downloaded by my automations it got in. It gives too much of that Netflix effect: scrolling indefinitely and never watching an actual movie or show.
  8. Thanks for the explanation, I've got my Matrix Synapse server running! The only problem I have is creating an admin account. When I use your code (or 0.0.0.0 adjusted to my Matrix docker's IP) I get a console full of errors. Could you elaborate on this process and what I should fill in for the questions? It's a lot of connection refused errors and now max retries errors (see the registration sketch after this list). See below:

Traceback (most recent call last):
  File "/usr/local/lib/python3.7/dist-packages/urllib3/connection.py", line 157, in _new_conn
    (self._dns_host, self.port), self.timeout, **extra_kw
  File "/usr/local/lib/python3.7/dist-packages/urllib3/util/connection.py", line 84, in create_connection
    raise err
  File "/usr/local/lib/python3.7/dist-packages/urllib3/util/connection.py", line 74, in create_connection
    sock.connect(sa)
ConnectionRefusedError: [Errno 111] Connection refused

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.7/dist-packages/urllib3/connectionpool.py", line 672, in urlopen
    chunked=chunked,
  File "/usr/local/lib/python3.7/dist-packages/urllib3/connectionpool.py", line 387, in _make_request
    conn.request(method, url, **httplib_request_kw)
  File "/usr/lib/python3.7/http/client.py", line 1244, in request
    self._send_request(method, url, body, headers, encode_chunked)
  File "/usr/lib/python3.7/http/client.py", line 1290, in _send_request
    self.endheaders(body, encode_chunked=encode_chunked)
  File "/usr/lib/python3.7/http/client.py", line 1239, in endheaders
    self._send_output(message_body, encode_chunked=encode_chunked)
  File "/usr/lib/python3.7/http/client.py", line 1026, in _send_output
    self.send(msg)
  File "/usr/lib/python3.7/http/client.py", line 966, in send
    self.connect()
  File "/usr/local/lib/python3.7/dist-packages/urllib3/connection.py", line 184, in connect
    conn = self._new_conn()
  File "/usr/local/lib/python3.7/dist-packages/urllib3/connection.py", line 169, in _new_conn
    self, "Failed to establish a new connection: %s" % e
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x14bb337d2320>: Failed to establish a new connection: [Errno 111] Connection refused

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.7/dist-packages/requests/adapters.py", line 449, in send
    timeout=timeout
  File "/usr/local/lib/python3.7/dist-packages/urllib3/connectionpool.py", line 720, in urlopen
    method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]
  File "/usr/local/lib/python3.7/dist-packages/urllib3/util/retry.py", line 436, in increment
    raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='0.0.0.0', port=8008): Max retries exceeded with url: /_matrix/client/r0/admin/register (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x14bb337d2320>: Failed to establish a new connection: [Errno 111] Connection refused'))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/bin/register_new_matrix_user", line 22, in <module>
    main()
  File "/usr/local/lib/python3.7/dist-packages/synapse/_scripts/register_new_matrix_user.py", line 225, in main
    args.user, args.password, args.server_url, secret, admin, args.user_type
  File "/usr/local/lib/python3.7/dist-packages/synapse/_scripts/register_new_matrix_user.py", line 138, in register_new_user
    user, password, server_location, shared_secret, bool(admin), user_type
  File "/usr/local/lib/python3.7/dist-packages/synapse/_scripts/register_new_matrix_user.py", line 47, in request_registration
    r = requests.get(url, verify=False)
  File "/usr/local/lib/python3.7/dist-packages/requests/api.py", line 76, in get
    return request('get', url, params=params, **kwargs)
  File "/usr/local/lib/python3.7/dist-packages/requests/api.py", line 61, in request
    return session.request(method=method, url=url, **kwargs)
  File "/usr/local/lib/python3.7/dist-packages/requests/sessions.py", line 530, in request
    resp = self.send(prep, **send_kwargs)
  File "/usr/local/lib/python3.7/dist-packages/requests/sessions.py", line 643, in send
    r = adapter.send(request, **kwargs)
  File "/usr/local/lib/python3.7/dist-packages/requests/adapters.py", line 516, in send
    raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPConnectionPool(host='0.0.0.0', port=8008): Max retries exceeded with url: /_matrix/client/r0/admin/register (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x14bb337d2320>: Failed to establish a new connection: [Errno 111] Connection refused'))
  9. @xthursdayx thanks for this docker! What happens when I leave the DB IP on 127.0.0.1? It seems to work so far, but I didn't have an SQLite DB installed AFAIK. Is that created within the docker?
  10. Just thought I would share this little script. It can probably be integrated with DZMM's scripts, but I'm not using all of them. When a mount drops, the mount script should automatically pick it up again, but when that isn't possible the dockers will just keep filling the merger/union folder, making the remount impossible (you get the error that the mount is not empty). To make sure all dockers that use the union are stopped, I made the following script; just run it every minute as well. When the mount is back again, your mount script should start your dockers again (see the restart sketch after this list). Just make sure you change the folder paths to your situation and put in your own dockers.

#!/bin/bash
if [[ -f "/mnt/user/mount_rclone/Tdrive/mountcheck" ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: Mount connected."
else
    touch /mnt/user/appdata/other/rclone/mount_disconnected
    echo "$(date "+%d.%m.%Y %T") INFO: Mount disconnected, stopping dockers."
    docker stop plex nzbget
    rm /mnt/user/appdata/other/rclone/dockers_started
fi
  11. So it's working. At 12mbs it's not going to be fast, so you'll have to wait until it's finished.
  12. Go to that upload.log file; it should show what is happening. It's in appdata/other/rclone.
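
Sketch for post 2: a commented version of the dry-run/rename loop quoted there, written as a small standalone script. It assumes all the service-account .json files sit in the directory you run it from; the sa_gdrive_upload name is just the example used in that post.

#!/bin/bash
# Rename every *.json service-account key in the current directory to
# sa_gdrive_upload1.json, sa_gdrive_upload2.json, ... in shell glob order.
# Run with DRY_RUN=1 first to see what would happen before touching anything.
n=1
for f in *.json; do
    target="sa_gdrive_upload$((n++)).json"
    if [[ "${DRY_RUN:-0}" == "1" ]]; then
        echo "would rename: $f -> $target"
    else
        mv -- "$f" "$target"
    fi
done

Run it once as DRY_RUN=1 ./rename_sa.sh to check the mapping, then again without DRY_RUN to actually rename.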
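Sketch for post 6: an example rclone mount command showing where the two flags mentioned there would go. The remote name (tdrive_crypt:) and mount point are placeholders, not taken from anyone's actual config. Per the rclone docs, --poll-interval controls how often the remote is checked for changes, while --vfs-cache-poll-interval controls how often the local VFS cache is swept for stale objects.

rclone mount \
    --allow-other \
    --dir-cache-time 720h \
    --poll-interval 15s \
    --vfs-cache-mode writes \
    --vfs-cache-poll-interval 1m \
    tdrive_crypt: /mnt/user/mount_rclone/Tdrive &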
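Sketch for post 8: the traceback there comes from Synapse's register_new_matrix_user tool failing to reach anything on 0.0.0.0:8008. A common way to run it is from a shell inside the Synapse container itself, so that localhost:8008 is the homeserver's own listener; the container name, config path, and credentials below are placeholders, not values from that thread.

# open a shell inside the Synapse container
docker exec -it matrix-synapse bash

# then register an admin account against the local listener
register_new_matrix_user \
    -u admin \
    -p 'choose-a-strong-password' \
    -a \
    -c /data/homeserver.yaml \
    http://localhost:8008

The -c flag points at the homeserver.yaml that contains registration_shared_secret, and -a marks the new user as an admin.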
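Sketch for post 10: that check script stops the dockers and drops a mount_disconnected marker, and relies on the mount script to start everything again once the mount is back. Below is a minimal sketch of what that restart step could look like, reusing the same marker files and example dockers; DZMM's actual mount script handles this its own way, so treat the paths and container names as placeholders. Schedule both scripts to run every minute (a * * * * * custom cron schedule in the User Scripts plugin).

#!/bin/bash
# Run from (or merge into) the mount script: if the mount is healthy again
# after a recorded disconnect, start the dockers once and clear the marker.
if [[ -f "/mnt/user/mount_rclone/Tdrive/mountcheck" ]] && \
   [[ -f "/mnt/user/appdata/other/rclone/mount_disconnected" ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: Mount restored, starting dockers."
    docker start plex nzbget
    touch /mnt/user/appdata/other/rclone/dockers_started
    rm /mnt/user/appdata/other/rclone/mount_disconnected
fi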