Kaizac

Posts posted by Kaizac

  1. On 4/1/2020 at 1:21 PM, PSYCHOPATHiO said:

It's not so complicated to get it up and running; you just need to know where to start.

     

First, you need to edit homeserver.yaml, located in /appdata/matrix.

     

    Locate

    
  - port: 8008
    tls: false
    type: http
    x_forwarded: true
    bind_addresses: ['::1', '127.0.0.1']

Change the bind address to your Docker IP address or to 0.0.0.0, otherwise you won't be able to connect.
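For example, to listen on all interfaces (a minimal sketch of the same listener block):

  - port: 8008
    tls: false
    type: http
    x_forwarded: true
    bind_addresses: ['0.0.0.0']  # listen on all interfaces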

     

Then start the Docker container, head to its console, and run the following:

    
    cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 32 | head -n 1

Copy the result, then go back to homeserver.yaml and locate these two lines:

    
enable_registration: false  # set to true if you want users to register themselves
registration_shared_secret: "paste the code here"

Apparently the Docker image already generates a registration shared secret, but I advise you to use your own.

     

You can add your own reCAPTCHA and turn_uris (TURN server) settings to homeserver.yaml.

Just restart the Matrix container and point your reverse proxy at its IP and port. You can test it at https://riot.im/app
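For the reverse proxy part, a minimal nginx sketch could look like this (hostname and upstream IP/port are placeholders for your own setup; SSL certificate directives omitted):

server {
    listen 443 ssl;
    server_name matrix.example.com;
    # ssl_certificate / ssl_certificate_key lines go here

    location /_matrix {
        # forward to the Synapse container; pairs with x_forwarded: true above
        proxy_pass http://192.168.1.10:8008;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}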

     

    To register an admin account:

    
    register_new_matrix_user -c /data/homeserver.yaml http://0.0.0.0:8008

This can be HTTPS on port 8448 if you have the certs installed.

     

    There is one more step that involves creating an SRV record in your DNS if you want to connect your server to the federation.

     

_matrix._tcp.matrix.example.com. The weight and priority are your choice; as for the port, if you're behind a reverse proxy you can go with 443.
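An illustrative record (values are examples; 10 is the priority, 5 the weight, and the target is whatever host your reverse proxy answers on):

_matrix._tcp.matrix.example.com. 3600 IN SRV 10 5 443 matrix.example.com.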

     

Once the server is up and running, you can head back to homeserver.yaml, read through the settings, and adjust the server to what you need.

     

Thanks for the explanation, I've got my Matrix Synapse server running! The only problem I have is creating an admin account. When I use your command (or with 0.0.0.0 replaced by my Matrix docker's IP), I get a console full of errors. Could you elaborate on this process and what I should fill in for the questions? It's a lot of connection refused errors and now max retries errors. See below:

     

    Traceback (most recent call last):
      File "/usr/local/lib/python3.7/dist-packages/urllib3/connection.py", line 157, in _new_conn
        (self._dns_host, self.port), self.timeout, **extra_kw
      File "/usr/local/lib/python3.7/dist-packages/urllib3/util/connection.py", line 84, in create_connection
        raise err
      File "/usr/local/lib/python3.7/dist-packages/urllib3/util/connection.py", line 74, in create_connection
        sock.connect(sa)
    ConnectionRefusedError: [Errno 111] Connection refused
    
    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "/usr/local/lib/python3.7/dist-packages/urllib3/connectionpool.py", line 672, in urlopen
        chunked=chunked,
      File "/usr/local/lib/python3.7/dist-packages/urllib3/connectionpool.py", line 387, in _make_request
        conn.request(method, url, **httplib_request_kw)
      File "/usr/lib/python3.7/http/client.py", line 1244, in request
        self._send_request(method, url, body, headers, encode_chunked)
      File "/usr/lib/python3.7/http/client.py", line 1290, in _send_request
        self.endheaders(body, encode_chunked=encode_chunked)
      File "/usr/lib/python3.7/http/client.py", line 1239, in endheaders
        self._send_output(message_body, encode_chunked=encode_chunked)
      File "/usr/lib/python3.7/http/client.py", line 1026, in _send_output
        self.send(msg)
      File "/usr/lib/python3.7/http/client.py", line 966, in send
        self.connect()
      File "/usr/local/lib/python3.7/dist-packages/urllib3/connection.py", line 184, in connect
        conn = self._new_conn()
      File "/usr/local/lib/python3.7/dist-packages/urllib3/connection.py", line 169, in _new_conn
        self, "Failed to establish a new connection: %s" % e
    urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x14bb337d2320>: Failed to establish a new connection: [Errno 111] Connection refused
    
    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "/usr/local/lib/python3.7/dist-packages/requests/adapters.py", line 449, in send
        timeout=timeout
      File "/usr/local/lib/python3.7/dist-packages/urllib3/connectionpool.py", line 720, in urlopen
        method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]
      File "/usr/local/lib/python3.7/dist-packages/urllib3/util/retry.py", line 436, in increment
        raise MaxRetryError(_pool, url, error or ResponseError(cause))
    urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='0.0.0.0', port=8008): Max retries exceeded with url: /_matrix/client/r0/admin/register (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x14bb337d2320>: Failed to establish a new connection: [Errno 111] Connection refused'))
    
    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "/usr/local/bin/register_new_matrix_user", line 22, in <module>
        main()
      File "/usr/local/lib/python3.7/dist-packages/synapse/_scripts/register_new_matrix_user.py", line 225, in main
        args.user, args.password, args.server_url, secret, admin, args.user_type
      File "/usr/local/lib/python3.7/dist-packages/synapse/_scripts/register_new_matrix_user.py", line 138, in register_new_user
        user, password, server_location, shared_secret, bool(admin), user_type
      File "/usr/local/lib/python3.7/dist-packages/synapse/_scripts/register_new_matrix_user.py", line 47, in request_registration
        r = requests.get(url, verify=False)
      File "/usr/local/lib/python3.7/dist-packages/requests/api.py", line 76, in get
        return request('get', url, params=params, **kwargs)
      File "/usr/local/lib/python3.7/dist-packages/requests/api.py", line 61, in request
        return session.request(method=method, url=url, **kwargs)
      File "/usr/local/lib/python3.7/dist-packages/requests/sessions.py", line 530, in request
        resp = self.send(prep, **send_kwargs)
      File "/usr/local/lib/python3.7/dist-packages/requests/sessions.py", line 643, in send
        r = adapter.send(request, **kwargs)
      File "/usr/local/lib/python3.7/dist-packages/requests/adapters.py", line 516, in send
        raise ConnectionError(e, request=request)
    requests.exceptions.ConnectionError: HTTPConnectionPool(host='0.0.0.0', port=8008): Max retries exceeded with url: /_matrix/client/r0/admin/register (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x14bb337d2320>: Failed to establish a new connection: [Errno 111] Connection refused'))

     

  2. Just thought I would share this little script. It can probably be integrated with DZMM's scripts, but I'm not using all his scripts.

     

When a mount drops, the mount script should automatically pick it up, but when that isn't possible the dockers will just keep filling the merger/union folder, making a remount impossible (you get the error that the mountpoint is not empty). To make sure all dockers that use the union get stopped, I made the following script; just run it every minute as well. When the mount is back, your mount script should start the dockers again (a sketch of that restart side follows below the script).

    Just make sure you change the folder paths to your situation and put in your dockers.

#!/bin/bash

# Stop the dockers that write to the mergerfs/union folder as soon as the
# rclone mount drops, so a remount doesn't fail on a non-empty folder.
if [[ -f "/mnt/user/mount_rclone/Tdrive/mountcheck" ]]; then
	echo "$(date "+%d.%m.%Y %T") INFO: Mount connected."
else
	touch /mnt/user/appdata/other/rclone/mount_disconnected
	echo "$(date "+%d.%m.%Y %T") INFO: Mount disconnected, stopping dockers."
	docker stop plex nzbget
	rm /mnt/user/appdata/other/rclone/dockers_started
fi
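For completeness, the restart side in the mount script could look something like this (just a sketch using the same marker files; adjust the paths and docker names to your situation):

#!/bin/bash

# Sketch: run from the mount script; restart the dockers once the mount is back.
if [[ -f "/mnt/user/mount_rclone/Tdrive/mountcheck" ]] && [[ -f "/mnt/user/appdata/other/rclone/mount_disconnected" ]]; then
	echo "$(date "+%d.%m.%Y %T") INFO: Mount reconnected, starting dockers."
	docker start plex nzbget
	touch /mnt/user/appdata/other/rclone/dockers_started
	rm /mnt/user/appdata/other/rclone/mount_disconnected
fi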

     

  3. 10 minutes ago, faulksy said:

Big thank you for all the hard work put into this container and the scripts. Posts here have helped me a lot to understand and resolve issues I previously had with mount_unionfs and mount_mergerfs.

     

Last night I got mount_mergerfs up and running, and 5 folders/files uploaded successfully to mount_rclone. There are a couple hundred GB waiting in the local mount. A further 5 folders uploaded, but empty, and I keep receiving this in the upload log.

     

     

I did a couple of shutdowns last night and I'm not sure if this error is a result of an unclean shutdown. I have a stock upload script besides changing RcloneUploadRemoteName="gdrive_vfs" to match the RcloneRemoteName.

     

    What should I be doing to fix it? Thanks

Delete the checker files in the appdata/other/rclone folder; the file is called something like upload_running.
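Something like this from the console (the exact file name depends on your script version, so list the folder first):

ls /mnt/user/appdata/other/rclone/        # find the stale checker file
rm /mnt/user/appdata/other/rclone/upload_running*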

  4. 2 hours ago, DZMM said:

For the dockers that are started by the script

OK, maybe you should phrase that differently then. I assumed correctly this time, but what I was reading was that I had to disable autostart of the Docker daemon on the Docker settings page. You mean the Docker overview page and the autostart for those specific dockers, not the daemon.

     

Regarding the SA rotation for uploading: does it rotate automatically when the 750GB is maxed out, or does it just move to the next SA when a new upload starts, because of timing? I.e., is it only suitable for continuous downloading/uploading and not for uploading a backlog at full gigabit speed?

  5. 2 minutes ago, DZMM said:

It's been a while, but I didn't do anything clever - I just followed these instructions: https://github.com/xyou365/AutoRclone/blob/master/Readme.md. Somehow I ended up with 500, not 100, though.

Probably because you had 5 projects. I had 28, so I got 2800 SA's hahaha.

     

Anyway, I discovered it was a remote to my Gdrive (not the team drive) that was giving the errors. Everything is mounted fine now.

I'll use my own mount script since I have 10 remotes, so using 10 copies of the script seems excessive. Maybe I can find a way to convert your script into a multi-remote script; something like the sketch below.
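Just a sketch of what I have in mind (remote names and mount points are made up; flags trimmed down to the essentials):

#!/bin/bash

# Loop one mount routine over several remotes instead of keeping 10 copies of the script.
for remote in tdrive1_crypt tdrive2_crypt tdrive3_crypt; do
	mkdir -p "/mnt/user/mount_rclone/$remote"
	rclone mount "$remote:" "/mnt/user/mount_rclone/$remote" --allow-other --daemon
done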

@DZMM, for the AutoRclone part, did you let the script create a new project? And did you change anything in your Gsuite developer/admin console to make it work?

     

    I read this on the rclone page but that seems to be too much work for 100 SA's.

     

    1. Create a service account for example.com
    
        To create a service account and obtain its credentials, go to the Google Developer Console.
        You must have a project - create one if you don’t.
        Then go to “IAM & admin” -> “Service Accounts”.
        Use the “Create Credentials” button. Fill in “Service account name” with something that identifies your client. “Role” can be empty.
        Tick “Furnish a new private key” - select “Key type JSON”.
        Tick “Enable G Suite Domain-wide Delegation”. This option makes “impersonation” possible, as documented here: Delegating domain-wide authority to the service account
        These credentials are what rclone will use for authentication. If you ever need to remove access, press the “Delete service account key” button.
    
    2. Allowing API access to example.com Google Drive
    
        Go to example.com’s admin console
        Go into “Security” (or use the search bar)
        Select “Show more” and then “Advanced settings”
        Select “Manage API client access” in the “Authentication” section
        In the “Client Name” field enter the service account’s “Client ID” - this can be found in the Developer Console under “IAM & Admin” -> “Service Accounts”, then “View Client ID” for the newly created service account. It is a ~21 character numerical string.
        In the next field, “One or More API Scopes”, enter https://www.googleapis.com/auth/drive to grant access to Google Drive specifically.
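If I read it right, the end result would just be a remote along these lines (a sketch; the path and user are placeholders, and impersonate only applies with the domain-wide delegation described above):

[example_sa]
type = drive
scope = drive
service_account_file = /path/to/credentials.json
impersonate = user@example.com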

     

Yeah, the SA's are created, as is the new project. The SA's are added to a group, which is added as a member to the team drive. When going into Google's dev console I don't see an OAuth module though; not sure if it's needed.

     

    My rclone config looks like this:

     

    [tdrive]
    type = drive
    scope = drive
    service_account_file = /mnt/user/appdata/other/rclone/service_accounts_tdrive/sa_tdrive.json
    team_drive = XX
    server_side_across_configs = true
    
    [tdrive_crypt]
    type = crypt
    remote = tdrive:Archief
    filename_encryption = standard
    directory_name_encryption = true
    password = XX
    password2 = XX

It really starts to annoy me that it's so complicated.

  8. I'm getting the following error when mounting my remotes:

     

INFO : Google drive root 'Archief': Failed to get StartPageToken: Get "https://www.googleapis.com/drive/v3/changes/startPageToken?alt=json&prettyPrint=false&supportsAllDrives=true": oauth2: cannot fetch token: 401 Unauthorized
Response: {
  "error": "deleted_client",
  "error_description": "The OAuth client was deleted."
}

    Do you also get that?

     

    And is there an easy way to use your mount script for multiple remotes?

  9. 1 hour ago, DZMM said:

No need for a new project if the SA group has been added to the respective teamdrives - think of SAs as normal accounts that don't need credentials/client_ids set up, i.e. bans work the same - on the offending SA.

     

They're good for efficiently handling multiple accounts for rotating etc. once they are set up.

     

     

Did you configure the path and file to the JSON through rclone config, or did you just add the line to the rclone config file after setting it up? When I try it the rclone config way over SSH, it says:

     

    Failed to configure team drive: config team drive failed to create oauth client: error opening service account credentials file: open sa_tdrive.json: no such file or directory

     

  10. 36 minutes ago, DZMM said:

    ok, I see where your confusion is coming from

     

    
    Or, like this if using service accounts:
    
    [gdrive]
    type = drive
    scope = drive
    service_account_file = /mnt/user/appdata/other/rclone/service_accounts/sa_gdrive.json
    team_drive = TEAM DRIVE ID
    server_side_across_configs = true
    
    [gdrive_media_vfs]
    type = crypt
    remote = gdrive:crypt
    filename_encryption = standard
    directory_name_encryption = true
    password = PASSWORD1
    password2 = PASSWORD2

    Fixed the readme - glad someone is reading it!

OK, so you set up the remote with one of the SA's you created, number 1 of 100 for example. And then for uploading you rotate between the 100 SA's in the service accounts folder? Am I understanding it correctly?
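I.e. something like this before each upload run, I assume (a sketch, not DZMM's actual script; the paths match his config example above, the counter file is made up):

#!/bin/bash

# Sketch: swap in the next numbered SA before each upload run.
counter_file=/mnt/user/appdata/other/rclone/sa_counter
count=$(cat "$counter_file" 2>/dev/null || echo 1)
cp "/mnt/user/appdata/other/rclone/service_accounts/sa_gdrive$count.json" \
   "/mnt/user/appdata/other/rclone/service_accounts/sa_gdrive.json"
echo $(( count % 100 + 1 )) > "$counter_file"  # cycle 1..100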

     

And if I want another remote to separate my Bazarr traffic, do I then create a new project or do I just use a different SA? I'm not sure at what level the API ban is registered.

  11. 1 hour ago, DZMM said:

    That doesn't answer my question unfortunately. In your readme you mention this:

     

    Quote

    Or, like this if using service accounts:

     

    [gdrive]
    type = drive
    scope = drive
    team_drive = TEAM DRIVE ID
    server_side_across_configs = true

     

    [gdrive_media_vfs]
    type = crypt
    remote = gdrive:crypt
    filename_encryption = standard
    directory_name_encryption = true
    password = PASSWORD1
    password2 = PASSWORD2

     

    If you need help doing this, please consult the forum thread above.

    It is advisable to create your own client_id to avoid API bans. More Details

So it seems that in your example you don't configure your client_id and password, but then later on you mention you do need them.

I've tried finding the final consensus in this topic, but it's becoming a bit too large to search easily. I've created 100 service accounts now and added them to my teamdrives.

     

How should I now set up my rclone remote? I should only need 2, right (1 drive and 1 crypt of that drive)? And should I set it up with its own client_id/secret when using SA's? According to your GitHub it seems like I just create a remote with rclone's own ID and secret, so nothing to define on my side.

  13. 45 minutes ago, DZMM said:

    You only need 1 project.  SAs are associated with your google account, so they can be shared between teamdrives if you want to.

     

Of the 500 or so I created, I assign 16 to each upload script (sa_tdrive1.json ----> sa_tdrive16.json, sa_cloud1.json ----> sa_cloud16.json etc etc) - I don't need that many, but it means I've got enough to saturate a gigabit line if I need to. All you have to do is rename the file, so you might as well assign 16 to each script.

     

    If you want to reduce the number of scripts, you could do what I've done:

     

    1. I've added the additional rclone mounts as extra mergerfs locations, so that I only have one master mergerfs share for say teamdrive1, td2 etc etc - saves a bit of ram

    2. I have one upload moving local files to teamdrive1 - saves a bit of ram and easier to manage bandwidth

    3. overnight I do a server side move from td1-->td2, td1-->td3 etc etc for the relevant folders - limited ram and no bandwidth hit as done server-side

    4. all files still accessible to mergerfs share in #1 - files are just picked up from their respective rclone mounts, rather than local or the td1 mount

     

     

I have no idea how you manage to do all those 4 steps. Care to share some parts of those scripts/mergerfs commands?
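For what it's worth, I imagine step 3 boils down to something like this (a sketch; remote names are made up, and both remotes would need server_side_across_configs = true as in the configs above, so no bandwidth is used):

# Overnight server-side move of a folder from teamdrive1 to teamdrive2.
rclone move tdrive1:Media/TV tdrive2:Media/TV --fast-list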

  14. 1 hour ago, bryansj said:

    I started as far back as MP3s in the 1990s with Usenet and moved to private trackers a few years ago.  You might have a different opinion, but I'm not going back.  I'm not talking about crappy public trackers here.  I've done seed boxes, but they don't really meet my use case anymore.

Well, to each his own. For mainstream media, usenet is vastly superior if set up right. If you have access to private trackers and also need non-mainstream media, then torrents can bring more to the table.

Either way, I think with your setup/wishes you can use rclone for your backups and replace Crashplan with it. But you don't need all this elaborate configuration for that. Just create a Gdrive/Team Drive and DO NOT mount it. Just upload to it, and let removed/older data be written to a separate folder within Gdrive. If you get infected, the ransomware can't directly access your mounted files, and in case encrypted/infected files do get uploaded, you'll have your old media to roll back to.
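A sketch of what I mean, using rclone's --backup-dir so removed/changed files land in a dated archive folder on the remote (the paths and remote name are examples, not your actual setup):

# Upload backups without mounting; old/deleted versions go to the archive folder.
rclone sync /mnt/user/backups gdrive_backup:current --backup-dir gdrive_backup:archive/$(date +%Y%m%d)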

     

Just remember that when you want to access your backups, you have to mount the rclone remote/gdrive first to see the files. Or, if you don't use encryption, you can simply view them in the browser.

  15. 46 minutes ago, bryansj said:

    I remember from my attempt a couple years ago that gdrive and downloads didn't get along, but I couldn't remember where the problem was between them.  The API ban would cause plenty of headaches.

     

I also remember there was a catch-22 back when Plex would work straight from a gdrive, before they canned that service. You could point Plex at gdrive and users would be able to stream from there without using your bandwidth. However, you couldn't encrypt your media, and you risked Google deleting your content. If you encrypt your media, it has to pass through your pipe to be decrypted, so you are back to using Plex "locally".

Why are you on torrents? Move to usenet and get rid of that seeding bullshit. Also, you can just direct play 4K from your gdrive; I do so with files up to 80 GB and it's fine. You might consider a seedbox though: you can use torrents and move to gdrive at gigabit speed.