Posts posted by Kaizac


  1. On 8/11/2020 at 5:32 AM, Emilio5639 said:

    Scripts

    You're erroring on this part:

     

    #######  check if rclone installed  ##########
    echo "$(date "+%d.%m.%Y %T") INFO: Checking if rclone installed successfully."
    if [[ -f "$RcloneMountLocation/mountcheck" ]]; then
        echo "$(date "+%d.%m.%Y %T") INFO: rclone installed successfully - proceeding with upload."
    else
        echo "$(date "+%d.%m.%Y %T") INFO: rclone not installed - will try again later."
        rm /mnt/user/appdata/other/rclone/remotes/$RcloneUploadRemoteName/upload_running$JobName
        exit
    fi

    So it can't find the $RcloneMountLocation/mountcheck file. RcloneMountLocation is the same as your RcloneShare, so I would start tracing back from there to check whether you can find that file and whether all the $ variables in this script are set correctly.
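
    A quick way to trace it from the console (example path, substitute your actual RcloneShare/RcloneMountLocation):

    # confirm the dummy file the script checks for actually exists in the mount
    ls -l /mnt/user/mount_rclone/gdrive_media_vfs/mountcheck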


  2. 4 minutes ago, privateer said:

    Yes I read it, and yes I ran those commands.

     

    The files I have are named sa_gdrive_upload[X].json, but there's no sa_gdrive.json file in there. They are in the correct folder and there are 100 of them. This is the error I've been getting:

     

    Failed to create file system for "gdrive_media_vfs:": failed to make remote gdrive:"crypt" to wrap: drive: failed when making oauth client: error opening service account credentials file: open /mnt/user/appdata/other/rclone/service_accounts/sa_gdrive.json: no such file or directory

    You are renaming them to sa_gdrive_upload[X].json through the rename commands DZMM mentions. So if you want them to be called sa_gdrive.json, you have to define that in your rename script (see the sketch after the rename commands below):

    Dry Run:
    n=1; for f in *.json; do echo mv "$f" "sa_gdrive_upload$((n++)).json"; done
    
    Mass Rename:
    n=1; for f in *.json; do mv "$f" "sa_gdrive_upload$((n++)).json"; done
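
    And a rough sketch of the same loop with a different target prefix (adjust sa_gdrive to whatever filename your rclone config actually references):

    # dry run: only prints the mv commands, drop the echo to execute
    n=1; for f in sa_gdrive_upload*.json; do echo mv "$f" "sa_gdrive$((n++)).json"; done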
    

     

    Don't just copy and paste the code from the githubs, but also try to understand what it is doing. Otherwise you have no idea where to troubleshoot and end up breaking your setup.


  3. 2 hours ago, privateer said:

    Hopefully progressing onward...but with a new issue.

     

    Where should I get a copy of the sa_gdrive.json file? I have the remotes but not sure about that file...

     

    I don't know what the SharedTeamDriveSrcID or the SharedTeamDriveDstID are. Is the DstID the folder inside the teamdrive where I'm going to store things (e.g. teamdrivefolder/crypt)? What should go here...wondering if this is why I don't have the .json file.

    Did you read the github of DZMM?

    https://github.com/BinsonBuzz/unraid_rclone_mount


     

    Quote

     

    Optional: Create Service Accounts (follow steps 1-4). To mass rename the service accounts use the following steps:

    Place Auto-Generated Service Accounts into /mnt/user/appdata/other/rclone/service_accounts/

    Run the following in terminal/ssh

    Move to directory: cd /mnt/user/appdata/other/rclone/service_accounts/

    Dry Run:

    n=1; for f in *.json; do echo mv "$f" "sa_gdrive_upload$((n++)).json"; done

    Mass Rename:

    n=1; for f in *.json; do mv "$f" "sa_gdrive_upload$((n++)).json"; done

     

     


  4. 4 minutes ago, privateer said:

    python-pip-20.0.2-x86_64-1.txz is installed on my server. It's the only one I see with pip in it (unless I've missed something). I shouldn't need to reboot or anything after an install, right?

     

    pip3 returns this error:

    
    Traceback (most recent call last):
      File "/usr/bin/pip3", line 6, in <module>
        from pkg_resources import load_entry_point
    ModuleNotFoundError: No module named 'pkg_resources'

     

    Then just open the console and run "python3 -m ensurepip --upgrade", which reinstalls pip together with setuptools (the package that provides pkg_resources).
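
    A minimal sketch, assuming your Python build ships the ensurepip module:

    # reinstall pip and setuptools for the system python3
    python3 -m ensurepip --upgrade
    # verify pip works again
    pip3 --version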


  5. 7 minutes ago, privateer said:

    I've been successfully running the original version (unionfs) for a while and finally decided to take the plunge to team drives, service accounts, and mergerfs.

     

    While trying to upgrade, I ran the following command as listed on the AutoRclone git page:

    
    sudo git clone https://github.com/xyou365/AutoRclone && cd AutoRclone && sudo pip3 install -r requirements.txt

    The output for this command resulted in an error: 

    
    sudo: pip3: command not found

    The rest of the command worked fine. Any idea what's going on here?

    You don't have pip installed on your server. Get it through NerdPack.


  6. Just now, Bjur said:

    I don't know, I just started; that's why I'm asking people who have more experience with this.

    If Google stops the unlimited service because of people encrypting, would there then be a longer grace period to get the stuff local, or will they just freeze people's things?

    Is this a likely scenario?

    More likely would be that they enforce the 5-user requirement to actually have unlimited. And after that they might raise prices. For both scenarios, it's personal for each person whether it's worth it. And I think they will give a grace period if things do drastically change.

     

    I'm using my drive both for my work-related storage and personal. Don't forget there are many universities and data-driven companies who store TBs of data each day. We're pretty much a drop in the bucket for Google. Same with mobile providers. I have an unlimited plan, extra expensive, but most months I don't even use 1 GB (especially now, being constantly at home). And then other days I rake in 30 GB per day because I'm streaming on holiday or working without wifi.

     

    I did start with cleaning up my media though. I was storing media I will never watch, but because it got downloaded by my automations it got in. It gives too much of that Netflix effect: scrolling indefinitely and never watching an actual movie or show.


  7. On 4/1/2020 at 1:21 PM, PSYCHOPATHiO said:

    It is not so complicated to get it up & running, you just need to know where to start

     

    The first thing you need to do is edit the homeserver.yaml located in /appdata/matrix

     

    Locate

    
    - port: 8008
        tls: false
        type: http
        x_forwarded: true
        bind_addresses: ['::1', '127.0.0.1'] 

    Change the bind address to your docker IP address or to 0.0.0.0, otherwise you won't be able to connect.

     

    Then start the docker, head to console & insert the following:

    
    cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 32 | head -n 1

    Copy the result & back to homeserver.yaml locate these 2 lines

    
    enable_registration: False <-- True if you want users to register
    registration_shared_secret: "paste the code here"

    Apparently in the docker the registration shared secret is already generated, but I advise you to use your own.

     

    You can add your own ReCAPTCHA & TURN uri server settings to the homeserver.yaml.

    Just restart the matrix docker & redirect your reverse proxy to the IP & port; you can test it at https://riot.im/app

     

    To register an admin account:

    
    register_new_matrix_user -c /data/homeserver.yaml http://0.0.0.0:8008

    This can be HTTPS 8448 if you have the certs installed

     

    There is one more step that involves creating an SRV record in your DNS if you want to connect your server to the federation.

     

    _matrix._tcp.matrix.example.com. The weight and priority are your choice; as for the port, if you're behind a reverse proxy you can go with 443. See the example record below.
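
    For example (example values; TTL, host and port depend on your setup):

    ; _service._proto.name.   TTL   class  SRV  priority  weight  port  target
    _matrix._tcp.example.com. 3600  IN     SRV  10        5       443   matrix.example.com.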

     

    Once the server is up & running you can head back to homeserver.yaml, read through the settings & adjust the server to what you need.

     

    Thanks for the explanation, I've got my Matrix Synapse server running! The only problem I have is creating an admin account. When I use your command (or with 0.0.0.0 adjusted to my matrix docker's IP) I get a console full of errors. Could you elaborate on this process and what I should fill in for the questions? It's a lot of connection refused errors and now max retries errors. See below:

     

    Traceback (most recent call last):
      File "/usr/local/lib/python3.7/dist-packages/urllib3/connection.py", line 157, in _new_conn
        (self._dns_host, self.port), self.timeout, **extra_kw
      File "/usr/local/lib/python3.7/dist-packages/urllib3/util/connection.py", line 84, in create_connection
        raise err
      File "/usr/local/lib/python3.7/dist-packages/urllib3/util/connection.py", line 74, in create_connection
        sock.connect(sa)
    ConnectionRefusedError: [Errno 111] Connection refused
    
    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "/usr/local/lib/python3.7/dist-packages/urllib3/connectionpool.py", line 672, in urlopen
        chunked=chunked,
      File "/usr/local/lib/python3.7/dist-packages/urllib3/connectionpool.py", line 387, in _make_request
        conn.request(method, url, **httplib_request_kw)
      File "/usr/lib/python3.7/http/client.py", line 1244, in request
        self._send_request(method, url, body, headers, encode_chunked)
      File "/usr/lib/python3.7/http/client.py", line 1290, in _send_request
        self.endheaders(body, encode_chunked=encode_chunked)
      File "/usr/lib/python3.7/http/client.py", line 1239, in endheaders
        self._send_output(message_body, encode_chunked=encode_chunked)
      File "/usr/lib/python3.7/http/client.py", line 1026, in _send_output
        self.send(msg)
      File "/usr/lib/python3.7/http/client.py", line 966, in send
        self.connect()
      File "/usr/local/lib/python3.7/dist-packages/urllib3/connection.py", line 184, in connect
        conn = self._new_conn()
      File "/usr/local/lib/python3.7/dist-packages/urllib3/connection.py", line 169, in _new_conn
        self, "Failed to establish a new connection: %s" % e
    urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x14bb337d2320>: Failed to establish a new connection: [Errno 111] Connection refused
    
    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "/usr/local/lib/python3.7/dist-packages/requests/adapters.py", line 449, in send
        timeout=timeout
      File "/usr/local/lib/python3.7/dist-packages/urllib3/connectionpool.py", line 720, in urlopen
        method, url, error=e, _pool=self, _stacktrace=sys.exc_info()[2]
      File "/usr/local/lib/python3.7/dist-packages/urllib3/util/retry.py", line 436, in increment
        raise MaxRetryError(_pool, url, error or ResponseError(cause))
    urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='0.0.0.0', port=8008): Max retries exceeded with url: /_matrix/client/r0/admin/register (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x14bb337d2320>: Failed to establish a new connection: [Errno 111] Connection refused'))
    
    During handling of the above exception, another exception occurred:
    
    Traceback (most recent call last):
      File "/usr/local/bin/register_new_matrix_user", line 22, in <module>
        main()
      File "/usr/local/lib/python3.7/dist-packages/synapse/_scripts/register_new_matrix_user.py", line 225, in main
        args.user, args.password, args.server_url, secret, admin, args.user_type
      File "/usr/local/lib/python3.7/dist-packages/synapse/_scripts/register_new_matrix_user.py", line 138, in register_new_user
        user, password, server_location, shared_secret, bool(admin), user_type
      File "/usr/local/lib/python3.7/dist-packages/synapse/_scripts/register_new_matrix_user.py", line 47, in request_registration
        r = requests.get(url, verify=False)
      File "/usr/local/lib/python3.7/dist-packages/requests/api.py", line 76, in get
        return request('get', url, params=params, **kwargs)
      File "/usr/local/lib/python3.7/dist-packages/requests/api.py", line 61, in request
        return session.request(method=method, url=url, **kwargs)
      File "/usr/local/lib/python3.7/dist-packages/requests/sessions.py", line 530, in request
        resp = self.send(prep, **send_kwargs)
      File "/usr/local/lib/python3.7/dist-packages/requests/sessions.py", line 643, in send
        r = adapter.send(request, **kwargs)
      File "/usr/local/lib/python3.7/dist-packages/requests/adapters.py", line 516, in send
        raise ConnectionError(e, request=request)
    requests.exceptions.ConnectionError: HTTPConnectionPool(host='0.0.0.0', port=8008): Max retries exceeded with url: /_matrix/client/r0/admin/register (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x14bb337d2320>: Failed to establish a new connection: [Errno 111] Connection refused'))

     


  8. Just thought I would share this little script. It can probably be integrated with DZMM's scripts, but I'm not using all his scripts.

     

    When a mount drops, the mount script should automatically pick it up again, but when this is not possible the dockers will just keep filling the merger/union folder, making the remount impossible (you get the error that the mount folder is not empty). To make sure all dockers that use the union are stopped, I made the following script. Just run it every minute as well. When the mount is back again, your mount script should start your dockers again.

    Just make sure you change the folder paths to your situation and put in your own dockers.

    #!/bin/bash

    # Check the dummy file that the mount script creates inside the remote.
    if [[ -f "/mnt/user/mount_rclone/Tdrive/mountcheck" ]]; then
        echo "$(date "+%d.%m.%Y %T") INFO: Mount connected."
    else
        # Flag the disconnect so the mount script knows to restart the dockers later,
        # then stop every docker that writes to the union.
        touch /mnt/user/appdata/other/rclone/mount_disconnected
        echo "$(date "+%d.%m.%Y %T") INFO: Mount disconnected, stopping dockers."
        docker stop plex nzbget
        rm /mnt/user/appdata/other/rclone/dockers_started
    fi
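
    And a matching sketch for the restart side in your mount script (docker names are the examples from above):

    # once the mount is back, start the dockers again and clear the flag
    if [[ -f "/mnt/user/appdata/other/rclone/mount_disconnected" ]]; then
        echo "$(date "+%d.%m.%Y %T") INFO: Mount restored, starting dockers."
        docker start plex nzbget
        touch /mnt/user/appdata/other/rclone/dockers_started
        rm /mnt/user/appdata/other/rclone/mount_disconnected
    fi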

     


  9. 10 minutes ago, faulksy said:

    Big thank you for all the hard work put into this container and the scripts. Posts here have helped me a lot to understand and resolve issues I had previously with mount_unionfs and mount_mergerfs.

     

    Last night I got mount_mergerfs up and running, and 5 folders/files uploaded successfully to mount_rclone. There is a couple hundred GB waiting in the local mount. A further 5 folders uploaded but empty, and I keep receiving this in the upload log.

     

     

    I did a couple of shutdowns last night and I'm not sure if this error is a result of any unclean shutdowns. I have a stock upload script besides changing RcloneUploadRemoteName="gdrive_vfs" to match the RcloneRemoteName.

     

    What should I be doing to fix it? Thanks

    Delete the checker files in the appdata/other/rclone folder. Something like upload_running, see below.
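
    A sketch of what that looks like (the exact filename depends on your remote and job name, matching the rm line in the upload script):

    # remove the leftover checker file so the upload script will run again
    rm /mnt/user/appdata/other/rclone/remotes/gdrive_vfs/upload_running*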


  10. 2 hours ago, DZMM said:

    For the dockers who are starting with the script

    Ok, maybe you should write that down differently then. I assumed correctly, but what I was reading was that I had to disable the autostart of the docker daemon on the docker settings page. You mean the docker overview page and the autostart for those specific dockers, not the daemon.

     

    Regarding the SA rotation for uploading: does it rotate automatically when the 750GB is maxed out, or does it just move up to the next SA when a new upload is started because of timing? I.e., is it only suitable for continuous downloading/uploading and not for uploading a backlog at full gigabit speed?


  11. 2 minutes ago, DZMM said:

    It's been a while but I didn't do anything clever - I just followed these instructions: https://github.com/xyou365/AutoRclone/blob/master/Readme.md. Somehow I ended up with 500 not 100 though

    Probably because you had 5 projects. I had 28, so I got 2800 SAs hahaha.

     

    Anyway, I discovered it was a remote to my Gdrive (not the team drive) that was giving the errors. Everything has been mounted fine now.

    I'll use my own mount script since I have 10 remotes, so using 10 scripts seems excessive. Maybe I can find a way to convert your script into a multiple-remote script, something like the sketch below.
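
    A rough, untested sketch of the idea (remote names and mount paths are hypothetical):

    #!/bin/bash

    # mount each rclone remote under its own folder, then keep the script alive
    for remote in gdrive_media_vfs tdrive1_vfs tdrive2_vfs; do
        mkdir -p "/mnt/user/mount_rclone/$remote"
        rclone mount --allow-other --dir-cache-time 720h \
            "$remote:" "/mnt/user/mount_rclone/$remote" &
    done
    wait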


    @DZMM, for the AutoRclone part, did you let the script create a new project? And did you change anything in your Gsuite developer/admin console to have it work?

     

    I read this on the rclone page, but that seems to be too much work for 100 SAs.

     

    1. Create a service account for example.com
    
        To create a service account and obtain its credentials, go to the Google Developer Console.
        You must have a project - create one if you don’t.
        Then go to “IAM & admin” -> “Service Accounts”.
        Use the “Create Credentials” button. Fill in “Service account name” with something that identifies your client. “Role” can be empty.
        Tick “Furnish a new private key” - select “Key type JSON”.
        Tick “Enable G Suite Domain-wide Delegation”. This option makes “impersonation” possible, as documented here: Delegating domain-wide authority to the service account
        These credentials are what rclone will use for authentication. If you ever need to remove access, press the “Delete service account key” button.
    
    2. Allowing API access to example.com Google Drive
    
        Go to example.com’s admin console
        Go into “Security” (or use the search bar)
        Select “Show more” and then “Advanced settings”
        Select “Manage API client access” in the “Authentication” section
        In the “Client Name” field enter the service account’s “Client ID” - this can be found in the Developer Console under “IAM & Admin” -> “Service Accounts”, then “View Client ID” for the newly created service account. It is a ~21 character numerical string.
        In the next field, “One or More API Scopes”, enter https://www.googleapis.com/auth/drive to grant access to Google Drive specifically.
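
    For reference, once a key file exists, the remote in the rclone config just points at it (a sketch with example paths and a placeholder ID):

    [gdrive]
    type = drive
    scope = drive
    service_account_file = /mnt/user/appdata/other/rclone/service_accounts/sa_gdrive_upload1.json
    team_drive = <your Team Drive ID>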