[Support] Rclone (beta)


Recommended Posts

Thanks for the help. Restarting the Plex docker didn't do anything, but restarting the docker service did the trick!

 

I have another Ubuntu server with Plex that I set up with acd_cli/encfs.

But with rclone fixing the seek issue, and with your plugin, I decided to try just rclone so I wouldn't need to switch to a separate server.

I mounted my unRAID /mnt/acd share on that server; it could see into it fine and I was able to add it to that Plex instance. That's when I restarted docker, then came here and saw the same suggestion.

 

Thanks for the quick responses and help guys.

 

 

Link to comment

Another caveat I just thought of. Even if I mount the share before Plex starts, there is still the issue of adding content. There needs to be a way for the Plex docker to pick up changes without needing a restart; having to restart it every time new content is added is not practical. Not sure how that would work. I might have to switch to the Plex plugin if a solution cannot be found.

Link to comment


Hmm, I would try mounting it in /mnt/user/acd/unraidcrypt and see if that makes a difference.


Make the mount points within /mnt/disks

 

Then on the Plex template, set that container/host volume with a mode of Read Write,Slave

 

Should fix your problems. (The only mount points that support Slave mode are within /mnt/disks, and Slave mode is the fix for any app not seeing mounts properly when they are mounted after docker is started.)
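
For reference, a rough docker run equivalent of that template setting would look like the sketch below; the container name, image, and paths are just examples, not the actual template values:

    docker run -d --name plex \
      -v /mnt/disks/acd:/acd:rw,slave \
      linuxserver/plex

The slave propagation option is what lets a host path that gets mounted after the container starts still show up inside it.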

Link to comment

Anyone know how to get rclone to sync one file at a time? When I run a sync it seems to upload many files at the same time. My internet has been flaky lately, so when uploading many files at once it can take 8+ hours, and I am certain to lose the connection at some point and have to start over.

 

I checked the documentation and it looks like the --transfers option is what I need, but it says the default is 4 and it seems to be doing more than that, so I'm not sure if that is the option I am looking for.

 

Edit: Never mind, that was it. Maybe the beta changed the default or something. Anyway, for anyone else wondering, you can use that option to change the number of concurrent transfers.
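
For anyone searching later, a sketch of the command (the paths and remote name here are just examples):

    # upload one file at a time; --bwlimit can also help a flaky connection
    rclone sync /mnt/user/Media secret:Media --transfers 1 --bwlimit 500k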

Link to comment

No, it's part of the plugin - actually one of the things I'm changing before release :)

 

Yeah, that was my other guess. I figured it was part of the myrclone script. I really hate having to use an alternate command, but I get that it was done to avoid specifying the config file every time. There's got to be a better way to do that.

 

Thanks for your work on updating the plugin Waseh. I tested Plex and things seem to be working great. Now I just need to upload my 24TB of data to Amazon, then again to Google, plus whatever else gets added after those are finished, and I should be good to go. At 5Mbit upload speed, it shouldn't take too long...

Link to comment

I completely agree!

In the version I'm working on now you call it with rclone and still get the same features. It is completely unintuitive that you download a plugin called rclone but have to use it with the command myrclone, so that's gone in the next version unless I end up finding errors with the implementation.
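
For anyone wondering what such a wrapper boils down to, it is roughly the sketch below; the binary and config paths are hypothetical, not necessarily what the plugin actually uses:

    #!/bin/bash
    # myrclone-style wrapper: call the real binary with a fixed config file
    exec /usr/sbin/rclone --config /boot/config/plugins/rclone/.rclone.conf "$@"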

Link to comment


Question about that /mnt/disks advice: what is slave mode, and do we have to specify it when mounting? Also, what's the purpose of the disks share?

Link to comment


Slave mode lets the docker system see (and more to the point, populate) the contents of the host path if the path is mounted after the docker service starts.

 

However, unRaid's implementation only supports Slave Mode when the mount point is within /mnt/disks (it was added due to Unassigned Devices)

 

http://lime-technology.com/forum/index.php?topic=40937.msg465348#msg465348
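
Putting the two together, an rclone mount that cooperates with docker would look roughly like this (the remote name and folder are examples):

    # mount the remote somewhere docker's Slave mode can see it
    mkdir -p /mnt/disks/acd
    rclone mount secret: /mnt/disks/acd --allow-other &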

Link to comment

Plugin is now live: https://lime-technology.com/forum/index.php?topic=53365 :)

 

In regards to the risk of overlapping cron jobs, this is not something I've thought about. My use case doesn't really put me at risk of that problem.

 

Well, it's always a good idea to plan for contingencies, but mostly I'm just trying to figure out a way to create a daemon until one is properly added. Cron jobs are easy enough, but I'd rather have something more efficient, and I was just wondering if anyone else has tried anything other than a cron job.
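
One low-tech guard against overlapping runs, for anyone who wants it, is wrapping the cron command in flock so a new run bails out while the previous one is still going (the lock file path and rclone arguments are just examples):

    # cron entry: -n makes flock give up immediately if the lock is held
    flock -n /var/lock/rclone-sync.lock rclone sync /mnt/user/Media secret:Media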

Link to comment


It seems I edited my post while you were replying, so I'll ask again in case you didn't see the edit: do we have to do anything to specify slave mode when we mount it?

 

OK, so disks was added specifically for the purposes of slave mode then? Does it work the same as the user share in terms of NFS/SMB mounting?

Link to comment


In more general terms: for any path that you cannot guarantee is mounted before the docker service is started, you should make the mount point within /mnt/disks and use slave mode within the docker template if a docker app needs to access it.

 

Sent from my LG-D852 using Tapatalk

 

 

Link to comment


That makes sense, but I guess what I am getting at is that I have my user share mapped over SMB to my Windows machine. Can I do the same with shares under the disks share, or would that not really make sense in this case, since I would essentially be remotely mounting an already remotely mounted share? I just want a way to view the encrypted Amazon share in a decrypted state on my Windows VM in the same manner I view my unRAID share. Ultimately I suppose I could install rclone on the Windows machine, but I'd rather not have to if unRAID can manage it.

Link to comment


To share any particular mount you have created on unRaid with your Windows machine, you just need to edit the smb-extra.conf file on the flash drive (/boot/config).

 

This is the contents of mine:

 

[global]
  security = USER
  guest account = nobody
  public = yes
  guest ok = yes
  map to guest = bad user
  writeable = yes

[USR]
  path = /usr/local/emhttp/plugins
  valid users = andrew
  write list = andrew

I'm sharing /usr/local/emhttp/plugins (it shows up on the network as USR) with my Windows machine. The same user "andrew" exists on both the Windows box and the unRaid box.
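
On the Windows side you can then map it like any other share; the server name here is just an example:

    net use Z: \\Tower\USR /user:andrew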

 

 

 

Link to comment

@squid Thanks for the tips re /mnt/disks and dockers - this has fixed the issue I was having with mount points in docker using my acd_cli/encfs scripts.

 

I am testing the performance of acd_cli/encfs and rclone mount points; I have posted about it in the rclone plugin thread, so I won't repeat myself here.

 

Wob.

Link to comment

Hey,

 

I have been using the normal sync and it has been working. I wanted to change to copy instead of sync, so I did this:

 

/data = /mnt/user/

SYNC_DESTINATION = secret

SYNC_DESTINATION_SUBPATH = plex

 

So I made:

SYNC_COMMAND = rclone copy /mnt/user/ secret:plex

 

I am not seeing new files come in, though, and it has been about 24 hours (I have about 8TB of data). I didn't see any errors in the logs, but let me know if my syntax is incorrect.

 

How does it work? I notice it has re-run multiple times and still no changes (the log shows it restarted 4 hours ago and has been running for 4 hours since). Does it time out after a certain amount of time and start back up from where it left off, or is that log file not accurate? Does anyone have an idea how long it generally takes to scan through around 8TB of data and then copy any new files over? I was going to run it manually later tonight or tomorrow, not through the docker but through PuTTY, to see if it works that way, but figured I'd check here while I wait to give it more time.
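
One way to sanity-check the syntax without waiting a full day is a dry run with verbose output, which lists what would be transferred without copying anything:

    rclone copy --dry-run -v /mnt/user/ secret:plex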

Link to comment


 

I'm not using the copy option myself - I'm using sync. This is probably better to ask on the rclone GitHub page, or maybe in the unRAID rclone plugin thread - I see some people over there trying to sync/copy lots of data like you.

Link to comment
  • 3 weeks later...

I am missing something: I can't figure out how to authorize Amazon Cloud Drive. I am following the remote_setup instructions at http://rclone.org/remote_setup/, but when I get to the step "If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth", I get a "Hmm, we can't reach this page." error. I know I am missing something, but reading through this forum and the instructions, I can't figure it out. My default gateway is 192.168.29.1; not sure if that has anything to do with it. I tried altering the address in the instructions, but nothing has worked. Thanks in advance for your help.
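
If I read the docs right, that 127.0.0.1 link only works on the machine rclone itself is running on, so it won't open from another box. The workaround on that same remote_setup page is to run the authorize step on a machine that does have a browser and paste the token back into the config prompt, roughly:

    # on a desktop machine with a browser:
    rclone authorize "amazon cloud drive"
    # then, during rclone config on the headless box, answer "n" to
    # "Use auto config?" and paste in the token the command above prints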

 

Link to comment
