Posts posted by bobbintb

  1. I am missing something, but I can't figure out how to authorize amazon cloud drive. Following the remote_setup instructions at http://rclone.org/remote_setup/, when I follow these instructions "If your browser doesn't open automatically go to the following link: http://127.0.0.1:53682/auth", I get a "Hmm, we can't reach this page." error.  I know I am missing something, but reading through this forum and the instructions, I can't figure it out.  My default gateway is 192.168.29.1, not sure if that has anything to do with it.  I tried altering the address in the instructions, but nothing has worked.  Thanks in advance for your help.

     

    Use the other option. The one that says something like "use this option if running headless or the first option doesn't work". Unraid is headless.

  2. Bobbitb, for your request, couldn't you pull the amount transferred out of the log file?  Maybe not ideal, but something to work in the meantime? 

     

    I guess the downside is you would have to know what log file it was being written to and only have 1 instance running at a time?

     

    cat rclone_output.log | grep Transferred: | tail -2 | head -1
    Transferred:   329.954 GBytes (2.197 MBytes/s)

     

    Yes and no. It would kind of be a pain for a number of reasons, like the ones you mentioned. There is no native, intuitive way to get the information that is available, so it's a "work with what we've got" sort of situation. And what we've got isn't sufficient: rclone does not calculate the overall progress. It provides the progress of the current working file, but not of the entire job. One could probably calculate that by first capturing the dry-run list of files, comparing it with what has been completed, and factoring in the progress of the current file, but that is not accurate, as the ratio of files transferred to files left to transfer is not truly representative of the percentage complete. Say I am transferring 100 files, 99 of which are 1 megabyte and 1 of which is 20 gigabytes. If it transfers the small files first, it has transferred 99 out of 100 files, so it would show 99% complete, which would not be even close to accurate. We would need to know the size of all the files to be transferred to calculate that properly. I'm not able to verify at the moment, but if I recall, a dry run does not supply that information, and I don't think rclone supplies enough for the user to calculate it themselves. I was just kind of hoping he would add a few more outputs to make the progress a little more transparent to the user so someone else could take on the rest, but it seems he's willing to be a little more comprehensive than that. The big thing I am wanting at this point is just a progress bar of some sort on the plugin page of Unraid.

     

    edit: come to think of it though, I suppose if the dry run has a list of the local files to be transferred but does not give size information, there really is no reason that can't be gleaned from the OS itself. Duh. But I don't know if that applies to remotes as well. If the job was syncing from a remote to local instead, that might pose a problem if rclone does not supply the file size information as there wouldn't really be any other way to obtain it.
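
    A byte-weighted percentage like the one described above is simple arithmetic once the two totals are known. A minimal sketch with placeholder numbers; in practice total_bytes might come from `rclone size` (or `du` for a local source) and done_bytes from the Transferred: line of the log:

```shell
# Placeholder totals (assumptions): in practice total_bytes might come
# from `rclone size` or `du -sb` on the source, and done_bytes from the
# log's "Transferred:" line.
total_bytes=107374182400   # 100 GiB in the whole job
done_bytes=35433480192     # 33 GiB moved so far
# Weighting by bytes instead of file count keeps one 20 GB file from
# skewing the result.
awk -v d="$done_bytes" -v t="$total_bytes" 'BEGIN { printf "%.0f%%\n", d / t * 100 }'
# prints 33%
```

    The grep/tail pipeline quoted earlier could feed done_bytes in; the parsing is left out here since the log format may vary between rclone versions.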

  3. When I create a copy script, once I abort it, the script still seems to run. Is there a way to stop all running rclone scripts, or some way to stop copying if the script isn't running in the background?

     

    I'm assuming you mean you are using the user scripts plugin and aborting that. If that doesn't work, just kill the process.

  4. It would be empty if you just created it and haven't mounted anything to it yet. If things got messed up with the mount, you can either reboot or force-kill the mount command. I'm pretty sure you don't need to reboot for SMB to work. You may have to restart SMB or stop and start the array, but you might not even need to do that. If you want to kill a stuck mount command, find the process ID with

     

    ps aux | grep mount

     

    then just use the "kill" command followed by the process ID. You may need to do a "kill -9". Run the first command again to see whether it was stopped.
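
    The ps/kill/kill -9 sequence above can be wrapped in a small helper. A sketch only; `stop_proc` and its pattern argument are my own naming, and it assumes `pgrep` is available:

```shell
# Find the first process whose command line matches the pattern, try a
# polite SIGTERM, and escalate to SIGKILL only if it survives.
stop_proc() {
    pid=$(pgrep -f "$1" | head -n 1)
    [ -z "$pid" ] && return 0          # nothing matched, nothing to do
    kill "$pid" 2>/dev/null
    sleep 1
    # kill -0 just checks whether the process still exists
    kill -0 "$pid" 2>/dev/null && kill -9 "$pid"
    return 0
}

# e.g. stop_proc "rclone mount"
```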

  5. You can make an SMB share just fine from the /mnt/disks mount point. In Unraid, go to the Settings tab, then SMB, and add the share under the Samba extras configuration. Here's my example:

     

    [Amazon]
       path = /mnt/disks/Amazon
       read only = yes
       guest ok = yes

     

    Of course that is a somewhat roundabout way of doing it, mounting a network share of a network share, but I didn't want to have to install rclone on another machine just to view the files. I'm only doing it for a visual reference.

     

    Also look at this:

    https://lime-technology.com/forum/index.php?topic=45880.0

  6. Would it be possible to use my mounted rclone ACD location to download directly to ACD, or would it download to my server and then upload? I am just wondering if I can avoid getting a VPS.

     

    I'm kind of confused on what you are asking.  ???

  7. The user scripts plugin was recently updated to include variables that are preceded by a comment character "#". That might have something to do with it, as I have not yet updated my user scripts plugin and haven't heard of anyone else coming across the issue you are describing.

     

    The reason people are mounting in /mnt/disks is that it is necessary if you want your dockers, such as Plex, to have access to the mounts. If this doesn't matter to you, then you can mount it under /mnt/user. I've got user and user0 as well.

     

    It's not in the OP yet, but I have a custom script that will run sync as a daemon so you do not need to schedule it. You would have to adjust the sync command to your liking, however, such as if you wanted to throttle bandwidth as you mentioned in the other thread. I've been using it and it's been working fine for me. The only thing you need to back up in order to restore is your config file, or really just your encryption password(s). It would be easiest to back up the config file by zipping it up in a password-protected file like you said.

  8. Well, they were instructions to change the variables. I said to use the included scripts and just change the source and destination. I didn't say anything about adding your script to the end of the existing script. You really have a talent for overcomplicating things, but I'm glad you got it working.  ;)

  9. Hey,

     

    I don't understand your directions on how to mount: I created a folder called mount and tried running this command:

    rclone mount --allow-other secret: /mnt/user/mount/ &

     

    I get this error:  Fatal error: failed to mount FUSE fs: mountpoint does not exist: /mnt/user/mount

     

    What am I missing here?

     

    It's just a hunch, but I'd say the mountpoint does not exist.  ;)
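
    In other words: create the mountpoint before mounting, since rclone won't create it for you. A rough sketch of the fix (the /mnt/user/mount path is the one from the post above; the mktemp fallback and the rclone check are only there so the snippet runs anywhere):

```shell
# Make sure the mountpoint exists first.
mnt=/mnt/user/mount
mkdir -p "$mnt" 2>/dev/null || mnt=$(mktemp -d)   # fallback for illustration only
# Then mount as in the original command (skipped if rclone isn't installed).
command -v rclone >/dev/null && rclone mount --allow-other secret: "$mnt" &
```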

    Also, update your plugin if you don't have the latest. The OP needs to be updated to reflect this, but the latest version supplies scripts for mounting and unmounting (for the user scripts plugin). All you have to do is specify the source and destination, so you don't have to deal with things like forgetting to create the mountpoint first. If you do have the latest version, just go into the settings page for the plugin to activate the script.

     

    For this part of the instructions: Set the container/host volume with a mode of Read Write,Slave, else the files will not show up inside the container.

     

    That is when I am creating the container for plex?

     

    Yes, or editing an existing container.

  10. I have made a daemon for user scripts. It requires inotify-tools, which can be found in nerd tools.

     

    #DECLARE VARIABLES
    backup_dir=""
    server_backup_dir=""
    log_file="/boot/config/plugins/rclone-beta/logs/backuperrors.txt"
    
    trap 'kill -HUP 0' EXIT
    
    #FUNCTION TO USE RCLONE TO BACK UP DIRECTORIES
    backup () {
        if rclone sync "$backup_dir" "$server_backup_dir" --transfers 1 >>"$log_file" 2>&1
        then
            echo "backup succeeded: $backup_dir"
        else
            echo "rclone failed on $backup_dir"
            return 1
        fi
    }
    
    #CHECK IF INOTIFY-TOOLS IS INSTALLED
    type -P inotifywait &>/dev/null || { echo "inotifywait command not found."; exit 1; }
    
    #INFINITE WHILE LOOP
    while true
    do
        #RUN A SYNC ONCE TO ENSURE THE DIRECTORIES ARE UP TO DATE
        backup || exit 0
    
        #THEN WAIT FOR A FILESYSTEM CHANGE AND TRIGGER ANOTHER SYNC
        inotifywait -r -e modify,attrib,close_write,move,create,delete --format '%T %:e %f' --timefmt '%c' "$backup_dir" >>"$log_file" 2>&1 && backup
    done
    

     

    Just fill in the two variables. I have partially tested it with user scripts. It works when run manually, but I have not set it to run at array start, so let me know if there are issues. I suspect there might be if this script somehow gets run before the mount script. If so, a simple sleep at the beginning of this script should fix that. Waseh, feel free to include this in the plugin if you'd like.
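
    The "simple sleep" could be made a little smarter by polling /proc/mounts until the mount actually appears instead of sleeping a fixed amount. A sketch; `wait_for_mount` and its arguments are my own invention, and the simple grep won't handle mount paths containing spaces:

```shell
# Wait until <path> shows up in /proc/mounts, giving up after <tries>
# one-second polls. Returns 0 once mounted, 1 on timeout.
wait_for_mount() {
    i=0
    until grep -qs " $1 " /proc/mounts; do
        i=$((i + 1))
        [ "$i" -ge "${2:-60}" ] && return 1
        sleep 1
    done
    return 0
}

# e.g. first line of the sync daemon script:
# wait_for_mount /mnt/disks/Amazon 120 || exit 1
```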

  11. Love the additions so far. I'd like to request that I be able to go to the GUI page by clicking on the icon on the plugins page. I think most of the plugins work that way in addition to going through the settings page. It would be nice for consistency and ease of use.

  12. WARNING - do not run your rclone scripts at the start of the array.  I did this and it froze my server - the jobs worked, but my server wasn't accessible:

     

    My first guess was right and the problem was my rclone script, which was doing my backup job at the start of the array. Previously I was using overnight cron jobs, so having the script run at the start of the array allowed the cron job to run but froze the machine.

     

    What script did you run? The sync cron? I mounted that way with no problem. There is also a sync script that runs when changes are made instead of on a cron schedule.

  13. I can't seem to find any sort of changelog for rclone beta. Not the plugin but the program itself. I can find the regular changelog but nothing for the beta version. Can someone direct me?

     

    Beta releases are generated from each commit to master. Note these are named like

     

    Source: http://rclone.org/downloads/

     

    Still not quite seeing it. I see that there is a git log, but that really just shows commits. I can download the beta and there is a changelog in the zip file, but it only covers the stable release. Is there any place that shows what has changed between beta versions? I'm still not finding one.

  14. I have tried with a setting of 50Mb but no luck.

     

    As another test I also mounted rclone under my encfs (replacing acd_cli) and I still get similar speeds starting playback, so it doesn't appear to be the encryption layer. Trying to use verbose to debug, but I might need to post on the rclone logs for help with that.

     

    Thanks,

    Wob

     

    So you are getting similar speeds using rclone to mount and encfs to encrypt as you are using rclone for both mounting and encrypting? Is anyone else getting similar results? Hopefully I'll be able to help test soon; I've just been too busy lately. That would be very telling, because if it is the encryption layer, I don't know if much can be done to improve the performance, as doing so would likely have serious repercussions for existing setups. Maybe I'm wrong about that. If it is the mounting, I think there is room for improvement. Personally, even if acd_cli/encfs is better right now, I would still bank on rclone improving its performance. With acd_cli and encfs being two separate projects, I don't expect much to change between them.

     

    Edit: One of the posts I mentioned earlier (I forget which thread I mentioned it in) seems to indicate it is indeed crypt causing the issue.

     

    https://forum.rclone.org/t/plex-server-with-amazon-drive-rclone-crypt-vs-encfs-speed/106/17

     

    The author of rclone is looking into it. Based on the fact that the performance is decent enough and that there is active, significant work being done on rclone, I think I'll stick with it. I'm fairly confident the performance issues are just due to how new this is and that they will be resolved eventually.

  15. Thanks for your help bobbintb and Waseh. I hope they support rsync etc. in future but for now rclone copy is working and the mount remains working. :)

     

    I doubt they will. Rclone is "rsync for the cloud"; it is meant to fill a role that rsync does not. It doesn't really make sense to use them together like that.

  16. Yeah, it might, and that was kind of my concern as well. But then I thought: people are going to be asking you (or this thread) about mounting/user scripts issues regardless of whether or not it is supported. Even though it's stated in the OP, there will still be problems with docker not seeing the mount because they didn't mount to /mnt/disks/, or they forgot the --allow-other option, or the & at the end, or whatever. This might minimize that if it's planned out well enough: all they would need to specify is the remote and the name of the mount point, and it would automatically be mounted to /mnt/disks.

     

    If it were done, I'm thinking it could either wait until a GUI is made and be done through that, or be added now but commented out, with a note that it has to be uncommented once the config is done. Just having two variables at the beginning of the script for the mount point name and remote location would minimize the amount of editing a user would need to do. Just food for thought.

  17. I have a request. Well, more like a topic for discussion. Would it be possible to include mount/unmount scripts for the user scripts plugin in this plugin? It might be more feasible once a GUI is made, because adding the scripts before rclone is configured might cause problems. Or maybe include a script but comment it out, and the user can un-comment it once rclone is configured.

     

    I realize it is fairly trivial to make them yourself, and I already have. But for one, it would make things easier on the user, and for another, it would likely cut back on troubleshooting.
