Posts posted by Kaizac

  1. 4 minutes ago, cammelspit said:

    Hi! I am trying to access my gdrive from this plugin, using Spaceinvader One's video as a rough guide. As a test to make sure everything works, I am trying to list the directories, but I'm getting errors.

    
    rclone lsd google
    2018/08/27 12:01:50 ERROR : : error listing: directory not found
    2018/08/27 12:01:50 Failed to lsd: directory not found

     

    I set the remote to have access to everything during setup; I believe this was option 1 at that point. Is this an issue with the plugin, with the rclone version the plugin uses, or with rclone itself? If this isn't the plugin's fault, and at this point I am not sure, it would still be nice to get a quick hand figuring out what I am doing wrong. If I explicitly set a directory such as

    
    rclone ls "google:Client Data Backup"

    it does work and lists the files as expected, but I can't seem to ls or lsd the root. Should I try the beta plugin instead? I may just be stupid here, or I may be wasting your time with problems I shouldn't be bringing to you, but I am super new to Linux in general and this is my first time ever using rclone, so I am not sure if the problem is me or not. If it is me, you can just tell me to take a hike and I will, I promise.

     

    You forgot the ":". So type in rclone lsd google:
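
    For reference: the remote name always needs the trailing colon, and a path containing spaces needs quotes, e.g.:

    rclone lsd google:
    rclone ls "google:Client Data Backup"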

  2. 1 minute ago, Waseh said:

    Configs are kept even when uninstalling so you don't have to worry about removing and reinstalling the plugin :)

     

    Thanks! I just deleted and reinstalled it through the apps library and now it's working again. I will reboot my server again to see if it works this time. Will report back in about 5 minutes.

  3. Just now, Waseh said:

    That would suggest that the plugin didn't install correctly since only the wrapper seems to be present. 

     

    Is this new behavior and/or is it connected to updating/install on reboot while offline? 

     

    New behaviour, and I always install with internet connected. I'm not sure if I can force a reinstall without losing all my configuration?

  4. 14 hours ago, Waseh said:

    Hey guys

    Sorry for the lack of updates in a (long) while! Real life has been taking up a lot of time, and my own install of rclone has been sufficient for my needs.
    However, both the stable branch and the beta branch should now survive a reboot even if no internet connection is available. Please test it out and see if it's working as intended. I also fixed the missing icon on the settings page.

     

    Cheers

     

    Thanks for all your work! I just ran rclone config through PuTTY (the local terminal gives the same error) as I always do, and it gives me the following error:


    root@Unraid:~# rclone config
    /usr/sbin/rclone: line 21: rcloneorig: command not found

     

    So I can't configure anything at the moment. Is this a bug, or on my side?

     

    Edit: did some further checks and mounting also does not work.
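
    For anyone else hitting this: a quick way to check whether only the wrapper made it onto the system (I'm assuming the real binary is called rcloneorig, going purely by the error above):

    cat /usr/sbin/rclone       # inspect the wrapper script the plugin installs
    which rcloneorig           # see whether the real binary is on the PATH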

     

  5. I'm trying to use the Cloudflare DNS + HTTP proxy, which I can't seem to get working the way I want.

     

    What I have: Let's Encrypt is set up, and in Cloudflare I created CNAMEs which redirect to my DuckDNS domain. When I turn off proxy mode in Cloudflare, my personal domain and subdomains resolve to the right dockers. However, when I turn on the HTTP proxy in Cloudflare, the (sub)domains no longer resolve.

     

    Should what I'm trying to do work, or is it impossible? I'd prefer not to put my WAN IP out in the open, so the HTTP proxy would be very useful.
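
    In case it helps with diagnosing: with the proxy (orange cloud) on, a DNS lookup should return Cloudflare edge IPs instead of your own, which is easy to verify (example.com stands in for my real domain):

    dig +short sub.example.com   # proxy on: Cloudflare IPs; proxy off: the DuckDNS/WAN IP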

  6. 20 hours ago, DZMM said:

    I've updated my script post with a few changes I've made:

    1. new rclone mount settings to improve start times and reduce API calls
    2. I run the uninstall script at array start as well, in case of an unclean shutdown
    3. upload script now excludes the .unionfs/ folder ( @slimshizn I think this might be your problem)
    4. upload script alternates between the cache and one array drive at a time, to try to reduce pointless transfers to the array and to stop multiple array drives spinning up at the same time for the 4 transfers

     

     

    Thanks for the updates. Some questions:

    - I can't see what you've changed in the rclone mount. I've put your previous version next to the new one, but I can't spot a difference. Could you point me to what you changed?

    - Did you just make extra scripts in User Scripts which you schedule at array start/stop? Won't the uninstall and install scripts conflict when both run at array start?

    - For the upload script, why did you go with checkers 10 and transfers 4? (Rough sketch of the full command after these questions.) The default is 8 and 4, but I don't really understand what checkers do and what raising the number accomplishes. For transfers 4, I'm wondering why rclone makes that the default. It seems to me that when the script suddenly stops during an upload, you only lose the progress of one upload; if you transfer 4 at the same time, all 4 are wasted. The time in transit is also longer with 4, which seems undesirable to me. But maybe multiple transfers give benefits I can't see right now.

    - For the cleanup script, why do you look for .unionfs? I've noticed this folder is often missing, which makes the cleanup fail on the gdrive part (the local part still works).
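
    For context, this is roughly how I read the upload command with those flags (the remote and folder names are my guesses from the rest of the thread):

    rclone move /mnt/user/rclone_upload/google_vfs gdrive_media_vfs: --checkers 10 --transfers 4 --bwlimit 8M --exclude ".unionfs/**"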

  7. 9 hours ago, DZMM said:

    I'm trying a new mount which makes sense based on what I learnt last night in this thread:

     

    https://forum.rclone.org/t/my-vfs-sweetspot-updated-21-jul-2018/6132/77?u=binsonbuzz

     

    and here:

     

    https://github.com/ncw/rclone/pull/2410

     

     

    Apparently for vfs only the buffer is what's stored in memory. The chunk isn't stored in memory - the chunk settings control how much data rclone requests - it isn't 'downloaded' to the machine until Plex or rclone requests it.

     

    To stop the buffer requesting extra chunks at the start, you need to make sure the first chunk is bigger than the buffer - this keeps API hits down.

     

    Plex will use memory to cache data depending on whether you are direct playing (no), streaming (yes) or transcoding (yes) - if transcoding, it's controlled by the time-to-buffer setting in Plex. I've got 300 seconds here for my transcoder, but other users go higher, even 900, which seems excessive to me.

     

    So, I'm going with:

    
    rclone mount --allow-other --dir-cache-time 72h --vfs-read-chunk-size 128M --vfs-read-chunk-size-limit off --buffer-size 100M --log-level INFO gdrive_media_vfs: /mnt/user/mount_rclone/google_vfs --stats 1m

     

     

    Have you been able to test it already? I'm very pleased with these settings. Both movies and series start in Plex after around 3 seconds. Emby seems a bit slower at around 4 seconds. Still a big improvement on my previous start times. Curious what your start times are (without any caching beforehand).

  8. 21 minutes ago, DZMM said:

    I'm trying a new mount which makes sense based on what I learnt last night in this thread:

     

    https://forum.rclone.org/t/my-vfs-sweetspot-updated-21-jul-2018/6132/77?u=binsonbuzz

     

    and here:

     

    https://github.com/ncw/rclone/pull/2410

     

     

    Apparently for vfs only the buffer is what's stored in memory. The chunk isn't stored in memory - the chunk settings control how much data rclone requests - it isn't 'downloaded' to the machine until Plex or rclone requests it.

     

    To stop the buffer requesting extra chunks at the start, you need to make sure the first chunk is bigger than the buffer - this keeps API hits down.

     

    Plex will use memory to cache data depending on whether you are direct playing (no), streaming (yes) or transcoding (yes) - if transcoding, it's controlled by the time-to-buffer setting in Plex. I've got 300 seconds here for my transcoder - 900 seems excessive to me. I used to have 600.

     

    So, I'm going with:

    
    rclone mount --allow-other --dir-cache-time 72h --vfs-read-chunk-size 128M --vfs-read-chunk-size-limit off --buffer-size 100M --log-level INFO gdrive_media_vfs: /mnt/user/mount_rclone/google_vfs --stats 1m

     

     

    I also saw that info this morning (I'm also following the VFS sweetspot topic) and will be trying different mount options. The thing I run into now is that Plex takes very long to direct stream, or doesn't start at all. Emby does direct stream within a few seconds (both on the Nvidia Shield and on PC/laptop). I think Emby is known to work better for cloud streaming, but I want them both to work reliably and quickly enough.

  9. 6 minutes ago, DZMM said:

    I realised it was easier just to write the files directly to /mnt/mount_unionfs to have them in /unionfs!

     

    I switched my upload remote to my vfs remote - the old upload remote was a carryover from when I used an rclone cache remote, before I switched to a vfs remote.

     

    Sorry, I didn't see that in your edit. Thanks for clarifying.

  10. 3 hours ago, DZMM said:

     

    Easy to do if you want one folder view, e.g. for Plex. I don't think having the local media as RW (i.e. 2x RW folders) would work, as I'm not sure how unionfs would know which RW folder to add content to.

    
    unionfs -o cow,allow_other,direct_io,auto_cache,sync_read /mnt/user/upload_folder/=RW:/mnt/user/local_media/=RO:/mnt/user/google_media/=RO /mnt/user/unionfs_mount

     

    I've got a couple of TBs queued, so at the moment it works: the upload runs constantly, and over the course of a day it never uploads more than 750GB. It runs the rclone move commands sequentially, so it'll never go over the 750GB, as each job goes no faster than 8MB/s.

     

    On my low-priority to-do list is finding a way to do one rclone move that doesn't remove the top-level folders if they are empty.

     

    Awesome, using unionfs like this is so convenient! And good to know that the upload jobs run sequentially and not in parallel. I'll put more time into the different rclone jobs (like sync and move) when I have my fibre access. Thanks for the help again!
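
    (Quick sanity check on that 8MB/s figure for anyone reading along: 8 MB/s × 86,400 seconds ≈ 691 GB per day, so a constantly running sequential upload indeed stays under the 750GB/day cap.)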

     

  11. 11 hours ago, DZMM said:

     

    I've got the same directory structure/folder names in /mnt/user/rclone_upload/google_vfs as in /mnt/user/mount_rclone/google_vfs, so that I only need one unionfs mount to make my life easier, i.e. /mnt/user/mount_unionfs/google_vfs/tv_kids_gd (or the docker mapping /unionfs/tv_kids_gd) is a union of /mnt/user/mount_rclone/google_vfs/tv_kids_gd and /mnt/user/rclone_upload/google_vfs/tv_kids_gd.

     

     

     

    Thanks man, that was the trick indeed. Amazing that it works like that! Everything seems to be working fine now. Currently putting the system to the test with full downloads, library updates in both Emby and Plex, and playing from the local drive. Before, my memory usage would go to 70+%, but now it's at the normal 20%, so unionfs is not eating memory with all the indexing going on, which is good.

     

    The only thing I wonder is whether it would be possible to have a union of your gdrive (cloud movies), your local_upload movies and your local_not_to_be_uploaded movies. That way one folder would truly merge all the media you have.
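
    Something like this is what I have in mind, reusing the unionfs command you posted (the local_media path here is hypothetical):

    unionfs -o cow,allow_other,direct_io,auto_cache,sync_read /mnt/user/rclone_upload/google_vfs/=RW:/mnt/user/local_media/=RO:/mnt/user/mount_rclone/google_vfs/=RO /mnt/user/mount_unionfs/google_vfs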

     

    And another thing I was wondering about is your upload config. You restrict it to about 8MB/s to prevent the 750GB/day upload ban for gdrive. But you are putting this limit on multiple upload folders. Does it still limit your total upload speed to gdrive, or does it just limit the separate upload folders, thus still causing a ban when you upload >750GB? I can't test it myself since I won't be on fibre for a few months.

     

    Oh, and for other people reading this later: I fixed my Sonarr/Radarr access/permission errors by running "New Permissions" (not on the rclone mount and unionfs folders) and disabling "Set Permissions" in Sonarr and Radarr.

  12. 1 minute ago, DZMM said:

    Not quite following you....

     

    My unionfs mount is at level 2 in my user share, i.e. /mnt/user/mount_unionfs/google_vfs - within the google_vfs folder are all my merged google and local files.

     

    I've created the docker mapping /unionfs at the top level /mnt/user/mount_unionfs because I have other things in the /mnt/user/mount_unionfs user share - maybe my naming is confusing you.

     

    Hopefully I can express myself better this time.

     

    You have google_vfs, which is a union of your Google Drive (your cloud files) and your local upload folder (in which you have different upload folders like "tv_kids", "movies_uhd", etc.).

    Normally you would create a union between your Google Drive tv_kids_gd and tv_kids_upload. So when you download to the union folder tv_kids, Sonarr knows it has to place the files in tv_kids_upload. But since you are not creating a union at subfolder level, how does Sonarr know where to move the files while still seeing all the series I have (both online and offline)?

  13. Something I might be overthinking: you are now creating the unionfs at the top level. Before, I would put a unionfs on the gdrive_movies and movies_upload folders, which combined showed me both offline and online movies. Now there is no unionfs at that level anymore. Does this still work with Sonarr and Radarr? How do they know they need movies_upload for movies and series_upload for series?

  14. @DZMM: do you just create a folder named /mountcheck within your gdrive crypt folder? The script runs into an error at the vfs mounting, even though there is a folder named /mountcheck, which I can see from mount_rclone/gdrive.
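
    (If it's actually a file the script tests for rather than a folder - that's my guess - then creating it from the mounted side would just be:

    touch /mnt/user/mount_rclone/gdrive/mountcheck

    but I'd like to know what the script expects.)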

  15. 5 minutes ago, DZMM said:

    [screenshot: FireShot capture of the Update Container page]

     

     

    Thanks for the info again. Did you do the above in the individual dockers? If so, how would Sonarr look, for example? Did you point /tv to /unionfs/TVshows, say, and then also add /unionfs as RW Slave like the above?
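
    For anyone reading along later: as I understand it, that mapping would amount to something like this docker run flag (slave being the bind propagation mode; the container path /unionfs is DZMM's):

    -v /mnt/user/mount_unionfs:/unionfs:rw,slave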

  16. 10 minutes ago, DZMM said:

    I added /unionfs mapped to /mnt/user/mount_unionfs as RW slave in the docker mappings, and then within Sonarr etc. added the relevant folders, e.g. /unionfs/google_vfs/tv_kids_gd and /unionfs/google_vfs/tv_adults_gd, and in Radarr /unionfs/google_vfs/movies_kids_gd, /unionfs/local_media/movies_hd_kids, etc.

     

    Sorry, I don't understand what you mean by "I added /unionfs mapped to /mnt/user/mount_unionfs RW slave in the docker mappings". Do you mean in the individual docker settings, or is there some general docker mapping?

     

    And looking through your scripts I only see movies covered. Is that correct? So did you stop splitting your upload folders for movies/shows?

    And I also don't understand why you are using the bind mount. I understand your situation is different, with a UD used for torrenting, which I don't have/need. But I'm not sure how to translate it to my situation.

     

    And did you also change something in your SABnzbd mappings?

     

    Sorry for all the questions, I'm feeling quite incompetent at the moment.

  17. 39 minutes ago, DZMM said:

     

    Have you got any apps other than Plex looking at your mounts, e.g. Kodi? Or maybe one of your dockers is not configured correctly and is mapped directly to the vfs mount rather than the unionfs folder.

     

    No other apps. I'm going to start over again from the info you just provided. Can you tell me how you did the /tv and /movies mappings in Sonarr and Radarr? And did you add /unionfs as RW to these dockers just for permissions?

  18. 3 minutes ago, DZMM said:

     

    I think somehow you are writing directly to the vfs mount, and I think the error is showing the write failing, which means you lose the file, I think, or at least it can't be retried:

     

    https://rclone.org/commands/rclone_mount/#file-caching

     

    I'm afraid something is indeed going wrong. The mount logs still show transfers, but Sonarr can't import since it gets access denied. I did everything like you; the only difference is that I use /mnt/user/Media instead of /mnt/disks. Really frustrating.
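
    For completeness, this is how I'm checking the permissions Sonarr complains about (the path is mine; yours would be under /mnt/disks):

    ls -ld /mnt/user/Media   # check the owner/group and mode the container user needs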
