Posts posted by Thel1988

  1. 8 hours ago, Roudy said:

     

    I didn't have to do that for mine. Were you having the same issue until you updated the container? 

    Yes, the UMASK was set to 002 for that specific docker container I couldn't get working.

    But yes, I think so.

     

    7 hours ago, T0rqueWr3nch said:

    Yes, I actually think that this is the cause of the issue, and that it has never actually been working as intended.

  2. 1 hour ago, Roudy said:

     

    Yes, I added them in the "# create rclone mount" section. 

     

     

    Try adding --umask 000 as well. I had that in there from a long time ago and just realized it's not in the original script. So try to add the section below and see if it fixes it for you. I also had to reboot because unmounting and remounting didn't seem to solve the problem.

     

        --umask 000 \
        --dir-perms 0777 \
        --file-perms 0776 \

    I seem to have the same issue with 6.10 RC2; I tried with the above settings. Really weird.

     

    UPDATE: It seems the UMASK needed to be corrected on the docker containers as well.
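
    For reference, this is roughly what the container-side fix can look like, assuming a linuxserver.io-style image that honours a UMASK environment variable (on Unraid this is normally set as a variable in the container template; the container name and image below are just examples):

        # hypothetical example: give the container a permissive umask so files it
        # creates on the merged mount stay readable/writable by the other containers
        docker run -d \
            --name=sonarr \
            -e UMASK=000 \
            -v /mnt/user/mount_unionfs/google_vfs:/media \
            lscr.io/linuxserver/sonarr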

  3. On 7/31/2020 at 11:17 AM, DZMM said:

    Can anyone recommend a good, cheap dedicated server provider?  I'm probably going to be moving home again and it's looking like I'll only get 110 upload at best, which will make running my Plex server on my machine very hard unless I do a lot of transcoding. 

     

    I've looked at Hetzner's auctions, but most of their boxes are SSD only and I need a HDD to hold torrents that are seeding.

    You can look at Hetzner and their server auction; they have some pretty good servers for around 30-40€, depending on your location of course. And they have servers with HDDs.

  4. On 7/4/2020 at 8:39 PM, watchmeexplode5 said:

    @DZMM

    If anybody is interested in testing a modified rclone build with a new upload tool, feel free to grab the builds from my repository. You can run the builds side-by-side with stable rclone, so you don't have to take down rclone for testing purposes! It should go without saying, but only run this if you are comfortable with rclone / DZMM's scripts and how they function. If not, you should stick with DZMM's scripts and the official rclone build!

     

    Users of this modified build have reported uploads ~1.4x faster than stock rclone and downloads ~1.2-1.4x faster. I fully saturate my gigabit line on uploads with lclone, whereas on stock rclone I typically got around 75-80% saturation.

     

    I've also got some example scripts for pulling from git, mounting, and uploading. Config files are already set up, so you just have to edit them for your use case. The scripts aren't elegant, but they get the job done. If anybody likes it, I'll probably script it better to build from source as opposed to just pulling the pre-builds from my GitHub.

     

    https://github.com/watchmeexplode5/lclone-crop-aio

     

    Feel free to use all or none of the stuff there. You can run just the lclone build with DZMM's scripts if you want (make sure to edit the rclone config to include these new tags):

    
    drive_service_account_file_path = /folder/SAs (No trailing slash for service account file)
    service_account_file = /folder/SAs/any_sa.json

     

     

    
    All build credit goes to l3uddz, who is a heavy contributor to rclone and Cloudbox. You can follow his work on the Cloudbox Discord if you are interested.

     

    -----Lclone (also called rclone_gclone) is a modified rclone build which rotates to a new service account upon quota/API errors. This effectively removes not only the upload limit but also the download limit (even via the mount command, solving Plex/Sonarr deep-dive scan bans), and it adds a bunch of optimization features.

     

    -----Crop is a command-line tool for uploading which rotates service accounts once a limit has been hit, so it runs every service account to its limit before rotating. Not only that, but you can have all your upload settings in a single config file (handy for those using lots of team drives). You can also set up the config to sync after upload, so you can upload to one drive and server-side sync to all your other backup drives/servers with ease.

     

    For more info and options on crop/rclone_gclone config files, check out l3uddz's repositories:

    https://github.com/l3uddz?tab=repositories

    This is really nice. I'm currently playing around with this, and it will simplify my setup. Can you share in which order you run the custom scripts, i.e. with what priority and on which schedule?

     

    Also, it would be awesome to have a short readme on GitHub to help with the setup.
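
    For anyone else trying those tags, this is roughly what a remote stanza in rclone.conf could look like with them added (the remote name, scope, and team drive ID here are placeholders; drive_service_account_file_path only exists in the lclone/rclone_gclone build, while service_account_file is standard rclone):

        [gdrive_media_vfs]
        type = drive
        scope = drive
        team_drive = <your team drive ID>
        # standard rclone option: a single service account file
        service_account_file = /folder/SAs/any_sa.json
        # lclone-only option: folder of service accounts to rotate through (no trailing slash)
        drive_service_account_file_path = /folder/SAs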

  5. 12 minutes ago, Spladge said:

    You just set up mergerfs; you don't really have to do anything :) I use a bunch of different Sonarrs and Radarrs to manage my splits. Year-based is the easiest because you can allow for that in the series-level folder. And for movies too. If you need some help with moving stuff between team drives, I can help you there.

     

    The uploaders can be switched, I guess, but movies will likely never exceed the limit of a single drive.

     

    TV: I currently use four different team drives, five if you count anime.

    Interesting way of doing it by date, actually. If you could share a bit more detail on how you do it, that would be nice :)

    Isn't there a limitation on how much you can move between drives?

  6. 1 hour ago, DZMM said:

    @Thel1988 have you made the change above?

    Yep, I have cloned it and merged it with my settings, and it is like this.

    But it kind of makes sense from my side:

    This is the output of my find command (before the cut):

    /mnt/user/appdata/other/rclone/Upload_user1_vfs/counter_2

    Correct me if I'm wrong; I have been reading up on how the cut command works:

    cut -d"_" -f4: the -d"_" tells it which delimiter to use, and -f which field it should output, as in my example above.

    field 1: /mnt/user/appdata/other/rclone/Upload, field 2: user1, field 3: vfs/counter, and then field 4: 2

    So in my example it will be -f4.
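
    To see the split quickly, you can feed the example path straight into cut:

        # splitting on "_", field 4 of the example path is the counter value
        echo "/mnt/user/appdata/other/rclone/Upload_user1_vfs/counter_2" | cut -d"_" -f4
        # prints: 2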

  7. Thanks, this works great. I still needed to adjust the cut command to field 4, as it is my fourth field, but the simplification of the scripts is really cool:

     

    find /mnt/user/appdata/other/rclone/upload_user1_vfs/ -name 'counter_*' | cut -d"_" -f3
    # gives this output:
    # vfs/counter

    # changing the cut command to use field number 4 outputs it correctly:
    find /mnt/user/appdata/other/rclone/upload_user1_vfs/ -name 'counter_*' | cut -d"_" -f4
    # 2

     

  8. I just want to say thanks @DZMM for the new versions of the scripts combined with the multiple upload accounts; they are making my life so much easier :)

     

    Just a little tidbit: if you use the multiple upload feature and have more than one "_" in your remote name, like I did, change the cut command from field 3 to field 4:

    # from:
    find /mnt/user/appdata/other/rclone/name_media_vfs/ -name 'counter_*' | cut -d"_" -f3

    # to:
    find /mnt/user/appdata/other/rclone/name_media_vfs/ -name 'counter_*' | cut -d"_" -f4

    If you don't do the above, it will not rotate the accounts :)
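
    If you'd rather not count underscores at all, one possible alternative (just a sketch, not part of the stock script) is to strip the directory part first so only the counter filename gets split:

        # -printf '%f\n' (GNU find) prints only the filename, e.g. counter_2,
        # so field 2 is always the counter no matter how many "_" the remote name contains
        find /mnt/user/appdata/other/rclone/name_media_vfs/ -name 'counter_*' -printf '%f\n' | cut -d"_" -f2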

     

  9. 1 hour ago, Thel1988 said:

    Okay, good point.

     

    I have changed to this:

    Radarr:

    /media/downloads <-> /mnt/user/mount_unionfs/google_vfs/downloads/

    /media <-> /mnt/user/mount_unionfs/google_vfs/

     

    For SABnzbd:

    /media/downloads <-> /mnt/user/mount_unionfs/google_vfs/downloads/

     

    It is still awfully slow; do the cache settings on the local share have anything to do with this?

    Okay, I got it to work. I needed to delete the extra /media/downloads entry in the path mappings, and then it works :)

    Anyway, thanks for your help @DZMM, you do a fantastic job on these scripts :)

  10. 37 minutes ago, DZMM said:

    Using both /data and /media is your problem: your dockers think these are two separate disks, so you don't get the hardlinking and move-instead-of-copy benefits.

     

    Within your containers, point nzbget etc. to /media/downloads.

    Okay, good point.

     

    I have changed to this:

    Radarr:

    /media/downloads <-> /mnt/user/mount_unionfs/google_vfs/downloads/

    /media <-> /mnt/user/mount_unionfs/google_vfs/

     

    For SABnzbd:

    /media/downloads <-> /mnt/user/mount_unionfs/google_vfs/downloads/

     

    It is still awfully slow; do the cache settings on the local share have anything to do with this?
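
    In other words, the goal is a single shared mapping so downloads and the library sit on the same filesystem inside every container. A rough sketch of the idea (hypothetical docker run form; on Unraid this is the path mapping in the container template, and the container name/image are just examples):

        # map the whole merged mount once, then point the download paths at /media/downloads
        docker run -d \
            --name=radarr \
            -v /mnt/user/mount_unionfs/google_vfs:/media \
            lscr.io/linuxserver/radarr
        # inside the container: downloads land in /media/downloads and the library in /media,
        # so imports can be hardlinked/renamed instead of copied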

  11. I have migrated from unionfs to mergerfs, but it seems I have very slow move speeds from Sonarr or Radarr to my media folder (taking 3 minutes to move a 4 GB file), so I think it is somehow copying instead of moving.

    Also, if I do the same thing from within the Sonarr docker, it takes just as long to move a file.

     

    I'm using the following mergerfs command:

        mergerfs /mnt/user/local/google_vfs:/mnt/user/mount_rclone/google_vfs=RO:/mnt/user/media=RO /mnt/user/mount_unionfs/google_vfs \
            -o rw,async_read=false,use_ino,allow_other,func.getattr=newest,category.action=all,category.create=ff,cache.files=partial,dropcacheonclose=true

     

    It seems Sonarr is really taking a long time to move from /data to /media inside the Sonarr docker.

    Path mappings: 

    /data <-> /mnt/user/mount_unionfs/google_vfs/downloads/

    /media <-> /mnt/user/mount_unionfs/google_vfs/

     

    I can see that the SSD cache is hard at work when it is moving files.
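
    One way to check whether a move can be an instant rename/hardlink rather than a copy is to compare device IDs from inside the container (just a diagnostic sketch, not part of the scripts); if /data and /media report different device numbers, the container sees two filesystems and will copy:

        # run inside the Sonarr container: identical device numbers mean one filesystem,
        # so moves are cheap renames instead of full copies through the SSD cache
        stat -c '%d  %n' /data /media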
