
DZMM

Members
  • Content Count: 2437
  • Joined
  • Last visited
  • Days Won: 8

DZMM last won the day on June 13 2019

DZMM had the most liked content!

Community Reputation: 209 Very Good

4 Followers

About DZMM

  • Rank: Advanced Member
  • Birthday: December 30

Converted

  • Gender: Male
  • Location: London

  1. Back then I was in a Gigaclear area and I was loving my 1000/1000 service. These days I only get 360/180, which is adequate. If you're really lucky, there are some providers in the UK offering 10000/10000 - one day we'll all get speeds like that!
  2. If you're using SAs (service accounts) you don't need your own API client IDs. If you're not, then unique client IDs are recommended.
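For reference, the two setups look something like this in rclone.conf - a sketch with placeholder IDs, paths and team drive IDs, not a tested config:

```ini
# Variant A: drive remote authenticated with a service account file
[gdrive_sa]
type = drive
scope = drive
service_account_file = /path/to/sa.json
team_drive = TEAM_DRIVE_ID

# Variant B: drive remote using your own client ID/secret
[gdrive_own]
type = drive
client_id = your-id.apps.googleusercontent.com
client_secret = your-secret
scope = drive
team_drive = TEAM_DRIVE_ID
```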
  3. Logs from the mount script and your script options, please.
  4. Sorry, I don't really understand Python - I think I fluked completing this step without really knowing what I was doing! Hopefully someone else can help. Or, have you tried asking the autorclone author?
  5. More control over how files are uploaded and when
  6. Multiple mounts, one upload and one tidy-up script. @watchmeexplode5 did some testing and performance gets worse as you get closer to the 400k mark, so you'll need to do something like below soon:

     1. My folder structure looks something like this:

        mount_mergerfs/tdrive_vfs/movies
        mount_mergerfs/tdrive_vfs/music
        mount_mergerfs/tdrive_vfs/uhd
        mount_mergerfs/tdrive_vfs/tv_adults
        mount_mergerfs/tdrive_vfs/tv_kids

     2. I created separate tdrives / rclone mounts for some of the bigger folders, e.g.:

        mount_rclone/tdrive_vfs/movies
        mount_rclone/tdrive_vfs/music
        mount_rclone/tdrive_vfs/uhd
        mount_rclone/tdrive_vfs/adults_tv

        For each of those I created a mount script instance where I do NOT create a mergerfs mount.

     3. I mount each in turn, and for the final main mount I add the extra tdrive rclone mounts as extra mergerfs folders:

        ###############################################################
        ######################  mount tdrive  #########################
        ###############################################################

        # REQUIRED SETTINGS
        RcloneRemoteName="tdrive_vfs"
        RcloneMountShare="/mnt/user/mount_rclone"
        LocalFilesShare="/mnt/user/local"
        MergerfsMountShare="/mnt/user/mount_mergerfs"

        # OPTIONAL SETTINGS
        # Add extra paths to mergerfs mount in addition to LocalFilesShare
        LocalFilesShare2="/mnt/user/mount_rclone/music"
        LocalFilesShare3="/mnt/user/mount_rclone/uhd"
        LocalFilesShare4="/mnt/user/mount_rclone/adults_tv"

     4. Run the single upload script - everything initially gets moved from /mnt/user/local/tdrive_vfs to the tdrive_vfs teamdrive.

     5. Overnight I run another script to move files from the folders in tdrive_vfs to the correct teamdrive. You have to work out the encrypted folder names for this to work. Because rclone is moving the files, the mergerfs mount gets updated, i.e. it looks to plex etc. like they haven't moved:

        #!/bin/bash

        rclone move tdrive:crypt/music_tdrive_encrypted_folder_name gdrive:crypt/music_tdrive_encrypted_folder_name \
          --user-agent="transfer" \
          -vv \
          --buffer-size 512M \
          --drive-chunk-size 512M \
          --tpslimit 8 \
          --checkers 8 \
          --transfers 4 \
          --order-by modtime,ascending \
          --exclude *fuse_hidden* \
          --exclude *_HIDDEN \
          --exclude .recycle** \
          --exclude .Recycle.Bin/** \
          --exclude *.backup~* \
          --exclude *.partial~* \
          --drive-stop-on-upload-limit \
          --delete-empty-src-dirs

        rclone move tdrive:crypt/tv_tdrive_encrypted_folder_name tdrive_t_adults:crypt/tv_tdrive_encrypted_folder_name \
          --user-agent="transfer" \
          -vv \
          --buffer-size 512M \
          --drive-chunk-size 512M \
          --tpslimit 8 \
          --checkers 8 \
          --transfers 4 \
          --order-by modtime,ascending \
          --exclude *fuse_hidden* \
          --exclude *_HIDDEN \
          --exclude .recycle** \
          --exclude .Recycle.Bin/** \
          --exclude *.backup~* \
          --exclude *.partial~* \
          --drive-stop-on-upload-limit \
          --delete-empty-src-dirs

        rclone move tdrive:crypt/uhd_tdrive_encrypted_folder_name tdrive_uhd:crypt/uhd_tdrive_encrypted_folder_name \
          --user-agent="transfer" \
          -vv \
          --buffer-size 512M \
          --drive-chunk-size 512M \
          --tpslimit 8 \
          --checkers 8 \
          --transfers 4 \
          --order-by modtime,ascending \
          --exclude *fuse_hidden* \
          --exclude *_HIDDEN \
          --exclude .recycle** \
          --exclude .Recycle.Bin/** \
          --exclude *.backup~* \
          --exclude *.partial~* \
          --drive-stop-on-upload-limit \
          --delete-empty-src-dirs

        exit
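For working out the encrypted folder names in step 5, rclone ships a cryptdecode command. A sketch, assuming tdrive: is your crypt remote; the folder names below are placeholders, not real values:

```shell
# Decrypt an encrypted name seen on the underlying drive
rclone cryptdecode tdrive: 1f2n49s8cvmyah3lk9wur4g0d4

# Or go the other way: encrypt a plain name with --reverse
rclone cryptdecode --reverse tdrive: music
```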
  7. MountFolders={"downloads/complete,downloads/intermediate,downloads/seeds,movies,tv"} # comma separated list of folders to create within the mount
     I guess it fails if you only add one folder - just create the folder manually in your mergerfs mount.
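I'd guess (I haven't checked the script internals) that the single-folder failure comes down to bash brace expansion: braces without a comma are left literal, so a one-entry list never expands. A quick demo with throwaway paths:

```shell
# Brace expansion only happens when the braces contain a comma (or a range):
echo safe_{a,b}    # -> safe_a safe_b
echo safe_{a}      # -> safe_{a}   (no comma: braces stay literal)

# So a comma-separated list like the MountFolders default expands fine:
mkdir -p /tmp/demo_mount/{downloads/complete,downloads/seeds,movies,tv}

# ...but a single entry creates a literal "{movies}" directory instead:
mkdir -p /tmp/demo_single/{movies}
```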
  8. Probably not up to date. There are comments in the new scripts which make moving easy - just:
     1. make sure you haven't got any rclone activity going on - stop the old scripts, uploads and any dockers using the mount
     2. set up your new paths - put /mount_unionfs etc. as your mergerfs mount paths if that's what you have set up now. Be careful to put your existing paths in
     3. choose the other script options
     4. run the scripts and, if all is ok, launch your dockers
  9. What folders are in /mnt/disks/Plex? I've got mergerfs mounts that include UD and they work fine.
  10. Set /mnt/disks/Plex as your local location in the mount and upload scripts
  11. The new mergerfs based scripts do this and much more...
  12. My current 360/180 ISP sent me an email saying my upload was going at 100% for a few months and they wondered if I'd been hacked. I thanked them for their concern and said I was ok and knew what the traffic was. I use bwlimits, so my upload now runs at about 80Mbps averaged over the course of a day, as my big upload days are over. My previous 1000/1000 ISP didn't say anything despite my upload running at about 60-70% for over a year. I keep a copy of the most recent VM and appdata backups locally. If there was an accidental deletion I probably would just write off all the content, as 1) I can't build a 0.5PB array and 2) it'd probably be easier to replace the content I want than spend weeks/months downloading it from gdrive. I did look into backing up my tdrives on another user's setup (he currently backs his up to mine), but I stopped as the actual process of downloading off his tdrive would face the same problems.
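The bandwidth cap mentioned above can be done with rclone's --bwlimit flag, which takes either a single rate or a daily timetable. A sketch with illustrative values and paths, not the ones from my setup:

```shell
# Flat cap: ~10 MByte/s (roughly 80 Mbit/s) for the transfer
rclone move /mnt/user/local/tdrive_vfs tdrive_vfs: --bwlimit 10M

# Or a schedule: throttle during the day, open up overnight
rclone move /mnt/user/local/tdrive_vfs tdrive_vfs: \
  --bwlimit "08:00,5M 23:00,30M"
```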
  13. It's the logical next step. I've ditched my parity drive (I backup to gdrive using duplicati) and sold all but 2 of my HDDs, which store seeds, pending uploads and my work/personal documents. I don't really use the mass storage functionality anymore other than pooling the 2 HDDs - it's kinda impossible, and would be mega expensive, to store 0.5PB+ of content. My unRAID server's main purpose is to power VMs (3xW10 VMs for me and the kids + a pfsense VM) and Dockers (plex server with remote storage, Home Assistant, unifi, minecraft server, nextcloud, radarr etc).