Everything posted by DZMM

  1. I think it was quite fast - just slow with the small files. After the first run it does block-level backups, so it's fast thereafter
  2. 5MB/s locally, or 5MB/s upload to Google? The upload to Google is a normal rclone sync, so speed depends on the type of file, e.g. lots of photos are slower
  3. yep, docker. The free version doesn't come with compression, but it does block-level backups, so it's very efficient after the first backup.
  4. I just use the free version to back up to my local folder, which then gets encrypted when stored on Google. Until last week I had been using Crashplan for a few weeks, but it only managed to upload about 400GB in about 4 months....Duplicati I used for about 2 days and it started throwing up errors - that made me lose confidence in it as a backup solution. CB did my local backup run really quickly, and that's now synced to gdrive. I did a quick fag-packet calculation before I started switching to one teamdrive and I'm ok. I don't upload anything other than the media files - I'm not sure what value the nfo and srt files are to Plex etc, especially now that Plex's subtitles are getting better (99.99% of my content is in English anyway).
  5. lol that's insane. I might have to share servers with you to get some inspiration to beef up my library 🙂 @Spladge are you running multiple uploads to 1 or more teamdrives, or have you just been running one upload for a very long time? @Kaizac I'm moving a few TBs at a time and going very slowly so I don't get duplicate folders. My slow method is to move from the old TD/gdrive to a 'move' folder on the destination teamdrive, and then move the folders to the right locations within the mount using putty/windows
  6. yes it does. You only need this if you are adding files to the mount on a different server to the one Plex is running on, e.g. I believe @Spladge has rclone mounted on a remote server while Plex runs on his local server, where he's also mounted rclone. @Stupifier got it working - there's also plex_rcs
  7. I've updated my scripts on github to fix a few issues people have been having, notably removing the rc command as I think it causes more problems than it's worth https://github.com/BinsonBuzz/unraid_rclone_mount
  8. 1. Plex treats gdrive files the same as it does local files, so it will use the same rules as to whether to direct play, transcode etc
     2. as above - Plex does the work
     3. No - if you have decent bandwidth you will hardly see a difference, e.g. my cloud files launch in an average of about 5 seconds - some faster, some slower - which is a bit longer than it takes for a local disk to spin up
  9. no - something must have gone wrong with your first mount - there normally shouldn't be anything in the unionfs folder before you mount
  10. you've got something in mount_unionfs, so the mount failed. Check what it is, delete it, and then run this in a temporary script:

      ```
      fusermount -uz /mnt/user/mount_unionfs/google_vfs
      unionfs -o cow,allow_other,direct_io,auto_cache,sync_read /mnt/user/rclone_upload/google_vfs=RW:/mnt/user/mount_rclone/google_vfs=RO /mnt/user/mount_unionfs/google_vfs
      ```
  11. I don't quite understand some of the errors Google throws out - you can't be over your upload limit as your mount is empty. As @nuhll says, it will fix itself soon. There's nothing to worry about as you're using your own client_ID, not the shared rclone one. Your mount is working, which is the main thing, so start uploading via the rclone_upload folder, not by adding files directly to the mount folder
  12. yes. When unionfs 'deletes' a file from the RO directory, it doesn't actually delete the file - it just hides it in the mount. What the script does is actually delete the files from the RO directory and then tidy up the hidden unionfs directory. For the RW directory unionfs can delete files itself, so the script doesn't need to do anything there.
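      To picture what the cleanup script does, here's a throwaway sketch using /tmp stand-ins instead of the real /mnt/user/... locations (all paths and filenames here are invented for the demo):

      ```shell
      RO=/tmp/demo_ro                     # stands in for the mount_rclone (RO) branch
      HIDDEN=/tmp/demo_union/.unionfs     # where unionfs keeps its *_HIDDEN~ markers
      mkdir -p "$RO/tv" "$HIDDEN/tv"
      touch "$RO/tv/old_episode.mkv"                # the file 'deleted' via the union
      touch "$HIDDEN/tv/old_episode.mkv_HIDDEN~"    # marker unionfs leaves behind

      # For every marker: delete the real file in the RO branch, then the marker
      find "$HIDDEN" -name '*_HIDDEN~' | while read -r line; do
          oldPath=${line#"$HIDDEN"}
          rm -f "$RO${oldPath%_HIDDEN~}"   # remove the real file in the RO branch
          rm -f "$line"                    # remove the marker itself
      done
      # Tidy up the now-empty marker directories
      find "$HIDDEN" -mindepth 1 -type d -empty -delete
      ```

      After this runs, both the real file in the RO branch and the `_HIDDEN~` marker are gone, which is exactly the tidy-up described above.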
  13. That was for @nuhll (i) I think because you've got a mountcheck file in rclone_upload it's messing up the unionfs mount, as it's seeing two copies of the same file in two places. (ii) I'm not sure why the rclone_mount is empty though. Try this:
      1. manually delete the encrypted mountcheck file on Google Drive and empty the trash just to be certain it's gone
      2. delete the local file in rclone_upload
      3. recreate the mountcheck file using touch in terminal
      4. create a quick new temp script to unmount and remount

      Edit: remember to run the script in the background

      ```
      #!/bin/bash
      fusermount -uz /mnt/user/mount_unionfs/google_vfs
      fusermount -uz /mnt/user/mount_rclone/google_vfs
      rclone mount --allow-other --buffer-size 512M --dir-cache-time 72h --drive-chunk-size 512M --fast-list --log-level INFO --vfs-read-chunk-size 128M --vfs-read-chunk-size-limit off gdrive_media_vfs: /mnt/user/mount_rclone/google_vfs &
      sleep 5
      unionfs -o cow,allow_other,direct_io,auto_cache,sync_read /mnt/user/rclone_upload/google_vfs=RW:/mnt/user/mount_rclone/google_vfs=RO /mnt/user/mount_unionfs/google_vfs
      exit
      ```
  14. 1. there's nothing in your crypt except your mountcheck file.....
      2. you shouldn't be seeing the mountcheck file in rclone_upload/google_vfs though, as it's already in the cloud. What does your /mnt/user/mount_rclone/google_vfs look like?
  15. Hmm, everything looks correct. Apologies if these questions seem obvious:
      1. Are you definitely using the right password and password2?
      2. Are the files already in gdrive encrypted (gobbledegook when you look in Google)?
      3. Are the Google files in a folder called crypt, i.e. with the folder name not encrypted, so you can see My Drive/crypt when you look at Google?
  16. no - just substitute your existing remote name into the mount script in place of my gdrive_media_vfs remote. Or, renaming the remote in rclone config might be easier
  17. post your rclone config (remove passwords and IDs) and mount script
  18. Unionfs is useful if you want to add one monitored folder for sonarr & radarr that combines files on your local drive with files that have already been uploaded, e.g. if you had tv_show1 spread across /mnt/user/rclone_upload/google_vfs/tv_shows/tv_show1 and /mnt/user/mount_rclone/google_vfs/tv_shows/tv_show1. Sonarr can only monitor one folder at a time, so when files are moved to the cloud it would think they are missing and download them again, meaning you could end up with multiple copies in the cloud; sonarr also wouldn't be able to upgrade files. For radarr, the problems are similar. Even if you're not using radarr or sonarr and are just using Plex, it's cleaner to add the unionfs folder to Plex rather than both the upload and cloud folders, because Plex won't waste time re-scanning files when it spots they've moved to the cloud - to Plex they won't have moved, as the unionfs folder 'masks' the real location
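      To picture the merge, here's a throwaway sketch with /tmp stand-ins for the RW upload branch and the RO cloud mount (paths and filenames are invented for the demo; the real branches are the /mnt/user/... paths above):

      ```shell
      RW=/tmp/demo_rw/google_vfs   # stands in for /mnt/user/rclone_upload/google_vfs
      RO=/tmp/demo_ro/google_vfs   # stands in for /mnt/user/mount_rclone/google_vfs
      mkdir -p "$RW/tv_shows/tv_show1" "$RO/tv_shows/tv_show1"
      touch "$RW/tv_shows/tv_show1/S01E02.mkv"   # still local, waiting to upload
      touch "$RO/tv_shows/tv_show1/S01E01.mkv"   # already uploaded to the cloud

      # The unionfs mount presents both branches as one folder; the combined
      # listing sonarr/Plex would see is equivalent to:
      find "$RW" "$RO" -type f -printf '%P\n' | sort -u
      ```

      Both episodes show up under the same relative path, so sonarr sees one complete tv_show1 folder regardless of which side each file currently lives on.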
  19. @Kaizac how are you getting on with your td-->td transfers? I'm still nervous about doing them, e.g. I just moved two movie folders from td1/adults to td1/kids using putty and the folders moved ok, but the files inside disappeared! I then moved an individual file using putty and that went ok. I think I'm going to stick with consolidating all my files to gdrive once they've uploaded and doing my moves/edits there. The lesson here is make sure everything is organised correctly before uploading. Edit: Looks like it was a false alarm. Not sure how, but the file was also in td2/adults - the file in td1/adults must have been a phantom or something....I just did the same transfer from td2/adults to td2/kids and all was ok - phew!
  20. It won't work for you as you've added a 2nd RO directory. You need:

      ```
      find /mnt/user/mount_unionfs/google_vfs/.unionfs -name '*_HIDDEN~' | while read line; do
          oldPath=${line#/mnt/user/mount_unionfs/google_vfs/.unionfs}
          newPath=/mnt/user/mount_rclone/google_vfs${oldPath%_HIDDEN~}
          newPath1=/mnt/user/rclone_upload/google_vfs${oldPath%_HIDDEN~}
          rm "$newPath"
          rm "$newPath1"
          rm "$line"
      done
      find "/mnt/user/mount_unionfs/google_vfs/.unionfs" -mindepth 1 -type d -empty -delete
      ```

      Just add another move:

      ```
      rclone move /mnt/user/Archiv/Musik gdrive_media_vfs:Musik -vv --drive-chunk-size 128M --checkers 1 --fast-list --transfers 4 --exclude .unionfs/** --exclude *fuse_hidden* --exclude *_HIDDEN --exclude .recycle** --exclude *.backup~* --exclude *.partial~* --delete-empty-src-dirs --tpslimit 3 --min-age 8y && \
      rclone move /mnt/user/Archiv/Serien gdrive_media_vfs:Serien -vv --drive-chunk-size 128M --checkers 1 --fast-list --transfers 4 --exclude .unionfs/** --exclude *fuse_hidden* --exclude *_HIDDEN --exclude .recycle** --exclude *.backup~* --exclude *.partial~* --delete-empty-src-dirs --tpslimit 3 --min-age 8y
      ```

      I haven't uploaded music and I'm not sure if that's a good idea, as even an extra pause of a second between tracks would be annoying. Let me know what the experience is like as I'm curious.

      There's something odd about the file permissions - just delete them manually via terminal
  21. use the commands via ssh to create the mountcheck file and to transfer it - it's foolproof. If you're not seeing your other files then something's gone wrong with your encryption - are you sure the new vfs mount is using the same keys as your old encrypted mount?
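      For reference, the manual recreation can look like this (a sketch: gdrive_media_vfs: is the remote name used earlier in this thread, so substitute your own crypt remote; the rclone step is guarded so the sketch still runs on a box without rclone configured):

      ```shell
      # Recreate the check file locally
      touch /tmp/mountcheck

      # Push it to the crypt remote so it also exists (encrypted) on Google Drive.
      # Skipped/soft-failed here if rclone isn't on the PATH or the remote
      # isn't configured.
      if command -v rclone >/dev/null 2>&1; then
          rclone copy /tmp/mountcheck gdrive_media_vfs: -vv || echo "copy failed - check the remote name"
      fi
      ```

      Once the encrypted copy is on the remote, the mount script's check for mountcheck inside the mount will pass again.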