Everything posted by axeman

  1. So I find that the path rclone\remotes\gdrive_media_vfs\gdrive\ has some files in it. Probably the scrap stuff that rclone was working on encrypting before upload? I also see an empty file called "upload_running_daily_upload" - is that what's used to see if something is in progress? I was able to figure out the team drive thing... it was because I created the client id outside of the project that has the service accounts. I thought I read somewhere that it didn't matter. It does.
  2. So it can't seem to shut down properly. And even though I have the schedule completely turned off, even after a reboot, I can't seem to start the uploader script. It says the script is already running. I'm starting to feel like I might have bitten off more than I can chew with this setup. 🥵
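My guess is the script drops a checker file at startup and deletes it on a clean exit, so if it crashes, the file sticks around and every later run refuses to start. If that's right, something like this should clear it (file name and path assumed from my setup; adjust the remote name to match yours):

    # remove the stale checker file left behind by a crashed upload run
    rm /mnt/user/appdata/other/rclone/remotes/gdrive_media_vfs/upload_running_daily_upload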
  3. I still think I'm doing something wrong. When I try to list the directories, rclone gives me an error that the team drive doesn't exist. What's funny is that when I tried it earlier, I was uploading somewhere; I just don't know where those files went. Hope you are at least enjoying your time while awake.
  4. Awesome... hold that thought though. That's a future problem lol. I don't have my cameras fully writing to a network share yet. In a post-COVID world, I'm looking at obtaining an IP camera system. OT question - when the hell do you sleep? You seem to be online 24/7.
  5. Thanks - will think about this after getting the videos going. Now, for the exclude directive, can I put a bunch of --exclude flags in one command, like the sketch below? I have many folders to exclude (more than I can fit in the 8 Command slots).
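Something like this is what I have in mind; the folder names are placeholders from my setup, and the --exclude-from variant is from the rclone docs as I understand them:

    # several excludes in a single Command slot
    Command1="--exclude animation/** --exclude tv_series/** --exclude my_other_folder/** --exclude this_folder_too/**"

    # or keep the patterns in a file, one per line, and reference it once
    Command1="--exclude-from /mnt/user/appdata/other/rclone/excludes.txt"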
  6. Heh, okay - I was afraid of rebooting. Will try that. Can I get stupid(er) for a minute? This could be a killer setup for a camera DVR, if we can set the upload time separately per share. For a DVR setup you'd push to the cloud as quickly as possible, so that in case of a robbery the data has already been pushed offsite. Also wondering if it's OK to point the mount to a cache drive, so that when we're streaming a movie it doesn't write to an array share?
  7. Thank you for your continued patience with this... I agree - the approach was wrong. I'm going to restart this. What's the best way to stop the upload that's currently running? Should I run the cleanup script, or is there another way, like the sketch below?
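In case there's no cleaner way, here's the manual route as I understand it (a sketch; process names and paths assumed from my setup):

    # stop the in-flight transfer
    pkill -f "rclone move"   # or "rclone copy", depending on the upload mode in use

    # then clear the checker file so the next run isn't blocked
    rm /mnt/user/appdata/other/rclone/remotes/gdrive_media_vfs/upload_running_daily_upload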
  8. So my Unraid array looks like this:
\\server\Videos\3D_Movies
\\server\Videos\Animation
\\server\Videos\TV_Shows
For now, I just started with /videos/3D_Movies to see how this all works. Say a month from now I decide this is all great and I want to do my whole array. Can I just change the path in rclone_mount to /videos? Also, I ran the script through the Unraid GUI and see it reporting 14.874 MBytes/s... I'm guessing that's mbits? My upload speed is only 40 mbit max, which really translates to 5 MBytes/sec.
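Trying to sanity-check the units myself:

    40 Mbit/s / 8 = 5 MBytes/s           <- the most my line can actually sustain
    14.874 MBytes/s x 8 = ~119 Mbit/s    <- what the figure would mean if it really is MBytes

So if the line tops out at 5 MBytes/s, 14.874 can't be a sustained MBytes rate; my guess is it's either really mbits, or just an initial burst into local buffers before the line rate catches up.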
  9. Thanks - I was afraid of that... but figured, hey, it's in quotes so it should be fine (see the sketch below for what I think is going on with the spaces). But just so I understand, the individual movie folders underneath that are fine to have spaces, right? Also, assuming I do go full tilt a year from now and want to do the full "videos" share, will that end up creating duplicates of the 3D Movies folder?
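A minimal repro of what I think is happening, not the script's actual command:

    # unquoted variables word-split on the space in "3D movies"
    LocalFilesLocation="/mnt/user/Videos/3D movies/gdrive_media_vfs"
    mergerfs $LocalFilesLocation:$RcloneMountLocation $MergerFSMountLocation -o rw,allow_other
    # after splitting, mergerfs actually receives:
    #   branch list:  /mnt/user/Videos/3D
    #   mount point:  movies/gdrive_media_vfs:/mnt/user/mount_rclone/gdrive_media_vfs
    # which matches the "fuse: bad mount point" error further down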
  10. Thanks, that's what I thought too, but I double-checked and it looked OK. Note, I'm only trying one subfolder in my share; perhaps that's the issue? I have a Videos share, and then folders like 3D Movies, Animation, HD Movies, TV Series. I started out with the smallest of those. rclone_mount:

#!/bin/bash

######################
#### Mount Script ####
######################
### Version 0.96.6 ###
######################

####### EDIT ONLY THESE SETTINGS #######

# INSTRUCTIONS
# 1. Change the name of the rclone remote and shares to match your setup
# 2. NOTE: enter RcloneRemoteName WITHOUT ':'
# 3. Optional: include custom command and bind mount settings
# 4. Optional: include extra folders in mergerfs mount

# REQUIRED SETTINGS
RcloneRemoteName="gdrive_media_vfs" # Name of rclone remote mount WITHOUT ':'. NOTE: Choose your encrypted remote for sensitive data
RcloneMountShare="/mnt/user/mount_rclone" # where your rclone remote will be located without trailing slash e.g. /mnt/user/mount_rclone
MergerfsMountShare="/mnt/user/mount_mergerfs" # location without trailing slash e.g. /mnt/user/mount_mergerfs. Enter 'ignore' to disable
DockerStart="" # list of dockers, separated by space, to start once mergerfs mount verified. Remember to disable AUTOSTART for dockers added in docker settings page
LocalFilesShare="/mnt/user/Videos/3D movies" # location of the local files and MountFolders you want to upload without trailing slash to rclone e.g. /mnt/user/local. Enter 'ignore' to disable
MountFolders=\{"downloads/complete,downloads/intermediate,downloads/seeds,movies,tv"\} # comma separated list of folders to create within the mount

# Note: Again - remember to NOT use ':' in your remote name above

# OPTIONAL SETTINGS

# Add extra paths to mergerfs mount in addition to LocalFilesShare
LocalFilesShare2="ignore" # without trailing slash e.g. /mnt/user/other__remote_mount/or_other_local_folder. Enter 'ignore' to disable
LocalFilesShare3="ignore"
LocalFilesShare4="ignore"

# Add extra commands or filters
Command1="--rc"
Command2=""
Command3=""
Command4=""
Command5=""
Command6=""
Command7=""
Command8=""

CreateBindMount="N" # Y/N. Choose whether to bind traffic to a particular network adapter
RCloneMountIP="192.168.1.252" # My unraid IP is 172.30.12.2 so I create another similar IP address
NetworkAdapter="eth0" # choose your network adapter. eth0 recommended
VirtualIPNumber="2" # creates eth0:x e.g. eth0:1. I create a unique virtual IP addresses for each mount & upload so I can monitor and traffic shape for each of them

####### END SETTINGS #######

###############################################################################
##### DO NOT EDIT ANYTHING BELOW UNLESS YOU KNOW WHAT YOU ARE DOING #######
###############################################################################

####### Preparing mount location variables #######
LocalFilesLocation="$LocalFilesShare/$RcloneRemoteName" # Location for local files to be merged with rclone mount
RcloneMountLocation="$RcloneMountShare/$RcloneRemoteName" # Location for rclone mount
MergerFSMountLocation="$MergerfsMountShare/$RcloneRemoteName" # Rclone data folder location

####### create directories for rclone mount and mergerfs mounts #######
mkdir -p /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName # for script files
if [[ $LocalFilesShare == 'ignore' ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: Not creating local folders as requested."
else
    echo "$(date "+%d.%m.%Y %T") INFO: Creating local folders."
    eval mkdir -p $LocalFilesLocation/"$MountFolders"
fi
mkdir -p $RcloneMountLocation
mkdir -p $MergerFSMountLocation

####### Check if script is already running #######
echo "$(date "+%d.%m.%Y %T") INFO: *** Starting mount of remote ${RcloneRemoteName}"
echo "$(date "+%d.%m.%Y %T") INFO: Checking if this script is already running."
if [[ -f "/mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running" ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: Exiting script as already running."
    exit
else
    echo "$(date "+%d.%m.%Y %T") INFO: Script not running - proceeding."
    touch /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
fi

####### Create Rclone Mount #######

# Check If Rclone Mount Already Created
if [[ -f "$RcloneMountLocation/mountcheck" ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: Success ${RcloneRemoteName} remote is already mounted."
else
    echo "$(date "+%d.%m.%Y %T") INFO: Mount not running. Will now mount ${RcloneRemoteName} remote."
    # Creating mountcheck file in case it doesn't already exist
    echo "$(date "+%d.%m.%Y %T") INFO: Recreating mountcheck file for ${RcloneRemoteName} remote."
    touch mountcheck
    rclone copy mountcheck $RcloneRemoteName: -vv --no-traverse
    # Check bind option
    if [[ $CreateBindMount == 'Y' ]]; then
        echo "$(date "+%d.%m.%Y %T") INFO: *** Checking if IP address ${RCloneMountIP} already created for remote ${RcloneRemoteName}"
        ping -q -c2 $RCloneMountIP > /dev/null # -q quiet, -c number of pings to perform
        if [ $? -eq 0 ]; then # ping returns exit status 0 if successful
            echo "$(date "+%d.%m.%Y %T") INFO: *** IP address ${RCloneMountIP} already created for remote ${RcloneRemoteName}"
        else
            echo "$(date "+%d.%m.%Y %T") INFO: *** Creating IP address ${RCloneMountIP} for remote ${RcloneRemoteName}"
            ip addr add $RCloneMountIP/24 dev $NetworkAdapter label $NetworkAdapter:$VirtualIPNumber
        fi
        echo "$(date "+%d.%m.%Y %T") INFO: *** Created bind mount ${RCloneMountIP} for remote ${RcloneRemoteName}"
    else
        RCloneMountIP=""
        echo "$(date "+%d.%m.%Y %T") INFO: *** Creating mount for remote ${RcloneRemoteName}"
    fi
    # create rclone mount
    rclone mount \
        --allow-other \
        --buffer-size 256M \
        --dir-cache-time 720h \
        --drive-chunk-size 512M \
        --log-level INFO \
        --vfs-read-chunk-size 128M \
        --vfs-read-chunk-size-limit off \
        --vfs-cache-mode writes \
        --bind=$RCloneMountIP \
        $RcloneRemoteName: $RcloneMountLocation &

    # Check if Mount Successful
    echo "$(date "+%d.%m.%Y %T") INFO: sleeping for 5 seconds"
    # slight pause to give mount time to finalise
    sleep 5
    echo "$(date "+%d.%m.%Y %T") INFO: continuing..."
    if [[ -f "$RcloneMountLocation/mountcheck" ]]; then
        echo "$(date "+%d.%m.%Y %T") INFO: Successful mount of ${RcloneRemoteName} mount."
    else
        echo "$(date "+%d.%m.%Y %T") CRITICAL: ${RcloneRemoteName} mount failed - please check for problems."
        rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
        exit
    fi
fi

####### Start MergerFS Mount #######

if [[ $MergerfsMountShare == 'ignore' ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: Not creating mergerfs mount as requested."
else
    if [[ -f "$MergerFSMountLocation/mountcheck" ]]; then
        echo "$(date "+%d.%m.%Y %T") INFO: Check successful, ${RcloneRemoteName} mergerfs mount in place."
    else
        # check if mergerfs already installed
        if [[ -f "/bin/mergerfs" ]]; then
            echo "$(date "+%d.%m.%Y %T") INFO: Mergerfs already installed, proceeding to create mergerfs mount"
        else
            # Build mergerfs binary
            echo "$(date "+%d.%m.%Y %T") INFO: Mergerfs not installed - installing now."
            mkdir -p /mnt/user/appdata/other/rclone/mergerfs
            docker run -v /mnt/user/appdata/other/rclone/mergerfs:/build --rm trapexit/mergerfs-static-build
            mv /mnt/user/appdata/other/rclone/mergerfs/mergerfs /bin
            # check if mergerfs install successful
            echo "$(date "+%d.%m.%Y %T") INFO: *sleeping for 5 seconds"
            sleep 5
            if [[ -f "/bin/mergerfs" ]]; then
                echo "$(date "+%d.%m.%Y %T") INFO: Mergerfs installed successfully, proceeding to create mergerfs mount."
            else
                echo "$(date "+%d.%m.%Y %T") ERROR: Mergerfs not installed successfully. Please check for errors. Exiting."
                rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
                exit
            fi
        fi
        # Create mergerfs mount
        echo "$(date "+%d.%m.%Y %T") INFO: Creating ${RcloneRemoteName} mergerfs mount."
        # Extra Mergerfs folders
        if [[ $LocalFilesShare2 != 'ignore' ]]; then
            echo "$(date "+%d.%m.%Y %T") INFO: Adding ${LocalFilesShare2} to ${RcloneRemoteName} mergerfs mount."
            LocalFilesShare2=":$LocalFilesShare2"
        else
            LocalFilesShare2=""
        fi
        if [[ $LocalFilesShare3 != 'ignore' ]]; then
            echo "$(date "+%d.%m.%Y %T") INFO: Adding ${LocalFilesShare3} to ${RcloneRemoteName} mergerfs mount."
            LocalFilesShare3=":$LocalFilesShare3"
        else
            LocalFilesShare3=""
        fi
        if [[ $LocalFilesShare4 != 'ignore' ]]; then
            echo "$(date "+%d.%m.%Y %T") INFO: Adding ${LocalFilesShare4} to ${RcloneRemoteName} mergerfs mount."
            LocalFilesShare4=":$LocalFilesShare4"
        else
            LocalFilesShare4=""
        fi
        # mergerfs mount command
        mergerfs $LocalFilesLocation:$RcloneMountLocation$LocalFilesShare2$LocalFilesShare3$LocalFilesShare4 $MergerFSMountLocation -o rw,async_read=false,use_ino,allow_other,func.getattr=newest,category.action=all,category.create=ff,cache.files=partial,dropcacheonclose=true
        # check if mergerfs mount successful
        echo "$(date "+%d.%m.%Y %T") INFO: Checking if ${RcloneRemoteName} mergerfs mount created."
        if [[ -f "$MergerFSMountLocation/mountcheck" ]]; then
            echo "$(date "+%d.%m.%Y %T") INFO: Check successful, ${RcloneRemoteName} mergerfs mount created."
        else
            echo "$(date "+%d.%m.%Y %T") CRITICAL: ${RcloneRemoteName} mergerfs mount failed."
            rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
            exit
        fi
    fi
fi

####### Starting Dockers That Need Mergerfs Mount To Work Properly #######

# only start dockers once
if [[ -f "/mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/dockers_started" ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: dockers already started."
else
    touch /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/dockers_started
    echo "$(date "+%d.%m.%Y %T") INFO: Starting dockers."
    docker start $DockerStart
fi

rm /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName/mount_running
echo "$(date "+%d.%m.%Y %T") INFO: Script complete"
exit
  11. Thanks... moving right along... I had a failure because Docker wasn't enabled. Enabled that, and now I've got an error:

FUSE library version: 2.9.7-mergerfs_2.29.0
using FUSE kernel interface version 7.31
'build/mergerfs' -> '/build/mergerfs'
24.05.2020 23:39:55 INFO: *sleeping for 5 seconds
24.05.2020 23:40:00 INFO: Mergerfs installed successfully, proceeding to create mergerfs mount.
24.05.2020 23:40:00 INFO: Creating gdrive_media_vfs mergerfs mount.
fuse: bad mount point `movies/gdrive_media_vfs:/mnt/user/mount_rclone/gdrive_media_vfs': No such file or directory
24.05.2020 23:40:00 INFO: Checking if gdrive_media_vfs mergerfs mount created.
24.05.2020 23:40:00 CRITICAL: gdrive_media_vfs mergerfs mount failed.

This is from the rclone_mount script.
  12. Thanks! I'm still missing something. I went through everything twice. When I try to configure my remote as a team drive, I get an error: "No team drives found in your account". What could I be missing? When I log in to drive.google.com, I have a shared drive that I've created. It's shared with the group that was created during step 2 of the service account setup. I ran "python3 add_to_team_drive.py -d XXXXXX" and added the shared drive ID there.
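The check I'm planning to run next, assuming I'm reading the rclone docs right, to see which shared drives the remote's credentials can actually see:

    # list the shared (team) drives visible to whatever identity the remote authenticates as
    rclone backend drives gdrive: -vv
    # if the shared drive doesn't show up here, the credentials in use (e.g. a
    # client_id created outside the service accounts' project) simply can't see it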
  13. Thanks. I keep starting this and stopping it. I'm going to go back and re-read everything to make sure I'm not missing anything. Meanwhile... I see a "backup" option; what's the difference between that and Copy/Sync mode? My goal is to end up with a mirrored copy of some of my Unraid shares (not all of them) on GDrive, and then use that first-found option you'd mentioned to primarily serve files from the cloud drive.
  14. I'm still working on getting this set up... just realized my service_accounts folder doesn't have a sa_gdrive.json file... do I just put the path in rclone (rough sketch of the config I think I need below)? Also, I'm confused about team_drive... I thought the service accounts meant we don't need a team drive? Sorry for the newb questions.
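In case the question isn't clear, this is roughly the remote stanza I think I need in rclone.conf; the file path and drive ID are placeholders for my setup:

    [gdrive]
    type = drive
    scope = drive
    service_account_file = /mnt/user/appdata/other/rclone/service_accounts/sa_gdrive.json
    team_drive = XXXXXX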
  15. Okay - so here's my first actual implementation question. I don't have the appdata folder... is this because I don't have Docker enabled? I created my service accounts outside of Unraid. I'm pretty sure the rclone installation went OK, because if I type rclone at the command prompt, I get the usage screen. I installed CA User Scripts and rclone via CA Apps. Should I just create the path above? Follow-up: when I created the service accounts, did that already create a unique client_id, or do I need to create another one?
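Partly answering my own question: since the mount script mkdir -p's this path anyway, I'm guessing creating it by hand is harmless (path taken from the script, with my remote name):

    mkdir -p /mnt/user/appdata/other/rclone/remotes/gdrive_media_vfs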
  16. Thanks... I ended up doing it in a VM. I have the accounts created... honestly, I feel like a script kiddie on this one. I normally understand what I'm doing, but with this, I'm just copy/pasta. Slowly but surely, getting there.
  17. Hi - when creating the service accounts from the optional section... we can do that on a machine other than Unraid, right? I'm thinking of spinning up a VM just to get the accounts created.
  18. Thanks, I ended up going back to ESXi.
  19. Yeah - I agree we are certainly nothing in the grand scheme of things. Just worried about not having a fallback plan, just like how Wink is now extorting its users. I am going to get started, at least at first, with copy mode. I will have actual, real, script-related questions when I start down the path. Once again, thank you for your time and willingness to share.
  20. Okay - I think I'm overcomplicating it. I will probably just keep it in copy mode, use the Unraid shares, and then decide what to do from there. Worst case, if a drive fails locally, I can decide to either manually restore it, or just let the cloud hold that drive's data and shrink the array. Perhaps, like you say, eventually I'll be all cloud and only have a small array for irreplaceable data. My fear is that Google, being the only bastion of hope, puts a limit or restriction of some sort in place (like Amazon did) and I end up having to build an array back up before the data gets deleted.
  21. Yeah, I went through the readme... and even glanced through the code, and I'm still not sure how it all works. Want to get a good understanding before jumping in. Thanks! It would be crazy if I can save some wear/tear on my whole array. So in my use case, I want this to be sort of a second (exact) copy of my array. This way, when I have a disk failure, I can just redownload all the data for that disk. I know that's really not the goal of this project, but I think having a backup of my array in the "cloud" AND being able to stream off it is some crazy cool stuff. Yes... sort of; at least in my use, I'd still maintain my local Unraid array. Either way, if/when I get this working, I'll be sending something your way. Great - you should post that on GIT... probably won't make you a millionaire, but hey, a little bit is better than nothing.
  22. First, big thanks to @DZMM for posting this here. You could've easily just kept it to yourself. Sharing is caring! I've been reading bits and pieces of this topic, and admittedly have not gone through all 68 pages, so please forgive me if these have been covered. My media tool will be Emby. My primary purpose for this would be to use it as sort of a backup for the physical media in the house. Ideally though, when requesting a show or movie, priority would be given to the uploaded data instead of my local drive spinning up. So it's almost like my Unraid array would be the backup for the cloud data. I saw one of the replies mention setting the option to copy instead of move. I'm guessing that answers part of this requirement, but is there a way to make the uploaded data preferential (see the sketch after this post)? Is it just a matter of NOT using mergerfs and just using the rclone vfs mount? The trickier part is, I'd like to upload the data as disk shares. However, my Emby server is set to find media via the user shares (Videos, TV Series, etc.). Is this too complicated a scenario? The reason is, if I have a local disk failure, I'd want to re-download the data that's been pushed up to GDrive, and it's a lot easier to just grab everything from Disk 3 than to try to rebuild from the directory structure. I juggle my data around a lot between disks. Will that end up with multiple versions on GDrive, or will it just be recognized as a move? Finally, @DZMM where the hell is the donate button? People that get this thing going should send some kind of fiat or coin your way. This has tremendous implications. Edit: the readme mentions use of dockers... if my Emby server is on a VM that's on the same network, does that make a difference (meaning, can I access the mounts that this script uses)?
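On making the uploaded data preferential: my possibly wrong reading of the mergerfs docs is that with the first-found (ff) policies, branch order decides which copy wins, so listing the rclone mount before the local share should make opens hit the cloud copy first when a file exists in both. A sketch, not the script's actual command:

    # cloud branch listed first: with category.search=ff, a file present in both
    # branches is opened from the rclone mount instead of spinning up local disks
    mergerfs /mnt/user/mount_rclone/gdrive_media_vfs:/mnt/user/Videos \
        /mnt/user/mount_mergerfs/gdrive_media_vfs \
        -o rw,use_ino,allow_other,category.search=ff,category.create=ff

Whether that ordering plays nicely with the upload side of the scripts is exactly what I'm asking.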
  23. I disabled IOMMU in the CPU settings and that seems to allow the machine to boot natively off the USB, with an LSI card passed through, no issues. Going to be testing some more in terms of performance, but boot is MUCH snappier than the old (BIOS-based) VM booting.
  24. I was able to do this about a week ago... My CPU was unsupported, so I had to add the allowLegacyCPU option to the boot config file, AND each VM's advanced configuration parameters need a monitor.allowLegacyCPU = TRUE entry (exact settings below). That leads to SUPER slow boot times on older VMs, and even on VMs upgraded to 7.0; newly created VMs are fine, however. I'm trying to do USB boot and that works, but for some reason passthrough of my SAS card does NOT work. Unraid seems to find the devices... but it crashes right before the login prompt. I can't seem to get any info before the crash happens. Anyone else having trouble with passthrough on ESXi 7.0 and Unraid? The author of the reddit thread linked above could at least see the card as passthrough-capable, which I don't think applies to my situation.
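For anyone searching later, the exact settings I mean, as I applied them (from my notes, so double-check the syntax):

    # appended to the ESXi boot options (boot.cfg kernelopt line):
    allowLegacyCPU=true

    # added per-VM under VM Options > Advanced > Configuration Parameters:
    monitor.allowLegacyCPU = "TRUE"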
  25. Thanks for getting back to me. I ended up sticking with Unraid as a VM inside ESXi for the time being.