Cliff

Members
  • Content Count: 30
  • Joined
  • Last visited

Community Reputation: 0 Neutral

About Cliff
  • Rank: Advanced Member

  1. OK, so I can write files directly to that folder and they get transferred to Google Drive? Sorry for being slow, but in that case why is the rclone_upload folder needed? And hopefully one last question: when installing the Docker containers for Sonarr/Radarr I should provide folder mappings for "tv" and "downloads". From the first page I understand that the tv folder should be mapped, in my case, to "/mnt/user/mount_unionfs/google_vfs/Media/Tv". But do I need any "downloads" mapping? Something like the sketch below is what I have in mind.
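     A guess at the mappings, just to make the question concrete (the /downloads host path is purely my assumption):

         # hypothetical Sonarr mappings - the tv path is the unionfs path from
         # the first page, the downloads path is my own guess
         docker run -d --name=sonarr \
             -v /mnt/user/mount_unionfs/google_vfs/Media/Tv:/tv \
             -v /mnt/user/mount_unionfs/google_vfs/downloads:/downloads \
             linuxserver/sonarr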
  2. I tried following the Spaceinvader One seedbox guide on YouTube and using it with this script, but I have some problems that I hope someone has a solution to. After a torrent is finished and unpacked on the seedbox, I use Syncthing to transfer the file back to the Unraid server. I mapped the folder "rclone_upload/google_vfs" in the Syncthing container and added the path "/Media/Movies" inside the Syncthing container so that it matches my Google Drive paths. The first time everything works great and all files are uploaded to the correct Google Drive folder. But then the script removes the folders, and Syncthing breaks down because its folder markers are deleted. If I remove "--delete-empty-src-dirs" from the rclone_upload script, nothing gets deleted and the folders fill up. Does anyone have a solution to this problem?
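     One idea I've been wondering about, completely untested: keep --delete-empty-src-dirs but exclude Syncthing's ".stfolder" marker from the move, so the marker (and therefore its parent folder) survives:

         # untested idea: leave Syncthing's .stfolder marker behind so the
         # watched folder is never considered empty and never deleted
         rclone move /mnt/user/rclone_upload/google_vfs/ gdrive_media_vfs: \
             --exclude ".stfolder/**" --delete-empty-src-dirs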
  3. I need to install a Python plugin in Domoticz, but I need to install both "python3-httplib2" and "git" to be able to run it. Is there some way to do this with the Docker container? I tried opening the console but could not use apt-get inside the container to install the required packages.
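     If there's no supported way, my fallback idea would be to build a small image on top of the Domoticz one so the packages survive container updates. A sketch, assuming the base image is Debian/Ubuntu-based with apt available (if it's Alpine, apk would be needed instead):

         # Dockerfile sketch - assumes a Debian/Ubuntu base with apt
         FROM linuxserver/domoticz
         RUN apt-get update && \
             apt-get install -y --no-install-recommends git python3-httplib2 && \
             rm -rf /var/lib/apt/lists/*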
  4. OK, thanks for all the help. I am trying it out now. Just one more question that is slightly off topic. Earlier, when I was experimenting with a cache remote, I ran into problems where I ran out of RAM or HDD space while using Plex to scan and add all my media through the remote. After that, Unraid stopped responding and I had to reboot to get it working again. Are there any settings I need to change in the Plex Docker container when scanning my remotes, to prevent it from filling up? I only have 12GB of RAM in my current server; do I need to lower any settings in the mount script, or should it work anyway?
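     My fallback plan, if nobody knows better, is to shrink the RAM-hungry mount flags; these values are pure guesses for 12GB:

         # untested guess for 12GB of RAM: --buffer-size (was 256M) is held
         # per open file, and --drive-chunk-size (was 512M) per upload
         rclone mount --allow-other --buffer-size 64M --drive-chunk-size 64M \
             --dir-cache-time 72h --log-level INFO \
             --vfs-read-chunk-size 64M --vfs-read-chunk-size-limit 2G \
             gdrive_media_vfs: /mnt/user/mount_rclone/google_vfs &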
  5. OK, thanks for the answer. But if I mount gdrive directly, I guess that I will not be able to take advantage of the crypt/cache features that improve Plex streaming, as it says on the first page? "I use a rclone vfs mount as opposed to a rclone cache mount as this is optimised for streaming, has faster media start times, and limits API calls to google to avoid bans."
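     Or am I misreading that, and the vfs flags are mount options that apply to any remote, crypt or not? i.e. something like this against the plain remote (just my guess):

         # my guess: the vfs streaming flags belong to the mount, not to crypt,
         # so a direct mount of the unencrypted remote should stream the same way
         rclone mount --allow-other --dir-cache-time 72h \
             --vfs-read-chunk-size 128M --vfs-read-chunk-size-limit off \
             gdrive: /mnt/user/mount_rclone/google_vfs &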
  6. I just noticed, when running the upload script and looking at the log file, that it seems to loop through my entire media library on Google Drive with "skipping undecryptable filename". Does that have something to do with it? I already have a couple of TB of unencrypted media on my Google Drive that I want to show up in my mount folder.
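     If I understand the crypt docs right, a crypt remote can only decrypt files that were encrypted with its own passwords, so my existing plain files would all be skipped. For reference, the crypt section from the guide looks roughly like this (from memory, passwords redacted):

         # crypt remote per the guide's template (from memory); any file inside
         # gdrive:crypt that was not encrypted with these passwords is logged
         # as "skipping undecryptable filename" and hidden from the mount
         [gdrive_media_vfs]
         type = crypt
         remote = gdrive:crypt
         filename_encryption = standard
         directory_name_encryption = true
         password = <redacted>
         password2 = <redacted>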
  7. The mount script is exactly as on GitHub; the only thing I modified was commenting out the starting of the Docker containers.
  8. #!/bin/bash

     ####### Check if script is already running ##########
     if [[ -f "/mnt/user/appdata/other/rclone/rclone_mount_running" ]]; then
         echo "$(date "+%d.%m.%Y %T") INFO: Exiting script already running."
         exit
     else
         touch /mnt/user/appdata/other/rclone/rclone_mount_running
     fi
     ####### End Check if script already running ##########

     ####### Start rclone gdrive mount ##########
     # check if gdrive mount already created
     if [[ -f "/mnt/user/mount_rclone/google_vfs/mountcheck" ]]; then
         echo "$(date "+%d.%m.%Y %T") INFO: Check rclone vfs already mounted."
     else
         echo "$(date "+%d.%m.%Y %T") INFO: mounting rclone vfs."
         # create directories for rclone mount and unionfs mount
         mkdir -p /mnt/user/appdata/other/rclone
         mkdir -p /mnt/user/mount_rclone/google_vfs
         mkdir -p /mnt/user/mount_unionfs/google_vfs
         mkdir -p /mnt/user/rclone_upload/google_vfs
         rclone mount --allow-other --buffer-size 256M --dir-cache-time 72h \
             --drive-chunk-size 512M --fast-list --log-level INFO \
             --vfs-read-chunk-size 128M --vfs-read-chunk-size-limit off \
             gdrive_media_vfs: /mnt/user/mount_rclone/google_vfs &
         # check if mount successful
         # slight pause to give mount time to finalise
         sleep 5
         if [[ -f "/mnt/user/mount_rclone/google_vfs/mountcheck" ]]; then
             echo "$(date "+%d.%m.%Y %T") INFO: Check rclone gdrive vfs mount success."
         else
             echo "$(date "+%d.%m.%Y %T") CRITICAL: rclone gdrive vfs mount failed - please check for problems."
             rm /mnt/user/appdata/other/rclone/rclone_mount_running
             exit
         fi
     fi
     ####### End rclone gdrive mount ##########

     ####### Start unionfs mount ##########
     if [[ -f "/mnt/user/mount_unionfs/google_vfs/mountcheck" ]]; then
         echo "$(date "+%d.%m.%Y %T") INFO: Check successful, unionfs already mounted."
     else
         unionfs -o cow,allow_other,direct_io,auto_cache,sync_read \
             /mnt/user/rclone_upload/google_vfs=RW:/mnt/user/mount_rclone/google_vfs=RO \
             /mnt/user/mount_unionfs/google_vfs
         if [[ -f "/mnt/user/mount_unionfs/google_vfs/mountcheck" ]]; then
             echo "$(date "+%d.%m.%Y %T") INFO: Check successful, unionfs mounted."
         else
             echo "$(date "+%d.%m.%Y %T") CRITICAL: unionfs Remount failed."
             rm /mnt/user/appdata/other/rclone/rclone_mount_running
             exit
         fi
     fi
     ####### End Mount unionfs ##########

     ############### starting dockers that need unionfs mount ######################
     # only start dockers once
     if [[ -f "/mnt/user/appdata/other/rclone/dockers_started" ]]; then
         echo "$(date "+%d.%m.%Y %T") INFO: dockers already started"
     else
         touch /mnt/user/appdata/other/rclone/dockers_started
         echo "$(date "+%d.%m.%Y %T") INFO: Starting dockers."
         # docker start plex
         # docker start ombi
         # docker start tautulli
         # docker start radarr
         # docker start sonarr
     fi
     ############### end dockers that need unionfs mount ######################

     exit
  9. I am using all the scripts from GitHub unmodified. If I try the same thing using a cache remote and remove the vfs options in the mount script, it seems to work, but when using the crypt nothing gets mounted in any of the created folders. I tried placing a small .nfo file in the upload folder, and after running the upload script I have a small encrypted file in my media folder, and also a mountcheck file.
  10. I have tried like five times now, following the guide exactly. I even moved to a new server with new hardware, with the same result. The closest I have got is that I now get a "crypt file" with random letters in my media folder on Google Drive, which I believe is the mountcheck file. If I add another random file to /user/mount_rclone/google_vfs/ I get another encrypted file in my media folder. But nothing from Google Drive shows up on my server in any of the folders. And if I try a standard rclone mount command, I can mount my Google Drive to a folder without problems.
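     I guess my next step is to test the crypt remote outside the mount entirely, something like this from the Unraid console:

         # test the crypt remote directly: list what it can decrypt, then push
         # a file through it and confirm it comes back readable
         rclone lsd gdrive_media_vfs:
         rclone copy /tmp/test.nfo gdrive_media_vfs:test
         rclone ls gdrive_media_vfs:test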
  11. I tried changing to a cache remote instead and mounted it manually. Now I get a folder and can see all my Google Drive media. But when I add that folder to Radarr and try to do a bulk import of movies, the log gets flooded with: "Unraid emhttpd: error: get_filesystem_status, 6512: Operation not supported (95): getxattr: /mnt/user/media" Does anyone know why?
  12. OK, but all the folders are empty and nothing gets uploaded to or downloaded from the crypt folder on my Google Drive. And where am I supposed to see all my media files from Google Drive so that I can add them to Plex?
  13. OK, I changed my mind and tried using the crypt. But I don't understand how to get my Google Drive media to show up in any of the folders. I noticed that I get a new "crypt" folder on Google Drive, but if I add any files to it, nothing shows up in any of the Unraid folders.
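     Is the answer that files have to go in through the crypt remote rather than straight into the Drive folder? My guess:

         # my guess: files must be copied via the crypt remote so they get
         # encrypted with the right keys; plain files dropped into the crypt
         # folder on Drive can't be decrypted, so the mount never shows them
         rclone copy /mnt/user/some_test_file gdrive_media_vfs:Movies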
  14. What modifications do I need to make if I want to use this method, but without crypt and team drives? My guesses are sketched below; please correct me.
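     My unverified guesses at the concrete changes:

         # guessed changes for a crypt-free, single-drive setup:
         # 1) rclone config keeps only the plain remote:
         [gdrive]
         type = drive
         client_id = <>
         client_secret = <>
         scope = drive
         # 2) mount script: mount gdrive: instead of gdrive_media_vfs:
         # 3) upload script: move to gdrive: instead of gdrive_media_vfs:
         rclone move /mnt/user/rclone_upload/google_vfs/ gdrive: --delete-empty-src-dirs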
  15. I have mounted my Google Drive folder using a cache remote and can see all my media files in it. But when I use Radarr and bulk-import on that path, I get "Array-undefined" after a while and have to restart Unraid to get my disks back. I have also tried the same thing on another server, with the same result. Does anyone know why this happens? This is the mount command: And this is the rclone config:

      [gdrive]
      type = drive
      client_id = <>
      client_secret = <>
      scope = drive
      root_folder_id =
      service_account_file =
      token = {"access_token":"<>"}

      [gdrive_media]
      type = cache
      remote = gdrive:Media
      chunk_size = 16M
      info_age = 2d
      chunk_total_size = 20G
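     The mount command itself seems to have been lost in the paste above; for the record, a generic mount of that cache remote would look something like this (a sketch, not necessarily the exact flags used):

         # sketch only, not necessarily the exact command: a plain mount of
         # the cache remote defined above
         rclone mount --allow-other gdrive_media: /mnt/user/media &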