
Kaizac

Members · Posts: 470 · Days Won: 2

Everything posted by Kaizac

  1. Did you create the file "mountcheck" on your Gdrive mount?
  2. Are you accessing your Emby server through your browser? Are you using a reverse proxy by any chance?
  3. Currently setting this up with 5 APIs in total. While doing this I wonder if we can use this method to separate streaming and Docker activities with separate APIs. Currently programs like Bazarr cause a lot of API hits, often resulting in bans, which cause playback to fail. Maybe if we use one mount for isolated streaming and a separate API for Docker activities like Bazarr, it won't be a problem anymore? Not sure how to set this up yet, but with a Team Drive this should work with separate accounts, I think.
  4. Do you manually populate your rclone_upload_tdrive folders to have those files moved to the cloud, or did you automate this somehow? And how do you use the multiple accounts for the tdrive? Do you create multiple rclone mounts, one for each user?
  5. Maybe the passwords didn't register correctly during rclone config. I've had that before: copy-pasting my client ID and secret through PuTTY doesn't always register correctly, making mounts work, but not properly. Glad you got this fixed. I'm currently filling up my download backlog again, so this will be a nice way to upload it quickly.
  6. I don't think the name of the tdrive itself matters, since you start your crypt a level below it (at least that's how I set it up). The file structure below that does matter: based on the password + salt, the directories get their own unique encrypted names, and this continues on the levels below. I'm curious what your findings will be!
  7. And you created the exact same file structure before you started populating?
  8. You say same passwords, but did you also define the salt yourself during mount setup? If you did, then it is very strange. Encryption should only be based on the password + salt. I will do some testing myself tomorrow.
  9. Just to be sure; you use your own password and own defined salt? Seems weird that it has to do with the token, as changing API on the mount doesn't make encrypted files unreadable...
  10. @DZMM nice find. I started out using the Team Drive, but I found that software like Duplicati doesn't work well with Team Drives, so I moved back to the personal Gdrive. What I found is that you can simply move the files from the Team Drive to the personal Gdrive through the Gdrive web GUI. It takes no time and doesn't impact your quota as far as I noticed. If you keep the same file structure as on the personal Gdrive, and you make sure rclone uses the same password and salt, rclone will understand it and you can browse the files again.
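That matches how rclone crypt works: as long as the crypt remote points at the same path and uses the same password and salt, the files decrypt regardless of which drive or API token sits underneath. A rough sketch of what such a pair of remotes looks like in rclone.conf (the remote names and the placeholder values are examples, not my actual config):

```ini
[gdrive]
type = drive
# client_id / client_secret / token: your own API credentials go here

[gdrive_crypt]
type = crypt
remote = gdrive:crypt
filename_encryption = standard
password = <obscured password>
password2 = <obscured salt>
```

If you recreate gdrive to point at a different drive but keep gdrive_crypt's password and password2 identical (and the folder structure the same), the encrypted names still resolve.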
  11. Is it possible to install things like DNScrypt and Unbound with the docker version or do we need the "normal" installations for that (thus VM or raspberry pi)?
  12. I don't know what you mean by SMB. You can access it by browsing your network, going to your Unraid server and then to its shares, if that's what you mean. Otherwise you can create an SMB drive for it through the settings on the share it's on.

The file stays in unionfs/Movies/Moviename/moviename.mkv if you look at the union folder. If you look in the separate folders, you will see it move from Local to your Upload folder. Unionfs makes sure you don't see a difference in where the file is located, as long as it's part of the union.

You don't need to add multiple directories. For Radarr you add your unionfs/Archiv/Movies folder and for Sonarr unionfs/Archiv/Series. With the unionfs command I provided you with, the system will know to write to the Local drive when moving files; the other branches are read-only (RO).

Yes, in your case I would make your Archiv folder your unionfs folder so you don't need to change paths, I think (I don't know your file structure fully). But if you want to start from DZMM's tutorial, using mount_unionfs is easier, and you will need to change your Docker templates to the folders within mount_unionfs.

Sure, just send me a PM when you get stuck and need assistance and I'll give you my Discord username (I don't want to post it publicly here).
  13. Sonarr and Radarr will understand if you point them to your mount_unionfs folders (the virtual folder, so to speak). The files can move between the folders the union is made of, but it will look as if they didn't move. So you will have to rescan Sonarr and Radarr once on your unionfs folders and then you should be good. Make sure you also add the unionfs folders as RW/Slave to your Docker templates, as DZMM mentioned.

About your file structure, this is what I would do. You need the following folders:
- The mount_unionfs folder, where your local and cloud folders are combined
- Your rclone_upload folder, where files are queued for uploading
- Your mount_rclone folder, where your cloud files are accessible after mounting
- Your local folder, where files which should not be uploaded are stored

In your case I think you should make your Archiv folder your unionfs folder. You're probably used to accessing this folder for your files, so you can keep doing that. Make sure you change the scripts accordingly if you choose to go this route. You should also move all files out of your Archiv folder into your local folder (making it its own share might be useful) so that Archiv is empty.

Then in the other three folders (rclone_upload, mount_rclone and local) you have to make sure they have the same file structure below the top level. This ensures that unionfs picks up the different folders and combines them as one. It also allows you to create the union on the top level instead of having to do it on each sublevel (Movies/Series/etc.).

For the unionfs command I would use the following:

unionfs -o cow,allow_other,direct_io,auto_cache,sync_read /mnt/user/LocalStorage=RW:/mnt/user/mount_rclone/google_vfs=RO:/mnt/user/rclone_upload/google_vfs=RO /mnt/user/Archiv

Be aware that you need to change LocalStorage to the share/folder you are using for your local, not-to-be-uploaded files. Sonarr and Radarr will also move downloaded files to this folder.

If you are unsure about this (although you shouldn't destroy anything doing this), you can try it on different folders first. Make sure you disable your Sonarr and Radarr folders before running the above command. After the union has succeeded and is to your liking, put them back on. With the setup I gave you they may not even need a rescan and will just see the same files as before, since you kept your folder structure intact.
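The "same structure below the top level in all three folders" requirement can be set up in one go. A small sketch, using /tmp paths and example category folders as stand-ins for your real shares:

```shell
#!/bin/bash
# Create the same top-level structure in the local, cloud-mount and
# upload folders so unionfs can merge them at the top level.
# Paths and category names below are examples only - substitute your
# own shares (e.g. /mnt/user/LocalStorage, /mnt/user/mount_rclone/...).
for base in /tmp/LocalStorage /tmp/mount_rclone/google_vfs /tmp/rclone_upload/google_vfs; do
    mkdir -p "$base/Movies" "$base/Series"
done

# Each branch now has identical Movies/ and Series/ subfolders.
ls /tmp/LocalStorage
```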
  14. What is your desired file structure/topology? Do you want to upload everything to the cloud eventually, or is there part of your library you want to keep local?
  15. So it doesn't come up as an update in my plugins section? No trigger that there is a new version?

And I've just tried uninstalling and installing the beta again and I get this error:

plugin: installing: https://raw.githubusercontent.com/Waseh/rclone-unraid/beta/plugin/rclone.plg
plugin: downloading https://raw.githubusercontent.com/Waseh/rclone-unraid/beta/plugin/rclone.plg
plugin: downloading: https://raw.githubusercontent.com/Waseh/rclone-unraid/beta/plugin/rclone.plg ... done
+==============================================================================
| Installing new package /boot/config/plugins/rclone-beta/install/rclone-2018.08.25-bundle.txz
+==============================================================================
Verifying package rclone-2018.08.25-bundle.txz.
Installing package rclone-2018.08.25-bundle.txz:
PACKAGE DESCRIPTION:
Package rclone-2018.08.25-bundle.txz installed.
Downloading rclone
Downloading certs
Download failed - No existing archive found - Try again later
plugin: run failed: /bin/bash retval: 1

Any idea what this means?

EDIT: It seems others have had this problem as well. I've removed the plugin folders and rebooted several times, and tried the rclone stable as well as the rclone beta; all give this error. I can ping 8.8.8.8 fine, but 1.1.1.1 fails. Maybe it's down? I also used curl to check the download page, which succeeded. So I don't know what to do anymore. Hopefully this can be fixed soon, as I can't access my cloud data now.

EDIT 2: It's my pfSense box: pfBlockerNG is blocking 1.1.1.1. Currently finding a solution; will report back with the solution.

EDIT 3: Changed DNS resolver to DNS forwarder, which fixed it. Then I changed it back to DNS resolver and reinstalled pfBlockerNG. I think my pfSense setup was corrupted. For now it's working.
  16. @Waseh would it be possible to update the beta to 1.45?
  17. After the mount, the mountcheck file should be visible if the script is working. When you run the mount script again, it will say you're already mounted. These checks are more for a server that is already running well. If you're still getting the mount/upload to work for the first time, you can remove the check parts from the script and just use the rclone commands.
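The check itself is nothing more than testing for one known file through the mount. A minimal sketch of the idea, using /tmp as a stand-in for the real mount point (in practice you create mountcheck once on the Gdrive itself, and the script only tests whether it is visible through the mount):

```shell
#!/bin/bash
# Sketch of the mountcheck pattern. /tmp/mount_rclone/Gdrive stands in
# for the real rclone mount point at /mnt/user/mount_rclone/Gdrive.
mkdir -p /tmp/mount_rclone/Gdrive
touch /tmp/mount_rclone/Gdrive/mountcheck   # one-time creation

# The mount script's check: if the file is visible, the mount is alive.
if [[ -f /tmp/mount_rclone/Gdrive/mountcheck ]]; then
    echo "mount OK"
else
    echo "mount failed"
fi
```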
  18. DZMM has google_vfs; I use Gdrive. So in your case you can change it to google_vfs if you use DZMM's file structure.
  19. OK, let's check a few things first:
- You installed the rclone beta plugin from Waseh?
- You installed the Nerdpack plugin?
- Within Nerdpack you installed unionfs?
- You created the shares mount_rclone, mount_unionfs and rclone_upload?
- You created an rclone mount, and when you browse to mount_rclone you see your Gdrive mount there, can copy a file through it, and see it appear on your Gdrive?
- You created these scripts under Settings > User Scripts in Unraid?

If so, try my scripts; they are adapted versions of DZMM's and maybe a bit more general purpose.

Mount:

#!/bin/bash
####### Check if script is already running ##########
if [[ -f "/mnt/user/mount_rclone/rclone_install_running" ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: Exiting, script already running."
    exit
else
    touch /mnt/user/mount_rclone/rclone_install_running
fi
####### End check if script is already running ##########

mkdir -p /mnt/user/mount_rclone/Gdrive
mkdir -p /mnt/user/mount_unionfs/Gdrive

####### Start rclone_vfs mount ##########
if [[ -f "/mnt/user/mount_rclone/Gdrive/mountcheck" ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: Check rclone vfs mount success."
else
    echo "$(date "+%d.%m.%Y %T") INFO: Installing and mounting rclone."
    rclone mount --rc --rc-addr=10.0.0.60:5572 --allow-other --buffer-size 1G --dir-cache-time 72h --drive-chunk-size 32M --fast-list --log-level INFO --vfs-read-chunk-size 128M --vfs-read-chunk-size-limit off gdrive_crypt: /mnt/user/mount_rclone/Gdrive --stats 1m &
    rclone rc vfs/refresh recursive=true
    # pausing briefly to give the mount time to initialise
    sleep 10
    if [[ -f "/mnt/user/mount_rclone/Gdrive/mountcheck" ]]; then
        echo "$(date "+%d.%m.%Y %T") INFO: Check rclone vfs mount success."
    else
        echo "$(date "+%d.%m.%Y %T") CRITICAL: rclone_vfs mount failed - please check for problems."
        rm /mnt/user/mount_rclone/rclone_install_running
        exit
    fi
fi
####### End rclone_vfs mount ##########

####### Start unionfs mount ##########
if [[ -f "/mnt/user/mount_unionfs/Gdrive/mountcheck" ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: Check successful, unionfs Movies & Series mounted."
else
    # Unmount before remounting
    fusermount -uz /mnt/user/mount_unionfs/Gdrive
    #fusermount -uz /mnt/user/mount_unionfs/Gdrive_Backup
    unionfs -o cow,allow_other,direct_io,auto_cache,sync_read /mnt/user/rclone_upload/Gdrive=RW:/mnt/user/mount_rclone/Gdrive=RO /mnt/user/mount_unionfs/Gdrive
    if [[ -f "/mnt/user/mount_unionfs/Gdrive/mountcheck" ]]; then
        echo "$(date "+%d.%m.%Y %T") INFO: Check successful, unionfs Movies & Series mounted."
    else
        echo "$(date "+%d.%m.%Y %T") CRITICAL: unionfs Movies & Series remount failed."
        rm /mnt/user/mount_rclone/rclone_install_running
        exit
    fi
fi
####### End unionfs mount ##########

rm /mnt/user/mount_rclone/rclone_install_running
exit

Upload:

#!/bin/bash
####### Check if script is already running ##########
if [[ -f "/mnt/user/mount_rclone/rclone_upload" ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: Exiting as script already running."
    exit
else
    touch /mnt/user/mount_rclone/rclone_upload
fi
####### End check if script is already running ##########

####### Check if rclone is mounted ##########
if [[ -f "/mnt/user/mount_rclone/Gdrive/mountcheck" ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: rclone installed successfully - proceeding with upload."
else
    echo "$(date "+%d.%m.%Y %T") INFO: rclone not installed - will try again later."
    rm /mnt/user/mount_rclone/rclone_upload
    exit
fi
####### End check if rclone is mounted ##########

# move media
rclone move /mnt/user/rclone_upload/Gdrive/ gdrive_upload: -vv --drive-chunk-size 512M --checkers 4 --transfers 1 --exclude .unionfs/** --exclude *fuse_hidden* --exclude *_HIDDEN --exclude .recycle** --exclude *.backup~* --exclude *.partial~* --bwlimit 8500k

# remove dummy file
rm /mnt/user/mount_rclone/rclone_upload
exit
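The "check if script already running" block at the top of both scripts is a simple lock-file pattern. Stripped of the rclone parts it looks like this (the lock path here is an example; the real scripts keep theirs under /mnt/user/mount_rclone/):

```shell
#!/bin/bash
# Minimal run-once guard, as used at the top of the mount and upload
# scripts. LOCK is an example path for illustration.
LOCK=/tmp/rclone_script_running

if [[ -f "$LOCK" ]]; then
    echo "INFO: Exiting, script already running."
    exit 0
fi
touch "$LOCK"

# ... do the actual mount/upload work here ...

# Always remove the lock on every exit path, or the next run will
# refuse to start.
rm "$LOCK"
echo "INFO: Done."
```

Note the scripts above do exactly this: every early exit is preceded by removing the lock file, otherwise a single failed run would block all future runs until you delete it by hand.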
  20. Run it in the background and then on the right you can download the log. Can you share that here? And your mounting config and upload config?
  21. I'm having difficulty understanding what exactly your problem is. But judging from an earlier post of yours, it appears you execute the scripts by clicking "Run Script", which you shouldn't do. Use the "Run in Background" button and then check the log.
  22. Thanks! I was already transferring it from mount_rclone, but then I transferred it to mount_unionfs, which caused the problem. Transferring with Krusader from mount_rclone to mount_rclone works perfectly.
  23. @DZMM I've changed directories for some series in Sonarr, so these should move from one folder on the Gdrive to another. It seems so far that I can't just move them, so it would essentially be a download and re-upload. Is that correct, or is there a way to transfer the files easily?
  24. What is the difference between using this docker and the official Emby docker?
  25. Silly question: did you reboot your Unraid box after every change to the scripts? And are you running the scripts with "Run in Background"? I've noticed myself that just unmounting and remounting doesn't work without rebooting. After the reboot, download the logs and see what errors the mount scripts give.

When you get the error that your fuse folders are not empty, start up Krusader, go to your mount_unionfs folder and clear it completely; you can just move the files with Krusader to your rclone_upload folders. The only folders which may be populated (i.e. not empty) are your rclone_upload folders (where your local files wait to be uploaded) and the mount itself (your cloud storage). Your unionfs folder is not a real directory, just a merge of the local and cloud folders, and should therefore be empty.
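You can also check for the "fuse folders are not empty" condition before mounting. A small sketch, with a /tmp placeholder for the real mount point:

```shell
#!/bin/bash
# unionfs (like rclone mount) wants an empty mount point. The path below
# is a placeholder for /mnt/user/mount_unionfs/Gdrive.
MNT=/tmp/mount_unionfs/Gdrive
mkdir -p "$MNT"

# ls -A lists everything except . and ..; empty output means empty dir.
if [ -z "$(ls -A "$MNT")" ]; then
    echo "mount point empty - safe to mount"
else
    echo "mount point not empty - move its contents to rclone_upload first"
fi
```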