Everything posted by Kaizac

  1. Crypt shouldn't matter. Please post your mount and upload scripts. What do you mean by "can't access the file anymore"?
  2. I'm not talking about Rsync, but about Rclone. Rclone can sync, copy, and move. The upload script from this topic uses rclone move, which moves the file and then deletes it at the source once it's validated to have transferred correctly. With copy you still keep the source file, which could be what you want. Rclone sync just keeps two folders in sync, one way, from source to destination. So am I understanding it right that you are using the copy script to copy directly into your rclone Google Drive mount? Or are you using an upload script for that as well? Copying directly into the rclone mount (that would be mount_rclone/) is problematic. Copying into mount_mergerfs/ and then using the rclone upload script is fine. In general, I would really advise against using cp, since it doesn't validate the transfer and is basically copy-paste. With rclone you can also set values such as a minimum file age to transfer (see the sketch below). Rsync is also a possibility if you are familiar with it, but for me rclone is easier because I know how to run it.
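     A minimal sketch of what such an rclone move could look like; the local path and remote name "gdrive_vfs" are assumptions based on this guide's defaults, so adjust them to your own setup:

        rclone move /mnt/user/local/gdrive_vfs/ gdrive_vfs: \
          --min-age 15m \
          --delete-empty-src-dirs \
          -v
        # --min-age 15m leaves files younger than 15 minutes alone, so in-progress downloads aren't touched
        # -v logs every transfer, so you can verify files actually arrived before they were deleted locally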
  3. The context canceled message is usually an error reporting a timeout. So maybe the files cannot be accessed yet or can't be deleted? I don't understand why you are using a simple copy instead of rclone to move the files. With rclone you are certain that files arrive at their destination, and it has better error handling in case of problems.
  4. Hard to troubleshoot without the scripts you're running.
  5. I didn't get the e-mail (yet). I'm also actually using Google Workspace for my business, so maybe they check for private and business accounts/bank accounts? Could be different factors, I have no clue. But in my admin console it still says unlimited storage, so they would be breaking the contract, I suppose, by disconnecting people. I've already anticipated a moment where it would shut down though; it's been happening with all the other services. I wouldn't count on Dropbox staying unlimited either. You'll also have to ask yourself what your purpose for it is. Depending on your country's prices, you can buy 50-100TB of drives for each year you use Dropbox. That's permanent storage you can use for years to come. And for media consumption, I pretty much moved over to Stremio with Torrentio + Debrid (Real-Debrid is a preferred one). For 3-4 euros a month, you can watch all you want. The only caveat is that you can only use most Debrid services from 1 IP at the same time, but there is no limit on the number of users from that same IP. There is already a project called plex_debrid which you can use to run your Debrid through your Plex, so it will count as 1 IP for outside users as well. To answer your question: Dropbox has different API limits. I think (but it might have changed) they don't limit how much you can upload and download, only how often you hit the API. So rotating service accounts and multiple mounts won't be needed (big plus). But you do need to rate limit the API hits correctly to prevent temporary bans (see the sketch below). You can always request the trial and ask for more than the 5TB you get in the trial to see how things are. I've seen different experiences with Dropbox: sometimes it's very easy to get big amounts of storage immediately (100-200TB), other times you'll have to ask for every 5TB of extra storage. Sometimes it is added automatically, apparently. So everyone's mileage will vary, basically ;).
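     A hedged example of what rate limiting a Dropbox mount could look like; the remote name "dropbox_crypt", the mount path, and the numbers are assumptions, not tested Dropbox limits:

        rclone mount dropbox_crypt: /mnt/user/mount_rclone/dropbox_vfs \
          --tpslimit 12 \
          --tpslimit-burst 12 \
          --vfs-cache-mode full
        # --tpslimit caps the number of API calls per second, which is what Dropbox cares about,
        # rather than how many TB you transfer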
  6. Currently I have 1 cache pool of 2 SSDs mirrored in BTRFS. I'm thinking about the configuration of my new system, in which I expect to have at least 4 NVMe drives and perhaps 2 SATA SSDs. I know it is possible to make multiple cache pools right now, but is that also possible with ZFS mirroring? So I would have 2 NVMe drives as 1 mirrored ZFS cache pool, and the other 2 NVMe drives as another mirrored ZFS cache pool. And perhaps the SATA SSDs would be another ZFS cache pool, or just single drives. The array would stay XFS, as I want to keep the flexibility to grow my array easily. Would this be possible, or is the cache pool with ZFS limited to 1?
  7. I'm reading your post back, and I now see I'm missing the conclusion. Did you get it to work as you wanted? If not, let me know what your issue is. By the way, regarding /mnt/user and the performance increase when moving files: that's called an "atomic move". You can look that up if you want to know more about it.
  8. You probably rebooted your server and the checker file has not been deleted. It should be in your /mnt/user/appdata/other/rclone/remotes/gdrive_vfs/ directory, called upload_running_daily (or something along those lines). You can remove it by hand, as shown below.
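     A quick way to find and remove a stale checker file; the path and file name are assumptions based on the guide's default remote name, so check what is actually in the directory first:

        ls /mnt/user/appdata/other/rclone/remotes/gdrive_vfs/
        # remove the stale upload checker so the upload script can run again
        rm /mnt/user/appdata/other/rclone/remotes/gdrive_vfs/upload_running_daily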
  9. Same problem here on 6.11.5. It's not a problem with the Linuxserver dockers for me, but mostly with the non-Linuxserver ones. Connectivity is fine, so no idea why that's pointed to as the issue.
  10. In your mount script, change RcloneRemoteName to "crypt". I'm wondering about your crypt config though. I'm not sure if the / before Dropbox works. Did you test that? I would just use dropbox:crypt for the crypt remote; the / is not needed afaik (see the sketch below). Regarding the /user path: in your docker templates you have to create a path named /user which points to /mnt/user on your filesystem. Then within the docker containers (the actual software like Plex) you start your file mappings from /user, for example /user/mount_mergerfs/crypt/movies for the movies path in Plex. You do the same for Radarr. This helps performance because the containers are isolated, so they have their own paths and file system which they also use to communicate with each other. So if Plex uses /Films but Radarr uses /Movies, they won't be able to find that path because they don't know it. Starting everything from /user also gives the system the idea that it's 1 share/folder/disk, so it will move files directly instead of creating overhead by copying and rewriting the file. This is not a 100% technically accurate explanation, but I hope it makes some sense. Another thing: I'm not sure if the dockers for the docker start need a comma in between. You might want to test/double-check that first as well.
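     A hedged sketch of what the rclone config entries could look like; the remote names "dropbox" and "crypt" are assumptions based on your description, and the auth/password lines are placeholders:

        [dropbox]
        type = dropbox
        # token and other auth fields left out here

        [crypt]
        type = crypt
        remote = dropbox:crypt
        password = <obscured password>
        password2 = <obscured salt>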
  11. No, why would that be needed? You're just mounting all the folders you have in this Google Drive folder. They will also be available through Windows Explorer, for example. So if that is not a problem, you can just leave it as is.
  12. You have to create a path in your docker templates mapping /mnt/user/ to /user. Then within the dockers you point to /user/mount_mergerfs/gdrive_vfs/XX for the folder you want to be in. But if you want to download to /user/downloads/, that's also fine; just make sure you start all paths from /user so the file system sees it as the same storage. Otherwise it will copy, paste, and then delete instead of just moving straight away (see the sketch below). For Plex you use the media folders, for example /user/mount_mergerfs/gdrive_vfs/Movies for your movies and /user/mount_mergerfs/gdrive_vfs/Shows for your TV shows. Make sure you read up in the start post and on the GitHub about which settings you should disable in Plex to limit your API hits.
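     A hedged overview of how those mappings could line up; the container and folder names are just examples, not taken from your setup:

        # Docker template (every container):  host /mnt/user/  ->  container /user
        # Download client save path:          /user/downloads/
        # Radarr root folder:                 /user/mount_mergerfs/gdrive_vfs/Movies
        # Sonarr root folder:                 /user/mount_mergerfs/gdrive_vfs/Shows
        # Plex library paths:                 /user/mount_mergerfs/gdrive_vfs/Movies and .../Shows
        # Because every container sees the same /user tree, an import is an atomic move instead of copy + delete.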
  13. Are you checking the logs? And do you run the script through User Scripts as "run in background"?
  14. What do you mean by "script from here"? When you run the mount script, it should show your Google Drive files in /user/mount_rclone/gdrive_vfs/ and your Google Drive files and local files combined in /user/mount_mergerfs/gdrive_vfs/. Your mount_mergerfs and mount_rclone folders should be empty before the mount script runs. Only /user/local/ is allowed to have files before the mount script is started (see the layout sketch below).
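     For reference, this is roughly what the layout looks like with the guide's default share names; "gdrive_vfs" is an assumption, adjust to your own remote name:

        /mnt/user/local/gdrive_vfs/          # local files waiting to be uploaded (may contain files before mounting)
        /mnt/user/mount_rclone/gdrive_vfs/   # the rclone mount of your cloud files (empty before mounting)
        /mnt/user/mount_mergerfs/gdrive_vfs/ # merged view of local + cloud (empty before mounting)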
  15. You're mounting /mnt/user/mount_rclone/ instead of /mnt/user/mount_mergerfs/ in that command, so that's why it does work: it's just mounting the Gdrive. I think the problem lies in the --allow-non-empty you added to your mount script. That's not recommended, and I see no reason why you would use it. Try removing it from the script and disable your dockers (from the settings menu, so they won't autostart at reboot). Then restart your server and run the mount script. See what it does and check the logs.
  16. Paste your mount script. And explain which folders you use. Does the mount script give any errors?
  17. This normally happens when a docker starts up before the mount script is finished, so it creates folders in the mount_rclone/gdrive folder too early. Dockers which download, like SABnzbd and torrent clients, often do this: they create the folder structure they need when it's not available. So try booting without autostart of dockers and see if it happens again (see the sketch below for a way to guard against it).
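     A simplified sketch of how you could guard docker startup on the mount being ready; "mountcheck" is the test file the guide's mount script creates, so treat the exact name, path, and container names as assumptions:

        # only start the containers once the rclone mount is confirmed to be up
        if [[ -f "/mnt/user/mount_rclone/gdrive_vfs/mountcheck" ]]; then
            docker start plex sonarr radarr
        else
            echo "mount not ready, not starting dockers"
        fi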
  18. Nice find then! Hope they fix it soon so you can finally set and forget again :).
  19. No, not when Sonarr uses the /mnt/user base and not direct paths. If you use direct paths like /downloads, it will first download files from your cloud and then re-upload them when importing. Your logs should mention something about it, though.
  20. It's a common issue if you read the GitHub issues. Sometimes Firefox works, but 90% of the time it doesn't. Firefox-based browsers do seem to work (like Mullvad Browser), so no clue what it is. I'm not happy at all with this decision to switch to KasmVNC, and the devs don't seem to acknowledge the Firefox issues. I'll give it a few weeks and then switch to another docker if it's not resolved. And with Linuxserver's decision to move away from Guacamole and towards KasmVNC, more dockers might become unusable.
  21. Yes, it should be on "always"; otherwise Sonarr/Radarr won't pick up changes or new downloads you made. Using manual is only useful for libraries that never change.
  22. Glad to hear! Did the permissions persist now through mounting again? I still want to advise against using the analyze video option in Sonarr/Radarr. It doesn't give you anything you should need, and it takes time and API hits; I don't see any advantage to it. Mono is what Sonarr is built upon/dependent on, so it could be that it was running jobs, or maybe the docker already had errors/crashes; in my experience you then often get that error. Now that it's running well, does stopping it still give you this error? I would advise adding UMASK to your docker template again in case you removed it. That way you stick close to the templates of the docker repo (Linuxserver in this case). Regarding your mount script not working the first time: it is because it sees the file "mount_running" in /mnt/user/appdata/other/rclone/remotes/$RcloneRemoteName. This is a checker to prevent it from running multiple times simultaneously (see the sketch below). So apparently this file doesn't get deleted when your script finishes. Maybe your unmount script doesn't remove the file? Or are you running the script twice during the start of your system/array? Maybe once on startup of the array and another on a cron job? I would check your rclone scripts, the checker files they use, and the way they are scheduled. Something must be conflicting there.
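     A simplified sketch of how that checker file works in the guide's scripts; the exact paths and the cleanup handling in your version may differ:

        # exit early if a previous run is still flagged as active
        if [[ -f "/mnt/user/appdata/other/rclone/remotes/gdrive_vfs/mount_running" ]]; then
            echo "mount script already running, exiting"
            exit
        fi
        touch /mnt/user/appdata/other/rclone/remotes/gdrive_vfs/mount_running
        # ... mounting happens here ...
        # the script (or the unmount script at shutdown) must remove the flag again,
        # otherwise the next run will refuse to start
        rm /mnt/user/appdata/other/rclone/remotes/gdrive_vfs/mount_running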
  23. Yes, I have it all on 99/100 down to the file. Can you try the library import without analyze video checked? That option causes a lot of CPU strain and also API hits, because it reads the file, which is essentially like streaming your files.