watchmeexplode5

Everything posted by watchmeexplode5

  1. @Yeyo53 Long post but I'll try to help. Two questions so I know what your goal is and can try to get you a good setup:

     Do you want all your files (movies/tv) in your "Plex" share to eventually be moved to your gdrive? Or are you going to keep some files local and some on the cloud?

     Do you seed files for a LONG time or just to meet ratio/hit-and-run rules (like seeding for 7 days before you delete a torrent from your client)?

     ------------------------------------------

     With regards to your other questions:

     Should I use an unassigned device for torrents?
     That's up to you. I would use an unassigned device for seeds to avoid parity writes, OR use your cache. That of course would leave those files unprotected (likely not an issue for torrents). Consider keeping the seeds on your cache drive if it's big enough; that will avoid excessive spin-ups.

     Cache set to YES on "Plex"
     Yes, you are correct. With cache set to yes, anything written to the "Plex" share will be placed on your cache drive (if there is enough space). When unraid runs the mover script it will write those files to the disks in your array.

     mount_rclone mounted in storage
     One question: what do you mean by "Storage"? The config I posted should place the mount at /mnt/user/mount_rclone. If you go there you will see the contents of your gdrive. The free space/used space reported there will likely be wrong. Just know that it's your gdrive mount and don't worry about what space is being reported.

     Local in Torrent/gdrivedownloads
     This should be your "local" content that is pending upload to gdrive. The upload script will move things in "local" to your gdrive, so it doesn't make sense to have that set to Torrent/gdrivedownloads in my opinion. I would keep it at /mnt/user/local; that works best with these scripts. To stop excessive spin-ups, use your cache drive (if it's an SSD) for /mnt/user/local... or you could map it to your unassigned drive if you want. Your torrent client should be set to download to something like /mount_mergerfs/downloads/"whatever-you-want" (something like /torrents). This setup will ensure you can maintain hardlinks with torrents.

     Regarding mount_mergerfs
     Your mount_mergerfs mount shouldn't be placed in your "Plex" share. The setup I gave you earlier combines your local, rclone, and Plex shares into a single place for Sonarr/Radarr/Plex/torrents to use. That way any file located in the local, rclone, or "Plex" folders will appear in mount_mergerfs. So it doesn't matter if the file is local or on gdrive; your programs will always see it in the mergerfs folder. Think of it like shortcuts: the file isn't actually on your mergerfs share, but it's "linked" there, so programs can't tell the difference.

     Docker mapping --> Very important for torrent hardlinks!
     You should be able to map all your docker programs to something like: (Host Path) /mnt/user/mount_mergerfs ---> (Docker Path) /mount_mergerfs
     Torrents will download to /mount_mergerfs/downloads/torrent_dl_folder
     Sonarr will find them and move them to /mount_mergerfs/tv
     Plex will scan them and play them from /mount_mergerfs/tv

     Before you go and move your entire Plex folder to gdrive
     Test it by taking files from your "Plex/movie/Folder-File" share and moving them to "/mnt/user/local/Movie/Folder-File". Then run your upload script (a quick sketch of that test is below). If you look in your mergerfs share, nothing will have changed because it's all linked together; that's normal! When you look in your mount_rclone folder you will see your new Folder-Files (because they are now on gdrive).
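     A minimal sketch of that test, assuming the paths used above; the movie folder name is just a placeholder, and depending on your script settings the local path may include a remote-name subfolder (e.g. /mnt/user/local/gcrypt/movies):

     # stage one movie folder for upload by moving it out of the array-only "Plex" share
     mv "/mnt/user/Plex/movies/Some Movie (2019)" /mnt/user/local/movies/
     # run the upload script (or wait for its next scheduled run), then check the result:
     ls /mnt/user/mount_rclone/movies/       # the folder should now show up on gdrive
     ls /mnt/user/mount_mergerfs/movies/     # and still appear here, unchanged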
  2. @Yeyo53 These are the settings I would recommend for starting out. Mostly default, but adapted to work for your Plex mount. Keeping things default also makes initial setup and support easier! Using gcrypt pointing to your gdrive:

     RcloneRemoteName="gcrypt"
     RcloneMountShare="/mnt/user/mount_rclone"
     LocalFilesShare="/mnt/user/local"
     MergerfsMountShare="/mnt/user/mount_mergerfs"
     DockerStart="transmission plex sonarr radarr"
     MountFolders=\{"downloads/complete,downloads/intermediate,downloads/seeds,movies,tv"\}
     # Add extra paths to mergerfs mount in addition to LocalFilesShare
     LocalFilesShare2="/mnt/user/Plex/"

     So your gdrive will be mounted at .../mount_rclone, and your local files live at .../local (to be moved to gdrive on upload). I added your /mnt/user/Plex/ folder to the LocalFilesShare2 setting for mergerfs to see. The merged folder will be at .../mount_mergerfs.

     If you go to .../mount_mergerfs you will have all your paths combined, so your gdrive, your .../local, and your /Plex files will all be there (see the sketch below). When you write/move/copy things to .../mount_mergerfs they will be written to /mnt/user/local/. When you run the upload script, anything in the .../local folder will be uploaded to your gdrive.

     So with this configuration you should point Plex/Sonarr/NZBGet to "/mnt/user/mount_mergerfs". They will still see the media that's in your /Plex folder because it's added via LocalFilesShare2. This setup keeps your /Plex folder untouched while you make sure everything works well. If you want to move portions of your /Plex folder to your gdrive, simply move files from /mnt/user/Plex to /mnt/user/local (or /mnt/user/mount_mergerfs), then run the upload script.
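     To visualize what that LocalFilesShare2 line does, here's a rough sketch of the merged view (the file names are made up):

     /mnt/user/
     ├── mount_rclone     (gdrive)            movies/FILE ON GDRIVE.mkv
     ├── local            (pending upload)    movies/FILE ON LOCAL.mkv
     ├── Plex             (existing share)    movies/FILE ON PLEX.mkv
     └── mount_mergerfs   (what Plex/Sonarr/NZBGet point at)
         └── movies
             ├── FILE ON GDRIVE.mkv
             ├── FILE ON LOCAL.mkv
             └── FILE ON PLEX.mkv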
  3. @Yeyo53 Do you plan on moving your local plex files to the cloud? Or keeping some files local and some in the cloud?

     To start off, I wouldn't use the rclone cache system if you don't have to. In my tests, I haven't seen any performance gains from it compared to the scripts listed here. I recommend using just a remote pointing to your gdrive and a crypt pointing to that gdrive.

     Here is an explanation of the settings that I think you are struggling with:

     RcloneMountShare="/mnt/user/mount_rclone"
     This is where your gdrive will be mounted on your server. So when you navigate to /mnt/user/mount_rclone you will see the contents of your gdrive. In your case it sounds like you will see your two folders, which are "media" and "movies".

     LocalFilesShare="/mnt/user/local"
     This is where local media is placed to be uploaded to the gdrive when you run the upload script. This is where you will have a download folder, a movie folder, a tv folder, or any folder you want.

     MergerfsMountShare="ignore"
     If you fill this in, it will combine your local and gdrive into a single folder. So let's say you set it as /mnt/user/mount_mergerfs. The files do not actually exist at that location but simply appear as if they are there. Here is a visual example to help:

     /mnt/user/
     │
     ├── mount_rclone (Google Drive mount)
     │   └── movies
     │       └── FILE ON GDRIVE.mkv
     │
     ├── local
     │   └── movies
     │       └── FILE ON LOCAL.mkv
     │
     └── mount_mergerfs
         └── movies
             ├── FILE ON GDRIVE.mkv
             └── FILE ON LOCAL.mkv

     MountFolders=\{"downloads/complete,downloads/intermediate,downloads/seeds,movies,tv"\}
     These are the folders created in your LocalFilesShare location. The folders here will be uploaded to gdrive when the uploader script runs (except the downloads folder; the uploader ignores that one). So typically it's best to leave them at the default value. You can always make your own folders there if you want (see the sketch below).
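     Roughly speaking, that default MountFolders line just pre-creates this structure inside your LocalFilesShare (newer script versions put it in a per-remote subfolder, shown here with the gcrypt name as a placeholder), and you can add to it freely:

     mkdir -p /mnt/user/local/gcrypt/{downloads/complete,downloads/intermediate,downloads/seeds,movies,tv}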
  4. @DiniMuetter It's best to use all the scripts on DZMM's github: https://github.com/BinsonBuzz/unraid_rclone_mount

     Instructions for setting up all the user settings are well documented on his github. Read that fully and you should have no issue setting it up correctly. Use the userscripts plugin for easy editing and running.

     Your current command is mounting like it's on a Windows file system -- mounting your gcrypt: to the Windows "t:" drive. For unraid that won't work, and it should look something like:

     rclone mount \
         ....
         ....
         gcrypt: /mnt/user/cloud

     (or wherever you want your gcrypt to be mounted in unraid -- see the sketch below for a fuller example)

     But again, if you use DZMM's scripts you won't have to do any of the hard editing. Simply set your user settings at the beginning of his scripts and they automagically configure it for you! Feel free to chime back in if you have more questions/problems.
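     For reference, a bare-bones unraid-style mount might look like the sketch below; the flags are standard rclone options with illustrative values, not DZMM's exact settings (his script sets quite a few more):

     mkdir -p /mnt/user/cloud
     rclone mount \
         --allow-other \
         --dir-cache-time 720h \
         --poll-interval 15s \
         --buffer-size 256M \
         gcrypt: /mnt/user/cloud &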
  5. @Bjur Yeah, kinda a double-edged sword with the ssd/nvme game. I let my NVMEs get hit hard on the wear-and-tear front because it's so nice to write and unpack rapidly. It's a cost vs. benefit debate, but I'm a sucker for the speed of them.
  6. @Bjur Download via what -- Usenet/Torrent? Or download from your actual mount? Are you utilizing a cache drive to avoid parity write bottlenecks?

     Lots of different variables can affect your dl speeds, and a lot are out of your control --> like distance from the server and peering to the server. But onto what you can control: generally the fastest way (and the way to test for any bottlenecks) is to dl to a share that is set to "use cache: only" in unraid. That way you avoid any parity write overhead. Also, kinda obvious, but an NVME/SSD will trump any mechanical HDD, so for quick writes that's what you should be using.

     Other than that, you can play with the number of parallel workers, the buffer and cache size of files, etc. (see the sketch below). With DZMM's scripts, these values are optimized for downloading/streaming from gdrive, but you can read up on other settings on the official rclone forum. Animosity022's github has some great settings (heavily tested, and he's very active on the rclone forum). His recommendations are often the most widely accepted settings when it comes to general purpose mounting!
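     As a rough illustration of those knobs (the values are just a starting point, not DZMM's defaults -- benchmark on your own line):

     # more parallel transfers, bigger upload chunks, and a larger read buffer
     rclone move /mnt/user/local/gcrypt gcrypt: --transfers 4 --drive-chunk-size 128M --buffer-size 256M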
  7. @Thel1988 Currently I've kinda left everything barebones because it's more for advanced users. Definitely not for those just getting into it. But yeah, I added a basic readme. Most settings can be viewed on their official project pages, and the rest of the things are pretty self explanatory within the configs/scripts if you read them.

     For the script order --> I run the install script on startup of the array, then I run the mount script so I can access my mounts. Finally I cron my upload for every 20 minutes (cron entry below). I don't want to hijack DZMM's thread too much, so if anybody has more questions feel free to PM me.
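     For anyone copying that schedule, the custom cron entry in the userscripts plugin for an every-20-minutes upload run would be something like:

     */20 * * * *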
  8. @DZMM If anybody is interested in testing a modified rclone build with a new upload tool, feel free to grab the builds from my repository. You can run the builds side-by-side with stable rclone, so you don't have to take down rclone for testing purposes!

     It should go without saying, but only run this if you are comfortable with rclone / DZMM's scripts and how they function. If not, you should stick with DZMM's scripts on the official rclone build!

     Users of this modified build have reported upload speeds ~1.4x faster than rclone and ~1.2-1.4x on downloads. I fully saturate my gig line on uploads with lclone, whereas on stock rclone I typically got around 75-80% saturation.

     I've also got some example scripts for pulling from git, mounting, and uploading. Config files are already set up, so you just have to edit them for your use case. The scripts aren't elegant but they get the job done. If anybody likes it, I'll probably script it better to build from src as opposed to just pulling the pre-builds from my github.

     https://github.com/watchmeexplode5/lclone-crop-aio

     Feel free to use all or none of the stuff there. You can run just the lclone build with DZMM's scripts if you want (make sure to edit the rclone config to include these new tags -- see the sketch below):

     drive_service_account_file_path = /folder/SAs (No trailing slash for the service account folder)
     service_account_file = /folder/SAs/any_sa.json

     All build credit goes to l3uddz, who is a heavy contributor to rclone and cloudbox. You can follow his work on the cloudbox discord if you are interested.

     -----Lclone (also called rclone_gclone) is a modified rclone build which rotates to a new service account upon quota/API errors. This effectively removes not only the upload limit but also the download limit (even via the mount command -- solving plex/sonarr deep-dive scan bans), and it adds a bunch of optimization features.

     -----Crop is a command line tool for uploading which rotates service accounts once a limit has been hit, so it runs every service account to its limit before rotating. Not only that, but you can have all your upload settings placed in a single config file (easy for those using lots of team drives). You can also set up the config to sync after upload, so you can upload to one drive and server-side sync to all your other backup drives/servers with ease.

     For more info and options on crop/rclone_gclone config files check out l3uddz's repositories: https://github.com/l3uddz?tab=repositories
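     A hedged sketch of what a remote using those tags might look like in the rclone config (the remote name and paths are placeholders, and drive_service_account_file_path only exists in the lclone build):

     [gdrive]
     type = drive
     scope = drive
     service_account_file = /folder/SAs/any_sa.json
     drive_service_account_file_path = /folder/SAs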
  9. @DZMM Yup, found out about that performance hit 2 weeks ago and already migrated to a bunch of team drives. I took the same approach with a server-side move after figuring out the corresponding encrypted names 👍 Currently playing around with a modified version of rclone's backend that rotates SAs based on error callbacks. The developer has done a lot of performance tweaks, bypasses the 10tb/day download limit on mounts, and circumvents API limit troubles (though that's less of an issue these days). I'd post it, but it's fairly unstable and I don't think the developer wants it being tossed around till it's all ready, but I'll keep you updated on it.
  10. @DZMM I've encountered the same issues with union, the changes not being picked up on mounts being the most problematic one. Regarding shortcuts: there are a few posts on the rclone forum and on Google's blog. Currently shortcuts "work", but I wouldn't recommend them. They add an unnecessary layer of complication, and you have to be very careful with file vs. shortcut removal. I know a few people who have lost data when they thought they were deleting a shortcut but actually deleted files. I was experimenting with it because some higher-ups at cloudbox use that setup. Currently it's easier/safer to just use mergerfs. I don't see a downside to mergerfs vs. shortcuts.
  11. @DZMM Sorry I haven't been posting much (still in a cross-country move). Currently I've dropped mergerfs to play around with the rclone union backend. But my stable servers are now using "shortcuts" to combine all my team drives into a single My Drive mount point (as recommended by some developers of cloudbox/cloudplow). Once rclone union is a bit more mature/ironed out, I'll probably drop shortcuts and have rclone union manage it all.
  12. @markrudling When you tested the Windows+Raidrive plex scan, did you use the same settings? I.e. the same media analysis and chapter/preview thumbnail settings: those can generate lots of traffic. Did you ever use Windows with an rclone mount in your tests? Also, I've never used Raidrive, but rclone likely functions differently with regards to caching and file access. With the default mount settings, rclone should serve portions of the requested file; if you/plex request more, it will grab more of the file. It could be that you are seeing expected behavior on an initial scan which will settle down afterwards. I don't use (plex on windows) + (smb to unraid); I keep plex on unraid itself, so I can't be 100% sure on my statements. Maybe somebody with an unraid + windows-plex combo can chime in with more info.
  13. @JohnJay829 That looks like an error with the rclone plugin and not the scripts. What version of rclone plugin are you running? Try updating and/or running the beta rclone plugin.
  14. @francrouge Short answer: mergerfs supports hardlinks, is actively being developed and fixed, and is easier to clean up after merging directories (fewer scripts for the end user). It's generally agreed to be better for our rclone use case (a trimmed-down example mount is sketched below). Complex answer from trapexit's GitHub: UnionFS is more like aufs than mergerfs in that it offers overlay / CoW features. If you're just looking to create a union of drives and want flexibility in file/directory placement then mergerfs offers that whereas unionfs is more for overlaying RW filesystems over RO ones.
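      For context, a trimmed-down mergerfs call in the style these scripts use looks roughly like this (paths assume the gcrypt setup from earlier posts; check DZMM's mount script for the full option set it actually passes):

      # branches are listed local-first, so new writes land on /mnt/user/local and
      # get picked up by the upload script later
      mergerfs /mnt/user/local/gcrypt:/mnt/user/mount_rclone/gcrypt /mnt/user/mount_mergerfs/gcrypt \
          -o rw,use_ino,allow_other,category.create=ff,cache.files=partial,dropcacheonclose=true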
  15. @DZMM @Bjur I often unpack and write to the local mount because there's a small performance penalty on a fuse filesystem. But that decrease is very minor. It's easiest to follow DZMM's advice and do most of your work in the mergerfs mount.
  16. @axeman The mount script creates the appdata/other/rclone folder, but feel free to create it yourself. For your mount remote's ID and secret, use an API client you likely already have created; that's used to mount the drive. (Theoretically you could auth with an sa.json, but let's not over-complicate the setup 😝). No need for additional IDs/secrets for the SAs. When an upload is run, your .json service account files will be referenced for credentials and the client ID/secret will be ignored (i.e. you upload as a new unique user that has a clean quota limit -- 750gb per service account per day). You can also define a custom location for the accounts if you want; it's just cleaner to keep it all in the other/rclone folder (rough layout sketched below).
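      As a rough picture of that layout (the folder and file names are placeholders -- match whatever the service account settings in the upload script point at):

      /mnt/user/appdata/other/rclone/service_accounts/
      ├── sa_gdrive_upload1.json
      ├── sa_gdrive_upload2.json
      └── ...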
  17. @axeman Yup, you can create the service account json files from any machine. It's a little less error prone if you do it on Linux, but I've done it on Windows without issues. You can also do it on unraid easily. Make sure you have python installed via nerd tools. If you run into issues like not having pip, just ssh to your box and run "python3 get-pip.py" (see below).
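      If pip turns out to be missing, a quick way to bootstrap it over ssh (get-pip.py is the standard installer from pypa; assumes the box has internet access):

      curl -O https://bootstrap.pypa.io/get-pip.py
      python3 get-pip.py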
  18. @Bjur Files will only be uploaded once the minimum age (set in the upload script) has been hit. The default is 15 minutes. So there's no issue moving files into the upload location while the upload is running.
  19. @Bjur Shouldn't be any issue having a single download location, and the one you listed is fine. In your case with the two mounts, I think it makes sense to have a single download folder like that. The scripts (if you leave them at default) will auto-create a /local/REMOTE/downloads folder, but feel free to ignore it or edit the settings to something like this:

      googleFi_crypt:
      MountFolders=\{"movies"\} # comma separated list of folders to create within the mount

      googleSh_crypt:
      MountFolders=\{"series"\} # comma separated list of folders to create within the mount
  20. @DZMM Sounds good, glad it's not a widespread thing and just me. I wonder if I've got something doing deep scans of the full file rather than a quick touch and go scan. If anybody has a similar issue, a band-aid fix is to share the drive with another "dummy" user and temporarily mount under that user's credentials. That will get it up and running while the ban expires.
  21. @DZMM Question regarding rclone on full (new) library scans: do you have any experience with avoiding a 403 - downloadQuotaExceeded? I think it's around 10tb/day per user. I think I hit it because of multiple test instances doing initial scans of my drive library. Would changing the chunk size reduce the odds of hitting this limit? Do file probes in the scan get counted against my quota as the full file size, or am I only charged for the data actually downloaded?
  22. @DZMM Sorry about moving off topic, and I don't mean to offend anybody. By best practice I was referring to best practice at reducing individual user impact on Google's end. Technically the majority of gsuite users using unlimited space are breaking the TOS, which outlines that 5 paid users are required for unlimited drive use. Google just has never enforced this portion of the TOS. Enforcing it is the most likely option they would take to curb single-user abuse. Finally, I was referring to Google blocking user agents identified as rclone. That may have been a bug on Google's end, but if I recall correctly the admin initially worried it could be a purposeful block. No offense meant to anybody. If you pay for unlimited data you are completely within your rights to use it however you want. K, no more off topic stuff for me. Just wanted to answer your questions.
  23. @Bjur I alluded to that fact about encrypted data and the issues it creates with de-duping in an earlier post. If google drive abuse through unlimited encrypted storage and breaking various upload/download limits gets out of hand, google will likely take action to prevent the abuse, most likely by enforcing the 5 users required for unlimited storage. That won't completely prevent abuse, but it will stop a lot of people from utilizing the service. In the past google *may have* blocked traffic it thought was rclone, but that is no longer an issue.

      If you care about not using max space on google's servers, then non-encrypted data allows google to de-dupe files. That being said, it's understandable why people choose to encrypt, even though courts would likely need a subpoena to look at any data and, correct me if I'm wrong, there has never been a case of that with standard google drive users (i.e. not resellers of streaming services).

      One huge plus for unencrypted data is the use of .strm links. Every video file uploaded to google gets converted to different qualities (similar to how youtube works). This means that google utilizes its massive transcoding power to convert your original 4k file into 1080p, 720p, and 480p files. You can then use a media server such as jellyfin to push the .strm links to shared users. The user is piped the video in whatever quality they desire, directly from google to their computer. They don't have to connect to you as the middleman, so there is 0 overhead for users streaming/transcoding from your database. This means you could theoretically run a server capable of streaming and server-side-transcoding 4k videos from something like a raspberry pi.
  24. @Bjur Those settings look fine. The logs you posted look like "googleSh_crypt" is mounting and responding fine. In your /mnt/user/mount_rclone/ do you find both remotes' folders (googleFi_crypt and googleSh_crypt)?

      Do you need --rc? That's for starting rclone's remote control http server. Maybe it's trying to start both of your instances with rc on the same port. If you don't use --rc, then just remove it in both scripts. If you do, you may need to define the port for the second rclone instance, e.g. "--rc-addr=:5573" (the default rc port being :5572) -- see the sketch below.

      ---------------------------------------------

      About the disk1/2/3 question: double check with @DZMM, but I'm fairly certain you could force it onto a single disk by setting:

      LocalFilesShare="/mnt/disk1/local"

      Then you can do disk2 and disk3 by modifying it. It might add some overhead for parity writes when moving to local, though, as opposed to keeping local on the cache only.
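      If you do keep --rc on both mounts, the second instance just needs its own port, along the lines of (other flags omitted):

      rclone mount ... googleFi_crypt: /mnt/user/mount_rclone/googleFi_crypt --rc --rc-addr=:5572 &
      rclone mount ... googleSh_crypt: /mnt/user/mount_rclone/googleSh_crypt --rc --rc-addr=:5573 &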
  25. @Bjur Looks like it is mounting fine from the outputs. So is the mount visible at /mnt/user/mount_rclone/googleSh_crypt?