Kaizac

Everything posted by Kaizac

  1. What Team Drive limitations are you talking about? If your seedbox can run rclone, it can mount Google Drive directly. But a seedbox also implies torrent seeding, and that's generally a problem with Google Drive because of the API hits and the resulting temporary bans. If you just want to use your seedbox to download and then move the files to your Google Drive, that's entirely possible and seems to be what DZMM is doing; see the sketch below.
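     For the download-and-move workflow, a minimal sketch of the rclone side (the seedbox path and the remote name gdrive: are just examples, not taken from DZMM's scripts):

         # Push completed downloads from the seedbox to Google Drive and
         # remove the now-empty source directories afterwards.
         rclone move /home/seedbox/downloads/complete gdrive:downloads \
           --delete-empty-src-dirs \
           --log-level INFO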
  2. You ran Docker Safe New Perms while your rclone mounts weren't mounted? And I think the default script already sets umask, so it seems like you are doubling that flag now?
  3. You can only use service accounts with Team Drives, and I don't think multiple actual accounts work for your own drive. You would have to share the folder with the other account, but I think it will then fail on creating the rclone mount because rclone can't see the shared folder. If you have the option, Team Drives are the easiest route for everything, including staying consistent with the storage limits of Google Workspace. Read a few posts up for the responses to Bjur from me and Bolagnaise; those should solve your issue. I don't want to be rude, but you have no idea what you are doing, and I don't think we can understand what you are trying to do. I don't see why you bring in Krusader, which is just a file browser that can also browse to mounted files. I suggest you first read up on rclone and then on what the scripts in this topic do before you continue, or be precise about what you are trying to accomplish.
  4. Got a 500 internal error using this just now, and after that my Plex database got malformed and is now broken. So be careful and make sure you back up your database.
  5. Don't overthink it. You have the Tools > New Permissions functionality, which you can use to fix permissions at the folder level. For you that would be (I suppose) mount_rclone and mount_mergerfs, plus your local data folder if you have it and its permissions are not correct. You don't need to run these permissions on your appdata/dockers. I don't know if you saw the edit by Bolagnaise above? Read it, because your mount script won't work like this. Add a \ to every line you added, so I would put it like this:

     # create rclone mount
     rclone mount \
       --allow-other \
       --buffer-size 256M \
       --dir-cache-time 720h \
       --drive-chunk-size 512M \
       --log-level INFO \
       --vfs-read-chunk-size 128M \
       --vfs-read-chunk-size-limit off \
       --vfs-cache-mode writes \
       --bind=$RCloneMountIP \
       --uid 99 \
       --gid 100 \
       --umask 002 \
       $RcloneRemoteName: $RcloneMountLocation &

     You have to finish with that "&".
  6. Why would it break your Plex transcoder? You can try running the tool when you don't have your rclone mounts mounted. So reboot the server without the mount script and then run the tool on mount_mergerfs (and subdirectories) and mount_rclone. Maybe it will be enough for Sonarr.
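     If you prefer the terminal over the Tools page, roughly the same idea looks like this, assuming the standard Unraid nobody:users (99/100) ownership; this is my own approximation of what the tool does, not its exact script:

         # Run while the rclone/mergerfs mounts are NOT mounted, so only local folders are touched.
         chown -R nobody:users /mnt/user/mount_mergerfs /mnt/user/mount_rclone
         chmod -R ug+rw,ugo+X /mnt/user/mount_mergerfs /mnt/user/mount_rclone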
  7. I'm having the same issue. Sometimes it works for a few days and then it's not reachable again, so I'm just moving back to Deluge. It was a lot more stable and actually moved my files to the right folders.
  8. Yeah, I was afraid it would be a bad idea, and so it is. I'll have to find a way to trigger the script automatically when it gets API banned. API bans didn't happen before, but I had to rebuild all my libraries these last weeks after some hardware changes and bad upgrades from Plex. Regarding Bazarr, you can hit me up in my DMs, yes. I have everything you need, including how to get it to work with Autoscan so Plex sees the subtitles.
  9. Correct, like that. I would only use /disks, though, when you actually use unassigned drives, because you also have to think about the Read/Write - Slave setting then. If you just use cache and array folders, /user alone is sufficient. But I think @maxse is mostly "worried" about setting the right paths/mappings inside the docker itself, and that just requires going through the settings, checking which mappings are used and altering them where needed.
  10. @DZMM can you help me with your brainpower? I'm using separate rclone mounts for the same Team Drives but with different service accounts (so Bazarr on its own mount, Radarr and the Sonarrs, Plex, Emby, and one for normal usage). Sometimes one of those gets API banned, mostly Plex lately, so I have a script that mounts a backup mount, swaps it into the mergerfs with the Plex local folder and restarts Plex (roughly sketched below). It then works again. However, I'm wondering if you can run two Plex instances: one to do all the API-heavy work like scanning and metadata refreshing, and one for the actual streaming. You can point the two Plex instances to their own mergerfs folders, which contain the same data, and then I'm thinking you could share the library. But I don't think this will work, right? Because the streaming Plex won't get real-time updates from the changed library data of the other Plex instance, right? And you would have to be 100% sure you disable all the jobs and tasks on the streaming Plex instance. What do you think, possible or not?
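      A sketch of the swap I mean, assuming a second remote (tdrive_plex_backup:) configured with a different service account, DZMM-style paths, and a Plex container named plex; treat all the names as placeholders:

          # Tear down the current union, bring up the backup rclone mount,
          # rebuild the union on top of it and restart Plex.
          fusermount -uz /mnt/user/mount_mergerfs/gdrive
          mkdir -p /mnt/user/mount_rclone/Tdrive_Plex_Backup
          rclone mount tdrive_plex_backup: /mnt/user/mount_rclone/Tdrive_Plex_Backup \
            --allow-other --uid 99 --gid 100 --umask 002 &
          sleep 10   # give the mount a moment to come up
          mergerfs /mnt/user/local/gdrive:/mnt/user/mount_rclone/Tdrive_Plex_Backup \
            /mnt/user/mount_mergerfs/gdrive \
            -o rw,use_ino,allow_other,func.getattr=newest,category.create=ff
          docker restart plex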
  11. For Plex you can indeed use the partial scan only. And then from Sonarr and Radarr I would use the Connect option to send Plex a notification; it will then trigger a scan. Later on, when you are more experienced, you can set up Autoscan, which will send the triggers to Plex. Thumbnails will be difficult, but if they are really important to you, you can decide to keep your media files on local storage for a good while so Plex can do everything it needs to do. Generally it's advised to just disable thumbnails, because generating them requires a lot of CPU power and time, especially if you're going for a low-power build. It all depends on your library size as well, of course. I've also disabled all the other scheduled tasks in the maintenance settings, like creating chapters and such, and especially media analysis. What I do use is the intro detection for series; I can't do without that. And generally series are easier to grab in good quality right away and require fewer upgrades than movies, so upgrades are less of a problem.

     Regarding the paths: you can remove or edit the /data paths and add /user in the docker template. Within the docker itself you need to check the settings of that docker and change the paths accordingly. I know binhex uses /data, so if you use mostly binhex containers it will often work OK. But because you use different path/mapping names, Unraid will not see it as the same drive and will thus treat it as a move from one drive to another instead of a move within the same drive. Moving server side is pretty much instant, but if you use the wrong paths it will go through your server and back to the cloud storage. So again: check your docker templates and the mappings/paths that point to media files and such, delete those, and only use one high-level path like /mnt/user or /mnt/user/mount_unionfs/Gdrive. Then go into the docker and change the paths used inside. Once you know what you are doing and what to look for it's very simple and quick; just do it docker by docker.

     Regarding your ITX build, I would recommend, if you have the money, getting at least one cache SSD, and better yet two SSDs (1 TB per SSD or more) for a cache pool with BTRFS. It's good for your appdata, but you can also run your downloads/uploads and Plex library from it. Especially the downloading/uploading will be better from the SSD because it does not have to switch between reading and writing like a HDD. Using one SSD for cache is fine as well, just be careful about what other data you let go through your cache SSD, because if your SSD dies (and unlike a HDD it will often die instantly), that data will be lost. And get backups of course. Just general good server housekeeping ;).

     EDIT: If you are unsure whether you are doing the mappings right, just show before and after screenshots of the template and the settings inside the docker, and we can check it for you. Don't feel bothered doing that; I think many struggle with the same thing in the beginning.
  12. For the GID and UID you can try the script Bolagnaise posted above. Make sure you alter the directory to your folder structure. For me this wasn't the solution, because it often got stuck on my cloud folders, but just try it; maybe it works and you don't need to bother further. If that doesn't solve your issue, then yes, you need to put the GID and UID commands where you suggested.

     We discussed your local folder structure earlier. The whole point of using mergerfs and the merged folder structure (/mnt/user/mount_unionfs/gdrive instead of /mnt/user/local/gdrive) is that it doesn't matter to your server and dockers whether the files are actually local or in your cloud storage; they are treated the same. If you then use the upload script correctly, it will only upload the files that are actually stored locally, because the upload script looks at the local files, not the merged files/folders.

     The disappearing of your folders is not something I've noticed myself. But are you looking at the local folder or the merged folder when you see that happening? If it's the local folders it would explain it to me, because the upload script deletes source folders once they are done and empty. If it happens when you look at your merged folder, then it seems strange to me.
  13. With the mappings, what I'm saying is that you can remove the /incomplete and /downloads paths for Sab and the /downloads and /series paths for Sonarr, and just replace them with one path to /mnt/user/. Then inside the dockers, for example Sab, you point your incomplete folder to /mnt/user/downloads/incomplete instead of /incomplete. That way you keep the paths uniform and the dockers will look at the same file through the same route (this is often a cause of file errors). What I find confusing when looking at your screenshots again is that you point to the local folders. Why are you not using the /mnt/unionfs/ or /mnt/mergerfs/ folders? I'm talking about the user script; within it (depending on what your script looks like) you have a part that does the mount, and after the mount you merge the mount (cloud) with a local folder. So I was talking about the two or three flags that you need to add to the mount part of your user script. If you use the full template from DZMM, you can just add those flags (--uid 99 --gid 100) to the list with all the other flags. Hope this makes more sense to you?
  14. Well, I'm not a big fan of your mappings. I don't really see direct conflicts there, but I personally just removed all the specific paths like /incomplete and such (I'm talking about the media paths here, not paths like the /dev/rtc one) and only use /user (or in your case /mnt/user), and then within the docker use that as the starting point to get to the right folder. Much easier to prevent any path issues, but that's up to you. I also had the permission issues, so what I did was add these lines to my mount script (not the merger, but the actual mount script for your rclone mount), since those root/root folders are the issue, as Sonarr is not running as root:

     --uid 99 --gid 100

     And in case you didn't have it already (I didn't):

     --umask 002

     Add these, reboot, and see if that solves the importing issue.
  15. You don't show the owners of your media folders? But I think it's an issue with the docker paths. You need to show your docker templates for Sab and Sonarr.
  16. You won't be able to make a Client ID indeed, which will mean a drop in performance. You should still be able to configure your rclone mount to test things. When you decide this is the way to go for you, I would honestly think about just getting an enterprise Google Workspace account. It's 17 euros per month and you won't run the risk of getting into problems with your EDU account owner. But I can't see the depths of your wallet ;).

     For BWLimit you would just put everything on 20M; don't erase anything and don't add any flags there. It's already handled further down in the list of flags used for the upload.

     For Plex, yes, you just use your subfolders for libraries, so /user/Movies and /user/TV-Shows. For all dockers in your workflow, like Radarr and Sonarr, Sab/NZBget, maybe Bazarr, you remove their own /data or whatever paths they use and add the /user path as well. So Radarr should also look into /user, but then probably /user/downloads/movies, and your Sab will download to /user/downloads/movies so Radarr can import from there. So don't use the /media and /data paths, because then you won't have the speed advantage of mergerfs. Just be aware that when you remove these paths and put in /user, you also have to check inside the docker (software) that the /user path is actually used. If it's configured to use /data then you have to change that to /user as well; see the sketch below.
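      To illustrate what I mean with the /user mapping (the docker run form is just the equivalent of an Unraid template; the container name and image are examples):

          # One broad volume mapping instead of separate /data, /media, /downloads paths
          docker run -d --name radarr -v /mnt/user:/user linuxserver/radarr

          # Inside Radarr you then point the root folder to e.g. /user/mount_mergerfs/gdrive/movies
          # and completed downloads to /user/downloads/movies, so an import is a move within
          # one mapping instead of a copy between two different ones.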
  17. Look into Powertop if you haven't already; it can make quite a difference depending on your build(s). I don't think moving to a seedbox for electricity costs will be worth it. You'll still have to run dual systems like DZMM does, or you would have to use just one server from Hetzner, for example, and do everything there. But you're easily looking at 40-50 euros per month already, and then you still won't have a comparable system (in my case at least). You removed your earlier posts regarding the 8500T, I think. Just to be clear: your Synology is only good for serving data, it will never be able to transcode anything efficiently. Maybe one 1080p stream, but not multiple and definitely not 4K. If you actually plan on running dockers and especially serving transcodes, I personally would only consider a Synology as an offsite backup solution.

     Regarding the education account: I suspect you are not the admin/owner of that? As far as I know, edu accounts are unlimited as well, but I don't think you can create Team Drives there as a non-admin, and I don't know what the storage limits are for your personal drive in that case. With the change from G Suite to Google Workspace it seems like the personal Gdrive became 1 TB and the Team Drives became unlimited. So you would have to test whether you can store over 1 TB on your personal drive if you don't have access to Team Drives. You also don't have access to service accounts, so you will only have your personal account with access to your personal Gdrive, and you have to be careful with API hits/quotas there. Should be fine if you just build up slowly.

     If you name drives/folders the same it is indeed a copy-paste, aside from the parameters you have to choose (backup yes/no, etc.). Just always do test runs first; that's rule 1 of starting with this stuff.

     Regarding the paths: since you will probably only have one mount, you just have to remove all the custom directories/names of the docker. Plex often has /tv and /movies and such in its docker template. Remove those, give the dockers in your workflow a /user path instead, and point that to /mnt/user/mount_unionfs/gdrive or whatever the name of your mount will be. This is important for mergerfs, since it will be able to hardlink, which makes moving data a lot faster (like importing from Sab to Sonarr).

     BW-limits are the limits with which you will upload. You'll have to look at the upload speed of your WAN connection and then decide what you want to do. For Google Drive, rclone now has a flag to respect the Google API when you have uploaded too much (--drive-stop-on-upload-limit). The situation you described is not really possible, I think. You would just set the bw-limit to 20MB/s and leave it running; if it hits the quota it will stop thanks to the above flag. But cancelling an upload job while it's running is not really possible or safe to do without risking data loss. So you either blast through your 750GB and let the upload stop, or you set a bw-limit it can run on continuously; see the upload sketch below. But I would first advise checking the edu account limits, then configuring the mount itself with encryption, and then seeing if you can actually mount it and get mergerfs running. After that you can start fine-tuning all the limits and such.
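      A minimal sketch of the upload itself, assuming an example encrypted remote called gdrive_media_vfs: and /mnt/user/local/gdrive as the folder holding the not-yet-uploaded files (both names are illustrative):

          # Move only what is physically local to the cloud, capped at 20 MB/s,
          # and stop cleanly once Google's daily upload quota is hit.
          rclone move /mnt/user/local/gdrive gdrive_media_vfs: \
            --bwlimit 20M \
            --drive-stop-on-upload-limit \
            --delete-empty-src-dirs \
            --log-level INFO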
  18. All the rclone mount commands are the same; they are not system specific. However, the paths and the use of mergerfs can differ. I found this for mergerfs: https://github.com/trapexit/mergerfs/wiki/Installing-Mergerfs-on-a-Synology-NAS. Once you have that working you just need to get the right paths into the scripts. But in the beginning just use simple versions of the script, run the commands by hand and see if it's working (something like the sketch below). The scripts from DZMM are quite complex if you want to translate them to your system.
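      A bare-bones version of the two steps, with example paths (/mnt/local, /mnt/remote, /mnt/merged) and an example remote called gdrive: that you would swap for whatever exists on your Synology:

          # 1) Mount the cloud remote
          mkdir -p /mnt/remote /mnt/merged
          rclone mount gdrive: /mnt/remote --allow-other --dir-cache-time 720h --vfs-cache-mode writes &

          # 2) Merge local and cloud into one view; local branch first so new writes land locally
          mergerfs /mnt/local:/mnt/remote /mnt/merged \
            -o rw,use_ino,allow_other,func.getattr=newest,category.create=ff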
  19. Did you really search? The first thing that comes up when I google it is this: https://anto.online/guides/backup-data-using-rclone-synology-nas-cron-job/ Once you've got rclone installed, and assuming you know your way around the terminal, you can follow any rclone guide and configure your mount through "rclone config". If there are specific steps you are stuck at, we need more information to help you.

     Probably the same as what you found. If you follow the guide from DZMM and use the AutoRclone generator, you should have a group of 100 service accounts that have access to your Team Drives. Then just put them in a folder, and while configuring your mounts remove all the client ID info and such and just point to the service account, for example "/mnt/user/appdata/rclone/service_accounts/sa_tdrive_plex.json". This way I have multiple mounts for the same Team Drive, but based on different service accounts. So when you hit an API quota you can swap your mergerfs folder from, for example, "/mnt/user/mount_rclone/Tdrive_Plex" to "/mnt/user/mount_rclone/Tdrive_Plex_Backup". The dockers won't notice and your API quota is reset again; see the sketch below for what such a remote looks like.
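      For reference, such a service-account-based remote can be created non-interactively like this (the remote name and the Team Drive ID placeholder are examples; the JSON path is the one from above):

          # Google Drive remote that authenticates with a service account instead of a client ID
          rclone config create tdrive_plex drive \
            scope drive \
            service_account_file /mnt/user/appdata/rclone/service_accounts/sa_tdrive_plex.json \
            team_drive YOUR_TEAM_DRIVE_ID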
  20. Man, I had the same and had been spending a week on it, but I found the solution for my situation yesterday. It turns out it will do this when you hit the API limit/download quota on the Google Drive mount you use for Plex. I was rebuilding libraries, so it can happen, but this was too often for me. It turned out Plex Server had the chapter previews task under Settings > Libraries enabled, both when adding files and during maintenance. This makes Plex go through every video and create chapter thumbnails. Once I disabled that, the problem was solved and Plex became responsive again.
  21. Still having issues? Because for me it's working fine using your upload script. I do have 80-100 rotating SAs, though.

     I've never been able to stop the array once I started using rclone mounts; I think the constant connections are preventing it. You could try shutting down your dockers first and making sure there are no file transfers going on, but I just reboot if I need the array down.

     I do use direct mounts for certain processes; for example, my Nextcloud photo backups go straight into the Team Drive (I would not recommend using the personal Google Drive anymore, only Team Drives). I always use encrypted mounts, but depending on what you are storing you might not mind that it's unencrypted. I use the normal mounting commands, although I currently don't use the caching ability that rclone offers.

     For downloading dockers and such, I think you need to check whether the download client is downloading in increments and uploading those, or first storing the whole file locally and then sending it to the Gdrive/Team Drive. If it's storing small increments directly on your mount, I suspect it could be a problem for API hits, and I don't like the risk of file corruption this could potentially introduce. Seeding directly from your Google Drive/Tdrive is for sure going to cause problems with your API hits; too many small downloads will ruin them. If you want to experiment with that, I suggest you use a separate service account and create a mount specifically for that docker/purpose to test. I have separate rclone mounts for some dockers, or combinations of dockers, that can create a lot of API hits, and I keep them separate from my Plex mount so they don't interfere with each other.
  22. Thanks for this. I had been experimenting with adding the 99/100 lines to my mount scripts. That didn't seem to work. I removed them and added the umask; that alone was not enough either. So I think the safe permissions tool did the trick. I've added both umask and 99/100 to my mount scripts now and it seems to work. Hopefully it stays like this. I wonder if using 0/0 (so mounting as root) would actually cause problems, because you might not have access to those files and folders when you are not using a root account. Anyway, I'm glad it seems to be solved now.
  23. What is the reason for you to use uid 98 and gid 99 instead of the uid 99 and gid 99?
  24. I think these are two separate issues, because I have both. The green color you've shown I also had; when testing the file directly on my laptop it showed the same green tint, so that's not a Plex issue. And Plex does indeed have confirmed issues with HDR tone mapping, especially on latest-gen Intel GPUs, which have not been solved yet. To rule out your GPU as the cause, make sure hardware transcoding is actually working by checking your Plex dashboard while watching a transcoded video.
  25. After updating to 6.10.0-rc1, HW transcoding on my 11th-gen (11500) is not working anymore. I removed the modprobe file; that also didn't help. The driver does seem to load fine without the modprobe file. Maybe it has to do with the Intel GPU TOP plugin needing an update, I have no idea. But just be aware that HW transcoding is currently fully broken for me. Before, it would HW transcode even HDR on plenty of movies (not all, as we discussed earlier).