Everything posted by Kaizac

  1. I responded to you earlier today, but somehow the forum didn't post it in the topic. Your problem is probably that you haven't claimed your server yet. You can find more information about that in this topic and in the Plex documentation. If that doesn't solve it, let us know so we can troubleshoot.
  2. It literally says in the log just below it that you don't have to worry about the error given. 🤣
  3. The only folder you need to work with is mount_mergerfs for all your mappings. So you download to mount_mergerfs, and at first the files will be placed in your local folder. From there the upload script moves them to the cloud folder. So it basically migrates from user/local to user/mount_rclone, but using rclone to prevent file corruption and such. The mount_mergerfs folder won't see a difference; it will just show the files and not care about their location.
     With Deluge and torrents in general this setup is a bit more tricky. Seeding from Google Drive is pretty much impossible: you will get API banned quickly and then your mount won't work until the reset (often midnight, or after 24 hours). So you'll have to seed from your local drive, which means you need to prevent these files from being uploaded. You can do that based on the age of the files, or you can use a separate folder for your seed files and add that folder to your mount_mergerfs folder. Then, after you have seeded them enough, you can move them to your local folder to be uploaded.
     I don't have much experience with torrents; DZMM had a setup with them though. Maybe he knows some tricks, but the most important thing to realise is that seeding from your Google Drive will not work.
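     As a rough sketch of the age-based approach (the paths, remote name and 15-day cut-off are my own assumptions, not from the guide), an upload pass that skips anything younger than 15 days and ignores a dedicated seeds folder could look like this:
         rclone move /mnt/user/local/gdrive_media_vfs gdrive_media_vfs: \
           --min-age 15d \
           --exclude "seeds/**" \
           --delete-empty-src-dirs
     That way active seeds stay on the local drive and only settled files get migrated to the cloud.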
  4. That's good news! Hope it keeps working trouble-free for you ;).
  5. Ah, I forgot: the mount_mergerfs or mount_unionfs folder also needs its permissions fixed. I don't know whether the problem lies with DZMM's script; I think the script creates the union/merger folders as root, which causes the problem. So I just kept my union/merger folders and fixed their permissions as well. But maybe letting the script recreate them will fix it. You can test it with a simple mount script to see the difference, of course. It's sometimes difficult for me to advise, because I'm not using DZMM's mount script but my own, so it's easier for me to troubleshoot my own system.
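     For reference, "fixing permissions" on those folders boils down to something like this, run while the mounts are not active (the paths are assumptions based on the default folder names in this guide):
         chown -R nobody:users /mnt/user/local /mnt/user/mount_rclone /mnt/user/mount_mergerfs
         chmod -R u=rwX,g=rwX,o=rX /mnt/user/local /mnt/user/mount_rclone /mnt/user/mount_mergerfs
     Unraid's New Permissions / Docker Safe New Perms tools do roughly the same thing through the GUI.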
  6. Well, we've discussed this a couple of times already in this topic, and it seems there is not one fix for everyone. What I've done is add --uid 99 --gid 100 to my mount script. For --umask I use 002 (I think DZMM uses 000, which allows read and write for everyone and which I find too insecure, but that's your own decision). I rebooted my server without the mount script active, so just a plain boot without mounting. Then I ran the fix permissions on both my mount_rclone and local folders. Then you can check again whether the permissions of these folders are properly set. If that is the case, you can run the mount script and check again. After I did this once, I never had the issue again.
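     To show where those flags sit, here is a stripped-down mount command with the ownership flags in place (the remote name, mount point and remaining flags are illustrative assumptions, not DZMM's exact script):
         rclone mount gdrive_media_vfs: /mnt/user/mount_rclone/gdrive_media_vfs \
           --uid 99 --gid 100 --umask 002 \
           --allow-other \
           --dir-cache-time 720h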
  7. Did you get any further with this? Sqlite should be in the docker, but you can't just use the Plex commands as-is; you'll have to convert them to suit the Linux/linuxserver docker. That's why I gave you the references for how they translate from the Plex wiki to the actual working commands. Otherwise, maybe someone on reddit will be able to "translate" them for you. But why don't you just restore a backup? I did all this script shit before and nothing worked; then I just plunked back a backup, ran an update of the libraries, and it was good again. You'll miss a bit of your watched data depending on how recent your backup is, but there is no guarantee you will be able to extract non-corrupted metadata either.
  8. It shouldn't be root; that causes problems. It should be user: nobody, group: users. On Unraid that usually maps to 99/100.
  9. I don't know how you set up your Unraid box, or whether you put in the custom network (VLAN) properly. Bridge is always an option, as long as the ports used are free on your Unraid box. I'm not a fan of using bridge, but it should at least work when using free ports. You can also use custom:br0 so it will get an IP on your LAN.
     I'm not familiar with the command you are using. Maybe this can be some reference for you? Plex has the following article: https://support.plex.tv/articles/repair-a-corrupted-database/ The commands there translate to the following for the linuxserver docker. First cd to the databases folder:
         cd "/config/Library/Application Support/Plex Media Server/Plug-in Support/Databases/"
     Then run:
         "/usr/lib/plexmediaserver/Plex Media Server" --sqlite com.plexapp.plugins.library.db "PRAGMA integrity_check"
         "/usr/lib/plexmediaserver/Plex Media Server" --sqlite com.plexapp.plugins.library.db "VACUUM"
         "/usr/lib/plexmediaserver/Plex Media Server" --sqlite com.plexapp.plugins.library.db "REINDEX"
         "/usr/lib/plexmediaserver/Plex Media Server" --sqlite com.plexapp.plugins.library.db ".output db-recover.sqlite" ".recover"
         "/usr/lib/plexmediaserver/Plex Media Server" --sqlite com.plexapp.plugins.library.db ".read db-recover.sqlite"
         chown abc:users com.plexapp.plugins.library.db
  10. No you don't, just make sure your firewall is set up to allow communication if needed (for example, to ping Plex from Sonarr/Radarr when you have new grabs).
  11. Ok, this was a learning curve for me, but I think I found it (for me it worked to take the Plex service down and bring it up again):
          cd /var/run/s6-rc/servicedirs/
          s6-svc -d svc-plex
      Hope it helps. Everything is better than the support at the Plex forums, to be honest... Almost everyone on Unraid is using the linuxserver dockers, so you have more people with similar setups and possible issues. And the linuxserver team is always on top of things, which makes it easier to differentiate between docker issues and Plex issues. Linuxserver also works with the Plex Pass; you just have to use the right variables when setting up the docker. Read the linuxserver instructions for Plex before installing and you should be good.
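      The post above only shows the "down" half; assuming the same s6 service directory, bringing Plex back up afterwards should be the matching up command:
          s6-svc -u svc-plex   # start the service again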
  12. I just checked and I get the same error. Seems like they moved the location within the docker. I will see if I can find the new location.
  13. And you're using the linuxserver docker and not another one?
  14. Did you run the commands from the Plex docker terminal?
  15. Just checked with a reboot and the script is currently pulling 2.33.5. So right now there are no issues with using this script as far as I can tell. Thanks for the heads-up; in case the bug persists in the next release, we know to switch back!
  16. Just checked the GitHub and it still shows 2.33.5 from April in the releases. So which August one are you referring to?
  17. It's being built through a docker, yes. You can often see the mergerfs docker among your deleted dockers. I'll have to find the correct syntax to get a working mergerfs if this issue persists.
  18. I run 64GB of RAM and it sits at around 30% used constantly. That doesn't mean you need that much, but 16GB is not a lot. You have to remember that during playback all the chunks are stored in your RAM if you are not using the VFS cache, and if Plex is also transcoding to RAM that consumes memory too. So I would lower the chunk sizes in both the upload and mount scripts.
      The other issue is one you should not have if you followed the instructions: you are missing files because you are not using mergerfs. For me files never disappear, because it doesn't matter whether the files move from local to cloud.
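      To make "lower the chunk sizes" concrete, these are the kind of RAM-related mount flags to turn down (the values are only illustrative, not a recommendation):
          --buffer-size 32M \              # read-ahead buffer held in RAM per open file
          --vfs-read-chunk-size 64M \      # initial chunk requested from Google
          --vfs-read-chunk-size-limit 512M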
  19. That's what I expected to be the cause: you don't have enough RAM in your server. If you start uploading, it will consume a lot of RAM. So you could look at the upload script, check the separate flags and reduce the ones that use your RAM. When I have my mounts running and uploads going, it often takes a good chunk of my quite beefy server.
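      In the upload script the usual RAM knobs look like this (illustrative values; each active transfer buffers roughly one --drive-chunk-size in memory):
          --transfers 2 \
          --drive-chunk-size 64M \   # memory used per active transfer
          --buffer-size 16M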
  20. I don't think server-side copy works like that. It only works within the same remote, moving a file/folder inside that one remote. So it will probably just end up being a 24/7 sync job for a month. Or maybe get a 10GB VPS and run rclone from there for the one-time sync/migration. The main problem is the 10TB download limit per day (if they haven't lowered it by now).
      I'm not sure about the move yet though. I also use the Workspace for e-mail, my business and such, so it has its uses. But them not being clear about the plans and what is and isn't allowed is just annoying and a liability in the long run.
      Didn't we start with unionfs? And I think we only started to use VFS after mergerfs became available. So that could be the explanation for your performance issue during your test. I wonder if there are other ways to combine multiple team drives to make them look like one remote and thus hopefully increase performance for you. I'll have to think about that.
      EDIT: Is your RAM not filling up? Or CPU maybe? Does "top" in your Unraid terminal show high usage by rclone?
  21. Interesting. I have like 6 mounts right now, and I did notice that rclone is eating a lot of resources indeed, especially with uploads going on as well. So I'm curious what your performance will be when you're finished. Are you still using the VFS cache on the union as well?
      Another thing I'm thinking about is just going with the Dropbox alternative. It is a bit more expensive, but we don't have the bullshit limitation of 400k files/folders per Tdrive, no upload or download limits, and just 1 mount to connect everything to. It only has an API-hits-per-minute limit which we have to account for. And I can't shake the feeling that Google will sunset unlimited for existing users as well any time soon.
  22. Thanks for testing it already; I didn't have time to do it sooner. Disappointing results, it would have been nice if we could fully rely on rclone. I wonder why the implementation seems so bad?
  23. Dropbox should work fine; others have switched over to it. Just be aware that the trial is only 5TB. And when you do decide to use their service, you should push for a good amount of storage up front, otherwise you will keep having to ask for more while migrating.
  24. It's something Google recently started doing for new accounts; I've told you about this possibility earlier. You can ask for more, but they will probably demand you buy 3 accounts first, and then you'll still have to explain yourself and ask for more storage in increments of only 5TB. The alternative is Dropbox unlimited, which also requires 3 accounts, but it is really unlimited with no daily upload limits. It all depends on how much storage you need and your wallet.
  25. You're misunderstanding the way service accounts work. They function as regular accounts, so instead of using your mail account, you use a service account to create your mount. What I did is rename some of the service account files after the mount they represent; you can put them in a different folder. Like sa_gdrive_media.json for your media mount. For this you don't point to the folder containing the service accounts, but to the exact file, which you did in your last post. By separating this you also won't use these SAs for your upload script, separating any potential API bans.
      The upload script will pick the first of the service accounts. When it hits the 750GB daily upload limit it will stop. That's why you put it on a cron job, so it will start again with service account 2 until that one hits the limit, and so on. Just so you understand: the script doesn't just keep running through all your service accounts on its own; you have to restart it through a cron job.
      About moving your current folder to your teamdrive: it depends on whether the files are encrypted the same way as on your new team drive mount, so with the same password and salt. If that is the case, you can just drag the whole folder from your gdrive to your team drive on the Google Drive website, which saves a lot of time waiting for rclone transfers to finish. You can even do this to transfer to another account. The encryption has to be identical, though.
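      A minimal sketch of the two pieces (remote names, paths and the schedule are assumptions for illustration, not the guide's exact setup):
          # rclone.conf: mount remote pinned to one specific service account file
          [gdrive_media]
          type = drive
          scope = drive
          team_drive = <your teamdrive ID>
          service_account_file = /mnt/user/appdata/other/rclone/service_accounts/sa_gdrive_media.json

          # cron entry that re-launches the upload script every hour so it can
          # carry on with the next service account after the 750GB stop
          0 * * * * /mnt/user/data/rclone_upload.sh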