Bjur

Everything posted by Bjur

  1. Hi, I've received this mail from Google. Do I need to do anything to not lose files with this plugin? "A security update will be applied to Drive. On September 13, 2021, Drive will apply a security update to make file sharing more secure. This update will change the links used for some files, and may lead to some new file access requests. Access to these files won't change for people who have already viewed them. What do I need to do? You can choose to remove this security update in Drive, but removing the update is not recommended and should only be considered for files that are posted publicly. After the update is applied, you can avoid new access requests by distributing the updated link to your file(s). Follow the instructions in the Drive documentation. Which files will the update be applied to? See files with the security update."
  2. You can just do as I do: use exclusions to keep metadata and subs from being uploaded (something like the sketch below).
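     A rough sketch of the kind of exclusions I mean, assuming the upload is done with rclone and that these extensions match your metadata/subtitle files (remote name and path are placeholders):

         # Hypothetical upload command; the --exclude flags are the point here.
         rclone move /mnt/local/gdrive_media_vfs gdrive_media_vfs: \
           --exclude "*.nfo" \
           --exclude "*.srt" \
           --exclude "*.sub" \
           --exclude "*.jpg" \
           --exclude "*.png"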
  3. Sorry, you are right. I've just reverted back to the standard port 32400. But would it be safer, or is the standard 32400 good enough, since it's not a direct local login on Plex?
  4. Thanks. So what you're saying is that opening port 32400 is just as secure as running it behind a reverse proxy? I don't have port 32400 open, since it's running behind a reverse proxy, so port 1443 is open and also set to 1443 in the Plex settings.
  5. Because I can use my subdomain without using specific ports. I also thought it would add more security?
  6. Hi, is anyone here using Plex behind a reverse proxy? I have installed Letsencrypt (now SWAG) and have set up Plex and other dockers on proxynet. I can get it all to work, but I can't get Plex to show green on remote access. I have specified the public port as 1443 (used SpaceInvader One's guide) and set the custom server access URLs to my https://plex.xxx.xx domain. I can't get remote access to work no matter what I try, and it says indirect playback and max 2 Mbps on my cell phone outside my home network, even though I have disabled any bandwidth limitations. My settings are summarized below. Can someone please help me? :)
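     For reference, a rough summary of what I have set (domain and port are placeholders for my own):

         Router:  external port 1443 forwarded to the SWAG container
         SWAG:    Plex proxy conf enabled, Plex container on the proxynet network
         Plex:    Settings > Remote Access > manually specify public port: 1443
         Plex:    Settings > Network > Custom server access URLs: https://plex.xxx.xx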
  7. Is it possible to somehow check whether the passwords are correct for the mounts I created?
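     One way I could test this myself, assuming the mounts are rclone crypt remotes (the remote names here are placeholders):

         # List the top level of each remote; a wrong crypt password usually
         # shows up as decrypt/decode errors instead of folder names.
         rclone lsd gdrive_media_vfs:
         rclone lsd gdrive_media_vfs2:
         # Show the configured remotes (passwords are stored obscured).
         rclone config show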
  9. Hi, first of all thanks for the excellent videos, SpaceInvader. I have downloaded Letsencrypt and followed the video, but after selecting the newly created network for Sonarr I can't access Sonarr, even locally. Do any of you have an idea what is wrong?
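     A few checks I could run from the Unraid terminal, assuming the custom network is called proxynet and the container is named sonarr (both placeholders):

         # Confirm the container actually joined the custom network
         docker network inspect proxynet
         # Check which host port Sonarr's web port (8989) is mapped to, if any
         docker port sonarr
         # Watch the container log for startup errors
         docker logs -f sonarr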
  10. Hi, thanks for the fine answer. I'm using Usenet and just downloading to a standard share. When I downloaded to UD I got fast speeds, so it must be the parity writing. I initially downloaded to an SSD, also with fast speeds, but it's a Samsung EVO 860 or 870, can't remember, and I didn't want to use that because of the wear and tear, so I only use it for dockers. Does that make sense?
  11. I'm still not 100% satisfied with my download speed. When I download something I only get around 20 Mbps even though I have a gigabit line. I have tried both the local and the mergerfs folder. Could it perhaps be that parity writes are slowing it down, and if so, how do I solve it?
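     One way I might confirm it's parity, assuming a share called downloads exists (the paths here are placeholders): write a test file straight to the parity-protected array, then to the cache/UD device, and compare the speeds.

         # /mnt/user0 is the array-only view of a user share (bypasses the cache)
         dd if=/dev/zero of=/mnt/user0/downloads/ddtest bs=1M count=2048 oflag=direct
         # Same test against the cache pool for comparison
         dd if=/dev/zero of=/mnt/cache/ddtest bs=1M count=2048 oflag=direct
         rm /mnt/user0/downloads/ddtest /mnt/cache/ddtest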
  12. Hi, in my mount folders I just want media/tv. When I do MountFolders=\{"media/tv"\} I get 2 folders: media/tv, which is correct, but also a stray "{media" folder with a "tv}" subfolder. How do I stop that?
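     For what it's worth, this looks like plain bash brace expansion, which the mount script appears to rely on: a single name inside braces is not expanded, so the braces end up in the folder name, while two or more comma-separated names expand as intended. A quick sketch (paths are just examples):

         # A single name in braces is NOT expanded; the braces become part of the path:
         mkdir -p {"media/tv"}                    # creates "{media" containing "tv}"
         # With comma-separated names, bash expands the braces before mkdir runs:
         mkdir -p {"media/tv","media/movies"}     # creates media/tv and media/movies

     So with only one folder in MountFolders the braces seem to end up on disk; keeping at least two comma-separated entries (or creating the single folder by hand) avoids that.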
  13. Hi, sorry, I have a couple of extra questions: 1. If I set my download folder to /mnt/user/mount_mergerfs/downloads I only get around 10 MBps; if I set it to /mnt/local/downloads I get around 70 MBps. So when you write there's less chance of something going wrong, I want to make sure I actually get more benefit from using the mergerfs folder, since my DL speed is severely limited with that method. Are others experiencing the same, and are there any major disadvantages to using local instead? 2. How often do you scan the library in Plex, and will "Run a partial scan when changes are detected" quickly result in an API ban?
  14. So will it be fine to create a download folder in the root of /mnt/mergerfs/ and move completed files into each of the crypt folders?
  15. @DMZZ @watchmeexplode5 Thanks for the answer. So should I create my download folder in the root of /mnt/mergerfs/, since I have 2 separate mounts, like I did on local? I don't think it would make sense to put it in movies and afterwards move the completed files to the other drive if it's not a movie. Can you follow?
  16. Okay, thanks for the information. And I guess the same goes for /mnt/local with regard to moving files there first? I just followed watchmeexplode5's advice to move downloaded stuff into local first. I also remembered you writing a post earlier about files not being overwritten correctly, which caused multiple copies, so I wanted to make sure.
  17. Hi, a question. If I wish to overwrite an older version with a new video, would it be best to move it to the local folder first, which would be empty because the video is already uploaded and appears in mergerfs, and then run the upload script to let it overwrite? Or should I move it directly to the mergerfs folder and run the upload script? I don't want copies of the video, I just want the older version that is already in the cloud to be overwritten.
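     My understanding, assuming the upload script does an rclone move/copy from /mnt/local to the crypt remote: rclone only transfers files that are new or changed, and a changed file at the same relative path replaces the copy in the cloud, so dropping the new version into the otherwise empty local folder at the same path should overwrite rather than duplicate. Is that right? A sketch (remote name and paths are placeholders):

         # New version placed at the same relative path as the already-uploaded one:
         #   /mnt/local/gdrive_media_vfs/movies/SomeMovie/SomeMovie.mkv
         # The next upload run then replaces the cloud copy instead of adding a second file:
         rclone move /mnt/local/gdrive_media_vfs gdrive_media_vfs: -vv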
  18. No, I have my files in /mnt/local/remote and I'm running the upload script. I want to add new files to /mnt/local/remote while it is running.
  19. Okay, have you tried it? I just read earlier today, regarding rclone, that it was not a good idea, so I want to make sure before I try it out. I don't want to mess up the upload I'm running, since it will take a long time to finish. Thanks for the help.
  20. @watchmeexplode5: Thanks. So I think I will create manual download folders at /mnt/local/Downloads, /mnt/local/Downloads/temp and /mnt/local/Downloads/finished, and then let FileBot move them to /mnt/local/googleFi_crypt and /mnt/local/googleSh_crypt. What will happen when you move files to the local folder while an upload is running? Will that be fine, or will it mess the upload up?
  21. @watchmeexplode5 Would it make more sense for me to make a download folder at /local/download instead of /local/REMOTE/downloads, limit the local folder share to only 1 disk, and have the completed folders as /local/googleFi_crypt/movies or /local/googleSh_crypt/series (sketched below)? It should then be easy to move the completed files quickly and still have a separate download folder before the remotes. Does that make sense?
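     Roughly the layout I have in mind (share names are just mine):

         /local/download                    <- single download share, limited to 1 disk
         /local/googleFi_crypt/movies       <- completed movies, picked up by the upload script
         /local/googleSh_crypt/series       <- completed series, picked up by the upload script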