Everything posted by Kaizac

  1. Sorry, the mobile view screwed up your quote, and it seemed like you had merged everything to /mnt/user/data/, which is the parent folder. But it merges to /mnt/user/data/mergerfs, which is fine. The permission problems have been discussed a couple of times in this topic; you don't need to go back too far to find them. Can you show your docker container template for Radarr? And are you downloading with SABnzbd or something? What path mapping is used there? Also, you can go to the separate /mnt/user/data/xx folders (local/remote/mergerfs) with your terminal (cd /mnt/user/data/local/) and type ls -la. It shows you the owners of the folders. I would expect it to say "nobody/users".
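     As a rough illustration of that ownership check (a sketch; the example output line is made up, your folder names and dates will differ):

        cd /mnt/user/data/local/
        ls -la
        # expected owner/group on a standard Unraid setup is nobody/users, e.g.:
        # drwxrwxrwx 1 nobody users 4096 Jun  1 12:00 movies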
  2. You're merging a subfolder with its parent folder. That doesn't work.
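     To illustrate the kind of layout this refers to (paths borrowed from the post above; the branch names are assumptions):

        # problematic: the merge target is the parent folder of its own branches
        mergerfs /mnt/user/data/local:/mnt/user/data/remote /mnt/user/data -o rw,allow_other,category.create=ff
        # fine: the merge target is a separate subfolder next to the branches
        mergerfs /mnt/user/data/local:/mnt/user/data/remote /mnt/user/data/mergerfs -o rw,allow_other,category.create=ff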
  3. What @JonathanM means is the technically correct explanation of how read-only permissions work with file systems. What Google is doing is just limiting uploading and calling it "read only". So if you have set up your system as written down in this topic, you can just disable your upload script. It will then still see it as 1 folder, but just keep your new files stored locally. Just make sure /mnt/user/local/ is not cache-only, or you will run out of space eventually.
  4. Glad it works! Does it now show up in your Dashboard with direct play? All problems solved?
  5. You don't set permissions within the container, but within Unraid. That's the mapping of /movies within your docker template. It could be that the files themselves have restricted access, but then it wouldn't be able to play them in the first place. You could try the Unraid file explorer (top-right icon), go to your Plex media folder, select it, and set permissions to everything read/write. The file could also be corrupt, so do all your test files have this issue?
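     If you prefer the terminal over the file explorer, something along these lines should have the same effect (a sketch; /mnt/user/plex-media is a placeholder for your actual media share):

        # give everything under the media share the usual Unraid ownership and read/write permissions
        chown -R nobody:users /mnt/user/plex-media
        chmod -R u=rwX,g=rwX,o=rX /mnt/user/plex-media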
  6. So I'm still doubtful about that Disk1. Normally you name shares after the type of files or the category you store in them. A share name would be Media, for example, and the path would be /mnt/user/Media. So are you sure you have a share named Disk1 in which you have the folder plex-media? And what library path do you use for the library that contains the files you're testing with? EDIT: if you check the movie in Plex, does it show the correct runtime?
  7. Oh, and I forgot the network settings within Plex. Did you put your LAN networks in there, like 192.168.1.0/24 or whatever your subnet(s) are? The Shields are direct playing, but maybe they don't find the direct route to your Unraid box and go out over WAN and back to your Unraid server, which can cause bandwidth limits.
  8. Unraid 6.14? Probably a typo, but 6.12.4 is the latest (stable) version. And it's not so much a driver missing when installing Intel_GPU_TOP; it exposes the GPU readings to Unraid, so you can check them with the GPU Statistics plugin, for example. That way you know to what capacity your iGPU is being used. For the internal Plex settings, try setting both the transcoder quality and the hardware transcoding device to Auto, and the background transcoding preset to Very Fast. In the Docker template, change "plexpass" to "docker". And did you make sure to use the Plex claim function? Just checking, I expect you did and just put in the link for anonymization. Another thing that is odd to me is that you are apparently using a share called "Disk1"? Is that correct?
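     For reference, the relevant bits of a linuxserver Plex container with the iGPU passed through might look roughly like this (a sketch; the image tag, IDs, and paths are assumptions based on the standard Unraid template):

        docker run -d --name=plex \
          --net=host \
          -e PUID=99 -e PGID=100 \
          -e VERSION=docker \
          --device=/dev/dri:/dev/dri \
          -v /mnt/user/appdata/plex:/config \
          lscr.io/linuxserver/plex:latest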
  9. Definitely a problem with your setup, not Unraid. Comments like "I really want to like Unraid, but......" are not going to encourage people to try and help out. We're not here to motivate you to use a product. Your comments, and what I've seen in your other topic, show some lack of understanding of the technology. Not a problem at all, but right now you're pointing at all other factors as the problem instead of your own lack of understanding. I'm running Plex on the latest Unraid stable version and also the latest Plex version (Plex Pass), and my HW transcodes even still work. So I don't even have the issues the others here have.
     - Firstly, in your own topic I see that you're playing a file that has both audio being transcoded and a PGS subtitle. PGS subtitles are always CPU transcodes, and single-core at that. So CPU spikes during playback of such files are normal.
     - Second, I'm not seeing your GPU stats, so I wonder which plugins you actually installed for your GPU? You should have the plugin "Intel-GPU-TOP", and with the plugin "GPU Statistics" you can see your GPU workload.
     - You're in the linuxserver Plex topic right now, so use that docker, not the other ones. After you've installed it properly and can access your Plex server, click on the Plex docker and then > Console. Enter: ls /dev/dri. What is the output in the console?
     - How are you playing a video, like in the screenshot above? What device? What app?
     - What are your Plex transcoder settings?
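     For comparison, when the iGPU is passed through correctly, the output of that command typically looks something like this (device numbers can differ per system):

        ls /dev/dri
        # by-path  card0  renderD128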
  10. No, it usually happens when you had an unclean shutdown or the script didn't finish correctly. In that case you need to remove the file manually, or with some automation, for example at the start of the array. If the script runs correctly, it deletes the file at the end. It's there to prevent the same script from starting a second instance of the same job.
  11. It's a checker file in appdata/other/rclone/remote/gdrive_media_vfs. Just delete that file and you can start the upload.
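     If you want to clear it automatically, for example with a User Scripts job at array start, something like this would do it (a sketch; the exact checker filename, assumed here to be upload_running, depends on your script version, so check the folder first):

        # remove a stale upload checker file left behind by an unclean shutdown
        rm -f /mnt/user/appdata/other/rclone/remote/gdrive_media_vfs/upload_running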
  12. I think you're talking about the rclone VFS cache? That just keeps recently played files cached locally so you don't have to download them every time you start accessing them. It is not some file system you can access. I was talking about the Unraid cache, which you use for a share.
     About the moving and Google deleting accounts: I think people are expecting too long a period in which they can keep access to the files. Maybe Google has some kind of legal obligation to keep access to the files as long as the user is paying, but I doubt it. I would count on 6 months maximum before accounts get deleted.
     I think you are also one of the VERY few who will be going the full local route. The investment is just way too big, and there are much cheaper solutions which cause a lot fewer headaches. Moving files only once you have space again is still foreign to me. I think most will first make a selection of what data they want to keep local, instead of just moving everything over without prioritizing. But if that's what you really want to do, you could use the rclone move script to move from your cloud to your /mnt/disks/diskXXX once you've added a new drive (see the sketch at the end of this post). Once that drive is full, the script won't work anymore.
     I still believe the 2 root paths per folder with the RR's is the way to go for most people who want to move files locally, but selectively. Setting that up and adding the paths in Plex would take 15 minutes maximum. After that, you can go through Radarr and Sonarr per item. Obviously, with your volume, this is time-consuming. But like I said, your plan is not something many will do.
     I don't know your personal and financial situation, so I don't want to make too many assumptions. But knowing that I just purchased 88 TB of storage with deals and still spent 1300 euros, I don't think going to 300+ TB is reasonable. And I don't even plan to go local; I just needed drives for backups and such.
     I don't think you need to worry about the main post anymore. It's not possible anymore, so now it's mostly about moving local again ;).
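     As a rough sketch of that move approach (the remote name matches the scripts in this topic; diskXXX is the author's placeholder for whichever unassigned disk you added):

        # move files from the cloud remote straight onto a specific disk; errors out once the disk is full
        rclone move gdrive_media_vfs:Movies /mnt/disks/diskXXX/Movies -P --transfers 6 --checkers 8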
  13. I'm a bit confused by your post, since you seem to be sharing info and also asking questions?
     Firstly, I think people with big libraries should obviously first decide what they want to do with their cloud media. Do you need all those files, or can you sanitize? Secondly, I think something many people haven't done so far (there was no need to) is being more restrictive on media quality and file size. Another possibility, which I think not many used, is to add something like Tdarr to your flow to re-encode your media.
     The problem with moving local right now, I think, is that we have often nested the download location within our merge/union folder. Now that you disable the downloading, the files are stuck on your local drive (often a cache drive, for speed). So when your mover starts moving your media to your array, your download folder structure also gets moved, breaking dockers like Sab/NZBget. I've thought a lot about this and how to work around it. One idea might be to create a separate share and add it to the merged folder. But I still expect problems with the RR's not being able to move the files, because your local folder is still the write folder.
     I think the easiest choice is to leave everything as it is and install the Mover Tuning plugin from the app store. With it you can supply a file of paths that the mover should ignore. This way you keep native Unraid functionality with the mover, no need for rclone, and you keep your structure intact while changing your share from cache-only to cache-to-array.
     And in case you didn't know, you can use the /mnt/user0/ path to write directly to your array, bypassing your cache. However, within the union/merger folder I don't see how we can use that and still have the speed advantages of the merged folder with a root structure of /mnt/user/ as /user.
     Regarding the script, I think you have no other option than to define the directories you want to move over manually. Something like this:
        #!/bin/bash
        rclone copy gdrive_media_vfs:Movies '/mnt/user0/local/gdrive_media_vfs/Movies' -P --buffer-size 128M --drive-chunk-size 32M --checkers 8 --fast-list --transfers 6 --bwlimit 80000k --tpslimit 12
        exit
     I don't see why you would want to pump all the media over without being selective, though. Getting 350TB local is not something you will achieve without a big investment in drives and a place to put them. You'd almost certainly need to move to server equipment to house that many drives, and with current prices you'd be looking at about 5k worth of drives.
     So you could also add another folder per category, like Movies_Keep, and add that folder to your Plex libraries. Then within the RR's you can determine per item whether you want it local or not. The RR's will take care of moving the files, and Plex won't notice a difference. And you can just run your rclone copy script on those folders' contents, without needing to specify anything further.
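     For that Movies_Keep approach, the copy script above only needs the folder swapped, roughly like this (a sketch; Movies_Keep is just the example category folder from this post):

        rclone copy gdrive_media_vfs:Movies_Keep '/mnt/user0/local/gdrive_media_vfs/Movies_Keep' -P --buffer-size 128M --checkers 8 --transfers 6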
  14. The port problem you can fix the way you're doing it now. But I wonder if it's not another rclone script (maybe the upload?) that's using the --rc. I can't find the remote command in the mount script. Anyway, that's not what you are trying to do. You want to combine multiple mounted remotes into 1 merged folder. That's perfectly possible, but not with this script. You will need a custom script which does the additional mount first and then a changed merge. You could try to copy part of the script to the top to handle the first mount. The snippet for the merge would look like this:
        mergerfs /mnt/user/local/gdrive_media_vfs:/mnt/user/mount_rclone/gdrive_media_vfs:/mnt/user/mount_rclone/onedrive /mnt/user/mount_mergerfs/gdrive_media_vfs -o rw,async_read=false,use_ino,allow_other,func.getattr=newest,category.action=all,category.create=ff,cache.files=partial,dropcacheonclose=true
     Test this first with some separate test folders to make sure it works as you'd expect.
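     A quick way to do that test with throwaway folders could look like this (a sketch; all of the test paths are made up):

        # create two source branches and a merge target, then merge them
        mkdir -p /mnt/user/test/branch1 /mnt/user/test/branch2 /mnt/user/test/merged
        touch /mnt/user/test/branch1/a.txt /mnt/user/test/branch2/b.txt
        mergerfs /mnt/user/test/branch1:/mnt/user/test/branch2 /mnt/user/test/merged -o rw,allow_other,category.create=ff
        ls /mnt/user/test/merged   # should show both a.txt and b.txt
        # unmount and clean up when done
        fusermount -uz /mnt/user/test/merged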
  15. Transferring from within your folder structure is riskier. You won't get the Google Drive feedback signals, and if your mount drops its connection, files might also get corrupted. You need to change --drive-stop-on-upload-limit to --drive-stop-on-download-limit. You're not uploading but downloading, which has a 10TB limit per day. Not something you can reach with a gigabit connection, so it's not really needed. Regarding encrypted data: when you transfer from the crypt mount to your local storage, it should be the decrypted data. I'm doing that as we speak. But your rclone mount and folder structure seem strange to me. It looks like you are copying from your regular mount, since you use an encrypted folder name. You need to copy from your crypt mount. So let's say you have gdrive: as your regular remote and your crypt, named gdrive_crypt:, is pointed at gdrive:, then you would transfer from gdrive_crypt: to your local storage.
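     Roughly, such a transfer could look as follows (a sketch; the remote names match the example above, and the Movies folder and local path are placeholders):

        # copy decrypted data from the crypt remote to local storage, stopping if the daily download limit is hit
        rclone copy gdrive_crypt:Movies /mnt/user0/local/Movies -P --transfers 6 --drive-stop-on-download-limit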
  16. Just a heads-up for you and others considering moving to Dropbox. A day or two ago, Dropbox adopted the policy of only granting 1TB of additional storage per user per month. So getting storage allocated beforehand and then increasing it rapidly for a migration will be impossible, or you must be very lucky with the support rep you get. Dropbox can't handle the big influx of Google refugees. But there have also been strong rumors that Dropbox is moving to the same offering as Google, limiting storage to 10TB per user. So be warned that you might end up in the same situation as now with Google.
  17. Just rclone copy back from the mount to your local share. I would advise using the user0 path to bypass the cache. No, you need to add some more \ after each command; the Dropbox batch is missing at least one.
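     For the first part, that copy could look something like this (a sketch; the mount and share names are the defaults used in this topic):

        # copy from the mounted remote back to the array, bypassing the cache via /mnt/user0
        rclone copy /mnt/user/mount_rclone/gdrive_media_vfs /mnt/user0/local/gdrive_media_vfs -P --transfers 4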
  18. What do you mean by "Docker didn't mount"? You could also try my simple mount script from 1 or 2 pages back. Just mount the cloud share first and see if that works. Only then continue with the merger part.
  19. For the Google Drive config, remove the "/" in the crypt remote: just gdrive-media:media. After that, reboot and run the mount script again and see if that helps.
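     In the rclone config that would look roughly like this (a sketch; the crypt remote name is an assumption, the remote path is from this post):

        [gdrive-media_crypt]
        type = crypt
        remote = gdrive-media:media
        # password / password2 omitted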
  20. I've posted this before: they already started limiting new Workspace accounts a few months ago to just 5TB per user. But recently they also started limiting older Workspace accounts with fewer than 5 users, sending them a message that they either had to pay for the used storage or the account would go into read-only within 30 days. Paying would be ridiculous, because it would be thousands per month. And even when you purchase more users to get to the minimum of 5, there isn't a guarantee your storage limit will actually be lifted. People would have to chat with support to request 5TB of additional storage and would even be asked to explain the necessity, with support often refusing the request. So yes, not much you can do at this point. And with 16TB, you're better off just buying the drives yourself. If it's for backup purposes, you can look at Backblaze and other offerings. Don't expect to stream media from those, though. You'll need Dropbox Advanced for that, with 3 accounts minimum.
  21. Post your rclone config, but anonymize the important stuff before posting. To be clear: when you go to both /mnt/user/gmedia and /mnt/user/gmedia-cloud, are you shown encrypted files?
  22. There is your problem. It's missing "--". But I don't know why you are adding it, since it's already part of the default mount scripts from DZMM?
  23. Sorry about the script. You need to create the folder first and then run the script. But are you using Dropbox Enterprise or Advanced, whatever it's called, the business one? If so, you need to put a / before your folder in your rclone config for the crypt. So: dropbox:/dropbox_media_vfs
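     In the crypt section of the rclone config that would be roughly (a sketch; the crypt remote name is an assumption, the remote path is from this post):

        [dropbox_media_vfs]
        type = crypt
        remote = dropbox:/dropbox_media_vfs
        # password / password2 omitted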