Everything posted by Kaizac

  1. Next time, post your scripts using the code box option in this forum's input field. That way the formatting is preserved. Right now I see --allow-other not lined up with the other commands. Is that actually the case in your script? I don't expect that to be the issue, but you never know. Another thing that could cause this is your rclone config for Dropbox. Can you share the config, anonymized? I just need the Dropbox remote and the crypt on top of that Dropbox remote, so 2 mounts. And then lastly, just use a simple mount script. For example:

     rclone mount --allow-other --log-level INFO --umask 002 dropbox_media_vfs: /mnt/user/mount_rclone/dropbox

     This is not the mount script for daily usage, but it will do the minimum to see if the mount script is the issue.
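     For reference, an anonymized version of the config could look roughly like this (the remote and folder names here are placeholders, keep the token and passwords masked):

     [dropbox]
     type = dropbox
     token = XX

     [dropbox_media_vfs]
     type = crypt
     remote = dropbox:/media
     password = XX
     password2 = XX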
  2. It's missing a part of the mount script. But what I was wondering is, can you play the files through your explorer? Seeing the files is different from being able to actually access and read them.
  3. You can't switch 1:1 from Drive to Dropbox, so I don't know what you mean by cleaning up your weird commands. It seems you are still using a cache? That already complicates your setup for testing. So what I would do is use a very simple mount command to just mount your Dropbox to a folder like /mnt/user/mount_rclone/dropbox/. Then see if the files show up through your Windows/Mac explorer software and see if you can play them from there. If you can play them through that, we know Plex is bugging out. But right now I suspect the rclone side first.
  4. Thank you, that makes sense! I will check whether it's in IT mode. Should be fine then.
  5. VPN is not needed. With most providers you're only allowed to stream from 1 IP address, which is your WAN/VPN IP. So you can have all the streams you want from within your network, as long as they share that IP. Outside your network the same applies, so you have to be careful that someone isn't streaming at home while someone else streams from another IP. But at 3 euros a month, getting another account may not be a bad idea. They do have some VPNs whitelisted, so you could use one if you wanted; you'll have to check their website and your VPN.

     Important for protecting yourself: when using software like Stremio with Torrentio, only use Debrid links (cached ones are preferred, since they already downloaded the torrent to their servers). You can also use software like Stremio to stream directly from torrents, without Debrid. But you expose yourself that way without a VPN, and the speed is often terrible, so I would really not advise doing that.

     Real-Debrid is the biggest for media streaming, with the most 4K content as well. All-Debrid is also good, but has less 4K material cached (already downloaded on their servers). But since it's so cheap, I have it as a backup as well. They are also very useful for those file hosts, when you want to download some file and you have that slow transfer speed. All-Debrid has the most file hosts, but if you have both, almost always 1 of the 2 has a premium connection and you can get full download speed. Premiumize is another provider, which I don't have experience with. They are a bit more expensive, but also offer Usenet access, which you can use to download if you want to. If you use Kodi with some addons you can also get Easynews, which is the only Usenet provider that allows you to download and stream simultaneously; the others only offer that for torrents. I'm not a fan of Kodi though, and torrents are plentiful for me.

     One project to check out is https://github.com/itsToggle/plex_debrid. They built a script that connects with Plex: you can put in a watch list or just look up a movie/series in Plex, and it will be cached within a few minutes so you can watch it. This would be interesting for people with multiple Plex users. It's a bit more difficult to set up though, so prepare to spend some time on that.
  6. Debrid is not personal cloud storage. It allows you to download torrent files on their servers, and often the torrents have already been downloaded by other members. It also gives premium access to a lot of file hosters. So for media consumption you can use certain programs, like add-ons with Kodi or Stremio. With Stremio you install Torrentio, set up your Debrid account, and you have all the media available to you in specific files/formats/sizes/languages. Having your own media library is pretty pointless with this, unless you're a real connoisseur and want very specific formats and audio codecs. It also isn't great for non-mainstream audio languages, so you could host those locally when needed. I still have my library with both Plex and Emby lifetime, but I almost never use it anymore.
  7. I'm intending to buy the Adaptec ASR-71605 to swap it with my LSI 2008 HBA controller. I often see a battery in the sales pictures. What is the point of this battery? Is it as backup for flashing firmware, so it doesn't lose power? Do I need to keep it attached while running the card during daily usage?
  8. You're absolutely right, thanks for pointing that out! I edited my previous scripts.
  9. Yes, just use the documentation to create an SMB remote. Then everywhere the script mentions mount_rclone, replace that with your SMB remote name. https://rclone.org/smb/
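     A rough sketch of what such an SMB remote could look like in the rclone config (the host, user, and share details here are placeholders):

     [nas_smb]
     type = smb
     host = 192.168.1.50
     user = myuser
     pass = XX

     You would then point the scripts at nas_smb:sharename/path wherever they previously used mount_rclone.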
  10. That's simple to add. But it's not necessary if you're downloading at under 1 Gbit; you won't hit the download cap. If you have an unstable connection, you can use this on a cron schedule:

     #!/bin/bash

     ####### Check if script already running ##########
     if [[ -f "/mnt/user/appdata/other/rclone/rclone_download_gdrive_vfs" ]]; then
         echo "$(date "+%d.%m.%Y %T") INFO: Exiting as script already running."
         exit
     else
         touch /mnt/user/appdata/other/rclone/rclone_download_gdrive_vfs
     fi
     ####### End Check if script already running ##########

     ####### check if rclone installed ##########
     if [[ -f "/mnt/user/mount_rclone/gdrive_vfs/mountcheck" ]]; then
         echo "$(date "+%d.%m.%Y %T") INFO: rclone installed successfully - proceeding with download."
     else
         echo "$(date "+%d.%m.%Y %T") INFO: rclone not installed - will try again later."
         rm /mnt/user/appdata/other/rclone/rclone_download_gdrive_vfs
         exit
     fi
     ####### end check if rclone installed ##########

     # sync files
     rclone move XXMOUNTXX /mnt/user/local -vv --buffer-size 512M --drive-chunk-size 512M --tpslimit 8 --checkers 8 --fast-list --transfers 4 --exclude *fuse_hidden* --exclude *_HIDDEN --exclude .recycle** --exclude .Recycle.Bin/** --exclude *.backup~* --exclude *.partial~* --drive-stop-on-download-limit

     # remove dummy file
     rm /mnt/user/appdata/other/rclone/rclone_download_gdrive_vfs
     exit
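     If you go the cron route via the User Scripts plugin, a custom schedule entry like the one below would do (every 30 minutes, purely as an example; any interval is safe because the lock file at the top makes overlapping runs exit immediately):

     */30 * * * *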
  11. The download limit for Google is 10TB per day, upload is 750GB. I think you need to replace the --drive-stop-on-upload-limit flag with the --drive-stop-on-download-limit one. I think the script will then switch to the next service account to download once you hit 10TB. But with a gigabit line you won't be able to hit the 10TB per day, so you're better off just using a simple script and keeping that running. Something like:

     rclone move XXMOUNTXX /mnt/user/local -vv --buffer-size 512M --drive-chunk-size 512M --tpslimit 8 --checkers 8 --fast-list --transfers 4 --exclude *fuse_hidden* --exclude *_HIDDEN --exclude .recycle** --exclude .Recycle.Bin/** --exclude *.backup~* --exclude *.partial~* --drive-stop-on-download-limit
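     If you do ever want the service account rotation hinted at above, a rough sketch could look like this; the sa/*.json path is just an assumption for illustration, with each .json being one service account key you created:

     #!/bin/bash
     # Hypothetical rotation: try each service account until one finishes the move.
     for SA in /mnt/user/appdata/other/rclone/sa/*.json; do
         echo "Trying service account: $SA"
         rclone move XXMOUNTXX /mnt/user/local -vv --transfers 4 --drive-service-account-file "$SA" --drive-stop-on-download-limit && break
     done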
  12. That looks good. If you get your remote sorted, these scripts should work fine. Regarding the reboot issue, it's known that shutdowns are often problematic with rclone, because rclone keeps continuous processes going which Unraid has trouble shutting down. What does alarm me is that it's trying to remove /mnt/user. Is that something in your unmounting script? You need to get a good unmounting script activated when your array stops. In DZMM's GitHub, linked in the topic start, you can find an issue with an adjusted script in the comments that should help with shutting down properly. I use that in combination with the fusermount commands. It's a bit of trial and error. If you open the Unraid console and type "top" you should see all the processes running, each with an ID. Then you can use the kill command to shut down a process manually: kill XX, or kill -9 XX to force it. Once done you shouldn't get this error when shutting down. It's not advised to do this regularly, but you can use it now just to rule out other issues causing this error.
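     As a minimal sketch of the kind of unmount script I mean (the mount paths are only examples, change them to your own folders):

     #!/bin/bash
     # Lazy-unmount the mergerfs and rclone mounts so the array can stop cleanly.
     fusermount -uz /mnt/user/mount_mergerfs/gdrive
     fusermount -uz /mnt/user/mount_rclone/gdrive
     # Remove the check file so the mount script can start fresh after a reboot.
     rm -f /mnt/user/appdata/other/rclone/rclone_mount_running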
  13. Are you using Dropbox Business? If so, you need to add a / before your folder. The crypt remote works as follows: it uses your regular remote, in your case "dropbox", and after the : you define which folder the crypt will point to. So let's say you have a remote for your root Dropbox directory. If you then point the crypt remote to dropbox:subfolder, it will create the subfolder in your root directory, and all the files you upload through your crypt will go into that subfolder. In your case dropbox_media_vfs is the name of your remote, but now you also use it as the subfolder name, which seems a bit too long for practicality. So maybe you want to use something like "archive" or "files", whatever you like. Then your crypt remote configuration would be as follows:

     [dropbox_media_vfs]
     type = crypt
     remote = dropbox:/archive
     password = XX
     password2 = XX

     This is assuming you use Dropbox Business, which requires the / in the remote; that is different from Google and explains why you run into the error.
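     Once that's saved, a quick sanity check is to list through the crypt remote; if this shows your folders and new uploads land in dropbox:/archive, the remote is wired up correctly:

     rclone lsd dropbox_media_vfs:
     rclone ls dropbox_media_vfs: --max-depth 1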
  14. Security is not my issue, it's continuity. Only 1 person will be the admin of a Dropbox account, so when I'm not the admin, someone can just delete my data or my user. That's a big issue for me. But if you have people you trust, then sharing storage and a pooled library is an option. Like I said though, Dropbox is going to end this as well. Maybe in 1 year, maybe in 5. The problem will be the same as we have right now.
  15. I think he means the GSuite to Enterprise Standard switch we all had to do with the rebranding of Google. But you have Enterprise Standard? And if so, if you go to billing or products, what does it say below your product? For me it says unlimited storage. Right now I don't have to migrate, but as soon as I do I will go fully local. You can join a Dropbox group, but you would need to trust those people, and that is too much uncertainty for me. So with your storage of "just" 50TB, it would be a no-brainer for me: just get the drives. You will have repaid them within the year. In the end, Dropbox will also end their unlimited someday and it will be the same problem. And look at your use case. Is it just media or mostly backups? Backups can be done with Backblaze and other offerings that aren't too costly. Media has alternatives in Plex shares and Debrid services. The last one I'm hugely impressed by, but that also depends on how particular you are about your media.
  16. They already started limiting new Workspace accounts a few months ago to just 5TB per user. But recently they also started limiting older Workspace accounts with fewer than 5 users, sending them a message that they either had to pay for the used storage or the account would go into read-only within 30 days. Paying would be ridiculous, because it would be thousands per month. And even when you purchase more users, so you reach the minimum of 5, there is no guarantee your storage limit will actually be lifted. People had to chat with support to request 5TB of additional storage, were even asked to explain the necessity, and support often refused the request. So far, I haven't gotten the message on my single-user Enterprise Standard account. Some speculate that only the accounts using massive storage on the personal Gdrive get tagged, and not the ones who only store on team drives. I think it probably has to do with your country and the original offering, and Google might be avoiding European countries because they don't want the legal issues. I don't know where everyone is from though, so that also might not be true. Anyway, when you do get the message, your only realistic alternative is moving to Dropbox or some other, more unknown providers. It will be pricey either way.
  17. Look at Backblaze if you just want to back up. It seems to get positive reviews. Their only issue at the moment is apparently that restoring is on file level; I don't know how that would work with tar snapshots. There are some other alternatives. I think I'll just stick to an offsite backup at a family member's place and sync with their NAS for mutual backup.
  18. Enterprise Standard, I've always had this. Even with Enterprise Standard and 5 users you are not guaranteed to get the storage. You still have to ask each time, you only get 5TB per request, and you might even have to explain your use case for getting more. Plus it's more expensive than Dropbox. Paying 800 USD per year or more for storing some media is just nonsense in my opinion. Better to get more hard drives yourself, or use alternatives like Debrid or Plex shares.
  19. Still nothing for me, no banner and no e-mail. Dropbox works just as well as Google Drive is what I'm reading. Some people who got annoyed by the API limits of Google already switched a while ago.
  20. Only 1 user and I don't see any notification. Maybe it happens when you run close to your next billing cycle? No idea so far.
  21. I'm also curious where you see it. Somehow some people get it and some don't. I think it might depend on whether you have an actual company connected to the account, whether you are using encryption or not, and whether you are storing on team drives or mostly your personal Google Drive. I think the cheapest solution is to get a Dropbox Advanced account (you need 3 accounts). You might be able to pool together with others if you trust them. But it also depends on how much you store; local storage could be more interesting financially.
  22. I'm using 256M, DZMM is using 512M. It depends on the number of users/transfers you have and how stable/fast your internet connection is, I think. I'm running 1 gig fiber with no hiccups and I have almost no users. But each transfer will use the buffer size, so it can quickly add up. I also run a lot of sync/backup jobs which eat up RAM, so even though I have 64GB of RAM, I try to stay a bit conservative. Say you open a file: it will buffer up to 512M first, for example. You close it and then re-open it, and it will again start to buffer the 512M. I find that wasteful for my situation. The default is 16M, so both 256M and 512M are already way over the default.
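     As a rough back-of-the-envelope check (worst case, assuming every open stream fills its buffer, since the buffer is allocated per open file/transfer):

     # Worst-case RAM for read buffers is roughly buffer-size x concurrently open files.
     echo $(( 512 * 8 ))M    # 4096M with --buffer-size 512M and 8 open streams
     echo $(( 256 * 8 ))M    # 2048M with --buffer-size 256M and 8 open streams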
  23. Are you running the mount script in the background from the User Scripts plugin? If not, you should always run it in the background, otherwise the script will just end when you close the pop-up. I've adapted my own mount script to your situation. I'm not using the VFS cache (a waste of storage, I think), so it's a simpler script than the one you are using now. If this one still has the same issues, there is something wrong in your setup, like missing folders or maybe an unstable connection. Just save this as a new script in User Scripts and run it in the background. Make sure your folders are not mounted right now, so you have a fresh/clean start before running my script, and also delete the checker files in /mnt/user/appdata/other/rclone if you have those.

     #!/bin/bash

     ################## Check if script is already running ###################
     # sleep 1
     echo "$(date "+%d.%m.%Y %T") INFO: *** Starting rclone_mount script ***"
     echo "$(date "+%d.%m.%Y %T") INFO: Checking if this script is already running."
     if [[ -f "/mnt/user/appdata/other/rclone/rclone_mount_running" ]]; then
         echo "$(date "+%d.%m.%Y %T") INFO: Exiting script already running."
         exit
     else
         touch /mnt/user/appdata/other/rclone/rclone_mount_running
     fi
     ####### End Check if script already running ##########

     ########## create directories for rclone mount and mergerfs mount ##############
     mkdir -p /mnt/user/appdata/other/mergerfs/
     mkdir -p /mnt/user/google_remote/gdrive
     mkdir -p /mnt/user/google_merged/gdrive

     ####### Start rclone gdrive mount ##########
     # check if gdrive mount already created
     if [[ -f "/mnt/user/google_remote/gdrive/mountcheck" ]]; then
         echo "$(date "+%d.%m.%Y %T") INFO: Check rclone vfs already mounted."
     else
         echo "$(date "+%d.%m.%Y %T") INFO: mounting rclone vfs."
         rclone mount --allow-other --umask 002 --buffer-size 256M --dir-cache-time 9999h --drive-chunk-size 512M --attr-timeout 1s --poll-interval 1m --drive-pacer-min-sleep 10ms --drive-pacer-burst 500 --log-level INFO --vfs-read-chunk-size 128M --vfs-read-chunk-size-limit off --vfs-cache-mode writes --uid 99 --gid 100 gdrive: /mnt/user/google_remote/gdrive &
         sleep 15

         # check if mount successful with slight pause to give mount time to finalise
         echo "$(date "+%d.%m.%Y %T") INFO: sleeping 5 sec"
         sleep 5
         echo "$(date "+%d.%m.%Y %T") INFO: continuing..."
         if [[ -f "/mnt/user/google_remote/gdrive/mountcheck" ]]; then
             echo "$(date "+%d.%m.%Y %T") INFO: Check rclone gdrive vfs mount success."
         else
             echo "$(date "+%d.%m.%Y %T") CRITICAL: rclone gdrive vfs mount failed - please check for problems."
             rm /mnt/user/appdata/other/rclone/rclone_mount_running
             exit
         fi
     fi
     ####### End rclone gdrive mount ##########

     ####### Start mergerfs mount ##########
     if [[ -f "/mnt/user/google_merged/gdrive/mountcheck" ]]; then
         echo "$(date "+%d.%m.%Y %T") INFO: Check successful, mergerfs already mounted."
     else
         # Build mergerfs binary and delete old binary as precaution
         rm /bin/mergerfs
         # Create Docker
         docker run -v /mnt/user/appdata/other/mergerfs:/build --rm trapexit/mergerfs-static-build
         # move to bin to use for commands
         mv /mnt/user/appdata/other/mergerfs/mergerfs /bin
         # Create mergerfs mount
         mergerfs /mnt/user/google_local/gdrive:/mnt/user/google_remote/gdrive /mnt/user/google_merged/gdrive -o rw,async_read=false,use_ino,allow_other,func.getattr=newest,category.action=all,category.create=ff,cache.files=partial,dropcacheonclose=true
         sleep 5
         if [[ -f "/mnt/user/google_merged/gdrive/mountcheck" ]]; then
             echo "$(date "+%d.%m.%Y %T") INFO: Check successful, mergerfs mounted."
         else
             echo "$(date "+%d.%m.%Y %T") CRITICAL: mergerfs Remount failed."
             rm /mnt/user/appdata/other/rclone/rclone_mount_running
             exit
         fi
     fi
     ####### End Mount mergerfs ##########

     rm /mnt/user/appdata/other/rclone/rclone_mount_running
     echo "$(date "+%d.%m.%Y %T") INFO: Script complete"
     exit
  24. Root folder is fine. Server-side transfers set to true should indeed be added. After that I would do a fresh reboot and run the mount script once, and then check the mapping. Gdrive/gdrive should not be happening unless you made that folder within the folder.
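     If that refers to the Drive backend option, it would roughly be this line under your [gdrive] remote in the rclone config (option name taken from the rclone drive docs, double-check for your version; the command-line equivalent is --drive-server-side-across-configs):

     server_side_across_configs = true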
  25. I think this is exactly the problem. Maybe you made an error setting up your rclone mounts through rclone config. Can you show your rclone config for your gdrive mount, and the crypt if you use that? Just wipe any identifying info before posting.
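     Roughly this shape is enough (the remote names here are placeholders, keep the token and passwords masked):

     [gdrive]
     type = drive
     client_id = XX
     client_secret = XX
     scope = drive
     token = XX
     team_drive = XX

     [gdrive_vfs]
     type = crypt
     remote = gdrive:crypt
     password = XX
     password2 = XX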