Everything posted by Kaizac

  1. How are you transferring the books to your devices when Calibre is running on its own IP? I can use the content server, but that only downloads the epub file and doesn't give me the option to create a library on my ereader.
  2. @DZMM, how would your cleanup script work for a mount you've only connected to mount_rclone (a backup mount, for example, which isn't used in mount_unionfs)? I can't alter your script myself as I'm not 100% sure which lines are necessary.
  3. No. A Team Share is shared storage that multiple users have access to, so you get 750GB per day per user connected to that Team Share. It's not just extra capacity added to a specific account.
  4. Yeah, I do the same, but I thought backing up my whole cache was a nice addition. I was wrong ;). On another note, I haven't had any memory problems for a few months now. So maybe rclone changed something, but I'm never running out of memory anymore. Hope it's better for others as well.
  5. I'm currently looking at a backup of 2TB with almost 400k files (370k). I thought backing up my cache drive would be a good idea, forgetting that Plex appdata is a huge number of small files. I'm currently also getting the limit-exceeded error, so I'm pretty sure rclone doesn't count folders as objects, but Gdrive does.
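Before uploading something like an appdata backup, it can help to count how many files it actually contains, since the Drive limits bite on object count rather than total size. A generic sketch with plain find/wc (no rclone needed; the path is an example, not from the post):

```shell
# Count regular files under a directory -- Plex appdata tends to hold
# hundreds of thousands of small files, which is what trips the
# Google Drive object limits rather than the total size.
count_files() {
  find "$1" -type f 2>/dev/null | wc -l
}

# Example path (hypothetical); substitute your own appdata location.
count_files /mnt/user/appdata/plex
```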
  6. How do you make sure the unmount script is run before the mount script?
  7. Just create 2 scripts: one for your rclone mount's first start, and one for your continuous rclone mount. At the beginning of your first-start script, delete the check files that have to be removed for the script to run properly.
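The first-start idea above can be sketched as a tiny script. The directory and file names here are assumptions; match them to whatever marker files your own mount script actually creates:

```shell
#!/bin/bash
# First-start sketch: clear stale "check" files left behind by an unclean
# shutdown so the regular mount script will run again.
clear_check_files() {
  # Remove the marker files the mount/upload scripts use to detect an
  # already-running instance (file names are hypothetical).
  rm -f "$1/mount_running" "$1/upload_running"
}

clear_check_files "/mnt/user/appdata/other/rclone"
# ...then continue with your normal mount commands here.
```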
  8. You could use mount_rclone as your RW folder and it would download directly to your Gdrive. However, that will be limited by your upload speed, and writing directly to the mount will probably also cause problems. Rclone copy/move/etc. is intended to avoid this by doing file checks.
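As a sketch of that pattern: write downloads to a local folder and let a scheduled rclone move push them up, rather than writing into the mount directly. The function below only prints the command it would run, so you can review it before wiring it into a User Script; the remote name gdrive_media_vfs: and both paths are assumptions, not taken from the original scripts.

```shell
# Build (but don't execute) the scheduled upload command. Printing it
# first lets you sanity-check the paths before scheduling it.
build_upload_cmd() {
  local src="$1" dst="$2"
  echo "rclone move $src $dst --transfers 4 --checkers 8 --delete-empty-src-dirs -v"
}

# Prints the rclone move invocation for review.
build_upload_cmd /mnt/user/local/media gdrive_media_vfs:media
```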
  9. Not sure if I understand you properly. You point Sonarr to your unionfs folder, so it doesn't matter where the file is stored.
  10. Thanks! Kerio is indeed paid and quite expensive as far as I can tell. Using local clients on my desktop and then backing those up is possible, but then I'm wasting local storage as well, which is just a waste of expensive space when I have enough of it on my Unraid box. So far, running a mail client like Thunderbird in docker seems most likely.
  11. Thanks, unfortunately network drives are not recognized. And when I create a symlink or use DirLinker, it still sees that it's a network drive.
  12. I'm looking for a way to create backups of my online e-mail accounts (Outlook/Hotmail/Gmail/etc.) on my Unraid box. I found MailStore Home (free) and MailStore Server (300 USD). The Home version can only run on a Windows box while storing locally. I could run it in a Windows VM, but I find that quite a waste of resources. Are there any other ways you've found to create these backups? Running Thunderbird as a docker seems possible, but that's also not really the clean solution I'm looking for.
  13. I also have all my media stored on my Gdrive, with nothing local. The only things I keep local are the small files, like subtitles and .nfo's, because of the Gdrive/Tdrive file limit. I also keep a backup of my local files on Gdrive. I recently had a problem where I couldn't play files, and it turned out my API was temporarily banned. I suspect that both Emby and Plex were analyzing files too aggressively to get subtitles; I switched to another API and it worked again. Something that has been an ongoing issue is memory creep: I've had multiple occasions where the server gave an out-of-memory error. I think it's because the scripts running simultaneously (upload/upload backup/moving small files/cleanup) take up too much RAM. I'll experiment a bit more with lowering the buffer sizes to reduce RAM usage, but with 28GB of RAM I didn't expect to run into problems, to be honest.
  14. No. Did you put the Shield on wifi? If so, that's probably the issue. Or are you playing 4K movies? The Shield doesn't need transcoding, so it plays at full quality, which can be taxing on your bandwidth depending on your file sizes.
  15. Are you hard rebooting/force rebooting your server? If your server doesn't have enough time to shut down the array, it won't run the unmount script. That's why I added the unmount script before my mount script at boot. Otherwise the "check" files won't be removed and the scripts think they're still running.
  16. Did you set up your own client ID before (per what DZMM linked)? If so, log in to your Google Admin page and check whether those credentials are still valid. If they are, rerun the config for your mount(s) and enter the client ID and secret again; don't alter anything else. At the end of the config it will ask whether you want to renew your token. Say yes and run the authorization again. That should do it.
  17. You're overcomplicating things; he gave you a working script. You can just use: rclone copy /mnt/user/Media Gdrive_StreamEN:Media --ignore-existing -v. You only need to change /mnt/user/Media to the folder you want to copy, and the same for Gdrive_StreamEN:Media. So if you store it in the folder Secure, it would be Gdrive_StreamEN:Secure.
  18. Make sure you restart Krusader before you test; Krusader often shows the old situation and not the new one, so you won't see the mount working. When you go to your mount folder (mount_rclone) from the tutorial, you should see a size of 1 PiB, which tells you it worked. If you don't see that, something is wrong with your rclone config or mount command; to help you with that, we'd need more info about your config and mounting script.
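That 1 PiB check can also be scripted instead of eyeballed in Krusader. A sketch; the mount path is an assumption, so adjust it to your own setup:

```shell
# Print the total size df reports for a path. An rclone mount advertises
# an artificial ~1 PiB total, so seeing "1.0P" here means it's mounted;
# anything else suggests the mount failed.
reported_size() {
  df -h "$1" 2>/dev/null | awk 'NR==2 {print $2}'
}

# Hypothetical mount path; substitute the one from your mount script.
reported_size /mnt/user/mount_rclone/gdrive
```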
  19. Use User Scripts and run it in the background. Through the log you can see what's happening if you want but you don't have to keep a window open.
  20. Well, that depends on your nginx configs. If you pointed to 443 somewhere in there and you changed it to 444, you should also change that port in your nginx config for Nextcloud. If you point to your docker name (since you use proxynet), I think you can just leave it as is.
  21. It was in your log: listen tcp bind: address already in use. So maybe it's your LE docker that's on port 443? Or maybe you have the https WebUI of Unraid on 443? I think if you change Nextcloud's port to 444, you'll find that it starts up fine. You just have to change your nginx config to match the new port.
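To confirm that kind of conflict, you can check what's already listening on 443 before changing ports. A sketch that tries ss with a netstat fallback, since either tool may be the one installed:

```shell
# Return success (exit 0) if something is already listening on the
# given TCP port, by scanning the listening-socket table.
port_in_use() {
  { ss -tln 2>/dev/null || netstat -tln 2>/dev/null; } | grep -q ":$1 "
}

if port_in_use 443; then
  echo "port 443 is taken -- move Nextcloud to e.g. 444"
fi
```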
  22. Regarding your Nextcloud: it seems like you're running another docker on port 443, so you can change the port on Nextcloud or on the other docker.
  23. By the way, did you also allow the WAN to access your port forward? So not just creating the pfSense port forward, but also the associated firewall rule on your WAN interface?

      I'm running a DuckDNS docker on my Unraid box, but any other dynamic DNS service works to get your IP pushed to a DNS domain. That address goes in the CNAME alias in Cloudflare, so kaizac.duckdns.org is the alias of every CNAME I want routed to my home address.

      I think you followed SpaceInvader One's guide to get proxynet for your dockers. I'm not much of a fan of that construction; I created nginx configs in the site-configs of LE instead. It's up to you whether you want to make that change, but for testing purposes you can just put everything back on bridge, use your unraidIP:port in your nginx configs, and it should work.

      If you want to go my route: within pfSense I created a few VLANs. You don't have to, but I like to keep it clean; you can also just give the dockers an address in your local LAN subnet. With a VLAN you can use pfSense to block your dockers from your local subnet if you so desire. Then I also created that VLAN in the Network Interface settings of Unraid. Once that's done, give all the dockers that need to access other dockers, or need to be accessed from your LAN, an IP on your VLAN or LAN network (depending on whether you use a VLAN or not). Make sure that when you give your LE docker its own IP, you also change the firewall rules in pfSense. Once the dockers have their own IPs, change your nginx configs to redirect to the right IP and port.
  24. If you're happy with it, then it's cool. But if you prefer routing through their CDN, you should be able to get it to work. My set-up is identical to yours; I just configured things differently, using VLANs and giving dockers their own IPs. If you want to troubleshoot, let me/us know. If you're fine with the current state, then enjoy :).