Posts posted by Kaizac

  1. I'm looking for a way to create backups on my Unraid box of my online e-mail accounts (Outlook/Hotmail/Gmail/etc.). I found MailStore Home (free) and MailStore Server (300 USD). The Home version can only run on a Windows box while storing locally. I could run it in a Windows VM, but I find that quite a waste of resources.

     

    Are there any other ways you have found to create these backups? Running Thunderbird as a Docker container seems possible, but that's not really the clean setup I'm looking for.

  2. 10 hours ago, DZMM said:

    This thread has got a lot more action than I or @Kaizac or @slimshizn probably ever hoped for.  There's been a lot of action getting people up and running, so I'm wondering how some of the people in the background are getting on? 

     

    How intensively are other people using rclone?  Have you moved all your media?  How are you finding the playback experience?

     

    Personally, I don't have any plex content on my unRAID server anymore, except for my photos and I've now got over 300TB of Plex content stored on gdrive, as well as another big crypt with my backups (personal files, VM images etc).  I don't even notice the impact of streaming anymore and when I do have any skips, I actually think they are because of local wi-fi issues rather than rclone.

     

     

    I also have all my media stored on my Gdrive, with nothing local. The only things I keep local are small files, like subtitles and .nfo's, because of the Gdrive/Tdrive file limit. I also keep a backup of my local files on Gdrive.

     

    I recently had a problem where I couldn't play files, and it turned out my API was temporarily banned. I suspect both Emby and Plex were analyzing files too heavily to fetch subtitles. So I switched to another API and it worked again.

     

    Something that has been an ongoing issue is memory creep. I've had multiple occasions where the server gave an out-of-memory error. I think it's because the scripts running simultaneously (upload / upload backup / moving small files / cleanup) take up too much RAM. I will experiment with lowering the buffer sizes to reduce RAM usage, but with 28GB of RAM I didn't expect to run into problems, to be honest.
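
    For reference, a minimal sketch of what lowering the buffers could look like in the mount command (the flag values are just examples to illustrate, not a recommendation, and the remote and mount folder names are made up):

    # every open file can hold up to --buffer-size in RAM, so with several
    # streams and library scans running at once a smaller value limits memory creep
    rclone mount \
      --allow-other \
      --buffer-size 64M \
      --vfs-read-chunk-size 64M \
      --vfs-read-chunk-size-limit 1G \
      --dir-cache-time 72h \
      gdrive_media_vfs: /mnt/user/mount_rclone/google_vfs &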

  3. 2 hours ago, brasi said:

    Anyone experiencing playback issues with the Nvidia Shield Plex client with this setup? I keep getting "Your connection to the server is not fast enough to stream this video".

     

     

    No. Did you put the Shield on Wi-Fi? If so, that's probably the issue. Or are you playing 4K movies? The Shield doesn't need transcoding, so it will play at full quality, which can be taxing on your bandwidth depending on your file sizes.

  4. 8 hours ago, francrouge said:

    If I check my logs I always get this until I manually reset:

     


    Script Starting Wed, 01 May 2019 18:50:01 -0400

    Full logs for this script are available at /tmp/user.scripts/tmpScripts/test rclone/log.txt

    01.05.2019 18:50:01 INFO: Exiting script already running.
    Script Finished Wed, 01 May 2019 18:50:01 -0400

    Full logs for this script are available at /tmp/user.scripts/tmpScripts/test rclone/log.txt

    Script Starting Wed, 01 May 2019 18:55:01 -0400

    Full logs for this script are available at /tmp/user.scripts/tmpScripts/test rclone/log.txt

    01.05.2019 18:55:01 INFO: Exiting script already running.
    Script Finished Wed, 01 May 2019 18:55:01 -0400

    Full logs for this script are available at /tmp/user.scripts/tmpScripts/test rclone/log.txt

    Script Starting Wed, 01 May 2019 18:58:05 -0400

    Full logs for this script are available at /tmp/user.scripts/tmpScripts/test rclone/log.txt

    01.05.2019 18:58:05 INFO: Exiting script already running.
    Script Finished Wed, 01 May 2019 18:58:05 -0400

    Full logs for this script are available at /tmp/user.scripts/tmpScripts/test rclone/log.txt

    Are you hard/force rebooting your server? If your server doesn't have enough time to shut down the array, it won't run the unmount script. That's why I run the unmount script before my mount script at boot; otherwise the "check" files won't be removed and the scripts think a mount is still running.
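
    To illustrate what I mean by the "check" files, a rough sketch of the pattern (the path and file name here are made up; the real scripts from the tutorial differ in the details):

    # mount script: refuse to start if a previous run left its check file behind
    if [[ -f /mnt/user/appdata/other/rclone/mount_running ]]; then
        echo "INFO: Exiting script already running."
        exit
    fi
    touch /mnt/user/appdata/other/rclone/mount_running
    # ... rclone mount command goes here ...

    # the unmount script removes that check file again, which is why it has to run
    # before the mount script at boot after an unclean shutdown:
    # fusermount -uz <mountpoint> ; rm -f /mnt/user/appdata/other/rclone/mount_running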

  5. 6 hours ago, francrouge said:

    I'm getting this error:

     

    2019/04/25 20:41:38 INFO : Google drive root 'crypt': Failed to get StartPageToken: googleapi: Error 401: Invalid Credentials, authError
    2019/04/25 20:42:38 INFO : Google drive root 'crypt': Failed to get StartPageToken: googleapi: Error 401: Invalid Credentials, authError
    2019/04/25 20:43:38 INFO : Google drive root 'crypt': Failed to get StartPageToken: googleapi: Error 401: Invalid Credentials, authError
    2019/04/25 20:44:38 INFO : Google drive root 'crypt': Failed to get StartPageToken: googleapi: Error 401: Invalid Credentials, authError

    Did you set up your own client ID before (per what DZMM linked)? If so, log in to your Google admin/API console and check whether those credentials are still valid. If they are, rerun the config for your mount(s) and enter the client ID and secret again; don't alter anything else. When you get to the end of the config it will ask whether you want to refresh your token. Say yes and run the authorization again. That should do it.
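
    Roughly, the flow looks like this (menu letters and exact prompt wording can differ between rclone versions):

    rclone config
    # e) edit existing remote   -> pick your gdrive remote
    # re-enter the client_id and client_secret, keep every other answer the same
    # "Already have a token - refresh?" -> y, then complete the authorization in the browser
    # q) quit config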

  6. 42 minutes ago, tmoran000 said:

    I will primarily be using the copy command. I see you are using a source starting at /mnt/, which is very helpful. Now, in the documentation on the site the first part of an rclone copy is shown as Source:SourcePath, and I'm not sure what to replace "Source" with. I understand the path now, but what would I change the word Source to before the colon? So in the format Source:SourcePath Dest:DestPath, what would I put there?

     

    You're overcomplicating things; he gave you a working command. You can just use:

     

    rclone copy /mnt/user/Media Gdrive_StreamEN:Media --ignore-existing -v

    You just need to change /mnt/user/Media to the folder you want to copy, and adjust Gdrive_StreamEN:Media as well. So if you store it in the folder Secure, it would be Gdrive_StreamEN:Secure.
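
    So with the Secure destination folder from that example it would become something like (the local source folder is whatever you want to copy):

    rclone copy /mnt/user/Media Gdrive_StreamEN:Secure --ignore-existing -v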

  7. 2 minutes ago, jonathansayeb said:

    Sorry, it must have been the frustration of trying to get it to work.
    I've created all my Google Drive mounts and team drives.
    I can mount them via the script properly.
    Once mounted I can see each mount in both Krusader and Windows 10, but they are all empty.
    I can copy files into the mount, but when I check in Google Drive the copied files don't appear.
    On the Unraid terminal I can see all the content of the remotes I've set up by using
    rclone ls remotename:

    My question is: how can I see the content of my remotes in Krusader and/or SMB shares?

    Thank you.

     

    Make sure you restart Krusader before you test. Krusader often only sees the old state and not the new one, so you won't see the mount working.

    When you go to your mount folder (mount_rclone) from the tutorial, you should see a size of 1 PiB; that's how you know it worked (a quick way to check from the terminal is sketched at the end of this post).

     

    If you don't see that, your rclone config or mount went wrong. To help with that, we need more info about your config and mount script.
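
    A quick way to check from the Unraid terminal, assuming the tutorial's mount location (the subfolder name is just an example):

    df -h /mnt/user/mount_rclone/google_vfs
    # the Size column should show 1.0P while the rclone mount is active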

  8. 3 minutes ago, TheFreemancer said:

     

    Thanks for clearing it up.

    I'm using only the mount and unmount script you provided.

    All I wanted was to be able to access the files from outside the array and stream those.

    And as a bonus be able to edit and even execute some .exe files. I'm testing streaming old games which are light from the cloud.

     

    I'm going to ask the most retarded question but what do you guys use to do the rclone copy command?

     

    My server is headless. If I access it with a browser I can open a terminal and output the command.

    But if I close the browser on my other computer I can't check the copy process.

     

    Is there a way to log into the array, run rclone copy, leave it, come back the next day (maybe from another computer), and still access the same terminal session?

    Use the User Scripts plugin and run it in the background. Through the log you can see what's happening if you want, but you don't have to keep a window open.
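
    If you'd rather do it from a plain SSH session instead of the User Scripts plugin, something like this also keeps running after you close the terminal (the log path is just an example):

    nohup rclone copy /mnt/user/Media Gdrive_StreamEN:Media --ignore-existing -v > /mnt/user/rclone_copy.log 2>&1 &
    # check on it later, from any session or machine:
    tail -f /mnt/user/rclone_copy.log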

  9. 5 minutes ago, MothyTim said:

    Thanks for pointing me in the right direction; it was the Unraid WebUI, hadn't thought of that! It seems to be working now. What do I need to change in the nginx config?

    Well, it depends on your nginx configs. If you pointed to 443 somewhere in there and you changed it to 444, you should also change that port in your nginx config for Nextcloud. If you point to your docker name (since you use proxynet), I think you can just leave it as is.

  10. 5 minutes ago, MothyTim said:

    OK, you can obviously see something that I can't, as I can't see any other docker on port 443?

    It was in your log:

    listen tcp 0.0.0.0:443: bind: address already in use.

    So maybe it's your LE docker that is on port 443? Or maybe you have the HTTPS WebUI of Unraid on 443? I think if you change Nextcloud's port to 444 you will find it starts up fine. You just have to change your nginx config to match the new port.
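
    If you want to confirm what is already holding 443 first, you can check from the Unraid terminal (both commands are standard, nothing specific to this setup):

    # which process/container is listening on 443 on the host
    netstat -tlpn | grep ':443 '
    # which docker containers publish a 443 mapping
    docker ps --format '{{.Names}}: {{.Ports}}' | grep 443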

  11. 14 hours ago, MothyTim said:

    Hi, hoping someone can help. I'm new to Unraid and was liking it, but I'm suddenly getting some issues that make me worry whether it's stable enough to be my server! The first thing I noticed was that Time Machine had stopped working and would not connect back to Unraid, so I rebooted the server! Then Nextcloud wouldn't start, so I tried another reboot; still nothing, it just said "execution error". So I tried removing the container and re-installing, and it won't install!

     

    root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='nextcloud' --net='proxynet' -e TZ="Europe/London" -e HOST_OS="Unraid" -e 'PUID'='99' -e 'PGID'='100' -p '443:443/tcp' -v '/mnt/user/nextcloud/':'/data':'rw' -v '/mnt/user/':'/shares':'rw' -v '/mnt/user/appdata/nextcloud':'/config':'rw' 'linuxserver/nextcloud' 

    aaea1521bd9f8248f4068861339fc1f2f043956de1aadbb0cfeb32cf6ea2f38c
    /usr/bin/docker: Error response from daemon: driver failed programming external connectivity on endpoint nextcloud (a6e92a2887d7dd6adb62efdbe275cc371c61b5a12bdc68ea597d09adc29ffedb): Error starting userland proxy: listen tcp 0.0.0.0:443: bind: address already in use.

     

    I don't understand this; it was all working fine! I haven't added anything else to the server. The only thing I've been trying to get working was the reverse proxy, and that was/is mostly working. Not sure where to look now to see what's going on?

     

    Cheers,

    Tim

     

    Regarding your Nextcloud: it seems like you are running another docker on port 443, so you can change the port on Nextcloud or on the other docker.

  12. 2 minutes ago, sfnetwork said:

    Of course, I would be interested in routing through their CDN.
    Could you describe your setup (using a VLAN with dockers, and maybe how to get it working with a CNAME going through their CDN)?
    Thank you BTW, I really appreciate it! A little new to this...

    By the way, did you also allow the WAN to access your port forward? So not just creating a pfSense port forward, but also an associated firewall rule on your WAN interface?

     

    I'm running a DuckDNS docker on my Unraid box, but any other dynamic DNS service works to get your IP pushed to a DNS domain. That address I put in the CNAME of Cloudflare, so kaizac.duckdns.org is the target of every CNAME I want routed to my home address.

     

    I think you followed SpaceInvader One's guide to set up proxynet for your dockers. I'm not much of a fan of that construction; I created nginx configs in the site-confs of LE instead. So it's up to you whether you want to make that change. But for testing purposes I think you can just put everything back on bridge, use your unraidIP:port in your nginx configs, and it should work.

     

    If you want to go my route:

    Within pfSense I created a few VLANs. You don't have to do that, but I like to keep it clean. You can also just give the dockers an address on your local LAN IP subnet. With a VLAN you can use pfSense to block your dockers from your local IP subnet if you so desire. I then also created that VLAN in Unraid's network settings.

     

    Once that is done, you give all the dockers that need to access other dockers, or need to be accessed from your LAN, an IP on your VLAN or LAN network, depending on whether you use a VLAN or not.

     

    Make sure that when you give your LE docker its own IP, you also change the firewall rules in pfSense.

     

    When the dockers have their own IPs, you have to change your nginx configs to point to the right IP and port.
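
    As a rough sketch of what such a site-conf could look like for one app, assuming a made-up docker IP of 192.168.20.10 and the usual LE appdata location (certificate and proxy-header lines are omitted, and your server_name, IP and port will differ):

    # e.g. /mnt/user/appdata/letsencrypt/nginx/site-confs/nextcloud.conf
    server {
        listen 443 ssl;
        server_name nextcloud.mydomain.com;

        location / {
            # point straight at the docker's own VLAN/LAN IP instead of a proxynet container name
            proxy_pass https://192.168.20.10:443;
        }
    }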

     

     

  13. 1 minute ago, sfnetwork said:

    Thanks, I already have OpenVPN set up and it works great. I just wanted to test it out. But really good advice; I might leave only Nextcloud like that and use the rest through the VPN.
    For MariaDB, no issue; I presume Nextcloud communicates with it behind the scenes, directly via the bridge IP and port.

    No, it doesn't work when I enable the Cloudflare thing on the CNAME.

    If you are happy with it, then that's fine. But if you prefer routing through their CDN, you should be able to get it to work. My setup is identical to yours; I just configured things differently: I'm using VLANs and giving dockers their own IPs. If you want to troubleshoot, let me/us know. If you're fine with the current state, then enjoy :).

  14. 1 minute ago, sfnetwork said:

    The 80/180 port thing doesn't matter, since I validated my certificates through Cloudflare DNS (I just need to delete the NAT). Anyway, my port 80 is blocked by my ISP.
    As for the CNAME records, that's really how it got working... ping was giving their IP, not mine (I get that that's the point, but it didn't work with nginx).

    Does it still work when you put the CDN back on and the pings return their IP? You probably don't want to hear this, but the way you've configured your subdomains, opening everything to the internet, is really asking for trouble. The advised procedure is to use a VPN (which you can easily set up since you are on pfSense) and access dockers like SABnzbd and Radarr through that. Only dockers like Nextcloud should be opened to the internet. Please make sure you have an idea of what you are doing, because right now it seems to me like you are just following some guides and not really understanding what is going on.

     

    I also wonder whether you'll run into problems with your Nextcloud, since you put MariaDB on bridge and Nextcloud on proxynet. I'd expect them to have problems connecting, but maybe they work fine?

     

     

  15. 4 minutes ago, sfnetwork said:

    OMG I finally found the issue!!!

    It's about CloudFlare CNAME records...

    I had to disable the traffic going through cloudflare:

    [screenshot: Cloudflare DNS record with the cloud/CDN proxy toggled off]

     

    Now EVERYTHING works perfectly....
    Hope this can help someone else and avoid losing so much time lol

    No, that's not right. I have LE running with Cloudflare through their CDN network; your configuration just isn't correct.

    First, you have your docker on 180/443, and in pfSense you open up 80 and 443? That should be 80 forwarding to 180 and 443 to 443. But then in your Nextcloud config you hard-redirect to port 444.


    So if I were you, I would walk through your config from the beginning; it seems like you skipped some steps. And for your LE validation you can use the cloudflare.ini file (DNS validation) if you aren't doing that already.

  16. 3 minutes ago, tmoran000 said:

    Thank you. With rclone cache, does that let you choose where it gets cached to? I.e. if I add a 120GB SSD mount to Unraid, can I choose that as the cache for streaming so that it does not hit RAM or any other location on the array?

    You don't want rclone cache since you're using VFS. Rclone cache is just a temporary storage folder from which files get uploaded to your Gdrive. VFS is superior.

     

    What you can do, though, is use an SSD as the transcode location for Plex (a rough sketch is at the end of this post). But if you are mostly direct streaming this won't help, so first check what kind of streams you are serving.

     

    And like I said, you're probably better off reducing the buffer settings in your mount command. You can look up the separate options and see whether lowering them helps your RAM usage.
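
    If you do want to try the SSD route for transcoding, the idea is just an extra path mapping on the Plex docker plus one Plex setting (the cache path and container path below are made-up examples):

    # extra volume mapping on the Plex container, pointing at an SSD (e.g. the cache pool):
    #   -v '/mnt/cache/plex_transcode':'/transcode':'rw'
    # then in Plex: Settings -> Transcoder -> "Transcoder temporary directory" = /transcode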

  17. 4 minutes ago, tmoran000 said:

    So I have been running Rclone for a couple of weeks after getting some help setting it up and I have a few questions about it. 

     

    First: when I am transferring, every so often I get the following error, and if I hit retry it just pops back up; however, if I wait a while and hit retry it will pick back up. My thought is that my uploads are stored in a 750GB cache on Google's side, and I have to wait for some files to move off before I can put more onto it. At least that's what it seems like, because of needing to wait some time before I can put more onto Google.

    [screenshot of the transfer error]

     

     

    Second: 

    When someone is streaming from the rclone mount through Plex, I would assume the file is temporarily stored on the server while it is being streamed. Does this create a cache folder on the array that clears when the stream is done, or does it hit RAM? I ask because it appears that when someone is streaming my RAM usage rises, and when they're done it drops. When I was getting help I do not recall any caching being set up; I know he mentioned something about it, but I have not done any configuration for it. Can anyone answer either of these questions?

    The error you get is probably because you maxed out your 750GB daily upload to Gdrive. You are writing directly to Gdrive, so your local drives are not hit and are not bottlenecking you.

     

    With your setup you are not using rclone cache. When you stream, the buffer is held in your RAM, and depending on your Plex setup it might be transcoding to RAM as well. So if you have enough RAM there is no problem; if you are running short, you can play with the mount settings to put less in the buffer.

     

     

  18. 9 hours ago, Tortoise Knight said:

    I'm sorry, I'm pretty much Linux illiterate so I wouldn't know how to check this or fix it.

    Did you have a mount running? If so, your rclone is busy. The best way to make sure is to just reboot your server with no mounts active. So if you have scripts for mounts, disable those first.

  19. 6 minutes ago, Djoss said:

    What is the protection doing? If it forces redirection to HTTPS, then I guess it won't work. Unless you can disable the protection for specific URLs...

    You made me remember: it's not the IP/CDN protection, it's a setting in Cloudflare. Someone else in this topic mentioned it; you have to disable the automatic HTTPS rewrites. With that I got most of my subdomains working. Two aren't though, or not as desired (Nextcloud and OnlyOffice), both of which require a more specific configuration. What I can do is put my older nginx config in, but then it has includes which it can't find.

    I see that the standard configs include files like block-exploits.conf. Are those accessible and editable somewhere? I can't find them, so I wonder if they are hardcoded or hidden somewhere.

  20. 14 minutes ago, Djoss said:

    The container is not reachable from the Internet. Note that when assigning an IP to the container, you cannot choose the ports used by the container. So you need to forward to the container's HTTPS port 4443 and HTTP port 8080.

    OK, so I changed this and it gives the error below. I then disabled the Cloudflare CDN protection, and it works. So do you think it's possible to get this working with the Cloudflare CDN/protection on?

    Failed authorization procedure. bitwarden.mydomain (http-01): urn:acme:error:unauthorized :: The client lacks sufficient authorization :: Invalid response from https://bitwarden.mydomain/.well-known/acme-challenge/Z6vJRYrurz18JbcCPEeexbC1IhmWJoxfOFIY3jVRatw [2606:4700:30::681b:80cc]: "<!DOCTYPE html>\n<!--[if lt IE 7]> <html class=\"no-js ie6 oldie\" lang=\"en-US\"> <![endif]-->\n<!--[if IE 7]> <html class=\"no-js "

     

  21. 5 minutes ago, ofthethorn said:

    Another quick update: since my router is a piece of trash I cannot control (forced by my ISP), I decided to just add a second Plex container, change its name, and give it its own appdata folder. This container will solely be used for local access and is not set up in the LE docker.

    This won't have any downsides I hope...

    Thanks for all your help though! Really appreciate the effort.

    But why? It's incredibly inefficient, it strains your server needlessly, and you have to configure two dockers. You can have both local and WAN access to the same docker; you just need to configure it properly.

     

    Your DuckDNS docker doesn't need to be on the docker network; it can just run in host mode on your Unraid box. For your LE docker I would also give it its own IP and make sure you redirect your router to that IP (I assume this is what you did for your current setup?). Then in your nginx config you use the IP of your Plex docker, and both WAN and LAN access should work.

     

  22. Just now, ofthethorn said:

    All seems to work now. Alas, still no connection to the local server. Probably should've added that I can't even connect to plex docker IP address.

    Did you also enable access from outside your network in Plex and open port 32400 in your router to your docker? If so, disable all of that. Your Plex docker should only be accessible through your LE setup.

     

    And what network mode is Plex on? Its own IP, bridge, host, or something else?
