Everything posted by vurt

  1. I started enquiring about this in the Letsencrypt thread, but now think it might be a NextCloud issue. I've installed it and can confirm it runs fine when I access it via its internal IP of 192.168.1.252:444. But when I reverse proxy it with Letsencrypt, https://server.com/nextcloud/ takes a very long time to open, and when it finally opens and I enter my login, the webpage hangs for a while and eventually returns a 504 Gateway Time-out error. This was my NextCloud config:

         <?php
         $CONFIG = array (
           'memcache.local' => '\\OC\\Memcache\\APCu',
           'datadirectory' => '/data',
           'instanceid' => 'oc5tpjrjiliz',
           'passwordsalt' => '7iiEZcRX0TmIANc8q1CB1ZO3pRrAhd',
           'secret' => 'xNhed76LWYKrFfde7LL0W0rKz18PBHu62KoXYHmKxQg7YvU8',
           'trusted_domains' => array (
             0 => '192.168.1.252:444',
             1 => 'advurt.net',
           ),
           'overwrite.cli.url' => 'https://advurt.net',
           'overwritehost' => 'advurt.net',
           'overwriteprotocol' => 'https',
           'overwritewebroot' => '/nextcloud',
           'dbtype' => 'mysql',
           'version' => '11.0.1.2',
           'dbname' => 'nextcloud',
           'dbhost' => '192.168.1.252:3306',
           'dbport' => '',
           'dbtableprefix' => 'oc_',
           'dbuser' => 'oc_ray',
           'dbpassword' => 'V781F3CLEyYO59J6Jc7fJhprhVTzQ5',
           'logtimezone' => 'UTC',
           'installed' => true,
         );

     And this is Letsencrypt:

         # redirect all traffic to https
         server {
             listen 80;
             server_name _;
             return 301 https://$host$request_uri;
         }

         # main server block
         server {
             listen 443 ssl default_server;

             root /config/www;
             index index.html index.htm index.php;

             server_name _;

             ssl_certificate /config/keys/letsencrypt/fullchain.pem;
             ssl_certificate_key /config/keys/letsencrypt/privkey.pem;
             ssl_dhparam /config/nginx/dhparams.pem;
             ssl_ciphers 'xxxxxxxxxxxx';
             ssl_prefer_server_ciphers on;

             client_max_body_size 0;

             location / {
                 try_files $uri $uri/ /index.html /index.php?$args =404;
             }

             location ~ \.php$ {
                 fastcgi_split_path_info ^(.+\.php)(/.+)$;
                 # With php5-cgi alone:
                 fastcgi_pass 127.0.0.1:9000;
                 # With php5-fpm:
                 #fastcgi_pass unix:/var/run/php5-fpm.sock;
                 fastcgi_index index.php;
                 include /etc/nginx/fastcgi_params;
             }

             # Config for NextCloud
             location ^~ /nextcloud {
                 auth_basic "Restricted";
                 auth_basic_user_file /config/nginx/.htpasswd;
                 include /config/nginx/proxy.conf;
                 proxy_pass https://192.168.1.252:444/;
             }
         }
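     One thing that may be worth ruling out here, purely as a guess: nginx's proxy timeouts. The defaults are 60 seconds, which roughly matches a long hang followed by a 504 from the proxy rather than from NextCloud itself. A sketch of the /nextcloud location with the timeouts raised (the 300s values are illustrative, not a confirmed fix):

         location ^~ /nextcloud {
             # Illustrative values; nginx's default for both directives is 60s
             proxy_send_timeout 300s;
             proxy_read_timeout 300s;
             include /config/nginx/proxy.conf;
             proxy_pass https://192.168.1.252:444/;
         }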
  2. Yeah, I would have thought. Not sure then; leave it running and see if that helps after an hour or so? I also realized the internal IP of https://192.168.1.252:444/ no longer works. I assume this is because of the reverse proxy that's been set up? EDIT: I reverted the configs for NextCloud and Letsencrypt back to before the reverse proxy attempt, and can confirm I can access NextCloud at the internal IP with no problem. The slow loading of the login page and the timeout after logging in make me think it's NextCloud that's the problem and not Letsencrypt. I'll try posting in the NextCloud thread. Thanks CHBMB!
  3. Hmm, possibly. How can I check that? I had set the NextCloud share to only use Disk 1, but I've been working on this the whole afternoon, starting with fresh installs of MariaDB and NextCloud, so I'd assume the disk is already spun up? If it is a disk spin-up issue, how do I fix it? Would setting the NextCloud share to use All Disks be better? And I've accessed server.com/nextcloud/ a few times, getting to the login page, logging in, and eventually timing out with the 504 error. Shouldn't this have resolved itself once the disk is spun up?
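     For reference, one way to check whether a disk is spun up, assuming console access to unRAID and with /dev/sdb as a purely hypothetical device name for the disk backing the share, is hdparm:

         # Report the drive's power state: "active/idle" = spun up, "standby" = spun down
         hdparm -C /dev/sdb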
  4. Thank you, that fixed it, but it leads to a new issue: server.com/nextcloud/ takes a very long time to open, and when I finally get to the login page and enter my credentials, the page returns a 504 Gateway Time-out error. This doesn't happen with my other reverse proxies like server.com/deluge/.
  5. Hi, I'm trying to reverse proxy NextCloud. I followed the install instructions here. When I try https://advurt.net/nextcloud/, I get this 400 error:

         400 Bad Request
         The plain HTTP request was sent to HTTPS port

     My NextCloud config:

         <?php
         $CONFIG = array (
           'memcache.local' => '\\OC\\Memcache\\APCu',
           'datadirectory' => '/data',
           'instanceid' => 'xxxxxxxxxxxx',
           'passwordsalt' => 'xxxxxxxxxxxx',
           'secret' => 'xxxxxxxxxxxx',
           'trusted_domains' => array (
             0 => '192.168.1.252:444',
             1 => 'advurt.net',
           ),
           'overwrite.cli.url' => 'https://advurt.net',
           'overwritehost' => 'advurt.net',
           'overwriteprotocol' => 'https',
           'overwritewebroot' => '/nextcloud',
           'dbtype' => 'mysql',
           'version' => '11.0.1.2',
           'dbname' => 'nextcloud',
           'dbhost' => '192.168.1.252:3306',
           'dbport' => '',
           'dbtableprefix' => 'oc_',
           'dbuser' => 'xxxxxxxxxxxx',
           'dbpassword' => 'xxxxxxxxxxxx',
           'logtimezone' => 'UTC',
           'installed' => true,
         );

     My letsencrypt site-confs/default:

         # redirect all traffic to https
         server {
             listen 80;
             server_name _;
             return 301 https://$host$request_uri;
         }

         # main server block
         server {
             listen 443 ssl default_server;

             root /config/www;
             index index.html index.htm index.php;

             server_name _;

             ssl_certificate /config/keys/letsencrypt/fullchain.pem;
             ssl_certificate_key /config/keys/letsencrypt/privkey.pem;
             ssl_dhparam /config/nginx/dhparams.pem;
             ssl_ciphers 'xxxxxxxxxxxx';
             ssl_prefer_server_ciphers on;

             client_max_body_size 0;

             location / {
                 try_files $uri $uri/ /index.html /index.php?$args =404;
             }

             location ~ \.php$ {
                 fastcgi_split_path_info ^(.+\.php)(/.+)$;
                 # With php5-cgi alone:
                 fastcgi_pass 127.0.0.1:9000;
                 # With php5-fpm:
                 #fastcgi_pass unix:/var/run/php5-fpm.sock;
                 fastcgi_index index.php;
                 include /etc/nginx/fastcgi_params;
             }

             # Config for NextCloud
             location ^~ /nextcloud {
                 auth_basic "Restricted";
                 auth_basic_user_file /config/nginx/.htpasswd;
                 include /config/nginx/proxy.conf;
                 proxy_pass http://192.168.1.252:444/;
             }
         }
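     (For anyone finding this later: as post 4 above confirms, the culprit was the scheme in proxy_pass. NextCloud is serving HTTPS on port 444, so nginx has to forward TLS rather than plain HTTP. The one-line change:

         # was: proxy_pass http://192.168.1.252:444/;
         proxy_pass https://192.168.1.252:444/;
     )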
  6. Yup, I already have a reverse proxy running with letsencrypt. Once I get NextCloud running I'll reverse proxy it. Thanks for the link.
  7. Thank you, I'm noticing two votes for OwnCloud. I've been reading about OwnCloud and NextCloud and since there's already a NextCloud docker from linuxserver.io, I'll give that a go first.
  8. Would that be the OpenVPN AS docker? It looks like it'll require a connection client to run. I would prefer to just get onto a web interface the way OneDrive works.
  9. I tried searching, but the word "share" is used differently and I'm getting the wrong hits, except for this thread from 2011. I would like to try sharing a Share with my friend so he can access my files over the web. He's familiar with services like OneDrive; that's how we're sharing files right now. We're not collaborating and working on files, just downloading. What would be the best secure option that can be accomplished over a web interface? My first thought is Nextcloud: will I be able to configure it so it opens up a Share with existing files inside? The installation looks quite involved, and I don't know if this is overkill for my needs. Or set up some kind of SFTP access? Thank you!
  10. Also getting this error as before:

          [05.02.2017 20:11:24] No connection to rTorrent. Check if it is really running. Check $scgi_port and $scgi_host settings in config.php and scgi_port in rTorrent configuration file.
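      The error message itself names the two settings that have to agree: the SCGI address on rTorrent's side and on ruTorrent's side. A sketch, with 5000 as an illustrative port:

          # In rTorrent's .rtorrent.rc:
          scgi_port = 127.0.0.1:5000

          // In ruTorrent's conf/config.php (must match the line above):
          $scgi_port = 5000;
          $scgi_host = "127.0.0.1";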
  11. Getting a different error with the latest update:

          [05.02.2017 19:07:28] Bad response from server: (500 [error,list]) Link to XMLRPC failed. May be, rTorrent is down?

      Just removed and reinstalled. Still giving that error.
  12. I realize it's probably set to Read Only so nothing goes wrong with the data that Rclone is meant to be backing up to the cloud; in my case I'm doing the reverse of that. But glad it works both ways! I'm still confused as to why I have to run the command that way instead of the default format, though.
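      For anyone else puzzled by the direction: rclone's syntax is rclone sync <source> <destination>, so the argument order alone decides which way files flow. A sketch, using the remote: name and paths from this thread:

          # Default template direction: local /data is the source, OneDrive the destination
          rclone sync /data remote:/Test/

          # Reversed: OneDrive is the source, local /data the destination
          rclone sync remote:/Test/ /data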
  13. Ok, I solved it. It was a silly permissions problem, except I didn't know where to look. I had to edit the settings for the Data Path: Access Mode was set to "Read Only" by default, and I changed it to "Read/Write". But I had to run it this way:

          rclone -v sync remote:/Test/ /data

      which is different from the default format of:

          rclone sync /data $SYNC_DESTINATION:/$SYNC_DESTINATION_SUBPATH
  14. Torrents download as normal, and this happens about 15 minutes after rTorrent is started from unRAID. Getting this error in the log, which results in the GUI not connecting to rTorrent:

          tail: cannot open '' for reading: No such file or directory

      This is what the GUI reports:

          [01.02.2017 15:00:48] WebUI started.
          [01.02.2017 15:00:48] No connection to rTorrent. Check if it is really running. Check $scgi_port and $scgi_host settings in config.php and scgi_port in rTorrent configuration file.
  15. I made some progress. Running rclone -v sync remote:/Test/ data in the docker's advanced settings:

          Executing => rclone -v sync remote:/Test/ data
          2017/02/02 23:35:00 rclone: Version "v1.35" starting with parameters ["rclone" "-v" "sync" "remote:/Test/" "data"]
          2017/02/02 23:35:01 Local file system at /config/data: Modify window is 1s
          2017/02/02 23:35:01 One drive root 'Test': Reading ""
          2017/02/02 23:35:01 One drive root 'Test': Finished reading ""
          2017/02/02 23:35:01 Local file system at /config/data: Waiting for checks to finish
          2017/02/02 23:35:01 Local file system at /config/data: Waiting for transfers to finish
          2017/02/02 23:35:02 unraid-banner.png: Copied (new)
          2017/02/02 23:35:02 Go routines at exit 7
          2017/02/02 23:35:02 Waiting for deletions to finish
          2017/02/02 23:35:02 Transferred:   87.364 kBytes (77.966 kBytes/s)
          Errors:                 0
          Checks:                 0
          Transferred:            1
          Elapsed time:        1.1s

      Rclone creates the folder "data" in Rclone's config folder, and "unraid-banner.png" is copied.

      Running rclone -v sync remote:/Test/ /data in the docker's advanced settings, I get a "read-only file system" error:

          Executing => rclone -v sync remote:/Test/ /data
          2017/02/02 23:41:00 rclone: Version "v1.35" starting with parameters ["rclone" "-v" "sync" "remote:/Test/" "/data"]
          2017/02/02 23:41:01 Local file system at /data: Modify window is 1s
          2017/02/02 23:41:01 One drive root 'Test': Reading ""
          2017/02/02 23:41:01 One drive root 'Test': Finished reading ""
          2017/02/02 23:41:01 Local file system at /data: Waiting for checks to finish
          2017/02/02 23:41:01 Local file system at /data: Waiting for transfers to finish
          2017/02/02 23:41:01 unraid-banner.png: Failed to copy: open /data/unraid-banner.png: read-only file system
          2017/02/02 23:41:01 Local file system at /data: not deleting files as there were IO errors
          2017/02/02 23:41:01 Attempt 1/3 failed with 1 errors and: not deleting files as there were IO errors

      Running rclone sync $SYNC_DESTINATION:/$SYNC_DESTINATION_SUBPATH /data gets me a "directory not found" error, because the $SYNC_DESTINATION:/$SYNC_DESTINATION_SUBPATH variable is not being passed to the command:

          Executing => rclone sync /data
          2017/02/02 23:44:00 Local file system at /data: Waiting for checks to finish
          2017/02/02 23:44:00 Local file system at /data: Waiting for transfers to finish
          2017/02/02 23:44:00 Local file system at /data: not deleting files as there were IO errors
          2017/02/02 23:44:00 Attempt 1/3 failed with 0 errors and: error listing source: Local file system at /config/:: directory not found

      Running the default command (manually putting it into the text box), rclone sync /data $SYNC_DESTINATION:/$SYNC_DESTINATION_SUBPATH, gets me this; I'm not sure what it means:

          Executing => rclone sync /data
          2017/02/02 23:50:00 Local file system at /config/:: Waiting for checks to finish
          2017/02/02 23:50:00 Local file system at /config/:: Waiting for transfers to finish
          2017/02/02 23:50:00 Waiting for deletions to finish
          2017/02/02 23:50:00 Transferred:   0 Bytes (0 Bytes/s)
          Errors:                 0
          Checks:                 0
          Transferred:            0
          Elapsed time:        0s

      It seems to think /data = /config/, even though I had set up the data path to be /mnt/user/Comics/Test/? And $SYNC_DESTINATION:/$SYNC_DESTINATION_SUBPATH always appears as nothing in the actual command. I hope this helps; I'm very confused. The first test kinda worked, except it was creating the folder and syncing to the docker's config folder.
  16. The Hydra UI is showing there's an update to 0.2.195. Should I update to that, or ignore it and wait for the Docker to be updated? I'm always confused by that. Generally speaking, for all dockers that show available updates within their UI, should I just wait for the dockers to be updated from unRAID?
  17. No I didn't figure it out. I posted in the Rclone forum here and someone said it looks like I had the right syntax.
  18. Thanks for that. The Rclone docker config seems to be set up such that the remote is the destination. The default Rclone command is:

          rclone sync /data $SYNC_DESTINATION:/$SYNC_DESTINATION_SUBPATH

      Would I be able to flip that around to:

          rclone sync $SYNC_DESTINATION:/$SYNC_DESTINATION_SUBPATH /data

      so the empty folder on my unRAID will fill up with the files from OneDrive? EDIT: Ok, so simply flipping the command doesn't work; guess that would've been too easy.

          Executing => rclone sync /data
          2016/12/23 00:16:00 Local file system at /data: Waiting for checks to finish
          2016/12/23 00:16:00 Local file system at /data: Waiting for transfers to finish
          2016/12/23 00:16:00 Local file system at /data: not deleting files as there were IO errors
          2016/12/23 00:16:00 Attempt 1/3 failed with 0 errors and: directory not found
          2016/12/23 00:16:00 Local file system at /data: Waiting for checks to finish
          2016/12/23 00:16:00 Local file system at /data: Waiting for transfers to finish
          2016/12/23 00:16:00 Local file system at /data: not deleting files as there were IO errors
          2016/12/23 00:16:00 Attempt 2/3 failed with 0 errors and: directory not found
          2016/12/23 00:16:00 Local file system at /data: Waiting for checks to finish
          2016/12/23 00:16:00 Local file system at /data: Waiting for transfers to finish
          2016/12/23 00:16:00 Local file system at /data: not deleting files as there were IO errors
          2016/12/23 00:16:00 Attempt 3/3 failed with 0 errors and: directory not found
          2016/12/23 00:16:00 Failed to sync: directory not found

      Does anyone know if Rclone can sync from OneDrive to unRAID instead of the default unRAID to OneDrive? I tried a custom command like rsync sync remote:/Test/ /data/, which didn't work either.
  19. Hi everyone. I've set up Rclone and gotten the authorization token for OneDrive. I've attached my config below. Right now there is nothing in the data path; it's a folder I just created on unRAID. The sync destination "remote:To To Read/" refers to the "To To Read" folder in the root of my OneDrive.

      1) When I run Rclone, what will happen? Will Rclone copy the files from the sync destination to the data path? Or will it erase everything at the sync destination because my data path is currently empty?
      2) I don't see a log option when I click on the Rclone docker. How do I see what's going on and whether this is working?

      The background to what I'm trying to do: I have a OneDrive folder on my laptop and would like to stop using that. I want the OneDrive files to reside on my unRAID so they don't occupy space on the laptop.
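      A caution on question 1, assuming the default template direction (local /data as the source): rclone sync makes the destination match the source, deleting destination files that aren't present in the source, so running it with an empty /data as the source could wipe the remote folder. rclone's --dry-run flag shows what a sync would do without transferring or deleting anything:

          # Preview only: prints the copies and deletions a real run would perform
          rclone sync --dry-run /data "remote:To To Read/"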
  20. Thank you, I'll see how that goes! What does that line do?
  21. Hi everyone. It looks like rTorrent is creating files with 644 permissions and folders with 755 permissions. File and folder owners are set to nobody. I can't delete or rename them, even though I'm connecting via SMB from a Mac as a user with read/write access. What can I do?
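      One common approach, assuming the container reads an .rtorrent.rc, is to loosen rTorrent's umask so new files come out world-writable (older rTorrent builds use the bare "umask =" form instead):

          # umask 0000 yields 666 files and 777 directories, so SMB users
          # other than "nobody" can rename and delete them
          system.umask.set = 0000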
  22. I fixed it by changing the email time from 12 to 1am.
  23. If your TV Show share has Cache enabled, you'll see the directory on the Cache drive. unRAID has a mover that moves those files from Cache to the Array. How are you manually activating the mover, and what are you trying to move, TV shows? I might be misunderstanding your problem, but could it be a naming issue? I recently imported a bunch of TV shows into Sonarr, and for a while the mover wouldn't trigger and my cache drive was filling up too (because I was manually putting my files into the Download folder for Sonarr to pick up). I couldn't get it to import manually either. It turned out to be a folder naming problem: Sonarr wasn't recognizing the name or season and didn't know what to do with it. What I did was double-check the folder names of the TV shows, check the names of the TV show files, add the TV show entries to Sonarr's library, and then go to manual import and point it at the folder with the files.
  24. Has anyone tried to get Sonarr to download subtitles from addic7ed.com in post-processing? I've read about a script called Subliminal, but how can I install it in the Sonarr docker environment? Subtitles are released some time after the TV shows themselves, so some kind of timer to look for the subtitles would be necessary.
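      Subliminal is a Python tool, so one possible (if fragile) approach is to install it inside the container with pip and call it from a Sonarr custom script; anything installed this way is lost when the container image updates. A sketch, with the episode path purely illustrative:

          # Inside the Sonarr container (must be re-run after every container update)
          pip install subliminal

          # Fetch English subtitles for a finished episode; addic7ed is among
          # subliminal's providers
          subliminal download -l en "/tv/Show/Season 01/Episode.mkv"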