ritalin

Members · 13 posts



  1. For anyone running into issues following these instructions: if you are running a non-bridged Google Wifi or OnHub, there is one additional change you need to make. By default, Google Wifi uses port 80 to tell you to go download the Google Wifi app. To fix this, switch off the default Google DNS inside the Google Wifi app. I switched mine over to Cloudflare's new public DNS (1.1.1.1 and 1.0.0.1), saved the changes, then restarted the LetsEncrypt container and all was well. Not sure if this was already posted elsewhere, but I spent the past 48 hours banging my head trying to figure out what the issue was, so I figured the least I could do was post this up in hopes of sparing someone else a concussion.
  2. Yes, the GUI becomes completely unresponsive, with no error listed in the UI. After the GUI becomes unresponsive, if I try to restart or stop the container, it just tells me that it failed to do so, with no error beyond that. I'll grab the logs next time I attempt to stop it and post those for you. I might try the backup on the Windows client, as I've read there are some additional options that are not available for the Linux client yet. I really wanted to avoid pulling data from one system to another on the network, only to push it out to the internet from there. But if it works, then maybe I can look into just running a light Windows VM until such time as the Linux client reaches parity with the Windows client.
  3. Hey Djoss. I'm running into an issue with this docker and have a question about its usage.

     First, the problem: if I attempt to stop a backup that is in progress from inside the Cloudberry UI, the docker hangs. All attempts to stop or restart the docker from the UnRaid UI fail, and restarting the UnRaid server during this hang ends up hanging the UnRaid UI. Not really sure what is going on with that. Oddly enough, if I delete a running backup instance inside the Cloudberry UI, it does not cause a hang.

     As to the usage issue: I just wanted to ask if it's possible to simply back up files alone. I am coming from an Rclone backup instance that went crazy and started creating duplicates all over the place, and I was hoping to set up something similar to Rclone Sync in Cloudberry. I have the docker set to have access to /mnt/. I need to do it this way because I am backing up to Google Drive, which is mounted on my system under /mnt/disks/Google.

     The first problem is that when the backup runs, it puts everything on the Google drive under My Drive/ServerMedia/CBB_Servername/Storage/user/media/..., whereas my Rclone backup was saving everything to My Drive/ServerMedia/. Is there no way to remove the CBB_Servername subfolder? Is there no way to avoid backing up the entire path from /storage to /media? This is not a big issue; I can make it work by simply moving all of the existing files to this new subdirectory. It just seems unnecessary to have the entire path created.

     The bigger issue is that for every file it backs up, it creates two additional subfolders: one is a folder with the filename, and the second, under it, is a date. Let's say the files I'm backing up are located at .../ABC/test.srt. When it backs up, the actual file then ends up at .../ABC/test.srt/11222017/test.srt. Is there no way around this? Can I not simply back up the files in their directories as they are?
  4. Ok, I'm back. As stated before, I have my reverse proxy set up for my Home-Assistant docker, and it's working well. I'm trying to get Sonarr, Couchpotato and NZBGet set up now. So far I have Couchpotato working perfectly. Sonarr works as well, but the address comes out differently once it resolves: it shows up as https://couchpotato.mydomain.com/mydomain.com. NZBGet, on the other hand, resolves to the correct URL, but it simply loads up the main "Welcome" webpage. I'm overlooking something with regard to adding proxies to the default file. Any help is appreciated.

     EDIT: Resolved. I avoided the use of the default file and created 3 extra files in the same directory as the default.
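     A per-app site-conf along the lines of that resolution might look like the sketch below. This is a minimal, hypothetical example: the file name, subdomain, certificate paths and backend address/port are placeholders for your own values, not taken from the post.

     ```nginx
     # /config/nginx/site-confs/sonarr  (hypothetical file name; one file per app)
     server {
         listen 443 ssl;
         server_name sonarr.mydomain.com;

         # Same certificate/key the LetsEncrypt container already manages
         ssl_certificate /config/keys/letsencrypt/fullchain.pem;
         ssl_certificate_key /config/keys/letsencrypt/privkey.pem;

         location / {
             # Point at the app's LAN address; each app gets its own file,
             # so the default site-conf stays untouched.
             proxy_pass http://192.168.1.2:8989;
             proxy_set_header Host $host;
             proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
         }
     }
     ```

     Keeping one such file per app (e.g. sonarr, couchpotato, nzbget) keeps each proxy independent of the default file.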
  5. Home-Assistant Docker with LetsEncrypt Docker setup on a subdomain

     Considering I spent/wasted a good deal of time running around in circles trying to get this working and looking at various locations for info, I thought it would be nice to share my setup just in case someone else is going through the same thing. Here is how I have my subdomain encrypted and set up as a reverse proxy through nginx in LetsEncrypt.

     My letsencrypt docker setup
     My Router's Firewall

     The http section of configuration.yaml for Home-Assistant:

         http:
           api_password: MyPassWord
           base_url: 192.168.1.2:8123

     A secondary file named "ha" in the /nginx/site-confs directory containing the following code:

         map $http_upgrade $connection_upgrade {
             default upgrade;
             '' close;
         }

         server {
             # Update this line to be your domain
             server_name SUB.MYDOMAIN.com;

             # These shouldn't need to be changed
             listen 80 default_server;
             #listen [::]:80 default_server ipv6only=on;
             return 301 https://$host$request_uri;
         }

         server {
             # Update this line to be your domain
             server_name SUB.MYDOMAIN.com;

             # Ensure these lines point to your SSL certificate and key
             ssl_certificate /config/etc/letsencrypt/live/MYDOMAIN.COM/fullchain.pem;
             ssl_certificate_key /config/etc/letsencrypt/live/MYDOMAIN.COM/privkey.pem;

             # Use these lines instead if you created a self-signed certificate
             # ssl_certificate /etc/nginx/ssl/cert.pem;
             # ssl_certificate_key /etc/nginx/ssl/key.pem;

             # Ensure this line points to your dhparams file
             ssl_dhparam /config/nginx/dhparams.pem;

             # These shouldn't need to be changed
             listen 443 ssl;
             add_header Strict-Transport-Security "max-age=31536000; includeSubdomains";
             ssl on;
             ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
             ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH:!aNULL:!eNULL:!EXPORT:!DES:!MD5:!PSK:!RC4";
             ssl_prefer_server_ciphers on;
             ssl_session_cache shared:SSL:10m;
             proxy_buffering off;

             location / {
                 # Update this line to be your HA server's local ip and port
                 proxy_pass http://192.168.1.2:8123;
                 proxy_set_header Host $host;
                 proxy_redirect http:// https://;
                 proxy_http_version 1.1;
                 proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                 proxy_set_header Upgrade $http_upgrade;
                 proxy_set_header Connection $connection_upgrade;
             }
         }

     Start up your HA and LetsEncrypt dockers and you should now be able to securely access Home-Assistant from outside your network. Thank you again to Tyler, Aptalca and CHBMB for your help.
  6. TYLER!!! Thank you man. Seriously, thank you a lot. I had to fiddle with the file provided in the link a bit, but I finally got it up and running. I'm going to post again after this with the settings I'm using, just in case anyone else comes across this thread needing help with Home-Assistant and LetsEncrypt.
  7. Alright, I think I'm making some progress. I've got it working, but the redirect-all-traffic-to-https portion is causing issues. If I uncomment it, the page fails to load whether or not I manually specify https. Is there something wrong with the server_name portion?

         #server {
         #    listen 80;
         #    server_name ha.mydomain.com;
         #    return 301 https://$server_name$requests_uri;
         #}

     Other than that, the last hurdle is that I can't log into Home-Assistant when I load up the secure link, either via https://ha.mydomain.com or https://192.168.1.2:8123. After I enter the password, it just spins for a bit and then I get a notice below the password field of "Unable to Connect". Have you ever run into an issue like that with a proxy? REALLY, thank you for the help.
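     Editor's note for anyone reusing that snippet: nginx's built-in variable for the original request path and query string is $request_uri; the quoted block references $requests_uri, which nginx does not define, and an unknown variable prevents nginx from starting, which matches the page failing to load. A standard HTTP-to-HTTPS redirect block looks like:

     ```nginx
     server {
         listen 80;
         server_name ha.mydomain.com;
         # $request_uri (not $requests_uri) is the built-in nginx variable
         # carrying the full original URI, so the path survives the redirect.
         return 301 https://$server_name$request_uri;
     }
     ```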
  8. Sorry man, I'm banging my head against the desk over here. Anything you can spot that you think is wrong, I'd appreciate the help.

     Here is my letsencrypt docker setup
     My Router's Firewall

     Here is the ha file sitting in /mnt/user/appdata/letsencrypt/nginx/site-confs/ with the default file:

         server {
             listen 80;
             server_name ha.mydomain.com;
             return 301 https://$server_name$requests_uri;
         }

         server {
             listen 443 ssl;
             server_name ha.mydomain.com;
             root /config/www;
             index index.html index.htm index.php;

             ### SSL Certificates
             ssl_certificate /config/keys/letsencrypt/fullchain.pem;
             ssl_certificate_key /config/keys/letsencrypt/privkey.pem;

             ### Diffie–Hellman key exchange ###
             ssl_dhparam /config/nginx/dhparams.pem;

             ### SSL Ciphers
             ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';

             ### Extra Settings ###
             ssl_prefer_server_ciphers on;
             ssl_session_cache shared:SSL:10m;

             ### Add HTTP Strict Transport Security ###
             add_header Strict-Transport-Security "max-age=63072000; includeSubdomains";
             add_header Front-End-Https on;

             client_max_body_size 0;

             location / {
                 proxy_pass http://192.168.1.2:8123;
             }
         }

     Does the content of the default file matter at all, if the ha file is what's doing the reverse proxy? Since Home-Assistant is running in another docker on the same machine, should the proxy_pass be the IP or localhost, or does it even matter? I've been working on this for so long now, and I don't fully comprehend the syntax of this ha file; I feel like I'm overlooking something stupid. Thanks again for any help offered.
  9. Ok, I'm game. Am I still on the right path with the ha file in the same directory as the default file?
  10. Hello all, hoping for a little help. I've been at this for two days now, and don't have much hair left. I'm attempting to get LetsEncrypt set up for my Home-Assistant.io docker, but I'm running into a few issues.

      The first is that I can't seem to get HA to see my certs; I constantly run into the following error. /certs/... is a path set up in the Home-Assistant docker pointing to /mnt/user/appdata/letsencrypt/. I'm positive this is a permissions error, as I can get around it by copying the pem files out of /archive/myserver.com/ and dropping them directly into the Home-Assistant directory "/mnt/user/appdata/home-assistant". Not quite sure how to change the permissions; still new to all this.

      The second issue I am having is getting the proxy to work correctly. The page is not resolving ("Unable to connect"). https://myservername.com resolves correctly and shows the "Welcome to our Server" page. The sub I have set up through NoIP is ha.myserver.com. I followed the instructions listed here by CHBMB for setting up NextCloud. So in /letsencrypt/nginx/site-confs I have a file named "ha" with the following in it:
          server {
              listen 80;
              server_name ha.mydomain.com;
              return 301 https://$server_name$requests_uri;
          }

          server {
              listen 443 ssl;
              server_name ha.mydomain.com;
              root /config/www;
              index index.html index.htm index.php;

              ### SSL Certificates
              ssl_certificate /config/keys/letsencrypt/fullchain.pem;
              ssl_certificate_key /config/keys/letsencrypt/privkey.pem;

              ### Diffie–Hellman key exchange ###
              ssl_dhparam /config/nginx/dhparams.pem;

              ### SSL Ciphers
              ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';

              ### Extra Settings ###
              ssl_prefer_server_ciphers on;
              ssl_session_cache shared:SSL:10m;

              ### Add HTTP Strict Transport Security ###
              add_header Strict-Transport-Security "max-age=63072000; includeSubdomains";
              add_header Front-End-Https on;

              client_max_body_size 0;

              location / {
                  proxy_pass https://192.168.1.2:8123;
              }
          }

      The following is the pertinent information from the configuration.yaml file in my Home-Assistant install:

          http:
            api_password: MyPassword
            base_url: 192.168.1.2:8123
            ssl_certificate: /config/fullchain1.pem
            ssl_key: /config/privkey1.pem

      Here is some additional info from the HA site regarding the http section of configuration.yaml: https://home-assistant.io/components/http/
  11. Hello Joch, any chance this could be altered to work with Amazon's new Cloud Storage? It is, after all, just a consumer-facing (priced) S3 option. Thanks, Bill
  12. Sonarr to RuTorrent Seedbox corruption

      Hello again binhex. As I stated in the Plex thread, I'm brand new to unraid but fairly competent. Everything has been going well with my build-out, and past this next issue I have only SFTP access left and my build is complete. Since the Plex docker is working so well, I decided to go with your Sonarr client as well.

      I'm running into an issue though. I have a seedbox with dediseedbox, which houses RuTorrent. I have Sonarr set up to feed torrents over there (fallback plan) through httprpc. It seems to connect just fine, and I get no error during the connection test. The trouble shows up when I add a torrent: it does communicate with RuTorrent, and it does push a file, but the file seems to be corrupted. It shows up in RuTorrent as the hash of the magnet link with an extension of .meta, and in the stopped state. If I wait about 15 minutes and force it to start, it does eventually rename the file based on the magnet link and start downloading. One other fun bit of info: after I force it to start, it strips the label applied by Sonarr. This is obviously not the automated method I was aiming for.

      It seems to be a known issue that was resolved: https://www.reddit.com/r/seedboxes/comments/3lqmnq/local_sonarr_to_seedbox/ I'm just not sure if your build includes this fix.

      PS - After that softball Plex question, bet you thought I was going to give you another easy one.
  13. (SOLVED) Hello binhex, I am brand new to UnRaid, so bear with me. I have everything set up and running smoothly after several days of transferring data to my array. I pretty much have a handle on dockers and had Plex installed and set up without much issue. Now it's time to sideload some plug-ins for Plex, but I don't seem to have permission to write to anything in the Plex Media Server folder:

          /mnt/cache/application/Plex Media Server
          drwxrwxr-x  4 nobody users    88 Mar 16 12:16 ./
          drwxrwxrwx  5 nobody users    73 Mar 16 22:04 ../
          drwxr-xr-x 11 nobody users   204 Mar 16 12:18 Plex\ Media\ Server/
          -rw-r--r--  1 root   root    162 Mar 15 00:35 perms.txt
          -rw-r--r--  1 root   root  58267 Mar 16 23:14 supervisord.log
          drwxr-xr-x  7 nobody users  4096 Mar 16 23:14 transcode/

      What am I missing?

      EDIT: I've gone into the docker settings and changed the PUID and PGID to the user I want to have access. This has changed the permissions correctly on the transcode folder, but Plex Media Server is still showing nobody:users. Do I need to delete that perms.txt file and restart the docker?

      EDIT 2: Yep, that seems to have done the trick. So for any noobs out there, if you run into the same problem, do the following:
      1. SSH into your server, log in, and run the command "id <username>" (replace <username> with the user that you want to have write access).
      2. Make note of the UID and GID numbers that represent this user.
      3. Open the settings (Edit) for your Plex docker application and click the Advanced switch at the top.
      4. Enter the UID number in the PUID field and the GID number in the PGID field. Click Save.
      5. Browse to the folder where your Plex Media Server install resides. In this folder, along with the Plex folder, is a file called perms.txt; delete this file.
      6. Now restart your Plex docker application. Done.
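      The lookup steps above can be sketched in the shell as follows. This is a minimal sketch: it queries the current user (pass a username, e.g. "id -u someuser", to look up a different account), and the appdata path and container name in the comments are examples, not values from the post.

      ```shell
      # Find the numeric IDs that the docker template's PUID/PGID fields expect.
      PUID=$(id -u)   # numeric user ID of the current user
      PGID=$(id -g)   # numeric group ID of the current user
      echo "PUID=$PUID PGID=$PGID"

      # After saving the new PUID/PGID in the docker template, delete the cached
      # permissions file and restart the container (example path and name only):
      #   rm "/mnt/cache/application/perms.txt"
      #   docker restart plex
      ```
      
      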