Posts posted by CrashnBrn
-
4 hours ago, aptalca said:
Now with the default site config, go to https://yoursubdomain.duckdns.org on your mobile device on 4g/3g connection.
But you need to clear your cache first because you had a 301 redirect before, which is cached permanently. Or do it on a new device if you can (or ask a friend to do it). You should get the default homepage.
PS. Don't do 301 redirects unless you're sure you'll stick with it. Do a 302, it's a temporary redirect.
You're 100% correct! I enabled NAT reflection, changed it to a 302 redirect, cleared all my cache and everything started working!
Thanks for your help aptalca!
Edit: Any danger of leaving NAT reflection on?
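For reference, the two redirect types aptalca mentions look like this in nginx (the location paths here are made up for illustration; my real config returns 301 at `location = /`):

```nginx
# 301 = permanent: browsers cache it aggressively, which is what bit me here
location = /old-path { return 301 /htpc; }

# 302 = temporary: safer while you are still testing a setup
location = /new-path { return 302 /htpc; }
```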
-
29 minutes ago, aptalca said:
Post your docker log
Here you go. I removed my email and site. The container is currently stopped. Thanks
[s6-init] making user provided files available at /var/run/s6/etc...exited 0.
[s6-init] ensuring user provided files have correct perms...exited 0.
[fix-attrs.d] applying ownership & permissions fixes...
[fix-attrs.d] done.
[cont-init.d] executing container initialization scripts...
[cont-init.d] 10-adduser: executing...
-------------------------------------
[linuxserver.io logo]
Brought to you by linuxserver.io
We gratefully accept donations at:
https://www.linuxserver.io/donations/
-------------------------------------
GID/UID
-------------------------------------
User uid: 99
User gid: 100
-------------------------------------
[cont-init.d] 10-adduser: exited 0.
[cont-init.d] 20-config: executing...
[cont-init.d] 20-config: exited 0.
[cont-init.d] 30-keygen: executing...
generating self-signed keys in /config/keys, you can replace these with your own keys if required
Generating a 2048 bit RSA private key
writing new private key to '/config/keys/cert.key'
-----
[cont-init.d] 30-keygen: exited 0.
[cont-init.d] 50-config: executing...
Variables set:
PUID=99
PGID=100
TZ=America/Los_Angeles
URL=duckdns.org
SUBDOMAINS=mywebsite
EXTRA_DOMAINS=
ONLY_SUBDOMAINS=true
DHLEVEL=2048
VALIDATION=http
DNSPLUGIN=
EMAIL=myemail
STAGING=
Created donoteditthisfile.conf
Backwards compatibility check. . .
No compatibility action needed
Creating DH parameters for additional security. This may take a very long time. There will be another message once this process is completed
Generating DH parameters, 2048 bit long safe prime, generator 2
This is going to take a long time
DH parameters successfully created - 2048 bits
SUBDOMAINS entered, processing
SUBDOMAINS entered, processing
Only subdomains, no URL in cert
Sub-domains processed are: -d mywebsite.duckdns.org
E-mail address entered: myemail
http validation is selected
Generating new certificate
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Plugins selected: Authenticator standalone, Installer None
Obtaining a new certificate
Performing the following challenges:
http-01 challenge for mywebsite.duckdns.org
Waiting for verification...
Cleaning up challenges
IMPORTANT NOTES:
 - Congratulations! Your certificate and chain have been saved at:
   /etc/letsencrypt/live/mywebsite.duckdns.org/fullchain.pem
   Your key file has been saved at:
   /etc/letsencrypt/live/mywebsite.duckdns.org/privkey.pem
   Your cert will expire on 2018-06-23. To obtain a new or tweaked version of this
   certificate in the future, simply run certbot again. To non-interactively renew
   *all* of your certificates, run "certbot renew"
 - Your account credentials have been saved in your Certbot configuration directory
   at /etc/letsencrypt. You should make a secure backup of this folder now. This
   configuration directory will also contain certificates and private keys obtained
   by Certbot so making regular backups of this folder is ideal.
 - If you like Certbot, please consider supporting our work by:
   Donating to ISRG / Let's Encrypt: https://letsencrypt.org/donate
   Donating to EFF: https://eff.org/donate-le
[cont-init.d] 50-config: exited 0.
[cont-init.d] done.
[services.d] starting services
[services.d] done.
Server ready
Signal handled: Terminated.
[cont-finish.d] executing container finish scripts...
[cont-finish.d] done.
[s6-finish] syncing disks.
-
17 hours ago, aptalca said:
Your problem is likely nat loopback
Try it on your cell phone with a cell connection. If it works, it is nat loopback
Nope, still not working. I removed the container and appdata folder and tried again. I realized I can't even get to the default nginx page (before I change the default file). I just get "Site can't be reached, URL took too long to respond". I'm trying to figure out if it's pfSense blocking something or nginx not working properly. I assume pfSense is fine since Apache worked without issues.
I do see this in my pfsense logs:
nginx: 2018/03/23 13:22:58 [error] 33793#100135: *847 open() "/usr/local/www/sonarr" failed (2: No such file or directory), client: 1.1.1.1(changed), server: , request: "GET /sonarr HTTP/1.1", host: "website.duckdns.org(changed)"
-
Hi!
I've been trying to get this working all week and have no clue why it's not working for me. My conf file is below. When I go to https://domain.duckdns.org I get nothing; it just spins. I see the request pass through my firewall (pfSense). I'm wondering if there could be something wrong with nginx? I'm NAT'ing 81 and 443 externally. I've replaced my internal IP.
I see "Server ready" in the container's logs. I'm wondering if nginx is dropping the requests? Can anyone help point me in the right direction for troubleshooting? I've had Apache working with no problems for a couple of years. I feel like I'm missing something obvious.
TIA
upstream backend {
    server 1.1.1.1:19999;
    keepalive 64;
}

server {
    listen 443 ssl default_server;
    # listen 80 default_server;

    root /config/www;
    index index.html index.htm index.php;

    server_name _;

    ssl_certificate /config/keys/letsencrypt/fullchain.pem;
    ssl_certificate_key /config/keys/letsencrypt/privkey.pem;
    ssl_dhparam /config/nginx/dhparams.pem;
    ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';
    ssl_prefer_server_ciphers on;

    client_max_body_size 0;

    location = / {
        return 301 /htpc;
    }

    location /nzbget {
        include /config/nginx/proxy.conf;
        proxy_pass http://1.1.1.1:6789/nzbget;
    }

    location /sonarr {
        include /config/nginx/proxy.conf;
        proxy_pass http://1.1.1.1:8989/sonarr;
    }

    location /couchpotato {
        include /config/nginx/proxy.conf;
        proxy_pass http://1.1.1.1:5050/couchpotato;
    }

    # location /radarr {
    #     include /config/nginx/proxy.conf;
    #     proxy_pass http://1.1.1.1:7878/radarr;
    # }

    # location /downloads {
    #     include /config/nginx/proxy.conf;
    #     proxy_pass http://1.1.1.1:8112/;
    #     proxy_set_header X-Deluge-Base "/downloads/";
    # }

    location ~ /netdata/(?<ndpath>.*) {
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://backend/$ndpath$is_args$args;
        proxy_http_version 1.1;
        proxy_pass_request_headers on;
        proxy_set_header Connection "keep-alive";
        proxy_store off;
    }
}
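Side note for anyone borrowing this config: since only 443 is listened on here (the port-80 listener is commented out), plain-HTTP requests get nothing back. A minimal sketch of an optional HTTP-to-HTTPS redirect server block (a suggestion, not part of my actual config; using a 302 while testing):

```nginx
server {
    listen 80 default_server;
    server_name _;
    # 302 while testing; switch to 301 only once everything works
    return 302 https://$host$request_uri;
}
```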
-
Hi Guys!
I upgraded both parity drives. I now want to swap the old parity drives to data drives (one at a time).
Is it the same procedure? Pull out the data drive, put in the old parity drive, and assign it to the slot for it to rebuild? Or is it different since it was once a parity drive in the array?
Thanks!
-
I always end up selling my old drives.
-
36 minutes ago, Iormangund said:
How did you manage to get the release notes btw? I am still waiting for them to update the bios for X11SSH-CTF and it's getting a bit ridiculous now considering every other X11SSH variant now has the update. Never gotten a reply from them whenever I try to contact them to ask about it.
PM'ed
-
19 minutes ago, Smackover said:
Last I checked, 2TB on BB B2 was $10/mo, so it seems to only be cost effective if you need less than that. Otherwise, just stick with CrashPlan Business for your unRAID box, and backup all other machines to it.
Other than that, I'm still researching.
To add to that, you can use Duplicati to back up to B2 instead of CloudBerry.
-
There is a Windows Server Core docker container; I wonder if it would be possible to run that and use a Windows client from something like Carbonite to back up. I'm not quite sure how licensing would work, though.
-
On 8/17/2017 at 3:15 PM, Iormangund said:
Ty, unfortunately no 2.0b yet for my board, X11SSH-CTF.
X11SSH-LN4F appears to be the only one of the X11SSH range with 2.0b so far.
Guessing it can't be long now till it's out for mine.
It's notoriously difficult getting release notes for Supermicro boards, no clue why. There should be some info in the bios update download, but it's pretty limited. Good luck getting any info from them, I've emailed them before with bios/mobo questions and never got a reply to any of them.
It may only be for a few boards, but I found it under the support BIOS updates for the specific board, at the very bottom, listed as beta. (H270 mobo)
Here are the release notes for the X11SSH 2.0b bios:
Product Name: X11SSH-F/LN4-F
Revision: 2.0b
Previous Revision: 2.0a
Release Date: 7/27/2017
Update Category: Critical
Dependencies:
None
Important Notes:
None
Enhancements:
1. Changed BIOS revision to 2.0b.
2. Updated Intel Kaby Lake RC/SI package 4.1.0.6 PLR2.
3. Updated DT_P_123 for Kaby Lake-S B0 stepping microcode M2A906E9_0000005E and Skylake-S
R0/S0 stepping MCU M36506E3_000000BA.
4. Set enabled SATA Hot Plug as default.
5. Added support for new Micron ECC_On_Die chip(I-Die).
6. Displayed SGX-related items.
7. Updated Kaby Lake BIOS/SINIT ACM 1.2.0.
8. Added SumBbsSupportFlag into DAT file.
9. Added TPM PCR measurements for PCR[1], PCR[2], and PCR[6].
10. Removed the "Unrecoverable Media control failure" event log from BMC.
11. Added a workaround to clear onboard LAN device UR, CE status.
New Features:
None
Fixes:
1. Fixed problem of valid bit being checked before IPMI CMOS clear flag.
2. Fixed system reset or hanging after Watchdog function is enabled during BIOS update.
3. Fixed issue of SUM TC306 and TC317 failing in certain configuration cases.
4. Fixed inability of the system to enter "Recovery mode" automatically if BIOS ACPI table in the Main
Block is corrupted.
5. Fixed issue of system halting at 0xB2 when disabling JPG1.
6. Fixed incorrect CECC DIMM location being reported in BMC SEL log.
7. Fixed problem of IPMI device _CRS being reported as 0xca2 or 0xca3.
8. Fixed problem of "file size is zero" error occurring when using SUM in-band command
"GetCurrentBiosCfg".
9. Fixed inability to find correct Memory CECC DIMM location through SD5.
10. Fixed problem of "No DIMM Information" showing for Memory CECC in Event log of BIOS Setup.
11. Fixed problem of recovery from JBR1/FFS Check hanging at 0x90.
-
9 hours ago, johnnie.black said:
Bios 2.0b with the HT fix is now available for my Supermicro X11SSM-F, didn't check but it should also be available for other X11 models.
Thanks for the update. I see the 2.0b out for my SM.
Any way to find release notes aside from emailing SM and hoping they send them?
-
3 hours ago, yippy3000 said:
There is an update but there aren't any release notes so I asked support and they said they did not think it included the micro-code update.
Is there any way to tell if I am running the fixed microcode for the CPU after I do the BIOS update?
Is it a supermicro? I wish they had release notes for bios updates.
-
9 minutes ago, Squid said:
Honestly don't know. I do know that when I played around with CP and creating metadata for Kodi, I was less than impressed with the results. (Or more to the point, the lack of brevity). I prefer Kodi to grab the metadata itself.
Yeah a lot of people go that route, and if that's the case Radarr or CP work. But I like to have my metadata saved locally in the folder.
-
Correct me if I'm wrong but from what I've seen Radarr doesn't grab metadata yet. Did they add that? I know that CP grabs a ton of metadata for Kodi.
-
16 hours ago, antaresuk said:
and I was amazed at how limited the GUI is. No Squid app store goodness, can't even install BitTorrent Sync without SSH. Unraid for the win. Synology will act as a dumb backup server from now on.
While I love unRAID, DSM 6.x is not bad at all. Which version of DSM is on your Synology?
-
Welcome back! 2010 seems like a few years ago.
-
Edit: I think I have it figured out. This might have been my fault. I forgot to shut down CP so I think the issue was due to that. But for some reason Radarr isn't getting any metadata. Guess I'll work on that one next. Anyone run into that?
Edit2: Looks like radarr can't do metadata yet?
Just wanted to update regarding the partial file.
This only happens with Radarr. Sonarr as well as Couchpotato don't leave any additional files. I've tested with multiple movies and shows.
Is anyone else experiencing something similar?
Edit: Also getting the following error
Import failed, path does not exist or is not accessible by Radarr:
My docker paths to my download folder are identical for nzbget and radarr.
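For context on the "path does not exist" error: Radarr only sees the path nzbget reports as translated through its own volume mappings, so both containers need the same host folder mounted at the same container-side path. A compose-style sketch (the host path here is hypothetical, not my actual mapping):

```yaml
# Hypothetical host path; the key is that both containers map the SAME
# host folder to the SAME container path (/downloads), so the path nzbget
# hands to Radarr also resolves inside the Radarr container.
services:
  nzbget:
    image: linuxserver/nzbget
    volumes:
      - /mnt/user/downloads:/downloads
  radarr:
    image: linuxserver/radarr
    volumes:
      - /mnt/user/downloads:/downloads
```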
-
35 minutes ago, Squid said:
While the file is being transferred, it's named ...partial~. Once the transfer is completed, the partial file should be deleted. Does the partial file show up if you browse the folder via the shares tab (click on the folder icon)?
Yes I see it there as well
-
Hi guys,
I have the container set up and working, but I noticed that once it downloads a movie, alongside the movie there is a file with the same name and size that has a .partial~ extension.
So it looks like
movie.mkv
movie.mkv.partial~
Does anyone know why that file is there?
Thanks.
-
Updated with no issues.
-
Hi Guys,
I have everything running and working, but does anyone know how often the database syncs?
Example: a show downloads, but it does not immediately show up in Kodi, even if I re-sync the plugin in Kodi. I need to go to the Emby server and refresh the show; then it shows up immediately in Kodi.
Is there a trick or a setting to have it show up right away?
Thanks.
-
On 2/23/2017 at 3:42 PM, CrashnBrn said:
I have that same board, I'll try upgrading this weekend and see if I run into the same issue.
Going back to 6.2.x is just copying the bzimage file back?
Just updated to 6.3.2 and ran into no issues. Win10 VM came up, all docker containers came up. No issues that I can see. I will update if anything goes wrong.
-
11 hours ago, burtjr said:
Do you have a SuperMicro MB? If so, several people including myself can only boot into GUI mode. Mine is a SuperMicro X11SSH-F LN4, but it may affect some earlier models too.
I have that same board, I'll try upgrading this weekend and see if I run into the same issue.
Going back to 6.2.x is just copying the bzimage file back?
best offsite backup solution atm
in Lounge
Posted
I actually just set up my Backblaze B2 backup yesterday (moving from CrashPlan). You have to use a docker container like Duplicati or rclone to back up to Backblaze, S3, or similar.