CrashnBrn

Everything posted by CrashnBrn

  1. I'm in need of a way to scan and organize my receipts. These include small paper receipts from stores, online receipts, and full-page receipts (from doctors). Does anyone have a good hardware/software solution? I want to be able to tag the receipts so I can easily find what I'm looking for, e.g. TV, hand surgery, and so forth. I don't mind paying a one-time fee for an app or service but would prefer something free. I'm tired of having folders overflowing with receipts for ongoing projects. Thanks!
  2. I actually just set up my Backblaze B2 backup yesterday (moving from CrashPlan). You have to use a Docker container like Duplicati or rclone to back up to Backblaze B2, S3, or similar.
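For reference, a minimal rclone sketch of what that looks like (the remote name, bucket, and paths here are placeholders, not my actual setup):
# one-time: create a B2 remote interactively (needs your B2 account ID and application key)
rclone config
# then sync a share to the bucket; "b2remote" and "unraid-backups" are made-up names
rclone sync /mnt/user/backups b2remote:unraid-backups/backups --transfers 8 --fast-list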
  3. You're 100% correct! I enabled NAT reflection, changed it to a 302 redirect, cleared all my cache, and everything started working! Thanks for your help, aptalca! Edit: Any danger of leaving NAT reflection on?
  4. Here you go. I removed my email and site. The container is currently stopped. Thanks
[s6-init] making user provided files available at /var/run/s6/etc...exited 0.
[s6-init] ensuring user provided files have correct perms...exited 0.
[fix-attrs.d] applying ownership & permissions fixes...
[fix-attrs.d] done.
[cont-init.d] executing container initialization scripts...
[cont-init.d] 10-adduser: executing...
-------------------------------------
          _         ()
         | |  ___   _    __
         | | / __| | |  /  \
         | | \__ \ | | | () |
         |_| |___/ |_|  \__/

Brought to you by linuxserver.io
We gratefully accept donations at:
https://www.linuxserver.io/donations/
-------------------------------------
GID/UID
-------------------------------------
User uid:    99
User gid:    100
-------------------------------------
[cont-init.d] 10-adduser: exited 0.
[cont-init.d] 20-config: executing...
[cont-init.d] 20-config: exited 0.
[cont-init.d] 30-keygen: executing...
generating self-signed keys in /config/keys, you can replace these with your own keys if required
Generating a 2048 bit RSA private key
........................+++
...............................................................................+++
writing new private key to '/config/keys/cert.key'
-----
[cont-init.d] 30-keygen: exited 0.
[cont-init.d] 50-config: executing...
Variables set:
PUID=99
PGID=100
TZ=America/Los_Angeles
URL=duckdns.org
SUBDOMAINS=mywebsite
EXTRA_DOMAINS=
ONLY_SUBDOMAINS=true
DHLEVEL=2048
VALIDATION=http
DNSPLUGIN=
EMAIL=myemail
STAGING=

Created donoteditthisfile.conf
Backwards compatibility check. . .
No compatibility action needed
Creating DH parameters for additional security. This may take a very long time. There will be another message once this process is completed
Generating DH parameters, 2048 bit long safe prime, generator 2
This is going to take a long time
................................................+..............+.....................++*++*
DH parameters successfully created - 2048 bits
SUBDOMAINS entered, processing
SUBDOMAINS entered, processing
Only subdomains, no URL in cert
Sub-domains processed are: -d mywebsite.duckdns.org
E-mail address entered: myemail
http validation is selected
Generating new certificate
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Plugins selected: Authenticator standalone, Installer None
Obtaining a new certificate
Performing the following challenges:
http-01 challenge for mywebsite.duckdns.org
Waiting for verification...
Cleaning up challenges
IMPORTANT NOTES:
 - Congratulations! Your certificate and chain have been saved at:
   /etc/letsencrypt/live/mywebsite.duckdns.org/fullchain.pem
   Your key file has been saved at:
   /etc/letsencrypt/live/mywebsite.duckdns.org/privkey.pem
   Your cert will expire on 2018-06-23. To obtain a new or tweaked version of this
   certificate in the future, simply run certbot again. To non-interactively renew
   *all* of your certificates, run "certbot renew"
 - Your account credentials have been saved in your Certbot configuration directory at
   /etc/letsencrypt. You should make a secure backup of this folder now. This
   configuration directory will also contain certificates and private keys obtained by
   Certbot so making regular backups of this folder is ideal.
 - If you like Certbot, please consider supporting our work by:
   Donating to ISRG / Let's Encrypt: https://letsencrypt.org/donate
   Donating to EFF: https://eff.org/donate-le
[cont-init.d] 50-config: exited 0.
[cont-init.d] done.
[services.d] starting services
[services.d] done.
Server ready
Signal handled: Terminated.
[cont-finish.d] executing container finish scripts...
[cont-finish.d] done.
[s6-finish] syncing disks.
  5. Nope, still not working. I removed the container and appdata folder and tried again. I realized I can't even get to the default nginx page (before I change the default file); I just get "Site can't be reached, URL took too long to respond". I'm trying to figure out if it's pfSense blocking something or nginx not working properly. I assumed pfSense is fine, since Apache worked without issues. I do see this in my pfSense logs:
nginx: 2018/03/23 13:22:58 [error] 33793#100135: *847 open() "/usr/local/www/sonarr" failed (2: No such file or directory), client: 1.1.1.1(changed), server: , request: "GET /sonarr HTTP/1.1", host: "website.duckdns.org(changed)"
  6. Hi! I've been trying to get this working all week and have no clue why it's not working for me. My conf file is below. When I go to https://domain.duckdns.org I get nothing; it just spins. I see the request pass through my firewall (pfSense). I'm wondering if there could be something wrong with nginx? I'm NAT'ing 81 and 443 externally. I've replaced my internal IP. I see "Server ready" in the logs for the container. I'm wondering if nginx is dropping the requests? Can anyone help point me in the right direction for troubleshooting? I've had Apache working with no problems for a couple of years. I feel like I'm missing something obvious. TIA
upstream backend {
    server 1.1.1.1:19999;
    keepalive 64;
}

server {
    listen 443 ssl default_server;
    # listen 80 default_server;

    root /config/www;
    index index.html index.htm index.php;

    server_name _;

    ssl_certificate /config/keys/letsencrypt/fullchain.pem;
    ssl_certificate_key /config/keys/letsencrypt/privkey.pem;
    ssl_dhparam /config/nginx/dhparams.pem;
    ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';
    ssl_prefer_server_ciphers on;

    client_max_body_size 0;

    location = / {
        return 301 /htpc;
    }

    location /nzbget {
        include /config/nginx/proxy.conf;
        proxy_pass http://1.1.1.1:6789/nzbget;
    }

    location /sonarr {
        include /config/nginx/proxy.conf;
        proxy_pass http://1.1.1.1:8989/sonarr;
    }

    location /couchpotato {
        include /config/nginx/proxy.conf;
        proxy_pass http://1.1.1.1:5050/couchpotato;
    }

    # location /radarr {
    #     include /config/nginx/proxy.conf;
    #     proxy_pass http://1.1.1.1:7878/radarr;
    # }

    # location /downloads {
    #     include /config/nginx/proxy.conf;
    #     proxy_pass http://1.1.1.1:8112/;
    #     proxy_set_header X-Deluge-Base "/downloads/";
    # }

    location ~ /netdata/(?<ndpath>.*) {
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://backend/$ndpath$is_args$args;
        proxy_http_version 1.1;
        proxy_pass_request_headers on;
        proxy_set_header Connection "keep-alive";
        proxy_store off;
    }
}
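For anyone debugging the same thing, a quick way to separate nginx from pfSense is to hit nginx directly, skipping the firewall; the container name, LAN IP, and port below are placeholders for whatever your setup actually uses:
# test nginx from inside the container itself (curl may need installing first: apk add --no-cache curl)
docker exec -it letsencrypt curl -vk https://localhost/sonarr
# then from another machine on the LAN, against the host's mapped port
curl -vk https://HOST_LAN_IP:443/sonarr
If the first works and the second (or the external request) doesn't, the problem is port forwarding / NAT rather than nginx.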
  7. Hi Guys! I upgraded both parity drives. I now want to swap the old parity drives in as data drives (one at a time). Is it the same procedure? Pull out the data drive, put in the old parity drive, and assign it to the slot for it to rebuild? Or is it different since it was once a parity drive in the array? Thanks!
  8. I always end up selling my old drives.
  9. I love it! Using it now. Thanks!
  10. To add to that, you can use Duplicati to back up to B2 instead of CloudBerry.
  11. There is a Windows Server Core Docker container; I wonder if it would be possible to run that and use a Windows client from something like Carbonite to back up. I'm not quite sure how licensing would work, though.
  12. Thanks for the update. I see 2.0b is out for my SM. Any way to find release notes aside from emailing SM and hoping they send them?
  13. Is it a Supermicro? I wish they had release notes for BIOS updates.
  14. Yeah, a lot of people go that route, and if that's the case, Radarr or CP work. But I like to have my metadata saved locally in the folder.
  15. Correct me if I'm wrong, but from what I've seen, Radarr doesn't grab metadata yet. Did they add that? I know that CP grabs a ton of metadata for Kodi.
  16. While I love unRAID, DSM 6.x is not bad at all. Which version of DSM is on your Synology?
  17. Welcome back! 2010 seems like a few years ago
  18. Edit: I think I have it figured out. This might have been my fault; I forgot to shut down CP, so I think the issue was due to that. But for some reason Radarr isn't getting any metadata. Guess I'll work on that one next. Anyone run into that?
Edit 2: Looks like Radarr can't do metadata yet?
Just wanted to update regarding the partial file. This only happens with Radarr; Sonarr and CouchPotato don't leave any additional files. I've tested with multiple movies and shows. Is anyone else experiencing something similar?
Edit: I'm also getting the following error: "Import failed, path does not exist or is not accessible by Radarr:" My Docker paths to my download folder are identical for nzbget and Radarr.
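For reference, this is the kind of mapping I mean; both containers have to see the download folder at the same container-side path or imports fail. A stripped-down sketch (host paths are placeholders, and the real run commands need ports, PUID/PGID, etc.):
# nzbget writes finished downloads to /downloads inside its container...
docker run -d --name=nzbget -v /mnt/user/downloads:/downloads linuxserver/nzbget
# ...and radarr must map the same host folder to the same container path,
# otherwise it reports "path does not exist or is not accessible by Radarr"
docker run -d --name=radarr -v /mnt/user/downloads:/downloads -v /mnt/user/movies:/movies linuxserver/radarr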
  19. Hi guys, I have the container set up and working, but I noticed that once it downloads a movie, alongside the movie there is a file with the same name and size that has a .partial~ extension. So it looks like:
movie.mkv
movie.mkv.partial~
Does anyone know why that file is there? Thanks.
  20. Hi Guys, I have everything running and working, but does anyone know how often the database syncs? Example: a show downloads, but it does not immediately show up in Kodi, even if I re-sync the plugin in Kodi. I need to go to the Emby server and refresh the show; then it shows up immediately in Kodi. Is there a trick or a setting to have it show up right away? Thanks.
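In the meantime I could probably script the refresh; as far as I can tell Emby exposes a library-scan endpoint over its REST API, though double-check that against the API docs. The server address and API key below are placeholders:
# trigger a full library scan on the Emby server after a download completes
curl -X POST "http://EMBY_SERVER_IP:8096/Library/Refresh?api_key=YOUR_API_KEY"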
  21. Just updated to 6.3.2 and ran into no issues. Win10 VM came up, all docker containers came up. No issues that I can see. I will update if anything goes wrong.