talmania

Everything posted by talmania

  1. EDIT: Sometimes typing something out can REALLY help. No more than 10 minutes after publishing this post, these lines jumped out at me:

         #auth_basic "Restricted";
         #auth_basic_user_file /config/nginx/.htpasswd;

     I commented out those extra authentication steps and IT WORKS! I guess I was close and not as stupid as I thought. I'd still love to hear best practices moving forward: is this approach in site-confs/default preferable to proxy-confs/app.subdomain.conf, and what is the distinction? Leaving this here in case it helps someone down the road.

     I've been running letsencrypt perfectly with Nextcloud for years now thanks to this awesome community. I've recently decided to try more dockers than just Plex & Nextcloud (specifically Bitwarden, though my questions are primarily about letsencrypt), and I'm trying to learn how to publish more than one docker. I've jumped back to square one and am relearning the configs (watching spaceinvader videos and reading articles at linuxserver.io), and I feel like I'm SUPER close but missing something subtle somewhere (or maybe I just think I'm closer than I actually am). I've got DNS working for a custom domain for Bitwarden (it was already working for Nextcloud), created the new docker network, and moved the containers over to that network for communication (as I understand is necessary). I then passed the Bitwarden subdomain to letsencrypt successfully after working with nginx/proxy-confs and creating a bitwarden.subdomain.conf file, but then I realized there's a default in nginx/site-confs that looks like the following, and when I would hit bw.mydomain.com externally I'd land on the Nextcloud landing page:

         server {
             listen 80;
             listen 443 ssl;
             root /config/www;
             index index.html index.htm index.php;
             server_name nextcloud.mydomain.com;

             ###SSL Certificates
             ssl_certificate /config/keys/letsencrypt/fullchain.pem;
             ssl_certificate_key /config/keys/letsencrypt/privkey.pem;

             ###Diffie–Hellman key exchange ###
             ssl_dhparam /config/nginx/dhparams.pem;
             ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';

             ###Extra Settings###
             ssl_prefer_server_ciphers on;
             ssl_session_cache shared:SSL:10m;

             ### Add HTTP Strict Transport Security ###
             add_header Strict-Transport-Security "max-age=63072000; includeSubdomains";
             add_header Front-End-Https on;

             client_max_body_size 0;

             location / {
                 proxy_pass https://172.23.1.31:444/;
             }
         }

     So I tried several things from that point: 1) added an include statement in site-confs/default referencing the proxy-confs (with the *.subdomain.conf file I created), which ended up producing errors in the letsencrypt log on startup.
     2) Ignored the proxy-confs altogether and added the following to my site-confs/default after the above code:

         server {
             listen 80;
             listen 443 ssl;
             server_name bw.mydomain.com;

             include /config/nginx/ssl.conf;

             client_max_body_size 0;

             location / {
                 auth_basic "Restricted";
                 auth_basic_user_file /config/nginx/.htpasswd;
                 include /config/nginx/proxy.conf;
                 resolver 127.0.0.11 valid=30s;
                 set $upstream_Bitwarden Bitwarden;
                 proxy_pass http://172.23.1.31:8343;
             }
         }

     With this I no longer land on the Nextcloud landing page, but I get a generic prompt for a username and password and get no further (I can access the docker internally with no problem). I've tweaked more than a few settings, so I'm curious what best practice is: use the proxy-confs, or use site-confs to address the various services? In my research I've seen it done both ways... thanks in advance for any and all advice/help!
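     Since the question keeps coming up, here is a minimal sketch of the proxy-confs approach for comparison: one bitwarden.subdomain.conf-style server block, assuming "bw" is the subdomain on the certificate, the container is literally named bitwarden, it sits on the same custom docker network as the letsencrypt container, and it serves plain HTTP on port 80 inside that network. The name and port are placeholders, not my actual setup:

         server {
             listen 443 ssl;
             server_name bw.*;

             include /config/nginx/ssl.conf;
             client_max_body_size 0;

             location / {
                 # Bitwarden handles its own login, so the extra basic auth stays commented out
                 #auth_basic "Restricted";
                 #auth_basic_user_file /config/nginx/.htpasswd;

                 include /config/nginx/proxy.conf;
                 # Docker's embedded DNS resolves the container name on the custom network
                 resolver 127.0.0.11 valid=30s;
                 set $upstream_bitwarden bitwarden;
                 proxy_pass http://$upstream_bitwarden:80;
             }
         }

     The appeal of this layout is that each app gets its own server block keyed on its subdomain under proxy-confs, so site-confs/default only has to serve the default site instead of accumulating a block per service.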
  2. Actually, Windows gave me a permissions error, and when I started poking around I realized I was under disk9/lost+found and not \\tower\lost+found. I'm assuming all I have to do is move the files from \\tower\lost+found to their respective \\tower\sharename and I'll be set, no? Or is a more complex move needed? User share to user share, if I'm not mistaken...
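     For anyone who lands here later: the move can also be done from the server console instead of over SMB. A minimal sketch, assuming the recovered items sit on disk9 and the destination share already exists on that same disk (the folder and share names below are placeholders) -- keeping source and destination on the same disk makes the move a rename instead of a copy:

         # run on the unRAID console; same-disk move, so no data is actually copied
         mv /mnt/disk9/lost+found/SomeFolder /mnt/disk9/SomeShare/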
  3. And I suck... I was trying to access it via disk9 and NOT the user share lost+found. Buried under those original directories are my ACTUAL directories and files! I tried a couple and they seem to work perfectly. I think we're done here... time to check and move!
  4. And more diagnostics too.... deed-diagnostics-20170802-1745.zip
  5. Ok, I ran it and it completed... then I had to stop the array and restart it. That allowed disk9 to be mounted, and I can now see the disk share, but nothing is there except "lost+found". I can't browse the share with Windows but can under the GUI. There are tons of folders in there, and files of roughly the correct sizes, I presume. I assume I have to open them up to see what they are? They're named with simple numeric sequences--see attached picture. Attached is the output of the --rebuild-tree. rebuild-tree.txt
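     A quick sketch of how to peek at those numerically named entries from the console, assuming the rebuilt drive is disk9 (the paths are illustrative):

         # list what --rebuild-tree dropped into lost+found and let file(1) guess the types
         ls /mnt/disk9/lost+found | head
         file /mnt/disk9/lost+found/* | head -n 20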
  6. Should I be concerned that it states "unmountable"? It appears to be rebuilding, but I've never been here before... thanks!
  7. Evening update: I came back home from the office and found the following in the summary of the log: Then I stopped the array, assigned the new disk9, and brought the array online, and now I see the following:
  8. Looking further ahead--when this process completes do I simply complete the remaining steps in the original directive johnnie?
  9. It is now---I can see the counters incrementing: 2 (of 19) / 52 (of 92) / 107 (of 170), etc...
  10. Ok thanks johnnie--I'm following the wiki and crossing my fingers. Didn't change the array status at all--hope that was correct. Current status:

         root@Deed:~# reiserfsck --rebuild-sb /dev/md9
         reiserfsck 3.6.24

         Will check superblock and rebuild it if needed
         Will put log info to 'stdout'

         Do you want to run this program?[N/Yes] (note need to type Yes if you do):Yes
         Did you use resizer(y/n)[n]: n
         rebuild-sb: wrong block count occured (854657433), fixed (488378624)
         rebuild-sb: wrong bitmap number occured (26083), fixed (14905)
         rebuild-sb: wrong free block count occured (791198636), zeroed
         Reiserfs super block in block 16 on 0x909 of format 3.6 with standard journal
         Count of blocks on the device: 488378624
         Number of bitmaps: 14905
         Blocksize: 4096
         Free blocks (count of blocks - used [journal, bitmaps, data, reserved] blocks): 0
         Root block: 130410427
         Filesystem is clean
         Tree height: 5
         Hash function used to sort names: "r5"
         Objectid map size 8, max 972
         Journal parameters:
                 Device [0x0]
                 Magic [0x63a76705]
                 Size 8193 blocks (including 1 for journal header) (first block 18)
                 Max transaction length 1024 blocks
                 Max batch size 900 blocks
                 Max commit age 30
         Blocks reserved by journal: 0
         Fs state field: 0x1:
                 some corruptions exist.
         sb_version: 2
         inode generation number: 2232
         UUID: a150194f-d4db-4f12-8220-a6300f0b8386
         LABEL:
         Set flags in SB:
                 ATTRIBUTES CLEAN
         Mount count: 205
         Maximum mount count: 30
         Last fsck run: Fri Nov 5 23:24:38 2010
         Check interval in days: 180
         Is this ok ? (y/n)[n]: y
         The fs may still be unconsistent. Run reiserfsck --check.

         root@Deed:~# reiserfsck --check
         Usage: reiserfsck [mode] [options] device
         Modes:
           --check                 consistency checking (default)
           --fix-fixable           fix corruptions which can be fixed without --rebuild-tree
           --rebuild-sb            super block checking and rebuilding if needed
                                   (may require --rebuild-tree afterwards)
           --rebuild-tree          force fsck to rebuild filesystem from scratch
                                   (takes a long time)
           --clean-attributes      clean garbage in reserved fields in StatDatas
         Options:
           -j | --journal device   specify journal if relocated
           -B | --badblocks file   file with list of all bad blocks on the fs
           -l | --logfile file     make fsck to complain to specifed file
           -n | --nolog            make fsck to not complain
           -z | --adjust-size      fix file sizes to real size
           -q | --quiet            no speed info
           -y | --yes              no confirmations
           -f | --force            force checking even if the file system is marked clean
           -V                      prints version and exits
           -a and -p               some light-weight auto checks for bootup
           -r                      ignored
         Expert options:
           --no-journal-available  do not open nor replay journal
           -S | --scan-whole-partition  build tree of all blocks of the device

         root@Deed:~# reiserfsck --check /dev/md9
         reiserfsck 3.6.24

         Will read-only check consistency of the filesystem on /dev/md9
         Will put log info to 'stdout'

         Do you want to run this program?[N/Yes] (note need to type Yes if you do):Yes
         ###########
         reiserfsck --check started at Tue Aug 1 14:18:50 2017
         ###########
         Replaying journal: Done.
         Reiserfs journal '/dev/md9' in blocks [18..8211]: 0 transactions replayed

     EDIT: It's continuing to run--worried that it hung up there on "0 transactions replayed".
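     For anyone following the same wiki steps: the usual sequence from here, sketched below, assumes disk9 still maps to /dev/md9 and the array is started in maintenance mode; which of the later commands you actually need depends on what --check reports (these are the same modes listed in the usage text above):

         # read-only consistency check first (the device argument is required)
         reiserfsck --check /dev/md9

         # if --check says minor corruptions are fixable, fix them in place...
         reiserfsck --fix-fixable /dev/md9

         # ...or, if it says the tree is damaged, rebuild it (slow; recovered
         # odds and ends end up in lost+found on that disk)
         reiserfsck --rebuild-tree /dev/md9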
  11. Ok, I get to the console and see the attached picture1--this came up after one of the initial steps, I believe. So I PuTTY'd in, was able to log in that way, and got the output shown in picture2. Edit: I should add that the console in that top picture didn't respond to anything--I could not get a login prompt, etc.
  12. Attached--thanks again! deed-diagnostics-20170801-1354.zip
  13. Ok, I'm back--I got a new 2TB Red drive and performed the steps quoted above: I stopped the array and unassigned disk9, then started the array with only the "add a drive as soon as possible" option checked--not the one that starts the array without mounting the disks. Disk9 now shows as "unmountable", and I can browse the server and see the other shares INCLUDING the disk shares, but there is no disk share for "disk9". Should I still proceed? Thanks!
  14. Ok thanks--it looks to me like the difference is in utilizing the old/current 2TB drive in this process. It's still being recognized (at least it was at last boot)--in your professional opinion, is the new 2TB disk9 option the safest route to potentially salvaging my data?
  15. Thanks Johnnie--if the 2TB gives me better odds then I'll pick a new one up. Will do so, follow the above steps and report back.
  16. I probably do, actually--it wouldn't be precleared and would be a previous pull from the same array. I also have 5 other brand new 8TB Reds waiting for install--disk9 was on the list for replacement, as it's one of the oldest in the array. Not opposed to going out and buying a 2TB if that's required.
  17. Yep, completely--a parity check was run and completed before the swap for disk13, which I still have available. Is the best course of action to stop the rebuild, shut down, replace disk13 with the old one, then unassign disk9 and replace it instead? I feel like that's the correct move, but I don't know if unRAID will allow the old disk13 back in its place...
  18. And now disk9 is throwing errors in the GUI, and I can no longer read SMART data.
  19. Gotcha--here you go! I noticed disk9 shows no errors now, and I can actually see the SMART report (it shows 7 current pending sectors). Thanks so much for your help! It's incredibly appreciated! Edit: And the parity rebuild started on disk13 on boot. deed-diagnostics-20170801-0955.zip
  20. Thanks johnnie! I've stopped the rebuild and am trying to take the array offline, and I'm getting a repeated "REISERFS error (device md9): reiserfs_read_locked_inode: I/O failure occurred trying to find stat data of [4 1463 0x0 SD]" on the console of the server. The GUI just says "unmounting disk shares... retry unmounting disk shares". Hard power off, I'm assuming?
  21. Thanks--can't believe I forgot them in the initial post! Thanks again Frank--hopefully someone can provide some insight into the appropriate direction to take next. Thanks again! deed-diagnostics-20170801-0928.zip
  22. Possible but unlikely--they are hot swaps and the drive was present with no issues when I started the rebuild and monitored it for the first 20 percent or so. Thanks Frank!
  23. Thanks for the feedback--yeah, I just had a eureka moment, I hope! I do have the old drive 13, and it is in perfect condition (besides being 5+ years old). Can I cancel the rebuild, go backwards, and reassign the original drive 13 to its previous place?