mkono87

Members
  • Posts

    150
  • Joined

  • Last visited


mkono87's Achievements

Apprentice (3/14)

3

Reputation

  1. I'm having a hell of a time getting SSH to work with Gitea. I'm using a custom network and have the SSH port mapped to 25 on the host. I have set the port to 25 in app.ini and restarted. When trying to clone a repo, it just hangs at cloning the repo. I have already added my key to Gitea. I've sketched the relevant app.ini settings below in case that helps.
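     Here's roughly what I mean in app.ini; the domain is just a placeholder, and I'm assuming the container's sshd still listens on 22 internally while Docker maps host port 25 to it:
     [server]
     SSH_DOMAIN = git.example.com
     ; port advertised in the SSH clone URLs (the host-side port)
     SSH_PORT = 25
     ; port the SSH server actually listens on inside the container
     SSH_LISTEN_PORT = 22
     The clone I'm testing with looks something like: git clone ssh://git@git.example.com:25/myuser/myrepo.git (user and repo names are placeholders).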
  2. Just an update: I moved all my appdata back to /mnt/user, wiped the other drive, and reformatted. It's been about 2 weeks of uptime since then. I guess it's just an old, failing SSD on its way out. I'll be testing it in another system later.
  3. I recently moved and am using another subnet. Today I noticed with Organizr that when I sign in using Plex oAuth, the login is successful but I then get an "API connection failed" message and it won't let me in. Plex is the only means I have to sign in atm. My nginx error log shows this. Any idea how I can fix this?
     2021/09/27 09:12:08 [error] 381#381: *3408 upstream timed out (110: Operation timed out) while reading response header from upstream, client: 172.18.0.2, server: _, request: "POST /api/v2/login HTTP/1.1", upstream: "fastcgi://unix:/var/run/php7-fpm.sock", host: "main.mydomain.com", referrer: "https://main.mydomain.com/"
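     If I'm reading the error right, nginx is timing out while waiting on PHP-FPM for that login call, so I assume the directive in play is fastcgi_read_timeout in my PHP location block, something like this (values are only an example, not my actual config):
     location ~ \.php$ {
         include fastcgi_params;
         fastcgi_pass unix:/var/run/php7-fpm.sock;
         # nginx default is 60s; raising it would only mask whatever is making /api/v2/login slow
         fastcgi_read_timeout 120s;
     }
     I'd rather figure out why the login call hangs in the first place, though.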
  4. That's exactly what I just did, as it was first complaining about an XML in dockerman being corrupted and then I couldn't start Docker again. I had appdata on a separate SSD (the XFS one with the corruption); I have moved it back to my cache drive (default Unraid setup). I'll see how that goes.
  5. Ran a few memtest passes with no errors, but my system has crashed again. Here are the new logs. @fmp4m Just wanted to loop you in, if that's OK. nas-diagnostics-20210919-1812.zip syslog.zip
  6. No, that was my next step, I think. How long do you think is sufficient? Overnight?
  7. It's crashed again. I will need to post another log when I get the chance. I'm totally lost at this point.
  8. It did crash again and I finally got around to running xfs_repair on my appdata drive. It seems to have fixed things correctly.
     root@NAS:~# xfs_repair /dev/sdb1
     Phase 1 - find and verify superblock...
     Phase 2 - using internal log
             - zero log...
             - scan filesystem freespace and inode maps...
             - found root inode chunk
     Phase 3 - for each AG...
             - scan and clear agi unlinked lists...
             - process known inodes and perform inode discovery...
             - agno = 0
             - agno = 1
     bad CRC for inode 244937761
     bad CRC for inode 244937761, will rewrite
     Bad atime nsec 2173239295 on inode 244937761, resetting to zero
     cleared inode 244937761
             - agno = 2
             - agno = 3
             - process newly discovered inodes...
     Phase 4 - check for duplicate blocks...
             - setting up duplicate extent list...
             - check for inodes claiming duplicate blocks...
             - agno = 2
             - agno = 3
             - agno = 1
             - agno = 0
     Phase 5 - rebuild AG headers and trees...
             - reset superblock...
     Phase 6 - check inode connectivity...
             - resetting contents of realtime bitmap and summary inodes
             - traversing filesystem ...
             - traversal finished ...
             - moving disconnected inodes to lost+found ...
     Phase 7 - verify and correct link counts...
     done
     No lost+found folder was created, so I guess at this point I'll reset the syslog and see what happens.
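     Before trusting it fully I might run a read-only check afterwards just to confirm it comes back clean; as I understand it, the -n flag only scans and reports without modifying anything:
     root@NAS:~# xfs_repair -n /dev/sdb1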
  9. I realized that my BIOS was way out of date. I updated it and it's been running for 3 days so far. I'm not out of the clear yet, but it's a good sign. My appdata drive appears to have an XFS corruption error, so I need to repair that at some point too.
  10. Will it probably throw the error notification again in the near future?
  11. I did receive an error notification for the Docker drive showing an uncorrected error count of 1. I did a SMART test after it finished the parity check (I know, not related). The SMART extended test completed successfully; is that not to be trusted in this case? Sent from my Mi 9T using Tapatalk
  12. Yes, this is what I have now. Just to let you know, my Docker containers and VMs are on their own separate drive. Yes, it's small, but I have VMs atm.
     NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
     loop0    7:0    0  13.5M  1 loop /lib/modules
     loop1    7:1    0 101.1M  1 loop /lib/firmware
     loop2    7:2    0    40G  0 loop /var/lib/docker/btrfs
                                      /var/lib/docker
     sda      8:0    1   7.3G  0 disk
     └─sda1   8:1    1   7.3G  0 part /boot
     sdc      8:32   0 223.6G  0 disk
     └─sdc1   8:33   0 223.6G  0 part /mnt/disks/docker-vm
     sdd      8:48   0   3.6T  0 disk
     └─sdd1   8:49   0   3.6T  0 part
     sde      8:64   0   3.6T  0 disk
     └─sde1   8:65   0   3.6T  0 part
     sdf      8:80   0 465.8G  0 disk
     └─sdf1   8:81   0 465.8G  0 part /mnt/cache
     sdg      8:96   0   3.6T  0 disk
     └─sdg1   8:97   0   3.6T  0 part
     sdh      8:112  0   3.6T  0 disk
     └─sdh1   8:113  0   3.6T  0 part
     md1      9:1    0   3.6T  0 md   /mnt/disk1
     md2      9:2    0   3.6T  0 md   /mnt/disk2
     md3      9:3    0   3.6T  0 md   /mnt/disk3
  13. Hmm, that's no good. I don't even see an sdb1 entry when using lsblk or ls /dev. I'm not sure what that would be.
  14. Okay, I have now updated to the latest BIOS and attached the latest logs from when it went down yesterday. Please note: these logs are from before I updated the BIOS. syslog.zip nas-diagnostics-20210908-1252.zip