mehappy

Everything posted by mehappy

  1. I followed this and now my log memory constantly shows 100%. Is this something to be concerned about? EDIT: This is unrelated; it looks like nginx is running away with the logs. I found another (unsolved) topic about this issue: https://forums.unraid.net/topic/86114-nginx-running-out-of-shared-memory/
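     For anyone hitting the same symptom, here is how I'd confirm what is filling the log space (an untested sketch; it assumes Unraid's stock layout, where /var/log is a small tmpfs and nginx writes to /var/log/nginx/):

       # How full is the log filesystem?
       df -h /var/log
       # Which files under it are the biggest offenders?
       du -ah /var/log | sort -h | tail
       # If nginx's error log is the runaway file, truncating it reclaims the space
       : > /var/log/nginx/error.log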
  2. Won't writing the partitions using gdisk screw up parity? Edit: is this the process to follow?
       1. Stop the array
       2. Run gdisk
       3. Assign the drive back
       4. Start the array
       5. Maybe everything is fine?
       6. If not, how do I get Unraid to mount the disk?
  3. I ran the check via the GUI with options -nv:

       Phase 1 - find and verify superblock...
               - block cache size set to 679024 entries
       Phase 2 - using internal log
               - zero log...
       zero_log: head block 19606 tail block 19606
               - scan filesystem freespace and inode maps...
               - found root inode chunk
       Phase 3 - for each AG...
               - scan (but don't clear) agi unlinked lists...
               - process known inodes and perform inode discovery...
               - agno = 0
               - agno = 1
               - agno = 2
               - agno = 3
               - agno = 4
               - agno = 5
               - agno = 6
               - agno = 7
               - agno = 8
               - agno = 9
               - agno = 10
               - process newly discovered inodes...
       Phase 4 - check for duplicate blocks...
               - setting up duplicate extent list...
               - check for inodes claiming duplicate blocks...
               - agno = 0
               - agno = 2
               - agno = 7
               - agno = 3
               - agno = 4
               - agno = 6
               - agno = 1
               - agno = 5
               - agno = 8
               - agno = 9
               - agno = 10
       No modify flag set, skipping phase 5
       Phase 6 - check inode connectivity...
               - traversing filesystem ...
               - agno = 0
               - agno = 1
               - agno = 2
               - agno = 3
               - agno = 4
               - agno = 5
               - agno = 6
               - agno = 7
               - agno = 8
               - agno = 9
               - agno = 10
               - traversal finished ...
               - moving disconnected inodes to lost+found ...
       Phase 7 - verify link counts...
       No modify flag set, skipping filesystem flush and exiting.

       XFS_REPAIR Summary    Fri Aug 20 19:42:59 2021

       Phase           Start           End             Duration
       Phase 1:        08/20 19:42:58  08/20 19:42:58
       Phase 2:        08/20 19:42:58  08/20 19:42:58
       Phase 3:        08/20 19:42:58  08/20 19:42:59  1 second
       Phase 4:        08/20 19:42:59  08/20 19:42:59
       Phase 5:        Skipped
       Phase 6:        08/20 19:42:59  08/20 19:42:59
       Phase 7:        08/20 19:42:59  08/20 19:42:59

       Total run time: 1 second
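     Since -n was set, this was check-only and nothing was changed on disk. For reference, the GUI check is roughly equivalent to running xfs_repair by hand (a sketch; it assumes disk4 maps to /dev/md4 and the array is started in maintenance mode):

       # Read-only, verbose XFS check: -n reports problems but modifies nothing
       xfs_repair -nv /dev/md4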
  4. Update, I ran gdisk on this as a hunch and here are the results.

       root@Tower:~# gdisk /dev/sdc
       GPT fdisk (gdisk) version 1.0.4

       Caution: invalid main GPT header, but valid backup; regenerating main header
       from backup!

       Warning: Invalid CRC on main header data; loaded backup partition table.
       Warning! One or more CRCs don't match. You should repair the disk!

       Main header: ERROR
       Backup header: OK
       Main partition table: OK
       Backup partition table: OK

       Partition table scan:
         MBR: protective
         BSD: not present
         APM: not present
         GPT: damaged

       ****************************************************************************
       Caution: Found protective or hybrid MBR and corrupt GPT. Using GPT, but disk
       verification and recovery are STRONGLY recommended.
       ****************************************************************************

       Command (? for help): q

     Is it a good idea to proceed to write the table to the disk? I'm not super familiar with gdisk, but saw this as a fix for someone with the same issue here:
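     For anyone finding this later: gdisk has already regenerated the main GPT header from the valid backup in memory, so the repair it is hinting at is simply writing that table back with the w command. A minimal sketch of the session (assuming /dev/sdc is still the affected disk; confirm the device letter first, since it can change between boots):

       root@Tower:~# gdisk /dev/sdc
       ... same warnings as above ...
       Command (? for help): w
       Do you want to proceed? (Y/N): y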
  5. SOLVED: Ran gdisk on the drive in question and everything works again. I upgraded from 6.8.3 to 6.9.2 and noticed my disk4 is giving the message "Unmountable: Unsupported partition layout". I haven't made any hardware changes recently. I've tried the following to no avail:
       • rebooting
       • reseating the power and SATA cables on the drive
       • restoring the 6.8.3 backup
     Does anyone know how to fix this, or is the best option going to be to try to format & rebuild from parity? I've attached diagnostics. tower-diagnostics-20210820-1619.zip
  6. @binhex According to the scripts included in PIA's documentation on manually getting a port, tokens should be generated by sending the POST request to https://privateinternetaccess.com/gtoken/generateToken rather than to 10.0.0.1. Would updating the arch-int-vpn layer to use that URL fix the issue people have been having today where certain PIA endpoints are not generating tokens? Unfortunately I don't know enough about Docker to figure out how to hijack & modify the VPN layer and test it myself.
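     For reference, a sketch of the token request as I read PIA's manual-connections scripts (hedged: PIA_USER and PIA_PASS are placeholder credentials, the credentials go in via HTTP basic auth, and jq is only there to pull the token out of the JSON response):

       curl -s -u "PIA_USER:PIA_PASS" \
         "https://privateinternetaccess.com/gtoken/generateToken" | jq -r '.token'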