Jonathan

Members
  • Posts

    9
  1. I've changed that setting (as well as disabling "Host access to custom networks", since the help text explicitly mentions macvlan and advises against enabling it unless you're sure, and I don't recall having a good reason to enable it). The setting has been on for months (if not a year or two) and I'm not sure why it's only causing issues now. Having said that, I have seen some strange things in the past months, just not as frequently. The server has been rebooted and everything is running. I'll report back if this doesn't solve the problem (see the syslog check sketched after this post list). Thanks very much for the quick response!
  2. Hey! I'm reaching out because I've recently started experiencing system freezes that I can't attribute to a specific cause. In general, I first notice that DNS is no longer working on the network (I'm using an AdGuard container on the unRAID host), which prompts me to investigate. I'll then find the machine unresponsive (connection refused or timeout on the web interface and SSH, no disk activity lights, and plugging an HDMI cable into a display gives no output) but with the power indicator still on. The only way I've found to recover from it is a hardware reset, after which the machine runs fine again (but requires a parity check). This happens once or twice a week now.

     Today the same symptoms occurred (I don't know if the cause is the same) after manually triggering a short "Fix Common Problems" scan. This is the first time I've seen that trigger the symptoms, which is why I initially wanted to post this question in General Support.

     In the attachment you'll find a recently generated diagnostics file. I've read that this may not be entirely useful, since for example system logs are not preserved across reboots. Because of this I've set up persistent logging using the syslog server, and I also have a copy of the system log since November 1, which should include entries from before the freezes (see the log-filtering sketch after this post list). Since this is not anonymised, I'd prefer to send it directly to somebody who is willing to help me out instead of posting it here publicly.

     I did a bit of digging and found some threads that ended up with something along the lines of "the hardware is old and starting to be unreliable". I would not be surprised if that's it (the oldest components, incl. CPU, RAM and motherboard, are 8 years old; the HDDs are new), but before I look into replacement hardware I'd like to confirm there's nothing else going on. If there's more information I can provide, please let me know. Thank you in advance for any help, it's much appreciated! rannoch-diagnostics-20231114-1154.zip
  3. The rebuild has started, and everything else in Unraid is recovering as well. I think the problem has been solved; I'll confirm once the rebuild has completed. Thanks!
  4. The repair has run and logged the output below. I can try to restart the array without the disk to see if I can access my data, but I would like to verify that this is the right and safe next step (see the repair-command sketch after this post list).

     Phase 1 - find and verify superblock...
     Phase 2 - using internal log
     - zero log...
     ALERT: The filesystem has valuable metadata changes in a log which is being destroyed because the -L option was used.
     - scan filesystem freespace and inode maps...
     clearing needsrepair flag and regenerating metadata
     sb_fdblocks 1990335180, counted 1994233943
     - found root inode chunk
     Phase 3 - for each AG...
     - scan and clear agi unlinked lists...
     - process known inodes and perform inode discovery...
     - agno = 0 - agno = 1 - agno = 2 - agno = 3 - agno = 4 - agno = 5 - agno = 6 - agno = 7 - agno = 8 - agno = 9 - agno = 10 - agno = 11 - agno = 12 - agno = 13 - agno = 14
     inode 30363094451 - bad extent starting block number 4503567550841217, offset 0
     correcting nextents for inode 30363094451
     bad data fork in inode 30363094451
     cleared inode 30363094451
     - agno = 15 - agno = 16 - agno = 17 - agno = 18 - agno = 19 - agno = 20 - agno = 21
     - process newly discovered inodes...
     Phase 4 - check for duplicate blocks...
     - setting up duplicate extent list...
     - check for inodes claiming duplicate blocks...
     - agno = 0 - agno = 1 - agno = 2 - agno = 3 - agno = 4 - agno = 5 - agno = 6 - agno = 7 - agno = 8 - agno = 9 - agno = 10 - agno = 11 - agno = 12 - agno = 13 - agno = 14 - agno = 15 - agno = 16
     entry "IMG-20190822-WA0053.jpg" at block 39 offset 3464 in directory inode 30362347891 references free inode 30363094451
     clearing inode number in entry at offset 3464...
     - agno = 17 - agno = 18 - agno = 19 - agno = 20 - agno = 21
     Phase 5 - rebuild AG headers and trees...
     - reset superblock...
     Phase 6 - check inode connectivity...
     - resetting contents of realtime bitmap and summary inodes
     - traversing filesystem ...
     rebuilding directory inode 30362347891
     - traversal finished ...
     - moving disconnected inodes to lost+found ...
     Phase 7 - verify and correct link counts...
     Maximum metadata LSN (6:957662) is ahead of log (1:2).
     Format log to cycle 9.
     done
  5. I've started the repair and will update this thread once it's done! Thanks.
  6. I was able to stop the array by performing a clean restart via the UI. I've started the array in maintenance mode; the physical disk still shows up under unassigned. I then performed a filesystem check on the (emulated) data disk (disk 1), with the output below (see also the check-command sketch after this post list).

     Phase 1 - find and verify superblock...
     Phase 2 - using internal log
     - zero log...
     ALERT: The filesystem has valuable metadata changes in a log which is being ignored because the -n option was used. Expect spurious inconsistencies which may be resolved by first mounting the filesystem to replay the log.
     - scan filesystem freespace and inode maps...
     sb_fdblocks 1990335180, counted 1994233943
     - found root inode chunk
     Phase 3 - for each AG...
     - scan (but don't clear) agi unlinked lists...
     - process known inodes and perform inode discovery...
     - agno = 0 - agno = 1 - agno = 2 - agno = 3 - agno = 4 - agno = 5 - agno = 6 - agno = 7 - agno = 8 - agno = 9 - agno = 10 - agno = 11 - agno = 12 - agno = 13 - agno = 14
     inode 30363094451 - bad extent starting block number 4503567550841217, offset 0
     correcting nextents for inode 30363094451
     bad data fork in inode 30363094451
     would have cleared inode 30363094451
     - agno = 15 - agno = 16 - agno = 17 - agno = 18 - agno = 19 - agno = 20 - agno = 21
     - process newly discovered inodes...
     Phase 4 - check for duplicate blocks...
     - setting up duplicate extent list...
     - check for inodes claiming duplicate blocks...
     - agno = 0 - agno = 1 - agno = 2 - agno = 3 - agno = 4 - agno = 5 - agno = 6 - agno = 7 - agno = 8 - agno = 9 - agno = 10 - agno = 11 - agno = 12 - agno = 13 - agno = 14 - agno = 15 - agno = 16 - agno = 17
     entry "IMG-20190822-WA0053.jpg" at block 39 offset 3464 in directory inode 30362347891 references free inode 30363094451
     would clear inode number in entry at offset 3464...
     inode 30363094451 - bad extent starting block number 4503567550841217, offset 0
     correcting nextents for inode 30363094451
     bad data fork in inode 30363094451
     would have cleared inode 30363094451
     - agno = 18 - agno = 19 - agno = 20 - agno = 21
     No modify flag set, skipping phase 5
     Phase 6 - check inode connectivity...
     - traversing filesystem ...
     entry "IMG-20190822-WA0053.jpg" in directory inode 30362347891 points to free inode 30363094451, would junk entry
     would rebuild directory inode 30362347891
     - traversal finished ...
     - moving disconnected inodes to lost+found ...
     Phase 7 - verify link counts...
     No modify flag set, skipping filesystem flush and exiting.
  7. At this moment I can't seem to do that, as the array is still in "stopping" mode. I can't start it in maintenance mode.
  8. I've just noticed that I can run the filesystem check on the disk while it's unmounted (I'm not sure whether that's due to a plug-in or native Unraid functionality). Here's the output of that check, which to my knowledge reflects the same state I described in my post above:

     FS: xfs
     Executing file system check: /sbin/xfs_repair -n /dev/sdb1 2>&1
     Phase 1 - find and verify superblock...
     Phase 2 - using internal log
     - zero log...
     ALERT: The filesystem has valuable metadata changes in a log which is being ignored because the -n option was used. Expect spurious inconsistencies which may be resolved by first mounting the filesystem to replay the log.
     - scan filesystem freespace and inode maps...
     sb_fdblocks 1990335180, counted 1994233943
     - found root inode chunk
     Phase 3 - for each AG...
     - scan (but don't clear) agi unlinked lists...
     - process known inodes and perform inode discovery...
     - agno = 0 - agno = 1 - agno = 2 - agno = 3 - agno = 4 - agno = 5 - agno = 6 - agno = 7 - agno = 8 - agno = 9 - agno = 10 - agno = 11 - agno = 12 - agno = 13 - agno = 14
     inode 30363094451 - bad extent starting block number 4503567550841217, offset 0
     correcting nextents for inode 30363094451
     bad data fork in inode 30363094451
     would have cleared inode 30363094451
     - agno = 15 - agno = 16 - agno = 17 - agno = 18 - agno = 19 - agno = 20 - agno = 21
     - process newly discovered inodes...
     Phase 4 - check for duplicate blocks...
     - setting up duplicate extent list...
     - check for inodes claiming duplicate blocks...
     - agno = 0 - agno = 1 - agno = 3 - agno = 2 - agno = 4 - agno = 5 - agno = 6 - agno = 7 - agno = 8 - agno = 9 - agno = 10 - agno = 11 - agno = 12 - agno = 13 - agno = 14 - agno = 15 - agno = 16 - agno = 17
     entry "IMG-20190822-WA0053.jpg" at block 39 offset 3464 in directory inode 30362347891 references free inode 30363094451
     would clear inode number in entry at offset 3464...
     inode 30363094451 - bad extent starting block number 4503567550841217, offset 0
     correcting nextents for inode 30363094451
     bad data fork in inode 30363094451
     would have cleared inode 30363094451
     - agno = 18 - agno = 19 - agno = 20 - agno = 21
     No modify flag set, skipping phase 5
     Phase 6 - check inode connectivity...
     - traversing filesystem ...
     entry "IMG-20190822-WA0053.jpg" in directory inode 30362347891 points to free inode 30363094451, would junk entry
     would rebuild directory inode 30362347891
     - traversal finished ...
     - moving disconnected inodes to lost+found ...
     Phase 7 - verify link counts...
     No modify flag set, skipping filesystem flush and exiting.
     File system corruption detected!

     If the problem with the data on the disk is isolated to the one particular file mentioned, that file is not that important. If losing it is the safest way to rebuild the array and get parity working again, that's okay (a sketch of a possible next repair step follows this post list).
  9. Hey! I would like to reach out to get some help with my Unraid set-up (6.12.3). Until now I have been able to get by with reading others' posts, but at this point I'm afraid of performing actions that will result in losing data, which is why I'm posting.

     I just came back from a vacation to find my Unraid shares unavailable. The array was running, but the "space free" indicator was at 100% and none of my array shares showed up. The pool shares (SSD) appear to be fine, and my Docker containers (appdata is on the pool shares) were fine. I think these problems may have come from an unclean shutdown, as there was one during my time away. What I noticed (before I started troubleshooting) was that on the main page both HDDs (one parity, one data) in the array showed a green circle, but the data drive showed (and still shows) "Unmountable: Unsupported or no file system".

     I started searching and arrived at this forum post that seemed to point at a similar issue. Per that forum post I stopped the array, started it in maintenance mode and ran the disk check on the data disk. I unfortunately don't have the full log anymore, but it was mostly similar to what was in the linked thread, except that it mentioned a file or two that were corrupt/missing (I don't remember the exact terminology).

     At this point my thought was to stop the array again and start it with the disk missing, so that it would be emulated from the parity disk. This would allow me to see if my data is still there and to rebuild the data disk from parity (similar to what is described here). When I did so, it showed the data disk as "uninstalled, contents emulated", as expected. However, I'm still not seeing any of my array shares. This is when I started getting worried.

     As preparation for starting a forum post asking for help, I wanted to stop the array to prevent any further changes. However, while writing this post the array is still being stopped, hanging on "Retry unmounting disk share(s)...". The current state is that none of my array shares show up and the array is in a "stopping" (not "stopped") state, as mentioned above. I'll attach the diagnostics to this post as well.

     What are the appropriate steps to attempt re-mounting the disk and/or rebuilding the array from the parity disk? Thanks in advance for any help on the topic! rannoch-diagnostics-20230821-1051.zip
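
Regarding the macvlan setting change in post 1: one minimal way to watch for the macvlan-related kernel call traces that often accompany this kind of freeze is to grep the live system log. This is only a sketch; the search terms are assumptions and not quoted from any log in this thread.

     # Search the current system log for macvlan-related kernel traces
     # (keywords are assumptions; adjust as needed).
     grep -iE 'macvlan|call trace' /var/log/syslog
     # Or follow the log live while reproducing the problem:
     tail -f /var/log/syslog | grep -iE 'macvlan|call trace'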
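For the persistent syslog mentioned in post 2: once the log survives a reboot, the pre-freeze entries can be narrowed down before sharing them privately. A rough sketch, assuming the saved copy is a plain text file; the filename and keyword list below are placeholders, not names from the actual setup.

     # Pull the most recent likely-relevant lines from the saved syslog copy
     # (filename and keywords are hypothetical).
     grep -iE 'error|warn|panic|call trace|oom' syslog-since-2023-11-01.log | tail -n 200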
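The output in post 4 is what xfs_repair prints when run with -L (zero the metadata log). In Unraid this is normally done from the GUI with the array in maintenance mode, adding -L to the check options; a roughly equivalent command line is sketched below. The device name is an assumption based on "disk 1" on Unraid 6.12, and -L discards pending metadata changes, so it is a last resort when a plain repair refuses to proceed.

     # Maintenance mode, array device for disk 1 (device name is an assumption).
     # Repairing through the md device keeps parity in sync; -L zeroes the metadata log.
     xfs_repair -L /dev/md1p1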
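The check in post 6 is the read-only variant of the same tool: with -n (no modify) every fix is only reported, which is why the log says "would have cleared" and "would rebuild". A sketch of the equivalent command, again with the device name assumed rather than taken from the diagnostics:

     # Read-only check of the emulated disk 1 (device name is an assumption);
     # -n reports problems without changing anything on disk.
     xfs_repair -n /dev/md1p1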
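Post 8 runs the same read-only check directly against the unassigned physical disk (/dev/sdb1). An actual repair would drop the -n flag; note that writing to the raw sdX partition bypasses parity, so for a disk that is still part of the array the md device is the one to repair. This is only a sketch of the general approach under those assumptions, not advice specific to this disk.

     # Actual repair (no -n). Only use the raw partition when the disk is outside
     # the array; otherwise repair via the md device so parity stays valid.
     xfs_repair /dev/sdb1
     # If xfs_repair refuses to run because of a dirty log that cannot be replayed,
     # -L zeroes the log at the cost of recent metadata changes.
     xfs_repair -L /dev/sdb1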