
[Solved] Shares missing after restart


Drakkim


I restarted my server last night, and when I tried accessing it today I discovered that my shares are gone and the Docker engine can't start. I tried one more restart and nothing changed.

 

Edit: I dug into the logs after my first post and (I think) found the problem. syslog.txt line 1263 loads my hard drive, then dumps a bunch of kernel information that I don't understand and eventually comes back to "normal geek speak" at line 1361 with "mount: /mnt/disk1: mount(2) system call failed: Structure needs cleaning."

 

So, I ran xfs_repair, which gave me the following:

Quote

ERROR: The filesystem has valuable metadata changes in a log which needs to
be replayed.  Mount the filesystem to replay the log, and unmount it before
re-running xfs_repair.  If you are unable to mount the filesystem, then use
the -L option to destroy the log and attempt a repair.
Note that destroying the log may cause corruption -- please attempt a mount
of the filesystem before doing this.

 

Unless there's another mounting trick, I can't mount the drive, so... is -L my only option and how likely is it that I'm going to lose tons of data? I'm looking at a 10TB drive with 8TB used last I looked...

 

server-diagnostics-20191104-0126.zip

Edited by Drakkim
Updated title to [Solved]

You only have one data disk and no parity disk in the array, and data disk1 is unmountable. Main - Array Devices should show this clearly, but you didn't mention it. User shares cannot work with no disk mounted in the array.

 

Check filesystem on disk1:

 

https://wiki.unraid.net/Check_Disk_Filesystems#Drives_formatted_with_XFS

 

Be sure to capture the results so you can post them if needed.
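For example, from the console or SSH (assuming the array is started in maintenance mode and disk1 maps to /dev/md1; the log filename here is just an example), tee will both display the output and save a copy you can attach:

```shell
LOG=/tmp/xfs_repair_disk1.txt   # on the server, a /boot/... path would keep the copy on the flash drive
# 2>&1 folds any error text into the capture; tee still shows everything live
{ xfs_repair -n /dev/md1 2>&1 || true; } | tee "$LOG"
```

The -n flag runs in no-modify mode, so you can see what a real repair would do before committing to anything.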


Thanks for the super-quick reply. I actually edited my post with more info while you were approving/replying there, but you are right, disk 1 is not mountable and I didn't see that note on the main page. I ran xfs_repair -v as that article suggests and got this:

Quote

root@Server:/mnt# xfs_repair -v /dev/md1
Phase 1 - find and verify superblock...
        - block cache size set to 687864 entries
Phase 2 - using internal log
        - zero log...
zero_log: head block 1271027 tail block 1270589
ERROR: The filesystem has valuable metadata changes in a log which needs to
be replayed.  Mount the filesystem to replay the log, and unmount it before
re-running xfs_repair.  If you are unable to mount the filesystem, then use
the -L option to destroy the log and attempt a repair.
Note that destroying the log may cause corruption -- please attempt a mount
of the filesystem before doing this.

 ... which is a little more information than I got without the -v, but it still leaves the question: how likely is -L to kill all my data? Would I be much better served to shut the server off and wait until I can buy a new drive to attempt File Scavenger or the TestDisk live CD? Or would (likely) only recent changes be lost?

 

(And for clarity, the "still leaves the question" refers to part of the OP that wasn't there when Constructor replied)
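For what it's worth, the sequence I take from that error message is: try one more (read-only) mount to replay the log, and only fall back to -L if that still fails. A rough sketch of the decision (assuming disk1 is /dev/md1 with the array in maintenance mode; the mountpoint is arbitrary):

```shell
DEV=/dev/md1          # disk1's device while the array is in maintenance mode (assumed)
MNT=$(mktemp -d)      # throwaway mountpoint for the test mount
if mount -o ro "$DEV" "$MNT" 2>/dev/null; then
    # Mount worked: the log has been replayed. Unmount and repair normally.
    umount "$MNT"
    NEXT="xfs_repair $DEV"
else
    # Mount still fails: destroying the log is the remaining option.
    NEXT="xfs_repair -L $DEV"
fi
echo "next step: $NEXT"
```

From what I've read, -L zeroes the log, so only metadata changes that were still sitting in the log (i.e. the most recent writes) are at risk; older data is normally untouched, though orphaned files can end up in lost+found.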

