
Problems with disk 1


JustinChase


I've been cleaning up some stuff on my server, and today I see that all of my pictures are gone :(

 

I went into MC via PuTTY to see if they still showed on the disks there, but it's telling me that it's unable to read the directory of disk 1 (I hope this is where all my pictures are).
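
Outside MC, the same check straight from the shell would be something like this (standard unRAID mount points; just illustrating, the exact error text will vary):

# list the disk directly, bypassing the user shares
root@media:~# ls -la /mnt/disk1
# check the kernel log for XFS or I/O errors on that disk
root@media:~# dmesg | tail -20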

 

But nothing in unRAID shows any problems.  Disk 1 shows a green ball, and the SMART report indicator is also green.  No array errors have been reported to me, and all appears well, except that I can't get to or use disk 1 in my system.

 

I've already rebooted, but the issue persists.

 

Sadly, the cleanup I was doing was a precursor to updating my backups, so I'm not fully protected if I can't get this working again (or if the pics aren't on that disk).

 

Any help appreciated.

media-diagnostics-20180206-1246.zip

 

**EDIT to add more info**

 

I just checked in the unRAID GUI, and every other disk shows plenty of my picture directories and files, but they don't appear in the Windows share that points to the Photos share.

 

It seems like disk 1 being 'missing' is somehow preventing me from seeing the photos on the other disks too.

 

Should I revert from 6.4.1, which I just updated to the other day, back to 6.4?

 

I don't want to work on anything else until I get this working right.

 

Why isn't disk 1 throwing any errors in unRAID?  I expected to be warned about problems.


Thanks.  I'm working on that right now.  I checked and had Fill-up as my share allocation method, with 0KB as the minimum free space.  Obviously not a good choice; perhaps unRAID should enforce a usable minimum.  Either way, I've changed it to a 100MB minimum now, which should hopefully take care of this in the future.

 

But even so, why didn't unRAID handle filling the disk 'correctly' instead of stumbling when it got so full?  I had started an update process on my media server that would have rewritten lots of files with updated tag info.  I assume it tried to do this 'in place' and killed the disk most of the files were on.  I would have thought it would use the cache disk, or at least notice the disk was out of space and automatically move files to a disk with room.
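
For reference, a quick way to see how full each array disk actually is (standard unRAID mount points assumed):

# show usage and free space for every array disk
root@media:~# df -h /mnt/disk*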

 

It seems like an unRAID flaw that this could happen, but maybe it's just my own stupidity/ignorance.

 

Hopefully the xfs_repair works for me and I can rebalance the drives to free up some space and get working again.
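
If it does work, my rough plan for rebalancing is manual disk-to-disk moves, something like this (the paths are just an example from my setup):

# copy a directory from the full disk to one with room, then remove the original
root@media:~# rsync -av /mnt/disk1/Movies/ /mnt/disk2/Movies/ && rm -r /mnt/disk1/Movies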

 

Thanks again.


darn...

 

Phase 1 - find and verify superblock...
        - block cache size set to 2241400 entries
Phase 2 - using internal log
        - zero log...
zero_log: head block 271615 tail block 271586
ERROR: The filesystem has valuable metadata changes in a log which needs to
be replayed.  Mount the filesystem to replay the log, and unmount it before
re-running xfs_repair.  If you are unable to mount the filesystem, then use
the -L option to destroy the log and attempt a repair.
Note that destroying the log may cause corruption -- please attempt a mount
of the filesystem before doing this.

 

Is it suggesting I restart the array in normal (not maintenance) mode and try again?  I'm not sure how else to 'mount' the filesystem.
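
If it literally just means a normal mount (which, as I understand it, replays the XFS journal), I guess the console equivalent would be something like this, assuming /dev/md1 is the right device for disk 1 since that's what I pointed the repair at:

root@media:~# mkdir -p /tmp/disk1
# mounting an XFS filesystem replays its journal automatically
root@media:~# mount -t xfs /dev/md1 /tmp/disk1
root@media:~# umount /tmp/disk1
# then re-run the repair without -L
root@media:~# xfs_repair /dev/md1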

 

Also, is this a good reason to reformat my drives to the newer layout, to get the extra space and the better repair tools it allows?  I had questions about that extra space a while back, but never reformatted the old drives to the new layout.  I guess I'll have to do that pretty soon.

 


root@media:~# xfs_repair -L /dev/md1
Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
ALERT: The filesystem has valuable metadata changes in a log which is being
destroyed because the -L option was used.
        - scan filesystem freespace and inode maps...
sb_fdblocks 37, counted 8229
        - found root inode chunk
Phase 3 - for each AG...
        - scan and clear agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 1
        - agno = 3
        - agno = 2
        - agno = 0
Phase 5 - rebuild AG headers and trees...
        - reset superblock...
Phase 6 - check inode connectivity...
        - resetting contents of realtime bitmap and summary inodes
        - traversing filesystem ...
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify and correct link counts...
done

 

 

I'm restarting the array in normal mode now.  Fingers crossed.
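
Once it's up I'll also check whether the repair orphaned anything, since Phase 6 mentioned moving disconnected inodes:

# anything the repair couldn't reconnect ends up here
root@media:~# ls -la /mnt/disk1/lost+found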

 

Thanks again.


Fill-up with zero Minimum Free does seem like a disaster waiting to happen.

 

You should set Minimum Free to larger than the largest file you will ever write. unRAID has no way to know how large a file will become when it begins writing it. If a disk has more than Minimum Free available it can be chosen, and if it then runs out of space mid-write, the write fails. In fact, with Fill-up and zero Minimum I think it has to fail eventually, since it must choose the fullest disk (if Split Level allows it) no matter how full it is.
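
Roughly, the per-disk rule amounts to this (illustrative only; unRAID applies it internally, and the 100MB figure is just the value you set above):

# illustrative sketch of the eligibility check for one disk
MIN_FREE_KB=$((100 * 1024))                       # 100MB Minimum Free, in KB
avail=$(df --output=avail /mnt/disk1 | tail -1)   # free space on disk 1, in KB
if [ "$avail" -gt "$MIN_FREE_KB" ]; then
    echo "disk1 can be chosen for new writes"
else
    echo "disk1 is skipped"
fi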

 

Maybe @Squid could add a check in FCP for Fill-up with zero Minimum.

1 hour ago, trurl said:

Fill-up with zero Minimum Free does seem like a disaster waiting to happen. [...]

 

Agreed, but I'm happy this turned out NOT to be a disaster; it easily could have been.

 

I think I've got myself back on track now, just cleaning up what got bunged up in the process.  I'm also changing to a 40GB minimum, since that's (almost) the size of the largest movie on my server.  Probably excessive, but this was too scary and frustrating to risk dealing with again if it's avoidable.
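
For what it's worth, I sized that by checking the biggest file on the share, roughly like this (the path is specific to my setup):

# print size in bytes and path of the largest file under the share
root@media:~# find /mnt/user/Movies -type f -printf '%s %p\n' | sort -n | tail -1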

 

Thanks again everyone for the help; much appreciated.

