
Kujo


Posts posted by Kujo

  1. 10 hours ago, johnnie.black said:

    Recommend converting away from reiserfs; it hasn't been recommended for years now, for various reasons.

    I'll look into this in the new year once I get my shares cleaned up.  I've been using Unraid for 8 years and never had any issues before.  I didn't realize XFS was the recommendation.
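For reference, the usual reiserfs-to-XFS conversion on Unraid is copy-off, reformat, copy-back. A rough sketch, assuming a spare array disk with enough free space (disk numbers here are illustrative, not from this server):

```shell
# Hypothetical layout: disk4 is the reiserfs disk being converted,
# disk5 is an empty disk with enough space to hold its contents.

# 1. Copy everything off the reiserfs disk, preserving permissions/xattrs:
rsync -avX /mnt/disk4/ /mnt/disk5/

# 2. In the Unraid GUI: stop the array, change disk 4's filesystem
#    to XFS, start the array, and format disk 4 (this erases it).

# 3. Copy the data back onto the freshly formatted XFS disk:
rsync -avX /mnt/disk5/ /mnt/disk4/
```

Verify the copy (e.g. compare `du -s` on both sides) before reformatting, since step 2 is destructive.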

  2. Quote

    reiserfsck 3.6.27
    Will read-only check consistency of the filesystem on /dev/md4
    Will put log info to 'stdout'
    ###########
    reiserfsck --check started at Mon Dec 16 18:03:21 2019
    ###########
    Replaying journal:
    Replaying journal: Done.
    Reiserfs journal '/dev/md4' in blocks [18..8211]: 0 transactions replayed
    Checking internal tree..
    Bad root block 0. (--rebuild-tree did not complete)

    --rebuild-tree on disk 1 was successful, but not on disk 4, even after 2 attempts.  Is it worth running again?  Any other options?  If I were to replace the drive, could I rebuild via parity and not lose anything?

    mymediasvr-diagnostics-20191216-1817.zip
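Two hedged notes on the questions above: a parity rebuild reconstructs the disk bit-for-bit, so it would faithfully reproduce the filesystem corruption rather than fix it; and when --rebuild-tree keeps failing at the root block, reiserfsck has a slow last-resort mode that rescans the whole partition for leaf nodes. A sketch (array in maintenance mode, always against the /dev/mdX device):

```shell
# Run against the parity-protected md device, never the raw /dev/sdX,
# or parity will be invalidated. Array must be in maintenance mode.

# Last resort after an ordinary --rebuild-tree has failed: scan the
# entire partition for leaf nodes (very slow on a large disk):
reiserfsck --rebuild-tree --scan-whole-partition /dev/md4

# Verify afterwards with a read-only check:
reiserfsck --check /dev/md4
```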

  3. Quote

    disk 4 output

    ----------------------------------------------------------------

    reiserfsck 3.6.27

    Will read-only check consistency of the filesystem on /dev/md4
    Will put log info to 'stdout'
    ###########
    reiserfsck --check started at Sun Dec 15 17:12:55 2019
    ###########
    Replaying journal: 
    Replaying journal: Done.
    Reiserfs journal '/dev/md4' in blocks [18..8211]: 0 transactions replayed
    Checking internal tree..

    Bad root block 0. (--rebuild-tree did not complete)

     

  4. Quote

    disk 1 output

    ------------------------

    reiserfsck 3.6.27

    Will read-only check consistency of the filesystem on /dev/md1
    Will put log info to 'stdout'
    ###########
    reiserfsck --check started at Sun Dec 15 17:06:47 2019
    ###########
    Replaying journal: Trans replayed: mountid 188, transid 405216, desc 7519, len 1, commit 7521, next trans offset 7504

    Replaying journal: |                                        |  0.3%  1 trans
    Trans replayed: mountid 188, transid 405217, desc 7522, len 1, commit 7524, next trans offset 7507
    Trans replayed: mountid 188, transid 405218, desc 7525, len 1, commit 7527, next trans offset 7510
    Trans replayed: mountid 188, transid 405219, desc 7528, len 1, commit 7530, next trans offset 7513
    Trans replayed: mountid 188, transid 405220, desc 7531, len 1, commit 7533, next trans offset 7516

    Replaying journal: |=                                       /  1.6%  5 trans
    Trans replayed: mountid 188, transid 405221, desc 7534, len 1, commit 7536, next trans offset 7519
    Trans replayed: mountid 188, transid 405222, desc 7537, len 1, commit 7539, next trans offset 7522
    Trans replayed: mountid 188, transid 405223, desc 7540, len 1, commit 7542, next trans offset 7525

                                                                                    

    Replaying journal: Done.
    Reiserfs journal '/dev/md1' in blocks [18..8211]: 8 transactions replayed
    Checking internal tree..  finished
    Comparing bitmaps..Bad nodes were found, Semantic pass skipped
    3 found corruptions can be fixed only when running with --rebuild-tree
    ###########
    reiserfsck finished at Sun Dec 15 17:10:38 2019
    ###########
    block 164600672: The level of the node (0) is not correct, (1) expected
     the problem in the internal node occured (164600672), whole subtree is skipped
    block 191561884: The level of the node (0) is not correct, (2) expected
     the problem in the internal node occured (191561884), whole subtree is skipped
    block 77158294: The level of the node (0) is not correct, (3) expected
     the problem in the internal node occured (77158294), whole subtree is skipped
    vpf-10640: The on-disk and the correct bitmaps differs.

     
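The disk 1 check is the better case: the tree walk finished, and only three internal nodes have a bad level, which reiserfsck itself says can be fixed with --rebuild-tree. A sketch of the follow-up (maintenance mode, md device as above):

```shell
# Rebuild the internal tree on disk 1; reiserfsck re-reads all leaf
# nodes and reconstructs the internal nodes, including the three
# bad-level nodes reported by the check:
reiserfsck --rebuild-tree /dev/md1

# Re-check, then look in the filesystem root for a lost+found
# directory, where reiserfsck places files it could not re-link:
reiserfsck --check /dev/md1
```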

    I replaced the parity drive with a new 4TB drive, as well as disk 1 with another new 4TB drive, since the original disk 1 was reporting SMART failures.  I followed the steps on the wiki to replace the parity drive; this completed in about 12.5 hours.  I then replaced disk 1 with the other new 4TB drive, which took just over 10 hours.

     

    When I attempted to bring the array back up, disk 4 was unmountable.  This was a surprise, as there had never been an issue with this disk.  I put the array in maintenance mode and ran a disk check in the GUI for disk 4.  There were errors, and it recommended running reiserfsck --rebuild-tree.  I did, but no change.  Not sure how to get out of this mess.

     

    I've attached the logs and screenshots.  Running Unraid 6.6.7.  Any help would be appreciated.
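Before any further destructive repair on disk 4, it may be worth trying to salvage whatever is still readable. A sketch, assuming the superblock is intact enough for a read-only mount (the destination path is illustrative):

```shell
# Try a read-only mount of the damaged filesystem; this makes no changes:
mkdir -p /mnt/salvage
mount -t reiserfs -o ro /dev/md4 /mnt/salvage

# If the mount succeeds, copy whatever is reachable to another array
# disk with free space (disk2 here is just an example destination):
rsync -av /mnt/salvage/ /mnt/disk2/salvage-disk4/

umount /mnt/salvage
```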


     

    image.png

    mymediasvr-diagnostics-20191215-1633.zip
