HELP! Lost data?



Hi, ~20 hours ago power was disconnected from my Unraid machine. When it booted up again, as expected, the system started a parity check, which is still running. I was not particularly concerned about data validity, since most of the time my disks are spun down and only the cache is being written to.

 

However, I just noticed something pretty bad: all the files created in the past 8 months or so seem to be lost, unless by some miracle they reappear when the parity check ends. This behavior is different from anything I've experienced in the past.

 

Another thing I'm noticing is that one of the disks is showing "no file system" and is not being mounted. On the other hand, that disk seems to be in use for rebuilding parity, which also wipes out my chance of getting my data back that way.

 

Attaching diagnostic files. Please help.

 

pescaflix-diagnostics-20240413-2203.zip

Screenshot 2024-04-13 at 22.14.07.png

Edited by grana
7 hours ago, itimpi said:

Handling of drives that unexpectedly go unmountable is covered here in the online documentation accessible via the ‘Manual’ link at the bottom of the GUI or the DOCS link at the top of each forum page. 

The manual says that documentation for ZFS should be added, but it is not :(

1 hour ago, grana said:

The manual says that documentation for ZFS should be added, but it is not :(

Sorry - missed the fact that it was ZFS.

 

I'm not sure of the steps to handle this on ZFS, I'm afraid; hopefully someone else can provide some guidance. In the worst case, I would expect something like UFS Explorer on Windows to be able to recover most or all of the data.

zpool import
   pool: cache
     id: 15828626886734704336
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 config:

	cache       ONLINE
	  sdb1      ONLINE

   pool: disk3
     id: 2087549526651695302
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 config:

	disk3       ONLINE
	  md3p1     ONLINE

   pool: disk2
     id: 11012034467400980073
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 config:

	disk2       ONLINE
	  md2p1     ONLINE

   pool: disk1
     id: 4272399308488201852
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 config:

	disk1       ONLINE
	  md1p1     ONLINE

 

This is after starting the array in maintenance mode. With the array stopped, only the cache entry is shown.
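
For context, a quick hedged check (my addition, assuming the md/LUKS device naming shown in the output above): the md*p1 devices these pools reference are created by Unraid when the array is started, which is why the data-disk pools only show up then. You can confirm those devices exist with:

lsblk -f            # block devices and filesystem signatures; the mdXp1 devices appear only while the array is started
ls -l /dev/mapper/  # device-mapper nodes, including the encrypted mappings the pools point at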

zpool import disk3
cannot import 'disk3': insufficient replicas
	Destroy and re-create the pool from
	a backup source.

and

zdb -U /data/zfs/zpool.cache -eCC disk3

Configuration for import:
        vdev_children: 1
        version: 5000
        pool_guid: 2087549526651695302
        name: 'disk3'
        state: 0
        vdev_tree:
            type: 'root'
            id: 0
            guid: 2087549526651695302
            children[0]:
                type: 'disk'
                id: 0
                guid: 18181126314476381217
                whole_disk: 0
                metaslab_array: 256
                metaslab_shift: 34
                ashift: 12
                asize: 14000497885184
                is_log: 0
                create_txg: 4
                expansion_time: 1712610252
                path: '/dev/mapper/md3p1'
                devid: 'dm-uuid-CRYPT-LUKS2-291a6f7d67b049978cdfc4ef46119047-md3p1'
                phys_path: '/dev/disk/by-uuid/2087549526651695302'
        load-policy:
            load-request-txg: 18446744073709551615
            load-rewind-policy: 2

MOS Configuration:
        version: 5000
        name: 'disk3'
        state: 0
        txg: 1278326
        pool_guid: 2087549526651695302
        errata: 0
        hostname: 'pescaflix'
        com.delphix:has_per_vdev_zaps
        vdev_children: 1
        vdev_tree:
            type: 'root'
            id: 0
            guid: 2087549526651695302
            create_txg: 4
            children[0]:
                type: 'disk'
                id: 0
                guid: 18181126314476381217
                path: '/dev/mapper/md3p1'
                whole_disk: 0
                metaslab_array: 256
                metaslab_shift: 34
                ashift: 12
                asize: 14000497885184
                is_log: 0
                create_txg: 4
                expansion_time: 1712610252
                com.delphix:vdev_zap_leaf: 129
                com.delphix:vdev_zap_top: 130
        features_for_read:
            com.delphix:hole_birth
            com.delphix:embedded_data

 


I think that most likely you will need to restore that pool from a backup, but try these first:

zpool import -F disk3

and if that still fails

zpool import -FX disk3

If both fail, restore from a backup; if there's no backup, you can try a file recovery app like UFS Explorer.
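
One lower-risk step worth adding (my addition, based on the standard zpool-import options rather than anything Unraid-specific): -n combined with -F only reports whether the rewind could succeed without changing anything, and a read-only import lets you copy data off before attempting any repair that writes to the pool:

zpool import -Fn disk3                   # dry run: checks whether the -F rewind would make the pool importable, writes nothing
zpool import -o readonly=on -F disk3     # if it would, import read-only and copy the data off first

Since -X (extreme rewind) can roll the pool back past recent transactions, it makes sense to keep it as the last attempt, as suggested above.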
