jehan

Everything posted by jehan

  1. Okay - good to know. Yes, I'm a bit limited on my side here. zpool import:

     root@unraid:~# zpool import
        pool: disk4
          id: 7004989206944992445
       state: UNAVAIL
      status: One or more devices contains corrupted data.
      action: The pool cannot be imported due to damaged devices or data.
         see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-5E
      config:

             disk4    UNAVAIL  insufficient replicas
               sdh1   UNAVAIL  invalid label
     root@unraid:~#
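     A minimal sketch of what could be checked next, assuming the device path /dev/sdh1 from the output above (this command is not from the thread):

        # Dump the four ZFS label copies on the partition to see whether any of
        # them is still readable; the import above reports "invalid label".
        zdb -l /dev/sdh1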
  2. Yes, I'm aware of the detected data corruption. Previously, I had some problems with newly attached hardware that forced me to hard-reset the system several times, causing the data corruption. However, this is not a significant issue, since I have the corrupted data backed up. I previously ran a memtest, which showed no problems with the memory. After I plugged Disk 2 back into the array and rebooted the Unraid machine, Disk 5 became unavailable. Consequently, I had two disks that could not be mounted. Surprisingly (for me), Unraid offered me the option to rebuild Disk 5, which I accepted. What do you think? Do I have any chance of rescuing the data on Disk 5?
  3. What I did was: stop the array, remove Disk 2, start the array - Disk 2 was emulated. Then I started to clear Disk 2. Then I stopped the array, plugged Disk 2 back into the array, and started the array. At this point Disk 5 became unavailable. I haven't actively disabled Disk 5 at any point. I hope I answered your question.
  4. Hi, in my Unraid array I have five ZFS single-disk stripes, labeled Disk 1 through Disk 5, along with one parity drive. The five ZFS disks were formatted through the Unraid GUI. Since I encountered an issue with Disk 2, I ran and completed a parity check, stopped the array, removed the disk, and began clearing it. After a few minutes, I canceled the clearing process, reinserted the disk into the array, and restarted it, assuming Unraid would recognize Disk 2 as failed and initiate a rebuild. Upon restarting the array, Disk 2 was flagged as "Unmountable: Unsupported partition layout," which was somewhat expected. What surprised me, however, was that Disk 5 was flagged as "Unmountable: Unsupported or no filesystem." Unraid suggested rebuilding Disk 5, which I accepted. Unfortunately, Disk 5 remains unmountable even after the rebuild finished. Additionally, what puzzles me is that the "zpool import" command shows that disk sdi (Disk 5) carries the pool name "disk4":

     root@unraid:~# zpool import
        pool: disk4
          id: 7004989206944992445
       state: UNAVAIL
      status: One or more devices contains corrupted data.
      action: The pool cannot be imported due to damaged devices or data.
         see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-5E
      config:

             disk4    UNAVAIL  insufficient replicas
               sdi1   UNAVAIL  invalid label

     syslog:

     Mar 23 19:42:20 unraid emhttpd: mounting /mnt/disk4
     Mar 23 19:42:20 unraid emhttpd: shcmd (58): mkdir -p /mnt/disk4
     Mar 23 19:42:20 unraid emhttpd: /usr/sbin/zpool import -f -d /dev/md4p1 2>&1
     Mar 23 19:42:23 unraid emhttpd: pool: disk4
     Mar 23 19:42:23 unraid emhttpd: id: 12000237853854133759
     Mar 23 19:42:23 unraid emhttpd: shcmd (59): /usr/sbin/zpool import -f -N -o autoexpand=on -d /dev/md4p1 12000237853854133759 disk4
     Mar 23 19:42:27 unraid emhttpd: shcmd (60): /usr/sbin/zpool online -e disk4 /dev/md4p1
     Mar 23 19:42:27 unraid emhttpd: /usr/sbin/zpool status -PL disk4 2>&1
     Mar 23 19:42:27 unraid emhttpd: pool: disk4
     Mar 23 19:42:27 unraid emhttpd: state: ONLINE
     Mar 23 19:42:27 unraid emhttpd: config:
     Mar 23 19:42:27 unraid emhttpd: NAME STATE READ WRITE CKSUM
     Mar 23 19:42:27 unraid emhttpd: disk4 ONLINE 0 0 0
     Mar 23 19:42:27 unraid emhttpd: /dev/md4p1 ONLINE 0 0 0
     Mar 23 19:42:27 unraid emhttpd: errors: No known data errors
     Mar 23 19:42:27 unraid emhttpd: shcmd (61): /usr/sbin/zfs set mountpoint=/mnt/disk4 disk4
     Mar 23 19:42:28 unraid emhttpd: shcmd (62): /usr/sbin/zfs set atime=off disk4
     Mar 23 19:42:28 unraid emhttpd: shcmd (63): /usr/sbin/zfs mount disk4
     Mar 23 19:42:28 unraid emhttpd: shcmd (64): /usr/sbin/zpool set autotrim=off disk4
     Mar 23 19:42:29 unraid emhttpd: shcmd (65): /usr/sbin/zfs set compression=off disk4
     Mar 23 19:42:29 unraid emhttpd: mounting /mnt/disk5
     Mar 23 19:42:29 unraid emhttpd: shcmd (66): mkdir -p /mnt/disk5
     Mar 23 19:42:29 unraid emhttpd: /usr/sbin/zpool import -f -d /dev/md5p1 2>&1
     Mar 23 19:42:32 unraid emhttpd: pool: disk4
     Mar 23 19:42:32 unraid emhttpd: id: 7004989206944992445
     Mar 23 19:42:32 unraid emhttpd: shcmd (68): /usr/sbin/zpool import -f -N -o autoexpand=on -d /dev/md5p1 7004989206944992445 disk5
     Mar 23 19:42:35 unraid root: cannot import 'disk4' as 'disk5': one or more devices is currently unavailable
     Mar 23 19:42:35 unraid emhttpd: shcmd (68): exit status: 1
     Mar 23 19:42:35 unraid emhttpd: shcmd (69): /usr/sbin/zpool online -e disk5 /dev/md5p1
     Mar 23 19:42:35 unraid root: cannot open 'disk5': no such pool
     Mar 23 19:42:35 unraid emhttpd: shcmd (69): exit status: 1
     Mar 23 19:42:35 unraid emhttpd: disk5: import error
     Mar 23 19:42:35 unraid emhttpd: /mnt/disk5 mount error: Unsupported or no file system

     Is there anything I can try to restore the ZFS filesystem on Disk 5 so that Unraid can mount it? Thank you for your help.

     unraid-diagnostics-20240324-1349.zip
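     A minimal sketch of a manual, read-only check on the emulated Disk 5, reusing the device path /dev/md5p1 and the pool GUID 7004989206944992445 from the syslog above (these commands are not from the thread; "disk5tmp" is just a placeholder pool name):

        # Dump the ZFS labels on the emulated Disk 5 partition to see which
        # pool name and GUID they actually carry.
        zdb -l /dev/md5p1

        # If the labels look sane, try a read-only import by GUID under a
        # temporary name, bypassing Unraid's normal mount sequence.
        zpool import -f -N -o readonly=on -d /dev/md5p1 7004989206944992445 disk5tmp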
  5. Ok - just to clarify: recovering the data from btrfs is my only option? No chance to restore from parity?
  6. I did not save these diags before rebooting. I tried several things to get the data from the btrfs disk, but without success. At the moment I'm running "btrfs rescue chunk-recover -v". Here is what I tried so far:

     "btrfs restore -vi"
     parent transid verify failed on 7053440794624 wanted 12999 found 13003
     parent transid verify failed on 7053440794624 wanted 12999 found 13003
     parent transid verify failed on 7053440794624 wanted 12999 found 13003
     Ignoring transid failure
     WARNING: could not setup extent tree, skipping it
     Couldn't setup device tree
     Could not open root, trying backup super

     "btrfs rescue super-recover"
     All supers are valid, no need to recover

     "btrfs rescue zero-log"
     parent transid verify failed on 7053440794624 wanted 12999 found 13003
     parent transid verify failed on 7053440794624 wanted 12999 found 13003
     parent transid verify failed on 7053440794624 wanted 12999 found 13003
     Ignoring transid failure
     WARNING: could not setup extent tree, skipping it
     Couldn't setup device tree
     ERROR: could not open ctree

     "mount -t btrfs -o usebackuproot"
     wrong fs type, bad option, bad superblock on /dev/sdb1, missing codepage or helper program, or other error.

     dmesg while mounting with "usebackuproot":
     BTRFS warning (device sdb1): 'usebackuproot' is deprecated, use 'rescue=usebackuproot' instead
     BTRFS info (device sdb1): trying to use backup root at mount time
     BTRFS info (device sdb1): disk space caching is enabled
     BTRFS info (device sdb1): has skinny extents
     BTRFS info (device sdb1): flagging fs with big metadata feature
     BTRFS error (device sdb1): parent transid verify failed on 7053440794624 wanted 12999 found 13003
     BTRFS error (device sdb1): parent transid verify failed on 7053440794624 wanted 12999 found 13003
     BTRFS warning (device sdb1): couldn't read tree root
     BTRFS error (device sdb1): open_ctree failed
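     A minimal sketch of one more avenue, assuming the device /dev/sdb1 from the output above and a hypothetical target directory /mnt/recovery with enough free space (these commands are not from the thread):

        # List candidate tree roots from older generations; each line reports a
        # bytenr and generation that btrfs restore can be pointed at.
        btrfs-find-root /dev/sdb1

        # Replace <BYTENR> with one of the values printed above and copy files
        # read-only off the damaged filesystem onto another disk.
        btrfs restore -vi -t <BYTENR> /dev/sdb1 /mnt/recovery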
  7. Yes! I read the documentation and made the mistake anyway. Damn, I was too sure of myself. Thank you - here are the diagnostics. FYI: at the time I downloaded the diagnostics, only the 8TB parity drive and the new 4TB data drive were attached, because I'm running a "btrfs rescue chunk-recover" on the 8TB data drive. unraid-diagnostics-20210620-1152.zip
  8. I added a new drive and started the array with "Parity is already valid". Unfortunately the disk was not precleared but formatted. I'm unsure what this does to the parity drive - if I understand you correctly, I have no chance to restore the data from parity, right? Concerning the post to the gurus: could you give me a hint on how to reach them?
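     A minimal sketch of how the preclear assumption could be checked, with /dev/sdX as a placeholder for the new data disk (not from the thread):

        # A precleared disk is all zeros and therefore contributes nothing to
        # parity, so "Parity is already valid" only holds if that is true.
        # cmp ends with "EOF on /dev/sdX" when the whole disk is zeros; any
        # "differ" output means the disk was not precleared.
        cmp /dev/zero /dev/sdX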
  9. In my case I had no problems with the dock; all disks are detected individually, and all information, like the serial numbers, is also detected correctly. Attached you can find an example.
  10. Hi! I tested Unraid with one 8TB btrfs data disk and one 8TB parity disk in a 5-bay Orico USB case for nearly a month, and everything went very well. Yesterday I decided to add another 4TB btrfs data disk since the 8TB one was nearly full. I had rebuilt the parity from the 8TB data drive just the day before. Now, after starting the array with "Parity is already valid", the 8TB data disk was not mountable, with these errors:

      BTRFS error (device sdb1): parent transid verify failed on 7053440794624 wanted 12999 found 13003
      BTRFS error (device sdb1): parent transid verify failed on 7053440794624 wanted 12999 found 13003
      BTRFS warning (device sdb1): couldn't read tree root
      BTRFS error (device sdb1): open_ctree failed

      To repair the 8TB data disk I have tried several things without success so far (recovering a btrfs partition). Next, I followed steps 1-3 from "Rebuilding a drive onto itself" in the docs to let the parity disk emulate the missing disk, but after starting the array with the now-missing 8TB disk I couldn't find the expected data under /mnt/user/[...]; only some directories but no files were visible. I tried this with and without the newly attached 4TB drive. BTW: I always had to create a "New Config" because the assignment of disks to slots was always mixed up. Currently I'm running "btrfs rescue chunk-recover" on the 8TB disk, and since this takes a very long time I decided to ask for help/alternatives in the Unraid forum. My questions are: shouldn't the parity drive emulate the missing data drive after I removed the 8TB drive as described above, and what does it tell me if I can only see the directory structure on the emulated drive - most likely all data is lost? Is there any way to recover the data from the parity disk to the 8TB data disk? Any help is very appreciated.
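      A minimal sketch of a non-destructive mount attempt, assuming a kernel new enough to support the rescue= mount options and using /mnt/rescue as a hypothetical mount point (these commands are not from the thread):

         # Try to mount the damaged filesystem read-only using a backup tree
         # root; this does not write to the disk, so it is safe to attempt.
         mkdir -p /mnt/rescue
         mount -t btrfs -o ro,rescue=usebackuproot /dev/sdb1 /mnt/rescue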