positive.bgw

Posts

  1. ok, all clear now. I tried to mount both at the same time > that was the reason. So, currently I am copying data from the disk to an external disk. The next step for me is to pre-clear the 3 TB disk, add it to the array, and return the data to the array. Then I will move files inside the array (via mc or unBALANCE) to somehow group the data on the disks > currently each share is spread across almost all disks.
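The file-grouping step mentioned above can also be done from the console; a minimal sketch using rsync, where the share name and disk numbers are assumptions for illustration (unBALANCE performs essentially the same moves through its GUI):

```shell
# Move one share's files from disk2 to disk1, preserving attributes.
# "Movies", disk1 and disk2 are placeholders, not the poster's real layout.
rsync -avX --remove-source-files /mnt/disk2/Movies/ /mnt/disk1/Movies/
# rsync leaves empty source directories behind; clean them up.
find /mnt/disk2/Movies -type d -empty -delete
```

When moving files this way, copy disk-to-disk (`/mnt/diskN` to `/mnt/diskM`) and do not mix disk paths with `/mnt/user` share paths in the same command, as that can corrupt or lose files on unRAID.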
  2. thanks, the command has passed now > I can access the disk data. Funny thing is that xfs_repair also passed on the old, problematic disk. Previously I tried the same several times with no success. The only difference is that this time I ran it via the terminal, not via the GUI as before (plus the ddrescue command in between; no system restart or other change). The only issue I have at the moment is that I cannot mount the new 3 TB disk via the UD plugin, while I can mount the old problematic disk. Currently I am copying data to the array, so it is no big issue for me. thanks again!
  3. I have finished ddrescue to a different disk (from the 2 TB to the 3 TB disk). I ran the xfs_repair command on the new 3 TB disk, but no luck:

     xfs_repair -vL /dev/sdb
     Phase 1 - find and verify superblock...
     bad primary superblock - bad magic number !!!
     attempting to find secondary superblock...
     .found candidate secondary superblock...
     unable to verify superblock, continuing...
     ........................................

     (and this goes on forever) any idea how to recover the data?
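One likely cause of the "bad magic number" failure above, offered as a guess rather than a diagnosis: xfs_repair was pointed at the whole device (/dev/sdb) rather than the partition that holds the filesystem. On a cloned unRAID data disk the XFS superblock normally lives on the first partition:

```shell
# Check where the filesystem actually is (device names are examples):
lsblk -f /dev/sdb        # list partitions and any detected filesystems
# Then run the repair against the partition, not the raw disk:
xfs_repair -v /dev/sdb1
```

A dry run with `xfs_repair -n /dev/sdb1` first will report problems without writing anything to the disk.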
  4. I ran it from the GUI (with no options to xfs_repair), but there is an error at the end of the process:

     ...
     Phase 7 - verify and correct link counts...
     Maximum metadata LSN (1:553427) is ahead of log (0:0).
     Format log to cycle 4.
     xfs_repair: libxfs_device_zero write failed: Input/output error

     complete output attached
     xfs_repair disk3 -new.txt
  5. I switched cables again, and I am now able to access disk3 > it still reports read errors, but at least I can run the xfs_repair command. Attached:
     - SMART for disk3
     - new diagnostics file
     - output of the xfs_repair -n command

     Should I run "xfs_repair /dev/md3 -vL" or something different to fix this error: "Unmountable: No file system" (in maintenance mode)?

     tower-diagnostics-20201014-2202.zip
     tower-smart-20201014-2208.zip
     xfs_repair disk3.txt
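For a disk that is part of the array, the usual approach is to repair through the md device while the array is started in maintenance mode, so that parity is updated along with the disk. A hedged sketch for disk3 (escalate only as far as needed):

```shell
# Array started in maintenance mode; disk3 corresponds to /dev/md3.
xfs_repair -n /dev/md3    # dry run: report problems, change nothing
xfs_repair -v /dev/md3    # actual repair, verbose
# Only if xfs_repair refuses to run because of a dirty log:
xfs_repair -vL /dev/md3   # -L zeroes the log; in-flight metadata may be lost
```

The -L flag discards the filesystem journal, so it should be used only when xfs_repair explicitly asks for it.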
  6. sorry, I didn't mention that: I already replaced the SATA cable and the situation is the same. I also attached the disk to a different SATA port and to an add-on SATA card (to change the SATA connector), and the disk behaves the same. I can replace the SATA cable again, but I already did this.
  7. I am in a specific situation on my unRaid with 4 data disks + 1 parity + 1 cache:

     One of my disks went offline > the reason was a failure of the add-on card for additional SATA ports. I replaced the add-on card and the SATA cable, and the disk is now visible in the array as "new" (although it is the same disk as before). Before I put it back into the array (several hours later), a different disk went offline with the following error: "Unmountable: No file system". So, at the moment, I have:
     - 2 disks in normal mode
     - 1 disk marked as "new"
     - 1 disk with the error: "Unmountable: No file system"

     The array starts automatically upon system boot, but part of the data is missing. I stopped the array, started it in maintenance mode, and tried to repair disk3 with xfs_repair, and got:

     Phase 1 - find and verify superblock...
     superblock read failed, offset 562633482240, size 131072, ag 9, rval -1
     fatal error -- Input/output error

     I rebooted several times, and each time this 2 TB disk (disk3) behaves differently > sometimes the array starts, sometimes the disk is marked as offline, sometimes I cannot start the array even in maintenance mode... I suppose this 2 TB disk is "dead".

     I have 1 parity drive, and part of the data is lost at the moment. Any suggestions on how to recover the data using the dropped 3 TB disk (marked as "new") and the existing array? Or to rephrase the question: how can I return this disk to the existing array without an array rebuild (since a different disk failed)? Any other ideas are welcome.

     tower diagnostic file and SMART for disk3 attached

     thanks in advance

     tower-diagnostics-20201014-1556.zip
     tower-smart-20201014-1631.zip