lustigpeter
ata6.00: failed command: READ FPDMA QUEUED
lustigpeter replied to lustigpeter's topic in General Support
Well, now I'm 6% into a non-correcting parity check and I have 517 errors. The count isn't increasing, however, which is kind of good? No unusual errors in the logs. tower-diagnostics-20230814-2038.zip
ata6.00: failed command: READ FPDMA QUEUED
lustigpeter replied to lustigpeter's topic in General Support
Did a full reboot, and now it's back showing progress. Everything seems to work and there are no further errors in the log. Thanks a lot, really a lifesaver.
ata6.00: failed command: READ FPDMA QUEUED
lustigpeter replied to lustigpeter's topic in General Support
```
Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
ALERT: The filesystem has valuable metadata changes in a log which is being
destroyed because the -L option was used.
        - scan filesystem freespace and inode maps...
clearing needsrepair flag and regenerating metadata
        - found root inode chunk
Phase 3 - for each AG...
        - scan and clear agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
        - agno = 8
        - agno = 9
        - agno = 10
        - agno = 11
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 1
        - agno = 5
        - agno = 8
        - agno = 11
        - agno = 6
        - agno = 7
        - agno = 9
        - agno = 10
        - agno = 4
        - agno = 3
        - agno = 2
clearing reflink flag on inodes when possible
Phase 5 - rebuild AG headers and trees...
        - reset superblock...
Phase 6 - check inode connectivity...
        - resetting contents of realtime bitmap and summary inodes
        - traversing filesystem ...
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify and correct link counts...
Maximum metadata LSN (7:338191) is ahead of log (1:2).
Format log to cycle 10.
done
```

Mountable again!!! Thank you!!! But I still have two sync errors. Should I just fix them?
ata6.00: failed command: READ FPDMA QUEUED
lustigpeter replied to lustigpeter's topic in General Support
When running with -n (assuming that's totally fine because it makes no changes):

```
Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
ALERT: The filesystem has valuable metadata changes in a log which is being
ignored because the -n option was used.  Expect spurious inconsistencies
which may be resolved by first mounting the filesystem to replay the log.
        - scan filesystem freespace and inode maps...
        - found root inode chunk
Phase 3 - for each AG...
        - scan (but don't clear) agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
        - agno = 8
        - agno = 9
        - agno = 10
        - agno = 11
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 1
        - agno = 5
        - agno = 7
        - agno = 2
        - agno = 3
        - agno = 11
        - agno = 4
        - agno = 8
        - agno = 10
        - agno = 9
        - agno = 6
No modify flag set, skipping phase 5
Phase 6 - check inode connectivity...
        - traversing filesystem ...
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify link counts...
Maximum metadata LSN (7:338172) is ahead of log (7:338100).
Would format log to cycle 10.
No modify flag set, skipping filesystem flush and exiting.
```
ata6.00: failed command: READ FPDMA QUEUED
lustigpeter replied to lustigpeter's topic in General Support
```
Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
ERROR: The filesystem has valuable metadata changes in a log which needs to
be replayed.  Mount the filesystem to replay the log, and unmount it before
re-running xfs_repair.  If you are unable to mount the filesystem, then use
the -L option to destroy the log and attempt a repair.
Note that destroying the log may cause corruption -- please attempt a mount
of the filesystem before doing this.
```

What now? Should I run it with -L?
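For reference, the order of operations that error message asks for can be sketched like this. The device name `/dev/md2p1` is only an assumed example for an Unraid array disk; check the real name in the GUI before running anything, and run it from maintenance mode:

```shell
#!/bin/sh
# Sketch of the recommended repair order for an unmountable XFS disk:
# try a mount first (which replays the log), and only fall back to -L
# if the mount itself fails. DEV is an assumed example device name.
repair_xfs_disk() {
    dev="$1"
    mnt=$(mktemp -d)
    if mount "$dev" "$mnt"; then
        # A successful mount replays the XFS log; after a clean unmount,
        # a plain xfs_repair can run without the log warning.
        umount "$mnt"
        xfs_repair "$dev"
    else
        # Last resort: -L zeroes the log, which can discard the newest
        # metadata changes and move orphaned files into lost+found.
        xfs_repair -L "$dev"
    fi
}
# repair_xfs_disk /dev/md2p1   # example invocation, intentionally commented out
```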
ata6.00: failed command: READ FPDMA QUEUED
lustigpeter replied to lustigpeter's topic in General Support
I removed all SATA cables, reordered them (to rule out a dead SATA controller), and replaced one SATA cable that didn't have a retention clip. Sadly, the same disk still isn't mountable. I'm guessing I screwed up by running "check parity" and forgetting to untick "write corrections", and it fixed one error, probably corrupting disk 2. When I start the array in maintenance mode everything is fine, and when I run a check (without fixing) it only finds 2 errors after a few minutes. Every device shows up, but when I mount them, disk 2 isn't mountable. Should I run in maintenance mode and let it fix the two errors? tower-diagnostics-20230814-1921.zip
ata6.00: failed command: READ FPDMA QUEUED
lustigpeter replied to lustigpeter's topic in General Support
Okay... after rebooting, one disk is not mountable because of an unsupported file system.
lustigpeter started following ata6.00: failed command: READ FPDMA QUEUED
Hi, suddenly I'm getting these errors:

```
Tower kernel: ata6.00: failed command: READ FPDMA QUEUED
Tower kernel: ata6.00: cmd 60/80:a0:40:60:a8/00:00:24:00:00/40 tag 20 ncq dma 65536 in
Tower kernel: res 40/00:80:40:4b:a8/00:00:24:00:00/40 Emask 0x50 (ATA bus error)
Tower kernel: ata6.00: status: { DRDY }
Tower kernel: ata6: hard resetting link
Tower kernel: ata6: found unknown device (class 0)
Tower kernel: ata6: SATA link up 1.5 Gbps (SStatus 113 SControl 310)
Tower kernel: ata6.00: configured for UDMA/33
Tower kernel: ata6: EH complete
Tower kernel: ata6.00: exception Emask 0x50 SAct 0x8f000380 SErr 0x4070802 action 0xe frozen
```

tower-diagnostics-20230814-1827.zip

But I don't know which drive is the issue here; all of them seem available. A few days ago I had issues with a Crucial SSD freezing (see my post history), but after a firmware update that issue seemed gone. Is there any way for me to figure out which SATA controller or drive it is? What's the issue exactly?
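One way to work out which physical drive sits behind ata6 (or any ataN the kernel complains about): on AHCI systems, the resolved sysfs path of each disk contains the ataN port it hangs off. A minimal sketch, assuming standard `sdX` device names:

```shell
#!/bin/sh
# List each SATA disk together with the ataN port it is attached to.
# The resolved sysfs path of an AHCI-attached disk looks like
# /sys/devices/pci.../ata6/host5/.../block/sda, so the port number
# can be read straight out of the path.
for dev in /sys/block/sd*; do
    [ -e "$dev" ] || continue                   # no sd* devices: skip silently
    path=$(readlink -f "$dev/device")           # full sysfs device path
    port=$(printf '%s\n' "$path" | grep -o 'ata[0-9]*' | head -n1)
    printf '%s -> %s\n' "${dev##*/}" "${port:-unknown}"
done
```

Alternatively, `dmesg | grep 'ata6'` usually shows the model and serial number the kernel detected on that port, which you can match against the drive list in the Unraid GUI.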
-
Hi, I have a CT500MX500 (firmware M3CR045) as a cache drive, and it randomly had read/write errors, but as soon as I rebooted and repaired with btrfs it worked fine for a few days (until the same thing happened again). I figured it was just dead and popped in a new drive. I pulled all the data from the btrfs pool and put the disk into my Windows PC, where I'm doing a full scan, but everything looks fine. Furthermore, I read about firmware issues that randomly cause the drive to get stuck (which would kind of fit?). Can I update the drive firmware on Windows and put it back into Unraid, or should I just write it off and forget the 40€? Sadly, I didn't create a diagnostics zip when it happened, but generally the drive was just unresponsive, and after a reboot btrfs managed to fix all issues since it was a mirror.
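If the drive does go back into a pool after the firmware update, btrfs can tell you fairly quickly whether it is misbehaving again. A sketch, assuming the pool is mounted at `/mnt/cache` (adjust the path to your system):

```shell
#!/bin/sh
# Quick health check for a btrfs pool after swapping a suspect device back in.
check_btrfs_pool() {
    pool="$1"
    # Per-device error counters (read/write/flush/corruption/generation);
    # any non-zero value means the device misbehaved since the counters
    # were last reset.
    btrfs device stats "$pool"
    # Foreground scrub (-B): reads every block and verifies checksums,
    # so a flaky drive in a mirror shows up as corrected or
    # uncorrectable errors in the summary.
    btrfs scrub start -B "$pool"
}
# check_btrfs_pool /mnt/cache   # example invocation
```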
-
read errors during parity sync, what now?
lustigpeter replied to lustigpeter's topic in General Support
Wait, so the 2TB disk with the read errors is fine? What should I do with it? Also, what about the 6TB data disk that had the write error? Are you saying I don't have to replace either of them?
read errors during parity sync, what now?
lustigpeter replied to lustigpeter's topic in General Support
@JonathanM yes, sorry. I edited the post.
Hello everyone. I attached the diagnostics file, but I'll explain everything here too. I have 3 data drives (6TB, 4TB, 2TB) and 1 parity drive (6TB). Yesterday my 6TB drive had a single write error. I then shut everything down and reconnected every SATA connection, as I had replaced a disk a few days earlier, so it's not impossible that I loosened a cable by mistake. Hoping the disk wasn't fully toast, I started a parity sync. This ran fine until 30%, then the speed dropped to around 200 KB/s for at least 5 minutes. After that, the 2TB drive reported read errors; within a few minutes it was at about 3000 errors (but it did not increase any further). During and after those errors, the speed stayed at 200 KB/s. But (luckily?) it finished reading that drive (the overall progress of the parity sync passed 2TB) and it is currently continuing the parity sync.

What can I expect now? I'm hoping I'll only have a few corrupted files, or will there be more severe damage? Also, what should I do now? I'm currently ordering a replacement for the 2TB disk. The 6TB drive seems to work fine during the parity sync and has not encountered any further errors (to my knowledge). I'm guessing you'll recommend I also replace the 6TB drive?

tower-diagnostics-20220625-0128.zip
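Before deciding whether either disk has to go, SMART data is the quickest way to tell a failing drive from a cabling problem. A sketch, assuming the suspect disk shows up as `/dev/sdX` (substitute the real device name):

```shell
#!/bin/sh
# Inspect a drive's SMART state after read/write errors during a parity sync.
# Reallocated, pending, or offline-uncorrectable sectors point at the drive
# itself; a rising UDMA CRC error count with clean sectors usually points
# at the SATA cable or connector instead.
smart_check() {
    dev="$1"
    smartctl -H "$dev"                      # overall health verdict
    smartctl -A "$dev" | grep -E \
        'Reallocated_Sector|Current_Pending|Offline_Uncorrectable|UDMA_CRC'
    smartctl -t short "$dev"                # queue a short self-test (~2 min)
}
# smart_check /dev/sdX   # example invocation, run against the 2TB disk
```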