Everything posted by Kevin T

  1. Here's the screenshot; I also ran fdisk if that helps at all:

     root@unRAID:~# fdisk -l /dev/md1
     Disk /dev/md1: 4.55 TiB, 5000947249152 bytes, 9767475096 sectors
     Units: sectors of 1 * 512 = 512 bytes
     Sector size (logical/physical): 512 bytes / 512 bytes
     I/O size (minimum/optimal): 512 bytes / 512 bytes
  2. I did originally try in normal mode and it said "Unmountable: Wrong or no file system", so I stopped the array and started it again in Maintenance mode. This is all I got:

     root@unRAID:~# xfs_repair -v /dev/md1
     Phase 1 - find and verify superblock...
     xfs_repair: error - read only 0 of 512 bytes
     xfs_repair: data size check failed
     xfs_repair: cannot repair this filesystem. Sorry.

     xfs_repair barely did anything; it ran for only about a second. I find it weird that it's not doing anything, because this is the Parity drive I did a size upgrade on before anything started going wrong. The only thing I've done on the other 3 data drives is run xfs_repair checks on them; I haven't written any data to those drives.
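
     For reference, the "read only 0 of 512 bytes" error usually means xfs_repair could not read anything from the device at all, so it may be worth confirming the device is present and readable before repairing. A minimal sketch, assuming the array is started in Maintenance mode and /dev/md1 is the disk in question (device name is illustrative):

     blockdev --getsize64 /dev/md1                  # should print the disk size in bytes, not 0
     dd if=/dev/md1 of=/dev/null bs=512 count=1     # confirms the first sector can be read
     xfs_repair -n /dev/md1                         # -n = no-modify check, reports but changes nothing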
  3. Here are the diagnostics. unraid-diagnostics-20221016-0335.zip
  4. No wonder: I was hitting Done instead of Apply; now I can configure. I must have been overly anxious. But I tried starting the array with the old Parity disk, removing the substituted Disk 1, and it won't mount (it says "Unmountable: Wrong or no file system"), so I stopped the array and started in Maintenance mode. This is all I am getting:

     root@unRAID:~# xfs_repair -v /dev/md1
     Phase 1 - find and verify superblock...
     xfs_repair: error - read only 0 of 512 bytes
     xfs_repair: data size check failed
     xfs_repair: cannot repair this filesystem. Sorry.
  5. I've tried that 3 times, but it didn't change anything. I was originally advised to "Retain All current configuration", so that is what I'm doing.
  6. Hi, well, now I'm completely confused. I got another 5TB drive as a Disk 1 placeholder so I can use my original Parity drive. I put all my old original working drives back in (all the same models and sizes except Disk 4), ran New Config, and now I'm getting this message: "Disk in parity slot is not biggest". It seems to be a bug. I upgraded from 6.11.0 to 6.11.1 earlier today, so I thought it might be a bug in the new version, but downgrading didn't resolve it. Here are screenshots of all my drives. The new 5TB is an external and it is showing a larger partition size, but even if I exclude it, I still get the same error. I included screenshots with it both included and excluded (same message either way), as well as the drive info for all the drives. Any thoughts on how to circumvent this?
  7. It will rebuild automatically when you restart the array.
  8. Disk 4 failed first, during a rebuild while upgrading to a larger drive, so the Disk 4 in the array now is the original one; I wasn't sure whether Parity would still be completely valid and in sync with that. Then, while it was sitting in that degraded state, the original Disk 1 failed when I tried to copy a file off of it (the system froze). That's why I didn't know if I should rebuild parity to match what's on the drives, and maybe run a filesystem check on the other drives as well just to be safe (see the sketch below).
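
     A read-only check doesn't change anything on disk, so it can be run on every data disk first. A minimal sketch, assuming the array is in Maintenance mode and the data disks show up as /dev/md2 through /dev/md4 (adjust the device names to your slots):

     for dev in /dev/md2 /dev/md3 /dev/md4; do
         echo "== $dev =="
         xfs_repair -n "$dev"     # -n = report problems only, write nothing
     done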
  9. Very good; at least I'm able to recover some data so far, and I'm hoping for better results on the other one. I really appreciate all your help through this, thank you very much. Should I go ahead and rebuild the new drive? I'm also wondering whether I should rebuild this parity if I'm not going to need it for further recovery.
  10. It appears to be emulating Disk 1 (see below), but there is a lot of free space and I believe the disk was pretty full before. It's weird that it's showing Disk 1 as 6TB when the original was 5TB; is that because I originally assigned the new 6TB drive and then removed it? There are tons of folders/files in the lost+found directory on Disk 1; it's showing 29226 objects: 23985 directories, 5241 files (38.1 PB total). Do you think that's the best I can do on recovery from this parity, or are there other options? Also, should the files that are still present and not in lost+found be okay? I'm thinking to still try the other parity when I get the additional drive in a couple of days, and I will probably just back up what is on here if I'm out of options with this parity (see the copy sketch below).
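
      If it comes to copying off whatever is recoverable (including lost+found), a plain rsync is one way to do it. A minimal sketch, assuming the emulated disk is mounted at /mnt/disk1 and the backup target at /mnt/disks/backup (paths are placeholders for my setup):

      rsync -avn /mnt/disk1/ /mnt/disks/backup/disk1/            # dry run: -n only lists what would be copied
      rsync -av --partial /mnt/disk1/ /mnt/disks/backup/disk1/   # real copy; --partial keeps interrupted files resumable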
  11. I tried mounting it previously and it didn't mount, so I was going to move on to my other Parity drive, but I have to wait for another 5TB to come in. So this is the 2nd attempt on the newer Parity, but I had never run the xfs_repair before. I just realized that I am still in Maintenance mode after unassigning the disk to emulate, and that is what xfs_repair is running against. I'm hoping that's not an issue; there were some similar postings I came across in the forums and they were in Maintenance mode while running xfs_repair. This is the tail end of xfs_repair when it completed:

      resetting inode 5524570396 nlinks from 3 to 2
      resetting inode 5524595356 nlinks from 3 to 2
      resetting inode 5524595374 nlinks from 3 to 2
      resetting inode 5524603471 nlinks from 3 to 2
      resetting inode 5524694124 nlinks from 28 to 12
      resetting inode 5524765084 nlinks from 3 to 2
      resetting inode 5532130247 nlinks from 3 to 2
      resetting inode 5532130255 nlinks from 3 to 2
      resetting inode 5532130258 nlinks from 4 to 3
      resetting inode 5532130330 nlinks from 3 to 2
      resetting inode 5532298543 nlinks from 3 to 2
      resetting inode 5532298547 nlinks from 3 to 2
      resetting inode 5532298558 nlinks from 5 to 4
      resetting inode 5532298562 nlinks from 3 to 2
      Metadata corruption detected at 0x451c80, xfs_bmbt block 0x1d1c11b30/0x1000
      libxfs_bwrite: write verifier failed on xfs_bmbt bno 0x1d1c11b30/0x8
      Metadata corruption detected at 0x451c80, xfs_bmbt block 0x1d21689b8/0x1000
      libxfs_bwrite: write verifier failed on xfs_bmbt bno 0x1d21689b8/0x8
      Metadata corruption detected at 0x451c80, xfs_bmbt block 0x1d216c4c8/0x1000
      libxfs_bwrite: write verifier failed on xfs_bmbt bno 0x1d216c4c8/0x8
      Metadata corruption detected at 0x451c80, xfs_bmbt block 0x1d216a188/0x1000
      libxfs_bwrite: write verifier failed on xfs_bmbt bno 0x1d216a188/0x8
      Metadata corruption detected at 0x451c80, xfs_bmbt block 0x132170f70/0x1000
      libxfs_bwrite: write verifier failed on xfs_bmbt bno 0x132170f70/0x8
      Metadata corruption detected at 0x457850, xfs_bmbt block 0x1d1e5ac88/0x1000
      libxfs_bwrite: write verifier failed on xfs_bmbt bno 0x1d1e5ac88/0x8
      Maximum metadata LSN (1976772755:33719779) is ahead of log (1:2).
      Format log to cycle 1976772758.
      xfs_repair: Releasing dirty buffer to free list!
      xfs_repair: Releasing dirty buffer to free list!
      xfs_repair: Releasing dirty buffer to free list!
      xfs_repair: Releasing dirty buffer to free list!
      xfs_repair: Releasing dirty buffer to free list!
      xfs_repair: Releasing dirty buffer to free list!
      xfs_repair: Refusing to write a corrupt buffer to the data device!
      xfs_repair: Lost a write to the data device!
      fatal error -- File system metadata writeout failed, err=117. Re-run xfs_repair.

      This is what it's showing now:
  12. It's saying to mount the filesystem, but Maintenance mode doesn't mount, so should I stop and restart the array normally, or just remain in Maintenance mode as it is right now? Also, what's the exact command line, please? I don't want to screw anything up.

      UPDATE: Running xfs_repair -L /dev/md1 (hopefully that is correct). It's producing tons of messages and writes to the Parity drive, with a lot of "resetting inode" and "Metadata corruption detected" messages.
  13. Oh ok, the wording was different. I got this error popping up:

      Unraid Disk 1 error: 07-10-2022 02:59
      Alert [UNRAID] - Disk 1 in error state (disk dsbl)
      No device identification ()

      And this is what I'm getting now with xfs_repair in Maintenance mode:

      root@unRAID:~# xfs_repair -v /dev/md1
      Phase 1 - find and verify superblock...
      bad primary superblock - inconsistent filesystem geometry information !!!
      attempting to find secondary superblock...
      .found candidate secondary superblock...
      verified secondary superblock...
      writing modified primary superblock
              - block cache size set to 142328 entries
      Phase 2 - using internal log
              - zero log...
      zero_log: head block 181991 tail block 181938
      ERROR: The filesystem has valuable metadata changes in a log which needs to
      be replayed. Mount the filesystem to replay the log, and unmount it before
      re-running xfs_repair. If you are unable to mount the filesystem, then use
      the -L option to destroy the log and attempt a repair. Note that destroying
      the log may cause corruption -- please attempt a mount of the filesystem
      before doing this.
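
      For anyone reading along later, the sequence that error message describes is roughly: try to mount the filesystem so the log gets replayed, and only fall back to -L if the mount fails. A minimal sketch of the generic XFS steps, assuming /dev/md1 and a throwaway mount point (in Unraid, starting the array normally is the usual way to trigger the mount; these manual commands are just the plain-Linux equivalent):

      mkdir -p /mnt/xfs-test
      mount -t xfs /dev/md1 /mnt/xfs-test && umount /mnt/xfs-test   # a successful mount replays the log
      xfs_repair -n /dev/md1    # read-only check afterwards
      # Only if the mount fails: -L zeroes the log and can discard recent metadata
      # xfs_repair -L /dev/md1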
  14. Do I have to worry about "All existing data on this device will be OVERWRITTEN when array is Started" at the top right of the Parity drive?
  15. I guess with all the swapping of the Parity drives, I swapped back to the newer Parity drive to do the xfs_repair, and now none of the drives are showing as configured. Sorry for the questions, but I'd rather be safe than sorry: since I have the "Parity is already valid" option at the bottom, do I still have to do a New Config again? Also, I noticed at the top right of the Parity drive it says "All existing data on this device will be OVERWRITTEN when array is Started"; do I need to worry about that? Screenshot is below.
  16. I ordered a 5TB drive that will arrive in a couple of days so I can attempt to load my original 5TB Parity drive. But on the new 8TB Parity that didn't emulate the failed Disk 1, you mentioned running a filesystem check if it doesn't emulate. So if it's saying Disk 1 is unmountable, is there something I can try in the meantime, working off the 8TB Parity, to possibly make Disk 1 mountable? The other question I asked previously but didn't get an answer on: can a Parity drive be cloned to a different (larger) drive and used instead? And if a data drive is cloned, would the cloned drive still work, or would it throw a parity error? I didn't know if Parity is calculated only from the data, or if the drive hardware also factors into its calculations.
  17. Well, the old Parity and Disk 1 were both 5TB, so I don't really have many options on the size. What about some of the other things I mentioned, like cloning the Parity to a larger drive, or the parity swap, which, looking at the page on it (https://wiki.unraid.net/Manual/Storage_Management#Parity_Swap), seems to be somewhat like cloning the parity drive? If I can just get it to emulate, I was thinking about copying the data to an external drive instead of trying to rebuild it.
  18. Ugh, with the old Parity drive it's giving me a "Disk in parity is not biggest" message, I guess because my new replacement Disk 1 is 6TB, even though I wasn't using all of it, and I don't have any more 5TB drives that work. It also says "If you are adding a new disk or replacing a disabled disk, try Parity-Swap"; I'm not sure what parity swap is. I'm trying to do the Disk 1 assignment and then unassignment, and I have one unassigned external 4TB drive in the system, but it has data on it (it says "All existing data on this device will be OVERWRITTEN when array is Started" next to the Parity drive when I select this drive for Disk 1). Is there any way to use that (or another drive) as a placeholder? I wasn't sure if the same-size-or-larger restriction applies in that process. Or, since I have a new 8TB I was going to use as the future Parity drive, is there a way I can clone the 5TB Parity onto the new 8TB and use it instead, so I don't have a size issue (see the clone sketch below)? Or any other ideas? This is more frustrating than I thought it would be.
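
      On the cloning idea, the mechanical copy itself can be done with standard tools; whether Unraid will then accept the clone as valid parity on a larger disk is a separate question. A minimal sketch, assuming the old parity shows up as /dev/sdX and the new 8TB as /dev/sdY (placeholders; double-check them against serial numbers, because the copy destroys everything on the target):

      lsblk -o NAME,SIZE,MODEL,SERIAL       # identify source and target by serial number first
      dd if=/dev/sdX of=/dev/sdY bs=1M status=progress conv=noerror,sync   # raw sector-for-sector copy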
  19. I was using the new Parity, but it is the one from after I started having issues. I was originally upgrading to larger drives, so I did the Parity first, 5TB to 8TB, which was successful. Then I did Disk 4, and that drive failed during the rebuild process and isn't recognized in the BIOS (but I still have the original Disk 4, which I put back in and is fine). While that one was failed, and before I could do anything else about it, I accessed Disk 1 to copy a small file locally and the whole server locked up. Then Disk 1 wouldn't be recognized in the BIOS either. So I'm not sure if those events affected the new Parity drive, which is the same one I tried (the 8TB). I still have the original 5TB Parity untouched. And again, both these Parity drives are from within the same day, so it's not like the old one was sitting around for months.
  20. I upgraded to 6.11.0 and followed the steps above, but after I unassigned Disk 1 and restarted the array normally, below is what I got. I don't think this is right, so I just stopped the array, and I'm not sure what to do. When browsing, it appears Disk 1 is missing. I do have another Parity drive from before I tried upgrading drives and everything went screwy, if that helps at all. The current Parity is throwing a SMART error. I also attached new diagnostics if needed. unraid-diagnostics-20221005-1956.zip
  21. Update? To what? If you mean a software update, I had issues in the past attempting to upgrade, but I believe that was using the automatic method, so I will have to do the manual method. I just need to know which version is recommended for now, even if it's not the most current (I can do that later). Would it be okay to upgrade with the array not working?
  22. So I don't need to worry about the invalidslot command that was mentioned? What about the message on the New Config page in the screenshot above that says "Do Not Use this Utility thinking it will rebuild a failed drive"; is it safe to check the box and proceed? How does it know which disk to rebuild (Disk 1, according to unRAID)?
  23. That's all I had available; they were taken previously for my own quick reference, and I planned on deleting them afterwards. I had no idea at the time that I would have issues. Everything's shut off right now. If you want screenshots, let me know what you want them of and I can turn it on; I just want to keep that to a minimum, since I had 2 unexpected drive issues within a day's time and don't want another one. I'm ready to do something on it now, I just need to know how to proceed. I have the new drive installed to replace Disk 1 (note it is now 6TB, but the original was 5TB), and the screenshot shows all the correct drives (Disk 4 shows as new since the replacement failed and I had to revert to the original one). I also included a screenshot of the New Config page; I'm not sure if what it says is normal.
  24. I tried upgrading automatically from 6.1.9 twice before and had issues, so I had to revert (I believe the issue was that it wouldn't boot, but I never tried a manual upgrade), plus this server has been offline since November of last year. I was planning to try upgrading again once I got this straightened out; I just needed to figure out whether I need to go through an intermediate version before upgrading to the current one and, if so, which version would be safest. I'm not sure if that's something I can do with the array being down.

      I attached a couple of pictures I took before and after the issues; the first picture was taken before I started trying to upgrade the drives in November, when everything was fine. Disk 1 is the only one that completely failed and is not recognized at all even though it is in place. I still have the original 5TB Parity and the original 4TB Disk 4, which I put back in since the drive I upgraded it to also failed. The newer 8TB Parity seemed to upgrade okay, so I put that in as you suggested.

      The drive that will be used for Disk 1 has arrived, so I am ready to start on this. I also bought another 8TB that I will use as the new Parity if I am able to rebuild onto the new Disk 1, so I still have my 2 original Parity drives as backups if I need to start over. So do I need to proceed differently from the original instructions? I'm not sure what invalidslot is, as I have never had to do anything on the command line in unRAID before other than a filesystem check; it's always just run with minimal issues.