stevencolvin

Members
  • Content Count: 7
  • Joined
  • Last visited
  • Community Reputation: 0 Neutral

About stevencolvin
  • Rank: Newbie
  1. Ah, I see, so the parity has been updated to show that there is no data anymore, and a data rebuild will do nothing. Good thing I have a backup, then. Thank you very much for your help. I guess this issue is solved; you live and you learn.
  2. Yes, I did try that before the data rebuild. I figured a data rebuild after the format would've restored the data, but I kept getting the unmountable error. Are all of the files unrecoverable, or can parity still do a data rebuild to get the files back?
  3. Here you go. Thank you both for your help; I really appreciate it. unraid-diagnostics-20200326-1128.zip
  4. Okay, upgrading to 6.8.3 fixed the drive issue, but it seems all of my files are gone. Is there anything I can do to try to recover them, or am I stuck reloading everything from my backup? Should I run a parity check to see if that brings them back?
  5. What is the best way to go about doing an upgrade while keeping my configuration files and everything intact? Or would it be best to do a clean install?
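     One way this is commonly handled (a sketch only, and the paths are assumptions, not anything from this thread): Unraid keeps its settings on the USB flash drive under /boot/config, including array assignments, share settings, docker templates and the license key, so copying that folder somewhere safe before an in-place upgrade preserves the configuration either way. The destination share below is just an example.

         # Sketch only: back up the flash config before upgrading.
         # Assumes the flash drive is mounted at /boot (the Unraid default) and
         # that /mnt/user/backups is an existing, writable share on this server.
         DEST=/mnt/user/backups/flash-backup-$(date +%Y%m%d)
         mkdir -p "$DEST"
         cp -a /boot/config "$DEST"/
         ls "$DEST"/config        # quick sanity check that the copy landed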
  6. Phase 1 - find and verify superblock...
             - block cache size set to 750120 entries
     Phase 2 - using internal log
             - zero log...
     zero_log: head block 2 tail block 2
             - scan filesystem freespace and inode maps...
             - found root inode chunk
     Phase 3 - for each AG...
             - scan and clear agi unlinked lists...
             - process known inodes and perform inode discovery...
             - agno = 0
             - agno = 1
             - agno = 2
             - agno = 3
             - process newly discovered inodes...
     Phase 4 - check for duplicate blocks...
             - setting up duplicate extent list...
             - check for inodes claiming duplicate blocks...
             - agno = 0
             - agno = 1
             - agno = 2
             - agno = 3
     Phase 5 - rebuild AG headers and trees...
             - agno = 0
             - agno = 1
             - agno = 2
             - agno = 3
             - reset superblock...
     Phase 6 - check inode connectivity...
             - resetting contents of realtime bitmap and summary inodes
             - traversing filesystem ...
             - agno = 0
             - agno = 1
             - agno = 2
             - agno = 3
             - traversal finished ...
             - moving disconnected inodes to lost+found ...
     Phase 7 - verify and correct link counts...

     XFS_REPAIR Summary    Thu Mar 26 09:40:39 2020

     Phase      Start           End             Duration
     Phase 1:   03/26 09:40:31  03/26 09:40:32  1 second
     Phase 2:   03/26 09:40:32  03/26 09:40:39  7 seconds
     Phase 3:   03/26 09:40:39  03/26 09:40:39
     Phase 4:   03/26 09:40:39  03/26 09:40:39
     Phase 5:   03/26 09:40:39  03/26 09:40:39
     Phase 6:   03/26 09:40:39  03/26 09:40:39
     Phase 7:   03/26 09:40:39  03/26 09:40:39

     Total run time: 8 seconds
     done

     Here are the results from running the command "xfs_repair -vL /dev/md1". I wasn't sure of the best way to capture the results, so I just copied and pasted. These are the results it gives me every time; it doesn't seem to be doing anything to the disk. I started the array and downloaded another diagnostics file; is there anything else I need to attach? unraid-diagnostics-20200326-0953.zip
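     For capturing output like the above without copy/paste, a rough approach (assuming the same maintenance-mode setup, with disk1 showing up as /dev/md1 as in the post; the output file names are just examples) is to pipe the command through tee so the results also land in a file that can be attached directly:

         # Sketch: save xfs_repair output to a file on the flash drive.
         # -n does a read-only check first; -vL is the destructive run that
         # zeroes the log, as used in the post above.
         xfs_repair -n  /dev/md1 2>&1 | tee /boot/xfs_repair_check.txt
         xfs_repair -vL /dev/md1 2>&1 | tee /boot/xfs_repair_run.txt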
  7. So randomly I went to start up a VM from my Unraid server and none of my Dockers or VMs were available. I went to the Main page and it said Drive 1 was unmountable because there is no file system, even though it says the filesystem is xfs.

     I've been running this server for about 4 months with one 4TB HDD parity drive and one 2TB HDD data drive in the array with no problems whatsoever. So there are only 2 disks total in the array, plus 2 cache drives, which are SSDs.

     I tried xfs_repair from maintenance mode with no luck. I even bought a new drive, replaced the drive that said unmountable, and ran a data rebuild in maintenance mode from the parity, with no luck; it still says "Unmountable: No filesystem". It's like it just writes the same corrupt data to the drive or something.

     I have all of my data backed up, so it's not a huge deal to start over; I would just like to fix this instead of wiping everything and starting over from scratch. I spent a lot of time setting everything up on this server, not including my VMs and such, so I really don't want to start over with it all. I bought this OS in hopes that something like this wouldn't happen, or that it would be easy to rebuild from parity if it did, but I haven't had any luck searching the forums for a solution to this problem.

     I'm not sure what I need to attach for help with this problem, but here is my diagnostics file if that helps anyone at all. I'm not very experienced with this Unraid stuff, although I do have quite a bit of experience with Linux; Unraid is just very new to me.

     Thanks for the help,
     Steven

     unraid-diagnostics-20200326-0137.zip
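     As a rough sketch of what a first, non-destructive look at an "Unmountable: No filesystem" disk can involve (the device names are assumptions: with the array started in maintenance mode, disk1 is normally exposed as the parity-protected device /dev/md1; confirm on the Main page before running anything):

         # Read-only checks: none of these commands change the disk.
         blkid /dev/md1            # is an xfs signature still present on disk1?
         file -s /dev/md1          # second opinion on what the device contains
         xfs_repair -n /dev/md1    # dry-run repair: reports damage, fixes nothing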