randomusername

Members
  • Content Count: 35
  • Joined
  • Last visited

Community Reputation

3 Neutral

About randomusername

  • Rank: Newbie

  1. Bump - today's parity check showed 714 errors. Diagnostics attached once again; if anyone has any ideas, I'd be very grateful. Also, this morning I noticed SABnzbd isn't allowing files to be added, and transferring files to the Unraid machine over the network also fails, saying some data can't be read or written. xeus-diagnostics-20210701-1812.zip
  2. Well, I had a month and a half of peace, but after this month's parity check I got a message that disk8 had errors (no parity errors, though). Then this morning I woke up to two emails. I immediately took diagnostics; hopefully I'm learning from my past mistakes here. I'm so desperate to get this sorted. xeus-diagnostics-20210612-0644.zip
  3. Correct. None, but the array has been in maintenance mode for the last couple of days, as I've been nervous about messing anything else up before seeking advice. Will do; no doubt it won't take long for the same thing to happen again. Thanks for your advice.
  4. Extended SMART test complete. I'm not sure how to read it, but it says "SMART overall health passed", so I don't know what to think (see the smartctl sketch after this list). ST8000DM004-2CX188_WCT0JB5R-20210422-1413.txt
  5. Running the extended SMART test now. I noticed last night that some recent TV episodes on Plex had disappeared and SABnzbd was having write issues, so I rebooted the server (perhaps naive, but that fix has been suggested to me for desktops before). On reboot the log was full, so I got the diagnostics and then did the xfs_repair. I haven't checked the lost+found share yet; would that give any hint as to the cause of the problem? From itimpi's previous comment I thought it just contained my files that I would need to go through and reorganise. The 200GB docker.img is because a
  6. Ahh, exactly the same thing has happened again: disk8's xfs_repair shows many problems. Considering I've changed the cables this week, does this indicate a problem with the HDD itself, or am I doing something seriously wrong here? After getting the attached diagnostics I stopped the array, started it in maintenance mode, checked with xfs_repair -nv, then ran it with -v (see the xfs_repair sketch after this list). I checked all the disks because, now that I have a hammer, everything is a nail, but only disk8 showed errors. xeus-diagnostics-20210420-0027.zip
  7. Okay, I'll change that in the future; thanks very much for your help.
  8. Parity check just completed with no errors, so at least for the time being it looks like everything's sorted. Thank you all for helping me with this. Is a non-correcting check the standard thing for a monthly parity check? I read somewhere to just always keep it correcting, so that's what I've been doing.
  9. Between the two checks I ran "xfs_repair -v /dev/sdl", which I now know to be incorrect. After the second check I ran the correct xfs_repair command through the GUI. I am now running a further parity check, following itimpi's previous post. Edit: the xfs_repair was on disk8, not on parity1; sorry if that was not clear.
  10. Parity check shows 384 errors; diagnostics attached. Is there anything to suggest what the cause might be? Thanks for the lost+found tip; I'll make sure to use the correct procedure for xfs_repair in the future. xeus-diagnostics-20210417-1657.zip
  11. Okay, I can view files on the disk through the GUI, and in Krusader the /user directory now shows the folders that used to be there, plus a new "lost+found" folder that I assume I need to go through, placing its contents back in their correct folders (see the lost+found sketch after this list). Would the correct thing now be a parity check? And, assuming no parity errors, consider this solved?
  12. Running with -v gives the following output:
      Phase 1 - find and verify superblock...
        - block cache size set to 1473264 entries
      Phase 2 - using internal log
        - zero log...
      zero_log: head block 182074 tail block 182074
        - scan filesystem freespace and inode maps...
        - found root inode chunk
      Phase 3 - for each AG...
        - scan and clear agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - a
  13. Well, now I feel foolish. The check completed in the GUI using parameters -nv gave the following result:
      Phase 1 - find and verify superblock...
        - block cache size set to 1473264 entries
      Phase 2 - using internal log
        - zero log...
      zero_log: head block 182074 tail block 182074
        - scan filesystem freespace and inode maps...
        - found root inode chunk
      Phase 3 - for each AG...
        - scan (but don't clear) agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
  14. I set it to run and walked away from the laptop, as I assumed it would take a while (like a parity check), though now that you mention it, I believe the first message on the screen was something about not being able to find a superblock.
  15. Does this mean I should run it again using "xfs_repair -v /dev/sdl1"? Or not bother, since it appears to have worked? When I ran the first xfs_repair, my laptop went to sleep and on waking the terminal window did not refresh (see the screen sketch after this list). Since I didn't know how long the xfs_repair would take, I left the server for a few hours before rebooting it, checking the errors with xfs_repair -nv and then starting the parity check. So while I say it seems to have worked, this is based on the next xfs_repair -nv and not on any message of success in the terminal after running "xfs_repair -v /dev/sd
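
The smartctl sketch referenced in item 4: a minimal way to pull the full report and the self-test log from the command line. /dev/sdX is a placeholder, as the actual device letter isn't given in the post.

      smartctl -a /dev/sdX            # full SMART report, including the overall-health self-assessment
      smartctl -l selftest /dev/sdX   # self-test log, where the completed extended test should appear

A "PASSED" overall assessment only means no attribute has crossed its failure threshold; the raw values for reallocated and pending sectors in the full report are still worth reading.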
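
The xfs_repair sketch referenced in items 6 and 9: an outline of the maintenance-mode repair described there, assuming disk8 maps to /dev/md8 (the usual Unraid mapping; the exact device name may differ). Going through the md device keeps parity updated, whereas repairing the raw /dev/sdl device writes to the disk behind parity's back, which is the usual explanation for parity errors after such a repair.

      # Stop the array, then start it in maintenance mode first
      xfs_repair -nv /dev/md8    # -n: check only, report problems without changing anything
      xfs_repair -v /dev/md8     # the actual repair, run only after reviewing the -nv output

Running the same check from the drive's page in the webGUI (with the array in maintenance mode) is equivalent and avoids typing the device name by hand.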
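
The lost+found sketch referenced in item 11: files recovered by xfs_repair are named after their inode numbers, so identifying them before moving them back takes a little detective work. The /mnt/disk8/lost+found path is an assumption based on disk8 being the repaired disk.

      ls /mnt/disk8/lost+found | head                      # entries are named by inode number
      find /mnt/disk8/lost+found -type f -exec file {} +   # guess file types to help re-sort them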
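
The screen sketch referenced in item 15: one way to keep a long repair attached to the server instead of the laptop's SSH session, so a sleeping laptop can't orphan the output. It assumes screen (or an equivalent such as tmux) is available on the server; the session name is arbitrary.

      screen -S repair           # start a named session on the server
      xfs_repair -v /dev/md8     # run the long job inside it (same device assumption as above)
      # detach with Ctrl-a d; the job keeps running on the server
      screen -r repair           # reattach later to read the output and any prompts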