Clay Smith


  1. The parity sync finished this morning with 0 errors, no more SMART errors appeared on disk 4 during the process, and I can browse and write to the shares from Windows. I enabled the Docker service again and my containers seem to be up and running now as well. Thank you so much for helping me get this back up and running so quickly.
  2. Sorry, I meant that as a separate thought. Once all is said and done (assuming the parity sync goes smoothly), I would like to replace the drive. It's an older WD Green drive that was originally meant to be temporary while I waited for a different drive to be RMA'd after failing a pre-clear. I'll just worry about getting through this first and not get ahead of myself. From the state we're in now, should I just power off the machine and leave it until I can check the cables tonight? I don't think I quite follow this, sorry. What file are you referring to?
  3. I am able to mount the disk using UD and can browse the files from within the Unraid webUI. What's the best plan for moving on from here? You mentioned doing a new config and re-syncing parity. Should I wait to do that until after I've checked the cables? As for replacing the drive, should I let parity re-sync, then swap the drive and let it rebuild the new drive from parity?
  4. Should I cancel the current xfs_repair operation to do this, or wait until it completes? If starting the array in normal mode gives me access to the (emulated?) disk, would it be worth trying to copy any of my recent writes to another drive before attempting to mount with UD?
  5. I've started it and it's doing the same thing so far:

     Phase 1 - find and verify superblock...
     couldn't verify primary superblock - not enough secondary superblocks with matching geometry !!!
     attempting to find secondary superblock...

     followed by it generating a bunch of periods. While it was started in normal mode I was able to browse the shares in Windows and noticed that the appdata backups that weren't showing before were now visible, and a video file that IIRC was located on disk 4 was also visible.
  6. Here they are. Disk 4 also now reports 'Unmountable: No file system'. yuki-diagnostics-20191210-1530.zip
  7. My system has a weird quirk where it won't boot unless it's hooked up to a monitor, and I'm not home to plug one in right now. When I start the array, should I start it normally or in maintenance mode? Is there anything I can do in the meantime before I reboot tonight, or should I just hold off and report back?
  8. It just completed from when I ran it last night with -L:

     Phase 1 - find and verify superblock...
     couldn't verify primary superblock - not enough secondary superblocks with matching geometry !!!
     attempting to find secondary superblock...
     ...............(5.7 million dots)...........
     Sorry, could not find valid secondary superblock
     Exiting now.

     That's all it gives. I'd assume this means it's not fixed. I'm not really sure where to go from here. Based on the syslog line from my last post, did it run properly? Or do I need to use the terminal to run a better command?
  9. I used the webGUI, so I didn't type in a complete command. I clicked 'Disk 4' on the main page to get to the disk settings, then clicked the 'Check' button under the 'Check Filesystem Status' section. In the options box I left just the -n that was there by default. At the time the syslog was full, so I can't grab what it said then, but I have since truncated the syslog. When I ran it with -L after I got home last night, this was the line in the syslog:

     Dec 9 20:28:12 Yuki ool www[13176]: /usr/local/emhttp/plugins/dynamix/scripts/xfs_check 'start' '/dev/md4' 'WDC_WD60EZRX-00MVLB1_WD-WXL1H642CJCJ' '-L'
  10. The wiki page said to run it with -n first as a test, but I suppose if I already know there is a problem then there's no reason to test for problems. I'm out currently, but I suppose when I get back to the server I should cancel it and start over with -L, yes?
  11. Did I misunderstand the page? Should I have run it with both -n and -L? Would it be right to cancel it? On the settings page for the disk it still says it's running.
  12. I followed the directions in the link for running the test through the GUI and left the default option of just -n.
  13. I followed the instructions in your link and put the array in maintenance mode to run the test, but it's just been sitting on the same step for a few hours:

     Phase 1 - find and verify superblock...
     couldn't verify primary superblock - not enough secondary superblocks with matching geometry !!!
     attempting to find secondary superblock...

     This is followed by a line with 3,708,871 periods.
  14. It is still connected. On the 'Main' page it has an X next to it and says 'Device is disabled, contents emulated'. Is it safe to spin it up and check SMART while in this state? Under the 'Writes' column it claims there have been 18,446,744,073,709,529,088 writes on that drive.
  15. Does this not apply to me? Disk 4 is the drive that failed. It also reads further down that for XFS I will have to start the array in maintenance mode. I have been hesitant to shut down or stop the array, since disk 5's files were not appearing when browsing shares and I didn't know if something had happened to it as well. Unraid hasn't told me that anything is wrong with it, but I wasn't sure, with the missing files and all. I don't mean to doubt you; I'll run the test if you say that's what's best, I just want to make sure I lose as little data as possible. On a side note, my VMs are located on an SSD mounted by Unassigned Devices and are still working currently. Tailing my syslog shows 5 more shares on the repeating error list, and my syslog is 91% full according to the dashboard.
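A note on the "emulated" disk that comes up in several posts above: with single parity, Unraid can reconstruct a missing disk's contents on the fly by XORing the parity disk with the surviving data disks. The following toy Python sketch (entirely hypothetical data, not Unraid code) illustrates the principle:

```python
from functools import reduce

def xor_blocks(*blocks):
    """Byte-wise XOR of equal-length blocks."""
    return bytes(reduce(lambda a, b: a ^ b, vals) for vals in zip(*blocks))

# Three toy "data disks" and the parity computed over them.
disk1 = bytes([0x10, 0x20, 0x30, 0x40])
disk2 = bytes([0x01, 0x02, 0x03, 0x04])
disk3 = bytes([0xAA, 0xBB, 0xCC, 0xDD])
parity = xor_blocks(disk1, disk2, disk3)

# If disk3 fails, its contents can be "emulated" by XORing parity
# with the remaining data disks.
emulated_disk3 = xor_blocks(parity, disk1, disk2)
assert emulated_disk3 == disk3
```

This is also why writes to an emulated disk still "work": they update parity, so a subsequent rebuild onto a replacement drive will include them.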
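The long runs of dots in the xfs_repair output quoted above come from the phase where it probes the device looking for a secondary superblock copy. XFS superblocks begin with the magic bytes "XFSB"; here is a heavily simplified Python sketch of that kind of scan over a toy in-memory "device" (the real tool is far smarter about which offsets it probes, so treat this only as an illustration of the idea):

```python
XFS_MAGIC = b"XFSB"  # real XFS superblock magic number
SECTOR = 512

def find_superblocks(device: bytes):
    """Return sector-aligned offsets whose leading bytes match the XFS magic."""
    return [off for off in range(0, len(device), SECTOR)
            if device[off:off + len(XFS_MAGIC)] == XFS_MAGIC]

# Toy "device": mostly zeros, with superblock copies at two offsets.
device = bytearray(SECTOR * 8)
device[0:4] = XFS_MAGIC                         # primary superblock at offset 0
device[SECTOR * 5:SECTOR * 5 + 4] = XFS_MAGIC   # a secondary copy

print(find_superblocks(bytes(device)))  # -> [0, 2560]
```

When no offset yields a superblock with matching geometry, xfs_repair gives up with the "could not find valid secondary superblock" message seen in post 8.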
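About the 18,446,744,073,709,529,088 writes reported in post 14: that figure sits just below 2^64, which is what an unsigned 64-bit counter displays after wrapping below zero, so it is almost certainly a stats glitch on the disabled disk rather than real write activity. A quick check:

```python
reported = 18_446_744_073_709_529_088

# Distance from 2**64: a small value here suggests the counter
# wrapped around, i.e. it effectively holds -22528.
print(2**64 - reported)  # -> 22528
```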