Groo

Members
  • Posts: 12
  • Joined
  • Last visited

  1. Haven't rebooted yet, just in case there was information that would be lost. Hoping that a reboot will solve everything. Haven't seen a similar error before, and I've been running Unraid on this hardware for a few years.
  2. Haven't rebooted yet. This can't be good. Any help would be appreciated. homer-diagnostics-20231210-2047.zip
  3. I'm getting very high CPU usage and IO wait when I try to download to my cache drive. System load skyrockets and download speed drops from 100mb/s to 5-10k/s. I have my dockers and download folder on a BTRFS pool of 2 cache drives. I originally thought it was the Sonarr process of moving the files to the array that was killing things, but this is happening well before the download completes. Prior to this, I have had no issues downloading at full speed.
     load shows:
       15:14:41 up 16 days,  1:20,  1 user,  load average: 9.05, 6.95, 3.65
       USER     TTY      FROM             LOGIN@   IDLE   JCPU   PCPU  WHAT
       root     pts/0    10.0.1.154       14:57    1.00s  0.01s  0.00s w
     I've seen load hit 25+ over the past few days when I download larger files.
     iotop is showing me:
       Total DISK READ :   0.00 B/s | Total DISK WRITE :  94.77 K/s
       Actual DISK READ:   0.00 B/s | Actual DISK WRITE:  54.18 M/s
         TID  PRIO  USER     DISK READ   DISK WRITE   SWAPIN      IO>    COMMAND
       17449 be/4  root       0.00 B/s    0.00 B/s   0.00 %  99.99 %  shfs /mnt/user -disks 1023 -o noatime,allow_other -o remember=330
       15101 be/4  root       0.00 B/s    0.00 B/s   0.00 %  86.57 %  [kworker/u32:10+btrfs-worker]
       19429 be/4  root       0.00 B/s   14.58 K/s   0.00 %   6.41 %  shfs /mnt/user -disks 1023 -o noatime,allow_other -o remember=330
       26760 be/4  root       0.00 B/s   72.90 K/s   0.00 %   0.00 %  [kworker/u32:5-bond0]
       18835 be/4  root       0.00 B/s    7.29 K/s   0.00 %   0.00 %  shfs /mnt/user -disks 1023 -o noatime,allow_other -o remember=330
     Was going to try removing a cache drive and switching to a single SSD as a next step? (A quick pool health check is sketched after this post list.) homer-diagnostics-20220109-1513.zip
  4. OK, stopped the array and restarted in Maintenance mode. Ran a (successful) repair on the unmountable disk (had to use -L; the sequence is sketched after this post list) and started the array. It appears to be happy. Thanks for your help! homer-diagnostics-20211214-1037.zip
  5. # smartctl -H /dev/sdm -d sat
     smartctl 7.1 2019-12-30 r5022 [x86_64-linux-5.10.28-Unraid] (local build)
     Copyright (C) 2002-19, Bruce Allen, Christian Franke, www.smartmontools.org
     === START OF READ SMART DATA SECTION ===
     SMART overall-health self-assessment test result: PASSED
     Looks promising, but I already have a replacement, so I'd rather not trust this disk. How do I identify which md device refers to my unmountable disk? (A lookup sketch follows this post list.)
  6. Is it possible that after it rebuilt the parity on the new drive, the file system just needs to be repaired and it will be good? i.e. the array is healthy, but the filesystem on that particular disk isn't, and if I run xfs_repair on the unmountable disk, it should recover?
  7. The disk in slot 1 is the new drive that appears to have been rebuilt, but shows up as unmountable. The old drive is mounted on /mnt/temp after I ran an xfs_repair -L on it (a quick spot-check of that drive is sketched after this post list):
     # xfs_repair -L /dev/sdm1
     Phase 1 - find and verify superblock...
     Phase 2 - using internal log
             - zero log...
     ALERT: The filesystem has valuable metadata changes in a log which is being destroyed because the -L option was used.
             - scan filesystem freespace and inode maps...
     sb_fdblocks 500481120, counted 500481113
             - found root inode chunk
     Phase 3 - for each AG...
             - scan and clear agi unlinked lists...
             - process known inodes and perform inode discovery...
             - agno = 0
             - agno = 1
             - agno = 2
     data fork in ino 4298782893 claims free block 805568742
             - agno = 3
             - process newly discovered inodes...
     Phase 4 - check for duplicate blocks...
             - setting up duplicate extent list...
             - check for inodes claiming duplicate blocks...
             - agno = 0
             - agno = 1
             - agno = 2
             - agno = 3
     Phase 5 - rebuild AG headers and trees...
             - reset superblock...
     Phase 6 - check inode connectivity...
             - resetting contents of realtime bitmap and summary inodes
             - traversing filesystem ...
             - traversal finished ...
             - moving disconnected inodes to lost+found ...
     Phase 7 - verify and correct link counts...
     Maximum metadata LSN (1:3286931) is ahead of log (1:2).
     Format log to cycle 4.
     done
  8. Nothing was formatted. In the slot where the missing disk was (slot 1), it now shows as "Unmountable: not mounted"
  9. homer-diagnostics-20211214-0923.zip
  10. I don't have earlier diagnostics or syslogs. The drive that's failing is one of my oldest drives and started showing DMA errors prior to no longer showing up in the array. I'm sure I can get it active long enough to extract any data on it. I was able to mount the disk and fix the filesystem, but I'm still at a loss about the array itself. Is it healthy? What happened to the data on the missing disk? Should I just copy the data from the failed disk back to the array (a copy sketch follows this post list), format the new drive, and all is good?
  11. One of my drives failed in the array (it showed up as unavailable). I followed these steps to replace the disk:
      • stopped the array
      • shut down
      • replaced the faulty drive with a new one
      • booted, went into the array devices, and assigned the new drive in place of the missing one
      The system rebuilt overnight. Now it's showing that my new drive is unmountable. It also appears that the array is healthy? Not sure what's going on here? homer-diagnostics-20211214-0854.zip
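
Regarding the cache IO wait in post 3: before dropping to a single SSD, it may be worth ruling out a problem with the BTRFS pool itself. A minimal check, assuming the pool is mounted at /mnt/cache (adjust the path if yours differs):

    # Per-device error counters; non-zero read/write/corruption counts point at a failing cache device
    btrfs device stats /mnt/cache

    # Allocation picture; a nearly full pool can stall writes and spike IO wait
    btrfs filesystem usage /mnt/cache

    # Foreground scrub; reports checksum errors when it completes
    btrfs scrub start -B /mnt/cache

High IO% on shfs alongside btrfs-worker threads, as in the iotop output above, can be consistent with the pool struggling to commit writes, so these checks are cheap to run before reshaping the cache.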
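
The repair described in post 4 was run from Maintenance mode. A sketch of that sequence, assuming the unmountable disk is disk1 (substitute your slot number; newer Unraid releases expose the device as /dev/md1p1 rather than /dev/md1):

    # With the array started in Maintenance mode, repair through the md device so parity stays in sync
    xfs_repair -n /dev/md1   # dry run: report problems without changing anything
    xfs_repair /dev/md1      # actual repair
    xfs_repair -L /dev/md1   # only if xfs_repair refuses to run because of a dirty log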
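
On the question in post 5 about which md device is the unmountable disk: Unraid numbers them by slot, so disk1 is /dev/md1, disk2 is /dev/md2, and so on (with a p1 suffix on newer releases). A quick way to confirm which physical drive backs each slot, assuming the mdcmd status output format hasn't changed:

    # List the md devices the array exposes
    ls -l /dev/md*

    # Show the backing drive and state for each slot, e.g. rdevName.1=sdm
    mdcmd status | grep -E 'rdevName|rdevStatus'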
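
Since the old drive in post 7 is already mounted at /mnt/temp after its repair, a quick spot-check of what survived (lost+found is where xfs_repair puts anything it had to disconnect):

    df -h /mnt/temp                        # confirm the filesystem mounted and the size looks right
    ls /mnt/temp                           # top-level contents
    ls /mnt/temp/lost+found 2>/dev/null    # orphaned files recovered by xfs_repair, if any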
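
For the copy-back question in post 10, a hedged rsync sketch; /mnt/user/restored/ is a hypothetical destination, so point it at whichever share or disk the data originally lived on:

    # Preview what would be copied from the old drive back onto the array
    rsync -avh --dry-run /mnt/temp/ /mnt/user/restored/

    # Repeat without --dry-run to actually copy, then verify before formatting anything
    rsync -avh /mnt/temp/ /mnt/user/restored/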