miggity

Members
  • Posts: 11
  • Joined
  • Last visited

  1. Yes, it was still set to 2 slots, but I changed it back to 1 slot during that quick process. Either way, it is working, so thanks. I'm gonna try to rename these now and nothing else. Hopefully it still works like it should.
  2. That seems to have fixed it; the NVMe is mounted, and Dockers and everything show up. Do you still want the diagnostics posted? Thanks for all the help. Any idea what happened, so I don't do it again by mistake in the future?
  3. Yes it was. Output below (see also the xfs_repair sketch after this list):

     Phase 1 - find and verify superblock...
             - block cache size set to 755024 entries
     Phase 2 - using internal log
             - zero log...
     zero_log: head block 367882 tail block 367882
             - scan filesystem freespace and inode maps...
             - found root inode chunk
     Phase 3 - for each AG...
             - scan and clear agi unlinked lists...
             - process known inodes and perform inode discovery...
             - agno = 0
             - agno = 1
             - agno = 2
             - agno = 3
             - process newly discovered inodes...
     Phase 4 - check for duplicate blocks...
             - setting up duplicate extent list...
             - check for inodes claiming duplicate blocks...
             - agno = 0
             - agno = 1
             - agno = 2
             - agno = 3
     Phase 5 - rebuild AG headers and trees...
             - agno = 0
             - agno = 1
             - agno = 2
             - agno = 3
             - reset superblock...
     Phase 6 - check inode connectivity...
             - resetting contents of realtime bitmap and summary inodes
             - traversing filesystem ...
             - agno = 0
             - agno = 1
             - agno = 2
             - agno = 3
             - traversal finished ...
             - moving disconnected inodes to lost+found ...
     Phase 7 - verify and correct link counts...

             XFS_REPAIR Summary    Tue Mar 26 14:52:46 2024

     Phase          Start           End             Duration
     Phase 1:       03/26 14:52:45  03/26 14:52:45
     Phase 2:       03/26 14:52:45  03/26 14:52:45
     Phase 3:       03/26 14:52:45  03/26 14:52:46  1 second
     Phase 4:       03/26 14:52:46  03/26 14:52:46
     Phase 5:       03/26 14:52:46  03/26 14:52:46
     Phase 6:       03/26 14:52:46  03/26 14:52:46
     Phase 7:       03/26 14:52:46  03/26 14:52:46

     Total run time: 1 second
     done
  4. /dev/sda1: LABEL_FATBOOT="UNRAID" LABEL="UNRAID" UUID="7013-DA05" BLOCK_SIZE="512" TYPE="vfat"
     /dev/loop1: TYPE="squashfs"
     /dev/nvme0n1p1: UUID="71d7119e-49bf-45a6-b170-d4d72162860c" BLOCK_SIZE="512" TYPE="xfs"
     /dev/sdd1: UUID="9312d6d1-8645-4fbe-934b-b32720144681" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="e851406a-cbd6-47c0-92a9-a5a3abf4a819"
     /dev/md2p1: UUID="9312d6d1-8645-4fbe-934b-b32720144681" BLOCK_SIZE="512" TYPE="xfs"
     /dev/sdb1: PARTUUID="85897617-0c65-4ac8-9127-e801a434c7c2"
     /dev/md1p1: UUID="cb1e31c1-9a7f-4e6d-91fe-62caeb5e4563" BLOCK_SIZE="512" TYPE="xfs"
     /dev/loop0: TYPE="squashfs"
     /dev/sdc1: UUID="cb1e31c1-9a7f-4e6d-91fe-62caeb5e4563" BLOCK_SIZE="512" TYPE="xfs" PARTUUID="3b44a867-05dc-450e-a026-5e3021db7281"

     Output from blkid above (see the blkid sketch after this list).
  5. Yes, originally I added the HDD to the existing "cache" pool with the NVMe drive, but I don't remember if I started the array or not before I moved the HDD into its own pool. I did all of this in a few minutes during lunch, so I was moving quick and clearly not paying full attention, which I've learned always gets me in trouble with Unraid, yet here I am again...
  6. Diagnostics attached. I have restarted the array 2 other times since renaming, but besides changing "systempool" back to "cache" I haven't done anything. Also, I added the HDD to the "cache" pool originally, but I don't believe I ever started the array before I moved it to its own pool, "cachepool". Not sure this matters, but I figured I'd mention it. Please let me know if there's more needed, thanks again. illmatic-diagnostics-20240326-1325.zip
  7. In short, I had a single-drive NVMe pool ("cache") that has appdata, system & downloads (currently empty) on it. I added a HDD to a new pool ("cachepool") that I wanted to use for downloads. I decided, stupidly and in haste, to also rename the original "cache" pool to "systempool". Upon reboot, both drives are unmountable. I know the new "cachepool" just needs to be formatted. That's easy. But I would like to get the original "cache" back up and running, rather than rebuilding all my dockers. I tried changing the name back to "cache" but it is still unmountable. Is there any way to fix this, or do I need to start rebuilding my dockers all over again? Thanks for any help!
  8. Thanks, rebuilding now. Guess this is more motivation for me to do a proper cheap server build instead of using almost 10-year-old hardware with nice HDDs...
  9. Thanks. Looking back at the specs for this ancient mobo, it has 2x Marvell SATA 6Gb/s ports and 6x Intel SATA 3Gb/s ports. Two of my HDDs are plugged into the Marvell ports. I'm assuming it may work for a while if I rebuild the array, but is likely to crash again unless I switch them all to non-Marvell ports? Can I simply swap the 2 cables that are plugged into the Marvell ports into the unused non-Marvell ports and rebuild the array, or will switching SATA ports screw things up?
  10. Disk 1 is the disk that failed, obviously. I checked the connections but it's still down; however, it spit out the DISK1 SMART report, so that's attached (see the smartctl sketch after this list). Should I go through the process of rebuilding the array, or wait? Thanks. illmatic-diagnostics-20180625-1926.zip
  11. So I am somewhat new to Unraid, but so far I love it. I repurposed an old desktop just under a year ago and use it for media (mostly Plex streaming of my movie collection) and backup of old stuff, not a continuous backup. I don't turn it on all that often since I don't need to, so it sits idle for days or weeks at a time. Watched a movie with the GF last night and forgot to turn it off after; woke up this AM to a faulty drive. No idea what caused this; can anyone lend some insight? I've had to rebuild 2 or 3 times before, because I was stupid and shut it down while transferring and lost power (and my old UPS was not enough to handle shutting it down properly), but this one I have no idea about. Thanks! illmatic-diagnostics-20180625-0653.zip
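
A minimal sketch of the xfs_repair run behind the output in post 3, assuming the unmountable cache device is /dev/nvme0n1p1 (taken from the blkid output in post 4) and that the array is stopped or in maintenance mode so the filesystem is not mounted; this is a sketch, not a verbatim record of what was run in the thread.

    # Dry run first: -n reports problems without writing to the device.
    xfs_repair -n /dev/nvme0n1p1

    # If the dry run looks sane, run the actual repair with verbose output.
    xfs_repair -v /dev/nvme0n1p1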
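
A similarly hedged sketch of how the blkid listing in post 4 can be reproduced for just the two pool devices; the device names are assumptions pulled from that output, so verify them against the pool assignments in the Unraid GUI before trusting the mapping.

    # Query filesystem type and UUID for individual devices
    # (names assumed from the output in post 4).
    blkid /dev/nvme0n1p1    # original "cache" NVMe, shows TYPE="xfs"
    blkid /dev/sdb1         # new "cachepool" HDD, no filesystem yet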
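
And a hedged sketch for the SMART report mentioned in post 10, using a placeholder device name /dev/sdX for the failed Disk 1; the thread itself relied on the attached diagnostics ZIP rather than these exact commands.

    # Full SMART attributes and health status for the suspect disk.
    smartctl -a /dev/sdX

    # Optionally run a short self-test and read its result afterwards.
    smartctl -t short /dev/sdX
    smartctl -l selftest /dev/sdX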