ManBullMoose

Members · 14 posts
Community Answers

  1. Aaaah, thank you. And thanks to JorgeB for making that thread.
  2. Are there controllers without port multipliers out there, or do they all have them? Is there a particular brand or model you would recommend? I only have six SATA ports to work with on this motherboard at the moment.
  3. Okay, interestingly enough, I opened the case with the rocker switch off and kept getting shocked by the casing of the PSU. The SATA controller is in the slot directly above the PSU. I removed it, and unplugged things I no longer needed to free up SATA ports on the board, and all of my shares popped back up; it seems like the data is there, and it reverted to no errors. Seems like I have a bad ground somewhere that may have caused this; whether it's the UPS or the wall, I'm not sure. I do remember updating every socket in this room a few years ago. Here are the diagnostics. Anything else I need to be worried about? tempest-diagnostics-20240302-0750.zip
  4. All devices show as healthy in the SMART readings other than one of my cache drives, which supposedly has a few failing sectors, but I couldn't detect that with any other software, and it was a quirk of the machine long before this.
  5. Hello. I had left my server off most of the winter, as I didn't need to use it and wanted to save the energy. I've had it up for almost two weeks now and updated everything, and life was good. I went to access it today and all of my shares are gone, and I have a bunch of errors from Parity-Check. I've always run two data disks and two parity disks, but had a spare disk in there that wasn't in use; I was waiting to get larger parity disks and do some rearranging. Attached are the logs. Is my data recoverable, or am I screwed? I had a lot of photos and 3D files, as well as writings that are important, taxes, etc. Any help is appreciated. Thank you. tempest-diagnostics-20240301-2331.zip
  6. I have solved this issue by assigning a new IP address to the VM through my router. I didn't know that was the issue, as I am no networking expert. Interestingly enough, I was able to bounce things off of ChatGPT and narrow it down through the possibilities. Feel free to close this topic. Thank you.
  7. Network stuff is quite a bit out of my wheelhouse here. The TL;DR is that my motherboard BIOS had been reset to defaults during troubleshooting for another issue with PCIe drives, and I forgot to turn the AMD virtualization options back on before trying to start my VM when I was finished. It gave me an angry prompt, and then my VM disappeared from the VM tab completely. After a heart attack, and re-enabling everything in the BIOS for virtual machines, I found it still wasn't there. After some skimming through the forums, I found that I could make a new VM and just point it at the existing Windows machine images. To my surprise it worked, and my VM and all the unassigned drives attached to it were there with seemingly no data loss. Now I am experiencing issues on certain websites that I'm not experiencing on any other PC in the house: certain tabs fail to load, and I can't download anything from sites like Nexus Mods. I've tried different browsers, with and without my extensions active. The post where I found the idea to create a new VM pointed at the existing images also spoke of having IP address issues, but didn't go into detail about what issues they encountered or the solution. I'm not one to bug a stranger at 11pm about a six-year-old topic, so I'm posting here before I get too far over my head. Diagnostics are attached, and I found an error and a repeating warning in the log terminal for libvirt. The error reads:
     2023-01-25 20:18:07.178+0000: 21327: error : qemuDomainAgentAvailable:8411 : Guest agent is not responding: QEMU guest agent is not connected
     and the warning reads:
     2023-01-26 03:21:07.815+0000: 21324: warning : qemuDomainObjTaintMsg:6464 : Domain id=1 name='Windows 11' uuid=d6b6a36d-5b65-a325-508f-e66b512b1935 is tainted: custom-ga-command
     I've searched these, but didn't come up with anything matching my exact codes. Not sure if this can be fixed in Windows or on the Unraid side, or if I just have to delete the network config files off of the flash drive.
     Any help is appreciated. Thank you, Moose. tempest-diagnostics-20230125-2251.zip
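The "guest agent is not connected" error above can at least be probed from the Unraid host side. A minimal, hypothetical check with virsh, using the 'Windows 11' domain name from the libvirt warning:

```shell
# Ask the QEMU guest agent inside the VM to answer a ping via libvirt.
# Domain name taken from the libvirt warning above; run as root.
virsh qemu-agent-command "Windows 11" '{"execute":"guest-ping"}'
# A timeout here usually means the QEMU guest agent service is not
# installed or not running inside Windows (it ships on the virtio-win
# driver ISO), rather than anything being broken on the Unraid side.
```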
  8. Not that I can see anywhere. I've looked at the disk share through the main page by clicking on the drive itself, as well as checked my shares on the array, and I don't see any lost+found folder. Within the vm the disk shows up, and all the data I had on it is still there. I don't see anything out of the ordinary.
  9. Awesome, that was so easy I thought it was wrong. Drive is mountable, and shows up within the vm. Thank you, Trurl. Happy Holidays to you.
  10. FS: xfs
      Executing file system check: /sbin/xfs_repair -e /dev/nvme0n1p1 2>&1
      Phase 1 - find and verify superblock...
      Phase 2 - using internal log
              - zero log...
      ERROR: The filesystem has valuable metadata changes in a log which needs to be replayed. Mount the filesystem to replay the log, and unmount it before re-running xfs_repair. If you are unable to mount the filesystem, then use the -L option to destroy the log and attempt a repair. Note that destroying the log may cause corruption -- please attempt a mount of the filesystem before doing this.
      Dirty log detected!
      I now have a button that says force zero logging.
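The error above spells out the recovery order xfs_repair wants. A minimal sketch of it from a root shell, assuming the same device path as the log (the "force zero log" button corresponds to the -L last resort, which can discard recent metadata changes):

```shell
# Recovery order xfs_repair asks for when it detects a dirty log.
# Device path taken from the log above; mountpoint is hypothetical.
DEV=/dev/nvme0n1p1
MNT=/mnt/tmp-replay

# 1. Mount so the kernel replays the log, then unmount cleanly.
mkdir -p "$MNT"
if mount -t xfs "$DEV" "$MNT"; then
    umount "$MNT"
fi

# 2. Re-run the repair; with a replayed (clean) log it can proceed.
xfs_repair "$DEV"

# 3. Only if the mount in step 1 fails: zero the log and repair.
#    This is the -L option the GUI button exposes; it may lose the
#    most recent metadata changes, so it is a last resort.
# xfs_repair -L "$DEV"
```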
  11. Okay, I realized there were more pages to that thread; I will run it with the correct flag.
  12. I click on the check, and it's slightly different from what the directions show. It gives a log, and towards the bottom there are two buttons: one says "Run with correct flag" and the other "Done". Am I supposed to run it with the correct flag? This is what pops up in that window:
      FS: xfs
      Executing file system check: /sbin/xfs_repair -n /dev/nvme0n1p1 2>&1
      Phase 1 - find and verify superblock...
      Phase 2 - using internal log
              - zero log...
      ALERT: The filesystem has valuable metadata changes in a log which is being ignored because the -n option was used. Expect spurious inconsistencies which may be resolved by first mounting the filesystem to replay the log.
              - scan filesystem freespace and inode maps...
      Metadata CRC error detected at 0x44108d, xfs_bnobt block 0x74704068/0x1000
      btree block 2/1 is suspect, error -74
      bad magic # 0 in btbno block 2/1
      Metadata CRC error detected at 0x44108d, xfs_cntbt block 0x74704070/0x1000
      btree block 2/2 is suspect, error -74
      bad magic # 0 in btcnt block 2/2
      Metadata CRC error detected at 0x44108d, xfs_bnobt block 0x3a382038/0x1000
      btree block 1/1 is suspect, error -74
      bad magic # 0 in btbno block 1/1
      Metadata CRC error detected at 0x4728bd, xfs_refcountbt block 0x74704088/0x1000
      btree block 2/5 is suspect, error -74
      bad magic # 0 in refcount btree block 2/5
      bad refcountbt block count 0, saw 1
      agf_freeblks 121856122, counted 0 in ag 2
      agf_longest 121856122, counted 0 in ag 2
      Metadata CRC error detected at 0x44108d, xfs_cntbt block 0x3a382040/0x1000
      btree block 1/2 is suspect, error -74
      Metadata CRC error detected at 0x44108d, xfs_bnobt block 0xaea86098/0x1000
      Metadata CRC error detected at 0x46fd5d, xfs_inobt block 0x74704078/0x1000
      btree block 3/1 is suspect, error -74
      bad magic # 0 in btbno block 3/1
      btree block 2/3 is suspect, error -74
      bad magic # 0 in btcnt block 1/2
      bad magic # 0 in inobt block 2/3
      Metadata CRC error detected at 0x44108d, xfs_cntbt block 0xaea860a0/0x1000
      btree block 3/2 is suspect, error -74
      bad magic # 0 in btcnt block 3/2
      Metadata CRC error detected at 0x4728bd, xfs_refcountbt block 0x3a382058/0x1000
      Metadata CRC error detected at 0x46fd5d, xfs_finobt block 0x74704080/0x1000
      btree block 1/5 is suspect, error -74
      bad magic # 0 in refcount btree block 1/5
      bad refcountbt block count 0, saw 1
      btree block 2/4 is suspect, error -74
      agf_freeblks 122094588, counted 0 in ag 1
      bad magic # 0 in finobt block 2/4
      agf_longest 122094588, counted 0 in ag 1
      Metadata CRC error detected at 0x4728bd, xfs_refcountbt block 0xaea860b8/0x1000
      btree block 3/5 is suspect, error -74
      bad magic # 0 in refcount btree block 3/5
      bad refcountbt block count 0, saw 1
      agf_freeblks 122094586, counted 0 in ag 3
      agf_longest 122094586, counted 0 in ag 3
      Metadata CRC error detected at 0x46fd5d, xfs_inobt block 0x3a382048/0x1000
      btree block 1/3 is suspect, error -74
      bad magic # 0 in inobt block 1/3
      Metadata CRC error detected at 0x46fd5d, xfs_finobt block 0x3a382050/0x1000
      btree block 1/4 is suspect, error -74
      Metadata CRC error detected at 0x46fd5d, xfs_inobt block 0xaea860a8/0x1000
      btree block 3/3 is suspect, error -74
      bad magic # 0 in inobt block 3/3
      bad magic # 0 in finobt block 1/4
      Metadata CRC error detected at 0x46fd5d, xfs_finobt block 0xaea860b0/0x1000
      btree block 3/4 is suspect, error -74
      bad magic # 0 in finobt block 3/4
      sb_fdblocks 433178810, counted 70545081
              - found root inode chunk
      Phase 3 - for each AG...
              - scan (but don't clear) agi unlinked lists...
              - process known inodes and perform inode discovery...
              - agno = 0
              - agno = 1
              - agno = 2
              - agno = 3
              - process newly discovered inodes...
      Phase 4 - check for duplicate blocks...
              - setting up duplicate extent list...
              - check for inodes claiming duplicate blocks...
              - agno = 0
              - agno = 3
              - agno = 1
              - agno = 2
      would rebuild corrupt refcount btrees.
      No modify flag set, skipping phase 5
      Inode allocation btrees are too corrupted, skipping phases 6 and 7
      No modify flag set, skipping filesystem flush and exiting.
      File system corruption detected!
  13. Hello everyone. I'm still an amateur with this system, and need some help. I have a setup where unassigned drives store my main gaming VM; this M.2 drive is in unassigned devices. We had a bad winter storm here in Michigan, and our power went out. (I don't have a UPS yet; hoping Santa leaves me one.) Now this particular drive (drive 3) refuses to mount. I farted around on here and found how to run an XFS check on drives while in maintenance mode. As far as I can tell, I can only do this for disks in the array; the prompts do not appear for drives in the unassigned devices tab. I thought I could move the drive to the array side, but it says it will erase everything on the disk once the array is started. This isn't the main disk of the VM, just another game storage drive, and I'd like to not wipe the whole thing. How does one perform xfs_repair on an unassigned device? These are the logs:
      Dec 23 17:31:53 Tempest kernel: nvme0n1: p1
      Dec 23 17:32:27 Tempest emhttpd: Samsung_SSD_980_PRO_2TB_S6B0NL0T618459Z (nvme0n1) 512 3907029168
      Dec 23 17:32:28 Tempest emhttpd: read SMART /dev/nvme0n1
      Dec 23 17:33:16 Tempest unassigned.devices: Adding partition 'nvme0n1p1'...
      Dec 23 17:33:16 Tempest unassigned.devices: Mounting partition 'nvme0n1p1' at mountpoint '/mnt/disks/S6B0NL0T618459Z'...
      Dec 23 17:33:16 Tempest unassigned.devices: Mount drive command: /sbin/mount -t 'xfs' -o rw,noatime,nodiratime,discard '/dev/nvme0n1p1' '/mnt/disks/S6B0NL0T618459Z'
      Dec 23 17:33:16 Tempest kernel: XFS (nvme0n1p1): Mounting V5 Filesystem
      Dec 23 17:33:16 Tempest kernel: XFS (nvme0n1p1): Starting recovery (logdev: internal)
      Dec 23 17:33:16 Tempest kernel: XFS (nvme0n1p1): Metadata CRC error detected at xfs_refcountbt_read_verify+0x12/0x5a [xfs], xfs_refcountbt block 0x3a382058
      Dec 23 17:33:16 Tempest kernel: XFS (nvme0n1p1): Unmount and run xfs_repair
      Dec 23 17:33:16 Tempest kernel: XFS (nvme0n1p1): First 128 bytes of corrupted metadata buffer:
      Dec 23 17:33:16 Tempest kernel: XFS (nvme0n1p1): metadata I/O error in "xfs_btree_read_buf_block.constprop.0+0x7a/0xc7 [xfs]" at daddr 0x3a382058 len 8 error 74
      Dec 23 17:33:16 Tempest kernel: XFS (nvme0n1p1): Failed to recover leftover CoW staging extents, err -117.
      Dec 23 17:33:16 Tempest kernel: XFS (nvme0n1p1): Filesystem has been shut down due to log error (0x2).
      Dec 23 17:33:16 Tempest kernel: XFS (nvme0n1p1): Please unmount the filesystem and rectify the problem(s).
      Dec 23 17:33:16 Tempest kernel: XFS (nvme0n1p1): Ending recovery (logdev: internal)
      Dec 23 17:33:16 Tempest kernel: XFS (nvme0n1p1): Error -5 reserving per-AG metadata reserve pool.
      Dec 23 17:33:16 Tempest unassigned.devices: Mount of 'nvme0n1p1' failed: 'mount: /mnt/disks/S6B0NL0T618459Z: can't read superblock on /dev/nvme0n1p1. dmesg(1) may have more information after failed mount system call.'
      [The same mount attempt fails with identical errors at 17:33:43, 17:34:33, 17:34:50, 17:35:50, and 17:38:42.]
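For the question asked in that last post: Unraid's GUI file system check is only offered for array and pool disks, so a repair on an unassigned device has to be done from a terminal (here the mount already fails, so the filesystem is not mounted anyway). A minimal sketch, assuming the same partition as in the log:

```shell
# Manual xfs_repair on an unassigned device; device path taken from
# the log above, run as root from the Unraid terminal.
DEV=/dev/nvme0n1p1

# Make sure the partition is not mounted (xfs_repair refuses to run
# on a mounted filesystem); ignore the error if it already isn't.
umount "$DEV" 2>/dev/null || true

# Dry run first: -n only reports problems, it writes nothing.
xfs_repair -n "$DEV"

# Actual repair (drop -n). Afterwards, mount the drive again and
# check its lost+found directory for anything the repair detached.
xfs_repair "$DEV"
```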