Matthew Kent

Members

  • Posts: 71
  • Joined
  • Last visited

Converted

  • Gender: Male
  • Personal Text: If you say Orange slowly enough, it sounds like gullible

Matthew Kent's Achievements

Rookie (2/14)

Reputation: 3

Community Answers: 1

  1. That's fine, I don't think any new info was written besides some of my media libraries emptying out because the drive had gone MIA. Thank you for your assistance
  2. I'm back up. I ended up running the repair in the gui w/ the -L option, and then told the server to run with the new config, since I didn't want to do a data rebuild from the unmountable parity image. As far as I can tell there are no lost+found files. Not sure if I lost anything, but I'm back up *whew*. Of course the wife had to ask about files she needed just when the server went down.
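     For anyone following along, a minimal sketch of that repair sequence from the command line, assuming the array is started in Maintenance mode and the affected disk is disk 1 (the /dev/md1 device path is an assumption; substitute your own disk number):

        # Dry run first: report problems without modifying anything
        xfs_repair -n /dev/md1
        # If mounting can't replay the log, zero it and repair
        # (-L discards the most recent metadata changes still in the log)
        xfs_repair -L /dev/md1
        # After starting the array again, check for recovered orphans
        ls /mnt/disk1/lost+found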
  3. Uploading now. The results for xfs_repair from the gui (run with -n) are:

        Phase 1 - find and verify superblock...
        Phase 2 - using internal log
                - zero log...
        ALERT: The filesystem has valuable metadata changes in a log which is
        being ignored because the -n option was used.  Expect spurious
        inconsistencies which may be resolved by first mounting the filesystem
        to replay the log.
                - scan filesystem freespace and inode maps...
        agf_freeblks 104761526, counted 104761523 in ag 1
        agf_longest 104502791, counted 104502788 in ag 1
        sb_fdblocks 1282724453, counted 1282724450
                - found root inode chunk
        Phase 3 - for each AG...
                - scan (but don't clear) agi unlinked lists...
                - process known inodes and perform inode discovery...
                - agno = 0
                - agno = 1
        bad nblocks 9198282 for inode 2196066468, would reset to 9198253
                - agno = 2
                - agno = 3
                - agno = 4
                - agno = 5
                - agno = 6
                - agno = 7
                - agno = 8
                - agno = 9
                - agno = 10
                - agno = 11
                - agno = 12
                - agno = 13
                - agno = 14
                - process newly discovered inodes...
        Phase 4 - check for duplicate blocks...
                - setting up duplicate extent list...
                - check for inodes claiming duplicate blocks...
                - agno = 0
                - agno = 2
                - agno = 8
                - agno = 9
                - agno = 10
                - agno = 3
                - agno = 5
                - agno = 7
                - agno = 1
                - agno = 11
                - agno = 13
                - agno = 14
                - agno = 4
                - agno = 6
                - agno = 12
        bad nblocks 9198282 for inode 2196066468, would reset to 9198253
        No modify flag set, skipping phase 5
        Phase 6 - check inode connectivity...
                - traversing filesystem ...

     The result without -n is:

        Phase 1 - find and verify superblock...
        Phase 2 - using internal log
                - zero log...
        ERROR: The filesystem has valuable metadata changes in a log which
        needs to be replayed.  Mount the filesystem to replay the log, and
        unmount it before re-running xfs_repair.  If you are unable to mount
        the filesystem, then use the -L option to destroy the log and attempt
        a repair.  Note that destroying the log may cause corruption -- please
        attempt a mount of the filesystem before doing this.

     nas-diagnostics-20240108-1347.zip
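     For reference, a hedged sketch of the mount-to-replay-the-log step that this error message asks for before resorting to -L (the device path and mount point here are assumptions):

        # Mount once so XFS can replay its own log, then unmount cleanly
        mkdir -p /mnt/tmpfix
        mount -t xfs /dev/md1 /mnt/tmpfix && umount /mnt/tmpfix
        # If that succeeds, re-run the repair without -L
        xfs_repair /dev/md1
        # Only if the mount fails should the log be zeroed with -L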
  4. I ran it from the command line as I couldn't get it to run in the gui:

        xfs_repair /dev/sde
  5. Also, this is the current output of xfs_repair on the drive now:

        attempting to find secondary superblock...
        .found candidate secondary superblock...
        unable to verify superblock, continuing...

     The last two lines repeat another thirteen times, followed by a long run of progress dots.
  6. Hi all, hoping someone can help. I've been running my server for a while, and last Friday my largest drive went offline. I didn't know it was happening until I got a bunch of Discord alerts that my media library files were missing and a cleanup of all the files had started. When I logged in, Disk 1 (a newer 16TB drive) showed "Unmountable: unsupported or no file system". I've since tried to remove the drive and start the array w/ the drive disconnected, to try and get it to start w/ the drive emulated. Unfortunately it still says unmountable: unsupported or no filesystem. Does this mean the parity somehow captured the drive in this state and is now emulating an unmountable drive? I'm hoping my data is still there in the parity. I'm planning on running xfs_repair on the drive now to see if I can get it back up. Any other suggestions would be greatly appreciated. I'm including my diagnostics file: nas-diagnostics-20240108-1133.zip
  7. I don't have encrypted drives, so how do I remove the passphrase requirement? I'd like my array to start on its own after a reboot, but I have to put in a passphrase every time, even after setting the array to autostart.
  8. Hey! I finally figured it out. I don't know why I was able to get this working on my old server, but essentially everything started working for me when I replaced my CPU with one that has an integrated GPU. It might've been that I had an extra junk GPU inside my old server (it's a full tower) and forgot about it. Either way, I think GPU pass-thru doesn't work properly unless you have more than one GPU in the system.
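     One way to sanity-check this is to see which GPU the host itself has claimed versus what is left for passthrough; a hedged sketch from the console (output will depend on your hardware):

        # List GPUs and the kernel driver each one is currently using
        lspci -nnk | grep -iA 3 'vga\|3d controller'
        # A card meant for passthrough should show vfio-pci as the driver in
        # use, while the host console needs some other GPU (or an iGPU)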
  9. Changed Status to Solved. Changed Priority to Urgent.
  10. Hey all, I'm back up. Figured I'd fill in my steps for those that run into this odd issue I stumbled into.

      After removing my network.cfg file and rebooting, my system was somehow stuck in its boot process. Once I got to my office, I did a hard reset and it came back up with a new MAC and IP. I went ahead and re-assigned it to its previous IP with a new lease and began configuring my network setup to its previous settings. As soon as I turned on VLANs, all connectivity stopped and eth0 went down. I manually tried bringing it back up with:

        ifconfig eth0 up

      It wouldn't work, so I removed the network.cfg file again, rebooted, and started over. When everything came back up, instead of turning on VLANs first, I added a metric of 10 to the main eth0 interface. Again, everything went down, and I had to start over. After the next reboot, I was able to get back up with my custom config with the steps below:

      1. Instead of adding the metric for the main eth0 interface, I scrolled to the bottom of the page and added it as a route with my desired metric. Hit Done to save.
      2. Turned VLANs on, crossed fingers, saved. Everything came back up in about 30 seconds.
      3. Added my 3 VLANs (one for WAN, one for a virtual LAN, one for an isolated LAB LAN environment). Crossed fingers, saved. Everything came back up!

      Here was the tricky part. I like having my Unraid directly on the internet, but for this I need my Unraid and dockers to utilize a separate internal subnet and VLAN'd WAN. We have multiple WAN addresses available to us at my office, so I took advantage and created a VLAN'd bridge to our WAN subnet so I could have my own external IP running through an internal virtualized firewall. The key was setting up the routing with a metric so that, when that path isn't functioning, traffic fails over to the bare-metal eth0 interface. In the past I think I got this working by accident. For whatever reason, if you add the metric to the interface directly, failover does not work; but if you add a routing rule at the bottom of the network page with a higher metric (lower priority) than eth0, it works. I verified this by running

        curl ifconfig.co

      in multiple dockers after my virtual firewall spun up. All dockers reported the static WAN inside of my virtual network. Things are now back to the way they were. I still have no clue what brought everything down.
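      A hedged sketch of how that failover can be verified from the console (the container name is a placeholder):

        # Show the default routes and their metrics; the VLAN'd route should
        # sort ahead of plain eth0 while the virtual firewall is healthy
        ip route show default
        # Confirm which public IP a container actually egresses from
        docker exec <container-name> curl -s ifconfig.co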
  11. I got a weird one. My server's been running solid for a few years now, but today I stood up a new Windows 11 VM and realized I had accidentally stuck it on my array instead of on my cache. So after install, I stopped it before it finished its final boot and moved it. As soon as that finished, the entire server just stopped functioning. I found I could ping and SSH into it, but had no GUI access, so I rebooted from the CLI. Ever since then, it boots, but I can't get the GUI to come up. I've removed some plugins and reset my network settings (because it wasn't getting an IP for some reason), but I still can't get the GUI up. Any help would be greatly appreciated. Including my diagnostics: diagnostics-20220923-0057.zip
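      Since SSH still works, a few hedged first checks for a missing web GUI (these paths are what I'd expect on a stock Unraid install, so treat them as assumptions):

        # Is nginx (which serves the web GUI) actually running?
        ps aux | grep [n]ginx
        # Restart the GUI web server without rebooting
        /etc/rc.d/rc.nginx restart
        # Generate a fresh diagnostics zip from the CLI
        diagnostics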
  12. Here is the current XML for the VM I created for the passthrough:

      <?xml version='1.0' encoding='UTF-8'?>
      <domain type='kvm'>
        <name>GamerVM</name>
        <uuid>ed2f15bb-c32e-85d7-0de3-869fca92f0dc</uuid>
        <metadata>
          <vmtemplate xmlns="unraid" name="Windows 10" icon="Windows_11.png" os="windows10"/>
        </metadata>
        <memory unit='KiB'>16777216</memory>
        <currentMemory unit='KiB'>16777216</currentMemory>
        <memoryBacking>
          <nosharepages/>
        </memoryBacking>
        <vcpu placement='static'>8</vcpu>
        <cputune>
          <vcpupin vcpu='0' cpuset='4'/>
          <vcpupin vcpu='1' cpuset='12'/>
          <vcpupin vcpu='2' cpuset='5'/>
          <vcpupin vcpu='3' cpuset='13'/>
          <vcpupin vcpu='4' cpuset='6'/>
          <vcpupin vcpu='5' cpuset='14'/>
          <vcpupin vcpu='6' cpuset='7'/>
          <vcpupin vcpu='7' cpuset='15'/>
        </cputune>
        <os>
          <type arch='x86_64' machine='pc-q35-5.1'>hvm</type>
          <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
          <nvram>/etc/libvirt/qemu/nvram/ed2f15bb-c32e-85d7-0de3-869fca92f0dc_VARS-pure-efi.fd</nvram>
        </os>
        <features>
          <acpi/>
          <apic/>
          <hyperv>
            <relaxed state='on'/>
            <vapic state='on'/>
            <spinlocks state='on' retries='8191'/>
            <vendor_id state='on' value='none'/>
          </hyperv>
        </features>
        <cpu mode='host-passthrough' check='none' migratable='on'>
          <topology sockets='1' dies='1' cores='4' threads='2'/>
          <cache mode='passthrough'/>
          <feature policy='require' name='topoext'/>
        </cpu>
        <clock offset='localtime'>
          <timer name='hypervclock' present='yes'/>
          <timer name='hpet' present='no'/>
        </clock>
        <on_poweroff>destroy</on_poweroff>
        <on_reboot>restart</on_reboot>
        <on_crash>restart</on_crash>
        <devices>
          <emulator>/usr/local/sbin/qemu</emulator>
          <disk type='block' device='disk'>
            <driver name='qemu' type='raw' cache='writeback'/>
            <source dev='/dev/disk/by-id/ata-MKNSSDCR60GB_MKN1203A0000041856'/>
            <target dev='hdc' bus='sata'/>
            <boot order='1'/>
            <address type='drive' controller='0' bus='0' target='0' unit='2'/>
          </disk>
          <disk type='file' device='disk'>
            <driver name='qemu' type='qcow2' cache='writeback'/>
            <source file='/mnt/user/VMs/GamingVM/Storage.qcow2'/>
            <target dev='hdd' bus='sata'/>
            <address type='drive' controller='0' bus='0' target='0' unit='3'/>
          </disk>
          <controller type='pci' index='0' model='pcie-root'/>
          <controller type='pci' index='1' model='pcie-root-port'>
            <model name='pcie-root-port'/>
            <target chassis='1' port='0x10'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
          </controller>
          <controller type='pci' index='2' model='pcie-root-port'>
            <model name='pcie-root-port'/>
            <target chassis='2' port='0x11'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
          </controller>
          <controller type='pci' index='3' model='pcie-root-port'>
            <model name='pcie-root-port'/>
            <target chassis='3' port='0x12'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
          </controller>
          <controller type='pci' index='4' model='pcie-root-port'>
            <model name='pcie-root-port'/>
            <target chassis='4' port='0x13'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
          </controller>
          <controller type='pci' index='5' model='pcie-root-port'>
            <model name='pcie-root-port'/>
            <target chassis='5' port='0x8'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0' multifunction='on'/>
          </controller>
          <controller type='pci' index='6' model='pcie-root-port'>
            <model name='pcie-root-port'/>
            <target chassis='6' port='0x9'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
          </controller>
          <controller type='pci' index='7' model='pcie-to-pci-bridge'>
            <model name='pcie-pci-bridge'/>
            <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
          </controller>
          <controller type='virtio-serial' index='0'>
            <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
          </controller>
          <controller type='sata' index='0'>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
          </controller>
          <controller type='usb' index='0' model='qemu-xhci' ports='15'>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
          </controller>
          <interface type='bridge'>
            <mac address='52:54:00:db:fc:4a'/>
            <source bridge='br0.4'/>
            <model type='virtio-net'/>
            <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
          </interface>
          <serial type='pty'>
            <target type='isa-serial' port='0'>
              <model name='isa-serial'/>
            </target>
          </serial>
          <console type='pty'>
            <target type='serial' port='0'/>
          </console>
          <channel type='unix'>
            <target type='virtio' name='org.qemu.guest_agent.0'/>
            <address type='virtio-serial' controller='0' bus='0' port='1'/>
          </channel>
          <input type='tablet' bus='usb'>
            <address type='usb' bus='0' port='3'/>
          </input>
          <input type='mouse' bus='ps2'/>
          <input type='keyboard' bus='ps2'/>
          <graphics type='vnc' port='-1' autoport='yes' websocket='-1' listen='0.0.0.0' keymap='en-us'>
            <listen type='address' address='0.0.0.0'/>
          </graphics>
          <audio id='1' type='none'/>
          <video>
            <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1' primary='yes'/>
            <address type='pci' domain='0x0000' bus='0x07' slot='0x01' function='0x0'/>
          </video>
          <hostdev mode='subsystem' type='pci' managed='yes'>
            <driver name='vfio'/>
            <source>
              <address domain='0x0000' bus='0x09' slot='0x00' function='0x0'/>
            </source>
            <rom file='/mnt/user/appdata/vbios/Gigabyte.RX5700XT.8192.191105.rom'/>
            <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0' multifunction='on'/>
          </hostdev>
          <hostdev mode='subsystem' type='pci' managed='yes'>
            <driver name='vfio'/>
            <source>
              <address domain='0x0000' bus='0x09' slot='0x00' function='0x1'/>
            </source>
            <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x1'/>
          </hostdev>
          <hostdev mode='subsystem' type='usb' managed='no'>
            <source>
              <vendor id='0x045e'/>
              <product id='0x0291'/>
            </source>
            <address type='usb' bus='0' port='1'/>
          </hostdev>
          <hostdev mode='subsystem' type='usb' managed='no'>
            <source>
              <vendor id='0x0b05'/>
              <product id='0x1939'/>
            </source>
            <address type='usb' bus='0' port='2'/>
          </hostdev>
          <memballoon model='none'/>
        </devices>
      </domain>

      I've tried it with and without checkmarking the IOMMU vfio boxes.
  13. Hi all, one of the things I used to love about my Unraid server was gaming. At one point I upgraded to the 5700 XT, followed the typical steps with enabling IOMMU in the BIOS, editing the XML for multifunction capabilities, and had no problem gaming. Then, a little less than a year ago, I decided to move my server to my office and built a smaller Unraid server for VM lab work, with the eventual goal of gaming again. I finally decided to do some gaming, but for the life of me I can't get passthrough working. Initially I only did the XML edits like I'd done before, then decided to add the bios file when the XML edits didn't work. I should also add that when I start the VM, within a minute the entire server freezes up and I have to do a hard reboot to get it back up. Eventually I decided to add IOMMU passthrough for the GPU in System Devices. This helped a little: the server no longer freezes up, but on my screen I only get the POST screen. If I add VNC as screen 1, the VM loads, but the GPU still does not work; it shows up in Device Manager, but with an error. Does anyone have any ideas? I really want to get passthrough working again, and I have other things I want to get up and running that need GPU power.
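      A few hedged checks worth running from the console before starting the VM (the 09:00 PCI address comes from the XML in the previous post; adjust it for your system):

        # Confirm the GPU and its HDMI audio function are bound to vfio-pci
        lspci -nnk -s 09:00
        # See which IOMMU group the card landed in and what else shares it
        find /sys/kernel/iommu_groups/ -type l | grep 09:00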
  14. Any updates on this? I also have a Gigabyte 5700 XT, and once upon a time, with a previous config, I was doing GPU passthrough without any issues. Then recently I built a new server, put my 5700 XT into it, followed the same steps as before, and for the life of me can't get the damn thing to work. Hoping someone's seen the light w/ this card + Unraid combo.