keithwlandry

Everything posted by keithwlandry

  1. Yea, unfortunately this just happened to me as well on a Norco RPC-2212. The top backplane just gave out, and all four drives are showing as missing. I haven't found a place to buy any replacement parts. I think ripping it out might be my only option unless I want a new case.
  2. Well, thanks for your help. What do you suggest doing next? I have a new HDD that's larger than any drive I currently have, which I was going to use to replace my parity drive. But I probably shouldn't do that until this drive is fixed, huh? Should I just wipe it and try to rebuild it? Thanks again.
  3. Here they are. tower-diagnostics-20211130-1149.zip
  4. Updated to v6.10-rc and ran xfs_repair -v /dev/md9 again. Same results.
  5. Duh, sorry. That was dumb. Same response as the GUI:
     Phase 1 - find and verify superblock...
     couldn't verify primary superblock - not enough secondary superblocks with matching geometry !!!
     attempting to find secondary superblock...
     ...............................................
     Sorry, could not find valid secondary superblock
     Exiting now.
     (except with a lot more dots)
  6. What I got back was:
     /dev/md9: No such file or directory
     /dev/md9: No such file or directory
     fatal error -- couldn't initialize XFS library
     (Sorry for the slow reply; I've been on the road.)
  7. I've tried to read through the forums, but nothing seems to be quite the same problem I have (maybe it is, idk). I have an HDD that's giving me the error "Unmountable: not mounted". Running XFS Repair I get this response:
     However, I can run an extended SMART report just fine; it comes back with no issues that I can tell. I'm not sure what to do to try to recover the drive, or if I should just replace it. It was recently a replacement for an older drive that failed, and it's probably been in the rack less than a full month. (A hedged xfs_repair sketch for this situation follows after this post list.) tower-diagnostics-20211117-1555.zip tower-smart-20211117-1535.zip
  8. I've gotten this to work by installing an instance of nginx and extracting a copy of the Phlex zip into the www folder. (A rough sketch of this setup follows after this post list.)
  9. Thanks @Squid, I just realized this thread was for NodeLink; I was actually trying to get Phlex working. Just FYI: this thread is linked from the Support Thread link for Phlex in CA. Changing the network mode didn't help Phlex, but it was worth a try. For anyone else looking for Phlex support: I haven't gotten a working docker for it, but I've gotten Phlex to work by installing an instance of nginx and extracting a copy of the Phlex zip into the www folder.
  10. I've gotten this to work by installing an instance of nginx and extracting a copy of the Phlex zip into the www folder.
  11. Thanks @nox_uk, I followed his video for editing the Techpowerup BIOS. I was going to follow that one, but I don't have a spare GPU lying around to do a dump. Can I do it via Remote Desktop? So to avoid UEFI... do I just make it SeaBIOS? Sorry for the stupid questions, and thanks for your help. I really appreciate it. -Keith
  12. For months I have been trying to get my Nvidia 1050 Ti to pass through on my own, without bugging the community, but I have failed. I have read hundreds of pages of this (and other) forum posts, watched hours of YouTube, and disassembled my UNRAID machine countless times. I come before you a broken man, pleading for assistance.
      Here's what I've tried:
      - A dozen or so Windows 10 VMs using OVMF & SeaBIOS
      - Hyper-V on/off
      - Downloading a GPU BIOS from TechPowerUp, cutting out the Nvidia header in hex, and adding it to the XML (a hedged ROM-dump sketch for this step follows after this post list)
      - A bunch of various XML tweaks
      - Crying
      - New VirtIO drivers
      - Switching PCI slots
      The closest I have come is with a fresh Win10 install, declaring the edited GPU BIOS in the XML, OVMF, Hyper-V off, and the most recent VirtIO drivers. It showed me a picture off the bat using "Microsoft Basic Display Adapter", but was stuck at 800x600. After finishing installing Win10 I updated the graphics card driver. It realized it was an Nvidia display adapter, but then really freaked out. I waded through that to get the rest of the updates downloaded and installed. That got me back to a working screen, but the dreaded Code 43 was attached to the Nvidia display adapter in Device Manager, and my resolution stayed frozen at 800x600.
      Here's the current XML I have on this VM:
      <domain type='kvm' id='1'>
        <name>Win10</name>
        <uuid>950074cb-7037-b282-b9c6-a92ad0e2352e</uuid>
        <metadata>
          <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/>
        </metadata>
        <memory unit='KiB'>24641536</memory>
        <currentMemory unit='KiB'>24641536</currentMemory>
        <memoryBacking>
          <nosharepages/>
        </memoryBacking>
        <vcpu placement='static'>16</vcpu>
        <cputune>
          <vcpupin vcpu='0' cpuset='16'/>
          <vcpupin vcpu='1' cpuset='17'/>
          <vcpupin vcpu='2' cpuset='18'/>
          <vcpupin vcpu='3' cpuset='19'/>
          <vcpupin vcpu='4' cpuset='20'/>
          <vcpupin vcpu='5' cpuset='21'/>
          <vcpupin vcpu='6' cpuset='22'/>
          <vcpupin vcpu='7' cpuset='23'/>
          <vcpupin vcpu='8' cpuset='24'/>
          <vcpupin vcpu='9' cpuset='25'/>
          <vcpupin vcpu='10' cpuset='26'/>
          <vcpupin vcpu='11' cpuset='27'/>
          <vcpupin vcpu='12' cpuset='28'/>
          <vcpupin vcpu='13' cpuset='29'/>
          <vcpupin vcpu='14' cpuset='30'/>
          <vcpupin vcpu='15' cpuset='31'/>
        </cputune>
        <resource>
          <partition>/machine</partition>
        </resource>
        <os>
          <type arch='x86_64' machine='pc-i440fx-2.10'>hvm</type>
          <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
          <nvram>/etc/libvirt/qemu/nvram/950074cb-7037-b282-b9c6-a92ad0e2352e_VARS-pure-efi.fd</nvram>
        </os>
        <features>
          <acpi/>
          <apic/>
        </features>
        <cpu mode='host-passthrough' check='none'>
          <topology sockets='1' cores='8' threads='2'/>
        </cpu>
        <clock offset='localtime'>
          <timer name='rtc' tickpolicy='catchup'/>
          <timer name='pit' tickpolicy='delay'/>
          <timer name='hpet' present='no'/>
        </clock>
        <on_poweroff>destroy</on_poweroff>
        <on_reboot>restart</on_reboot>
        <on_crash>restart</on_crash>
        <devices>
          <emulator>/usr/local/sbin/qemu</emulator>
          <disk type='file' device='disk'>
            <driver name='qemu' type='raw' cache='writeback'/>
            <source file='/mnt/user/Media/VMs/Win10/vdisk1.img'/>
            <backingStore/>
            <target dev='hdc' bus='virtio'/>
            <boot order='1'/>
            <alias name='virtio-disk2'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
          </disk>
          <disk type='file' device='cdrom'>
            <driver name='qemu' type='raw'/>
            <source file='/mnt/user/VM ISOs/Win10_1709_English_x64.iso'/>
            <backingStore/>
            <target dev='hda' bus='ide'/>
            <readonly/>
            <boot order='2'/>
            <alias name='ide0-0-0'/>
            <address type='drive' controller='0' bus='0' target='0' unit='0'/>
          </disk>
          <disk type='file' device='cdrom'>
            <driver name='qemu' type='raw'/>
            <source file='/mnt/user/VM ISOs/virtio-win-0.1.141.iso'/>
            <backingStore/>
            <target dev='hdb' bus='ide'/>
            <readonly/>
            <alias name='ide0-0-1'/>
            <address type='drive' controller='0' bus='0' target='0' unit='1'/>
          </disk>
          <controller type='usb' index='0' model='ich9-ehci1'>
            <alias name='usb'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
          </controller>
          <controller type='usb' index='0' model='ich9-uhci1'>
            <alias name='usb'/>
            <master startport='0'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
          </controller>
          <controller type='usb' index='0' model='ich9-uhci2'>
            <alias name='usb'/>
            <master startport='2'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
          </controller>
          <controller type='usb' index='0' model='ich9-uhci3'>
            <alias name='usb'/>
            <master startport='4'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
          </controller>
          <controller type='pci' index='0' model='pci-root'>
            <alias name='pci.0'/>
          </controller>
          <controller type='ide' index='0'>
            <alias name='ide'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
          </controller>
          <controller type='virtio-serial' index='0'>
            <alias name='virtio-serial0'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
          </controller>
          <interface type='bridge'>
            <mac address='52:54:00:c6:09:76'/>
            <source bridge='br0'/>
            <target dev='vnet0'/>
            <model type='virtio'/>
            <alias name='net0'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
          </interface>
          <serial type='pty'>
            <source path='/dev/pts/0'/>
            <target port='0'/>
            <alias name='serial0'/>
          </serial>
          <console type='pty' tty='/dev/pts/0'>
            <source path='/dev/pts/0'/>
            <target type='serial' port='0'/>
            <alias name='serial0'/>
          </console>
          <channel type='unix'>
            <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/domain-1-Win10/org.qemu.guest_agent.0'/>
            <target type='virtio' name='org.qemu.guest_agent.0' state='disconnected'/>
            <alias name='channel0'/>
            <address type='virtio-serial' controller='0' bus='0' port='1'/>
          </channel>
          <input type='mouse' bus='ps2'>
            <alias name='input0'/>
          </input>
          <input type='keyboard' bus='ps2'>
            <alias name='input1'/>
          </input>
          <hostdev mode='subsystem' type='pci' managed='yes'>
            <driver name='vfio'/>
            <source>
              <address domain='0x0000' bus='0x81' slot='0x00' function='0x0'/>
            </source>
            <alias name='hostdev0'/>
            <rom file='/mnt/cache/VM ISOs/zotac.dump'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
          </hostdev>
          <hostdev mode='subsystem' type='pci' managed='yes'>
            <driver name='vfio'/>
            <source>
              <address domain='0x0000' bus='0x81' slot='0x00' function='0x1'/>
            </source>
            <alias name='hostdev1'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
          </hostdev>
          <hostdev mode='subsystem' type='usb' managed='no'>
            <source>
              <vendor id='0x046d'/>
              <product id='0xc52b'/>
              <address bus='3' device='3'/>
            </source>
            <alias name='hostdev2'/>
            <address type='usb' bus='0' port='1'/>
          </hostdev>
          <hostdev mode='subsystem' type='usb' managed='no'>
            <source>
              <vendor id='0x04d9'/>
              <product id='0x1702'/>
              <address bus='3' device='4'/>
            </source>
            <alias name='hostdev3'/>
            <address type='usb' bus='0' port='2'/>
          </hostdev>
          <memballoon model='virtio'>
            <alias name='balloon0'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
          </memballoon>
        </devices>
        <seclabel type='dynamic' model='dac' relabel='yes'>
          <label>+0:+100</label>
          <imagelabel>+0:+100</imagelabel>
        </seclabel>
      </domain>
      Diagnostics attached. What else can I do? Anyone have ideas? Thanks. -Keith
      NOTE: While typing this out I rolled back the driver to the generic Microsoft one, then tried updating the driver again, and I got that funky disco screen again. #FAIL
      diagnostics-20180121-0834.zip zotac.dump
  13. I've been monitoring the temperatures, mostly because I was curious how it would do in that network closet... it doesn't seem to be affected. The case is kinda cheap... it's not a supremely tight-fitting front end. Thanks
  14. Part List (all prices at time of purchase; a quick tally appears after this post list):
      Case - Norco 2U Rack Mount RPC-2212 - $293.04
      MB - ASUS Z10PE-D16 - $402.99
      CPUs - Intel Xeon E5-2620 v4 - $414.99 x2 ($828.98)
      RAM - Crucial 8GB x2 DDR4 2133 RDIMM - $130.48 x2 ($260.96)
      PSU - Athena Power AP-U2ATX80FEP8 - $94.98
      Heatsink - Dynatron R18 - $29.59 x2 ($59.18)
      Other - Mini-SAS to 4x SATA Reverse Breakout Cable - $13.99 x3 ($41.97)
      Other - 15-Pin Male to Dual 4-Pin Molex Female Y Splitter - $4.46 x3 ($13.38)
      Grand Total = $1995.48
      Shit... that was about $750 over budget. Lol. Oh well.
  15. I wanted to give everyone a final, and thankfully happy, update. The Gigabyte MD60-SC0 never worked out. I returned that broken power supply. Ended up running with @garycase's and @uldise's suggestion of trying two power supplies, one on each CPU... no dice again. Contacted Gigabyte again and went through another round of phone tests. They even had me send photos and videos of the MB, boot-up, etc. After a few weeks of that, they asked me to send it back for another round of testing. Considering the last time took 3 weeks, I opted not to.
      I contacted NewEgg, explained everything that had happened, and they were gracious enough to give me a store credit refund for the Gigabyte MB. Considering I had originally purchased the motherboard in December of 2016, and they took it back in May of 2017, I was super pumped with that. Props to NewEgg.
      With the refund I STUPIDLY purchased an open-box ASUS Z10PE-D16 and some registered memory. It came in missing the I/O panel, jumpers all screwed to hell, obvious abuse on it... it never even tried to boot. At this point I was starting to doubt the rest of my components, thinking it was a funky CPU or that my power supply was the culprit. But considering this open-box special was in rough shape, I opted to send it back, assuming it was DOA. When it refunded, I ordered a brand new ASUS Z10PE-D16... BOOM. Magic. Post. No issues.
      I ran a bunch of tests off Ubuntu on a USB stick to make sure all the backplane connections work, all the case connections work, both CPUs, all RAM, etc... everything working as expected. The unRAID migration was easy peasy. Parity check with no errors. Moved everything to the new case, booted it back up, and it all works. Added two new HDDs (cause now I have the space), and those got cleaned and formatted just fine.
      After 6 months of turmoil, I'm happy to announce that she is all assembled, finally out of the office, and sitting proudly on her throne in the network closet. Thanks to everyone that helped out and offered suggestions and encouragement. I literally almost wept when it was finally all done. I'll try to post a build log next if I get some free time. I already have friends asking what the grand total was, and I'm kinda concerned to tally it up myself. Thanks again, -Keith
  16. Just got back in the country. Gigabyte returned it with nothing done; said it tested fine. But same issue. I even tried to have a new PSU waiting for me when I got home; it was shipped broken. lol, my luck. Trying with NewEgg support now for help. It's way past the return period, but they've taken my statement that it was incorrectly described (the description said it would work with v4 CPUs) and that I've tried going through Gigabyte's support, so we'll see what happens. At 96% on the old server... considering buying a new MB, cause I need to do something soon to keep running.
  17. Just an update for everyone: sent off to Gigabyte for repair. One day I'll get my server done... one day...
  18. I lied. I don't have two EPS-12V connections on my old PSU, just one, so I can't test. Heard back from Gigabyte; they had me try to boot with both CPUs installed but RAM installed only for CPU1. Didn't work. Still stopped at "79".
  19. Thanks Ashman. Very encouraging. Yea, I thought the dog-ears and squares were different. Thanks. Yea, I confirmed each CPU worked with a single-CPU boot on each one, and I tested the memory on single-CPU boot too. It all showed up. That's a great idea. I'll shut down the current server and borrow its PSU for a quick test. It's a Corsair 800+ if I remember correctly.
  20. I THINK I tried swapping my PSU's CPU power connections over the weekend when I was crying on the floor, but I can try again tonight. Good thought, thanks. I haven't gone down the route of buying two v3s to test. I had a spare v3 16## to update the BIOS, but I think I read that you have to have two 26##s to run dual CPUs. So I'd have to buy two more v3 CPUs, and with as much stuff as I've returned to Amazon this week, I'm worried they may start flagging me as a problem child.
      I'm going to see about calling Gigabyte today when I get home. If I have to wait 3-5 business days for them to answer every reply on their online ticketing system, I will never get this fixed.
      And the PSU only came with two 8-pin outputs. I don't know a ton about PSUs; there are some 6-pin+2-pin-looking plugs, but I don't think those are meant for CPU power. And I have no idea where my PSU box is to try to read up on it. Fun story: I stayed up reading the entire manual the other night when Gigabyte support responded with "It should be in PNP"... thinking maybe I was missing a simple BIOS setting. Only to realize they were trying to say it should be plug and play.
  21. Heard back from Gigabyte. They said it should be plug and play. Nothing helpful. I swapped out the RAM for some Kingston and confirmed both sticks work with a single-CPU boot. Dual-CPU still does nothing. Fun times.
  22. Yea, I'm going to buy a couple new sticks and try it later this week. I don't have an extra graphics card in; I'm just using the onboard. Funny enough, Gigabyte figured that out and added that feature, BUT the default BIOS setting is for it to be turned OFF. AND you have to do a BIOS update to use v4 chips, so I had to find a one-time-use v3 to run the update, and then put in the v4s. It was half off on Newegg; I'm starting to see why.
  23. Downgraded BIOS to R01; confirmed boot on one CPU; still no boot on two CPUs. I'll probably order two sticks of different RAM to test. But going through the manual, the BIOS POST code 79 means "DXE CSM INT" (presumably the DXE boot phase initializing the Compatibility Support Module). Does that mean anything to anyone? I can't find anything that makes sense on Google.
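
A minimal sketch of the xfs_repair sequence discussed in posts 3-7, assuming the affected drive is disk 9 in an Unraid array (so the device is /dev/md9). The flags shown (-n, -v, -L) are standard xfs_repair options; the exact command sequence is illustrative, not taken verbatim from the posts:

    # /dev/md9 only exists while the array is started; start it in
    # Maintenance mode from the Main tab, or xfs_repair fails with
    # "No such file or directory" as in post 6.

    # Dry run: report problems without writing anything to the disk.
    xfs_repair -n /dev/md9

    # Actual repair, verbose.
    xfs_repair -v /dev/md9

    # Last resort if a dirty log blocks the repair; -L zeroes the log
    # and can lose the most recent metadata updates.
    xfs_repair -vL /dev/md9

When xfs_repair cannot find any valid secondary superblock at all (as in post 5), there is usually little left for it to reconstruct, and rebuilding the disk from parity or restoring from backup tends to be the realistic next step.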
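
A rough sketch of the Phlex-under-nginx workaround from posts 8-10, assuming the linuxserver.io nginx image (which serves /config/www and bundles PHP); the container name, host port, appdata path, and zip filename are all illustrative assumptions, not details from the original posts:

    # Run an nginx instance with its config/web root on the array.
    docker run -d \
      --name=nginx \
      -p 8088:80 \
      -v /mnt/user/appdata/nginx:/config \
      linuxserver/nginx

    # Extract the Phlex zip into the served www folder
    # (hypothetical zip name).
    unzip Phlex-master.zip -d /mnt/user/appdata/nginx/www/

Phlex should then be reachable under http://<server>:8088/ once the files are in place (the exact path depends on how the zip unpacks).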
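
For the GPU BIOS step in posts 11-12: a hedged sketch of dumping the card's ROM from the host shell (so no spare GPU is needed), assuming the 1050 Ti sits at 0000:81:00.0 as in the hostdev entry of the XML above and that nothing is actively driving the card during the dump:

    # Expose the ROM BAR, copy it out, then close it again
    # (device address taken from the hostdev entry in the XML above).
    cd /sys/bus/pci/devices/0000:81:00.0
    echo 1 > rom
    cat rom > '/mnt/cache/VM ISOs/zotac.dump'
    echo 0 > rom

The dump is then referenced from the VM via the <rom file='...'/> line already present in the XML. Separately, Code 43 in this era was commonly the NVIDIA driver objecting to a visible hypervisor; the widely shared mitigation was adding <kvm><hidden state='on'/></kvm> (plus a <hyperv> vendor_id override) under <features>, which the XML above does not include.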
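
And a quick tally of the line-item subtotals in post 14, just to show where the grand total comes from:

    # Sum the bracketed subtotals from the part list.
    echo '293.04 + 402.99 + 828.98 + 260.96 + 94.98 + 59.18 + 41.97 + 13.38' | bc
    # -> 1995.48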