
hihihi8

Members
  • Posts: 13
  • Joined
  • Last visited


  1. Here is my libvirt log in case it's needed:
     2018-04-18 04:53:48.283+0000: 3666: info : libvirt version: 4.0.0
     2018-04-18 04:53:48.283+0000: 3666: info : hostname: Tower
     2018-04-18 04:53:48.283+0000: 3666: warning : qemuDomainObjTaint:5531 : Domain id=1 name='Player 2' uuid=10e0963d-feaa-7df9-0ee7-1160e41b5636 is tainted: high-privileges
     2018-04-18 04:53:48.283+0000: 3666: warning : qemuDomainObjTaint:5531 : Domain id=1 name='Player 2' uuid=10e0963d-feaa-7df9-0ee7-1160e41b5636 is tainted: host-cpu
     2018-04-18 05:05:32.042+0000: 3664: warning : qemuDomainObjTaint:5531 : Domain id=2 name='Player 2' uuid=10e0963d-feaa-7df9-0ee7-1160e41b5636 is tainted: high-privileges
     2018-04-18 05:05:32.042+0000: 3664: warning : qemuDomainObjTaint:5531 : Domain id=2 name='Player 2' uuid=10e0963d-feaa-7df9-0ee7-1160e41b5636 is tainted: host-cpu
     2018-04-18 05:08:43.624+0000: 3665: warning : qemuDomainObjTaint:5531 : Domain id=3 name='Player 2' uuid=10e0963d-feaa-7df9-0ee7-1160e41b5636 is tainted: high-privileges
     2018-04-18 05:08:43.624+0000: 3665: warning : qemuDomainObjTaint:5531 : Domain id=3 name='Player 2' uuid=10e0963d-feaa-7df9-0ee7-1160e41b5636 is tainted: host-cpu
     2018-04-18 05:17:59.084+0000: 3664: warning : qemuDomainObjTaint:5531 : Domain id=4 name='Player 2' uuid=10e0963d-feaa-7df9-0ee7-1160e41b5636 is tainted: high-privileges
     2018-04-18 05:17:59.084+0000: 3664: warning : qemuDomainObjTaint:5531 : Domain id=4 name='Player 2' uuid=10e0963d-feaa-7df9-0ee7-1160e41b5636 is tainted: host-cpu
     2018-04-18 05:18:29.700+0000: 3662: error : qemuMonitorIO:721 : internal error: End of file from qemu monitor
     2018-04-18 06:02:00.635+0000: 3664: warning : qemuDomainObjTaint:5531 : Domain id=5 name='Player 2' uuid=b0c2e046-bffe-72bb-53c3-0b957b22018a is tainted: high-privileges
     2018-04-18 06:02:00.635+0000: 3664: warning : qemuDomainObjTaint:5531 : Domain id=5 name='Player 2' uuid=b0c2e046-bffe-72bb-53c3-0b957b22018a is tainted: host-cpu
     2018-04-18 06:04:47.222+0000: 3665: error : virUSBDeviceFindByVendor:232 : internal error: Did not find USB device 093a:2510
     2018-04-18 06:04:47.288+0000: 3665: warning : virHostdevReAttachUSBDevices:1915 : Unable to find device 000.000 in list of active USB devices
     2018-04-18 06:05:01.518+0000: 3663: warning : qemuDomainObjTaint:5531 : Domain id=7 name='Player 1' uuid=dee09715-6d45-09ec-9074-8f49f185cf67 is tainted: high-privileges
     2018-04-18 06:05:01.518+0000: 3663: warning : qemuDomainObjTaint:5531 : Domain id=7 name='Player 1' uuid=dee09715-6d45-09ec-9074-8f49f185cf67 is tainted: host-cpu
  2. Something went wrong... The system at hand is meant to be a LAN-party rig that can run 2 VMs at once, and silly me didn't check the Dashboard before stopping the array. It hung on "Stopping array", and as it turns out, one of my VMs was still running. I hurriedly went to the VM and manually shut it down (Win 10). However, 10 minutes later it still hadn't stopped, and the system was unresponsive to pretty much anything, so I hit the reset button on my motherboard. After restarting, I found that I couldn't access my primary VM, which is the one that was running during the unclean shutdown. Said VM passes through my primary GTX 1070 (the unRAID console card) as discrete graphics, using a BIOS dump file as the ROM. It gets to the Windows 10 login screen before BSOD-ing with "VIDEO SCHEDULER INTERNAL ERROR". After Googling the error, it seemed to be a driver problem, so I set the VM's graphics to VNC and remotely uninstalled the Nvidia drivers from the system. However, the error persisted, and I'm beginning to suspect it has something to do with the BIOS dump file being corrupt. On the other hand, my secondary VM works just fine. It passes through the second GTX 1070 (without a BIOS dump) and is pretty much a carbon copy of the primary VM. What is the easiest way that I can recover VM 1, and clone VM 2 over to it? They are both just OSes (files are stored on user shares). I've attached my syslog.txt file below, as it's quite long. Thanks! PS: Is it possible that other unRAID files might have been corrupted in the process as well? Is there something similar to a checksum that can verify their integrity? syslog.txt
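     To make the second half of the question concrete: the crude approach I can picture from the unRAID console is copying VM 2's working vdisk over VM 1's and then comparing checksums. A minimal sketch, assuming both VMs are stopped first and using made-up /mnt/user/domains paths (not my actual layout):
       cp /mnt/user/domains/VM2/vdisk1.img /mnt/user/domains/VM1/vdisk1.img      # overwrite the broken vdisk with the working one
       md5sum /mnt/user/domains/VM2/vdisk1.img /mnt/user/domains/VM1/vdisk1.img  # the two hashes should match if the copy is intact
     Is that roughly the right idea, or is there a cleaner way built into unRAID?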
  3. Found a solution: https://www.drivereasy.com/knowledge/fix-dns-probe-finished-nxdomain-chrome/ (Method 2: Releasing and Renewing IP Address).
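     For anyone who lands here later: Method 2 boils down to releasing and renewing the Windows client's IP address from an elevated Command Prompt, roughly:
       ipconfig /release
       ipconfig /renew
     Running ipconfig /flushdns afterwards to clear the local DNS cache shouldn't hurt either.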
  4. Note that I can still manage to connect to it, just with much greater difficulty than before. It sometimes takes a couple of restarts of Windows Explorer for it to show up, and then a restart of Chrome in order to access http://tower. Once I do connect, though, it's fine.
  5. Hi, I just set up my personal unRAID 6 NAS/VM system not long ago, but I've recently been having issues connecting to it from my PC. I'm not an expert in this realm, so I'll just tell the entire tale so I don't leave anything important out. In the beginning, it worked perfectly: I could run 2 VMs off the system, and while those weren't in use, the server would act as a NAS for my main PC. The unRAID system is connected to my gigabit router through an Ethernet cable, and so is my main PC. Problems began one day when I was testing a GPU (Vega 64) to see if it was good or bad. I plugged it into the system, then booted it up. It turned out to be fine, and out of curiosity, I proceeded to try to dual-GPU a VM. The original GPU was a GTX 1070, which is also the unRAID console card, and which I passed through by dumping its BIOS. With both GPUs (1070 + Vega 64) selected in the VM page, I tried to boot the VM. However, IIRC, Windows didn't start, and all I was left with was a black screen. I powered down the system and removed the Vega 64, since it wasn't meant for this system anyway. Afterwards, the unRAID system itself didn't show any abnormalities, but I did notice that I had a hard time connecting to the NAS portion from my main PC. Before you ask: yes, I have network-mapped all the user shares I want to access, the settings for those shares are correct, and the array is started. I had to restart the computer before I had any success connecting to it. Even then, http://tower took a very long time to connect, returning multiple times as "DNS_PROBE_FINISHED_NXDOMAIN" in Chrome. And even after I had logged in to the NAS, the network drives took around 3 minutes to appear; 3 minutes of me refreshing Windows Explorer constantly. It used to connect almost instantly, and the drives would show up within seconds of booting up and opening File Explorer. Now, they often just appear with a red "x", and if I try to connect I get something along the lines of "Y: drive was not found". I've also noticed that "TOWER" rarely shows up in Windows Explorer's Network section anymore. Is it possible that the GPU incident, plus a possibly unclean shutdown/force shutdown of a VM, caused something to become corrupt and, in turn, led to my issues? I've also been messing with Windows 10's adapter and binding settings, setting the Ethernet connection as default, but that hasn't helped either... Thanks in advance!
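     For what it's worth, the standard quick checks from the Windows side to separate a name-resolution problem from a connectivity problem would be something like this (192.168.1.x is a placeholder for the server's real IP, not my actual address):
       nslookup tower
       ping tower
       ping 192.168.1.x
     If the IP responds but the name doesn't resolve, that points at name resolution (which is what DNS_PROBE_FINISHED_NXDOMAIN suggests) rather than the server itself.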
  6. Just noticed something... is that showing up as 3 devices? It seems as if the Intel CPU's PCI bridge is grouped with the PLX chip (as sometimes happens when it's grouped with a GPU), and then the PLX chip itself has 3 entries: #1 grouped with the PCI bridge, #2 grouped with GTX1070 #1, and #3 grouped with GTX1070 #2. So essentially, the PLX chip is acting as a second PCI bridge? Just my observations, to be taken with a grain of salt, but hopefully that's helpful!
  7. Hi, just tried that; it still doesn't work. Since the motherboard uses a PLX chip, I've also tried passing through the PLX chip ID in syslinux.cfg (see the sketch at the end of this post), but it hasn't had any noticeable effect. One thing probably worth noting is that the problem likely comes down to how the PLX chip inherently works: the SR-X's poor design means that one CPU's entire 40 PCI-E lanes go to waste, and the rest are split up by a PEX8747 PLX chip, and even that isn't done perfectly. According to mosie in his rant on the EVGA forums, the PLX chip controls both GTX 1070s' PCI-E slots (slots 3 + 5), which is probably why they end up grouped together with the PLX chip. My current plan is to swap the top card (currently the dual-slot GTX 650) for a cheap single-slot 9500GT, and then move GTX 1070 #1 up to slot 2, which is linked to the CPU directly. I'll see how that works out in a few days when the single-slot card arrives. If that doesn't work, I'll resort to getting rid of the GTX 650, moving GTX 1070 #1 up to slot 1 and using it as the console card, then dumping its ROM and passing it through to one of the VMs; that's something I've read/watched about but haven't fully understood or attempted yet. Cheers and thanks!
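     For reference, "passing through the PLX chip ID" just means adding its vendor:device pair (10b5:8747, the ID shown in the IOMMU listing in my original post further down) to the stub list on the append line. A sketch of the default boot entry with that added (an approximation, not my exact file):
       label unRAID OS
         menu default
         kernel /bzimage
         append pcie_acs_override=downstream pci-stub.ids=10de:1b81,10de:10f0,10b5:8747 initrd=/bzroot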
  8. Nope. If I may, how would I implement that? Thanks!
  9. Hi, I'm currently troubleshooting an EVGA SR-X. On my machine, slot 1 holds the unRAID console card (GTX 650), and slots 3 + 5 hold GTX 1070s meant to be passed through to separate VMs running simultaneously. I haven't gotten it to work yet, and it doesn't look pretty. The SR-X has a PLX chip (PEX8747) onboard, and I think it is causing the GPU passthrough issues. If my theory is correct, it really depends on your motherboard's design. The SR-X's PCI-E lanes are wired terribly; one reason is that all 4 bottom slots come from the PLX chip, and only CPU1's PCI-E lanes are used (that's right, the entirety of CPU0's 40 lanes goes to waste...). There's a rant by mosie on the EVGA forum about this exact issue (not unRAID, but PCI-E lane design), and it's where I get my reasoning from. According to mosie, both slot 3 and slot 5 are connected directly to the PLX chip, and if you look at my current scenario, they are grouped together even with ACS Override enabled. I think it inherently has to do with the way PLX chips work. So for me, the plan is to replace the top GTX 650 with a single-slot card, move slot 3's card up to slot 2 (which is controlled by the CPU), and give it a try. Again, I must stress that this is coming from me, a newb at unRAID myself, so I can't guarantee anything. Just know that you have at least one case against PLX unRAIDing... Cheers
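     If anyone wants to check how their own slots end up grouped, dumping IOMMU group membership from the console looks something like this (a generic Linux snippet, nothing unRAID-specific):
       for g in /sys/kernel/iommu_groups/*; do
         echo "IOMMU group ${g##*/}:"
         for d in "$g"/devices/*; do
           lspci -nns "${d##*/}"    # print vendor:device IDs and description for each device in the group
         done
       done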
  10. Hi guys/gals, it's my first time here, and I've run into some issues with my first unRAID build. Here's the hardware configuration:
      Motherboard: EVGA SR-X (Intel C606/X79 platform)
      CPUs: Intel E5-2687W (v1) x2
      RAM: Kingston DDR3 8GB x12 (96GB total)
      SSDs: Kingston 480GB + Intel 512GB
      HDD mass storage: 2TB
      GPU: GTX 650, PCI-E slot 1 (for the unRAID GUI/console)
      GPUs: GTX 1070 x2, PCI-E slots 3 + 5 (one per VM)
      I'm new to virtual machines, and I'm trying to make my own version of Linus's <2 gamers 1 CPU> using old hardware. After rolling back not 1 or 2, but 3 BIOS versions, I finally got IOMMU and HVM enabled (EVGA took away the VT-d setting for no reason). However, the problem is that both GTX 1070s end up in the same IOMMU group (group 3), even with ACS Override enabled:
      IOMMU group 0: [8086:3c00] 00:00.0 Host bridge: Intel Corporation Xeon E5/Core i7 DMI2 (rev 05)
      IOMMU group 1: [8086:3c02] 00:01.0 PCI bridge: Intel Corporation Xeon E5/Core i7 IIO PCI Express Root Port 1a (rev 05)
      IOMMU group 2: [8086:3c04] 00:02.0 PCI bridge: Intel Corporation Xeon E5/Core i7 IIO PCI Express Root Port 2a (rev 05)
      IOMMU group 3:
        [8086:3c08] 00:03.0 PCI bridge: Intel Corporation Xeon E5/Core i7 IIO PCI Express Root Port 3a in PCI Express Mode (rev 05)
        [10b5:8747] 05:00.0 PCI bridge: PLX Technology, Inc. PEX 8747 48-Lane, 5-Port PCI Express Gen 3 (8.0 GT/s) Switch (rev ba)
        [10b5:8747] 06:08.0 PCI bridge: PLX Technology, Inc. PEX 8747 48-Lane, 5-Port PCI Express Gen 3 (8.0 GT/s) Switch (rev ba)
        [10b5:8747] 06:10.0 PCI bridge: PLX Technology, Inc. PEX 8747 48-Lane, 5-Port PCI Express Gen 3 (8.0 GT/s) Switch (rev ba)
        [10de:1b81] 07:00.0 VGA compatible controller: NVIDIA Corporation GP104 [GeForce GTX 1070] (rev a1)
        [10de:10f0] 07:00.1 Audio device: NVIDIA Corporation GP104 High Definition Audio Controller (rev a1)
        [10de:1b81] 08:00.0 VGA compatible controller: NVIDIA Corporation GP104 [GeForce GTX 1070] (rev a1)
        [10de:10f0] 08:00.1 Audio device: NVIDIA Corporation GP104 High Definition Audio Controller (rev a1)
      Starting the second VM gives an error saying that one of the GPUs is being used by the other VM. I have tried jonp's Guide on Passing-Through NICs, but to no avail. I've tried the following syslinux.cfg's (I honestly have no idea what I'm doing here):
      default menu.c32
      menu title Lime Technology, Inc.
      prompt 0
      timeout 50
      label unRAID OS
        menu default
        kernel /bzimage
        append pcie_acs_override=downstream pci-stub.ids=10de:1b81 pci-stub.ids=10de:10f0 initrd=/bzroot
      label unRAID OS GUI Mode
        kernel /bzimage
        append pcie_acs_override=downstream pci-stub.ids=10de:1b81 pci-stub.ids=10de:10f0 initrd=/bzroot,/bzroot-gui
      label unRAID OS Safe Mode (no plugins, no GUI)
        kernel /bzimage
        append pcie_acs_override=downstream initrd=/bzroot unraidsafemode
      label unRAID OS GUI Safe Mode (no plugins)
        kernel /bzimage
        append pcie_acs_override=downstream initrd=/bzroot,/bzroot-gui unraidsafemode
      label Memtest86+
        kernel /memtest
      and this second version, which is the same except that the GPU audio device (pci-stub.ids=10de:10f0) is removed:
      default menu.c32
      menu title Lime Technology, Inc.
      prompt 0
      timeout 50
      label unRAID OS
        menu default
        kernel /bzimage
        append pcie_acs_override=downstream pci-stub.ids=10de:1b81 initrd=/bzroot
      label unRAID OS GUI Mode
        kernel /bzimage
        append pcie_acs_override=downstream pci-stub.ids=10de:1b81 initrd=/bzroot,/bzroot-gui
      label unRAID OS Safe Mode (no plugins, no GUI)
        kernel /bzimage
        append pcie_acs_override=downstream initrd=/bzroot unraidsafemode
      label unRAID OS GUI Safe Mode (no plugins)
        kernel /bzimage
        append pcie_acs_override=downstream initrd=/bzroot,/bzroot-gui unraidsafemode
      label Memtest86+
        kernel /memtest
      However, neither worked. At this point, I don't know what to do. If unRAID is able to change IOMMU groups using "ACS Override", is it possible to manually assign groups? Any help would be appreciated. Thanks!
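      One thing I haven't tried yet, noting it here as a possibility rather than a known fix: the ACS override reportedly has a stronger form that also splits multifunction devices, which would turn the append line into something like:
        append pcie_acs_override=downstream,multifunction pci-stub.ids=10de:1b81 pci-stub.ids=10de:10f0 initrd=/bzroot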