vincheesel

Members • Posts: 11
  1. Thanks - yeah, I figured that out while researching further. It ended up being the OS: I ran an experiment and it worked fine on Windows 7 with KVM. I had used an older Windows 10 ISO; after downloading the Creators Update ISO and reinstalling, it's working fine now. Hope this helps someone.
  2. Hi All, just thought I'd mention that the onboard video is definitely set to boot first; I did also try stubbing the card (quick sketch of what I mean below).
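     (For anyone wondering what stubbing means here: binding the GPU to a placeholder driver at boot so the unRAID host doesn't grab it. On unRAID that's an addition to the syslinux append line - a rough sketch only, using my GTX 1060's vendor:device IDs from the IOMMU listing in the post below; older guides use pci-stub.ids= instead of vfio-pci.ids=:)

        label unRAID OS
          kernel /bzimage
          append vfio-pci.ids=10de:1c02,10de:10f1 initrd=/bzroot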
  3. Hi Guys, I've been pulling my hair out over this issue and have spent countless hours on it; hoping someone can assist. The issue is that I'm having trouble getting GPU passthrough to work for my Gigabyte GTX 1060 on my KVM Windows 10 64-bit machine: it detects only as a "Video Controller (VGA Compatible)" even after attempting to install drivers. I had a spare GTX 760 lying around, and if I use the GTX 760 it passes through to the VM with no issues. I tried using the exact same configuration I used for the GTX 760 (with the exception of changing the video/audio card and VBIOS) in Windows - unfortunately no luck!
     So back to the GTX 1060. When I attempt to install the driver package from Nvidia, it displays "cannot find the hardware" and the installer can't continue. When I attempt to use the Device Manager wizard, it goes through the motions, tries to install the driver, then states:

        Windows encountered a problem installing the driver software for your device
        Windows found driver software for your device but encountered an error while attempting to install it
        NVIDIA GeForce GTX 1060 3GB

     After that error, I checked the Windows Event Viewer and it displayed the following:

        Driver Management concluded the process to install driver nv_dispi.inf_amd64_633a9032a4737012\nv_dispi.inf for Device Instance ID PCI\VEN_10DE&DEV_1C02&SUBSYS_37241458&REV_A1\3&13C0B0C5&0&28 with the following status: 0xE0E00030.

     There are also the following errors in the KVM logs (several repeats of these):

        smbus: error: Unexpected recv start condition in state 3
        smbus: error: Unexpected read in state -1
        smbus: error: Unexpected NACK in state -1
        smbus: error: Unexpected NACK in state -1

     I'm using the VBIOS from the TechPowerUp website and removed the Nvidia header using SpaceInvader One's tutorial on YouTube (a rough sketch of that header-strip step is at the end of this post); I checked the model and it's 100% the correct VBIOS.
     As for troubleshooting, I've tried the following:
     • Used another PCIe slot (also tested the same slot I used for the GTX 760)
     • Tried both i440fx and Q35, and both BIOSes
     • Enabled/disabled the Hyper-V feature and tried various toggles
     • Tested the GTX 1060 card on my other workstation
     • Tested my old graphics card (GTX 760) on the unRAID server - it passed through and worked with no issues
     • Tested various machine versions
     • Tried the other BIOS

     Specs:
     • Running unRAID 6.3.5, 16GB RAM
     • Running Docker
     • PCIe ACS Override setting enabled
     • Kernel & CPU: Linux UNRAID 4.9.30-unRAID #1 SMP PREEMPT Fri May 26 13:56:36 PDT 2017 x86_64, Intel(R) Core(TM) i5-3570 CPU @ 3.40GHz GenuineIntel GNU/Linux

     VM configuration:

        <domain type='kvm'>
          <name>Windows10</name>
          <uuid>4ff24b5b-5220-15bb-3516-3feff7fb6533</uuid>
          <metadata>
            <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/>
          </metadata>
          <memory unit='KiB'>4194304</memory>
          <currentMemory unit='KiB'>4194304</currentMemory>
          <memoryBacking>
            <nosharepages/>
          </memoryBacking>
          <vcpu placement='static'>2</vcpu>
          <cputune>
            <vcpupin vcpu='0' cpuset='2'/>
            <vcpupin vcpu='1' cpuset='3'/>
          </cputune>
          <os>
            <type arch='x86_64' machine='pc-i440fx-2.7'>hvm</type>
            <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
            <nvram>/etc/libvirt/qemu/nvram/4ff24b5b-5220-15bb-3516-3feff7fb6533_VARS-pure-efi.fd</nvram>
          </os>
          <features>
            <acpi/>
            <apic/>
            <hyperv>
              <relaxed state='on'/>
              <vapic state='on'/>
              <spinlocks state='on' retries='8191'/>
              <vendor_id state='on' value='none'/>
            </hyperv>
          </features>
          <cpu>
            <topology sockets='1' cores='2' threads='1'/>
          </cpu>
          <clock offset='localtime'>
            <timer name='rtc' tickpolicy='catchup'/>
            <timer name='pit' tickpolicy='delay'/>
            <timer name='hpet' present='no'/>
          </clock>
          <on_poweroff>destroy</on_poweroff>
          <on_reboot>restart</on_reboot>
          <on_crash>restart</on_crash>
          <devices>
            <emulator>/usr/local/sbin/qemu</emulator>
            <disk type='file' device='disk'>
              <driver name='qemu' type='raw' cache='writeback'/>
              <source file='/mnt/user/KVM/VM/Windows10/vdisk1.img'/>
              <target dev='hdc' bus='virtio'/>
              <boot order='1'/>
              <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
            </disk>
            <controller type='usb' index='0' model='nec-xhci'>
              <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
            </controller>
            <controller type='pci' index='0' model='pci-root'/>
            <controller type='virtio-serial' index='0'>
              <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
            </controller>
            <interface type='bridge'>
              <mac address='52:54:00:68:c4:d3'/>
              <source bridge='br0'/>
              <model type='virtio'/>
              <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
            </interface>
            <serial type='pty'>
              <target port='0'/>
            </serial>
            <console type='pty'>
              <target type='serial' port='0'/>
            </console>
            <channel type='unix'>
              <target type='virtio' name='org.qemu.guest_agent.0'/>
              <address type='virtio-serial' controller='0' bus='0' port='1'/>
            </channel>
            <input type='mouse' bus='ps2'/>
            <input type='keyboard' bus='ps2'/>
            <hostdev mode='subsystem' type='pci' managed='yes'>
              <driver name='vfio'/>
              <source>
                <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
              </source>
              <rom file='/mnt/user/KVM/VBIOS/gtx1060.dump'/>
              <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
            </hostdev>
            <hostdev mode='subsystem' type='pci' managed='yes'>
              <driver name='vfio'/>
              <source>
                <address domain='0x0000' bus='0x01' slot='0x00' function='0x1'/>
              </source>
              <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
            </hostdev>
            <memballoon model='virtio'>
              <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
            </memballoon>
          </devices>
        </domain>

     System Devices - PCI Devices and IOMMU Groups:
     Warning: Your system has booted with the PCIe ACS Override setting enabled. The list below does not reflect the way IOMMU would naturally group devices.
     To see natural IOMMU groups for your hardware, go to the VM Settings page and set the PCIe ACS Override setting to No.

        IOMMU group 0: [8086:0150] 00:00.0 Host bridge: Intel Corporation Xeon E3-1200 v2/3rd Gen Core processor DRAM Controller (rev 09)
        IOMMU group 1: [8086:0151] 00:01.0 PCI bridge: Intel Corporation Xeon E3-1200 v2/3rd Gen Core processor PCI Express Root Port (rev 09)
        IOMMU group 2: [8086:0155] 00:01.1 PCI bridge: Intel Corporation Xeon E3-1200 v2/3rd Gen Core processor PCI Express Root Port (rev 09)
        IOMMU group 3: [8086:0152] 00:02.0 VGA compatible controller: Intel Corporation Xeon E3-1200 v2/3rd Gen Core processor Graphics Controller (rev 09)
        IOMMU group 4: [8086:1e31] 00:14.0 USB controller: Intel Corporation 7 Series/C210 Series Chipset Family USB xHCI Host Controller (rev 04)
        IOMMU group 5: [8086:1e3a] 00:16.0 Communication controller: Intel Corporation 7 Series/C216 Chipset Family MEI Controller #1 (rev 04)
        IOMMU group 6: [8086:1503] 00:19.0 Ethernet controller: Intel Corporation 82579V Gigabit Network Connection (rev 04)
        IOMMU group 7: [8086:1e2d] 00:1a.0 USB controller: Intel Corporation 7 Series/C216 Chipset Family USB Enhanced Host Controller #2 (rev 04)
        IOMMU group 8: [8086:1e10] 00:1c.0 PCI bridge: Intel Corporation 7 Series/C216 Chipset Family PCI Express Root Port 1 (rev c4)
        IOMMU group 9: [8086:1e14] 00:1c.2 PCI bridge: Intel Corporation 7 Series/C210 Series Chipset Family PCI Express Root Port 3 (rev c4)
        IOMMU group 10: [8086:1e16] 00:1c.3 PCI bridge: Intel Corporation 7 Series/C216 Chipset Family PCI Express Root Port 4 (rev c4)
        IOMMU group 11: [8086:244e] 00:1c.4 PCI bridge: Intel Corporation 82801 PCI Bridge (rev c4)
        IOMMU group 12: [8086:1e1e] 00:1c.7 PCI bridge: Intel Corporation 7 Series/C210 Series Chipset Family PCI Express Root Port 8 (rev c4)
        IOMMU group 13: [8086:1e26] 00:1d.0 USB controller: Intel Corporation 7 Series/C216 Chipset Family USB Enhanced Host Controller #1 (rev 04)
        IOMMU group 14:
          [8086:1e44] 00:1f.0 ISA bridge: Intel Corporation Z77 Express Chipset LPC Controller (rev 04)
          [8086:1e02] 00:1f.2 SATA controller: Intel Corporation 7 Series/C210 Series Chipset Family 6-port SATA Controller [AHCI mode] (rev 04)
          [8086:1e22] 00:1f.3 SMBus: Intel Corporation 7 Series/C216 Chipset Family SMBus Controller (rev 04)
        IOMMU group 15:
          [10de:1c02] 01:00.0 VGA compatible controller: NVIDIA Corporation GP106 [GeForce GTX 1060 3GB] (rev a1)
          [10de:10f1] 01:00.1 Audio device: NVIDIA Corporation GP106 High Definition Audio Controller (rev a1)
        IOMMU group 16:
          [10de:1187] 02:00.0 VGA compatible controller: NVIDIA Corporation GK104 [GeForce GTX 760] (rev a1)
          [10de:0e0a] 02:00.1 Audio device: NVIDIA Corporation GK104 HDMI Audio Controller (rev a1)
        IOMMU group 17: [1000:0072] 03:00.0 Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon] (rev 03)
        IOMMU group 18: [1b21:1042] 04:00.0 USB controller: ASMedia Technology Inc. ASM1042 SuperSpeed USB Host Controller
        IOMMU group 19: [1b21:0612] 05:00.0 SATA controller: ASMedia Technology Inc. ASM1062 Serial ATA Controller (rev 01)
        IOMMU group 20: [1b21:1080] 06:00.0 PCI bridge: ASMedia Technology Inc. ASM1083/1085 PCIe to PCI Bridge (rev 03)
        IOMMU group 21: [1b21:1042] 08:00.0 USB controller: ASMedia Technology Inc. ASM1042 SuperSpeed USB Host Controller

     CPU Thread Pairings: cpu 0, cpu 1, cpu 2, cpu 3

     USB Devices:
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 001 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
        Bus 001 Device 003: ID 1b1c:1c00 Corsair
        Bus 001 Device 004: ID 051d:0002 American Power Conversion Uninterruptible Power Supply
        Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 002 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
        Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 003 Device 002: ID 413c:2003 Dell Computer Corp. Keyboard
        Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
        Bus 005 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 006 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
        Bus 007 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 007 Device 002: ID 0781:5567 SanDisk Corp. Cruzer Blade
        Bus 008 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub

     SCSI Devices:
        [0:0:0:0] disk SanDisk Cruzer Blade 1.01 /dev/sda 4.00GB
        [1:0:0:0] disk ATA ST3000DM001-1CH1 CC24 /dev/sdb 3.00TB
        [1:0:1:0] disk ATA ST3000DM001-1CH1 CC24 /dev/sdc 3.00TB
        [1:0:2:0] disk ATA ST3000DM001-1CH1 CC24 /dev/sdd 3.00TB
        [1:0:3:0] disk ATA ST3000DM001-1CH1 CC24 /dev/sde 3.00TB
        [1:0:4:0] disk ATA ST2000DL003-9VT1 CC32 /dev/sdf 2.00TB
        [1:0:5:0] disk ATA ST2000DL003-9VT1 CC32 /dev/sdg 2.00TB
        [1:0:6:0] disk ATA ST2000DL003-9VT1 CC32 /dev/sdh 2.00TB
        [1:0:7:0] disk ATA ST3000DM001-1CH1 CC27 /dev/sdj 3.00TB
        [4:0:0:0] disk ATA ST2000DL003-9VT1 CC32 /dev/sdi 2.00TB
        [5:0:0:0] disk ATA ST2000DL003-9VT1 CC32 /dev/sdk 2.00TB
        [6:0:0:0] disk ATA ST3000DM001-1CH1 CC27 /dev/sdl 3.00TB
        [7:0:0:0] disk ATA ST3000DM001-1ER1 CC25 /dev/sdm 3.00TB
        [8:0:0:0] disk ATA ST4000DM000-2AE1 0001 /dev/sdn 4.00TB
        [9:0:0:0] disk ATA KINGSTON SV300S3 BBF2 /dev/sdo 480GB

     Sorry for the lengthy post - if someone could help it would be greatly appreciated! Thanks in advance
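     Edit: for anyone wanting to sanity-check their own dump, here's a rough Python sketch of the header-strip step from that tutorial (file names are just examples from my setup). The idea, as I understand it, is that a GPU-Z dump of an NVIDIA card carries a vendor wrapper before the actual VBIOS, and you cut everything before the 0x55AA expansion-ROM signature that sits just ahead of the "VIDEO" string:

        # Rough sketch only - strip the NVIDIA wrapper from a VBIOS dump by
        # cutting everything before the 0x55AA signature preceding "VIDEO".
        data = open("gtx1060.dump", "rb").read()       # example input name
        marker = data.find(b"VIDEO")                   # ASCII marker inside the real ROM
        assert marker != -1, "no VIDEO marker - is this an NVIDIA ROM dump?"
        start = data.rfind(b"\x55\xaa", 0, marker)     # last ROM signature before it
        assert start != -1, "no 0x55AA signature found before the marker"
        open("gtx1060.rom", "wb").write(data[start:])  # example output name
        print(f"stripped {start} header bytes")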
  4. I'm no expert, but for anyone after a template: create a file called my-BindDNS.xml in this SMB share:
        \\<unraid server>\flash\config\plugins\dockerMan\templates-user
     I had to use host networking mode, as port 53 UDP was already in use (I think by dnsmasq) on my unRAID box. Add the code below to the my-BindDNS.xml file and it should then appear in your Add Container list. You will need to alter the host path below to match your configuration. (A roughly equivalent docker run is sketched after the template.)

        <?xml version="1.0" encoding="utf-8"?>
        <Container>
          <Name>BindDNS</Name>
          <Description>Bind DNS to host a DNS in your network environment.[br][br]&#13;
        Default root password for webmin is password&#13;
          </Description>
          <Registry>https://registry.hub.docker.com/u/sameersbn/bind/</Registry>
          <Repository>sameersbn/bind</Repository>
          <BindTime>true</BindTime>
          <Privileged>false</Privileged>
          <Environment/>
          <Networking>
            <Mode>host</Mode>
            <Publish/>
          </Networking>
          <Data>
            <Volume>
              <HostDir>/mnt/user/Docker/BindDNS</HostDir>
              <ContainerDir>/data</ContainerDir>
              <Mode>rw</Mode>
            </Volume>
          </Data>
          <Version>1dc0cb06</Version>
          <WebUI>https://[IP]:[PORT:10000]/</WebUI>
          <Banner>http://blog.learningtree.com/wp-content/uploads/2015/10/dns_bind-190x190.png</Banner>
          <Icon>http://blog.learningtree.com/wp-content/uploads/2015/10/dns_bind-190x190.png</Icon>
          <ExtraParams></ExtraParams>
        </Container>

     Good luck!
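     (For reference, the template boils down to roughly this docker run - a sketch only, using the host path from my setup; adjust it to yours:)

        docker run -d --net=host -v /mnt/user/Docker/BindDNS:/data sameersbn/bind

     Host networking is the important part here: in that mode the container shares the host's network stack, so the Publish/port mappings are ignored and the container binds port 53 directly.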
  5. Thanks. The syslinux.cfg is a bit full as I was testing all versions; the default 6.1.3 works with no issues, but I can only get 6.0.1 to boot with the mediabuild. All the modified 6.1.3 versions failed. I'm using a Sony PlayTV.

        default /syslinux/menu.c32
        menu title Lime Technology
        prompt 0
        timeout 50
        label unRAID OS 6.0.1
          menu default
          kernel /bzimage
          append initrd=/bzroot
        label unRAID OS 6.0.1 Safe Mode (no plugins)
          kernel /bzimage
          append initrd=/bzroot unraidsafemode
        label unRAID OS 6.1.3
          kernel /bzimage-6-1-3
          append initrd=/bzroot-6-1-3
        label unRAID OS Safe Mode (no plugins)
          kernel /bzimage-6-1-3
          append initrd=/bzroot-6-1-3 unraidsafemode
        label unRAID OS Openelec
          kernel /bzimage-6-1-3-openelec
          append initrd=/bzroot-6-1-3-openelec
        label unRAID OS Openelec Safe Mode (no plugins)
          kernel /bzimage-6-1-3-openelec
          append initrd=/bzroot-6-1-3-openelec unraidsafemode
        label unRAID OS TBS
          kernel /bzimage-6-1-3-tbs
          append initrd=/bzroot-6-1-3-tbs
        label unRAID OS TBS Safe Mode (no plugins)
          kernel /bzimage-6-1-3-tbs
          append initrd=/bzroot-6-1-3-tbs unraidsafemode
        label unRAID OS Digital Devices Experimental MediaBuild
          kernel /bzimage-6-1-3-ddexp
          append initrd=/bzroot-6-1-3-ddexp
        label unRAID OS Digital Devices Experimental MediaBuild Safe Mode (no plugins)
          kernel /bzimage-6-1-3-ddexp
          append initrd=/bzroot-6-1-3-ddexp unraidsafemode
        label Memtest86+
          kernel /memtest
  6. Hi Guys, great thread, and good work on the constant updates. My kernel fails to load: the bzimage loads, but then the bzroot fails after 3 dots; for a millisecond I can see a kernel error, and then it loops back to the unRAID boot menu. I've had to revert to 6.0.0 to get this working... I tried 6.1 - same issue. I can't get any logs... unless there is a way around that? Any ideas?
  7. Hi All, really loving this unRAID v6 - it's come a long way. I was using VMware to power my unRAID prior to v6; the only feature I really miss is the ability to snapshot. It would be great if this could be added to the roadmap. Great work so far, really impressed!
  8. Hi all, I've been running unRAID 5.0 RC11 for a long time (over 6 months) without any errors/issues. I decided to upgrade to the final 5.0 release (kernel 3.9.6) as I noticed it was available, and that's when I started having major issues.
     Physical server: Core i5-3570 processor (Ivy Bridge) with 16GB RAM; unRAID is virtualised. The Adaptec card is a 1430SA 4-port card via VM passthrough; all the other drives are running via raw device mappings off the motherboard (can't remember the model - if it's important I will share).
     The problem is that all the drives on the Adaptec card have been producing errors, while the raw-device-mapped drives produced no errors at all. I tried upgrading to the latest RC of unRAID 5.0.1 (RC1), kernel 3.9.11, however I had the same problem. I've downgraded to unRAID 5.0 RC13 with kernel 3.9.3 and everything is back to normal:
        Linux UNRAID 3.9.3-unRAID #8 SMP Sun Jun 2 12:30:00 PDT 2013 i686 Intel® Core i5-3570 CPU @ 3.40GHz GenuineIntel GNU/Linux
     Did anyone experience these issues with the same card? Is this an incompatibility between the card and the newer kernels (3.9.6/3.9.11)? Sorry I didn't keep any logs. I've seen this issue in the forum, however it's outdated - http://lime-technology.com/forum/index.php?topic=22253.5;wap2
     Thanks in advance
  9. All good - it's working fine now. I gave it more resources... weird.
  10. Hi guys, I've recently updated my server's hardware to the point of overkill for an unRAID server, so I did some research and virtualised it to take full advantage of the resources. I'm currently running ESXi 5.1 with an Adaptec 1430SA and 8 onboard ports (ASUS P8Z77-PRO motherboard), a Core i5-3550 processor and 16GB RAM. The Adaptec is configured in pass-through mode (it's not detected by ESXi but manages to pass through nicely), and the onboard controller is configured as SATA raw device mappings via LSI SAS.
      unRAID seems to work well - I can transfer files at 20-30MB/s, which I'm happy with. However, since moving to ESXi, the parity check runs extremely slowly: on average 500KB/s-1MB/s, which is not practical.
      I have 11 drives in total in this unRAID box:
      • 1430SA running 4x 3TB Seagate drives (one acting as the parity)
      • Onboard running 5x 2TB Seagate drives + 1x 1.5TB Seagate drive + 1TB Seagate cache drive
      ESXi is running off an onboard SATA III port via SSD; unRAID is running as a VMDK with the USB key for the serial. What do you think my problem is here, guys? Should I try moving my parity to onboard? Everything else seems to work nicely. Anyone got any advice for me? I'm thinking of possibly getting a basic controller to run the SSD off and then passing through the onboard drives.