machineshake123

Members

  • Posts: 17
  • Joined
  • Last visited

Converted

  • Gender: Undisclosed

machineshake123's Achievements

Noob (1/14)

Reputation: 0

  1. Just edited my post above and included the diagnostics and the issues. Thanks.
  2. Hi guys, I keep getting these errors whenever I do any disk-intensive tasks; they usually happen when writing to the array. Nothing has changed, and I am on the latest version of unRAID:

     Jun 23 16:51:43 Tower kernel: ------------[ cut here ]------------
     Jun 23 16:51:43 Tower kernel: WARNING: CPU: 0 PID: 0 at net/sched/sch_generic.c:316 dev_watchdog+0x181/0x1dc
     Jun 23 16:51:43 Tower kernel: NETDEV WATCHDOG: eth0 (e1000e): transmit queue 0 timed out
     Jun 23 16:51:43 Tower kernel: Modules linked in: xt_CHECKSUM iptable_mangle ipt_REJECT nf_reject_ipv4 ebtable_filter ebtables vhost_net tun vhost macvtap macvlan xt_nat veth ipt_MASQUERADE nf_nat_masquerade_ipv4 iptable_nat nf_conntrack_ipv4 nf_nat_ipv4 iptable_filter ip_tables nf_nat md_mod bonding hid_logitech_hidpp mxm_wmi x86_pkg_temp_thermal coretemp kvm_intel kvm e1000e i2c_i801 i2c_smbus i2c_core ptp hid_logitech_dj nvme pps_core ahci libahci nvme_core wmi
     Jun 23 16:51:43 Tower kernel: CPU: 0 PID: 0 Comm: swapper/0 Not tainted 4.9.30-unRAID #1
     Jun 23 16:51:43 Tower kernel: Hardware name: ASUS All Series/X99-A II, BIOS 1701 03/31/2017
     Jun 23 16:51:43 Tower kernel: ffff880c2f203db0 ffffffff813a4a1b ffff880c2f203e00 ffffffff819aa12f
     Jun 23 16:51:43 Tower kernel: ffff880c2f203df0 ffffffff8104d0d9 0000013c2f203e68 ffff880c25064000
     Jun 23 16:51:43 Tower kernel: ffff880c23aa7800 ffff880c250643a0 0000000000000000 0000000000000001
     Jun 23 16:51:43 Tower kernel: Call Trace:
     Jun 23 16:51:43 Tower kernel: <IRQ>
     Jun 23 16:51:43 Tower kernel: [<ffffffff813a4a1b>] dump_stack+0x61/0x7e
     Jun 23 16:51:43 Tower kernel: [<ffffffff8104d0d9>] __warn+0xb8/0xd3
     Jun 23 16:51:43 Tower kernel: [<ffffffff8104d13a>] warn_slowpath_fmt+0x46/0x4e
     Jun 23 16:51:43 Tower kernel: [<ffffffff815a848d>] dev_watchdog+0x181/0x1dc
     Jun 23 16:51:43 Tower kernel: [<ffffffff815a830c>] ? qdisc_rcu_free+0x39/0x39
     Jun 23 16:51:43 Tower kernel: [<ffffffff815a830c>] ? qdisc_rcu_free+0x39/0x39
     Jun 23 16:51:43 Tower kernel: [<ffffffff81090ccc>] call_timer_fn.isra.5+0x17/0x6b
     Jun 23 16:51:43 Tower kernel: [<ffffffff81090da5>] expire_timers+0x85/0x98
     Jun 23 16:51:43 Tower kernel: [<ffffffff81090ea5>] run_timer_softirq+0x69/0x8f
     Jun 23 16:51:43 Tower kernel: [<ffffffff8103642b>] ? lapic_next_deadline+0x21/0x27
     Jun 23 16:51:43 Tower kernel: [<ffffffff8109b347>] ? clockevents_program_event+0xd0/0xe8
     Jun 23 16:51:43 Tower kernel: [<ffffffff81050f59>] __do_softirq+0xbb/0x1af
     Jun 23 16:51:43 Tower kernel: [<ffffffff810511fd>] irq_exit+0x53/0x94
     Jun 23 16:51:43 Tower kernel: [<ffffffff81036e19>] smp_trace_apic_timer_interrupt+0x7b/0x88
     Jun 23 16:51:43 Tower kernel: [<ffffffff81036e2f>] smp_apic_timer_interrupt+0x9/0xb
     Jun 23 16:51:43 Tower kernel: [<ffffffff81680172>] apic_timer_interrupt+0x82/0x90
     Jun 23 16:51:43 Tower kernel: <EOI>
     Jun 23 16:51:43 Tower kernel: [<ffffffff815533e4>] ? cpuidle_enter_state+0xfe/0x156
     Jun 23 16:51:43 Tower kernel: [<ffffffff8155345e>] cpuidle_enter+0x12/0x14
     Jun 23 16:51:43 Tower kernel: [<ffffffff8107c545>] call_cpuidle+0x33/0x35
     Jun 23 16:51:43 Tower kernel: [<ffffffff8107c727>] cpu_startup_entry+0x13a/0x1b2
     Jun 23 16:51:43 Tower kernel: [<ffffffff816746d8>] rest_init+0x7f/0x82
     Jun 23 16:51:43 Tower kernel: [<ffffffff81ccbe8e>] start_kernel+0x3cf/0x3dc
     Jun 23 16:51:43 Tower kernel: [<ffffffff81ccb120>] ? early_idt_handler_array+0x120/0x120
     Jun 23 16:51:43 Tower kernel: [<ffffffff81ccb2d6>] x86_64_start_reservations+0x2a/0x2c
     Jun 23 16:51:43 Tower kernel: [<ffffffff81ccb3be>] x86_64_start_kernel+0xe6/0xf3
     Jun 23 16:51:43 Tower kernel: ---[ end trace d37413df375134d1 ]---
     Jun 23 16:51:43 Tower kernel: e1000e 0000:00:19.0 eth0: Reset adapter unexpectedly
     Jun 23 16:51:43 Tower kernel: bond0: link status definitely down for interface eth0, disabling it
     Jun 23 16:51:43 Tower kernel: device eth0 left promiscuous mode
     Jun 23 16:51:43 Tower kernel: bond0: now running without any active interface!
     Jun 23 16:51:44 Tower kernel: br0: port 1(bond0) entered disabled state
     Jun 23 16:51:47 Tower kernel: e1000e: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx
     Jun 23 16:51:47 Tower kernel: bond0: link status definitely up for interface eth0, 1000 Mbps full duplex
     Jun 23 16:51:47 Tower kernel: bond0: making interface eth0 the new active one
     Jun 23 16:51:47 Tower kernel: device eth0 entered promiscuous mode
     Jun 23 16:51:47 Tower kernel: bond0: first active interface up!
     Jun 23 16:51:47 Tower kernel: br0: port 1(bond0) entered blocking state
     Jun 23 16:51:47 Tower kernel: br0: port 1(bond0) entered forwarding state

     Any help would be highly appreciated. Thanks.

     Edit: I have attached the full diagnostics. Here are the issues I am having:
     - When trying to copy a file to the array, it starts off really fast and then slows to a crawl; server utilization goes up to 100% and then the UI hangs. When I browse to http://tower it just sits at "trying to connect to tower". Dockers hang too: stopping or starting a Docker hangs the UI.
     - Randomly my Dockers make the web UI unresponsive, similar to the problem above, but it does not necessarily involve copying a file. When starting/stopping a Docker the UI becomes unresponsive and I have to manually reboot the server for it to come back.
     - A lot of the issues come down to the UI hanging, which makes restarting the server impossible. I can SSH in and try reboot/shutdown to no effect; it says it is shutting down but nothing happens.

     tower-diagnostics-20170623-2209.zip
  3. I noticed something interesting: if I run the speed test on /mnt/cache I get around 500MB/s for the SSDs, but if I run the test on a user share that uses the cache (/mnt/user/Movies), my speeds are around 40MB/s. I have triple-checked that the share is set to use the cache, and I have also checked the drive to confirm the file was indeed written to the cache SSD and not the array. This doesn't make any sense.
  4. I have the exact same problem. I have a Samsung 850 Evo (500GB) and a 960 Evo (250GB); they are not in a cache pool. I have been testing them, and my speeds are those of an untrimmed drive. I have ruled out all other factors like the network, etc. If I run the trim manually it says xyz bytes trimmed, but if I run the trim again right after that command it reports the same xyz bytes trimmed. Shouldn't it say 0 bytes, since the previous run should already have trimmed them?
  5. I passed through my GPU, but in OSX it shows up as a generic adapter with 7MB of RAM. The VM feels very sluggish, video playback is choppy, and there is no audio over HDMI.
  6. Hi gridrunner, I messaged you on YouTube and you asked me to paste my VM XML and devices. I am not sure how to pass through a GPU; the XML I am using is your VNC template, and I am not sure which lines to get rid of and which to add. I tried using the alternate XML from your zip and got the GPU working, but I had a lot of tearing in videos and there is no audio over HDMI. (A rough sketch of the hostdev entries for the devices listed below appears after this post list.) Pasting my current XML below:

     <domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
       <name>OSX Sierra</name>
       <uuid>a43a6297-9dfe-7c01-8706-cc24e23d4691</uuid>
       <description>Mac OSX Sierra</description>
       <metadata>
         <vmtemplate xmlns="unraid" name="Linux" icon="linux.png" os="linux"/>
       </metadata>
       <memory unit='KiB'>8388608</memory>
       <currentMemory unit='KiB'>8388608</currentMemory>
       <memoryBacking>
         <nosharepages/>
       </memoryBacking>
       <vcpu placement='static'>4</vcpu>
       <cputune>
         <vcpupin vcpu='0' cpuset='8'/>
         <vcpupin vcpu='1' cpuset='9'/>
         <vcpupin vcpu='2' cpuset='10'/>
         <vcpupin vcpu='3' cpuset='11'/>
       </cputune>
       <os>
         <type arch='x86_64' machine='pc-q35-2.7'>hvm</type>
         <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
         <nvram>/etc/libvirt/qemu/nvram/a43a6297-9dfe-7c01-8706-cc24e23d4691_VARS-pure-efi.fd</nvram>
       </os>
       <features>
         <acpi/>
         <apic/>
       </features>
       <cpu mode='host-passthrough'>
         <topology sockets='1' cores='2' threads='2'/>
       </cpu>
       <clock offset='utc'>
         <timer name='rtc' tickpolicy='catchup'/>
         <timer name='pit' tickpolicy='delay'/>
         <timer name='hpet' present='no'/>
       </clock>
       <on_poweroff>destroy</on_poweroff>
       <on_reboot>restart</on_reboot>
       <on_crash>restart</on_crash>
       <devices>
         <emulator>/usr/local/sbin/qemu</emulator>
         <disk type='file' device='disk'>
           <driver name='qemu' type='raw' cache='writeback'/>
           <source file='/mnt/disks/VMSSD/domains/OSXSierra/osxsierra.img'/>
           <target dev='hdc' bus='sata'/>
           <boot order='1'/>
           <address type='drive' controller='0' bus='0' target='0' unit='2'/>
         </disk>
         <controller type='usb' index='0' model='nec-xhci'>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
         </controller>
         <controller type='sata' index='0'>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
         </controller>
         <controller type='pci' index='0' model='pcie-root'/>
         <controller type='pci' index='1' model='dmi-to-pci-bridge'>
           <model name='i82801b11-bridge'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x1e' function='0x0'/>
         </controller>
         <controller type='pci' index='2' model='pci-bridge'>
           <model name='pci-bridge'/>
           <target chassisNr='2'/>
           <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
         </controller>
         <controller type='virtio-serial' index='0'>
           <address type='pci' domain='0x0000' bus='0x02' slot='0x02' function='0x0'/>
         </controller>
         <interface type='bridge'>
           <mac address='52:54:00:51:66:48'/>
           <source bridge='br0'/>
           <model type='e1000-82545em'/>
           <address type='pci' domain='0x0000' bus='0x02' slot='0x03' function='0x0'/>
         </interface>
         <input type='mouse' bus='ps2'/>
         <input type='keyboard' bus='ps2'/>
         <graphics type='vnc' port='-1' autoport='yes' websocket='-1' listen='0.0.0.0' keymap='en-us'>
           <listen type='address' address='0.0.0.0'/>
         </graphics>
         <video>
           <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1' primary='yes'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
         </video>
         <memballoon model='none'/>
       </devices>
       <seclabel type='none' model='none'/>
       <qemu:commandline>
         <qemu:arg value='-device'/>
         <qemu:arg value='usb-kbd'/>
         <qemu:arg value='-device'/>
         <qemu:arg value='usb-mouse'/>
         <qemu:arg value='-device'/>
         <qemu:arg value='isa-applesmc,osk=amanchoosesaslaveobeys'/>
         <qemu:arg value='-smbios'/>
         <qemu:arg value='type=2'/>
         <qemu:arg value='-cpu'/>
         <qemu:arg value='Penryn,vendor=GenuineIntel'/>
       </qemu:commandline>
     </domain>

     Devices:
     IOMMU group 34
       08:00.0 VGA compatible controller [0300]: Advanced Micro Devices, Inc. [AMD/ATI] Oland [Radeon HD 8570 / R7 240/340 OEM] [1002:6611]
       08:00.1 Audio device [0403]: Advanced Micro Devices, Inc. [AMD/ATI] Cape Verde/Pitcairn HDMI Audio [Radeon HD 7700/7800 Series] [1002:aab0]
  7. This is great, but could someone please try the latency monitor to see what the latency is like in the VMs? I imagine the latency will be much higher since the cores are now dynamic and probably shared.
  8. I am also very interested in this; please share your method. Thanks.
  9. Hi, I would like to try this. How did you pass through the SSD? Does that mean I will have to do a reinstall (I am assuming a vdisk image won't work)? Could you please show me the passthrough section of your template? (See the sketch after this post list for roughly what such a disk entry looks like.) Thanks.
  10. I know what you are saying; it's just not that simple. I will have to source an SSD before I can try that.
  11. Gentlemen, we have success (somewhat). This is what I did:

      <cputune>
        <vcpupin vcpu='0' cpuset='2'/>
        <vcpupin vcpu='1' cpuset='3'/>
        <emulatorpin cpuset='4-5'/>
      </cputune>

      No CPU isolation, and all the other VMs are off. No latency issues, no performance issues, no audio issues. However, if I share a core with any of the other VMs, I get severe latency issues. In summary: offload the emulator tasks to a free core, do not share any cores with other VMs or with unRAID, and everything works as expected.
  12. I will definitely give that a try and report my findings. Since you have the same CPU, do you have CPU isolation in your syslinux config, or are you simply assigning cores from the GUI? My latency issues get worse when I don't isolate CPUs. And if it's not too much to ask, could you assign 2 cores instead of 4 to one of your Win 10 VMs and try playing a 4K YouTube video to see if there is any stuttering? It is entirely possible that some other device in my system is causing the latency issues, so this would really help me isolate the problem. Thanks.
  13. Quoting an earlier reply: cpu pinning does not need to correlate to actual cpu numbers on your host. If it did, you could never run more than 1 vm. In fact, it's better to not put a vm on core 0, as unRaid prefers it for host operations. The op is attempting to put the vm on "core" 3 and its hyper-threaded pair. Some people swear by this method, but I've found for certain vm's that using cpu "sides" and not their hyper-threaded pairs in groups actually works better, at least in my case (using a dual-processor server). To take it a step further, and for better response in vm's, one should isolate the intended vm cores away from unRaid so no host functions run at the same time as the vm.

      Thank you, that's exactly what I was saying. Now the issue is that on a 6700 there are only 4 cores. If I assign only two cores to a Win 10 VM, it runs really slowly: it cannot decode 4K YouTube videos, or even normal videos, without stuttering. When I assign 4 logical cores, the performance improves and the latency is much better, but then I can't run any other VMs because the remaining two cores are assigned to unRAID for obvious reasons. If I give unRAID only two cores, the whole system and all the VMs slow to a crawl. This leads back to my initial argument: had I known that for optimum performance and to avoid latency issues I would have to dedicate individual cores to VMs, I would have gone with a higher-core-count processor for this build.
  14. From what I have read on these forums, you are supposed to assign matching hyper-thread pairs to your VMs for better latency; the thread pairing is shown on the Dashboard screen and also in the System Devices tab. That is the pairing I am using. Assigning them the way you do will result in worse latency. Again, this is what I have read on these forums, and it's also mentioned in the CPU assignment sticky thread in the KVM forum.
  15. Here it is:

      <domain type='kvm' id='5'>
        <name>Mohsin</name>
        <uuid>8a8e48cd-1725-8f63-b038-059bd20d690f</uuid>
        <metadata>
          <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/>
        </metadata>
        <memory unit='KiB'>8388608</memory>
        <currentMemory unit='KiB'>8388608</currentMemory>
        <memoryBacking>
          <nosharepages/>
        </memoryBacking>
        <vcpu placement='static'>2</vcpu>
        <cputune>
          <vcpupin vcpu='0' cpuset='2'/>
          <vcpupin vcpu='1' cpuset='6'/>
          <emulatorpin cpuset='1,5'/>
        </cputune>
        <resource>
          <partition>/machine</partition>
        </resource>
        <os>
          <type arch='x86_64' machine='pc-i440fx-2.5'>hvm</type>
          <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
          <nvram>/etc/libvirt/qemu/nvram/8a8e48cd-1725-8f63-b038-059bd20d690f_VARS-pure-efi.fd</nvram>
        </os>
        <features>
          <acpi/>
          <apic/>
          <hyperv>
            <relaxed state='on'/>
            <vapic state='on'/>
            <spinlocks state='on' retries='8191'/>
            <vendor_id state='on' value='none'/>
          </hyperv>
        </features>
        <cpu mode='host-passthrough'>
          <topology sockets='1' cores='1' threads='2'/>
        </cpu>
        <clock offset='localtime'>
          <timer name='hypervclock' present='yes'/>
          <timer name='hpet' present='no'/>
        </clock>
        <on_poweroff>destroy</on_poweroff>
        <on_reboot>restart</on_reboot>
        <on_crash>restart</on_crash>
        <devices>
          <emulator>/usr/local/sbin/qemu</emulator>
          <disk type='file' device='disk'>
            <driver name='qemu' type='raw' cache='writeback'/>
            <source file='/mnt/user/domains/Mohsin/vdisk1.img'/>
            <backingStore/>
            <target dev='hdc' bus='virtio'/>
            <boot order='1'/>
            <alias name='virtio-disk2'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
          </disk>
          <controller type='usb' index='0' model='nec-xhci'>
            <alias name='usb'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
          </controller>
          <controller type='pci' index='0' model='pci-root'>
            <alias name='pci.0'/>
          </controller>
          <controller type='virtio-serial' index='0'>
            <alias name='virtio-serial0'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
          </controller>
          <interface type='bridge'>
            <mac address='52:54:00:2f:1f:66'/>
            <source bridge='br0'/>
            <target dev='vnet0'/>
            <model type='virtio'/>
            <alias name='net0'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
          </interface>
          <serial type='pty'>
            <source path='/dev/pts/0'/>
            <target port='0'/>
            <alias name='serial0'/>
          </serial>
          <console type='pty' tty='/dev/pts/0'>
            <source path='/dev/pts/0'/>
            <target type='serial' port='0'/>
            <alias name='serial0'/>
          </console>
          <channel type='unix'>
            <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/domain-5-Mohsin/org.qemu.guest_agent.0'/>
            <target type='virtio' name='org.qemu.guest_agent.0' state='connected'/>
            <alias name='channel0'/>
            <address type='virtio-serial' controller='0' bus='0' port='1'/>
          </channel>
          <input type='mouse' bus='ps2'>
            <alias name='input0'/>
          </input>
          <input type='keyboard' bus='ps2'>
            <alias name='input1'/>
          </input>
          <hostdev mode='subsystem' type='pci' managed='yes'>
            <driver name='vfio'/>
            <source>
              <address domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
            </source>
            <alias name='hostdev0'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
          </hostdev>
          <hostdev mode='subsystem' type='pci' managed='yes'>
            <driver name='vfio'/>
            <source>
              <address domain='0x0000' bus='0x02' slot='0x00' function='0x1'/>
            </source>
            <alias name='hostdev1'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
          </hostdev>
          <hostdev mode='subsystem' type='usb' managed='no'>
            <source>
              <vendor id='0x046d'/>
              <product id='0xc52b'/>
              <address bus='1' device='4'/>
            </source>
            <alias name='hostdev2'/>
            <address type='usb' bus='0' port='1'/>
          </hostdev>
          <memballoon model='virtio'>
            <alias name='balloon0'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
          </memballoon>
        </devices>
        <seclabel type='none' model='none'/>
        <seclabel type='dynamic' model='dac' relabel='yes'>
          <label>+0:+100</label>
          <imagelabel>+0:+100</imagelabel>
        </seclabel>
      </domain>
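
A note on post 6 above: the snippet below is only a rough sketch, assuming the devices to pass through are the ones shown there in IOMMU group 34 (the Oland GPU at 08:00.0 and its HDMI audio function at 08:00.1). The usual pattern is to remove the <graphics type='vnc'> and <video> blocks from the VNC template and add two vfio <hostdev> entries inside <devices>; libvirt assigns guest-side PCI addresses automatically when none are given. This is not gridrunner's exact template, just one plausible shape for the edit.

    <!-- Sketch only: goes inside <devices>, replacing the <graphics> and <video> blocks. -->
    <!-- Source addresses are taken from the IOMMU group 34 listing in post 6. -->
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <!-- 08:00.0 - Radeon HD 8570 / R7 240/340 (VGA function) -->
        <address domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
      </source>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <!-- 08:00.1 - the card's HDMI audio function -->
        <address domain='0x0000' bus='0x08' slot='0x00' function='0x1'/>
      </source>
    </hostdev>

Passing the audio function (08:00.1) alongside the VGA function is what normally gives the guest a chance at HDMI audio; whether OSX actually drives that audio device is a separate driver question.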
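
A note on post 9 above: how the other poster actually passed through their SSD is not shown here, but one common approach is to give the VM the whole drive as a raw block device instead of a vdisk image, by pointing the <disk> source at the drive's /dev/disk/by-id link. The path below is a placeholder, the drive must not be assigned to the array/cache or mounted by the host, and the guest would typically be reinstalled onto the SSD (or the existing vdisk cloned onto it) rather than reusing the vdisk file directly.

    <!-- Sketch only: replaces the vdisk <disk type='file'> entry inside <devices>. -->
    <disk type='block' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <!-- Placeholder path: substitute the real /dev/disk/by-id/... link for the SSD. -->
      <source dev='/dev/disk/by-id/ata-EXAMPLE_SSD_SERIAL'/>
      <target dev='hdc' bus='virtio'/>
      <boot order='1'/>
    </disk>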