
snailbrain

Members
  • Content Count

    54
  • Joined

  • Last visited

Community Reputation

4 Neutral

About snailbrain

  • Rank
    Advanced Member


  1. Yeah, I did. It gets through the installation but locks up, and if you reboot it locks up on "igb1 is coming up" or whatever. It's as if it knows it's the quad card and is looking for the rest of the ports (psychic guess).
  2. Thanks for the response (using ACS). When not using ACS, I think they were in pairs. It looks like this:

     IOMMU group 29: [8086:10e8] 1d:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
     IOMMU group 30: [8086:10e8] 1d:00.1 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
     IOMMU group 31: [8086:10e8] 1e:00.0 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)
     IOMMU group 32: [8086:10e8] 1e:00.1 Ethernet controller: Intel Corporation 82576 Gigabit Network Connection (rev 01)

     But pfSense only likes all 4 going to it. I've tried the first 2, and also the first and last: it installs fine, but then seems to freeze up as soon as you boot up again, sticking on "igb0 is up" or words to that effect. Same thing with my working pfSense, which has all 4 passed through: if I change it to pass through only 2 ports, it will freeze on bootup too.
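A listing like the one above can be regenerated directly from sysfs. This is a minimal sketch (standard sysfs layout; it only produces output when the IOMMU is enabled on the kernel command line):

```shell
#!/bin/sh
# Enumerate IOMMU groups from sysfs - the same data as the listing above.
# Requires intel_iommu=on (or amd_iommu=on) on the kernel command line.
base=/sys/kernel/iommu_groups
if [ ! -d "$base" ]; then
    echo "IOMMU disabled or not supported on this kernel"
    exit 0
fi
count=0
for dev in "$base"/*/devices/*; do
    [ -e "$dev" ] || continue      # glob matched nothing
    group=${dev%/devices/*}        # strip trailing /devices/<addr>
    group=${group##*/}             # keep only the group number
    echo "IOMMU group $group: ${dev##*/}"
    count=$((count + 1))
done
echo "total devices in IOMMU groups: $count"
```

Devices in the same group can only be passed through together, which is why the pairing above matters.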
  3. I'm really sorry for bumping this old thread, but this one came up and it's related. I have a quad Intel GB card. I have stubbed it and I can see all 4 ports in VM configs to pass through (they look like individual cards). Passing through all 4 ports to pfSense works great and it's been working like that for a while. I'm fine with the motherboard LAN for Unraid, but I want to use only 2 ports for pfSense (LAN/WAN). Passing through all 4 ports works perfectly; however, I want to use 1 port for a Windows VM. If I try to pass through just 2 ports, it doesn't work and pfSense locks up on bootup or during config. Passing through one port to Windows does seem to work. Do you think xen-pciback.hide= is the way forward, or is there some issue with what I'm trying to do? Hope you can help @saarg
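One hedged note on the xen-pciback.hide= idea: that is Xen syntax, while Unraid 6 runs KVM. Matching by vendor:device ID (vfio-pci.ids=) also grabs all four identical ports at once; the per-device alternative is the kernel's driver_override mechanism. A sketch, assuming the PCI addresses from the IOMMU listing earlier in this thread (adjust to your system; this is a fragment to run as root, not a tested recipe):

```shell
#!/bin/sh
# Sketch: bind only two of the four 82576 ports to vfio-pci, by PCI address.
# Addresses are assumptions taken from the IOMMU listing in this thread.
for addr in 0000:1d:00.0 0000:1d:00.1; do
    # Restrict this one device to the vfio-pci driver...
    echo vfio-pci > "/sys/bus/pci/devices/$addr/driver_override"
    # ...detach it from igb if it is currently bound...
    [ -e "/sys/bus/pci/devices/$addr/driver" ] && \
        echo "$addr" > "/sys/bus/pci/devices/$addr/driver/unbind"
    # ...and re-probe, which now picks vfio-pci for it.
    echo "$addr" > /sys/bus/pci/drivers_probe
done
```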
  4. UPDATE: I disabled the virtual network adapter in Windows, and passed through one of the ports on my Intel quad 1Gb PCIe card and connected it to my switch. It works like a dream. I still see some high CPU usage on the Unraid cores, but everything works great (except it's limited to 100MB/sec). BUT, unfortunately, my pfSense box will not boot if I don't pass through the whole card for some reason, even though the card is stubbed and I can select each port to pass through. In any case, this leads me to believe that the problem is related to: 1. caching from the VM (I've played with the numbers in the Tips and Tweaks plugin, but either it becomes unusably slow or the jerking gets worse), or 2. the virtual LAN adapter. (Note: I also got a bit of an improvement by moving around the cores used by the VM. I noticed that CPU 8 (4th core), even though it was isolated, nothing was pinned to it, and the VM was not running, was still at 1-4% usage, and I have no idea why. So I used this as the emulator pin, and used CPU 6 as the 1st core of the VM. This definitely helped a bit.)
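When shuffling pins like this, it helps to verify the affinity actually took effect. A quick check (shown on the current shell as a stand-in; for a VM you would substitute the qemu process's PID):

```shell
#!/bin/sh
# Show which host CPUs a process is allowed to run on, to confirm a
# pin/isolation change took effect. $$ (this shell) is just a stand-in
# for the qemu PID of the VM in question.
pid=$$
grep Cpus_allowed_list "/proc/$pid/status"
```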
  5. Hi, and thanks again. No, I'm not using those. I've also tried shutting down the VMs; same issue. It's also going through the virtual switch, so it's pretty much not touching pfSense at the moment. Here is copying from NVMe to a cached share (6-15 are isolated, Windows is using 8-15, emulator on 6).
  6. Here is an example with more info: I'm copying from my NVMe to a share which IS using the cache. Apart from the copy grinding to a halt, it's also using masses of CPU. The first 3 cores are used by Unraid and are at 100% during the attempted copy. Strangely, "top" does not show what is using the CPU (is that normal, or does it help diagnose?). The terminal window there was inactive, but same result: nothing shows in top using 300%, etc.
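On "top does not show what is using the CPU": top's per-process %CPU column misses time burned in kernel threads and in iowait, which is where array/cache writeback tends to show up. The aggregate counters in /proc/stat expose it:

```shell
#!/bin/sh
# Fields after the "cpu" label in /proc/stat are, in order:
# user nice system idle iowait irq softirq (in jiffies since boot).
# High system/iowait with no matching user process is kernel writeback.
awk '/^cpu /{printf "user=%s system=%s idle=%s iowait=%s\n", $2, $4, $5, $6}' /proc/stat
# In top itself, press 1 for the per-core view and watch the 'wa' column.
```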
  7. Hi, thanks. Yes, but I just checked with the C drive, same thing: almost maxing out all 4 cores. Note: for the NVMe I have passed through the entire controller (E drive), and the C drive is now passed through by /dev/disk/by-id/xxxx. I'm also getting slowdown now when my temp SSD (120GB, unassigned) copies (moves) my downloaded torrents to the array (ignoring cache). I'm wondering if the 6x onboard SATA ports are the cause; I'm using all 6 (and 6 on the RAID card).

     On the motherboard I have:
     2x 6TB WD Red (2x parity)
     1x 120GB SSD (temp drive for torrents; they get moved onto the array, but not through cache, when completed)
     1x 240GB SSD (C drive)
     2x Toshiba 2TB mechanical (on array)

     On the RAID controller:
     4x 3TB WD Red (on array)
     2x 1TB Kingston SSD (cache)

     I've ordered an SFF breakout cable for my RAID controller and will connect the 2x parity drives to it (or the SSDs). Wondering if: the motherboard ports are saturated (but they're not transferring that fast); or the network card is now in a shared slot (but it's a 4-port gigabit card, and I'm only using 1 of the ports for WAN and another for LAN, passed through to pfSense). The motherboard onboard LAN is also being used for Unraid itself (and VMs/Dockers etc.).
  8. I think it could be related to (virtual) networking. Downloading a game on Steam (at 40MB/second), Steam is using 50% of the 4 cores (+ threads) in Unraid, and the VM is using 60-70% in total.
  9. Thanks.

     <?xml version='1.0' encoding='UTF-8'?>
     <domain type='kvm' id='7'>
       <name>WinMain</name>
       <uuid>7af4209a-1db7-2d25-02df-14750ef934ae</uuid>
       <metadata>
         <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/>
       </metadata>
       <memory unit='KiB'>16252928</memory>
       <currentMemory unit='KiB'>16252928</currentMemory>
       <memoryBacking>
         <nosharepages/>
       </memoryBacking>
       <vcpu placement='static'>8</vcpu>
       <cputune>
         <vcpupin vcpu='0' cpuset='8'/>
         <vcpupin vcpu='1' cpuset='9'/>
         <vcpupin vcpu='2' cpuset='10'/>
         <vcpupin vcpu='3' cpuset='11'/>
         <vcpupin vcpu='4' cpuset='12'/>
         <vcpupin vcpu='5' cpuset='13'/>
         <vcpupin vcpu='6' cpuset='14'/>
         <vcpupin vcpu='7' cpuset='15'/>
         <emulatorpin cpuset='6'/>
       </cputune>
       <resource>
         <partition>/machine</partition>
       </resource>
       <os>
         <type arch='x86_64' machine='pc-i440fx-3.0'>hvm</type>
         <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
         <nvram>/etc/libvirt/qemu/nvram/7af4209a-1db7-2d25-02df-14750ef934ae_VARS-pure-efi.fd</nvram>
       </os>
       <features>
         <acpi/>
         <apic/>
       </features>
       <cpu mode='host-passthrough' check='none'>
         <topology sockets='1' cores='8' threads='1'/>
       </cpu>
       <clock offset='localtime'>
         <timer name='rtc' tickpolicy='catchup'/>
         <timer name='pit' tickpolicy='delay'/>
         <timer name='hpet' present='no'/>
       </clock>
       <on_poweroff>destroy</on_poweroff>
       <on_reboot>restart</on_reboot>
       <on_crash>restart</on_crash>
       <devices>
         <emulator>/usr/local/sbin/qemu</emulator>
         <disk type='block' device='disk'>
           <driver name='qemu' type='raw' cache='none'/>
           <source dev='/dev/disk/by-id/ata-PNY_CS900_240GB_SSD_PNY0219000039020481A'/>
           <backingStore/>
           <target dev='hdc' bus='virtio'/>
           <boot order='1'/>
           <alias name='virtio-disk2'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
         </disk>
         <controller type='usb' index='0' model='ich9-ehci1'>
           <alias name='usb'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
         </controller>
         <controller type='usb' index='0' model='ich9-uhci1'>
           <alias name='usb'/>
           <master startport='0'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
         </controller>
         <controller type='usb' index='0' model='ich9-uhci2'>
           <alias name='usb'/>
           <master startport='2'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
         </controller>
         <controller type='usb' index='0' model='ich9-uhci3'>
           <alias name='usb'/>
           <master startport='4'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
         </controller>
         <controller type='scsi' index='0' model='virtio-scsi'>
           <alias name='scsi0'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
         </controller>
         <controller type='ide' index='0'>
           <alias name='ide'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
         </controller>
         <controller type='virtio-serial' index='0'>
           <alias name='virtio-serial0'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
         </controller>
         <controller type='pci' index='0' model='pci-root'>
           <alias name='pci.0'/>
         </controller>
         <controller type='sata' index='0'>
           <alias name='sata0'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
         </controller>
         <interface type='bridge'>
           <mac address='52:54:00:bc:cd:4c'/>
           <source bridge='br0'/>
           <target dev='vnet1'/>
           <model type='virtio'/>
           <alias name='net0'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
         </interface>
         <serial type='pty'>
           <source path='/dev/pts/2'/>
           <target type='isa-serial' port='0'>
             <model name='isa-serial'/>
           </target>
           <alias name='serial0'/>
         </serial>
         <console type='pty' tty='/dev/pts/2'>
           <source path='/dev/pts/2'/>
           <target type='serial' port='0'/>
           <alias name='serial0'/>
         </console>
         <channel type='unix'>
           <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/domain-7-WinMain/org.qemu.guest_agent.0'/>
           <target type='virtio' name='org.qemu.guest_agent.0' state='disconnected'/>
           <alias name='channel0'/>
           <address type='virtio-serial' controller='0' bus='0' port='1'/>
         </channel>
         <input type='tablet' bus='usb'>
           <alias name='input0'/>
           <address type='usb' bus='0' port='1'/>
         </input>
         <input type='mouse' bus='ps2'>
           <alias name='input1'/>
         </input>
         <input type='keyboard' bus='ps2'>
           <alias name='input2'/>
         </input>
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x20' slot='0x00' function='0x0'/>
           </source>
           <alias name='hostdev0'/>
           <rom file='/mnt/user/Downloads/bios/Gigabyte2.rom'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
         </hostdev>
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x20' slot='0x00' function='0x1'/>
           </source>
           <alias name='hostdev1'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
         </hostdev>
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
           </source>
           <alias name='hostdev2'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x0a' function='0x0'/>
         </hostdev>
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x22' slot='0x00' function='0x3'/>
           </source>
           <alias name='hostdev3'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x0b' function='0x0'/>
         </hostdev>
         <memballoon model='none'/>
       </devices>
       <seclabel type='dynamic' model='dac' relabel='yes'>
         <label>+0:+100</label>
         <imagelabel>+0:+100</imagelabel>
       </seclabel>
     </domain>

     urhpg8-diagnostics-20190226-1206.zip
  10. Update: I got another SSD and am using that for the C drive on Windows now (instead of the cache). Same issue, although everything has improved since I started the threads. The issue now is just when using the cache drive(s): the 3 Unraid cores I've reserved (full cores + HT, 6 threads) max out at 100% and Windows becomes unusable.
  11. Update: Swapping the RAID card into its own group has definitely improved things. Copying onto the array now (onto a disk that is solely on the RAID card, and onto a disk on the motherboard, although both parities are on the motherboard) without the cache drive seems pretty good. I do see the occasional spike, but it's tolerable, and the CPU usage in Unraid is not major. Is it recommended to put the parity drives onto the LSI RAID card? BUT: copying to a cache share, even though it's copying really, really slowly (slower than copying directly onto the array), uses the emulator pin at 100%, and the 3 cores I have for Unraid all max out in red. I then start to see the spikes again. Copying onto the vdisk (C drive, which is on the cache) is still fine except slow, and I only see the emulator core at 100%, not every other Unraid core. This leads me to believe that Unraid is doing something with these files that it is not doing with the vdisk, for example like the Integrity file checksum checker (but I don't have that on, and it works fine when copying to the normal array). The SSDs are cheapo though: Kingston A400.
  12. Thanks, I've ordered an SSD for that purpose. We'll see how it goes. I didn't follow much of a guide; it seemed self-explanatory, and I've tweaked it by reading the forum to get to where I am now. Great help from everyone. I am a bit disappointed I need to pin and isolate 1 core for the emulator, but as long as it works. The main issue I had was with HPET; disabling this helped massively. Thanks again. Yes, they are mirrored (or however Unraid does it). The parity drives are on the motherboard, the cache drives are on the RAID controller (with 4 other 3TB WD Reds). 2x 2TB drives are also on the motherboard, plus a 120GB SSD for temp storage. The VM has the NVMe controller + USB controller (4x 3.0 ports) passed through, and they are not shared (when ACS is disabled they are in their own group). However, the RAID card is shared with various other devices. Yeah, I understand much of the sharing of lanes etc. Before Unraid, for several years I used to have my Windows desktop as a VM in ESXi, only I was using an AMD card as they were the only ones you could pass through (at the time at least). I guess one thing I could try is swapping my RAID card with my Intel network card. The Intel network card is in its own group (it's currently passed through to a pfSense VM); the RAID controller is shared with other stuff. The Intel card sharing with some other stuff might not hurt as much, and giving the RAID card the full lane may help? I played with the "cache" (RAM cache?) settings in the Tips and Tweaks plugin, and this has definitely helped a lot (as suggested by someone else in here), but the transfer rates are abysmal and I still get spikes; performance is almost as if there is no point having a cache drive. I assumed that by having the cache drive, when I want to copy something onto my array (and use the cache), I'd be getting 10Gbit/sec transfer rates (or max out the write speeds of my 2x 2.5" SSDs, probably over 400MB/sec).
  13. Hi, thanks. I'm writing from my NVMe (which has its controller passed through) to the array, but I get the same issue whether I write to a share which uses the cache or to a share that does not. (Note: since I set writecache = none, writing from the NVMe to my C drive seems OK; they are separate drives.) My C drive is on the cache (domain folder on the cache drive); my E drive is a 1TB NVMe. Copying from E to the array (cache or not) causes the issues. I'm wondering if it's something to do with "RAM" caching somewhere: copying to the cache or array transfers really fast for the first few seconds, then grinds to a halt.
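That fast-start-then-stall pattern matches Linux writeback behaviour: writes land in RAM until a percentage of memory is dirty, then the writer is throttled to disk speed. A sketch for inspecting and (hedged; values are illustrative, and the Tips and Tweaks plugin adjusts the same knobs) tuning the thresholds:

```shell
#!/bin/sh
# Writes are buffered in RAM up to these thresholds, then throttled:
# dirty_background_ratio = % of memory dirty before background flushing,
# dirty_ratio = % of memory dirty before the writing process itself blocks.
echo "dirty_background_ratio: $(cat /proc/sys/vm/dirty_background_ratio)"
echo "dirty_ratio:            $(cat /proc/sys/vm/dirty_ratio)"
# Illustrative tuning (run as root): smaller values start flushing earlier,
# trading the initial burst for a steadier sustained rate.
#   sysctl -w vm.dirty_background_ratio=2
#   sysctl -w vm.dirty_ratio=5
```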
  14. Thank you. I think it is related to some write cache too, as I'm only getting the issue when "writing" now. A bit embarrassed: the cache-set-to-none tip came from the other guy in this thread, but I think because I removed the vdisk and tried running with just the NVMe, the setting got removed and reverted to writeback by itself when I added the vdisk back, I guess. Results: writing to the vdisk (OS drive) is now not causing massive issues. Thank you. But writing to the array is still locking things up. E.g. if I write to a share (that uses cache), it will spike up to 500MB/s, then drop to 0 and wait there for a minute, then slowly go up, then freeze again. I'm sometimes unable to stop the transfer for several minutes, and the cores on Unraid are maxed out. I thought it could have been something to do with the Dynamix File Integrity plugin, but I removed that. Are there some more cache settings?
  15. Hey, and thank you very much for the response. I tried the iothreadpin but it didn't seem to help (I set the iothreads to the isolated cores used by the VM, to the ones reserved for Unraid, and to an isolated core on its own [albeit just 1 then]). When I'm copying (from NVMe to the desktop (cache)), I see the emulator pin max out at 100% and an un-isolated core max out too (and it shifts around); i.e. I don't see the iothread cores get involved when I'm transferring. I didn't know about Windows 10 and the DPC latency checker. It does still seem to show an issue (I always use LatencyMon), and I have tested some games: if I start transferring to my OS drive (which is on the cache) or to anything on the array (not just cache), the game becomes unplayable. Also, transferring the files grinds to a halt. Any other ideas?
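One possible reason the iothread cores never get involved: an iothreadpin only matters if the disk's <driver> element actually references an iothread; otherwise I/O stays on the emulator thread, which matches the emulator pin maxing out. A hedged sketch for experimenting live with virsh (domain name WinMain from the XML earlier in this thread; the CPU numbers are placeholders, not a recommendation):

```shell
#!/bin/sh
# Fragment: requires a running libvirt/qemu host. Numbers are placeholders.
virsh iothreadadd WinMain 1 --live      # create iothread 1 in the live domain
virsh iothreadpin WinMain 1 7 --live    # pin iothread 1 to host CPU 7
virsh emulatorpin WinMain 6 --live      # keep the emulator thread on CPU 6
# To make it persistent, the domain XML also needs <iothreads>1</iothreads>
# and iothread='1' added to the virtio disk's <driver> element.
```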