neogenesisrevo

  1. Can anyone else shed some light on this? Which log should I be checking to figure out what's delaying startup?
  2. <?xml version='1.0' encoding='UTF-8'?>
     <domain type='kvm' id='3'>
       <name>Linux Mint 20.1</name>
       <uuid>499fdc7f-ceb1-6232-cf5d-85ce7a191ded</uuid>
       <metadata>
         <vmtemplate xmlns="unraid" name="Linux" icon="mint.png" os="linux"/>
       </metadata>
       <memory unit='KiB'>12058624</memory>
       <currentMemory unit='KiB'>12058624</currentMemory>
       <memoryBacking>
         <nosharepages/>
       </memoryBacking>
       <vcpu placement='static'>8</vcpu>
       <cputune>
         <vcpupin vcpu='0' cpuset='4'/>
         <vcpupin vcpu='1' cpuset='16'/>
         <vcpupin vcpu='2' cpuset='5'/>
         <vcpupin vcpu='3' cpuset='17'/>
         <vcpupin vcpu='4' cpuset='6'/>
         <vcpupin vcpu='5' cpuset='18'/>
         <vcpupin vcpu='6' cpuset='7'/>
         <vcpupin vcpu='7' cpuset='19'/>
       </cputune>
       <resource>
         <partition>/machine</partition>
       </resource>
       <os>
         <type arch='x86_64' machine='pc-q35-5.1'>hvm</type>
         <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
         <nvram>/etc/libvirt/qemu/nvram/499fdc7f-ceb1-6232-cf5d-85ce7a191ded_VARS-pure-efi.fd</nvram>
       </os>
       <features>
         <acpi/>
         <apic/>
       </features>
       <cpu mode='host-passthrough' check='none' migratable='on'>
         <topology sockets='1' dies='1' cores='4' threads='2'/>
         <cache mode='passthrough'/>
         <feature policy='require' name='topoext'/>
       </cpu>
       <clock offset='utc'>
         <timer name='rtc' tickpolicy='catchup'/>
         <timer name='pit' tickpolicy='delay'/>
         <timer name='hpet' present='no'/>
       </clock>
       <on_poweroff>destroy</on_poweroff>
       <on_reboot>restart</on_reboot>
       <on_crash>restart</on_crash>
       <devices>
         <emulator>/usr/local/sbin/qemu</emulator>
         <disk type='file' device='disk'>
           <driver name='qemu' type='raw' cache='writeback'/>
           <source file='/mnt/user/domains/Linux Mint 20.1/vdisk1.img' index='1'/>
           <backingStore/>
           <target dev='hdc' bus='virtio'/>
           <boot order='1'/>
           <alias name='virtio-disk2'/>
           <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
         </disk>
         <controller type='sata' index='0'>
           <alias name='ide'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
         </controller>
         <controller type='pci' index='0' model='pcie-root'>
           <alias name='pcie.0'/>
         </controller>
         <controller type='pci' index='1' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='1' port='0x8'/>
           <alias name='pci.1'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0' multifunction='on'/>
         </controller>
         <controller type='pci' index='2' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='2' port='0x9'/>
           <alias name='pci.2'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
         </controller>
         <controller type='pci' index='3' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='3' port='0xa'/>
           <alias name='pci.3'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
         </controller>
         <controller type='pci' index='4' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='4' port='0xb'/>
           <alias name='pci.4'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x3'/>
         </controller>
         <controller type='pci' index='5' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='5' port='0xc'/>
           <alias name='pci.5'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x4'/>
         </controller>
         <controller type='pci' index='6' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='6' port='0xd'/>
           <alias name='pci.6'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x5'/>
         </controller>
         <controller type='pci' index='7' model='pcie-root-port'>
           <model name='pcie-root-port'/>
           <target chassis='7' port='0xe'/>
           <alias name='pci.7'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x6'/>
         </controller>
         <controller type='virtio-serial' index='0'>
           <alias name='virtio-serial0'/>
           <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
         </controller>
         <controller type='usb' index='0' model='ich9-ehci1'>
           <alias name='usb'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
         </controller>
         <controller type='usb' index='0' model='ich9-uhci1'>
           <alias name='usb'/>
           <master startport='0'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
         </controller>
         <controller type='usb' index='0' model='ich9-uhci2'>
           <alias name='usb'/>
           <master startport='2'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
         </controller>
         <controller type='usb' index='0' model='ich9-uhci3'>
           <alias name='usb'/>
           <master startport='4'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
         </controller>
         <interface type='bridge'>
           <mac address='52:54:00:eb:90:7d'/>
           <source bridge='br0'/>
           <target dev='vnet1'/>
           <model type='virtio-net'/>
           <alias name='net0'/>
           <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
         </interface>
         <serial type='pty'>
           <source path='/dev/pts/1'/>
           <target type='isa-serial' port='0'>
             <model name='isa-serial'/>
           </target>
           <alias name='serial0'/>
         </serial>
         <console type='pty' tty='/dev/pts/1'>
           <source path='/dev/pts/1'/>
           <target type='serial' port='0'/>
           <alias name='serial0'/>
         </console>
         <channel type='unix'>
           <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/domain-3-Linux Mint 20.1/org.qemu.guest_agent.0'/>
           <target type='virtio' name='org.qemu.guest_agent.0' state='disconnected'/>
           <alias name='channel0'/>
           <address type='virtio-serial' controller='0' bus='0' port='1'/>
         </channel>
         <input type='mouse' bus='ps2'>
           <alias name='input0'/>
         </input>
         <input type='keyboard' bus='ps2'>
           <alias name='input1'/>
         </input>
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
           </source>
           <alias name='hostdev0'/>
           <rom file='/mnt/user/Data/1080ti.rom'/>
           <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
         </hostdev>
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x04' slot='0x00' function='0x1'/>
           </source>
           <alias name='hostdev1'/>
           <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
         </hostdev>
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x07' slot='0x00' function='0x3'/>
           </source>
           <alias name='hostdev2'/>
           <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
         </hostdev>
         <hostdev mode='subsystem' type='usb' managed='no'>
           <source>
             <vendor id='0x1532'/>
             <product id='0x0064'/>
             <address bus='5' device='2'/>
           </source>
           <alias name='hostdev3'/>
           <address type='usb' bus='0' port='1'/>
         </hostdev>
         <hostdev mode='subsystem' type='usb' managed='no'>
           <source>
             <vendor id='0x1532'/>
             <product id='0x0227'/>
             <address bus='5' device='3'/>
           </source>
           <alias name='hostdev4'/>
           <address type='usb' bus='0' port='2'/>
         </hostdev>
         <memballoon model='none'/>
       </devices>
       <seclabel type='dynamic' model='dac' relabel='yes'>
         <label>+0:+100</label>
         <imagelabel>+0:+100</imagelabel>
       </seclabel>
     </domain>
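     (For anyone skimming the XML: the three PCI <hostdev> entries are the GPU at 04:00.0 with its ROM file, the GPU's audio function at 04:00.1, and the device at 07:00.3, presumably the USB controller discussed in this thread; the two USB <hostdev> entries, vendor id 0x1532, are Razer peripherals.)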
  3. I don't have any storage connected to the USB controller. The problem happens even if nothing is attached to the USB slots. Would adding "boot order" to a USB controller do anything?
  4. Hello everyone, I have a USB controller that I'm passing through to my VMs. Whenever I do so, the VM takes a really long time to get past the TianoCore boot screen (over 5 minutes). Once it's past it, everything runs normally. This doesn't happen on my Windows 10 VM. I've tried 2 different versions of Linux Mint and Ubuntu, but the problem persists. I'm not sure which logs I should be looking at to try and find the problem. Any help would be appreciated.
  5. I really hope someone can help me. I wanted to mention that the server does appear to be receiving the login request, but for some reason it's being dropped. I don't have the slightest idea what could be causing this. Is there some sort of option that needs to be enabled to allow access from outside the network? If memory serves, that's the case with things like SQL or maybe SSH, but I'm not entirely sure. If anyone has any suggestions, please let me know.
  6. Let's get this part out of the way first: I know that FTP isn't secure, I shouldn't expose my computer to the internet, a VPN along with other things would help, and so on. I tried setting up both pure-ftpd and crushftp9. I'm able to access the FTP server locally, but for some reason I can't get access from outside my network. I'm forwarding ports 9921, 9443, & 2222 from my router to my host as suggested, and the Docker container's ports match. Is there a setting I'm missing? I tried https://ftptest.net/ and I'm getting the following:

     Reply: 257 "/" PWD command successful.
     Status: Current path is /
     Command: TYPE I
     Reply: 200 Command ok : Binary type selected.
     Command: PASV
     Reply: 227 Entering Passive Mode (69,202,243,35,39,21)
     Command: MLSD
     Error: Could not establish data connection: Connection refused

     Check that this port is part of your passive mode port range. If it is outside your desired port range, you have a router and/or firewall that is modifying your FTP data. Make sure that this port is open in your firewall and forwarded in your router. In some cases your ISP might block that port; in that case, configure the server to use a different passive mode port range. Contact your ISP for details on which ports are blocked.

     tower-diagnostics-20201021-2349.zip
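     (A worked reading of that PASV reply, going only by the log above: the last two numbers encode the data port, 39*256 + 21 = 10005, at IP 69.202.243.35. Since only 9921, 9443, and 2222 are forwarded, a refused data connection on 10005 is exactly what you'd expect; the usual fix is to pin the server's passive mode port range to ports that are actually forwarded.)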
  7. Hey, this might be silly for me to even ask, but during passthrough are you keeping VNC enabled? I have the exact same issue when I have both VNC and a GPU enabled for a VM. By disabling VNC, everything just works for me. Edit: Also, I just reviewed your setup and it's very similar to mine, only I have 4 1080 Tis. I had this exact problem on two different systems, both due to chipset limitations. On my 1st system, my M.2 slot shared its connection with one of my PCIe slots, so both couldn't function at the same time (weirdly enough, it was one of the middle PCIe slots, which made me even more confused). My 2nd system was advertised as having multiple PCIe 4.0 slots, but I later found out that they couldn't all run at PCIe 4.0 at the same time. My 4th PCIe slot simply runs at PCIe 1.0 when the other 3 are in use, which produced a black screen when I tried to pass it through, effectively making my 4th 1080 Ti little more than a brick.
  8. Would it be possible to let a VM detect which motherboard chipset is on the host? This would come in very handy for getting something like SLI working, which is only enabled if the driver is running on Nvidia-authorized motherboards. I might be wrong about this, but I vaguely remember something like this for CPUs. (Also, yes, I know SLI is crap, but I'm just passing the time.)
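     One hedged lead: libvirt can present custom SMBIOS baseboard strings to the guest via <sysinfo>, which spoofs what board the guest reports rather than truly detecting the host's. Whether the Nvidia driver accepts this for SLI is untested, and the manufacturer/product values below are placeholders:

     <os>
       <smbios mode='sysinfo'/>
     </os>
     <sysinfo type='smbios'>
       <baseBoard>
         <entry name='manufacturer'>ASRock</entry>
         <entry name='product'>X99 Extreme4</entry>
       </baseBoard>
     </sysinfo>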
  9. I'm passing my Logitech headset to my Ubuntu VM, but it sounds terrible; scratchy is probably how I would describe it. I found similar threads from a few years back where the solution was to buy a sound card, so I'm checking in to see if USB headsets work properly now. I tried changing the USB controller setting to 3.0 (tried both options), but it made no difference.
  10. Unraid version: 6.9. With the new version of Unraid supporting additional pools, in combination with a few other features I haven't been taking advantage of, I thought this would be a good time to rethink how I'm using the storage devices on my server. I would like to run RAID0 on at least 2 of the SSDs, if not 3, but I also want to make sure they are put to a task that will benefit from the speed increase. I also have an NVMe just sitting around at the moment, not really doing a whole lot. The server will primarily have 2 VMs running most of the time (although I have a few others I switch to). The 1st VM, running Windows 10, is mainly used for work, mostly remote access to my offsite office. The other VM, running Ubuntu, will mostly be used for programming, gaming, etc. You could call that 2nd VM the "main" VM. Additionally, I do have a few Docker containers such as Emby, but they are rarely used. How can I best utilize all the drives to maximize performance? I don't mind complex setups, so feel free to suggest utilizing the 9p protocol, device passthrough, etc. I look forward to hearing suggestions. Bonus points for anything cool, unique, or different.
      Setup:
      1x 1TB HDD
      3x 512GB SSDs via SATA
      1x 512GB SSD via NVMe
      (Also one additional HDD that will be dedicated to parity, so no need to consider that.)
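      Since 9p is on the table, here is a minimal sketch of what exposing a host directory to a VM over virtio-9p looks like in the libvirt XML; the path and share name are placeholders, not taken from this setup:

      <filesystem type='mount' accessmode='passthrough'>
        <source dir='/mnt/user/shared'/>
        <target dir='hostshare'/>
      </filesystem>

      Inside the guest this would be mounted with something like "mount -t 9p -o trans=virtio hostshare /mnt/hostshare". 9p is convenient for sharing a single folder between VMs, though its throughput generally trails that of a dedicated block device.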
  11. I understand the basic idea, but I'm not sure how to optimize. I guess my confusion is with how emulatorpin and iothreadpin work. My i7-5820k is a 6-core/12-thread CPU. I'd like to split the cores between 2 VMs, and for now, I'd like to optimize for gaming. So I've isolated CPUs 2,3,4,5,8,9,10,11 in Syslinux and am currently assigning 4 to each VM. That part is pretty straightforward. Now, with the remaining 4 logical CPUs (0,1,6,7), I'm a bit confused about how to best utilize them. Should emulatorpin be set to 0,6 in both VMs and iothreadpin cpuset to 1,7? Should iothreadpin and emulatorpin simply share all 4 remaining CPUs? Or should each VM have separate cores defined for emulatorpin and iothreadpin? Should the iothreadpin cpuset actually come from threads already assigned to the VM? So if VM 1 has 4,10,5,11, should iothreadpin cpuset='4,10'? Please help.
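      For concreteness, a minimal sketch of the <cputune> shape in question; the cpusets are placeholders rather than a recommendation, and iothreadpin only takes effect if <iothreads> is defined and the disk's <driver> element is given iothread='1':

      <iothreads>1</iothreads>
      <cputune>
        <vcpupin vcpu='0' cpuset='2'/>
        <vcpupin vcpu='1' cpuset='8'/>
        <vcpupin vcpu='2' cpuset='3'/>
        <vcpupin vcpu='3' cpuset='9'/>
        <emulatorpin cpuset='0,6'/>
        <iothreadpin iothread='1' cpuset='1,7'/>
      </cputune>

      With this shape, emulatorpin and iothreadpin deliberately point at the non-isolated threads so that QEMU housekeeping and disk I/O stay off the pinned vCPUs.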
  12. Questions: 1) In your opinion, what is the best way to utilize a 512GB SSD and a 512GB NVMe drive? (I'm guessing the SSD for cache, but I'm not sure what the use for the NVMe would be.) 2a) Is there any way to set up partitions on my NVMe and pass the partitions through to various VMs instead of the whole device? 2b) What about using the NVMe as a disk share, or sharing the disk using something like SMB, NFS, etc.? My knowledge of all this is very limited, but my logic is that since the drive is physically in the computer, performance shouldn't be affected.
      Full system:
      Samsung 960 PRO 512GB - NVMe (currently unassigned, formerly my cache disk)
      Samsung 850 PRO 512GB - SSD (Array Disk 3)
      1TB - HDD (Array Disk 4)
      2TB - HDD (Parity)
      128GB - SSD (Array Disk 1)
      40GB - SSD (Array Disk 2)
      4x GTX 1080 Ti
      48GB DDR4 3000MHz
      i7-5820k (28 PCIe lanes)
      ASRock X99 Extreme4 motherboard
      A bit more info: I've read that using an NVMe disk for cache is a waste. After watching Spaceinvader One's NVMe passthrough video, I finally removed it as my cache and currently have it unassigned. I thought I could set up 3 partitions on the disk and, instead of passing through the whole device, pass the partitions through individually to different VMs, but sadly this is not the case. Ideally, I'd like to use the NVMe disk to boot 2 VMs and also use it to share my Steam library folder between the two VMs. There is one particular game I play that (I believe) would greatly benefit from the extra speed of the NVMe disk, which would result in shorter loading screens.
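      One note on 2a: a partition can't be handed over as a PCIe device, but libvirt can attach an individual partition to a VM as a raw block disk; a minimal sketch, with a placeholder device path:

      <disk type='block' device='disk'>
        <driver name='qemu' type='raw' cache='none'/>
        <source dev='/dev/nvme0n1p1'/>
        <target dev='hdd' bus='virtio'/>
      </disk>

      This goes through the virtio layer rather than giving the VM the NVMe controller itself, so it won't quite match full-passthrough numbers, but it does let different partitions of one NVMe back different VMs.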
  13. I believe I tried this already, but it just caused crashes. I was thinking something like: maybe give each VM only 6 "pinned" CPUs, like VM A gets CPUs 0-5 and VM B gets 6-11, but then tell each VM it has 12 CPUs. The point would be 6 dedicated CPUs for each VM, plus 6 more for each whenever those are free. I don't know if this can be done, as my knowledge and experience with VMs is mostly based on the top 5-10 results of a Google search, lol.
  14. I have a host machine with 48GB of RAM and 12 logical cores. I have 2 VMs that are used constantly, so I've given each of them 6 cores and about 14GB of RAM. Sometimes, though, 1 of the 2 VMs isn't being used, and having 6 cores sitting idle is such a waste. How can I set up my system to give either VM all 12 cores (and maybe a lot more RAM) when they are available, but split the cores evenly when both VMs need them? I've done some reading on libvirt.org, tried Googling "Dynamic CPUs in KVM", and tried a few things, but there always seems to be a problem. I installed virt-manager and tried a topology of 12 sockets, 1 core, and 1 thread. Both VMs 'function' fine until I start running any major tasks on both of them; then they just lock up. I've reverted to assigning 6 cores to each VM via the Unraid VM panel for now, so posting my XML 'as is' wouldn't be all that helpful.
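      A hedged sketch of one mechanism from the libvirt docs, untested here: define more vCPUs than the VM starts with, then change the online count at runtime. In the domain XML:

      <vcpu placement='static' current='6'>12</vcpu>

      Then "virsh setvcpus <name> 12 --live" brings the extra 6 online while the other VM is idle, and "virsh setvcpus <name> 6 --live" takes them away again, provided the guest kernel supports CPU hotplug (and hot-unplug for the downward step). It doesn't make the sharing automatic, but it avoids the 12-socket topology experiment.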
  15. Hahaha, thanks man! That's exactly what they meant.