m00nman

Members
  • Posts: 39
  • Joined
  • Last visited

Converted

  • Gender: Male
  • Location: Canada


m00nman's Achievements

Noob (1/14)

Reputation: 32

  1. That's a bad combination. The video driver is emulated, so the VM decodes the video and then re-encodes it to send over the internet via the remote desktop app. Remote desktop applications aren't really designed for streaming video at full fps, aside from game-streaming apps like Parsec, but even then you are putting a lot of stress on that CPU by decoding and then encoding a video stream in real time without a dedicated GPU. Can't you stream the video directly to your PC?
  2. Pinning is only necessary when you want predictable performance for a VM/container, especially under heavy load, at the expense of the pinned cores sitting idle whenever the VM pinned to them has nothing to do. Letting the host's kernel distribute the load (no pinning) across unused or underutilized cores gives you better efficiency, and possibly better performance for all the VMs/containers, at the expense of zero predictability as to whether the particular VM that you use for, say, playing games will have adequate CPU headroom when you need it (there are ways to assign priorities to VMs, but that's beyond the scope here). So it really depends on your use case. Enterprise solutions almost never use CPU pinning because they want to extract maximum performance from all of the VMs/containers. (A minimal pinning sketch is included at the end of this list.)
  3. Interesting.... Something just happened to me as I upgraded to a B760 motherboard from a C23x board (Haswell/Broadwell generation). It took me by surprise when the ASM1064 controller just stopped being recognised. I actually thought I got a lemon motherboard at first, but I started googling and found there was a f/w update for many 106x chips with PCIe gen2. But the 1064 is PCIe gen3, so that update must be incompatible. Well, thanks to this thread I flashed a f/w that was meant for the ASMedia 1166 onto the 1064 and it actually worked. I had nothing to lose, so... Anyway, thanks OP!
  4. Just to confirm what @trurl said. I just bought 2x Yottamaster 10Gbit/s 5-bay enclosures thinking I could reduce power usage by using the enclosures + a modern laptop. I did reduce idle power usage by around 40W; however, the issue with the enclosures was that they kept substituting the enclosure serial number for some drives' serial numbers at random on every boot. Because the drive serial numbers changed randomly every boot, Unraid thought the drives were missing. After messing around for 2 days I returned the enclosures today. The speed was great. If Unraid identified disks by PARTUUID instead of by-id it would likely have worked, but Unraid does it by-id, so it's a no-go, unfortunately. Then again, PARTUUID is just a partition ID, so it may not be feasible.
  5. I am saying I don't believe Windows XP supports Hyper-V Enlightenments. So for XP, you would need to enable the HPET timer instead. See below: I struck through what you do NOT need to add for XP, but change the HPET line to "yes". (A minimal sketch of the HPET setting appears at the end of this list.)
  6. XP may need HPET. I don't believe it supports anything better than that.
  7. First, try changing the machine type to Q35. i440fx does not support PCIe, just regular PCI, and you need PCIe since you are passing through an NVMe drive, which is a PCIe device. If that makes no difference, try changing "threads" to 1 and bump the number of cores instead. (See the sketch at the end of this list.)
  8. Hi SimonF, I believe it was version 6.11.x, the version that was current at the time of writing the OP. Yes, it is my understanding that you need all of the following options: <vpindex state='on'/> <synic state='on'/> <stimer state='on'/>. Additionally, some people reported that setting migratable='off' resolved GPU PCIe passthrough stutter. migratable='on' is only useful when migrating a VM from one host to another, and Unraid does not support that anyway, so it's safe to turn it off for everyone. (A sketch of these settings is at the end of this list.)
  9. Thanks, this will probably help a few people on here who have GPUs in PCIe passthrough. I didn't really notice any difference for my server Windows VMs; however, since Unraid doesn't support migrating a VM to a different host (unlike, for example, Proxmox), it makes sense to disable migration by default as well. I added your suggestion to the OP. @Lebowski89 have a look at the post above, it may be the fix you are looking for.
  10. I would check out the various suggestions on the Proxmox wiki page (https://pve.proxmox.com/wiki/PCI(e)_Passthrough). Did you switch the VM to UEFI (OVMF) as well? I believe you can run "mbr2gpt /convert /fullos" in the VM before the conversion if you don't want to reinstall. I also had a weird issue with a Proxmox rig and NVIDIA card stutter: I put the card into a different PCIe slot (connected via the chipset, not directly to the CPU; I believe it's actually an x8 slot, but it's just for video playback) and the issue went away. Also, apparently, passthrough works better with some brands of cards than others (https://forum.proxmox.com/threads/vfio_bar_restore-reset-recovery-restoring-bars.107318/#post-462777). I hear you about NVIDIA being greedy. I will probably not buy an NVIDIA card again either, but at this point I have a 3080 and a 3070, so it will probably be a while until I upgrade again. I used to have an RX580 and loved that card and the AMD Adrenalin software; I thought it was much better than NVIDIA's solution. The screen-tear and input-lag reduction option (can't remember the actual name) also worked much better than NVIDIA's (I don't have a {g-,free-}sync monitor or VRR TV).
  11. Hi Everyone, I am a little confused: is it possible to have "appdata" and "domains" live fully on the SSD while also having mover sync them to the array in case the SSD dies? It looks like I made the mistake of having mover do an SSD -> Array move, which deleted them from the SSD, and all containers and VMs failed to load afterwards.
  12. It's hard to say what the issue is. For your case I would only do #1 from this guide. You don't need KSM if you don't have any other Windows VMs, and you probably want to actually pin the cores to get maximum performance for gaming if you don't care about docker/other VM performance. As for the bad performance, I would try to put your 1080 back in and see if it resolves these issues for you. If it does (and I'm guessing it will), it is probably the AMD card. I don't really have any experience passing through AMD GPUs, but I read it's a pain in the butt.
  13. Something is off. I use it with Windows Server 2022 as well. No issues whatsoever.
  14. Someone made a plugin for Unraid (thanks, jinlife!). You should be able to use that now.
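
For the CPU pinning discussion in post 2, here is a minimal sketch of what pinning looks like in a libvirt domain XML, assuming a 4-vCPU guest tied to host cores 2-5 (the core numbers and the emulatorpin line are illustrative choices, not taken from the original post):

    <vcpu placement='static'>4</vcpu>
    <cputune>
      <!-- each vCPU is tied to one host core; those cores sit idle whenever the guest is idle -->
      <vcpupin vcpu='0' cpuset='2'/>
      <vcpupin vcpu='1' cpuset='3'/>
      <vcpupin vcpu='2' cpuset='4'/>
      <vcpupin vcpu='3' cpuset='5'/>
      <!-- optionally keep the emulator threads off the pinned cores -->
      <emulatorpin cpuset='0-1'/>
    </cputune>

Removing the <cputune> block entirely gives the no-pinning case described in that post: the host scheduler floats the vCPUs across whatever cores are free.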
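For post 5, a minimal sketch of the <clock> section with HPET enabled for an XP guest, assuming standard libvirt syntax; the rtc and pit lines are typical defaults shown only for context, not quoted from the original attachment:

    <clock offset='localtime'>
      <timer name='rtc' tickpolicy='catchup'/>
      <timer name='pit' tickpolicy='delay'/>
      <!-- XP relies on HPET rather than the Hyper-V enlightenments -->
      <timer name='hpet' present='yes'/>
    </clock>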
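For post 7, a sketch of the two suggested changes, switching the machine type to Q35 and using one thread per core; the machine version string, CPU mode, and core count are placeholders, not values from the original post:

    <os>
      <!-- Q35 exposes PCIe root ports; i440fx only provides legacy PCI -->
      <type arch='x86_64' machine='pc-q35-7.1'>hvm</type>
    </os>
    <cpu mode='host-passthrough'>
      <!-- more cores with a single thread each, rather than threads='2' -->
      <topology sockets='1' cores='8' threads='1'/>
    </cpu>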
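For post 8, a sketch of the relevant <features> and <cpu> lines combining the three Hyper-V options with migratable='off'; the relaxed/vapic/spinlocks entries are common defaults included only for context:

    <features>
      <acpi/>
      <hyperv>
        <relaxed state='on'/>
        <vapic state='on'/>
        <spinlocks state='on' retries='8191'/>
        <vpindex state='on'/>
        <synic state='on'/>
        <stimer state='on'/>
      </hyperv>
    </features>
    <!-- migration is not supported on Unraid, so disabling it is safe -->
    <cpu mode='host-passthrough' check='none' migratable='off'/>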