zeus83

Everything posted by zeus83

  1. I'm also having trouble with sync in Docker. It's awfully slow, and yes, I'm plotting on the machine using the same Docker container. So it may indeed be worth plotting via Docker and moving the farmer to a dedicated VM... I originally thought to run two containers: one solely for plotting and another for farming. What do you think, guys?
  2. Try executing something like this:
     ps aux | grep nvidia-persistenced
     It might also be that there are processes on your host utilizing the card. Run this and see whether any processes are bound to the GPU:
     nvidia-smi
     Otherwise I recommend removing the host Nvidia drivers and checking whether GPU passthrough works in that case.
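     The checks above can be bundled into one small diagnostic script. This is just a sketch (the `gpu_check` function name is mine, not from any tool), and it assumes `nvidia-smi` and `lsof` may or may not be installed, so it reports when a tool is missing instead of failing:

     ```shell
     #!/bin/sh
     # Quick host-side check for anything holding the GPU.
     gpu_check() {
         echo "--- nvidia-persistenced ---"
         # the [n] trick stops grep from matching its own process entry
         ps aux | grep '[n]vidia-persistenced' || echo "nvidia-persistenced is not running"

         echo "--- GPU state / processes (nvidia-smi) ---"
         if command -v nvidia-smi >/dev/null 2>&1; then
             nvidia-smi
         else
             echo "nvidia-smi not installed (no host NVIDIA driver?)"
         fi

         echo "--- open handles on /dev/nvidia* ---"
         if command -v lsof >/dev/null 2>&1; then
             lsof /dev/nvidia* 2>/dev/null || echo "no open handles found"
         else
             echo "lsof not installed"
         fi
     }
     gpu_check
     ```

     If the last section lists any processes, those are what's keeping the card busy on the host.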
  3. Hi, do you have nvidia-persistenced running on your host machine?
  4. I didn't run Battlefield 1, but any DX12 game I tried runs perfectly fine. Have you tried any other DX12 game? It's still worth checking that the clock timer in use is TSC, because some graphics APIs depend on this timer during draw calls.
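     Checking the host's clocksource is one sysfs read; here's a minimal sketch (the `check_clocksource` helper name is mine, and it assumes the standard Linux sysfs path, falling back to a message if it's absent):

     ```shell
     #!/bin/sh
     # Report the host kernel's current clocksource; for smooth draw
     # calls in the guest you generally want it to report "tsc".
     check_clocksource() {
         cs_dir=/sys/devices/system/clocksource/clocksource0
         if [ -r "$cs_dir/current_clocksource" ]; then
             echo "current:   $(cat "$cs_dir/current_clocksource")"
             echo "available: $(cat "$cs_dir/available_clocksource")"
         else
             echo "clocksource sysfs entry not found"
         fi
     }
     check_clocksource
     ```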
  5. Try to put this section into your VM XML settings:
     <features>
       <acpi/>
       <apic/>
       <hyperv>
         <relaxed state='on'/>
         <vapic state='on'/>
         <spinlocks state='on' retries='8191'/>
         <vpindex state='on'/>
         <synic state='on'/>
         <stimer state='on'/>
         <reset state='on'/>
         <vendor_id state='on' value='1234567890ab'/>
         <frequencies state='on'/>
       </hyperv>
       <kvm>
         <hidden state='on'/>
       </kvm>
       <vmport state='off'/>
       <ioapic driver='kvm'/>
     </features>
     <cpu mode='host-passthrough' check='none' migratable='on'>
       <topology sockets='1' dies='1' cores='8' threads='1'/>
       <cache mode='passthrough'/>
       <feature policy='require' name='topoext'/>
       <feature policy='require' name='svm'/>
       <feature policy='require' name='apic'/>
       <feature policy='require' name='hypervisor'/>
       <feature policy='require' name='invtsc'/>
     </cpu>
     <clock offset='localtime'>
       <timer name='hypervclock' present='yes'/>
       <timer name='hpet' present='no'/>
       <timer name='tsc' present='yes' mode='native'/>
     </clock>
  6. Hi, First, you don't need that many virtual CPUs for a gaming VM; passing 8 cores might actually improve the gaming experience. Second, it's worth reading this post, which is extremely useful for setting up a gaming VM: https://mathiashueber.com/performance-tweaks-gaming-on-virtual-machines/ Third, in games, what is your GPU load percentage?
  7. Well, overall it works, but... SAM doesn't work in a virtual machine. On bare metal, enabling SAM yields up to a 10% improvement depending on the game. Remote play is hardly feasible. I used to use Moonlight / Parsec. Moonlight is not an option for Radeon cards, so Parsec remains, but the encoding latency is awful. In a game that maintains 100 fps, Parsec barely encodes 30 fps (1440p). Switching to 1080p helps a little, but still no stable 60 fps. Does anyone know if it's the same with the 5700 XT? I tried my GTX 1650 Super and it performs way better with Parsec... I don't understand how this is possible.
  8. What are your VM settings? Do you have these lines in the VM XML view?
     <vendor_id state='on' value='28a2c82d201d'/>
     <kvm>
       <hidden state='on'/>
     </kvm>
     <ioapic driver='kvm'/>
     Do you specify the VGA BIOS dump for your video card?
  9. I've decided to put this here because I just spent several hours trying to figure out AMD Radeon 6800XT passthrough to my Win 10 VM. Previously I've only worked with Nvidia GPUs and experienced zero issues with them. Radeon is a bit of a different story. I'm running unraid on a Ryzen 5950X / Asus Crosshair VIII Hero. The GPU is a Gigabyte Radeon 6800XT (ref), so not all tips may be relevant to your hardware/software. The hints are:
     • BIOS: disable CSM compatibility mode and leave only UEFI. I couldn't get the GPU passed through in Legacy mode.
     • Only use a Q35 virtual machine. i440fx didn't work for me; I couldn't install a driver.
     • Don't install amdgpu host drivers. I couldn't pass the GPU through to the VM in that case.
     • Passthrough didn't work on unRAID 6.8 for me (always got error 43), so I had to switch to unRAID 6.9rc2 on the 5.10 kernel.
     • For the 6800XT, pass through four devices:
     • Enabling AMD Smart Access Memory (SAM) didn't work for me. I got error 43 in the Windows VM immediately (it seems it's not yet implemented and exposed in vfio: https://github.com/qemu/qemu/commit/3412d8ec9810b819f8b79e8e0c6b87217c876e32).
     • Encoding delay in Parsec is awful, > 20 ms (H.265, 1440p). Something is wrong with the driver/encoder, I think. I haven't figured out what's wrong yet, but I saw someone complaining about it on reddit as well.
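     Before passing the card's devices through, it can help to confirm how they're grouped by the IOMMU. A small sketch (the `list_iommu_groups` name is mine; it assumes the standard `/sys/kernel/iommu_groups` layout and an optional `lspci`):

     ```shell
     #!/bin/sh
     # Walk /sys/kernel/iommu_groups and print each group's PCI devices,
     # so you can check the GPU's functions sit in a separable group.
     list_iommu_groups() {
         if [ ! -d /sys/kernel/iommu_groups ]; then
             echo "no IOMMU groups found (IOMMU disabled in BIOS or kernel?)"
             return 0
         fi
         for g in /sys/kernel/iommu_groups/*; do
             echo "IOMMU group ${g##*/}:"
             for d in "$g"/devices/*; do
                 addr=${d##*/}
                 if command -v lspci >/dev/null 2>&1; then
                     echo "  $(lspci -nns "$addr")"
                 else
                     echo "  $addr"
                 fi
             done
         done
     }
     list_iommu_groups
     ```

     Every device listed in the GPU's group has to go to the VM together (or stay on the host together).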
  10. I suspect you need compatible Nvidia drivers in the host system, whether via unraid nvidia or manually installed. I checked my 3080 and the fans never stop while it's in the host; however, if I pass it through to my Windows 10 VM, where the latest drivers are installed, the fans stop.
  11. Hi, I recently bought an RTX 3080. I can confirm passthrough works like a charm. It has only two devices (video & audio) compared to the previous series, so it's even easier to pass through. I confirm Moonlight streaming also works; you only need to update the driver in your VM to a 30xx-supported version. I got an average +30% performance over my RTX 2080 Ti. However, the Nvidia unraid plugin doesn't ship a newer driver yet, so nvidia-smi doesn't work for me on the host system.
  12. I normally play using a DualShock 4. No delays at all. As for the mouse, I haven't played with it, but overall I don't like how the mouse behaves on the desktop with Moonlight. If I use programs that require a mouse, I use Google Remote Desktop. Zero mouse issues there.
  13. Hi, I'm running a Ryzen build on an Asus ROG Crosshair VIII Hero (X570 chipset). I initially had ECC memory and had no issues with it; however, I can't prove whether it corrects errors or not. The ECC I used wasn't expensive at all compared to OC memory for gamers, but the thing is that only unbuffered ECC is supported and not many manufacturers offer it. You'll also be limited to the stock frequency of 2666 MHz, which means your Ryzen Infinity Fabric will run at 1333 MHz, and that means worse performance. I then switched to Kingston DDR4 32Gb (2x16Gb) 3466 MHz pc-27700 HyperX FURY Black. My gaming VM benchmarks showed merely a 1-2 FPS improvement, so you can easily game on ECC as well.
  14. Hi guys, I recently set up a Hive OS VM and encountered a couple of issues along the way, so I've decided to prepare a quick-start guide. This guide won't explain what Hive OS is and what it's used for, nor will it show how to set up your Hive OS account and crypto wallets. There are a lot of guides on this on the web.
     [Prerequisite] The only prerequisite is that you have set up your Hive OS account and properly configured at least one farm and worker. You can go through the guide here or just google any appropriate guide you like.
     [Instructions] Download the latest Hive OS GPU image here. Unpack it and put it into your isos share temporarily. For me it was a 7Gb file named [email protected]. Normally this image file is supposed to be written to a USB stick, but we instead just use it as our VM disk. You need to copy the image to your VM share:
     cd /mnt/disks/vm/
     mkdir hiveos
     cp /mnt/user/isos/[email protected] hiveos/vdisk1.img
     Now let's create the VM itself, which will use our Hive OS disk image. Use the Ubuntu template, because Hive OS is an Ubuntu-based distro. There are, however, a couple of issues we must address. First, I suggest leaving OVMF as the default BIOS; if the VM won't boot, try the same steps with SeaBIOS. I wasn't able to boot some cards with the default OVMF BIOS. Second, it seems that Hive OS doesn't contain the Virtio drivers normally found in Ubuntu and most Linux distros. I could overcome this by setting the primary vdisk bus to SATA. As for CPU and memory, don't assign many resources to it. You have to pass through your dedicated GPU as well, because Hive OS isn't much use without it. Switch to the XML view to change the network adapter, because, as I said before, there are no Virtio drivers preinstalled in Hive OS. I chose rtl8139 and it worked fine for me. Remember that whenever you edit anything using the form view, the XML part will reset and you'll have to add it once again. Now we are ready to start our VM.
Once the VM has started properly, you need to figure out its assigned IP. I use my router's admin page for that. Then we need to connect to our VM and finish the worker configuration. Since we passed through the GPU and didn't configure VNC, we'll use Hive OS's integrated shellinabox: open https://YOUR_IP:4200/ in a browser. The default login is user, password 1. If everything is OK you'll see a welcome screen like this. Your GPU must be present and identified correctly. It's now time to finish our Hive OS worker setup. Type firstrun and follow the instructions to enter your worker's RIG_ID and RIG_PASSWORD. You'd better reboot your VM after the configuration is finished. After completion, check your Hive dashboard to see that the worker is online, and assign a flight sheet to it. [FAQ] todo
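     The image-copy step from the guide can be wrapped in a tiny reusable function. A sketch only: the `prepare_hiveos_vdisk` name and argument order are mine, and the paths are placeholders for the shares from the post (/mnt/user/isos/... and /mnt/disks/vm/hiveos):

     ```shell
     #!/bin/sh
     # Copy an unpacked Hive OS image into a VM share as vdisk1.img.
     # $1 = path to the unpacked image, $2 = target directory on the VM share
     prepare_hiveos_vdisk() {
         src="$1"
         dstdir="$2"
         mkdir -p "$dstdir"
         cp "$src" "$dstdir/vdisk1.img"
         echo "vdisk ready: $dstdir/vdisk1.img"
     }
     ```

     Called as, e.g., prepare_hiveos_vdisk /mnt/user/isos/<your image file> /mnt/disks/vm/hiveos; the image filename is whatever you downloaded.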
  15. Yep, 3200 MHz is the max officially supported, but it's AMD itself who advertised 3600 MHz as the de facto standard for 3rd-gen Ryzen builds, although I understand it is sheer marketing. I'll monitor the stability and will revert to 3200 if the issue persists. As a backup I also have 32Gb of ECC DDR4 2666, which ran incredibly stably. However, I just wanted to gain more from my Ryzen 🙂 Switching to 3600 gave me 1-2 FPS in benchmarks 😁
  16. Caught a similar error yesterday running a Ryzen 3900X with memory at 3600 MHz (OC from 3466). Reddit folks say it's due to overclocked RAM instability:
  17. Hi, I'm a bit new to desktop Linux. Can anyone advise whether it's possible at all to set up a completely headless, GPU-accelerated remote desktop in a Linux VM? I've googled a lot but couldn't find any suitable step-by-step guide on this. I have no problem passing a GPU to the Linux VM, and when a display is connected the desktop session is accelerated, but I can't achieve the same over a remote connection. I tried Google Remote Desktop, VNC, and even Steam Remote Play. I just can't figure out how to do it. I installed the Nvidia driver and the GPU is detected via nvidia-smi, but it has no effect on desktop speed.
  18. Hi, Try adding this to your hyperv hints:
     <hyperv>
       <relaxed state='on'/>
       <vapic state='on'/>
       <spinlocks state='on' retries='8191'/>
       <vpindex state='on'/>
       <synic state='on'/>
       <stimer state='on'/>
       <vendor_id state='on' value='1234567890ab'/>
     </hyperv>
     This is a blind guess, but the synic hint has something to do with interrupts, so it may help. Moreover, it may give better performance.
  19. Hi, If you see your devices in the Windows Device Manager, then this is not a passthrough issue. Looking at your Nvidia driver installation error, I suggest you do what it says and download the standard drivers package (the one without DCH in its name). Here is the latest version: https://www.nvidia.com/download/driverResults.aspx/156778/en-us
  20. Thank you guys for the quick response; still, some questions arise. Can you elaborate more on this bit? Why should I bypass shfs? Will it be OK to access through /mnt/disks? When I perform an ordinary cp command on btrfs (no reflink), will it physically copy all file chunks to a new location? Will the copy be defragmented as a result?
  21. Hi, I need a bit of advice. I have two spare NVMe drives in unassigned devices from which I want to create a secondary cache pool solely for VM images. I will probably store a few Windows 10 images and a few Linux images. For the gaming VM I will use another NVMe drive passed through directly, but I still prefer the system itself to be in an image file on the cache pool. I like the idea of doing easy backups and copying images to create new VM instances rapidly. So far I've read through many guides here and there on btrfs and decided to make a raid0 pool to maximize performance. I still have a few things to clarify:
     • Do I understand correctly that, if I do proper backups of VM images and libvirt, I'll be on the safe side with raid0 and there's no reason to go with raid1? (Given I'm OK with the pool being temporarily out of service in case of failure, until I recover the data.)
     • Is it correct to assume raid0 btrfs gives the maximum raw read+write performance?
     • I've read that CoW enabled for VM image files leads to heavy fragmentation, but it's not clear how critical this is for SSD drives. Should I turn off CoW for VM image files?
     • If I disable CoW, what is the reason to use btrfs then? For instance, I won't be able to do this for free, right?
     cp --reflink=always vdisk.img vdisk_2.img
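     The two knobs in question can be tried out in a small demo. A sketch under these assumptions: the `demo_reflink` name is mine; chattr +C must be applied while the file is still empty; and since cp --reflink=always fails on filesystems without reflink support, the demo uses --reflink=auto, which falls back to a plain copy elsewhere:

     ```shell
     #!/bin/sh
     # Demo: disable CoW on a vdisk file with chattr +C, then make a
     # reflink-if-possible copy of it with cp --reflink=auto.
     demo_reflink() {
         dir=$(mktemp -d)
         touch "$dir/vdisk.img"
         # +C only takes effect on an empty file; non-btrfs filesystems reject it
         chattr +C "$dir/vdisk.img" 2>/dev/null || echo "chattr +C not supported here (not btrfs?)"
         echo "disk data" > "$dir/vdisk.img"
         # =auto: shared-extent copy where supported, ordinary copy otherwise
         cp --reflink=auto "$dir/vdisk.img" "$dir/vdisk_2.img"
         cat "$dir/vdisk_2.img"
         rm -rf "$dir"
     }
     demo_reflink
     ```

     Run against your actual image directory instead of the mktemp sandbox to see how each option behaves on your pool.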