
alexciurea
Members · 42 posts

Everything posted by alexciurea

  1. Thanks 1812 for such detailed tests! I wonder if a mix of CPU HT pairs and sequential assignment would be a good compromise for achieving both better overall performance and a pressure valve for when the entire unRAID box is under heavy load... For example, on a 6-core/12-thread CPU with 2 VMs under heavy load, assign: VM1: 1,7 + 2-4; VM2: 5,11 + 8-10 (assuming 0,6 are left for unRAID). I mean, with sequential assignment only (e.g. VM1: 1-5, VM2: 7-11), if both VMs are under heavy load you'll probably end up with some responsiveness issues? Whereas if you assign in pairs, you won't utilize the CPUs to the max. Just a thought... will have to try
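The VM1 pinning above would end up as a <cputune> block in the VM's XML. A minimal sketch (the VM name and thread numbers are just the example values from this post; check your own HT pairings with lscpu -e before copying anything):

```xml
<!-- VM1: HT pair 1/7 plus sequential threads 2-4; 0 and 6 stay with unRAID -->
<vcpu placement='static'>5</vcpu>
<cputune>
  <vcpupin vcpu='0' cpuset='1'/>
  <vcpupin vcpu='1' cpuset='7'/>
  <vcpupin vcpu='2' cpuset='2'/>
  <vcpupin vcpu='3' cpuset='3'/>
  <vcpupin vcpu='4' cpuset='4'/>
</cputune>
```

VM2 would get the same shape with cpusets 5, 11, 8, 9, 10.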
  2. Plus 1 from me if it can be a dropdown showing the available dump files in a default location, in the same way it's done for the VirtIO drivers or ISOs
  3. In this situation, ideally, you should be able to pass through your 980 without specifying the rom. Good luck!
  4. Sure, but seeing the GPU listed for passthrough in the VM definition does not mean that it will work. If it's the only GPU in your system (something you didn't clarify), it will not work (give video output) unless you use the rom trick. This is relevant for NVIDIA; AMD, I've heard, does not have this issue, but I cannot personally confirm.
  5. @zerrikan unRAID usually uses 1 GPU for displaying the console output; that's the integrated GPU, or on platforms without an integrated GPU, the GPU in the first PCIe slot (a.k.a. primary). This guide refers to passing through an NVIDIA primary GPU: basically telling unRAID to give up using that GPU for the console and to hand it to a VM instead. A rom file is required for the procedure to work. As of today, one cannot pass through an NVIDIA primary GPU without doing the steps in this guide. To comment on your specific confusion: I assume you have 2 GPUs in your system (integrated + 210). Passing the non-primary 210 to a VM will be easier and might not require the steps in the guide.
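For reference, the rom file from the guide ends up as a <rom> element on the GPU's <hostdev> entry in the VM XML. A sketch, with placeholder PCI address and rom path (neither is from this thread; take the address from lspci and the rom from your own vBIOS dump):

```xml
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <!-- bus/slot of the primary NVIDIA GPU, as shown by lspci -->
    <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
  </source>
  <!-- dumped vBIOS; the path is a placeholder -->
  <rom file='/mnt/user/isos/vbios/gpu.rom'/>
</hostdev>
```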
  6. Use your 470 connected to a monitor and log in directly from there with your root user. Then run dmesg once logged in, while you try to start the VM. Also, can you try a different OS (e.g. Ubuntu GNOME), just to see the passthrough working?
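A small helper for the dmesg step, assuming you're logged in on the console. The keyword list is just a guess at the usual passthrough-related kernel messages (vfio/IOMMU), not something from this thread:

```shell
# filter kernel messages for common passthrough-related keywords
passthrough_filter() {
  grep -iE 'vfio|iommu|dmar|nvidia'
}

# on the server console, stream the kernel log through the filter
# while starting the VM from the web UI:
#   dmesg --follow | passthrough_filter
```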
  7. I see others facing a similar-ish issue: https://lime-technology.com/forum/index.php?topic=52857.0 Something in common: these folks are on i440fx 2.7 and also unRAID 6.3. Something to keep in mind, so I'll not update from 6.2.3
  8. The log for the VM is without errors, all looks OK... I assume the Titan X is in slot 2, the GTX 470 in slot 1, and you are trying to pass through the Titan X. I would check what's displayed on the host at the time of VM start: run dmesg from the command prompt while logged in on the tower. I would also play with the PCIe GPU speed in the BIOS, specifically setting it to gen 1/2/3 instead of auto. Also try to play with the machine type, SeaBIOS vs OVMF... Have you tried a different OS instead of Windows? Try also i440fx version 2.5; 2.7 is probably the default in 6.3.0... Good luck
  9. Hi guys, maybe you can suggest something on this issue. I have a Windows 10 VM with a vdisk, but I also had an old SSD with previous Windows 10 partitions. I enabled destructive mode in Unassigned Devices and then deleted the old partitions on the SSD. Then I converted the vdisk to a physical disk with the "dd if of" command, targeting this SSD, and configured the VM to use the SSD device, by id... Basically I did the steps from gridrunner's guide, thanks a lot. Started the VM, all OK except that I cannot extend the partition to utilize the entire SSD... The option is greyed out and the disk layout does not show any unallocated space... What can I check further? Thanks! Alex
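For what it's worth, the greyed-out Extend is usually because the partition table copied along with the vdisk still describes the old, smaller disk. Below is a file-based simulation of the copy (sizes and file names are made up stand-ins for the real vdisk and SSD), just to show that the target keeps its full capacity even though the copied table knows nothing about it:

```shell
# stand-ins: a 40 MiB "vdisk" copied onto a 100 MiB "ssd"
truncate -s 40M vdisk.img
truncate -s 100M ssd.img

# the dd step from the guide; conv=notrunc mimics writing to a block
# device (a real /dev/sdX cannot be truncated anyway)
dd if=vdisk.img of=ssd.img bs=1M conv=notrunc status=none

# the target still has its full 100 MiB...
stat -c %s ssd.img
```

...but any GPT copied in from the vdisk still says the disk ends at the old size, with its backup header sitting in the middle of the SSD instead of at the last sector. On the real disk, something like `sgdisk -e /dev/sdX` (moves the backup GPT to the actual end of the disk) should make the unallocated space visible; a recovery partition sitting between C: and the free space can also grey out Extend in Disk Management.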
  10. Hi, many times we want to do a stop/start swap of VMs because they use common resources (keyboard/mouse, GPU, USB controller, etc.). Let's say we have VM1 started and VM2 stopped. The VMs screen (where all the VMs are listed) should allow defining such a logical entanglement between 2 (and only 2) VMs, so that a stop command is sent to VM1 and, once it's executed (VM stopped, all resources released), a start command for VM2 is sent. It should be easily identifiable which VMs are entangled (e.g. suggestion: list them in pairs). And add a "Swap" option to the VM control menu (the menu where you can issue start/stop/force stop commands). The Swap option should be enabled only if one VM is up AND the other down; if both are up OR both down, "Swap" should be disabled. WDYT? PS: It might also be useful/abused as a way of high availability?
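Until something like this exists in the UI, the same logic could be scripted with virsh from the unRAID shell. A sketch of the swap rules described above (VM names are placeholders; the state strings are the usual virsh ones, and $VIRSH is a variable only so the command can be stubbed for testing):

```shell
# Swap two "entangled" VMs: shut down the running one, wait for it to
# release its devices, then start the other. Refuse unless exactly one
# of the pair is running.
VIRSH="${VIRSH:-virsh}"

vm_state() { $VIRSH domstate "$1" 2>/dev/null; }

swap_vms() {
  a=$1; b=$2
  if [ "$(vm_state "$a")" = "running" ] && [ "$(vm_state "$b")" = "shut off" ]; then
    $VIRSH shutdown "$a"
    # wait until the first VM has fully stopped and freed GPU/USB/etc.
    while [ "$(vm_state "$a")" != "shut off" ]; do sleep 2; done
    $VIRSH start "$b"
  elif [ "$(vm_state "$b")" = "running" ] && [ "$(vm_state "$a")" = "shut off" ]; then
    swap_vms "$b" "$a"
  else
    echo "swap disabled: exactly one of $a/$b must be running" >&2
    return 1
  fi
}

# usage (placeholder names): swap_vms VM1 VM2
```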
  11. Hi, I'm not able to tell all the differences, other than that SeaBIOS is suitable for passing through non-UEFI devices, while OVMF is recommended for UEFI ones. To my understanding, SeaBIOS is more stable but has fewer features than OVMF. I also noticed that success depends on the OS you are trying to install: some Linux distros worked for me only with SeaBIOS, while others only with OVMF, although I was passing through the same GPU (on a new-generation mobo).
  12. And not only this, but to be able to reallocate the resources without editing each VM individually. For example, by having an overview of all the VMs (e.g. a table with all VMs and their most frequently changed resources) so that one could allocate CPU pairs, RAM, GPU, audio, and USB controllers in bulk.
  13. I would expand it further... Have a general allocation table/dashboard, where the header has columns like: VM Name, Description, GPU, Audio, USB Controller, CPU Pair 1, CPU Pair 2, CPU Pair 3, CPU Pair 4, and so on. A CPU pair should be more like 'CPU0;7', 'CPU1;8', and so on, for an 8-core/16-thread CPU for example... This table would initially be filled with the currently defined VMs, one row for each existing VM and its current resources, and have dropdowns for the installed resources like GPUs, audio devices, and USB controllers... Users would mark with 'x' the CPU thread pairs they want to allocate to each VM. There should probably also be some initial configuration to pair GPU devices with their respective rom files (e.g. for first-slot NVIDIA GPU passthrough), so that the 'rom' element is also added to the XMLs. In this way, one could easily reallocate resources to the VMs as per their immediate needs, without editing each VM separately and losing track of the CPU allocation, GPU, etc... This kind of feature could be a nice addition to the ControlR mobile app as well.
  14. Thanks gridrunner. @KRSogaard, my understanding is that you will use the integrated GPU for unRAID, and the GT 610 should pass through regularly, without the steps mentioned by gridrunner. gridrunner's instructions are required when no GPU can be allocated to unRAID.
  15. Hello, is there performance loss when playing the game in the VM with the GTX 1070 passed through? Is it possible to share some benchmark comparison between native Windows 10 and Windows 10 in a VM, with the same number of CPU cores allocated? Thanks