scorcho99
Members
Posts: 190
Everything posted by scorcho99
-
Can multiple guests access the same virtioFS drive at the same time?
-
What I'd like to do is prefer using disks that already have share data on them until they are full, and only then start writing to extra disks. Basically I want to limit unnecessary spin-ups. I'm looking at the docs and I don't think I can do this. I can set 'include' to only use the current disks, but that won't automatically use others when it runs out of space. I thought fill-up might do this, but it sounds like that just goes for the lowest numbered disk with free space.
-
I'd like the compile flags to be set so that I can use VirGL, which can accelerate video decode in VMs without having to pass through a physical GPU.
-
I am still on 6.9.2 at the moment. I plan to convert the VM to UEFI. There is a janky way to trick libvirt into taking snapshots with UEFI, so I guess I'm stuck hacking it together since there seems to be no priority on fixing this. I'm not sure this solution will work for everyone. I might also test using an Nvidia card in place of the Radeon.
-
No updates for me, still on 6.9.2. I may test 6.12 but I doubt it will be an improvement. I will also try an Nvidia card in that slot now that I have a spare, but that is mostly academic. I can probably sort of make the VM work with OVMF, I guess, but I'll lose easy snapshots.
-
Safe/clean shutdown and system restart from the command line
scorcho99 replied to scorcho99's topic in General Support
So it took a little digging and another post on here, but it seems the 'powerdown' command calls a script which in turn runs a 'stop' file; that file doesn't exist by default but can be created. It runs very early in the shutdown process, so it should fit this use case nicely. -
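For reference, a minimal sketch of what that stop file could contain. The /boot/config/stop path and the actions shown are assumptions; substitute your own pre-shutdown tasks:

```shell
#!/bin/bash
# Hypothetical /boot/config/stop script: unRAID's powerdown sequence runs
# this (if present) early in shutdown, before the array is stopped.

echo "stop script: running pre-shutdown tasks"

# Example action: flush filesystem buffers. This is a placeholder for your
# own steps, e.g. gracefully stopping VMs or services before the array stops.
sync

echo "stop script: done"
```

Since it runs before the array goes down, anything on the array or cache is still reachable at this point.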
Is there a way to do this? I'd like to perform some actions before shutdown/restart using bash scripts. Is there a way that I can hook into the existing shutdown/restart to do this? Failing that I guess I'd like to make my own shutdown script that calls the safe shutdown somehow so that I don't need a parity check on reboot.
-
This seems like it is possible but I don't know how to do it since I don't know much about syslinux or how unraid implements it. I'd like to add some utilities, including a different version of memtest that auto reboots on successful completion, to the unraid boot menu. Can I already boot ISO files directly? Or how do I generate a syslinux OS image that I can add to the boot menu? Has anyone done something similar?
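For what it's worth, unRAID's boot menu lives in /boot/syslinux/syslinux.cfg, and the existing Memtest86+ entry there shows the pattern. A new entry for a tool that boots like a kernel image might look like this (the label and file name are made up for illustration):

```
label My memtest build
  kernel /memtest-auto
```

Syslinux can also boot ISO files by chainloading them through memdisk, something like:

```
label Utility ISO
  kernel /memdisk
  initrd /utils.iso
  append iso
```

memdisk ships with syslinux, but whether a given ISO boots this way depends on the ISO itself, so treat this as a sketch rather than a guarantee.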
-
So this seems to be working fine (so far) for my purposes at least. Since I was reusing a previously used unassigned devices disk, all I did was enable destructive mode, delete the partitions, then create the new partition and filesystem like so (NOTE: sd* is the unassigned device name, this will be different on your system):

parted /dev/sd*
mklabel msdos
mkpart
Partition type? primary/extended? primary
File system type? [ext2]? ext4
Start? 0%
End? 100%
quit
mkfs.ext4 /dev/sd*1

At this point unassigned devices picked up that there was a partition and I just clicked the mount button.
-
This is an old post I know... I have this problem with the pegged core with my secondary GPU, a Radeon R7 360. The funny thing is I always have that problem in 6.10 and later, so I stayed on 6.9. But I recently noticed that I actually sometimes have it on 6.9 as well. The sporadic nature, and the fact that it seems to never happen on first boot but once it does happen a VM that uses the card might still boot OK, made me think it is some kind of resource leak or something. Can you explain the 'memory not mapped properly' issue I might encounter and how I would troubleshoot it? I have plenty of free memory, but I could see some sort of fragmentation issue being involved. I'm pretty stumped on this one though.
-
I have a similar issue with an older Radeon that works fine in 6.9.2 but breaks in 6.10 and later, with a black screen and one core pegged at 100% when the VM is up. If I use an OVMF VM it works OK, but I don't really use those because they do not fully support VM snapshots. Plus it is a pain to switch a traditional BIOS Windows VM install to UEFI.
-
GPU virtualization (virtio-gpu, virGL, sr-iov, MxGPU, VDI, spice)
scorcho99 replied to matthope's topic in Feature Requests
VirGL supports video decoding acceleration (and actually recently added some encoding acceleration) which might be an even bigger boon than its 3D acceleration. I'd really like to have access to that. -
Thanks @ghost82, that is kind of a headache to parse but I feel I understand the structure a bit better now. Based on reading this, I actually think that provided the VM is off (nothing is mapped to memory in that case) there is no negative effect, as long as I restore pflash before startup. (I never actually tried booting with the 'rom' value set. Maybe it works fine? It seems like an invalid, or at least not covered, config based on the above.) That is acceptable for my use anyway; I generally do all my snapshots with the VM off to avoid the restores being in a crashed state.
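To make the workflow concrete, here is a rough sketch of how I picture the off-line snapshot dance, with the pflash/rom swap done through the XML. The VM name, paths, and the sed pattern are all assumptions; check your own XML before trying anything like this:

```shell
#!/bin/bash
# Bail out gracefully if libvirt isn't available (e.g. outside unRAID)
command -v virsh >/dev/null 2>&1 || { echo "virsh not available"; exit 0; }

VM=win10   # placeholder VM name; assumes the VM is shut off

# 1. Dump the current definition and keep an untouched backup
virsh dumpxml "$VM" > "/tmp/$VM.xml"
cp "/tmp/$VM.xml" "/tmp/$VM.xml.bak"

# 2. Switch the UEFI loader from pflash to rom so libvirt permits an
#    internal snapshot (sed pattern is illustrative; verify your XML)
sed -i "s/type='pflash'/type='rom'/" "/tmp/$VM.xml"
virsh define "/tmp/$VM.xml"

# 3. Take the internal snapshot while the VM is off
virsh snapshot-create-as "$VM" pre-update

# 4. Restore the original pflash definition before the next boot
virsh define "/tmp/$VM.xml.bak"
```

The whole point of step 4 is the "restore pflash before startup" condition above; skipping it is exactly the untested 'rom' boot case.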
-
@SimonF Did you ever find any issues with this backdoor way of allowing UEFI VMs to take internal snapshots? I gather it did not work with TPM Windows 11, but I don't need that, just Windows 10 and Linux VMs. I don't expect it will save and restore the virtual BIOS settings (which I think was the main reason this was initially disabled, ugh) but I don't care much about those. It is very frustrating that they didn't leave an override option for this.
-
This would be a nice feature, but I just link up to unraid with virt-manager (SpaceInvaderOne has a tutorial) and manage snapshots with that. You could also do it from the command line if you want to do it the hard way.
-
You can try using network filters to block all traffic except to certain ip ranges. https://libvirt.org/formatnwfilter.html
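As a rough sketch (the filter name and the 192.168.1.0/24 range are examples I made up, not from the docs), a filter that only allows traffic to one subnet could look something like:

```xml
<filter name='allow-lan-only' chains='root'>
  <!-- accept traffic whose destination is the allowed range -->
  <rule action='accept' direction='out' priority='100'>
    <ip dstipaddr='192.168.1.0' dstipmask='255.255.255.0'/>
  </rule>
  <!-- accept replies coming back from that range -->
  <rule action='accept' direction='in' priority='100'>
    <ip srcipaddr='192.168.1.0' srcipmask='255.255.255.0'/>
  </rule>
  <!-- drop everything else -->
  <rule action='drop' direction='inout' priority='1000'/>
</rule-note-removed></filter>
```

You would then reference it from the VM's interface with a filterref element (filter='allow-lan-only'). Be aware a blanket drop will also kill ARP and DHCP, so in practice you'd probably need accept rules for those too.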
-
Modify VM XML with script, where are these definitions located?
scorcho99 replied to scorcho99's topic in VM Engine (KVM)
There are indeed VM XML definitions here, but when I modify them the changes don't seem to apply to the running VMs. Is there a way to force the changes to be updated? Edit: I ended up just using virsh define on the modified XML file. I think I'm just going to virsh dumpxml, modify that XML, and then use virsh define to commit the changes back. -
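In case it helps anyone else, the loop I settled on looks roughly like this (the VM name is a placeholder; note that virsh define only updates the persistent config, so a running VM picks the change up on its next start, not live):

```shell
#!/bin/bash
# Bail out gracefully if libvirt isn't available (e.g. outside unRAID)
command -v virsh >/dev/null 2>&1 || { echo "virsh not available"; exit 0; }

VM=myvm   # placeholder VM name

# Dump the persistent definition to a working file
virsh dumpxml "$VM" > "/tmp/$VM.xml"

# ... modify /tmp/$VM.xml here with sed, a script, or an editor ...

# Commit the modified definition back; takes effect on the next VM start
virsh define "/tmp/$VM.xml"
```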
I have since tried 6.10.0 and 6.10.2 (based on reading the release notes, there was a change in the default passthrough method for 6.10.3). Unfortunately, there was no difference. I guess I'm going to remain on 6.9.2.
-
Mounting vdisk in read only while VM is running
scorcho99 replied to Guillaurent's topic in VM Engine (KVM)
Doesn't rsync do delta copies as an option? Are you concerned about downtime of the VM (it's shut down while the giant file is copying) or the bandwidth needed? vdisks can be mounted directly; I've done it with nbd and qcow2 disks anyway. I'm not sure about concurrent access with a single read-only mount. I want to say I've heard of that as something you can do with VMware vdisks, but it's only a vague memory. For my backups I shut down all my VMs, create a read-only btrfs snapshot, and then immediately start them back up. Then I back up from the snapshot. The VMs are only down for a few minutes and can change and run while the backup is slowly performed from a snapshot of them in an off state. Maybe something like that? -
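Roughly, the snapshot-then-backup flow looks like this. All the paths, the backup target, and the wait step are placeholders for whatever your layout is:

```shell
#!/bin/bash
# Bail out gracefully if the needed tools aren't available
command -v virsh >/dev/null 2>&1 || { echo "virsh not available"; exit 0; }
command -v btrfs >/dev/null 2>&1 || { echo "btrfs not available"; exit 0; }

DOMAINS=$(virsh list --name)

# 1. Shut the VMs down cleanly
for vm in $DOMAINS; do virsh shutdown "$vm"; done
# (in practice you'd poll 'virsh list' until they are actually off)

# 2. Take a read-only btrfs snapshot of the vdisk share (placeholder paths)
btrfs subvolume snapshot -r /mnt/cache/domains /mnt/cache/domains_snap

# 3. Start the VMs back up immediately; they were only down for minutes
for vm in $DOMAINS; do virsh start "$vm"; done

# 4. Back up from the frozen snapshot at leisure
rsync -a /mnt/cache/domains_snap/ /mnt/backup/vms/

# 5. Remove the snapshot when done
btrfs subvolume delete /mnt/cache/domains_snap
```

Since the snapshot is of cleanly shut-down disks, the backup restores like a normal power-off rather than a crash.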
Switching a VM between these is always a mess since i440fx uses a legacy PCI topology with PCIe stuff just stuck on top of it (which works fine in my experience despite being a nonsensical layout) while Q35 simulates an actual PCIe layout. I find it a lot easier to just make a whole new VM and add in the missing pieces than to change the VM machine type. As you know, I have some similar problems that prevent me from running 6.10.3. I never tried this, but when I was doing a vbios dump with SpaceInvaderOne's guide today, someone used this option to get the dump script to work. Adding this to syslinux.cfg:

vfio-pci.disable_idle_d3=1

It probably won't work, but the option title is prescient sounding at least.
-
Well, I'm stumped on this. I tried blacklisting the amdgpu and radeon drivers, and binding to vfio, and neither helped. Then I rolled back to 6.9.2, used SpaceInvaderOne's guide to dump the vbios, confirmed it worked in 6.9.2, and updated. No difference. So it seems like I'm stuck with only OVMF VMs if I want to pass this card through in 6.10.3. But that breaks some other things with the VM, so I think I'm stuck on 6.9.2.