scorcho99

Posts posted by scorcho99

  1. What I'd like to do is prefer using disks that already have share data on them until they are full and only then start writing to extra disks. Basically I want to limit unnecessary spin ups.

     

    I'm looking at the docs and I don't think I can do this. I can set 'include' to only use the current disks, but that won't automatically use others when the share runs out of space.

     

    I thought the fill-up allocation method might do this, but it sounds like that just goes for the lowest-numbered disk with free space.

  2. On 9/21/2023 at 11:08 AM, cyp909 said:

    Sorry to bother you after one year, but I would like to ask whether you have dealt with the issue, since I am facing the same issue.

     

    I am still on 6.9.2 at the moment. I plan to convert the VM to UEFI. There is a janky way to trick libvirt into taking snapshots with UEFI, so I guess I'm stuck hacking it together, since there seems to be no priority on fixing this. I'm not sure this solution will work for everyone.

     

    I might also test using an nvidia card in place of the radeon.

  3. This seems like it should be possible, but I don't know how to do it, since I don't know much about syslinux or how unraid implements it.

     

    I'd like to add some utilities, including a different version of memtest that auto reboots on successful completion, to the unraid boot menu. Can I already boot ISO files directly? Or how do I generate a syslinux OS image that I can add to the boot menu? Has anyone done something similar?
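
    As a sketch of what I have in mind, a standalone memtest binary can apparently be added as its own label in syslinux.cfg (unraid's stock entry for Memtest86+ works that way), and ISOs can supposedly be chained through syslinux's memdisk module. The label names and /custom paths below are just made up, and I've read memdisk's ISO support is hit-or-miss:

    label memtest-auto
      menu label Memtest (auto-reboot build)
      kernel /custom/memtest-auto
    label utility-iso
      menu label Utility ISO
      kernel /memdisk
      initrd /custom/utility.iso
      append iso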

  4. So this seems to be working fine (so far), for my purposes at least. Since I was reusing a previously used unassigned devices disk, all I did was enable destructive mode, delete the partitions off it, and then create the new partition and filesystem like so:

     

    NOTE: * is the unassigned device name, so this will be different on your system. The mklabel/mkpart/quit lines are typed at the (parted) prompt; the question lines are parted's interactive prompts with my answers.

    parted /dev/sd*
    mklabel msdos
    mkpart
    Partition type? primary/extended? primary
    File system type? [ext2]? ext4
    Start? 0%
    End? 100%
    quit
    mkfs.ext4 /dev/sd*1

     

    At this point unassigned devices picked up that there was a partition and I just clicked the mount button.
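
    For reference, I believe the same thing can be done non-interactively with parted's script mode, something like this (again, * stands in for the device letter):

    parted -s /dev/sd* mklabel msdos mkpart primary ext4 0% 100%
    mkfs.ext4 /dev/sd*1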

  5. 8 minutes ago, dlandon said:

    It has never been an option to format ext4 disks, but UD can mount them.  Reiserfs was removed some time ago as a format option because it is deprecated.

     

    Sorry, I guess the post here made it sound to me like the formatting option was removed:

     

    Regardless, can I format them with another method (either command line or I guess I could pass the whole disk to a VM or even remove it temporarily) and then have them automount as usual?

     

    Thanks

  6. On 9/21/2021 at 1:40 AM, ghost82 said:

    When you have issues with gpu passthrough, most of the time we can be of some help if you attach the unraid diagnostics file and the output of this terminal command:

    cat /proc/iomem

     

    Common errors are not splitting iommu groups, memory not mapped properly, not passing proper gpu components to the vm, wrong target topology.

     

    This is an old post I know...

     

    I have this problem with the pegged core with my secondary GPU, a radeon r7 360. The funny thing is I always have that problem on 6.10 and later, so I stayed on 6.9. But I recently noticed that I actually sometimes have it on 6.9 as well. The sporadic nature, and the fact that it seems to never happen on first boot (though once it does happen, a VM that uses the card might still boot OK), made me think it is some kind of resource leak or something. Can you explain the 'memory not mapped properly' issue I might encounter and how I would troubleshoot it? I have plenty of free memory, but I could see some sort of fragmentation issue being involved. I'm pretty stumped on this one though.
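
    In case it helps, checking the card's entries in /proc/iomem (as suggested above) looks roughly like this; the PCI address 03:00.0 is just a placeholder for wherever the r7 360 sits on my board:

    lspci -nn | grep -i -E 'vga|radeon'   # find the card's PCI address and vendor:device ids
    grep -i -A3 '03:00.0' /proc/iomem     # show the memory ranges assigned to it and which driver claimed them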

  7. Thanks @ghost82, that is kind of a headache to parse, but I feel I understand the structure a bit better now.

     

    I actually think, based on reading this, that provided the VM is off (nothing is mapped to memory in that case) there is no negative effect, as long as I restore pflash before startup. (I never tried actually booting with the 'rom' value set. Maybe it works fine? It seems like an invalid, or at least not covered, config based on the above.) That is acceptable for my use; I generally do all my snapshots with the VM off anyway to avoid the restores being in a crashed state.
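
    For anyone following along, the hack I'm describing is roughly this (the VM and snapshot names are just examples, and I only do it with the VM shut down):

    virsh dumpxml MyVM > MyVM.xml                    # keep a copy of the original pflash definition
    sed "s/type='pflash'/type='rom'/" MyVM.xml > MyVM-rom.xml
    virsh define MyVM-rom.xml                        # libvirt will now allow internal snapshots
    virsh snapshot-create-as MyVM clean-shutdown     # take the snapshot with the VM off
    virsh define MyVM.xml                            # restore the pflash loader before booting again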

  8. @SimonF Did you ever find any issues with this backdoor way of allowing UEFI VMs to take internal snapshots? I gather it did not work with TPM Windows 11, but I don't need that, just windows 10 and linux VMs. I don't expect it will save and restore the virtual bios settings (which I think was the main reason this was initially disabled, ugh), but I don't care much about those.

     

    It is very frustrating that they didn't leave an override option for this.

  9. This would be a nice feature, but I just link up to unraid with virt-manager (SpaceInvaderOne's tutorial) and manage snapshots with that. You could also do it from the command line if you want to do it the hard way.
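
    If anyone wants the command-line route, it looks roughly like this; the hostname "tower" and VM name "MyVM" are placeholders from my setup:

    virt-manager -c qemu+ssh://root@tower/system     # point virt-manager at the unraid host
    virsh -c qemu+ssh://root@tower/system snapshot-create-as MyVM pre-update
    virsh -c qemu+ssh://root@tower/system snapshot-list MyVM
    virsh -c qemu+ssh://root@tower/system snapshot-revert MyVM pre-update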

  10. On 8/26/2022 at 11:03 AM, Squid said:

    /etc/libvirt/qemu

    There are indeed VM xml definitions here, but when I modify them the changes don't seem to apply to the running VMs. Is there a way to force the changes to be applied?

     

    Edit: I ended up just using virsh define on the modified xml file. Going forward I think I'm just going to virsh dumpxml, modify that xml, and then use virsh define to commit the changes back.
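
    In other words, something like this (the VM name is a placeholder, and the changes only take effect the next time the VM starts, not on a running VM):

    virsh dumpxml MyVM > /tmp/MyVM.xml    # export the current definition
    vi /tmp/MyVM.xml                      # make the edits
    virsh define /tmp/MyVM.xml            # commit them back to libvirt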

  11. Doesn't rsync do delta copies as an option? Are you concerned about downtime of the VM (it's shut down while the giant file is copying) or the bandwidth needed?

     

    vdisks can be mounted directly; I've done it with nbd and qcow2 disks anyway. I'm not sure about concurrent access with a single read-only mount. I want to say I've heard of that as something you can do with VMware vdisks, but it's only a vague memory.
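
    This is roughly how I've mounted a qcow2 vdisk with nbd (the paths are examples from my setup, and I only do it with the VM shut down):

    modprobe nbd max_part=8
    qemu-nbd --connect=/dev/nbd0 --read-only /mnt/user/domains/MyVM/vdisk1.qcow2
    mkdir -p /mnt/tmpvdisk
    mount -o ro /dev/nbd0p1 /mnt/tmpvdisk
    # ... copy files out ...
    umount /mnt/tmpvdisk
    qemu-nbd --disconnect /dev/nbd0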

     

    For my backups I shut down all my VMs, create a read-only btrfs snapshot, and then immediately start them back up. Then I back up from the snapshot. The VMs are only down for a few minutes and can change and run while the backup is slowly performed against a snapshot of them in an off state. Maybe something like that?
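
    Roughly like this, assuming the domains share is a btrfs subvolume on the cache pool (the paths, VM name, and backup destination are just examples from my setup):

    virsh shutdown MyVM                                             # cleanly stop the VMs
    btrfs subvolume snapshot -r /mnt/cache/domains /mnt/cache/domains_snap
    virsh start MyVM                                                # VMs come right back up
    rsync -a /mnt/cache/domains_snap/ /mnt/backup/vms/              # slow copy runs against the snapshot
    btrfs subvolume delete /mnt/cache/domains_snap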

  12. 16 hours ago, PeteyBoPetey said:

    I'd try Q35 instead of i440fx only I don't know how to configure the bus/slots

    Switching a VM between these is always a mess, since i440fx uses a legacy PCI topology with PCIe stuff just stuck on top of it (which works fine in my experience, despite being a nonsensical layout), while Q35 simulates an actual PCIe layout. I find it a lot easier to just make a whole new VM and add in the missing pieces than to change the VM's machine type.

     

    As you know, I have some similar problems that prevent me from running 6.10.3. I never tried this, but when I was doing a vbios dump with space invader one's guide today, someone used this option to get the dump script to work. Adding this to syslinux.cfg: vfio-pci.disable_idle_d3=1

     

    Probably won't work but the option title is prescient sounding at least.
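
    For anyone trying it, the parameter goes on the append line of the boot entry in syslinux.cfg on the flash drive, something like this (based on the stock unraid entry, so check it against your own file):

    label Unraid OS
      menu default
      kernel /bzimage
      append vfio-pci.disable_idle_d3=1 initrd=/bzroot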

  13. Well, I'm stumped on this. I tried blacklisting the amdgpu and radeon drivers and binding the card to vfio, and neither helped. Then I rolled back to 6.9.2, used space invader one's guide to dump the vbios, confirmed it worked in 6.9.2, and updated again. No difference. So it seems like I'm stuck with only OVMF VMs if I want to pass through this card in 6.10.3. But that breaks some other things with the VM, so I think I'm stuck on 6.9.2.
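
    For anyone curious, the blacklisting and vfio binding can both be done from the same syslinux.cfg append line mentioned in my previous post, roughly like this (the 1002:xxxx id is a placeholder; get your card's real vendor:device ids from lspci -nn):

    append modprobe.blacklist=amdgpu,radeon vfio-pci.ids=1002:xxxx initrd=/bzroot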