scorcho99

Members
  • Content Count

    137

Community Reputation

15 Good

About scorcho99

  • Rank
    Member



  1. Thanks, I have it working now. Is there a simple way to constrain the server to only my local network? I'm just playing around with it at this point. Perhaps it's fine, but I noticed the docker was aware of the WAN IP on the status page. Edit: Never mind again. I think I'd have to set up my router to forward ports for this to work, so nothing to do.
  2. Hello, I get an error that the Java path is missing when I try to start. I tried changing it from 8 (what it starts with) to 16, but it didn't make any difference. This is my only docker. The docker is installed on an unassigned devices disk, but I'm not sure if that matters. A log is attached. Thanks. Edit: I see there is a post at the top that probably covers this... log.txt
  3. @jwoolen I appreciate this thread since it seems like it's the first info I've seen about the IOMMU groups on Rocket Lake. Just to confirm, you do not have ACS override options enabled, is that correct? (In other words, these are the true groups?) If so, then Intel has finally fixed the broken ACS separation on the root ports. While Rocket Lake has gotten a lukewarm response from a lot of people, this and the increased lanes for the DMI and M.2 slots make it a much improved platform for passthrough.
  4. I don't know anything about this thread but was pinged because of my GVT-g thread. It's also been a while since I did that. "All" I had to do to get unraid to use GVT-g was rebuild the kernel with the GVT-g flags turned on, as they were disabled in the default build. The qemu version was fine as-is IIRC, although it was compiled without OpenGL support (not surprising), and OpenGL is required to allow the GVT-g slices to render to a DMA buffer, which many people make use of with GVT-g. It still had uses even without that feature.
  5. This is a resolved post, but I appreciate the data and resolution information. My hardware behaved similarly: on my MSI X470 board I noticed that when I used the 3rd slot for VGA passthrough, some of my SATA disks in the array dropped out, which sounds like the issue you were having as well. My theory was that the IOMMU was having some sort of resource conflict or misdirection with devices running off the chipset. The fact that I had to use the ACS override to force this configuration to work sort of leans that way too. I've always thought people were a bit eager to solve the proble
  6. I'm trying to troubleshoot a piece of hardware I had working in passthrough in the past which doesn't seem to work now. But I can't find installers for download for the 6.4 series or 6.5. Those are the versions I probably last had it working in. I tried moving an old installation backup to my new flash drive to test with, but since the backup is from a different flash drive it refuses to give me a trial. I happened to have the 6.3.5 installer files saved but was surprised to find that the older installers seem to be gone from any of the posts. Here's another
  7. I like this idea. I don't even really see why btrfs needs to be involved, though. You just need the concept of a second simulated failed disk that holds the file that failed the hash check. That dynamix file integrity plugin could probably be linked up to it. Make sure you keep "write corrections to parity disk" off. I suppose you could do it now by temporarily unplugging the disk, but that is hardly seamless.
  8. There's a third-party (ASMedia) PCIe-to-PCI bridge listed in your first post. So the device is actually a legacy PCI device with a converter chip, basically. Not a lot of people try to run legacy PCI stuff, but it does work sometimes at least. Maybe try binding the device to vfio at boot if it isn't already.
  9. No. You'd have to handle it in the guest somehow in that case, or perhaps some drive-imaging software could be used while the VM was offline. It's the main reason I don't use direct device drive passthrough personally.
  10. @SpaceInvaderOne This is an old necro thread, so this might be old news. I run a virtual sound device on my unraid server (ac97) and use it fine through virt-manager with SPICE as the viewer. I found your post kind of amusing since I used your guide to set up the virt-manager link. Maybe the unraid devs added something since your post that made this possible. If that hadn't worked, I was probably going to use Scream audio or some other network sound device, but that would be a lot clunkier of a workflow.
  11. Try installing the adapter+card in different PCIe slots. I've noticed IRQ sharing actually comes into play with legacy PCI devices and passthrough. It's also possible this device is simply not going to get along with PCI passthrough; not all of them do. I think there is also an option in the VM settings for "allow unsafe interrupts". I've never used it and am not sure what it does, but it might be worth a shot.
  12. Wish they made some of those new cases in white. I'm not interested in building in any more black-hole cases; can't see anything.
  13. @meep I came to the same conclusion on this as you did. The price and availability of working 4-port cards is kind of wonky, so rolling your own has some advantages. I use a 4-port mining riser which I bought for ~$15; they're pretty easy to find all over. I seem to have better luck with the ones using the ASMedia chipset, which is what most of them use. The advantages are easier availability and flexibility: you can mix and match devices (including SATA controllers and different types of USB controllers, even video cards). Also you can install 4 port cards instead of need
  14. So to back up the VMs on my cache drive, I have a userscript that runs. It shuts down all running VMs, creates a read-only btrfs snapshot of the VM directory, and then boots them all back up for minimal downtime. Then the backup runs off the snapshot. This has worked fine for a few years. A second share appears in the share list (VMs_backup in my case) whenever the snapshot is active; I can't remember if I actively set this up or it happened automatically. Anyway, I suddenly cannot access this share from anywhere. The base share (VMs) still works fine, it is only the snapshot share th
  15. The secondary slot is going to be a chipset-driven PCIe 2.0 x4 on that board. A 50% performance loss still seems a bit extreme in that configuration, but there would be a hit. Regarding your option 1: try to find an option to disable CSM in the BIOS. On Ryzen at least, that seems to swap the primary boot GPU to the chipset ports.
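On the "binding the device to vfio at boot" suggestion in post 8: one common way to do this is the classic kernel-command-line approach. A minimal sketch, assuming `1234:abcd` is a placeholder vendor:device ID (not the real one — read it off your own hardware first):

```shell
# Find the vendor:device ID of the card in [square brackets]:
#   lspci -nn
# Then add vfio-pci.ids=<that id> to the kernel append line in
# /boot/syslinux/syslinux.cfg so vfio-pci claims the device at boot,
# before any host driver can grab it, e.g.:
#
#   append vfio-pci.ids=1234:abcd initrd=/bzroot
```

Newer unraid releases also expose a "bind to vfio at boot" checkbox under Tools > System Devices, which achieves the same thing without hand-editing the append line.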
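On the "allow unsafe interrupts" option mentioned in post 11: as I understand it (hedged — I haven't used it either), it corresponds to a vfio module parameter that relaxes the interrupt-remapping requirement on platforms whose IOMMU can't guarantee interrupt isolation. A sketch of the two usual ways to set it:

```shell
# "Unsafe" because without interrupt remapping a malicious guest could
# in principle inject spoofed interrupts on the host. Use with caution.

# One-off, at module load time:
modprobe vfio_iommu_type1 allow_unsafe_interrupts=1

# Or persistently, via a modprobe config file:
echo "options vfio_iommu_type1 allow_unsafe_interrupts=1" > /etc/modprobe.d/vfio.conf
```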
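The snapshot-then-backup workflow described in post 14 can be sketched roughly like this. The VM names, paths, and rsync destination here are assumptions for illustration, not the actual userscript; with `DRY_RUN=1` (the default) each command is only printed so the sequence can be inspected safely:

```shell
#!/bin/bash
# Sketch: shut down VMs, snapshot the vdisk subvolume read-only,
# restart the VMs, then back up from the (quiescent) snapshot.
DRY_RUN=${DRY_RUN:-1}
run() { [ "$DRY_RUN" = 1 ] && echo "$*" || "$@"; }

VMS="win10 ubuntu"                  # assumed VM names
VM_DIR=/mnt/cache/VMs               # assumed btrfs subvolume holding the vdisks
SNAP=/mnt/cache/VMs_backup          # read-only snapshot target

for vm in $VMS; do run virsh shutdown "$vm"; done   # clean guest shutdown
run btrfs subvolume snapshot -r "$VM_DIR" "$SNAP"   # instant read-only snapshot
for vm in $VMS; do run virsh start "$vm"; done      # minimal downtime
run rsync -a "$SNAP"/ /mnt/disks/backup/VMs/        # backup runs off the snapshot
run btrfs subvolume delete "$SNAP"                  # clean up when done
```

The point of the read-only snapshot is that the VMs only stay down for the instant the snapshot takes, while the slow copy happens afterwards against consistent, frozen vdisk files.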