Everything posted by scorcho99

  1. If you're looking for a GUI to interact with VM snapshots today, you can use virt-manager with unraid. That's what I use and I like it a lot. Keep in mind you can't snapshot OVMF machines (that's not unraid's fault, it's an old bug/limitation in KVM that I'm surprised isn't fixed) and you'll need to change how the VMs are stored, because their states will be lost on reboot otherwise. (unraid essentially recreates its VMs from scratch every time it boots and has no system to maintain snapshot information.) Certainly would appreciate some tools for btrfs though; I have to go all command line with that.
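
     For the command-line side, a minimal sketch of the virsh equivalents, assuming a qcow2-backed SeaBIOS VM named "win10" (the name is just an example):

```bash
# create a named snapshot of the whole VM (disk, plus RAM state if running)
virsh snapshot-create-as win10 clean-install --description "before driver testing"
virsh snapshot-list win10                    # list existing snapshots
virsh snapshot-revert win10 clean-install    # roll back to it
```
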
  2. I put mine as "at array first start". I remember having a lot of problems with the chained directories and can't remember if I figured it out. I had to clean them all out before I got things appearing to work OK again, though. I think the symlinks were messed up when the script was run too many times, so it kept creating redundant ones. It's been too long since I messed with this, unfortunately. If I were to improve the script, I would make it better at checking whether the symlinks already exist.
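
     If anyone wants to patch it themselves, the guard I have in mind is just a bash existence check (the paths below are made up):

```bash
# only create the symlink if one isn't already there, to avoid stacking
# redundant links when the script runs more than once
target="/mnt/user/somedir"       # hypothetical real directory
link="/mnt/cache/chain/somedir"  # hypothetical symlink location
[ -L "$link" ] || ln -s "$target" "$link"
```
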
  3. I would like to have wireguard set up so that certain VMs (and maybe some dockers, but I have no use case for that right now) connect to the internet ONLY through a VPN like Mullvad, while other VMs and existing dockers connect without the VPN. I read the thread, but I was a little confused about whether that was possible by the end. Basically I want to offload VPN connections to a single configuration location instead of having to set it up on every potential client.
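
     To be concrete, this is roughly the split I'm imagining, assuming the VPN-bound VMs sit on their own bridge (br1 here is hypothetical) and wg0 is a working Mullvad interface; I haven't verified this on unraid:

```bash
# send anything arriving from the VPN-only bridge out through wireguard,
# leaving the main routing table (and every other VM/docker) untouched
ip route add default dev wg0 table 200   # VPN-only routing table
ip rule add iif br1 lookup 200           # traffic entering from br1 uses it
# note: the wg0 peer needs AllowedIPs = 0.0.0.0/0 for this to carry all traffic
```
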
  4. Thanks, I have it working now. Is there a simple way to constrain the server to only my local network? I'm just playing around with it at this point. Perhaps it's fine, but I noticed the docker was aware of the WAN IP on the status page. Edit: Nevermind again. I think I'd have to set up my router to forward ports for this to work, so nothing to do.
  5. Hello, I get an error that the java path is missing when I try to start. I tried changing it from 8 (what it starts with) to 16, but it didn't make any difference. This is my only docker. The docker is installed on an unassigned devices disk, but I'm not sure if that matters. Attached is a log. Thanks. Edit: I see there is a post at the top that probably covers this... log.txt
  6. @jwoolen I appreciate this thread since it seems like it's the first info I've seen about the IOMMU groups on Rocket Lake. Just to confirm, you do not have ACS override options enabled, is that correct? (In other words, these are the true groups?) If so, then Intel has finally fixed the broken ACS separation on the root ports. While Rocket Lake has gotten a lukewarm response from a lot of people, this and the increased lanes for the DMI and m.2 slots make it a much improved platform for passthrough.
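
     For anyone who wants to compare groups on their own board, the standard sysfs walk works from the unraid console:

```bash
# print each device with its IOMMU group number (run with ACS override off
# to see the true hardware grouping)
for d in /sys/kernel/iommu_groups/*/devices/*; do
    g=${d#/sys/kernel/iommu_groups/}; g=${g%%/*}
    echo "group $g: $(lspci -nns "${d##*/}")"
done
```
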
  7. I don't know anything about this thread but was pinged because of my gvt-g thread. It's also been a while since I did that. "All" I had to do to get unraid to use gvt-g was rebuild the kernel with the gvt-g flags turned on, as they were disabled in the default build. The qemu version was fine as-is IIRC, although it was compiled without OpenGL support (not surprising), and OpenGL is required to allow the gvt-g slices to render to a dma buffer, which many people make use of with gvt-g. It still had use even without that feature.
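
     If memory serves, the flags in question are the upstream i915 GVT-g options; the names below are from the kernel docs rather than my old build notes:

```bash
# verify a kernel build has GVT-g enabled: expect CONFIG_DRM_I915_GVT=y and
# CONFIG_DRM_I915_GVT_KVMGT=m (plus the VFIO mdev bits they depend on)
grep -E 'CONFIG_DRM_I915_GVT|CONFIG_VFIO_MDEV' /path/to/linux/.config
```
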
  8. This is a resolved post, but I appreciate the data and resolution information. My hardware behaved similarly: on my MSI x470 I noticed that when I used the 3rd slot for VGA passthrough, some of my SATA disks in the array dropped out, which sounds like the issue you were having as well. My theory was that the iommu was having some sort of resource conflict or misdirection with devices running off the chipset. The fact that I had to use the ACS override to force this configuration to work sort of leans that way too. I've always thought people were a bit eager to solve problems with the ACS override, since I've heard it has potential stability concerns. I only recall testing an older AMD GPU, and I believe all tests were with unraid 6.8. Since 3rd slot passthrough was just a test and not required for my use case, I kept using the board, and it has worked fine with dual GPUs passed through off the cpu slots. I'm still running the ACS override option to pass through USB cards; it only seemed like the GPU mucked things up.
  9. I'm trying to troubleshoot a piece of hardware I had working in passthrough in the past which doesn't seem to work now. But I can't find installers for download for the 6.4 series or 6.5, which are the versions I probably had it last working in. I tried moving an old installation backup to my new flash drive to test with, but since the backup is from a different flash drive, it refuses to give me a trial. I happened to have the 6.3.5 installer files saved, but was surprised to find that the older installers seem to be gone from all of the posts. Here's another question: if I have a working 6.3.5 trial install, can I just copy the bz files from the 6.4 install to get it up and running? I'm just trying to test one thing and move on.
  10. I like this idea. I don't even really see why btrfs needs to be involved, though. You just need the concept of a second simulated failed disk that holds the file that failed the hash check. That dynamix file integrity plugin could probably be linked up to that. Make sure you keep "write corrections to parity disk" off. I suppose you could do it now by temporarily unplugging the disk, but that is hardly seamless.
  11. There's a third party (asmedia) PCIe-to-PCI bridge listed in your first post. So the device is actually a legacy PCI device behind a converter chip, basically. Not a lot of people try to run legacy PCI stuff, but it does work sometimes at least. Maybe try binding the device to vfio at boot if it isn't already.
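
     A sketch of the boot-time binding I mean (the ID below is a placeholder; use whatever lspci reports for your card):

```bash
# find the vendor:device pair of the card behind the asmedia bridge
lspci -nn | grep -i asmedia
# then add vfio-pci.ids=<vendor>:<device> (e.g. vfio-pci.ids=1b21:1080, hypothetical)
# to the append line in /boot/syslinux/syslinux.cfg and reboot
```
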
  12. No. You'd have to handle it in the guest somehow in that case, or perhaps some drive imaging software could be used while the VM was offline. It's the main reason I don't use direct device drive passthrough personally.
  13. @SpaceInvaderOne This is an old necro thread, so this might be old news. I run a virtual sound device on my unraid server (ac97) and use it fine through virt-manager with spice to view. I found your post kind of amusing since I used your guide to set up the virt-manager link. Maybe the unraid devs added something since your post that made this possible. If that hadn't worked, I was probably going to use scream audio or some other network sound device, but it would have been a lot clunkier of a workflow.
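
     For reference, the same device can be added without the GUI; a rough sketch ("htpc" is a stand-in VM name, and I set mine up through virt-manager rather than this way):

```bash
# write the device XML and attach it to the VM's persistent definition;
# it takes effect the next time the VM starts
cat > /tmp/ac97.xml <<'EOF'
<sound model='ac97'/>
EOF
virsh attach-device htpc /tmp/ac97.xml --config
```
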
  14. Try installing the adapter+card in different PCIe slots. I've noticed IRQ sharing actually comes into play with legacy PCI devices and passthrough. It's also possible this device is simply not going to get along with PCI passthrough; not all of them do. I think there is also an option in the VM settings for "allow unsafe interrupts". I've never used it and am not sure what it does, but it might be worth a shot.
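
     If you want to try that toggle outside the GUI, it maps to a vfio module parameter (I haven't used it myself, so treat this as a sketch):

```bash
# flip the "unsafe interrupts" switch at runtime for a quick test
echo 1 > /sys/module/vfio_iommu_type1/parameters/allow_unsafe_interrupts
# or persistently, add vfio_iommu_type1.allow_unsafe_interrupts=1
# to the syslinux.cfg append line
```
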
  15. Wish they made some of those new cases in white. I'm not interested in building in any more black hole cases; can't see anything.
  16. @meep I came to the same conclusion on this as you did. The price and availability of working 4-port cards is kind of wonky, so rolling your own has some advantages. I use a 4-port mining riser which I bought for ~$15; they're pretty easy to find all over. I seem to have better luck with ones using the asmedia chipset, which is what most of them use. Advantages are easy availability and flexibility: you can mix and match devices (including SATA controllers and different types of USB controllers, even video cards). Also, you can install 4-port cards instead of needing to use USB hubs; most 4-controller cards only give you one port per controller. Disadvantages are that if you're buying all the controller cards from scratch, it's not much cheaper than the all-in-one card, and it takes up a lot of space and creates a wiring mess, so in many cases you'll need a custom case or a case with a lot of expansion slots. And when using USB 3.0 you have a bandwidth bottleneck: I've never seen an (affordable) mining riser that does a 4x PCIe to four 1x PCIe split; they're all 1x to four 1x. The bottleneck doesn't really matter to me since I really only use USB 2.0 and 1.1 devices.
  17. So to back up my VMs on my cache drive, I have a userscript that runs. It shuts down all running VMs, creates a read-only btrfs snapshot of the VM directory, and then boots them all back up, for minimal downtime. Then the backup runs off the snapshot. This has worked fine for a few years. A second share appears in the share list (VMs_backup in my case) whenever the snapshot is active; I can't remember if I actively set this up or it happened automatically. Anyway, I suddenly cannot access this share from anywhere. The base share (VMs) still works fine; it is only the snapshot share that I cannot access. I confirmed that the snapshots are still happening, and that the folder is accessible on the server and has the right contents (through the command line). The share permissions in unraid seem OK. The only new thing I did was create a new vdisk folder, copy in a vdisk from another location, and make a new VM. This makes me think it might be a permissions thing. Is there a requirement here? The thing that is weird is I only added things beneath the share. I did notice some of the folders have my user account and not nobody or root as owner.
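
     For context, the script is roughly this shape (paths, names, and the crude sleep are simplified from memory; the VMs directory must itself be a btrfs subvolume for the snapshot to work):

```bash
#!/bin/bash
# shut down running VMs, snapshot the VM subvolume read-only, start them again
vms=$(virsh list --name)
for vm in $vms; do virsh shutdown "$vm"; done
sleep 60    # crude; polling "virsh list" until empty would be more robust
btrfs subvolume delete /mnt/cache/VMs_backup 2>/dev/null   # drop the old snapshot
btrfs subvolume snapshot -r /mnt/cache/VMs /mnt/cache/VMs_backup
for vm in $vms; do virsh start "$vm"; done
# the backup job then reads from /mnt/cache/VMs_backup
```
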
  18. The secondary slot is going to be a chipset-driven PCIe 2.0 x4 on that board. A 50% performance loss still seems a bit extreme in that configuration, but there would be a hit. Regarding your option 1: try to find an option to disable CSM in the bios. On Ryzen at least, that seems to swap the primary boot GPU to chipset ports.
  19. Anyone get this working with a more recent version of virt-manager? With mint 20 there is no method drop-down (~7:35 in the video) to pick TCP. Nevermind, I solved this. Custom "URIs" are created for this behind the scenes. I created a connection on my old rig and copied the text. Example: qemu+tcp://user@hostname/system
  20. Anyone ever solve this? I'm having the same issue with linux mint 20. Nevermind, I solved this. Custom "URIs" are created for this behind the scenes. I created a connection on my old rig and copied the text. Example: qemu+tcp://user@hostname/system
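
     In other words, once you have the URI, you can skip the wizard entirely ("root" and "tower" below are stand-ins for your unraid login and hostname):

```bash
# open virt-manager (or virsh) directly against the remote libvirt over TCP
virt-manager --connect qemu+tcp://root@tower/system
virsh --connect qemu+tcp://root@tower/system list --all
```
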
  21. Good tip on the uefi shell. I think I actually had to do that when I installed the modded bios, to disable write protection or something. I didn't get that far, but I did wonder at the time if I could have just used the shell to change the option. I don't know much about bios modding (more now than I did then). A guy on the win-raid forums created the bios mod for me, and I sent him an amazon card as thanks. Anyway, I ultimately decided against using GVT-g, but it was a fun experiment...it required me to figure out a lot of pieces I didn't know anything about. It was a death of a thousand cuts for my final purpose, though. In addition to having pretty weak performance potential, it didn't support old Windows OSes; without OpenGL in unraid I couldn't get it to display in spice; it didn't support saving VM states; and I believe I had a fair amount of trouble with it in linux guests? I can't remember. And there is (currently) no (straightforward) way to direct output to another display, which I wanted. It just wasn't a great fit for any of the use cases I'd dreamed up for it. Maybe after it bakes a little longer. I ended up selling my Intel hardware and went with a Ryzen setup for more cores. I decided that a mining riser with a bunch of zerocore-supporting AMD graphics cards was more flexible, even if it is a big cabling mess. The only thing I feel like I'm missing out on is quicksync, which is pretty good.
  22. Is there a reason you have all the acs override options on? I'm mostly asking because I don't know if it's required for this chipset or not, since it's so new!
  23. I tried yesterday to run 3dmark firestrike for the first time ever, on a z77 and rx 470 with a windows 8.1 VM, seabios-based. I got crashing on the systeminfo scan as well. So as a sanity test, I installed Windows 10 (1709?) fresh onto bare metal and tried it today, and the benchmark ran without any apparent issues. I'm not sure of the cause, or if it ever worked. I was running unraid 6.8.3; it's just on my test rig. I'll have to try and isolate the issue, I guess. It seemed pretty stable before that; I was running the heaven benchmark earlier. The system was having a lot of issues once I tried 3dmark, though; not sure if all of them were related. I assume you were running Windows 10 in your VM? I know there is an MSRs fix that you sometimes have to do. I've never done it myself, but that seems like a possible culprit to me.
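
     As far as I know, the MSRs fix in question is KVM's ignore_msrs switch (again, I haven't applied it myself):

```bash
# tell KVM to ignore unhandled model-specific register accesses instead of
# faulting the guest; reported to stop the systeminfo scan crash
echo 1 > /sys/module/kvm/parameters/ignore_msrs
# or persistently: add kvm.ignore_msrs=1 to the syslinux.cfg append line
```
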
  24. Are you doing primary GPU passthrough? (That is, you only have one GPU.) If so, have you tried adding video=efifb:off,video=vesafb:off to your syslinux.cfg? I had to do that to get primary passthrough working with my nvidia gpus. Note you will lose unraid console video out early in the boot process.
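
     Going from memory of the stock boot entry, this is roughly where it lands (check your own file before editing):

```bash
# inspect the current boot entries first
grep -n -A3 '^label' /boot/syslinux/syslinux.cfg
# the edited line should end up looking something like:
#   append video=efifb:off,video=vesafb:off initrd=/bzroot
```
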
  25. I don't know much about Volumio (never heard of it before now), but it sounds like it's debian-based. I run GTX950 and GTX1060 3GB cards in linux mint 18 VMs for my HTPC/couch gaming VM, and they run fine. They are seabios 440bx-based VMs. For sound, I needed to use the MSI fix (it's similar to the one used for Windows); otherwise the audio was garbled and the VM ran awful. Other than that, I didn't do anything special.
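
     If it helps, the usual way to apply that MSI fix inside a debian-based guest is the snd-hda-intel module option; a sketch of what I mean (the option itself is real, but whether Volumio handles audio the same way is an assumption):

```bash
# inside the guest: force MSI for the HDA audio driver, then rebuild the initramfs
echo "options snd-hda-intel enable_msi=1" | sudo tee /etc/modprobe.d/snd-hda-intel.conf
sudo update-initramfs -u && sudo reboot
```
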