meep

Everything posted by meep

  1. Hi again! Try to make a change using the visual editor, say, adjust the memory assignment in the VM, or toggle a CPU. Click update and see if that sticks. If it does, try the XML edit again. You need to be careful in editing the XML that you don't inadvertently cause an error in the file, like accidentally leaving out a quotation mark, or deleting an angle bracket. Try making the XML changes one at a time to see if you can narrow down which one is the problem (see the sketch below for the kind of elements I mean). Also, when you get it working, get into the habit of saving datestamped known-working versions of the XML that you can revert to in case things go wrong with further edits. Plus, do post diagnostics (Tools -> Diagnostics).
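For illustration, here's roughly what the memory and CPU assignment elements look like in a VM's XML. This is only a sketch; the values below are examples, not a recommendation for your system:

    <memory unit='KiB'>16777216</memory>
    <currentMemory unit='KiB'>16777216</currentMemory>
    <vcpu placement='static'>8</vcpu>

Drop a quotation mark or an angle bracket anywhere in there and the whole config will refuse to save, which is why changing one thing at a time, then saving and re-checking, narrows problems down quickly.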
  2. Also, does your VM boot OK when using VNC, with no GPU passthrough? It might be worth your while stepping back to VNC, solving the USB issue, then tackling the GPU.
  3. A little perseverance and you'll overcome these issues. Once you develop an understanding of what's going on, you'll be spinning up VMs like a pro. For the GPU issue, what machine type and BIOS do you have configured for the VM? SeaBIOS usually works best for GPU passthrough, and switching between Q35 and i440fx machine types often solves a lot of problems (see the example below for where these are set in the XML).
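The machine type and BIOS live in the <os> block of the VM XML. A rough sketch of the two variants follows; the machine version numbers and the OVMF path are examples and will vary with your unRAID release:

    <!-- SeaBIOS + i440fx -->
    <os>
      <type arch='x86_64' machine='pc-i440fx-5.1'>hvm</type>
    </os>

    <!-- OVMF + Q35 -->
    <os>
      <type arch='x86_64' machine='pc-q35-5.1'>hvm</type>
      <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
    </os>

Bear in mind that switching BIOS type after the guest OS is installed usually means a reinstall, so it's easiest to experiment with a fresh VM.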
  4. Choose one of the options, reboot and see what impact it has on your setup. We cannot tell you which to choose as the result will be different for various motherboards.
  5. To pass through any PCIe device, like the USB adapter, it must either be in its own IOMMU group, or you must pass through all the devices in the group. It's not clear what you mean when you say Virt Manager is not in your apps. Did you install it from Community Applications? We'd need to see your IOMMU groupings to help on this, but you should certainly see if there are IOMMU settings in your BIOS, and investigate the ACS override settings in unRAID. For the GPU issue, it's unclear what machine type and BIOS you have set for the VM. If you don't pass a GPU ROM file, what happens? (See below for how the ROM file shows up in the XML.)
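For reference, when a ROM file is being passed, the GPU's <hostdev> entry in the XML contains a <rom> line along these lines. The bus/slot numbers and the file path here are placeholders for illustration only; yours will differ:

    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0x0a' slot='0x00' function='0x0'/>
      </source>
      <rom file='/mnt/user/isos/vbios/mygpu.rom'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </hostdev>

Removing just the <rom> line is an easy way to test booting without the ROM file.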
  6. Great to hear it’s working for you. Thanks are enough, just pay it forward sometime!
  7. Hey @Masterbob221 Good to chat with you just now. For others, to go over what we discussed: 1. You set up a new VM using SeaBIOS. 2. You made edits to the XML to ensure that the GPU video & audio devices are not split across two virtual slots (don't forget the 'multifunction' tag - see the sketch below). If, when your VM comes up, you still don't have a picture, you can try following SpaceInvaderOne's more recent videos on obtaining, editing and passing through a ROM BIOS file for your specific card. If you do get a picture, but have issues installing nVidia drivers or see a -43 error in Device Manager, you can add the configuration to your XML that I outline here. Good luck!
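To illustrate the multifunction edit: the GPU's video and audio functions should end up on the same virtual slot, with the video function flagged multifunction='on' and the audio function sharing that slot as function 0x1. A sketch follows; the source addresses (bus 0x0a here) are placeholders - take yours from Tools -> System Devices:

    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0x0a' slot='0x00' function='0x0'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0' multifunction='on'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0x0a' slot='0x00' function='0x1'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x1'/>
    </hostdev>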
  8. When editing your VM, look at the top right of the page. See a toggle widget labeled 'form view'? Switch this to XML view. This shows the VM configuration data in a code (XML) view. Please paste that here. Please also paste the 'PCI Devices and IOMMU Groups' section when you navigate to Tools -> System Devices. The fact that Splashtop is not enabled when you boot with GPU only indicates that your system is not booting, but you cannot see why. My advice here would be to install the VIRT-Manager docker, add a QXL adapter alongside your GPU, edit the VM XML to ensure this new adapter is the first to init (see the sketch below) and observe the boot process in VIRT-Manager. However, the fact that you are not aware of what your VM XML is means that's quite a hill to climb! Let's start with the XML and PCI IOMMU device listing first and take it from there.
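For reference, a QXL display alongside the passed-through GPU looks something like this in the XML. This is only a sketch; unRAID normally generates these lines itself when you select VNC as a graphics card in the form view:

    <graphics type='vnc' port='-1' autoport='yes' listen='0.0.0.0'/>
    <video>
      <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1' primary='yes'/>
    </video>

Setting primary='yes' on the QXL model is one way to have it initialise first, so the boot process is visible in VIRT-Manager even if the physical GPU stays dark.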
  9. Hi there. To ensure you get help, you will need to provide more information, specifically:
     1. What have you done so far? What guides have you followed?
     2. What OS have you installed in the VM?
     3. Does the VM work OK with a VNC display?
     4. Share your VM XML.
     5. Share any details from the VM log file.
     6. What other devices are in your system? Is the GPU the only one?
     To get started with troubleshooting, I would suggest the following:
     1. Remove the GPU passthrough.
     2. Boot the VM using VNC.
     3. Open netplwiz (if Windows) and allow users to log on without entering a password. This will ensure your system boots to the desktop and does not get stuck at a logon prompt you cannot see.
     4. Install Splashtop Streamer in the VM.
     5. Install the Splashtop viewer on some other system.
     6. Reboot the VM, still in VNC mode, and ensure you can access it using Splashtop.
     7. Shut down the VM, and add back the GPU passthrough, ensuring you follow all advice about BIOS dumps, not splitting the video and audio devices, etc.
     Now when your system boots, even if your display is dark, you should be able to access the VM via Splashtop and see what's going on. I suspect you'll see an error -43 when you view the GPU in Device Manager (the usual XML workaround is sketched below). If the above does not work, you may need to become familiar with the virt-manager docker, which will allow finer control over VM configuration and troubleshooting.
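On the -43 error: for Nvidia consumer cards, the usual workaround is to hide the hypervisor from the guest in the <features> section of the VM XML. A minimal sketch, assuming your <features> block already exists (keep whatever other entries are in it; the key additions are the vendor_id and kvm hidden lines):

    <features>
      <acpi/>
      <apic/>
      <hyperv>
        <vendor_id state='on' value='none'/>
      </hyperv>
      <kvm>
        <hidden state='on'/>
      </kvm>
    </features>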
  10. Thanks for the kind words. Do click on some of the sponsors there - it all helps! I have a few final tweaks to make to my setup and then I'll do a detailed write-up, or maybe a video. Stay tuned! Though it's never actually completed; there's always something new to try out. You need to ensure your MB supports bifurcation. Not all do.
  11. One thing I don't like about the X570 platform is the low PCIe slot count. When you get into VMs, you'll find that you'll have much better performance when passing through hardware. If your motherboard has multiple USB controllers on board, that's a good start. If not, adding discrete USB adapters is the way to go. You can pass through a mouse and keyboard directly, of course, but once you start looking at, say, USB audio devices, you really need to attach these to a hardware device that is passed through to the VM. Worst case, for your two VMs, you'll need 2 GPUs and 2 PCIe USB adapters. That's 4 slots, without even thinking about any other cards you might need, for example HBAs if your storage expands in the future.
I think you're right to go for AMD, but do have a look at Threadripper if budget allows. There should be good value on 2nd generation. I always buy high-end but behind the curve, maximising performance but not at bleeding-edge prices, by looking towards previous-gen tech.
Whatever way you go, do verify that your motherboard has excellent support for IOMMU. By that, I mean discuss with other users on here whether your intended motherboard breaks out devices well. This is important for VMs and hardware passthrough. When passing through a hardware device, you must pass through all devices in the same IOMMU group. It's therefore important that your devices break into the smallest possible groups.
In terms of cases, it depends on the form factor you're after. I've just moved from a Thermaltake Core X9 monster to a rackmount server case. If I was looking for a tower today, I'd go for a Fractal Design Define 7 XL - amazing case with great flexibility and space for loads of hard drives (if you can find the caddies). On the other thread, I mentioned my blog, which you might find useful in your research.
  12. The 2 MacOS workstations have GPUs and USB adapters passed through. They are accessed via displays, mice & keyboards located throughout the house. One is connected with HDMI/USB cables, the other using HDBaseT over Cat5e. The utility VM is headless; I use Splashtop. For the Gaming/Movies VM, I have a direct connection to my HT receiver, but also use Parsec to stream games to the other VMs. The system is a Threadripper 2950X on a Taichi X399. You'll see more details on my blog. I'll have a look at your thread.
  13. MacOS Catalina 16GB - General workstation (Photoshop & Illustrator)
     MacOS Mojave 16GB - General workstation (some FCPX, some coding, but usually just lots of apps)
     Windows 10 Utility 12GB - Home control incl. Blue Iris security
     Windows 10 Entertainment 16GB - Just games and home theatre
With 64GB, I had to pare back RAM on those systems to allow for a stack of Dockers and, of course, space for unRaid to run. The extra 32GB just gives the system room to breathe and allows me to maximise it. For example, it also allows me to spin up additional VMs as testers or trials without disturbing anyone else. The above VMs are in pretty much constant use by one of the family at any given time, and announcing I need to shut one or two down for a bit of experimentation would not go down well. There are really no other computer systems in the house - everything is consolidated into my unRaid server, so it's something of a workhorse. I've been using Macs since a venerable LCII on OS6.x and have always found them to run better with more RAM, particularly for the visual applications, or when using lots of apps simultaneously. For the Windows VM use cases, I find the same. You can never have enough RAM. I'll have it etched on my headstone.
  14. Don't be alarmed if your parity build is taking a long time. 10s of hours is normal.
  15. Installing directly into unRaid is likely a bad idea, as unRaid is memory-resident and any changes will be wiped on restart, unless you set up scripts to reinstall at boot time. If all of your tools are available as dockers, that is one way to go. Dockers tend to be less resource-intensive than VMs, if that's a concern. I like VMs for no reason other than I'm old school. But also, I don't have a separate standalone workstation - ALL my systems are VMs in unRaid. If you have questions about specific dockers, the best place to ask is on the specific docker support thread - you'll find a link to this when looking up the docker in Community Applications. Or just go ahead and install the docker and explore. They are easy to uninstall if not needed. Finally, be aware of unRaid NerdPack, which installs a lot of useful tools and widgets into the OS itself. Might come in handy.
  16. Hi @ChoKoBo It doesn't look like you have any actual iommu groups there. Is there an option to switch this on / configure it in your bios?
  17. Hi. That's a massive coincidence, I have also recently retired a Core X9. It's a massive case but, really, for its size, has very very poor support for multiple disks. Happy to try to answer your questions:
HBA = Host Bus Adapter. There are many types. A SAS HBA is just one type of HBA, but since it's the most prevalent for unRaid purposes, the terms are used somewhat interchangeably around here. The HBA sits in a PCIe slot on your main system motherboard and gets its own power from there. You are correct, though, it handles data only - you still need to power the drives in your external chassis.
There are a few ways to do this. If you purchase a dedicated 'disk shelf' chassis, it will usually come with its own power supply, often two 'server' type PSUs for redundancy. If you are using a standard PC chassis as a DIY disk shelf, you won't have this luxury, so there are a few options. At a basic level, you could run a lengthy molex cable from your main system PSU out to the chassis and use that. However, if you have more than a couple of disks, you run the risk of overloading / overheating. A much better solution is to install a dedicated PSU in the external chassis. This, in itself, presents a further challenge - how can you get this secondary PSU to turn on if you have no motherboard? Again, two options:
You can use a multi-PSU adapter. This device takes a power cable from your main PSU, and the ATX from your secondary PSU. When your main system is powered on, the secondary PSU also receives the 'on' signal and powers up. The downside here is that you still need to run a power cable between the two systems.
My preferred solution is to use a PSU jumper bridge on the secondary PSU. This gadget shorts the power-on pins, so as soon as you flick the physical power toggle switch on the secondary PSU, it springs to life as it receives the 'on' signal from the adapter. The downside is you lose the sync between the two systems - your secondary system is 'always on' until you physically switch it off. My solution to this is to use a power strip with slave sockets that are only powered when there's power draw on a master socket (the main PC). With a secondary PSU in the external chassis, you have plenty of power for drives, fans and anything else needed.
If you want to move your SAS card to the external chassis, you need to extend your PCIe there. Then you need to look at PCIe expanders, bifurcation etc. This can be expensive. However, if you place a SAS EXPANDER in your chassis, you have more options (you can pick these up cheaper than the amazon link - I just added that for illustrative purposes). The SAS expander looks like a PCIe card, but it's not. It does not need to plug into a PCIe slot. It can, but doing so is only for the purpose of providing the card itself with power. The card also has a molex connector that is an alternative for power. So such an expander can be mounted in a PCIe slot, or mounted anywhere in your chassis (they come in different form factors; some are integrated into hot-swap drive backplanes etc.). The SAS expander connects to your main system by connecting to your HBA. One or more of the HBA SAS ports is connected to the SAS expander, which in turn connects to your drives. In this way, a single 4-channel SAS port can expand to up to 20 channels (drives). One handy way of powering and mounting the SAS expander in the external chassis is to use a PCIe expander typically used by the mining community.
In this scenario, you are not using it for data at all, just as a mounting point and power provider for the SAS expander. Of course, most of the above relates to the DIY approach. Splashing out on a proper disk shelf system will negate a good deal of the hackery and provide everything in a unified package - but often at a cost!
  18. Yes, I have this kind of setup. You can externalise hard drives in a specialised 'disk shelf' chassis, or simply use any suitable PC case you have lying around. The systems are usually connected through a SAS controller (HBA) in the main system. The controller will have internal or external SAS ports that are typically connected to the target / external drive array using something like a Dual Mini SAS 26pin SFF-8088 to 36pin SFF-8087 Adapter. unRaid will see these external drives just like regular drives. You can add them to the array, use them for parity, cache or as unassigned disks. You might be interested in this blog post where I touch on the idea. It can get a lot more complicated with expanders, disk backplanes etc., but in basic terms, what you envisage is possible, and quite common. Search on here or the interweb for 'disk shelf', 'hba', 'SAS controller', 'backplane' to get started.
  19. You will be able to start the Windows VM automatically. Should be no problem shutting down the array daily, provided that you do it correctly, not just pulling the plug! Windows performance will be largely on par with a bare-metal system for most tasks, assuming you are passing through a GPU, and sufficient CPU cores and RAM. Conventional wisdom says you'll have a 5-10% performance drop, but you'll only really notice it in benchmarks or serious gaming. You do need to reserve some system resources to allow unRAID to run. For USB, for best results, pass through a whole USB controller / adapter (see the sketch below). This is especially relevant when working with USB audio devices, which won't perform at all well if passed through directly to a VM. Much better to pass through the USB controller and attach the USB device to that. Good luck!
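For reference, passing through a whole USB controller uses the same kind of <hostdev> entry in the VM XML as a GPU. A rough sketch; the source address here is a placeholder, and yours would come from Tools -> System Devices:

    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0x0b' slot='0x00' function='0x3'/>
      </source>
    </hostdev>

Anything plugged into that controller's physical ports then appears natively inside the VM, which is what USB audio gear needs.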
  20. Check out Art of Server on eBay. That guy supplies genuine cards, pre-flashed to IT mode. It's also worth checking out his YouTube channel, as he's got tons of info there.
  21. Since your primary use case is VMs, you might like to consider the platform (CPU/motherboard combo) that provides the most PCIe slots and discrete USB controllers. Unless you plan to remote in to all your VMs, you're likely going to want to be passing through multiple GPUs, USB adapters and possibly other devices, and you'll run out of PCIe slots quite quickly. For ultimate flexibility, choose a motherboard that supports bifurcation. Also, before making a final choice, come back here and see if you can find anyone using the specific motherboard you have chosen. You'll be interested in how well the system breaks out devices into discrete IOMMU groupings. This is really important for smooth passthrough and reduced headaches. I have a Threadripper 2950X and ASRock Taichi X399 and it is excellent with regard to all of the above. It's not the only option, of course, but one to consider. I run 2x MacOS workstations and a Win10 system in VMs with GPU/USB passthrough, as well as a couple of other 'headless' Win10 VMs. Don't skimp on RAM. I found 64GB to be tight for my needs. I upgraded to 96GB and have much more flexibility.
  22. Wouldn't be my cup of tea. As an option, maybe, but just because lots of services are moving to the cloud doesn't mean that every service has to. If I have a problem with my internet, I can't boot or access my storage? No thank you! Cloud is good, and has benefits in lots of cases. Local is still better for some things.
  23. I would suggest asking that question on the docker support thread. I don't recall any such dependency requirements when I set up. However, I run both Virt Manager and the unRAID VM manager and there is no conflict at all. Changes made in Virt Manager show up in the unRAID VM XML, but get wiped if I use the unRAID visual editor. To be safe, get expert advice before proceeding.
  24. My next step would be to install the Virt Manager docker and add a VNC VGA display device. Boot the VM and watch the Virt Manager display to see what's happening with your VM. It's possible that it's going into recovery mode before GPU initialisation, or some such.