Posts posted by meep

  1. 1 minute ago, bubbleman441 said:

    Ok I will give that a try. As for the GPU ROM BIOS, I tried to boot the VM without it and it runs with nothing on the screen. I'm very confused and ready to return my new hardware.

    A little perseverance and you’ll overcome these issues. Once you develop an understanding of what’s going on, you’ll be spinning up VMs like a pro.

     

    For the GPU issue, what machine type and BIOS have you configured for the VM? SeaBIOS usually works best for GPU passthrough, and switching between Q35 and i440fx machine types often solves a lot of problems.
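
    For reference, these settings live in the <os> block of the VM XML. A minimal sketch, assuming typical unRAID firmware paths and machine version strings (yours will differ):

        <!-- i440fx machine type with SeaBIOS (no <loader> element means SeaBIOS firmware) -->
        <os>
          <type arch='x86_64' machine='pc-i440fx-4.2'>hvm</type>
        </os>

        <!-- Q35 machine type with OVMF (UEFI) instead -->
        <os>
          <type arch='x86_64' machine='pc-q35-4.2'>hvm</type>
          <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
          <nvram>/etc/libvirt/qemu/nvram/your-vm_VARS-pure-efi.fd</nvram>
        </os>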

  2. To pass through any PCIe device, like the USB adapter, it must either be in its own IOMMU group, or you must pass through all the devices in the group.

     

    It's not clear what you mean when you say Virt Manager is not in your apps. Did you install it from Community Applications?

     

    We'd need to see your IOMMU groupings to help on this, but you should certainly see if there are IOMMU settings in your BIOS, and investigate the ACS override setting in unRAID.

     

    For the GPU issue, it's unclear what machine type and BIOS you have set for the VM. If you don't pass a GPU ROM file, what happens?

  3. 9 minutes ago, Masterbob221 said:

    it all works now just for some reason my mouse didnt passthrough ill work that out tommorow. id like to reiterate my gratefulness for your support and if there is a way to + rep you please let me know and id be happy to do it!

    Great to hear it’s working for you. Thanks are enough, just pay it forward sometime!

  4. Hey @Masterbob221

     

    Good to chat with you just now. For others, to go over what we discussed:

     

    1. You set up a new VM using SeaBIOS

    2. You made edits to the XML to ensure that the GPU video & audio devices are not split across two virtual slots (don't forget the 'multifunction' tag - see the sketch below)

     

    If, when your VM comes up, you still don't have a picture, you can try following SpaceInvaderOne's more recent videos on obtaining, editing and passing through a ROM BIOS file for your specific card.
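
    For illustration, the relevant hostdev entries might look something like the sketch below. The host and guest PCI addresses and the ROM path are placeholders for your own hardware; the key points are that the video and audio functions share one virtual bus/slot, the first carries multifunction='on', and the <rom> element is optional:

        <!-- GPU video function (host 0000:0b:00.0); guest slot is shared with the audio function -->
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <source>
            <address domain='0x0000' bus='0x0b' slot='0x00' function='0x0'/>
          </source>
          <!-- Optional: vBIOS ROM dumped/edited for this specific card (placeholder path) -->
          <rom file='/mnt/user/isos/vbios/gtx1660ti.rom'/>
          <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0' multifunction='on'/>
        </hostdev>

        <!-- GPU audio function (host 0000:0b:00.1); same guest bus/slot, function 0x1 -->
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <source>
            <address domain='0x0000' bus='0x0b' slot='0x00' function='0x1'/>
          </source>
          <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x1'/>
        </hostdev>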

     

    If you do get a picture, but have issues installing Nvidia drivers or see a Code 43 error in Device Manager, you can add the configuration to your XML that I outline here.
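
    The usual additions for Code 43 look something like this. Treat it as a sketch rather than a drop-in; the vendor_id value is just an arbitrary 12-character string:

        <features>
          <acpi/>
          <apic/>
          <hyperv>
            <relaxed state='on'/>
            <vapic state='on'/>
            <spinlocks state='on' retries='8191'/>
            <!-- Arbitrary vendor string; masks the Hyper-V vendor that the Nvidia driver objects to -->
            <vendor_id state='on' value='0123456789ab'/>
          </hyperv>
          <kvm>
            <!-- Hide the KVM virtualisation signature from the guest -->
            <hidden state='on'/>
          </kvm>
        </features>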

     

    Good luck!

     

  5. When editing your VM, look at the top right of the page. See a toggle widget labeled 'form view'? Switch this to XML view. This shows the VM configuration data in a code (XML) view. Please paste that here.

     

    Please also paste the 'PCI Devices and IOMMU Groups' section when you navigate to Tools -> System Devices.

     

    The fact that Splashtop is not reachable when you boot with the GPU only indicates that your system is not booting fully, but you cannot see why.

     

    My advice here would be to install the VIRT-Manager docker, add a QXL adapter alongside your GPU, edit the VM XML to ensure this new adapter is the first to init, and observe the boot process in VIRT-Manager.
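
    As a rough sketch (addresses and values illustrative only), the VM keeps its passed-through GPU but also gets a virtual display you can watch from VIRT-Manager:

        <!-- VNC/virtual display output so the boot process is visible remotely -->
        <graphics type='vnc' port='-1' autoport='yes'>
          <listen type='address' address='0.0.0.0'/>
        </graphics>
        <!-- QXL virtual adapter, marked as the primary video device ahead of the passed-through GPU -->
        <video>
          <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1' primary='yes'/>
        </video>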

     

    However, the fact that you are not aware of what your VM XML is means there's quite a hill to climb! Let's start with the XML and the PCI/IOMMU device listing first and take it from there.

     

  6. 22 minutes ago, Masterbob221 said:

    for my vm using gpu passthrough i have tried starting it up and it up and nothing shows on my display

    please help me!

     

    my setup is

    3900x

    x570 aorus elite

    gtx 1660ti

    Hi there

     

    To ensure you get help, you will need to provide more information, specifically:

     

    1. What have you done so far? What guides have you followed?

    2. What OS have you installed in the VM?

    3. Does the VM work OK with a VNC display?

    4. Share your VM XML.

    5. Share any details from the VM log file.

    6. What other devices are in your system? Is the GPU the only one?

     

    To get started with troubleshooting, I would suggest the following:

    1. Remove the GPU passthrough.

    2. Boot the VM using VNC.

    3. Open netplwiz (if Windows) and allow users to log on without entering a password. This ensures the system boots to the desktop and does not get stuck at a logon prompt you cannot see.

    4. Install Splashtop Streamer in the VM.

    5. Install the Splashtop viewer on some other system.

    6. Reboot the VM, still in VNC mode, and ensure you can access it using Splashtop.

    7. Shut down the VM and add back the GPU passthrough, ensuring you follow all advice about BIOS dumps, not splitting the video and audio devices, etc.

     

    Now when your system boots, even if your display is dark, you should be able to access the VM via Splashtop and see what's going on. I suspect you'll see a Code 43 error when you view the GPU in Device Manager.

     

    If the above does not work, you may need to become familiar with the virt-manager docker, which allows finer control over VM configuration and troubleshooting.

     

     

     

     

  7. 18 minutes ago, Kich902 said:

    Saw your blog; quite excellent it is. Was hoping to see ur completed "Rig" if that's the word to use😃 esp' to do with the bifurcation adapters you spoke of. i'm now quite interested in it, seems like something i'll do later when i can and when the need arises.

    Thanks for the kind words. Do click on some of the sponsors there - it all helps!

     

    I have a few final tweaks to make to my setup and then I'll do a detailed write-up or maybe a video. Stay tuned! Though it's never actually completed; there's always something new to try out.

     

    You need to ensure your MB supports bifurcation. Not all do.

  8. One thing I don't like about the X570 platform is the low PCIe slot count. When you get into VMs, you'll find that you'll have much better performance when passing through hardware. If your motherboard has multiple USB controllers on board, that's a good start. If not, adding discrete USB adapters is the way to go. You can pass through a mouse and keyboard directly, of course, but once you start looking at, say, USB audio devices, you really need to attach these to a hardware device that is passed through to the VM.

     

    Worst case, for your two VMs, you'll need 2 GPUs and 2 PCIe USB adapters. That's 4 slots, without even thinking about any other cards you might need, for example HBAs if your storage expands in the future.

     

    I think you're right to go for AMD, but do have a look at Threadripper if budget allows. There should be good value on 2nd generation. I always buy high-end but behind the curve, maximising performance without paying bleeding-edge prices by looking towards previous-gen tech.

     

    Whatever way you go, do verify that your motherboard has excellent support for IOMMU. By that, I mean discuss with other users on here if your intended motherboard breaks out devices well. This is important for VMs and hardware passthrough. When passing through a hardware device, you must pass through all devices in the same IOMMU group. It's therefore important that your devices break into the smallest possible groups.

     

    In terms of cases, it depends on the form factor you're after. I've just moved from a ThermalTake Core X9 monster to a rackmount server case. If I were looking for a tower today, I'd go for a Fractal Design Define 7 XL - an amazing case with great flexibility and space for loads of hard drives (if you can find the caddies).

     

    On the other thread, I mentioned my blog which you might find useful in your research.

     

  9. 1 hour ago, Kich902 said:

    Nice and true; u can never have enuff RAM! Could you tell me ur rig specs and the setup of the VMs as used by ur fam' i.e how they connect to them e.g vnc or through dedicated monitors and such.

     

     

    The 2 macOS workstations have GPUs and USB adapters passed through. They are accessed via displays, mice & keyboards located throughout the house. One is connected with HDMI/USB cables, the other using HDBaseT over Cat5e.

     

    The utility VM is headless; I use Splashtop. For the Gaming/Movies VM, I have a direct connection to my HT receiver, but also use Parsec to stream games to the other VMs.

     

    System is a Threadripper 2950X on a Taichi X399. You'll see more details on my blog.

     

    I'll have a look at your thread.

  10. 1 hour ago, Kich902 said:

    96GB of RAM? What's that you put them VMs through i wonder.

    • macOS Catalina 16GB - General workstation (Photoshop & Illustrator)
    • macOS Mojave 16GB - General workstation (some FCPX, some coding, but usually just lots of apps)
    • Windows 10 utility 12GB - Home control incl. Blue Iris security
    • Windows 10 entertainment 16GB - Just games and home theatre

     

    With 64GB, I had to pare back RAM on those systems to allow for a stack of Dockers and, of course, space for unRaid to run. The extra 32GB just gives the system room to breathe and allows me to maximise it.

     

    For example, it also allows me to spin up additional VMs as testers or trials without disturbing anyone else. The above VMs are in pretty much constant use by one of the family at any given time, and announcing I need to shut one or two down for a bit of experimentation would not go down well. There are really no other computer systems in the house - everything is consolidated into my unRaid server, so it's something of a workhorse.

     

    I've been using Macs since a venerable LCII on OS6.x and always found them to run better with more RAM, particularly for the visual applications, or when using lots of apps simultaneously. For the Windows VM use cases, I find the same.

     

    You can never have enough RAM. I'll have it etched on my headstone.

     

     

  11. Installing directly into unRaid is likely a bad idea as unRaid is memory resident and any changes will be wiped on restart, unless you set up scripts to reinstall at boot time.

     

    If all of your tools are available as dockers, that is one way to go. Dockers tend to be less resource intensive than VMs, if that's a concern. I like VMs for no reason other than I'm old school. But also, I don't have a separate standalone workstation - ALL my systems are VMs in unRaid.

     

    If you have questions about specific dockers, the best place to ask is on the specific docker support thread - you'll find a link to this when looking up the docker in Community Applications. Or just go ahead and install the docker and explore. They are easy to uninstall if not needed.

     

    Finally, be aware of unRaid NerdPack which installs a lot of useful tools and widgets into the OS itself. Might come in handy.

     

  12. 6 hours ago, Aerodb said:

    Thank you for this info and your blog post. It helped explain a lot of what i think i need to do to grow beyond my CoreX9 Case (disk space) and current mobo (lack of SATA ports). 

     

    With the new info, I have three questions I was hopping you could elaborate on...

    1- With the SAS/HBA cards (not sure of the difference on these). If the card sits in an PCIx16 slot, how does the SAS card handle power to the diskshelf? it seems like it would only handle the data... Disks->SAS card in disk shelf->SAS cable->SAS card in host computer->host mobo.

    2- With the SAS card not handling power for the disks. You mentioned a link of two PSU units. Does this power the disk shelf (fans, disks)?

    3- The SAS card in the diskshelf, does if physically mount to anything? i mean, i dont think the diskshelf has a mobo...

     

     

    Any help with these questions is appreciated but also any other info you think would help. Any feedback is appreciated. 

    Hi

     

    That's a massive coincidence - I have also recently retired a Core X9. It's a huge case but, for its size, has very poor support for multiple disks.

     

    Happy to try to answer your questions:

     

    HBA = Host Bus Adapter. There are many types. A SAS HBA is just one type of HBA, but since it's the most prevalent for unRaid purposes, the terms are used somewhat interchangeably around here.

     

    The HBA sits in a PCIe slot on your main system motherboard, and gets its own power from there. You are correct, though, it handles data only - you still need to power the drives in your external chassis. There are a few ways to do this.

     

    If you purchase a dedicated 'disk shelf' chassis, it will usually come with its own power supply, often two 'server' type PSUs for redundancy. If you are using a standard PC chassis as a DIY disk shelf, you won't have this luxury, so there are a few options.

     

    At a basic level, you could run a lengthy Molex cable from your main system PSU out to the chassis and use that. However, if you have more than a couple of disks, you run the risk of overloading / overheating. A much better solution is to install a dedicated PSU in the external chassis. This, in itself, presents a further challenge - how can you get this secondary PSU to turn on if you have no motherboard? Again, two options:

     

    You can use a multi-PSU adapter. This device takes a power cable from your main PSU and the ATX connector from your secondary PSU. When your main system is powered on, the secondary PSU also receives the 'on' signal and powers up. The downside here is that you still need to run a power cable between the two systems.

     

    My preferred solution is to use a PSU Jumper Bridge on the secondary PSU. This gadget shorts the power-on pins, so as soon as you flick the physical power toggle switch on the secondary PSU, it springs to life, as the bridge provides the 'on' signal. The downside is you lose the sync between the two systems - your secondary system is 'always on' until you physically switch it off. My solution to this is to use a power strip with slave sockets that are only powered when there's power draw on a master socket (the main PC).

     

    With a secondary PSU in the external chassis, you have plenty of power for drives, fans and anything else needed.

     

    If you want to move your SAS card to the external chassis, you need to extend your PCIe bus out there. Then you need to look at PCIe expanders, bifurcation, etc. This can be expensive. However, if you place a SAS expander in your chassis, you have more options. (You can pick these up cheaper than the Amazon link - I just added that for illustrative purposes.)

     

    The SAS expander looks like a PCIe card, but it's not. It does not need to plug into a PCIe slot. It can, but doing so only provides the card itself with power. The card also has a Molex connector as an alternative for power, so such an expander can be mounted in a PCIe slot or anywhere else in your chassis. (They come in different form factors; some are integrated into hot-swap drive backplanes, etc.)

     

    The SAS expander connects to your main system via your HBA: one or more of the HBA's SAS ports is connected to the SAS expander, which in turn connects to your drives. In this way, a single 4-channel SAS port can expand to up to 20 channels (drives).

     

    One handy way of powering and mounting the SAS expander in the external chassis is to use a PCIe riser of the type typically used by the mining community. In this scenario, you are not using it for data at all, just as a mounting point and power provider for the SAS expander.

     

    Of course, most of the above relates to the DIY approach. Splashing out on a proper disk shelf system will negate a good deal of the hackery and provide everything in a unified package - but often at a cost!

     

     

     

     

     

     

     

     

  13. On 6/21/2020 at 12:18 AM, Aerodb said:

    So doing some more research, i found the video that level 1 techs did with gamers nexus where they built a rig that would act as the "brain" of the disk shelf.

     

    Has anyone done anything like this before? Im curious how to connect the two boxes or what the controller card was that wendell mentions but doesnt talk about. 

     

    Also, will i need to use ZFS or is what unraid uses(if thats the same) good enough? i havent had any issues so far.

     

    Long story short, Im thinking this will be a cheap way to gain drive slots. anyone know much about this?

    Yes, I have this kind of setup.

     

    You can externalise hard drives in a specialised 'disk shelf' chassis, or simply use any suitable PC case you have lying around.

     

    The systems are usually connected through a SAS controller (HBA) in the main system. The controller will have internal or external SAS ports that are typically connected to the target / external drive array using something like a Dual Mini SAS 26pin SFF-8088 to 36pin SFF-8087 Adapter.

     

    unRaid will see these external drives just like regular drives. You can add them to the array, use them for parity, cache or as unassigned disks.

     

    You might be interested in this blog post where I touch on the idea. It can get a lot more complicated with expanders, disk backplanes etc., but in basic terms, what you envisage is possible, and quite common.

     

    Search on here or the interweb for 'disk shelf', 'hba', 'SAS controller', 'backplane' to get started.

     

     

     

  14. You will be able to start the Windows VM automatically.

     

    Should be no problem shutting down the array daily, provided that you do it correctly, not just pulling the plug!

     

    Windows performance will be largely on par with a bare-metal system for most tasks, assuming you are passing through a GPU and assigning sufficient CPU cores and RAM. Conventional wisdom says you'll see a 5-10% performance drop, but you'll only really notice it in benchmarks or serious gaming. You do need to reserve some system resources to allow unRAID to run.
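
    For example, you can pin the VM to specific cores and leave some free for unRAID and Dockers. A sketch only; the core numbers depend entirely on your CPU topology:

        <!-- 8 vCPUs pinned to 4 cores plus their hyperthread siblings; the remaining cores stay free for unRAID -->
        <vcpu placement='static'>8</vcpu>
        <cputune>
          <vcpupin vcpu='0' cpuset='4'/>
          <vcpupin vcpu='1' cpuset='20'/>
          <vcpupin vcpu='2' cpuset='5'/>
          <vcpupin vcpu='3' cpuset='21'/>
          <vcpupin vcpu='4' cpuset='6'/>
          <vcpupin vcpu='5' cpuset='22'/>
          <vcpupin vcpu='6' cpuset='7'/>
          <vcpupin vcpu='7' cpuset='23'/>
        </cputune>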

     

    For USB, for best results, pass through a whole USB controller / adapter. This is especially relevant when working with USB audio devices, which won't perform at all well if passed through directly to a VM. Much better to pass through the USB controller and attach the USB device to that.
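
    Passing a whole USB controller uses the same PCI hostdev mechanism as a GPU. A sketch, with a placeholder host address you would replace with your controller's address from Tools -> System Devices:

        <!-- Entire USB controller (host 0000:0e:00.0) handed to the VM; anything plugged into it hot-plugs normally -->
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <source>
            <address domain='0x0000' bus='0x0e' slot='0x00' function='0x0'/>
          </source>
        </hostdev>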

     

    Good luck!

  15. Since your primary use case is VMs, you might like to consider the platform (CPU/motherboard combo) that provides the most PCIe slots and discrete USB controllers.

     

    Unless you plan to remote in to all your VMs, you're likely going to want to pass through multiple GPUs, USB adapters and possibly other devices. You'll run out of PCIe slots quite quickly.

     

    For ultimate flexibility, choose a motherboard that supports bifurcation.

     

    Also, before making a final choice, come back here and see if you can find anyone using the specific motherboard you have chosen. You'll be interested in how well the system breaks out devices into discrete IOMMU groupings. This is really important for smooth passthrough and reduced headaches.

     

    I have a Threadripper 2950X and an ASRock Taichi X399, and it is excellent with regard to all of the above. It's not the only option, of course, but one to consider.

     

    I run 2x macOS workstations and a Win10 system in VMs with GPU/USB passthrough, as well as a couple of other 'headless' Win10 VMs.

     

    Don't skimp on RAM. I found 64GB to be tight for my needs. I upgraded to 96GB and have much more flexibility.

  16. 2 hours ago, opentoe said:

    I'd love to see a roadmap. I know cloud based services are getting more and more popular, hey why not, throw our little USB 2.0 stick that boots our system and use PXE.

    Wouldn’t be my cup of tea. As an option, maybe, but just because lots of services are moving to the cloud doesn’t mean that every service has to.

     

    If I have a problem with my internet, I can't boot or access my storage? No thank you! Cloud is good and has benefits in lots of cases. Local is still better for some things.

  17. I would suggest asking that question on the docker support thread. I don't recall any such dependency requirements when I set up. However, I run both Virt Manager and the unRAID VM manager and there is no conflict at all. Changes made in Virt Manager show up in the unRAID VM XML, but get wiped if I use the unRAID visual editor.

     

    To be safe, get expert advice before proceeding.

     


  18. 12 minutes ago, JohnSnyder said:

    Using Virtual Machine Manager in Manjaro Linux, I can edit the template and increase the video memory which will allow me to display my virtual machines at full 4K resolution. When I examine the unRaid VM template of a virtual machine, I do not see any line which addresses the subject of video memory.

     

    Is there a way to edit an unRaid VM template to increase the video memory and therefore allow full 4K resolution?

     

    Hi

     

    I would recommend installing the VIRT-Manager docker. This provides much finer control over VM configuration. It also doesn't have the issue that the native unRaid manager exhibits, where using the visual editor destroys custom XML changes.

     

    It's a great tool that exposes more options for VMs and includes a display. It's gotten me out of quite a number of holes over the years.

     

    You should be able to define the virtual GPU memory, or just use it as a guide for how to edit the unRaid XML manually, if you like.
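
    In the XML, video memory lives on the <video> element. Something like the sketch below, assuming a QXL virtual GPU; the values are in KiB and illustrative only, so what VIRT-Manager writes for you is the safest guide:

        <!-- QXL virtual GPU with increased video memory for higher resolutions (values in KiB) -->
        <video>
          <model type='qxl' ram='131072' vram='131072' vgamem='32768' heads='1' primary='yes'/>
        </video>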

     

    [Screenshot: VIRT-Manager VM configuration showing the video settings]
