Timothyy

Members · Posts: 10

  1. @Decto So that also depends on what I connect to the motherboard, right? On my Taichi board I could in theory disconnect the M.2 SSD and the Ethernet card so that the slot gets x8 speed instead of x4? In the end that would leave more assignable bandwidth available, or am I wrong? In your opinion, would I get a real benefit from swapping the motherboard for the Asus Pro WS X570-ACE? If you were in my shoes, would you do it? And then of course the question: will it let me use 3 GPUs simultaneously (guaranteed)? I can live with giving up a second Ethernet card and/or the M.2 SSD; it's just for gaming.
  2. I have checked 2 things, and @Decto, if I understand it correctly, IOMMU group 04:00.0 is sharing bandwidth with the SSD, 2 network cards, the USB controller and the SATA controller? And this would be different on a workstation-grade motherboard like the Asus Pro WS X570-ACE, where the slot has a wired connection with 8 PCIe lanes of bandwidth, so it would at least get 8 lanes instead of 4?
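To see which devices actually land in the same IOMMU group, something like the following can be run from the Unraid console. This is a generic Linux sketch, assuming the stock /sys layout and that lspci (pciutils) is available; it is not an Unraid-specific tool:

```shell
#!/bin/sh
# Sketch: list every IOMMU group and the PCI devices in it.
# Assumes the standard Linux /sys layout and that lspci (pciutils) exists.
for group in /sys/kernel/iommu_groups/*; do
    [ -d "$group" ] || continue          # no groups -> IOMMU off or unsupported
    echo "IOMMU group ${group##*/}:"
    for dev in "$group"/devices/*; do
        # lspci -nns prints slot, class and vendor/device IDs for one device
        lspci -nns "${dev##*/}"
    done
done
```

Devices listed under the same group number must be passed through together, which is why a GPU sharing a group with SATA/USB controllers is a problem.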
  3. I am running it, but I run into issues when I want a 3rd VM to run simultaneously with a 3rd GPU passed through. For what it's worth, my short specs: Motherboard: ASRock X570 Taichi, BIOS version P3.40; CPU: AMD Ryzen 9 3950X; Memory: 128 GB DDR4 (4x 32 GB Samsung M378A4G43MB1-CTD); GPU: 3x GTX 1080 Ti Gaming X 11 GB. I created a post with my error for the 3rd GPU: and I just came across this post which might be of help:
  4. Yes, I have this working, and am trying to get a 3rd VM working but running into issues. In my opinion, 2 VMs with passthrough actually perform very well simultaneously (4 isolated cores plus their 4 HT siblings, 8 total, and 16 GB of reserved memory per VM). It has been running daily for a couple of months now and I have not encountered many problems. I guess I had 2 or 3 incidental freezes and cannot really remember it crashing completely. Problems start when I try to use a 3rd gaming VM with passthrough. I am updating the BIOS to version P3.61 shortly. I don't run anything else on my rig currently. Motherboard: ASRock X570 Taichi, BIOS version P3.40; CPU: AMD Ryzen 9 3950X; Memory: 128 GB DDR4 (4x 32 GB Samsung M378A4G43MB1-CTD); GPU: 3x GTX 1080 Ti Gaming X 11 GB
  5. Did you buy anything already, and maybe even get it working? If so, what setup do you have? I am trying to do the same thing myself, but I still have to find a fix for this issue before I can use the 3rd VM without crashing. https://forums.unraid.net/topic/98216-video_internal_scheduler_error-windows-10-vm/ describes my problem. Running an X570 board with a Ryzen 9 3950X and 3 GPUs (full specs in the post above).
  6. Hi all, I need some help with how to start troubleshooting my problem. Currently running Unraid 6.8.3 on the following hardware:
     Motherboard: ASRock X570 Taichi, latest BIOS version P3.40
     CPU: AMD Ryzen 9 3950X
     Memory: 128 GB DDR4
     GPU: 3x GTX 1080 Ti Gaming X 11 GB
     Hard drives: 3x 1 TB SSD and 1x 1 TB NVMe drive as unassigned mounted devices (plus 2 hard disks: 1x parity and 1 array disk with just the default shares, and a 1 TB NVMe cache drive)
     PSU: 1600 W (all GPUs properly connected with 2 GPU power cables each; the second power connector on the motherboard is also connected)
     Settings: HVM enabled, IOMMU enabled. No PCIe ACS override, no VFIO allow unsafe interrupts. BIOS boot mode Legacy, C-states switched off as advised in one of the guides for Ryzen builds. No Dockers running, no other VMs than the 3 W10 VMs mentioned below.
     So, as stated above, I have 3 W10 gaming VMs, all of them with 4 pinned cores and the corresponding HT ones, 16 GB of memory, an SSD mounted by device ID, and a dedicated GPU with the same custom vBIOS (I also tried a W10 VM with the NVMe drive mounted by device ID). All settings are equal. 2 W10 gaming VMs work fine and stable, simultaneously as well as individually; the problem starts when I fire up the 3rd one (either on the last remaining SSD or from the NVMe, it does not matter). The VM has a fresh W10 install, and as soon as it runs it crashes within a couple of minutes with a (graphics card) error 'VIDEO_SCHEDULER_INTERNAL_ERROR' shown on the famous Windows blue stop-code screen. Sometimes it freezes my other VMs too, and once it threw the same error on one of the other VMs at the same time. The GPU is not broken; output is working. What I have tried so far:
     - Tested different NVIDIA drivers -> no solution/fix (also installed the NVIDIA control panel; drivers tested via Windows Update and by downloading manually).
     - Checked if the vBIOS was mounted -> yes, they all use the same vBIOS file.
     - Checked pinned CPU cores/HT siblings -> all different and mapped correctly within the isolated range.
     - Checked cables and PCIe extenders -> all fine.
     - Checked if the PCIe lane was enabled -> yes (no 3rd NVMe drive inserted on the motherboard that would disable the last slot).
     - Checked for errors -> no errors in the logs of the machines.
     - Launched only VM 3 so the other two are not running.
     - Checked the VM image on the other GPUs -> no problems encountered, so the VM image should be fine.
     - Checked the IOMMU groups: all GPUs are in different groups.
     When I check the detailed device information with lspci -v, I notice that 3 things are disabled on the problem-causing GPU (its IOMMU device is 10:00.0) and that the 'memory at xxxxxxxx' addresses are different. Anyone having the same issue?
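To chase the "3 things are disabled" observation, the verbose lspci output of a working card can be diffed against the failing one. A minimal sketch; the two addresses below are placeholders and must be replaced with your own slot numbers from lspci:

```shell
#!/bin/sh
# Sketch: compare verbose PCI info of a working GPU against the failing one,
# to spot capability lines that are '[disabled]' on only one card.
# 0a:00.0 and 10:00.0 are placeholder addresses - substitute your own.
GOOD="${1:-0a:00.0}"
BAD="${2:-10:00.0}"
if command -v lspci >/dev/null 2>&1; then
    lspci -vs "$GOOD" > /tmp/gpu_good.txt
    lspci -vs "$BAD"  > /tmp/gpu_bad.txt
    # '<' lines belong to the working card, '>' to the failing one;
    # differences are expected output here, not an error
    diff /tmp/gpu_good.txt /tmp/gpu_bad.txt || true
fi
```

Differing BAR base addresses ("Memory at ...") are normal between cards; the interesting lines are capability flags present or enabled on only one side.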
  7. I use the full power-down from the menu, and before switching off I made sure the VMs were already off. I have also turned on the syslog mirror to flash now. Any direction on what to look for in the logs? I have never really dived into them. And what about the PCIe ACS override setting, which is turned on? Could this cause trouble with the reboot?
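With the syslog mirror enabled, a first pass over the mirrored log could look like the sketch below. The /boot/logs/syslog path is an assumption (where Unraid's "mirror syslog to flash" option typically writes); pass another path as the first argument if your copy lives elsewhere:

```shell
#!/bin/sh
# Sketch: first pass over a mirrored syslog for shutdown/boot trouble.
# /boot/logs/syslog is an assumed default path; override via argument 1.
LOG="${1:-/boot/logs/syslog}"
if [ -f "$LOG" ]; then
    # show the last 50 lines mentioning common failure keywords
    grep -iE 'error|fail|panic|segfault|unclean|vfio' "$LOG" | tail -n 50
else
    echo "log file not found: $LOG" >&2
fi
```

Messages near the end of the log from the previous boot (unclean shutdown, filesystem, or vfio errors) are the first place to look for a reboot that hangs.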
  8. My Unraid server v6.8.3 does not seem to reboot properly. I can only get the server functionally running again after a full power-off followed by a long wait, or by pulling the power cord. If I reboot the server, or power it up too quickly after a shutdown, it does not start and the screen stays black (I checked all video output cards to make sure I was not looking at the wrong screen). I also checked my BIOS boot order and made sure the flash drive was on top (and yes, it is saved). Not sure how to start troubleshooting now, as my guess is that it does not load the flash drive properly. When the server is running I do not get any errors. I have 2 GPUs, slots 1 and 3 both equipped with a GTX 1080, and I have a factory VGA connector attached directly to the motherboard via some sort of IDE cable. All three outputs are working fine. Which is the primary one: the one in slot 1 or the one connected to the motherboard via the cable? I notice that my motherboard output first displays on the VGA (BIOS, load order, pre-boot system check, etc.) and then switches. By the way, when performing a reboot, do I need to stop the array manually first? I currently do this anyway for safety at every reboot or shutdown, but is it mandatory to stop the array first?
  9. Is anyone having issues with a mouse (cabled Corsair Ironclaw) connected via USB and passed through to a Windows 10 VM, where it works fine until you reboot or shut down the VM and start it up again later? I have already tried another USB port (same problem), and also disabled the mouse, updated the VM settings, and enabled it again before a restart. I am sure the mouse is not used by another VM. The Corsair keyboard always works fine, as do a Logitech mouse and keyboard assigned to another VM; that one I can reboot or relaunch without issues. The only way I can get the mouse working again is on the first VM start after a full shutdown and fresh start of the Unraid server. I am running version 6.8.3 and was not able to test on 6.8.2 or earlier. I do not get VM launch errors. Another mouse I tested works fine.
  10. Dear readers, first of all sorry for the long text, but I would like to explain the situation. After being out of building computers and playing with hardware configurations for years, I recently decided to start with it again. I left around the Core 2 Duo era and the release of SLI configurations, and .... Partly inspired by Linus Tech Tips and their "7 Gamers, 1 CPU" build video(s) and some other Unraid setups, I searched for parts similar to those in the first video mentioned, and surprisingly I was able to find some second-hand. I have also wished for years to set up the play room with lots of flexibility, and I use VMware Workstation for small things. I know that for my purpose, which I will explain later on, I could have gone for a lot of entirely different and even better setups, but I am currently in possession of the following parts (bought second-hand):
      Motherboard: ASUS Z10PE-D8 WS (7x PCIe x16 slots, 8 SATA 6 Gb/s ports, 1 M.2 socket 3, so enough possibilities)
      CPU: 2x Intel Xeon E5-2697 v3 (14 cores, 28 threads each)
      CPU cooler: 2x Noctua U12DX i4
      Memory: 4x Samsung 16 GB DDR4 2133 MHz
      Case: be quiet! Dark Base Pro 900 rev. 2
      The purpose is to build a setup with 2 or 3 VMs for gaming that will be used simultaneously, where one of them is probably also a powerful workstation, plus room to play with Docker, different Linux instances, ad hoc VMs for testing, and maybe even a NAS in the future. I will not use water cooling in my setup. I do not have very strict requirements for the gaming VMs, except that it should stay affordable. So I do not need 4K or the highest FPS; if it runs acceptably smoothly at 1920x1080 around medium settings, I am satisfied. I still have some monitors around with that resolution, so I will not buy new screens either. If needed, I will upgrade in the future when I run into limits. Last but not least, I have to find answers to the following questions, and I am hoping some of you can help me out:
      Q1. I already saw a couple of posts on the forum from people with the same motherboard. Is anyone still using it? Anyone willing to share a link or configuration?
      Q2. Which video card should I go for? I will narrow the search to NVIDIA cards only because of problems with AMD that, I was told, still exist in combination with Unraid. I was thinking of the GTX 1050/1060 or maybe even around the 1660 range.
      Q2.1 If I am not mistaken, I should equip PCIe slot 1 with the first GPU. When a GPU takes 2 'heights' (is it called 2U?), can I equip PCIe slot 3 with the second card and slot 5 with the eventual third? Or should they be plugged into slots 1, 2 and 3?
      Q3. What amount of memory would cover my needs? I am thinking of upgrading to 128 GB.
      Q4. Things I should definitely check before buying, assembling and configuring; things to read other than the manual; things to consider, e.g. do's and don'ts.
      Q5. What would be my best option for the PSU? I already know that my CPUs have a TDP of 145 W each and that I have to feed 2 or 3 GPUs.
      Q6. What would be advisable for cache drives and their size, and what for storage: SSD/HDD/M.2?
      Many thanks in advance; I am looking forward to your replies.