Everything posted by bastl

  1. On older versions of Unraid I had to use the pcie_acs_override=downstream,multifunction option to break up my IOMMU groups and get the GPUs into their own groups. With the newest BIOS (AGESA 1.1.0.0) + Unraid 6.6 this isn't needed anymore, at least for me. All my network adapters are still in one group, as are the USB controllers, which I don't pass through anyway. Maybe check if you're on an up-to-date BIOS? @CSeK I also disabled the ACS Override option in the VM Manager settings and everything works. Edit: Your logs show you're on an old BIOS: Sep 20 16:41:34 unRAID kernel: DMI: System manufacturer System Product Name/ROG ZENITH EXTREME, BIOS 0902 12/21/2017 ASUS released a new version in August with the newer AGESA version: ROG ZENITH EXTREME BIOS 1402 "Update AGESA 1.1.0.1 Patch A to support AMD 2nd Gen Ryzen™ Threadripper™ processors"
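     For anyone unsure where that option lives: it goes on the append line in /boot/syslinux/syslinux.cfg (a minimal sketch of my old config; your other boot parameters may differ):

        label Unraid OS
          kernel /bzimage
          append pcie_acs_override=downstream,multifunction initrd=/bzroot

     Removing the pcie_acs_override part from the append line and rebooting restores the grouping the BIOS reports.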
  2. @testdasi Yesterday I reduced my gaming VM to 6 cores + 6 threads on node 1, with all cores isolated, and did a couple of benchmarks without running anything else on that die. Then I switched all my dockers and other VMs from node 0 to node 1, isolated the last 6 out of 8 cores and their threads on node 0 from Unraid, and switched the gaming VM over to node 0, where my 1080 Ti should still be attached (if lstopo is correct). I haven't flipped the cards around yet, because for now I don't need to pass through any vBIOS. The performance is basically the same, except for small stutters/hiccups and sound bugs every 30-40 seconds. Every game I tested (BF5, Far Cry 5, DayZ, Superposition + Heaven benchmark) gave me nearly the same performance as on node 1, plus that weird stuttering. I don't know exactly why; I never had that issue when I isolate the second die and use only those cores. This brings me back to my initial idea that maybe the BIOS is reporting the core pairings wrong to the OS. Why should I get stutters when the GPU is connected to the very cores in use, and no stutters across the Infinity Fabric? Weird! I didn't retest in NUMA mode; I did that before, and as long as I don't mix dies for one VM, it makes no difference in gaming performance. Using UMA mode showed in my tests that I get higher memory bandwidth with no real performance loss.
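     To double-check which logical CPUs sit on which die before pinning, the node-to-CPU mapping can be read straight from sysfs (standard kernel paths, nothing Unraid-specific; you only see two nodes when the BIOS actually runs in NUMA/Channel mode):

        # list the CPUs that belong to each NUMA node
        for n in /sys/devices/system/node/node*; do
          echo "${n##*/}: $(cat "$n/cpulist")"
        done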
  3. @stormense you might have missed something in the video.
  4. Thanks @SpaceInvaderOne, the symlink fixed it for me. Why the hell is the first PCIe slot connected to the second die and the third slot to the first die? In the first slot I have a 1050 Ti, which is used by a Linux VM running on some cores from the first die. The 1080 Ti in the 3rd slot is mainly used for a gaming VM using all cores (8-15; 24-31 isolated) on the second die. I wish I could flip a switch in the BIOS to reverse that. I guess there is no chance of such an option, right?
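     With the symlink in place, lstopo can dump the topology straight to the terminal, which is how I read off the slot-to-die mapping (a sketch; the exact output options vary between hwloc versions):

        lstopo                # falls back to text output when no GUI is available
        lstopo topology.txt   # or write the text rendering to a file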
  5. As you said @bonienl, it first happened with the RC1 build. The domain "msftncsi.com" has been in an alias list in pfSense since 2017 for blocking Microsoft's telemetry. No issues updating any Windows PC in that time while blocking that domain; Windows never showed any connection issues. I had to remove the domain from the list, and also "msftncsi.com.edgesuite.net", for the update check to work. Currently the domains resolve to "a1961.g2.akamai.net", which isn't filtered by my alias list, but I have a couple more "*.akamai.net" domains in there. Let's hope the name resolution doesn't change. Thanks for the hint.
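     To see what those hostnames resolve to right now before touching an alias list (plain dig; nslookup works the same way):

        dig +short msftncsi.com
        dig +short msftncsi.com.edgesuite.net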
  6. Manual update from RC2 worked without any problems so far. But same issue as before: RC3 didn't show up in the update section, I only see the status as "unknown". I can't figure out where my issue is; the AWS download servers are reachable in the browser and nothing gets blocked by my pfSense box. I appreciate any advice.
  7. Ok, now it gets interesting. I already watched almost all of Wendell's videos, but thanks for mentioning it here for people stumbling across this thread. @tjb_altf4 I might have overlooked something in all my tests, and the presented core pairings are alright. I assumed that the better memory performance depends on the cores and which die they are on. Switching between the options Auto, Die, Channel, Socket and None in the BIOS under the AMD CBS settings, I should have already noticed that as soon as I limit a VM to only 1 die, I get the memory bandwidth from that specific memory controller only; I basically cut the bandwidth in half, from quad channel (both dies) to dual channel. Makes perfect sense. How could I miss that? If you need the memory bandwidth for your applications, UMA mode is the way to go. For that I have to set it to Auto, Socket or Die, so the memory gets interleaved over all 4 channels and the CPU is reported as only 1 node. Choosing the option Channel (NUMA mode) basically limits the memory access to the 2 channels of the specific die. The latency in this case should be reduced because you remove the hop to the other die. Option None limits it to single-channel memory and cuts the bandwidth even further, as shown in the pictures above. I'm actually not sure what the difference between Auto, Die and Socket is; they all show similar results in the tests. It should also be mentioned that Cinebench looks more memory-bandwidth-bound than most people report. Wendell mentioned in that video using lstopo to check which PCIe slots are directly connected to which die. Is there a way to check this without lstopo, which isn't available on Unraid? Right now my 1080 Ti sits in the third PCIe x16 slot (1st slot 1050 Ti x16, second slot empty x8) and I'm not sure if it's directly attached to the correct die for my gaming VM. Maybe there is something already implemented in Unraid for listing the topology the way lstopo does. Any ideas? Edit: Another thing I should have checked earlier is the behaviour of the clock speeds. Damn, I feel so stupid right now. watch grep "cpu MHz" /proc/cpuinfo Checking this during the tests would have shown that as soon as I choose cores from both dies for a VM, the clocks on all cores ramp up. If I assign the core pairs Unraid gives me, only one die ramps up to full speed and the other stays at idle clocks. 🙄
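     Regarding the lstopo question above: the kernel already exposes the PCI-to-node mapping in sysfs, so a loop like this works from a stock Unraid shell (a minimal sketch; a value of -1 means the platform didn't report a node, which is what you get in UMA mode):

        # show which NUMA node each GPU (VGA controller) hangs off
        for dev in $(lspci | awk '/VGA/ {print $1}'); do
          echo "$dev -> node $(cat /sys/bus/pci/devices/0000:$dev/numa_node)"
        done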
  8. As reported earlier, for the 1950X on an ASRock Fatal1ty X399 Gaming Pro something is reported differently. Looks like the same happened for Jcloud on his Asus board. Currently I'm on the 6.6 RC2. I couldn't really find a BIOS setting to change how the dies are reported to the OS; it has always been reported as 1 node. Edit: @testdasi It looks like the RAM usage for your VMs isn't optimized either. If I understand the shown scheme right, your VM with PID 33117, for example, pulls half its RAM from each of 2 different nodes, each with its own built-in memory controller. In case you have more than 1 die assigned to the VM that's OK, but if you use, let's say, 4 cores from 1 die, it should use the 4GB RAM from the same node and not from another node.
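     For anyone hitting the same thing: libvirt can pin a VM's memory to a specific node so it stays next to the pinned cores (standard numatune element, added via virsh edit; nodeset='0' is just an example for the first die):

        <numatune>
          <memory mode='strict' nodeset='0'/>
        </numatune>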
  9. Force shutdown is exactly what he is complaining about: it will be forced after some time if the clean shutdown isn't working. Apart from a scheduled shutdown inside the VM, I have no idea. I quickly tested it and it doesn't work for me either. Sorry.
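     For context, this is the difference on the libvirt side (standard virsh commands; the VM name is just an example):

        virsh shutdown Windows10   # sends an ACPI power-button event, the guest decides
        virsh destroy Windows10    # hard power-off, no clean shutdown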
  10. Manual update passed without errors and everything looks OK, but the status in the update dialog still shows "unknown".
  11. Here are my diagnostics. I checked my pfSense box but nothing gets blocked, and the AWS services are accessible. If I open the link with Google Chrome from inside a VM on Unraid, something comes up in the browser; it looks like some sort of XML file with the download instructions. I will try the manual install and report back in a second.
  12. There is no RC2 showing up anywhere. The second picture shows the Update Assistant running for the next branch.
  13. I'm trying to update from RC1 to RC2, but the Tools/Update OS section doesn't come up with the new version, and the Update Assistant doesn't show any errors. Am I missing something, or do I have to downgrade to 6.5.3 first? On RC1 I had to trigger the docker "check for updates" manually by executing dockerupdate.php. Is there a way to trigger the OS update manually as well?
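     In case someone wants to trigger the docker check the same way: I'd locate the script first instead of trusting a hard-coded path (a sketch; the exact location may differ between Unraid versions):

        # find the update script under the webGUI plugins, then run it with PHP
        find /usr/local/emhttp/plugins -iname '*dockerupdate*'
        # then: php /path/from/above/DockerUpdate.php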
  14. With AGESA 1.0.0.4 you needed some sort of extra patch. AGESA 1.0.0.6 was never released as stable, at least from ASRock; only a beta version was available, which I never tested. I think it mainly addressed memory incompatibility for the AM4 Ryzen chips and came with some microcode updates to fix security issues. AGESA 1.1.0.0 should be the first version including the fix.
  15. Hi @gridrunner First of all, big thanks for all your great Unraid tutorials; they helped me a lot to configure my system. I use a first-gen 1950X, so far without any big issues, but as I noticed and reported earlier in this thread, after an upgrade from 6.5.3 to 6.6.0-rc1 I can't edit any existing VM in the form view without getting an error. Creating one works fine, but not editing it later. Did you upgrade from an earlier version, or did you try a fresh install? If I remember correctly you had a 1950X before, right? If you still have that chip, can you check the following thread and maybe post how the core pairings look on your system? https://forums.unraid.net/topic/73509-ryzenthreadripper-psa-core-numberings-andassignments/?do=findComment&comment=678031
  16. For me the mounted libvirt.img is accessible under /etc/libvirt via the web terminal.
  17. You can find the XML files in /etc/libvirt/qemu.
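     Instead of reading those files directly, virsh can pull the same definitions through libvirt (standard commands; the VM name is an example):

        virsh list --all          # show all defined VMs
        virsh dumpxml Windows10   # print a VM's current XML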
  18. The command brings up the following: For whatever reason, every pair is shown twice. My current syslinux config looks like this. I removed the following today, after I tested disabling the ACS override patch: "pcie_acs_override=downstream,multifunction". The IOMMU groups are fine now with the current kernel and the BIOS update from yesterday. With the older BIOS from the end of last year I wasn't able to pass through a GPU or the NVMe controller without this line; now it works without it. All the issues with the core pairings and the non-editable VMs existed on both configurations. I'm not really sure if I ever touched the go file.
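     To verify the grouping after a change like this, the groups can be listed straight from sysfs (a common snippet, nothing Unraid-specific):

        # print every PCI device with its IOMMU group number
        for d in /sys/kernel/iommu_groups/*/devices/*; do
          g=${d#/sys/kernel/iommu_groups/}; g=${g%%/*}
          printf 'IOMMU group %s: ' "$g"
          lspci -nns "${d##*/}"
        done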
  19. @chatman64 Are you German, same as me? I ask because in your screenshot I see you have VNC configured to use a German layout; maybe something with the localisation causes that error. I already checked whether the VNC layout is the issue, but that wasn't the case: I tested a VM without VNC added to it, same error, and choosing the US layout also causes it. And as mentioned earlier, creating a new VM without editing anything causes that error on the first edit afterwards.
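     For reference, the layout sits in the graphics element of the VM's XML (standard libvirt syntax; the port and listen values here are examples):

        <graphics type='vnc' port='-1' autoport='yes' listen='0.0.0.0' keymap='de'/>

     Switching keymap='de' to keymap='en-us', or dropping the attribute entirely, is what I tested above.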
  20. BIOS is up to date now with version 3.30. Same results: core pairings are still shown wrong in 6.6.0-rc1. I played around a bit and tested a couple of things. On the first boot it came up with tons of PCI reset errors, but it looks fine now after the second reboot. I can disable the ACS override now and get most devices split up into their own groups; only the network interfaces are still grouped together.
  21. I have a feeling it's an AMD-related issue with Unraid 6.6.0-RC1; in earlier versions I never had any issues like this. Maybe someone with a Ryzen or Threadripper system reading this can test my xml or try to reproduce this error. win7_outlook.xml This looks like another Threadripper/Ryzen-related issue I posted about in another thread a couple of days ago: the core pairings shown in Unraid don't really match the real pairing of the cores. I did a couple of tests to find out which core is on which die and which ones are only the threads. It looks like the pairings presented to the user by Unraid aren't correct, and this hasn't changed with the 6.6.0-RC1 version.
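     The pairings the kernel itself sees can be read from sysfs, which is what I compared Unraid's display against (standard topology files):

        # each line shows a core together with its HT sibling
        grep . /sys/devices/system/cpu/cpu*/topology/thread_siblings_list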
  22. Same behaviour in safe mode: creating a new VM works fine, editing brings up that error. Btw, did you try my xml on an Intel or an AMD system?
  23. First I tried to change the memory from 4 to 8 gigs. After that I tried to change the machine type from i440fx-2.11 to a newer version, and even changing the keyboard layout for VNC from de to us results in this error. Every change was done one by one. Even pressing the update button without any changes brings up that error.