bastl


  1. @Auggie The "libvirt.img" you can find in the default domain share contains all your VM XML and BIOS files. Depending on how you set up this share before, search for that file. It should be on the array if you never used a cache. Check whether you have multiple folders containing that file on the array drives and on the cache drive (a quick search sketch follows below), or restore it from a backup if you have one.
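
     One way to check for stray copies (a minimal sketch, assuming the standard Unraid mount points under /mnt; adjust the depth if your domain share is nested deeper):

         # list every copy of libvirt.img on the array disks, the cache and the user share
         find /mnt -maxdepth 3 -name 'libvirt.img' 2>/dev/null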
  2. @OneFiveRhema I tried it myself with all the default settings and have the same issue with ubuntu-16.04.6-server-amd64. During the install process you'll be asked if you want to force the installation into UEFI mode. The default is set to "no" and ends in an unbootable installation. Either choose "yes" on that screen, or, when setting up the VM in Unraid for the first time, switch the BIOS setting to "SeaBIOS". The default in the Ubuntu template is OVMF/UEFI.
  3. @OneFiveRhema Sounds like you're dropping into the EFI shell. In the past I had issues on a couple of Linux distros where manually setting up the partitions and mount points screwed up the grub config. In most cases the default partitioning option worked.
  4. @OneFiveRhema What happens if you remove the ISO?
  5. @OneFiveRhema Edit your VM, switch to XML view, and change the "boot order" for your vdisk to 1 and for the ISO to 2 so it looks something like this:

         <disk type='file' device='disk'>
           <driver name='qemu' type='qcow2' cache='writeback' discard='unmap'/>
           <source file='/mnt/user/VMs/W7_template_seabios/vdisk1.qcow2'/>
           <target dev='hdc' bus='scsi'/>
           <boot order='1'/>
           <address type='drive' controller='0' bus='0' target='0' unit='2'/>
         </disk>
         <disk type='file' device='cdrom'>
           <driver name='qemu' type='raw'/>
           <source file='/mnt/user/isos/Acronis/AcronisMedia.117iso.iso'/>
           <target dev='hda' bus='ide'/>
           <readonly/>
           <boot order='2'/>
           <address type='drive' controller='0' bus='0' target='0' unit='0'/>
         </disk>
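
     If you'd rather make the change from a terminal than the GUI, virsh can open the same definition in an editor (a sketch; the domain name below is a placeholder for whatever your VM is actually called):

         # list defined VMs, then open the chosen one for editing
         virsh list --all
         virsh edit W7_template_seabios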
  6. @jbartlett I haven't found any use case where I benefit from higher memory bandwidth. Not in gaming, not in my general workflow, and not in any CAD software I use. Lower latency should, in most situations, make working with any software quicker and, let's say, snappier, at least in my case. I don't have any bandwidth-heavy tasks. I don't know if it makes any difference in decoding/encoding video, for example, but in most cases faster access to RAM is the preferred scenario. Correct me if I'm wrong.
  7. @jbartlett And don't forget to play around with the other 10ish options 🤣
  8. The basic architecture didn't really change between first and second gen TR4. They only added 2 more nodes and improved the interconnect latencies of the first gen; the rest is basically the same. The pinning goes inside the cputune tag (a sketch for checking your host topology follows below):

         <vcpu placement='static'>14</vcpu>
         <iothreads>1</iothreads>
         <cputune>
           <vcpupin vcpu='0' cpuset='9'/>
           <vcpupin vcpu='1' cpuset='25'/>
           <vcpupin vcpu='2' cpuset='10'/>
           <vcpupin vcpu='3' cpuset='26'/>
           <vcpupin vcpu='4' cpuset='11'/>
           <vcpupin vcpu='5' cpuset='27'/>
           <vcpupin vcpu='6' cpuset='12'/>
           <vcpupin vcpu='7' cpuset='28'/>
           <vcpupin vcpu='8' cpuset='13'/>
           <vcpupin vcpu='9' cpuset='29'/>
           <vcpupin vcpu='10' cpuset='14'/>
           <vcpupin vcpu='11' cpuset='30'/>
           <vcpupin vcpu='12' cpuset='15'/>
           <vcpupin vcpu='13' cpuset='31'/>
           <emulatorpin cpuset='8,24'/>
           <iothreadpin iothread='1' cpuset='8,24'/>
         </cputune>
         <numatune>
           <memory mode='strict' nodeset='1'/>
         </numatune>
         <resource>
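
     To see which host CPU numbers belong to which node before pinning (standard tools; numactl may need to be installed first):

         # nodes with their CPU lists and memory sizes
         numactl --hardware
         # per-CPU view including core and NUMA node columns
         lscpu --extended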
  9. That's wrong. All first gen TR4 chips have 2 nodes, each with its own memory controller, to get quad channel memory working on that platform: 4 cores per node on the 1900X, 6 on the 1920X, and 8 per node on the 1950X. 4 nodes were only available on first gen Epyc and were introduced on second gen TR4 like yours.
  10. It looks like it only works if you pass through extra info for the cache like I did (the relevant lines are excerpted below). https://git.qemu.org/?p=qemu.git;a=commit;h=7210a02c58572b2686a3a8d610c6628f87864aed https://www.reddit.com/r/VFIO/wiki/known_issues#wiki_enabling_smt_on_amd_processors_with_qemu_3.1.2B
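
      The relevant lines are the topoext feature plus the emulated L3 cache inside the <cpu> block (an excerpt from the full block in post 12 below):

          <cpu mode='custom' match='exact' check='full'>
            <cache level='3' mode='emulate'/>
            <feature policy='require' name='topoext'/>
          </cpu>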
  11. @sonicsjuuh Just as an option, you can also use a plugin and the ControlR app to start/restart/stop your VMs or Dockers from your phone.
  12. I know that feeling. There is always that one guy who has another little tweak 😂 I'm already seeing a 19% improvement in the memory score with this set. I'll also test hugepages (a sketch for enabling them follows this post).

      By using "interleave" you spread the RAM across all memory controllers of all nodes, even the ones on a node your VM may not be using. On first gen TR4 this was a big issue, because it added a lot of RAM latency. Sure, you get higher memory bandwidth from the quad channel effect, but in most scenarios in my tests the lower latency was the preferred option. I'm not exactly sure how big the difference is on second gen TR4, but using "preferred" or "strict" was the better choice for me. Every program, game or benchmark is more or less affected by the lower bandwidth of what is basically a dual channel configuration, but I saw a bigger impact from reducing latency with the "strict" setting. Maybe have a look at the "Cache & Memory Benchmark" that comes with AIDA64 to test this.

      This is part of the extra CPU flags I've been using for a while now:

          <cpu mode='custom' match='exact' check='full'>
            <model fallback='forbid'>EPYC</model>
            <topology sockets='1' cores='7' threads='2'/>
            <cache level='3' mode='emulate'/>
            <feature policy='require' name='topoext'/>
            <feature policy='disable' name='monitor'/>
            <feature policy='require' name='hypervisor'/>
            <feature policy='disable' name='svm'/>
            <feature policy='disable' name='x2apic'/>
          </cpu>

      By forcing Windows to recognize the CPU as an EPYC with these tweaks, it also recognizes the correct L1, L2 and L3 cache sizes the node has to offer. Without them it showed wrong cache sizes and wrong mapping numbers, and starting up 3DMark, for example, always crashed or froze the VM completely at the point where it gathers system info. Not sure which other software might be affected, but this helped me in this scenario. Obviously the vcore is reported wrong, but the cache info is reported correctly with this tweak.

      1 core is used for iothread and emulatorpin

          <emulatorpin cpuset='8,24'/>
          <iothreadpin iothread='1' cpuset='8,24'/>

      and the rest is dedicated to this one VM. One of the two 8 core dies of the 1950X is reserved for this VM only, and adding up the numbers exactly matches AMD's specs. BUT this isn't the complete list of tweaks. There are way more you can play around with 😂😂😂

          <cpu mode='custom' match='exact' check='full'>
            <model fallback='forbid'>EPYC-IBPB</model>
            <vendor>AMD</vendor>
            <topology sockets='1' cores='4' threads='2'/>
            <feature policy='require' name='tsc-deadline'/>
            <feature policy='require' name='tsc_adjust'/>
            <feature policy='require' name='arch-capabilities'/>
            <feature policy='require' name='cmp_legacy'/>
            <feature policy='require' name='perfctr_core'/>
            <feature policy='require' name='virt-ssbd'/>
            <feature policy='require' name='skip-l1dfl-vmentry'/>
            <feature policy='require' name='invtsc'/>
          </cpu>

      At some point I stopped, because back then I had no time to fiddle around with it any further and the system was stable enough anyway. My main programs run fine and games performed great. Edit: Forgot to mention that CoreInfo reports "Hyperthreaded" for me.
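
      Since hugepages came up above: this is roughly what enabling them looks like in the domain XML (a minimal sketch, not from the post; the host also needs hugepages reserved first, e.g. via the kernel's hugepages= boot parameter or /proc/sys/vm/nr_hugepages):

          <memoryBacking>
            <hugepages/>
          </memoryBacking>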
  13. @jbartlett Did you by any chance set a strict RAM allocation to the node whose cores you're using? If not, you might have to test this again; without it, Unraid will use RAM from all nodes.

          <numatune>
            <memory mode='strict' nodeset='1'/>
          </numatune>

      The following shows you from which node the VMs are taking their RAM:

          numastat qemu
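
      You can also query the same policy on a running domain with virsh (a sketch; "Windows10" is a placeholder domain name):

          # show the current numatune mode and nodeset for a domain
          virsh numatune Windows10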