ratosaude

Posts posted by ratosaude

  1. 8 minutes ago, ich777 said:

    How did you install the mods in the container, with SteamCMD, or did you copy the mods over from your Windows/Mac machine?

     

    EDIT: The container downloads the file servertest.ini; you have to edit this file (within your SERVERDIRECTORY/Zomboid/Server).

    This goes back to my initial post: after I edit this .ini and run the server, it creates a ghost directory with new default files, runs from those, and ignores every other file I edited.

  2. 32 minutes ago, ich777 said:

    Can you give me a link to a tutorial on how to install mods on Zomboid? Which mods do you want to install, so I can try it?

    I can't really give a link, since it's bits of information here and there, but it's quite simple. There are two lines inside the server .ini that you need to fill in: the mod IDs and the workshop items. Usually both are provided on each mod's Steam Workshop page; if they aren't, the workshop item ID is in the address bar of the mod's page, and for the mod ID you need to download the mod through the Steam Workshop, then go to your Steam installation folder > steamapps > workshop > content > 108600 (PZ's Steam ID) > mods and find the mod.info file, which contains the mod ID. Inside the .ini you separate the entries with a ";" for each mod you want. There are 4 mods I want installed: Hydrocraft, Zombie Loot Extended, ORGM and Show Damage.

    idpz.jpeg
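    For illustration, the two lines end up looking something like this in the server .ini (a rough sketch with placeholder IDs; the real workshop IDs come from each mod's Workshop page URL and the mod IDs from its mod.info file):

    Mods=Hydrocraft;SomeOtherMod
    WorkshopItems=111111111;222222222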

  3. 5 hours ago, ich777 said:

    I will look into that when I get home.

    So does it overwrite your file, or do you have two server.ini files in the directory?

    Can you try to edit the server.ini file that it creates, or paste the content from your server.ini into that one?

    It creates a new directory (that first one) with new server files and runs from that, ignoring the Zomboid folder and the files in there. I can edit the file; however, it doesn't load the mods even after restarting the server.

    PZServer.png

  4. Edit the VM XML and add the entire controller, something like this:

     

    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x00' slot='0x1d' function='0x0'/>
      </source>
    </hostdev>
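    If you need to find the domain/bus/slot/function values for your own controller, lspci on the host lists them, for example (assuming it is a USB controller you are after; adjust the grep otherwise):

    lspci | grep -i usb

    The bus:slot.function prefix that lspci prints (e.g. 00:1d.0) maps onto the bus='0x00' slot='0x1d' function='0x0' attributes in the <address> element above.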

     

  5. I guess you have a 12 core CPU.

    When using emulatorpin it is best not to pin cores/threads that are being used by the VM itself.

    The point is to let the VM use its cores entirely for the guest OS (in your case, Windows), without any overhead from handling the emulation calls for the VM.

    I think running 5 gaming VMs off 12 cores will be 'very challenging', as I don't think there are enough resources for that, but I may be wrong.

    If you are going to use emulatorpin, then pin it to the cores you have marked for Unraid:

    <emulatorpin cpuset='0,1,12,13'/>

     

    When running the 5 VMs, stop all Docker containers to keep the host as light as possible.

     

    If you are using isolcpus in the append line, then it's right to isolate only the cores that are being used for the VMs.

    Don't isolate the cores used for the emulation calls, so the way you had it (below) should be fine.

    append isolcpus=2-11,14-23 initrd=/bzroot,/bzroot-gui

    But you may be fine not isolating the CPUs. Try both and see what works best for you.

    I keep multiple entries in my syslinux config file so I can easily switch on reboots. Just add another label and it will show as selectable on boot.

    For example, mine looks like this:

    label unRAID OS
      menu default
      kernel /bzimage
      append vfio-pci.ids=8086:15a1 initrd=/bzroot
    label unRAID isolated 12 cores
      kernel /bzimage
      append isolcpus=2-13,16-27 initrd=/bzroot
    label unRAID OS GUI Mode
      kernel /bzimage
      append initrd=/bzroot,/bzroot-gui

     

    Also, after you have set the VMs up you don't need to emulate a DVD/CD drive, so scrap these lines:

     

    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/isos/Windows10.iso'/>
      <target dev='hda' bus='ide'/>
      <readonly/>
      <boot order='2'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/isos/virtio-win-0.1.118-2.iso'/>
      <target dev='hdb' bus='ide'/>
      <readonly/>
      <address type='drive' controller='0' bus='0' target='0' unit='1'/>
    </disk>

     

    You would be best to pass the disk through using its device ID rather than <source dev='/dev/sde'/>.

     

    One of mine looks like this

     

    <disk type='block' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source dev='/dev/disk/by-id/ata-ST32000542AS_5XW1HXX5-part2'/>
      <target dev='hdd' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </disk>

     

    The ID of the disk is ata-ST32000542AS_5XW1HXX5, and the -part2 on the end passes through the second partition on the disk. Without the -part2 on the end, the whole disk (all partitions) would be passed through.
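    If you need to find the by-id name for one of your disks, listing the by-id directory on the host shows the links for each device and its partitions (the exact names depend on your drives):

    ls -l /dev/disk/by-id/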

     

     

    Once more, thank you for the answers. I removed the vdisks from the VMs.

    I used the device by-id and it worked perfectly, but then a new doubt arose: if I pass -part1 and -part2 to two different VMs, will they boot Windows normally, or can these disks only be secondary?

    My PC as it is right now:

    32 GB RAM

    12c/24t (2x E5-2630 v2)

    HDD data disk, 3 TB

    HDD cache disk, 40 GB

    No parity, due to the 6-disk license limitation.

     

    VMs

    P1 - 3c, 6 GB RAM, 120 GB SSD full passthrough + AMD Radeon RX 480

    P2 - 3c, 6 GB RAM, 120 GB SSD full passthrough + GeForce GTX 550 Ti

    P3 - 2c, 6 GB RAM, 500 GB HDD full passthrough + AMD Radeon HD 6750

    P4 - 2c, 4 GB RAM, 500 GB HDD full passthrough + AMD Radeon HD 5450

    I use various Radeons because when I shut down the VM that has the Nvidia card, it won't start again unless I do a full reboot of the system.

  6. Hi,

     

    When I try to do this procedure on my GTX 550 Ti, I get the error below:

     

    root@Frank:~# lspci -v | grep VGA
    03:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Park [Mobility Radeon HD 5430] (prog-if 00 [VGA controller])
    04:00.0 VGA compatible controller: NVIDIA Corporation GF116 [GeForce GTX 550 Ti] (rev a1) (prog-if 00 [VGA controller])
    0d:00.0 VGA compatible controller: ASPEED Technology, Inc. ASPEED Graphics Family (rev 21) (prog-if 00 [VGA controller])
            Flags: VGA palette snoop, medium devsel, IRQ 19, NUMA node 0
    83:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Ellesmere [Radeon RX 470/480] (rev c7) (prog-if 00 [VGA controller])
    84:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Juniper PRO [Radeon HD 6750] (prog-if 00 [VGA controller])
    root@Frank:~# echo "0000:04:00.0"  /sys/bus/pci/drivers/vfio-pci/unbind
    0000:04:00.0 /sys/bus/pci/drivers/vfio-pci/unbind
    root@Frank:~# cd /sys/bus/pci/devices/0000:04:00.0/
    root@Frank:/sys/bus/pci/devices/0000:04:00.0# echo 1 > rom
    root@Frank:/sys/bus/pci/devices/0000:04:00.0# cat rom > /mnt/user/isos/gt550ti.dump
    cat: rom: Input/output error
    root@Frank:/sys/bus/pci/devices/0000:04:00.0#

  7. First of all, let me thank you for the answers. Now let me see if I got this right. I have 5 gaming VMs, with 2 cores and 4 threads assigned to each of them, as listed below:

     

    cpu 0 / 12 Unraid

    cpu 1 / 13 Unraid

    cpu 2 / 14 vm1 emu*

    cpu 3 / 15

    cpu 4 / 16 vm2 emu*

    cpu 5 / 17

    cpu 6 / 18 vm3 emu*

    cpu 7 / 19

    cpu 8 / 20 vm4 emu*

    cpu 9 / 21

    cpu 10 / 22 vm5 emu*

    cpu 11 / 23

     

    I followed the same approach as in your video, with optimizing performance in mind. As such, should I set the emulator pin to 2,4,6,8,10 in every VM?

     

     <emulatorpin cpuset='2,4,6,8,10'/> 

     

    Do you have any other tips for the disks, or some detail that I might have overlooked? Below is an example of one of the VMs:

     

    <domain type='kvm'>
      <name>Player 2</name>
      <uuid>c513c1e9-31eb-5467-881b-c140595ad758</uuid>
      <metadata>
        <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/>
      </metadata>
      <memory unit='KiB'>4194304</memory>
      <currentMemory unit='KiB'>4194304</currentMemory>
      <memoryBacking>
        <nosharepages/>
      </memoryBacking>
      <vcpu placement='static'>4</vcpu>
      <cputune>
        <vcpupin vcpu='0' cpuset='8'/>
        <vcpupin vcpu='1' cpuset='9'/>
        <vcpupin vcpu='2' cpuset='20'/>
        <vcpupin vcpu='3' cpuset='21'/>
        <emulatorpin cpuset='2,4,6,8,10'/>
      </cputune>
      <os>
        <type arch='x86_64' machine='pc-i440fx-2.5'>hvm</type>
      </os>
      <features>
        <acpi/>
        <apic/>
        <hyperv>
          <relaxed state='on'/>
          <vapic state='on'/>
          <spinlocks state='on' retries='8191'/>
          <vendor_id state='on' value='none'/>
        </hyperv>
      </features>
      <cpu mode='host-passthrough'>
        <topology sockets='1' cores='2' threads='2'/>
      </cpu>
      <clock offset='localtime'>
        <timer name='hypervclock' present='yes'/>
        <timer name='hpet' present='no'/>
      </clock>
      <on_poweroff>destroy</on_poweroff>
      <on_reboot>restart</on_reboot>
      <on_crash>restart</on_crash>
      <devices>
        <emulator>/usr/local/sbin/qemu</emulator>
        <disk type='block' device='disk'>
          <driver name='qemu' type='raw' cache='writeback'/>
          <source dev='/dev/sde'/>
          <target dev='hdc' bus='virtio'/>
          <boot order='1'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
        </disk>
        <disk type='file' device='cdrom'>
          <driver name='qemu' type='raw'/>
          <source file='/mnt/user/isos/Windows10.iso'/>
          <target dev='hda' bus='ide'/>
          <readonly/>
          <boot order='2'/>
          <address type='drive' controller='0' bus='0' target='0' unit='0'/>
        </disk>
        <disk type='file' device='cdrom'>
          <driver name='qemu' type='raw'/>
          <source file='/mnt/user/isos/virtio-win-0.1.118-2.iso'/>
          <target dev='hdb' bus='ide'/>
          <readonly/>
          <address type='drive' controller='0' bus='0' target='0' unit='1'/>
        </disk>
        <controller type='usb' index='0' model='nec-xhci'>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
        </controller>
        <controller type='pci' index='0' model='pci-root'/>
        <controller type='ide' index='0'>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
        </controller>
        <controller type='virtio-serial' index='0'>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
        </controller>
        <interface type='bridge'>
          <mac address='52:54:00:8b:3a:17'/>
          <source bridge='br0'/>
          <model type='virtio'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
        </interface>
        <serial type='pty'>
          <target port='0'/>
        </serial>
        <console type='pty'>
          <target type='serial' port='0'/>
        </console>
        <channel type='unix'>
          <target type='virtio' name='org.qemu.guest_agent.0'/>
          <address type='virtio-serial' controller='0' bus='0' port='1'/>
        </channel>
        <input type='mouse' bus='ps2'/>
        <input type='keyboard' bus='ps2'/>
        <hostdev mode='subsystem' type='pci' managed='yes' xvga='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x84' slot='0x00' function='0x0'/>
          </source>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
        </hostdev>
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x84' slot='0x00' function='0x1'/>
          </source>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
        </hostdev>
        <hostdev mode='subsystem' type='usb' managed='no'>
          <source>
            <vendor id='0x10c4'/>
            <product id='0x8105'/>
          </source>
          <address type='usb' bus='0' port='1'/>
        </hostdev>
        <hostdev mode='subsystem' type='usb' managed='no'>
          <source>
            <vendor id='0x1532'/>
            <product id='0x001b'/>
          </source>
          <address type='usb' bus='0' port='2'/>
        </hostdev>
        <hostdev mode='subsystem' type='usb' managed='no'>
          <source>
            <vendor id='0x1c4f'/>
            <product id='0x0002'/>
          </source>
          <address type='usb' bus='0' port='3'/>
        </hostdev>
        <memballoon model='virtio'>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
        </memballoon>
      </devices>
    </domain>
    

     

  8. I'm trying to isolate the CPUs for individual VMs; however, when I edit the XML, the modification doesn't save. I changed the append line to isolate them from the OS:

     

    append isolcpus=2-11,14-23 initrd=/bzroot,/bzroot-gui

     

    I am editing the XML from

     

    <cputune>
      <vcpupin vcpu='0' cpuset='8'/>
      <vcpupin vcpu='1' cpuset='9'/>
      <vcpupin vcpu='2' cpuset='20'/>
      <vcpupin vcpu='3' cpuset='21'/>
    </cputune>

     

    to

     

    <cputune>
      <vcpupin vcpu='0' cpuset='8'/>
      <vcpupin vcpu='1' cpuset='9'/>
      <vcpupin vcpu='2' cpuset='20'/>
      <vcpupin vcpu='3' cpuset='21'/>
      <emnulatorpin cpuset='8,20'/>
    </cputune>

     

    But when I click View XML with the VM started, the result is the following:

     

    <cputune>
      <vcpupin vcpu='0' cpuset='8'/>
      <vcpupin vcpu='1' cpuset='9'/>
      <vcpupin vcpu='2' cpuset='20'/>
      <vcpupin vcpu='3' cpuset='21'/>
    </cputune>
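    As a sanity check, the active definition can also be dumped from the console with virsh (a sketch; "Player 2" is just a placeholder for whatever the VM is named):

    virsh dumpxml "Player 2" | grep -A 6 '<cputune>'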

     

    Unraid version 6.3.0-rc5

     

     

  9. As a side note, IF you did want to run your cache disks in BTRFS RAID-0, then in summary you would:

     

    - Create a DEFAULT RAID-1 cache pool.

    - In Cache Settings (by clicking on Cache 1 from the Main tab), scroll down to "Balance Status" and where you see ...

     

    -dconvert=raid1 -mconvert=raid1

     

    replace it with

     

    -dconvert=raid0 -mconvert=raid1

     

    and hit "Balance" Button!

     

    Note: IF you do this, I would do it when there is NOTHING on the cache disk(s) at all. Then if it goes wrong, you have not been impacted at all.

     

    Reference reading: http://lime-technology.com/forum/index.php?topic=47408.msg454679#msg454679
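    For reference, the -dconvert/-mconvert options shown above are btrfs balance filters; run manually from the console, the conversion would look roughly like this (a sketch, assuming the cache pool is mounted at /mnt/cache):

    btrfs balance start -dconvert=raid0 -mconvert=raid1 /mnt/cache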

     

     

     

    OK, thank you.

     

    I also thought about passing through the complete disk controller; would that be a good option?

     

    root@Frank:~# lspci | grep SATA

    00:1f.2 SATA controller: Intel Corporation C600/X79 series chipset 6-Port SATA AHCI Controller (rev 05)

    05:00.0 Serial Attached SCSI controller: Intel Corporation C602 chipset 4-Port SATA Storage Control Unit (rev 05)

    0a:00.0 SATA controller: Marvell Technology Group Ltd. 88SE9230 PCIe SATA 6Gb/s Controller (rev 10)

  10. I have 5 disks, 3 of which are HDDs and 2 are SSDs, and I use 3 VMs with video passthrough.

    I use 2 of them for gaming and 1 for common access.

    I'd like to know the best way to optimize the use of the disks.

    My thought was: HDD (3 TB, parity), HDD (3 TB, disk 1), HDD (500 GB, disk 2), 2 SSDs (120 GB, cache 1/2).

    Or to use the disks exclusively for the VMs: HDD (3 TB, parity), HDD (3 TB, disk 1), HDD (500 GB, cache), 2 SSDs (120 GB, for VM1/VM2), and VM3 on the cache or disk 1.

  11. Hello,

    I have the Z9PE-D8 WS motherboard. It has 7 PCIe slots, with slots 1 to 4 belonging to CPU1 and slots 5 to 7 to CPU2. When I try to use slots 5 to 7, it tries to start from CPU1 and returns a not-permitted error. I'd like to know how to set QEMU to initialize from CPU2.

     

    Configs

    P1 - SLOT 1 - GTX 550TI - ok

    P2 - SLOT 5 - Radeon 6750 - fail

     

    P.S. If I try to use the Radeon in slot 3, it works correctly.

    P.S. 2: My BIOS is up to date.

     

    Error message:

    internal error: early end of file from monitor: possible problem:
    2016-03-08T13:22:32.694206Z qemu-system-x86_64: -device vfio-pci,host=83:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on: vfio: error opening /dev/vfio/47: Operation not permitted
    2016-03-08T13:22:32.694236Z qemu-system-x86_64: -device vfio-pci,host=83:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on: vfio: failed to get group 47
    2016-03-08T13:22:32.694249Z qemu-system-x86_64: -device vfio-pci,host=83:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on: Device initialization failed
    2016-03-08T13:22:32.694260Z qemu-system-x86_64: -device vfio-pci,host=83:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on: Device 'vfio-pci' could not be initialized

     

    XML P2:

    <qemu:commandline>
      <qemu:arg value='-device'/>
      <qemu:arg value='ioh3420,bus=pci.0,addr=1c.0,multifunction=on,port=2,chassis=1,id=root.1'/>
      <qemu:arg value='-device'/>
      <qemu:arg value='vfio-pci,host=83:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on'/>
      <qemu:arg value='-device'/>
      <qemu:arg value='vfio-pci,host=83:00.1,bus=root.1,addr=00.1'/>
    </qemu:commandline>

     

    root@Frank:~# lspci -v | grep VGA
    03:00.0 VGA compatible controller: NVIDIA Corporation GF116 [GeForce GTX 550 Ti] (rev a1) (prog-if 00 [VGA controller])
    0d:00.0 VGA compatible controller: ASPEED Technology, Inc. ASPEED Graphics Family (rev 21) (prog-if 00 [VGA controller])
    83:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Juniper PRO [Radeon HD 6750] (prog-if 00 [VGA controller])