KRSogaard

Members
  • Posts: 18
  • Joined
  • Last visited
Converted

  • Gender: Undisclosed


  1. I have noticed my parity disk is disabled, but I don't know how to re-enable it. The disk is rather new, only 7 months old, so I don't think it has issues. ST4000VN008-2DR166_ZDHAX930-20230329-1550 parity (sdb) - DISK_DSBL.txt harleyquinn-diagnostics-20230329-1550.zip
  2. I am in the process of moving many of my websites to my local server. I have a big server with 62 cores and 250GB of RAM, so I have more than enough power to run everything and save on my cloud bill. My issue is that I am having trouble figuring out how to set this up with only a single public IP. I am running Unraid with a few Docker containers, most importantly Plex, NextCloud, and Chevereto, but I am running all my websites in a Kubernetes cluster. I can handle the websites with my ingress server in Kubernetes, but I need something in front of the Kubernetes ingress that can filter out my Plex, NextCloud, and Chevereto domains, send those requests to their Docker containers, and pass anything that doesn't match on to my Kubernetes ingress. Is there a plugin for Unraid that can do this, or a tool I can run in a VM? I have tried NginxProxyManager, but that does not really seem capable of it. (See the sketch below.)
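     A hedged sketch of the routing described in item 2, assuming plain nginx as the single entry point on the public IP; the domains, upstream IPs, and ports are placeholders, not values from the post. Host-header routing sends known app domains to their Docker containers, and everything else falls through to the Kubernetes ingress:

         # edge.conf -- hypothetical nginx edge proxy
         server {
             listen 80;
             server_name plex.example.com;
             location / {
                 proxy_pass http://192.168.1.10:32400;   # placeholder Plex container IP:port
                 proxy_set_header Host $host;
             }
         }
         # add a similar block for each Docker-hosted domain (NextCloud, Chevereto, ...)
         server {
             listen 80;
             server_name nextcloud.example.com;
             location / {
                 proxy_pass http://192.168.1.11:80;      # placeholder NextCloud container
                 proxy_set_header Host $host;
             }
         }
         server {
             listen 80 default_server;                   # nothing matched: Kubernetes ingress
             location / {
                 proxy_pass http://192.168.1.20:80;      # placeholder k8s ingress IP
                 proxy_set_header Host $host;
             }
         }

     HAProxy or Traefik could do the same job; the key piece is the default_server block that catches every domain not explicitly claimed by a Docker app.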
  3. I created a third VM and my speed issues were resolved; however, the install still crashed. I tried 3 times and all failed. I then changed the allocated RAM from 1G initial / 20G max to 10G initial / 10G max, and the install went through without issues. Not sure what the actual issue was. (See the XML comparison below.)
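     In libvirt terms this matches the XML in the next item, where <memory> is 20G but <currentMemory> is only 1G, so the guest boots with most of its RAM ballooned away. A sketch of the change described above, assuming 10G for both values:

         <!-- before: 20G ceiling, but only 1G actually present at boot -->
         <memory unit='KiB'>20971520</memory>
         <currentMemory unit='KiB'>1048576</currentMemory>

         <!-- after: initial and maximum both 10G, nothing to balloon during install -->
         <memory unit='KiB'>10485760</memory>
         <currentMemory unit='KiB'>10485760</currentMemory>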
  4. I am new to Unraid and am currently trying to set up my first Ubuntu VM. However, this has taken me over 4 hours so far, as the VM keeps freezing on me or is just generally slow. I am currently trying to install Ubuntu Server 21.10, and I got stuck 4 times on the profile setup page, as it would freeze and become unresponsive to my keyboard input (the cursor would blink just fine); I have gotten through it 2 times. The first time after that, it failed at configuring partition: partition-0 after about 30 min, and on my current try I have gotten to configuring partition: partition-1 after about 45 min. I have assigned 5 cores and a minimum of 1G of RAM with a max limit of 20G. My VM XML:
     <?xml version='1.0' encoding='UTF-8'?> <domain type='kvm' id='4'> <name>Rancher</name> <uuid>0f8bd116-d953-eab2-f801-59e26059cf16</uuid> <metadata> <vmtemplate xmlns="unraid" name="Ubuntu" icon="ubuntu.png" os="ubuntu"/> </metadata> <memory unit='KiB'>20971520</memory> <currentMemory unit='KiB'>1048576</currentMemory> <memoryBacking> <nosharepages/> </memoryBacking> <vcpu placement='static'>10</vcpu> <cputune> <vcpupin vcpu='0' cpuset='0'/> <vcpupin vcpu='1' cpuset='32'/> <vcpupin vcpu='2' cpuset='11'/> <vcpupin vcpu='3' cpuset='43'/> <vcpupin vcpu='4' cpuset='13'/> <vcpupin vcpu='5' cpuset='45'/> <vcpupin vcpu='6' cpuset='15'/> <vcpupin vcpu='7' cpuset='47'/> <vcpupin vcpu='8' cpuset='17'/> <vcpupin vcpu='9' cpuset='49'/> </cputune> <resource> <partition>/machine</partition> </resource> <os> <type arch='x86_64' machine='pc-q35-5.1'>hvm</type> <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader> <nvram>/etc/libvirt/qemu/nvram/0f8bd116-d953-eab2-f801-59e26059cf16_VARS-pure-efi.fd</nvram> </os> <features> <acpi/> <apic/> </features> <cpu mode='host-passthrough' check='none' migratable='on'> <topology sockets='1' dies='1' cores='5' threads='2'/> <cache mode='passthrough'/> </cpu> <clock offset='utc'> <timer name='rtc' tickpolicy='catchup'/> <timer name='pit' tickpolicy='delay'/> <timer name='hpet' present='no'/> </clock> <on_poweroff>destroy</on_poweroff> <on_reboot>restart</on_reboot> <on_crash>restart</on_crash> <devices> <emulator>/usr/local/sbin/qemu</emulator> <disk type='file' device='disk'> <driver name='qemu' type='raw' cache='writeback'/> <source file='/mnt/user/domains/Rancher/vdisk1.img' index='2'/> <backingStore/> <target dev='hdc' bus='virtio'/> <boot order='1'/> <alias name='virtio-disk2'/> <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/> </disk> <disk type='file' device='cdrom'> <driver name='qemu' type='raw'/> <source file='/mnt/user/isos/ubuntu-21.10-live-server-amd64.iso' index='1'/> <backingStore/> <target dev='hda' bus='sata'/> <readonly/> <boot order='2'/> <alias name='sata0-0-0'/> <address type='drive' controller='0' bus='0' target='0' unit='0'/> </disk> <controller type='sata' index='0'> <alias name='ide'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/> </controller> <controller type='pci' index='0' model='pcie-root'> <alias name='pcie.0'/> </controller> <controller type='pci' index='1' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='1' port='0x10'/> <alias name='pci.1'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/> </controller> <controller type='pci' index='2' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='2' port='0x11'/> <alias name='pci.2'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/> </controller> <controller type='pci' index='3' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='3' port='0x12'/> <alias name='pci.3'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/> </controller> <controller type='pci' index='4' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='4' port='0x13'/> <alias name='pci.4'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/> </controller> <controller type='pci' index='5' model='pcie-root-port'> <model name='pcie-root-port'/> <target chassis='5' port='0x14'/> <alias name='pci.5'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/> </controller> <controller type='virtio-serial' index='0'> <alias name='virtio-serial0'/> <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/> </controller> <controller type='usb' index='0' model='ich9-ehci1'> <alias name='usb'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/> </controller> <controller type='usb' index='0' model='ich9-uhci1'> <alias name='usb'/> <master startport='0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/> </controller> <controller type='usb' index='0' model='ich9-uhci2'> <alias name='usb'/> <master startport='2'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/> </controller> <controller type='usb' index='0' model='ich9-uhci3'> <alias name='usb'/> <master startport='4'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/> </controller> <interface type='bridge'> <mac address='52:54:00:ed:16:b6'/> <source bridge='br0'/> <target dev='vnet0'/> <model type='virtio-net'/> <alias name='net0'/> <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/> </interface> <serial type='pty'> <source path='/dev/pts/0'/> <target type='isa-serial' port='0'> <model name='isa-serial'/> </target> <alias name='serial0'/> </serial> <console type='pty' tty='/dev/pts/0'> <source path='/dev/pts/0'/> <target type='serial' port='0'/> <alias name='serial0'/> </console> <channel type='unix'> <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/domain-4-Rancher/org.qemu.guest_agent.0'/> <target type='virtio' name='org.qemu.guest_agent.0' state='disconnected'/> <alias name='channel0'/> <address type='virtio-serial' controller='0' bus='0' port='1'/> </channel> <input type='tablet' bus='usb'> <alias name='input0'/> <address type='usb' bus='0' port='1'/> </input> <input type='mouse' bus='ps2'> <alias name='input1'/> </input> <input type='keyboard' bus='ps2'> <alias name='input2'/> </input> <graphics type='vnc' port='5900' autoport='yes' websocket='5700' listen='0.0.0.0' keymap='en-us'> <listen type='address' address='0.0.0.0'/> </graphics> <video> <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1' primary='yes'/> <alias name='video0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/> </video> <memballoon model='virtio'> <alias name='balloon0'/> <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/> </memballoon> </devices> <seclabel type='dynamic' model='dac' relabel='yes'> <label>+0:+100</label> <imagelabel>+0:+100</imagelabel> </seclabel> </domain>
     My server has 2x Intel® Xeon® CPU E5-2683 v4 @ 2.10GHz and 160GB of RAM, so it should be more than capable of running this VM. I have 4x 4TB IronWolf NAS drives as the storage.
     My VM logs: -nodefaults \ -chardev socket,id=charmonitor,fd=31,server,nowait \ -mon chardev=charmonitor,id=monitor,mode=control \ -rtc base=utc,driftfix=slew \ -global kvm-pit.lost_tick_policy=delay \ -no-hpet \ -no-shutdown \ -boot strict=on \ -device pcie-root-port,port=0x10,chassis=1,id=pci.1,bus=pcie.0,multifunction=on,addr=0x2 \ -device pcie-root-port,port=0x11,chassis=2,id=pci.2,bus=pcie.0,addr=0x2.0x1 \ -device pcie-root-port,port=0x12,chassis=3,id=pci.3,bus=pcie.0,addr=0x2.0x2 \ -device pcie-root-port,port=0x13,chassis=4,id=pci.4,bus=pcie.0,addr=0x2.0x3 \ -device pcie-root-port,port=0x14,chassis=5,id=pci.5,bus=pcie.0,addr=0x2.0x4 \ -device ich9-usb-ehci1,id=usb,bus=pcie.0,addr=0x7.0x7 \ -device ich9-usb-uhci1,masterbus=usb.0,firstport=0,bus=pcie.0,multifunction=on,addr=0x7 \ -device ich9-usb-uhci2,masterbus=usb.0,firstport=2,bus=pcie.0,addr=0x7.0x1 \ -device ich9-usb-uhci3,masterbus=usb.0,firstport=4,bus=pcie.0,addr=0x7.0x2 \ -device virtio-serial-pci,id=virtio-serial0,bus=pci.2,addr=0x0 \ -blockdev '{"driver":"file","filename":"/mnt/user/domains/Rancher/vdisk1.img","node-name":"libvirt-2-storage","cache":{"direct":false,"no-flush":false},"auto-read-only":true,"discard":"unmap"}' \ -blockdev '{"node-name":"libvirt-2-format","read-only":false,"cache":{"direct":false,"no-flush":false},"driver":"raw","file":"libvirt-2-storage"}' \ -device virtio-blk-pci,bus=pci.3,addr=0x0,drive=libvirt-2-format,id=virtio-disk2,bootindex=1,write-cache=on \ -blockdev '{"driver":"file","filename":"/mnt/user/isos/ubuntu-21.10-live-server-amd64.iso","node-name":"libvirt-1-storage","auto-read-only":true,"discard":"unmap"}' \ -blockdev '{"node-name":"libvirt-1-format","read-only":true,"driver":"raw","file":"libvirt-1-storage"}' \ -device ide-cd,bus=ide.0,drive=libvirt-1-format,id=sata0-0-0,bootindex=2 \ -netdev tap,fd=33,id=hostnet0 \ -device virtio-net,netdev=hostnet0,id=net0,mac=52:54:00:ed:16:b6,bus=pci.1,addr=0x0 \ -chardev pty,id=charserial0 \ -device isa-serial,chardev=charserial0,id=serial0 \ -chardev socket,id=charchannel0,fd=34,server,nowait \ -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0 \ -device usb-tablet,id=input0,bus=usb.0,port=1 \ -vnc 0.0.0.0:0,websocket=5700,password \ -k en-us \ -device qxl-vga,id=video0,ram_size=67108864,vram_size=67108864,vram64_size_mb=0,vgamem_mb=16,max_outputs=1,bus=pcie.0,addr=0x1 \ -device virtio-balloon-pci,id=balloon0,bus=pci.4,addr=0x0 \ -sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \ -msg timestamp=on 2022-01-25 21:32:40.972+0000: Domain id=4 is tainted: high-privileges 2022-01-25 21:32:40.972+0000: Domain id=4 is tainted: host-cpu char device redirected to /dev/pts/0 (label charserial0)
     All my Docker containers are working just fine and my total CPU load is less than 5%. Have I set something up wrong?
  5. I think I have found the answer, but please correct me if I am wrong. When using a custom IP, all ports are forwarded directly to the container; therefore the port mapping is ignored.
  6. I am new to Unraid, so I apologize if this is common knowledge, but a Google and forum search did not really provide me an answer. I have a Docker container, Sonarr, currently running with the network type "Custom: br0", which allows me to give my Sonarr application a dedicated IP. I am using the lscr.io/linuxserver/sonarr:latest image. Sonarr runs internally in the Docker container on port 8989, but as I have a dedicated IP for the container, I would like it to run on port 80 instead. However, when I use the network type "Custom: br0", it won't allow me to change the container port. I have also tried "Add another port" and added ports 80 and 443 that way, but they seem to be ignored when applying the changes. (See the CLI sketch below.)
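     A sketch reproducing the behavior described in items 5 and 6 from the docker CLI; the network name br0 matches Unraid's custom network, while the IP and container names are placeholders. On a bridge network, -p remaps ports via NAT on the host; on a macvlan-style custom network the container has its own IP, every port it listens on is reachable directly, and -p mappings are ignored:

         # bridge network: host port 80 is NATed to Sonarr's internal 8989
         docker run -d --name sonarr-bridge -p 80:8989 lscr.io/linuxserver/sonarr:latest

         # Unraid "Custom: br0" (macvlan): dedicated IP, no port remapping;
         # Sonarr answers on 8989 no matter what -p says
         docker run -d --name sonarr-br0 --network br0 --ip 192.168.1.50 \
             lscr.io/linuxserver/sonarr:latest

     To answer on port 80 with a dedicated IP, the port would have to be changed inside Sonarr itself, or a small proxy placed in front of it.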
  7. No. I will look into it when I get home from work. I have decided to upgrade to an RX 460.
  8. I have tried this, but it did not work as I do not have 2 PCI slots; the motherboard is ITX. I have just tried with a GeForce 970, and it works without any problems. Am I correct in understanding that passthrough works better with AMD graphics cards?
  9. There is no loss I can notice whilst playing, but I will be able to do some benchmark comparisons at the weekend. I will post the results here. I am having problems passing through my GT 610, so I am going to try this when I get home. But will it work using the internal graphics? My motherboard is ITX, so it only has 1 PCIe slot.
  10. It seems like the only solution is to move the graphics card into the second GPU slot; sadly, the MB only has one, as it is an ITX board.
  11. The system has the internal graphics from an i5, which I have set to be the primary in the BIOS; other than that, only the GT 610 is installed. I have not been able to find the solution you are talking about; could you link to it? I have been searching for hours now and found nothing that could help. I have updated the first post with more information.
  12. I am currently working on a new unRaid setup. Everything works fine, but I am having problems getting GPU passthrough to work. From what I have read, the GT 610 does not support the way unRaid passes the graphics card through; is this correct? Is there a way I can get passthrough to work with my GT 610? If not, is there a list of supported graphics cards? If I use VNC the VM works perfectly, but when I select GPU passthrough there is never any signal to the monitor. (A hedged vfio-pci sketch follows after the PCI listing below.) Logs from my VM start. VM setup: VM XML: <domain type='kvm'> <name>Windows 10</name> <uuid>29cf1c70-77d7-881a-8a9a-6f2531f3158e</uuid> <metadata> <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/> </metadata> <memory unit='KiB'>8388608</memory> <currentMemory unit='KiB'>8388608</currentMemory> <memoryBacking> <nosharepages/> <locked/> </memoryBacking> <vcpu placement='static'>2</vcpu> <cputune> <vcpupin vcpu='0' cpuset='2'/> <vcpupin vcpu='1' cpuset='3'/> </cputune> <os> <type arch='x86_64' machine='pc-i440fx-2.5'>hvm</type> <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader> <nvram>/etc/libvirt/qemu/nvram/29cf1c70-77d7-881a-8a9a-6f2531f3158e_VARS-pure-efi.fd</nvram> </os> <features> <acpi/> <apic/> </features> <cpu mode='host-passthrough'> <topology sockets='1' cores='2' threads='1'/> </cpu> <clock offset='localtime'> <timer name='rtc' tickpolicy='catchup'/> <timer name='pit' tickpolicy='delay'/> <timer name='hpet' present='no'/> </clock> <on_poweroff>destroy</on_poweroff> <on_reboot>restart</on_reboot> <on_crash>restart</on_crash> <devices> <emulator>/usr/local/sbin/qemu</emulator> <disk type='file' device='disk'> <driver name='qemu' type='raw' cache='writeback'/> <source file='/mnt/user/domains/Windows 10/vdisk1.img'/> <target dev='hdc' bus='virtio'/> <boot order='1'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/> </disk> <disk type='file' device='cdrom'> <driver name='qemu' type='raw'/> <source file='/mnt/user/isos/virtio-win-0.1.118-2.iso'/> <target dev='hdb' bus='ide'/> <readonly/> <address type='drive' controller='0' bus='0' target='0' unit='1'/> </disk> <controller type='usb' index='0' model='ich9-ehci1'> <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/> </controller> <controller type='usb' index='0' model='ich9-uhci1'> <master startport='0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/> </controller> <controller type='usb' index='0' model='ich9-uhci2'> <master startport='2'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/> </controller> <controller type='usb' index='0' model='ich9-uhci3'> <master startport='4'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/> </controller> <controller type='pci' index='0' model='pci-root'/> <controller type='ide' index='0'> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/> </controller> <controller type='virtio-serial' index='0'> <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/> </controller> <interface type='bridge'> <mac address='52:54:00:bc:d3:cd'/> <source bridge='br0'/> <model type='virtio'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/> </interface> <serial type='pty'> <target port='0'/> </serial> <console type='pty'> <target type='serial' port='0'/> </console> <channel type='unix'> <source mode='connect'/> <target type='virtio' name='org.qemu.guest_agent.0'/>
<address type='virtio-serial' controller='0' bus='0' port='1'/> </channel> <hostdev mode='subsystem' type='pci' managed='yes'> <driver name='vfio'/> <source> <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/> </source> <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/> </hostdev> <hostdev mode='subsystem' type='pci' managed='yes'> <driver name='vfio'/> <source> <address domain='0x0000' bus='0x01' slot='0x00' function='0x1'/> </source> <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/> </hostdev> <hostdev mode='subsystem' type='usb' managed='no'> <source> <vendor id='0x045e'/> <product id='0x0800'/> </source> </hostdev> <memballoon model='virtio'> <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/> </memballoon> </devices> </domain> PCI Devices: 00:00.0 Host bridge [0600]: Intel Corporation Skylake Host Bridge/DRAM Registers [8086:191f] (rev 07) 00:01.0 PCI bridge [0604]: Intel Corporation Skylake PCIe Controller (x16) [8086:1901] (rev 07) 00:02.0 VGA compatible controller [0300]: Intel Corporation HD Graphics 530 [8086:1912] (rev 06) 00:14.0 USB controller [0c03]: Intel Corporation Sunrise Point-H USB 3.0 xHCI Controller [8086:a12f] (rev 31) 00:14.2 Signal processing controller [1180]: Intel Corporation Sunrise Point-H Thermal subsystem [8086:a131] (rev 31) 00:16.0 Communication controller [0780]: Intel Corporation Sunrise Point-H CSME HECI #1 [8086:a13a] (rev 31) 00:17.0 SATA controller [0106]: Intel Corporation Sunrise Point-H SATA controller [AHCI mode] [8086:a102] (rev 31) 00:1c.0 PCI bridge [0604]: Intel Corporation Sunrise Point-H PCI Express Root Port #5 [8086:a114] (rev f1) 00:1d.0 PCI bridge [0604]: Intel Corporation Sunrise Point-H PCI Express Root Port #9 [8086:a118] (rev f1) 00:1f.0 ISA bridge [0601]: Intel Corporation Sunrise Point-H LPC Controller [8086:a148] (rev 31) 00:1f.2 Memory controller [0580]: Intel Corporation Sunrise Point-H PMC [8086:a121] (rev 31) 00:1f.3 Audio device [0403]: Intel Corporation Sunrise Point-H HD Audio [8086:a170] (rev 31) 00:1f.4 SMBus [0c05]: Intel Corporation Sunrise Point-H SMBus [8086:a123] (rev 31) 00:1f.6 Ethernet controller [0200]: Intel Corporation Ethernet Connection (2) I219-V [8086:15b8] (rev 31) 01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GF119 [GeForce GT 610] [10de:104a] (rev a1) 01:00.1 Audio device [0403]: NVIDIA Corporation GF119 HDMI Audio Controller [10de:0e08] (rev a1) 02:00.0 Ethernet controller [0200]: Intel Corporation I211 Gigabit Network Connection [8086:1539] (rev 03)
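     Not from the thread itself, but one commonly suggested step for this kind of passthrough failure on recent unRaid builds is binding the card to vfio-pci at boot so the host never initializes it. The device IDs below are taken from the lspci output above ([10de:104a] for the GT 610 and [10de:0e08] for its HDMI audio); the exact append flags are an assumption, so treat this as a sketch only:

         # /boot/syslinux/syslinux.cfg -- keep the host off the GT 610 and its audio function
         label unRAID OS
           menu default
           kernel /bzimage
           append vfio-pci.ids=10de:104a,10de:0e08 initrd=/bzroot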
  13. Quote: "Docker containers will utilize all the cores that are available to unRaid as a whole (i.e. every core, unless you specifically disallow unRaid from using them), so the answer is yes. More advanced Docker setup: if transcoding via Plex is very important to you, then I would set NZBGet NOT to use all of the available cores, so that while it is unpacking and par-checking it will not interfere with Plex's rather high requirements for transcoding. http://lime-technology.com/forum/index.php?topic=40937.msg492111#msg492111 and http://lime-technology.com/forum/index.php?topic=40937.msg492112#msg492112"
      Thanks for the tip, btw love your avatar. Just to make sure I understood: if I select all 4 cores for the VM, does this mean that I disallow unRaid from using them, or is this another setting?
      Quote: "I'm not the VM guy around here, but by default unRaid / Docker has access to all of the cores, and the cores that you select for each VM are shared with unRaid / Docker. To stop unRaid itself from having access to cores (i.e. to give the VMs exclusive access to certain cores), you isolate the cores you don't want unRaid to use at boot time. See Here"
      OK, thanks for the quick answers! (A sketch of that boot-time isolation follows below.)
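     The boot-time isolation mentioned above is done by editing unRaid's syslinux append line; a minimal sketch, assuming cores 2 and 3 are the ones to reserve (the core numbers are placeholders). unRaid and Docker then stay off those cores, and a VM's <vcpupin> entries can use them exclusively:

         # /boot/syslinux/syslinux.cfg -- reserve cores 2 and 3 for VMs
         label unRAID OS
           menu default
           kernel /bzimage
           append isolcpus=2,3 initrd=/bzroot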