ghstridr

Members
  • Posts: 11
  • Joined
  • Last visited

Everything posted by ghstridr

  1. You are very welcome. Glad to be of help.
  2. I have some experience with building bespoke clusters using various technologies. Forgive me here, but Proxmox does have this ability if you wanted to start experimenting with a GUI or something. Again, sorry for mentioning a competing product. Back to Unraid.

If you have two Unraid servers and separate shared storage, you could move all your data/docker stuff there, but you have to have software to manage the system. One piece of that is quorum or fencing. Basically this keeps the slave machine(s) from accessing data that currently belongs to the master; it keeps the slave(s) from locking or writing to files while they are held open by the master, thus avoiding corruption and data loss. The shared storage can be provided over the network via iSCSI, NFS or even SMB, as well as a number of more advanced storage protocols; those are the ones commonly available in open source. You can also use older cabled SCSI or SAS methods as long as you have interfaces and controllers that support fencing to keep the hosts separated.

Now the networking bit. You would need a way of introducing a VIP (virtual IP) to the networking stack so that accessing it always takes you to the current 'master' in the cluster. The idea is that when one member of the cluster becomes unavailable, code on the other machine takes over the VIP and all the services that were being managed by the now-defunct old master. There are a couple of ways of accomplishing that. One such piece of software is Pacemaker (there's a rough sketch of what that looks like at the end of this post). It can handle switching a VIP, stopping/starting services, mounting/connecting storage, etc. It determines which node is the master (assuming only two members in the cluster) by using a heartbeat signal, usually sent over a private ethernet link between them. I've just used a direct cable between them before and let the NICs sort out auto-MDI-X for themselves. The heartbeat lets each machine keep track of the health of the other; it can carry status info on several aspects of the opposite machine, all of which can go into deciding what actions to take.

Say you have multiple dockers on a bridge interface so they all have their own IP. You could have certain dockers 'moved' over to the other cluster member when, say, overall CPU usage gets too high on one of them. I would shut the docker down so that the config info and data shares are updated and flushed to the shared storage; then you could import it (or already have it imported) on the other machine and start it up, and that docker's IP should now be available there. I'm probably missing or glossing over some of the finer details, but proper failover clustering is a complicated subject.

So it would be possible with Unraid, but it would be A LOT of hacking, and you really have to know and understand your networking and shared storage concepts. That said, I really love Unraid for its standalone abilities and I feel that it does a lot of things very well. Proxmox has clustering designed in pretty well, so that would be the better tool for the job if clustering is your aim. Use the correct tool for the job instead of making the one you have fit an odd-shaped hole.
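To make the VIP/failover part concrete, here is a minimal sketch of what that can look like with Pacemaker's pcs tool on a generic two-node Linux pair. Unraid doesn't ship Pacemaker, so this is purely illustrative; the IP address, netmask and resource names are placeholders I made up.

    # Minimal sketch, assuming a two-node corosync/pacemaker cluster is already running
    # and pcs is installed; 192.168.1.100/24 and the resource names are placeholders.
    pcs resource create cluster_vip ocf:heartbeat:IPaddr2 ip=192.168.1.100 cidr_netmask=24 op monitor interval=30s

    # Manage the docker service as a cluster resource so it follows the VIP on failover.
    pcs resource create docker_svc systemd:docker op monitor interval=60s

    # Keep the service on whichever node holds the VIP, and bring the VIP up first.
    pcs constraint colocation add docker_svc with cluster_vip INFINITY
    pcs constraint order cluster_vip then docker_svc

When the heartbeat declares one node dead, Pacemaker moves cluster_vip and docker_svc to the survivor, which is exactly the takeover behaviour described above.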
  3. Need a little help. I updated to the linuxserver.io Unraid 6.8.3 build. The main log and console complain that the nvidia driver couldn't connect to my 1660 Ti, and it thinks another driver has grabbed it. I do have vfio grabbing the card for when I have a Win10 VM booted. Is there any way to get the card shared between them, or to get vfio to talk to the nvidia driver in a proxy sort of way?
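For what it's worth, a quick way to see which driver currently owns the card (just a generic sysfs check, nothing Unraid-specific; the PCI address is assumed to be the 1660 Ti's and may differ on another system):

    # Which kernel driver is bound to the GPU right now?
    readlink /sys/bus/pci/devices/0000:01:00.0/driver
    # The symlink usually ends in .../nvidia or .../vfio-pci; no output means nothing is bound.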
  4. @aptalca I saw that boinc 7.4.22 is released.
  5. So, shouldn't the nerdpack plugin check that the versions required by other plugins, like vmbackup, are actually satisfied? e.g. vmbackup requires x.x.x and that (or newer) is installed, so it's marked up to date. If there is a conflict between plugins, it notes that and doesn't recommend an upgrade in that situation, since that would be up to the plugin authors to sort out.
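Roughly the kind of check I mean, as a sketch only; the version numbers are made-up placeholders and this is not how nerdpack actually does it:

    # Compare an installed version against the minimum a dependent plugin declares.
    required="1.2.3"    # hypothetical minimum from a plugin such as vmbackup
    installed="1.2.5"   # hypothetical version found on the system
    if [ "$(printf '%s\n%s\n' "$required" "$installed" | sort -V | head -n1)" = "$required" ]; then
        echo "requirement satisfied - don't recommend an upgrade"
    else
        echo "installed version is too old - offer the upgrade"
    fi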
  6. I've been having an issue, but it's not a show stopper. 'pigz' continually shows up as needing to be updated, but it won't update when I click Apply. I tried removing the package and reinstalling it, but I don't think that actually happened. This is happening with 2019.12.31; it wasn't happening before this version. I'm not sure how to troubleshoot this.
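For anyone else hitting this, the first thing worth checking is what Slackware's package database says is actually installed (a generic check, nothing nerdpack-specific):

    # List the installed pigz package entry; the file name includes the version and build.
    ls /var/log/packages/ | grep -i pigz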
  7. So use the rdp/vnc connection to see what is going on after it starts to boot?
  8. VM XML:

<?xml version='1.0' encoding='UTF-8'?>
<domain type='kvm'>
  <name>Windows 10</name>
  <uuid>52d204ea-17c0-a490-8a8d-54ddd1fa14dd</uuid>
  <metadata>
    <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/>
  </metadata>
  <memory unit='KiB'>16777216</memory>
  <currentMemory unit='KiB'>16777216</currentMemory>
  <memoryBacking>
    <nosharepages/>
  </memoryBacking>
  <vcpu placement='static'>8</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='2'/>
    <vcpupin vcpu='1' cpuset='10'/>
    <vcpupin vcpu='2' cpuset='3'/>
    <vcpupin vcpu='3' cpuset='11'/>
    <vcpupin vcpu='4' cpuset='4'/>
    <vcpupin vcpu='5' cpuset='12'/>
    <vcpupin vcpu='6' cpuset='5'/>
    <vcpupin vcpu='7' cpuset='13'/>
  </cputune>
  <os>
    <type arch='x86_64' machine='pc-q35-4.2'>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
    <nvram>/etc/libvirt/qemu/nvram/52d204ea-17c0-a490-8a8d-54ddd1fa14dd_VARS-pure-efi.fd</nvram>
  </os>
  <features>
    <acpi/>
    <apic/>
    <hyperv>
      <relaxed state='on'/>
      <vapic state='on'/>
      <spinlocks state='on' retries='8191'/>
      <vendor_id state='on' value='none'/>
    </hyperv>
  </features>
  <cpu mode='host-passthrough' check='none'>
    <topology sockets='1' cores='4' threads='2'/>
  </cpu>
  <clock offset='localtime'>
    <timer name='hypervclock' present='yes'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/local/sbin/qemu</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/mnt/user/domains/Windows 10/vdisk1.img'/>
      <target dev='hdc' bus='virtio'/>
      <boot order='1'/>
      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/isos/BuddhaISO/Windows-10.iso'/>
      <target dev='hda' bus='sata'/>
      <readonly/>
      <boot order='2'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/isos/BuddhaISO/virtio-win-0.1.173-2.iso'/>
      <target dev='hdb' bus='sata'/>
      <readonly/>
      <address type='drive' controller='0' bus='0' target='0' unit='1'/>
    </disk>
    <controller type='pci' index='0' model='pcie-root'/>
    <controller type='pci' index='1' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='1' port='0x8'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0' multifunction='on'/>
    </controller>
    <controller type='pci' index='2' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='2' port='0x9'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <controller type='pci' index='3' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='3' port='0x12'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
    </controller>
    <controller type='pci' index='4' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='4' port='0x13'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
    </controller>
    <controller type='pci' index='5' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='5' port='0x14'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
    </controller>
    <controller type='pci' index='6' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='6' port='0xa'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
    </controller>
    <controller type='pci' index='7' model='pcie-root-port'>
      <model name='pcie-root-port'/>
      <target chassis='7' port='0xb'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x3'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
    </controller>
    <controller type='sata' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
    </controller>
    <controller type='usb' index='0' model='ich9-ehci1'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci1'>
      <master startport='0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci2'>
      <master startport='2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci3'>
      <master startport='4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:99:35:9e'/>
      <source bridge='br0'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
    </interface>
    <serial type='pty'>
      <target type='isa-serial' port='0'>
        <model name='isa-serial'/>
      </target>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <channel type='unix'>
      <target type='virtio' name='org.qemu.guest_agent.0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <input type='tablet' bus='usb'>
      <address type='usb' bus='0' port='1'/>
    </input>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
      </source>
      <rom file='/mnt/user/domains/VidRoms/EVGA-GTX1660Ti-Black.rom'/>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0' multifunction='on'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x01' slot='0x00' function='0x1'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x1'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x01' slot='0x00' function='0x2'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x2'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x01' slot='0x00' function='0x3'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x3'/>
    </hostdev>
    <memballoon model='none'/>
  </devices>
</domain>

VM Log from last start:

-no-hpet \
-no-shutdown \
-boot strict=on \
-device pcie-root-port,port=0x8,chassis=1,id=pci.1,bus=pcie.0,multifunction=on,addr=0x1 \
-device pcie-root-port,port=0x9,chassis=2,id=pci.2,bus=pcie.0,addr=0x1.0x1 \
-device pcie-root-port,port=0x12,chassis=3,id=pci.3,bus=pcie.0,addr=0x2.0x2 \
-device pcie-root-port,port=0x13,chassis=4,id=pci.4,bus=pcie.0,addr=0x2.0x3 \
-device pcie-root-port,port=0x14,chassis=5,id=pci.5,bus=pcie.0,addr=0x2.0x4 \
-device pcie-root-port,port=0xa,chassis=6,id=pci.6,bus=pcie.0,addr=0x1.0x2 \
-device pcie-root-port,port=0xb,chassis=7,id=pci.7,bus=pcie.0,addr=0x1.0x3 \
-device ich9-usb-ehci1,id=usb,bus=pcie.0,addr=0x7.0x7 \
-device ich9-usb-uhci1,masterbus=usb.0,firstport=0,bus=pcie.0,multifunction=on,addr=0x7 \
-device ich9-usb-uhci2,masterbus=usb.0,firstport=2,bus=pcie.0,addr=0x7.0x1 \
-device ich9-usb-uhci3,masterbus=usb.0,firstport=4,bus=pcie.0,addr=0x7.0x2 \
-device virtio-serial-pci,id=virtio-serial0,bus=pci.2,addr=0x0 \
-blockdev '{"driver":"file","filename":"/mnt/user/domains/Windows 10/vdisk1.img","node-name":"libvirt-3-storage","cache":{"direct":false,"no-flush":false},"auto-read-only":true,"discard":"unmap"}' \
-blockdev '{"node-name":"libvirt-3-format","read-only":false,"cache":{"direct":false,"no-flush":false},"driver":"raw","file":"libvirt-3-storage"}' \
-device virtio-blk-pci,scsi=off,bus=pci.3,addr=0x0,drive=libvirt-3-format,id=virtio-disk2,bootindex=1,write-cache=on \
-blockdev '{"driver":"file","filename":"/mnt/user/isos/BuddhaISO/Windows-10.iso","node-name":"libvirt-2-storage","auto-read-only":true,"discard":"unmap"}' \
-blockdev '{"node-name":"libvirt-2-format","read-only":true,"driver":"raw","file":"libvirt-2-storage"}' \
-device ide-cd,bus=ide.0,drive=libvirt-2-format,id=sata0-0-0,bootindex=2 \
-blockdev '{"driver":"file","filename":"/mnt/user/isos/BuddhaISO/virtio-win-0.1.173-2.iso","node-name":"libvirt-1-storage","auto-read-only":true,"discard":"unmap"}' \
-blockdev '{"node-name":"libvirt-1-format","read-only":true,"driver":"raw","file":"libvirt-1-storage"}' \
-device ide-cd,bus=ide.1,drive=libvirt-1-format,id=sata0-0-1 \
-netdev tap,fd=35,id=hostnet0,vhost=on,vhostfd=36 \
-device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:99:35:9e,bus=pci.1,addr=0x0 \
-chardev pty,id=charserial0 \
-device isa-serial,chardev=charserial0,id=serial0 \
-chardev socket,id=charchannel0,fd=37,server,nowait \
-device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0 \
-device usb-tablet,id=input0,bus=usb.0,port=1 \
-device vfio-pci,host=0000:01:00.0,id=hostdev0,bus=pci.4,multifunction=on,addr=0x0,romfile=/mnt/user/domains/VidRoms/EVGA-GTX1660Ti-Black.rom \
-device vfio-pci,host=0000:01:00.1,id=hostdev1,bus=pci.4,addr=0x0.0x1 \
-device vfio-pci,host=0000:01:00.2,id=hostdev2,bus=pci.4,addr=0x0.0x2 \
-device vfio-pci,host=0000:01:00.3,id=hostdev3,bus=pci.4,addr=0x0.0x3 \
-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
-msg timestamp=on
2020-01-31 20:40:24.313+0000: Domain id=3 is tainted: high-privileges
2020-01-31 20:40:24.313+0000: Domain id=3 is tainted: host-cpu
char device redirected to /dev/pts/0 (label charserial0)

Device List:

IOMMU group 0: [8086:3e30] 00:00.0 Host bridge: Intel Corporation 8th Gen Core 8-core Desktop Processor Host Bridge/DRAM Registers [Coffee Lake S] (rev 0d)
IOMMU group 1: [8086:1901] 00:01.0 PCI bridge: Intel Corporation Xeon E3-1200 v5/E3-1500 v5/6th Gen Core Processor PCIe Controller (x16) (rev 0d)
IOMMU group 2: [8086:1905] 00:01.1 PCI bridge: Intel Corporation Xeon E3-1200 v5/E3-1500 v5/6th Gen Core Processor PCIe Controller (x8) (rev 0d)
IOMMU group 3: [8086:3e98] 00:02.0 VGA compatible controller: Intel Corporation UHD Graphics 630 (Desktop 9 Series) (rev 02)
IOMMU group 4: [8086:a379] 00:12.0 Signal processing controller: Intel Corporation Cannon Lake PCH Thermal Controller (rev 10)
IOMMU group 5: [8086:a36d] 00:14.0 USB controller: Intel Corporation Cannon Lake PCH USB 3.1 xHCI Host Controller (rev 10) [8086:a36f] 00:14.2 RAM memory: Intel Corporation Cannon Lake PCH Shared SRAM (rev 10)
IOMMU group 6: [8086:a360] 00:16.0 Communication controller: Intel Corporation Cannon Lake PCH HECI Controller (rev 10)
IOMMU group 7: [8086:a352] 00:17.0 SATA controller: Intel Corporation Cannon Lake PCH SATA AHCI Controller (rev 10)
IOMMU group 8: [8086:a340] 00:1b.0 PCI bridge: Intel Corporation Cannon Lake PCH PCI Express Root Port #17 (rev f0)
IOMMU group 9: [8086:a338] 00:1c.0 PCI bridge: Intel Corporation Cannon Lake PCH PCI Express Root Port #1 (rev f0)
IOMMU group 10: [8086:a339] 00:1c.1 PCI bridge: Intel Corporation Cannon Lake PCH PCI Express Root Port #2 (rev f0)
IOMMU group 11: [8086:a33c] 00:1c.4 PCI bridge: Intel Corporation Cannon Lake PCH PCI Express Root Port #5 (rev f0)
IOMMU group 12: [8086:a330] 00:1d.0 PCI bridge: Intel Corporation Cannon Lake PCH PCI Express Root Port #9 (rev f0)
IOMMU group 13: [8086:a305] 00:1f.0 ISA bridge: Intel Corporation Z390 Chipset LPC/eSPI Controller (rev 10) [8086:a348] 00:1f.3 Audio device: Intel Corporation Cannon Lake PCH cAVS (rev 10) [8086:a323] 00:1f.4 SMBus: Intel Corporation Cannon Lake PCH SMBus Controller (rev 10) [8086:a324] 00:1f.5 Serial bus controller [0c80]: Intel Corporation Cannon Lake PCH SPI Controller (rev 10) [8086:15bc] 00:1f.6 Ethernet controller: Intel Corporation Ethernet Connection (7) I219-V (rev 10)
IOMMU group 14: [10de:2182] 01:00.0 VGA compatible controller: NVIDIA Corporation TU116 [GeForce GTX 1660 Ti] (rev a1)
IOMMU group 15: [10de:1aeb] 01:00.1 Audio device: NVIDIA Corporation TU116 High Definition Audio Controller (rev a1)
IOMMU group 16: [10de:1aec] 01:00.2 USB controller: NVIDIA Corporation Device 1aec (rev a1)
IOMMU group 17: [10de:1aed] 01:00.3 Serial bus controller [0c80]: NVIDIA Corporation TU116 [GeForce GTX 1650 SUPER] (rev a1)
IOMMU group 18: [1002:67df] 02:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Ellesmere [Radeon RX 470/480/570/570X/580/580X/590] (rev e7)
IOMMU group 19: [1002:aaf0] 02:00.1 Audio device: Advanced Micro Devices, Inc. [AMD/ATI] Ellesmere HDMI Audio [Radeon RX 470/480 / 570/580/590]
IOMMU group 20: [8086:f1a8] 03:00.0 Non-Volatile memory controller: Intel Corporation SSD 660P Series (rev 03)
IOMMU group 21: [8086:1539] 05:00.0 Ethernet controller: Intel Corporation I211 Gigabit Network Connection (rev 03)
IOMMU group 22: [8086:f1a8] 71:00.0 Non-Volatile memory controller: Intel Corporation SSD 660P Series (rev 03)

Grub menu for vfio:

label Unraid OS
  menu default
  kernel /bzimage
  append pcie_acs_override=downstream,multifunction vfio-pci.ids=8086:a348,10de:1aec,10de:1aed,10de:2182,10de:1aeb,8086:a338,1002:67df,1002:aaf0 intel_iommu='on' initrd=/bzroot
label Unraid OS GUI Mode
  kernel /bzimage
  append pcie_acs_override=downstream,multifunction vfio-pci.ids=8086:a348,10de:1aec,10de:1aed,10de:2182,10de:1aeb,8086:a338,1002:67df,1002:aaf0 intel_iommu='on' initrd=/bzroot,/bzroot-gui

I've been following SpaceInvader One's YouTube guides, most recently this one: Advanced GPU passthrough techniques on Unraid. I have tried both an EVGA GTX 1660 Ti Black and an MSI Radeon 570. The 1660 sits in slot 1 and the 570 in slot 2. I also dumped the BIOS from the 1660 per SpaceInvader One's instructions in a different video and tried that. All I ever get is a black screen, but there are no errors when starting the Win10 VM and it does get marked as started. I'm kinda out of ideas.
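As a sanity check before starting the VM, this confirms every function of the card is actually bound to vfio-pci (a generic check, not from the guide; the 01:00.x addresses are taken from the device list above):

    # Show which kernel driver owns each function of the 1660 Ti.
    for dev in 01:00.0 01:00.1 01:00.2 01:00.3; do
        echo -n "$dev -> "
        lspci -nnk -s "$dev" | grep -i 'driver in use'
    done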
  9. I love its extensibility and power. I would love native support for ZFS instead of having to use Unassigned Devices.
  10. Fresh Unraid 6.7.2 2019-06-25 install. I installed FCP and today started noticing numerous repeated entries in the syslog:

Dec 11 11:22:49 pinkfloyd root: error: /plugins/unassigned.devices/UnassignedDevices.php: wrong csrf_token

This is continuously being written to the log, and I'm not sure what is causing it.
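To get a feel for how often it happens and what the most recent occurrences look like, plain syslog grepping is enough (nothing plugin-specific about this):

    # Count the errors in the current syslog and show the last few occurrences.
    grep -c 'wrong csrf_token' /var/log/syslog
    grep 'wrong csrf_token' /var/log/syslog | tail -n 5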