Spritzup

Members

  • Content Count: 201
  • Joined
  • Last visited

Community Reputation: 1 Neutral

About Spritzup

  • Rank: Advanced Member
  • Birthday: 05/21/1981
  • Gender: Male
  • Location: Canada

  1. I suspect I know what the issue is: my CrashPlanPro container is stuck at "Waiting for Connection". I'm also running the "LetsEncrypt" container on my box, which has port 443 forwarded to it. I'm guessing it's some sort of port conflict? Thanks for any help. ~Spritz
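A minimal way to confirm a port clash, assuming shell access to the unRAID host (the commands are generic checks, not specific to either container):

```bash
# See what is actually bound to port 443 on the host.
netstat -tlnp | grep ':443'

# List each container with its published ports; look for two entries
# both claiming 443.
docker ps --format '{{.Names}}\t{{.Ports}}'
```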
  2. I'm having the same issue as above. How did you get it to start manually? ~Spritz
  3. I believe I figured it out. I had to delete the container and then remove its template. Even though I wasn't using the template, it was apparently still pulling info from there. ~Spritz
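For anyone hitting the same thing, a sketch of where those stale settings may live, assuming unRAID's standard user-template location on the flash drive (the template filename below is hypothetical):

```bash
# unRAID keeps per-container templates on the flash drive; a stale one can
# feed old settings back into a re-created container.
ls /boot/config/plugins/dockerMan/templates-user/

# Remove the stale template (filename is hypothetical) so a re-created
# container no longer inherits old arguments from it.
rm /boot/config/plugins/dockerMan/templates-user/my-CrashPlanPRO.xml
```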
  4. With the new GUI control for pinning CPUs to containers, I've gone through and removed the --cpuset argument from all containers. However, on my Ombi container, the argument remains. To date, I've tried deleting and recreating the container, and I've tried deleting the container, rebooting the system, and re-adding it. Nothing has allowed me to remove that argument. Any suggestions or help would be appreciated. Thanks! ~Spritz
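A quick sanity check, assuming the container is named Ombi as in the post: ask Docker directly whether the running container still carries a cpuset restriction.

```bash
# Prints the effective cpuset (e.g. "0-3"); empty output means no
# --cpuset restriction is actually applied to the container.
docker inspect --format '{{.HostConfig.CpusetCpus}}' Ombi
```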
  5. I am using the example that was provided, and I followed the instructions within. ~Spritz
  6. Hey All, I'm having issues using Nextcloud with the LetsEncrypt proxy. Basically, I can hit the Nextcloud login page without issue, but when I go to log in, I keep getting "504 Gateway Time-out, nginx/1.14.0". Any thoughts on where I should be looking? This was working before. Thanks in advance! ~Spritz
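A first diagnostic to try, assuming the linuxserver.io containers are named letsencrypt and nextcloud (adjust to your names). A 504 means nginx reached the upstream but never got a timely reply, so the question is whether the backend answers at all, and how slowly.

```bash
# From inside the proxy container, time a request against the Nextcloud
# upstream. -k skips verification of the backend's self-signed certificate.
docker exec letsencrypt curl -sk -o /dev/null \
  -w 'HTTP %{http_code} in %{time_total}s\n' https://nextcloud:443
```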
  7. I've been seeing that exact same error in my logs, but not to the same degree as you are. It most often times out on CA (Community Applications) and then throws the same error. Were you ever able to pin down the cause? ~Spritz
  8. That seems like a good video for @gridrunner
  9. Haha, I've had days like that as well. If you have any insight, I'd appreciate it. I did read the link you provided earlier; it was an interesting read, thanks for that. ~Spritz
  10. @Squid Thanks for the reply. Unfortunately, I think there is some confusion. The issue is that Docker (and, I assume by extension, unRAID) is not respecting the "isolcpus" kernel parameter in my syslinux config. What should have been happening is that 8 cores would be isolated for VM use, and everything else would run on the remaining 24. However, that did not appear to be happening, as I could observe both NZBGet and Plex using the supposedly isolated CPUs, bringing my VM to a screeching halt. As a bandaid, I've pinned CPUs for specific container use, but this is not ideal IMO. So, TL;DR: the "isolated cores" in this case are the ones isolated for a VM via the "isolcpus" parameter. The Docker CPU pinning is a bandaid, but it is working as expected. ~Spritz
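Two checks that would confirm or refute this, assuming standard Linux and Docker tooling on the host (the container name is illustrative):

```bash
# What does the kernel itself report as isolated?
cat /sys/devices/system/cpu/isolated

# Which CPUs is a given container's main process actually allowed to run on?
# ("plex" is an illustrative container name.)
PID=$(docker inspect --format '{{.State.Pid}}' plex)
grep Cpus_allowed_list /proc/$PID/status
```

If the second command lists the isolated cores, the container's cpuset was never restricted, which would match the behaviour described above.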
  11. So I've pinned both the Plex and NZBGet containers to specific CPUs, and that seems to have put a bandaid on the issue. However, unRAID (and I assume Docker) is still using those supposedly isolated cores for other things, as even with the VM powered off those cores are seeing some activity. For the moment I can live with that, as whatever is hitting them is not a heavy hitter. All that said, I'd like to figure out why this isn't functioning as expected. When I run the command (which escapes me at the moment) to verify that the CPUs are isolated, it returns the expected result. I can also see the system parsing the isolated CPU line during boot, without error. Yet when I look at cAdvisor (and I don't know if this is accurate or not), it shows all CPUs as available for the containers to use. I'm kind of at a loss on this one. Any assistance would be appreciated. Thanks! ~Spritz
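A sketch of how to see what is landing on those cores, assuming the isolated set from the earlier isolcpus setup (1-4 and 17-20; adjust the list to match):

```bash
# psr is the CPU each task last ran on; filter down to the isolated cores
# to see exactly which processes are touching them.
ps -eo pid,psr,comm | awk '$2 ~ /^(1|2|3|4|17|18|19|20)$/'
```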
  12. Yup, using the linuxserver.io container. Thanks for the suggestion; I had thought of doing the same thing as well. ~Spritz
  13. Added to the bottom of my original post. Thanks for looking! ~Spritz
  14. Good Evening, I've followed @gridrunner's excellent guide on maximizing performance in both unRAID and in a guest VM. However, I'm seeing that Plex is not respecting the fact that the CPUs are isolated, and it often uses them when transcoding... this brings my VM to a screeching halt. It was my understanding that isolating CPUs meant nothing could use them except the VM you assigned them to... am I mistaken? Please see my core assignments and my VM XML below. Thanks in advance. ~Spritz

CPU thread pairings:

```
Pair 1:  cpu 0  / cpu 16    Pair 9:  cpu 8  / cpu 24
Pair 2:  cpu 1  / cpu 17    Pair 10: cpu 9  / cpu 25
Pair 3:  cpu 2  / cpu 18    Pair 11: cpu 10 / cpu 26
Pair 4:  cpu 3  / cpu 19    Pair 12: cpu 11 / cpu 27
Pair 5:  cpu 4  / cpu 20    Pair 13: cpu 12 / cpu 28
Pair 6:  cpu 5  / cpu 21    Pair 14: cpu 13 / cpu 29
Pair 7:  cpu 6  / cpu 22    Pair 15: cpu 14 / cpu 30
Pair 8:  cpu 7  / cpu 23    Pair 16: cpu 15 / cpu 31
```

VM definition (Brawn):

```xml
<domain type='kvm'>
  <name>Brawn</name>
  <uuid>aa4f920a-0dfe-d619-f00b-46c900a1055c</uuid>
  <description>Gaming PC</description>
  <metadata>
    <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/>
  </metadata>
  <memory unit='KiB'>16777216</memory>
  <currentMemory unit='KiB'>16777216</currentMemory>
  <memoryBacking>
    <nosharepages/>
  </memoryBacking>
  <vcpu placement='static'>8</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='1'/>
    <vcpupin vcpu='1' cpuset='17'/>
    <vcpupin vcpu='2' cpuset='2'/>
    <vcpupin vcpu='3' cpuset='18'/>
    <vcpupin vcpu='4' cpuset='3'/>
    <vcpupin vcpu='5' cpuset='19'/>
    <vcpupin vcpu='6' cpuset='4'/>
    <vcpupin vcpu='7' cpuset='20'/>
    <emulatorpin cpuset='15,31'/>
  </cputune>
  <os>
    <type arch='x86_64' machine='pc-i440fx-2.10'>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
    <nvram>/etc/libvirt/qemu/nvram/aa4f920a-0dfe-d619-f00b-46c900a1055c_VARS-pure-efi.fd</nvram>
  </os>
  <features>
    <acpi/>
    <apic/>
  </features>
  <cpu mode='host-passthrough' check='none'>
    <topology sockets='1' cores='4' threads='2'/>
  </cpu>
  <clock offset='localtime'>
    <timer name='rtc' tickpolicy='catchup'/>
    <timer name='pit' tickpolicy='delay'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/local/sbin/qemu</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <source file='/mnt/disks/Brawn_SSD_1/Brawn/vdisk1.img'/>
      <target dev='hdc' bus='virtio'/>
      <boot order='1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/Data/OS_ISOs/Windows_10.iso'/>
      <target dev='hda' bus='ide'/>
      <readonly/>
      <boot order='2'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='/mnt/user/Data/OS_ISOs/virtio-win-0.1.141-1.iso'/>
      <target dev='hdb' bus='ide'/>
      <readonly/>
      <address type='drive' controller='0' bus='0' target='0' unit='1'/>
    </disk>
    <controller type='usb' index='0' model='ich9-ehci1'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci1'>
      <master startport='0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci2'>
      <master startport='2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
    </controller>
    <controller type='usb' index='0' model='ich9-uhci3'>
      <master startport='4'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'/>
    <controller type='ide' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:40:f9:bb'/>
      <source bridge='br0'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </interface>
    <serial type='pty'>
      <target port='0'/>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <channel type='unix'>
      <target type='virtio' name='org.qemu.guest_agent.0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x02' slot='0x00' function='0x1'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
      </source>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
    </hostdev>
    <memballoon model='virtio'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
    </memballoon>
  </devices>
</domain>
```

Oh, and my syslinux config:

```
default menu.c32
menu title Lime Technology, Inc.
prompt 0
timeout 50
label unRAID OS
  menu default
  kernel /bzimage
  append isolcpus=1,2,3,4,17,18,19,20 vfio-pci.ids=1b6f:7052 initrd=/bzroot
```
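One possible explanation worth checking (a sketch, assuming the cgroup v1 hierarchy unRAID used at the time): isolcpus keeps the scheduler's load balancer off those cores, but Docker's parent cpuset may still span every CPU, in which case any container without an explicit pin remains free to run on the "isolated" cores.

```bash
# Docker's parent cpuset under cgroup v1. If this still lists 1-4 and 17-20,
# unpinned containers can be scheduled onto the isolated cores regardless
# of the isolcpus boot parameter.
cat /sys/fs/cgroup/cpuset/docker/cpuset.cpus
```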
  15. In the continued pursuit of tracking down these ridiculous errors with CA, I'm attempting to set up my network (in my mind) more logically. To that end, I was looking to have my containers on eth0 and my VMs on eth1. This seems to be working swimmingly, except for the fact that my VM can't access any of my containers' web UIs. Any help would be appreciated. ~Spritz PS - On the off chance someone has seen the error before, here's what I'm seeing with CA (among other plugins):

```
Feb 23 20:21:50 Brain nginx: 2018/02/23 20:21:50 [error] 6552#6552: *339333 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 192.168.1.100, server: , request: "POST /plugins/community.applications/include/exec.php HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock", host: "brain", referrer: "http://brain/Apps"
```
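A couple of checks for the two-NIC question, assuming the host uses the usual br0/br1 bridges and that brctl is available (the container name is illustrative). Traffic between a VM on one bridge and containers on another has to route through the gateway, so confirming which bridge and subnet each side sits on is the first step.

```bash
# List the bridges and which physical interface backs each one.
brctl show

# Print the container-side IP address(es), to test reachability from the VM
# ("letsencrypt" is an illustrative container name).
docker inspect \
  --format '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' letsencrypt
```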