Spritzup

Members
  • Posts: 271
Everything posted by Spritzup

  1. Hey All, I'm having issues using Nextcloud with the LetsEncrypt proxy. Basically I can hit the Nextcloud login page without issue, but when I go to log in, I keep getting "504 Gateway Time-out, nginx/1.14.0". Any thoughts on where I should be looking? This was working before. Thanks in advance! ~Spritz
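     For reference, here's roughly where I've been poking so far (assuming the stock linuxserver letsencrypt layout; the container name and conf location are from my setup, adjust to suit):

       # Dump the running nginx config inside the proxy container and see what timeouts apply
       docker exec letsencrypt nginx -T | grep -iE 'proxy_(read|connect|send)_timeout'

       # Nothing set means the nginx default of 60s applies; adding e.g. "proxy_read_timeout 300;"
       # to the nextcloud proxy conf and restarting the container is my next test
       docker restart letsencrypt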
  2. I've been seeing that exact same error in my logs, but not to the same degree as you are. It most often times out at CA, and then throws the same error. Were you ever able to pin down what the cause was? ~Spritz
  3. That seems like a good video for @gridrunner
  4. Haha, I've had days like that as well. If you have any insight, I'd appreciate it. I did read the previous link that you provided, and it was an interesting read, thanks for that. ~Spritz
  5. @Squid Thanks for the reply. Unfortunately I think there is some confusion. The issue is that Docker (and I assume by extension unRaid) is not respecting the "isolcpus" command in my syslinux file. What should have been happening is that 8 cores would be isolated for VM use, and everything else would run on the remaining 24. However, that did not appear to be happening, as I could observe both NZBGet and Plex using the supposedly isolated CPUs, thus bringing my VM to a screeching halt. As a bandaid, I've pinned CPUs for specific container use, but this is not ideal IMO. So TLDR - the "isolated cores" in this case are those isolated for a VM using the "isolcpus" command. The Docker CPU pinning is a bandaid, but is working as expected. ~Spritz
  6. So I've pinned both the Plex and NZBGet containers to specific CPUs, and that seems to have put a bandaid on the issue. However, unRaid (and I assume Docker) is still using those supposedly isolated cores for other things, as even with the VM powered off those cores are seeing some activity. For the moment I can live with that, as whatever is hitting them is not a heavy hitter. All that said, I'd like to try and figure out why this isn't functioning as expected. When I run the command (which escapes me at the moment) to verify that the CPUs are isolated, it returns the expected result. I can also see the system parsing the isolated CPU line during boot, without error. Yet when I look at cAdvisor (and I don't know if this is accurate or not), it shows all CPUs as available for the containers to use. I'm kind of at a loss on this one. Any assistance would be appreciated. Thanks! ~Spritz
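     (Edit: found my notes - this is roughly what I've been running to check the isolation and to apply the bandaid; container names and core numbers are from my box, adjust to suit:)

       # Confirm the kernel actually isolated the cores (should list 1-4,17-20 here)
       cat /sys/devices/system/cpu/isolated

       # Double-check the boot parameter was parsed as expected
       grep -o 'isolcpus=[^ ]*' /proc/cmdline

       # The bandaid: restrict a running container to the non-isolated cores
       docker update --cpuset-cpus="0,5-16,21-31" Plex
       docker update --cpuset-cpus="0,5-16,21-31" NZBGet

       # Verify what a container is actually allowed to use
       docker inspect -f '{{.HostConfig.CpusetCpus}}' Plex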
  7. Yup, using the linuxserver.io container. Thanks for the suggestion, I had thought of doing the same thing as well. ~Spritz
  8. Added to the bottom of my original post. Thanks for looking! ~Spritz
  9. Good evening, I've followed @gridrunner's excellent guide on maximizing performance in both unRaid and in a VM. However, I'm seeing that Plex is not respecting the fact that the CPUs are isolated, and is often using them when transcoding... this brings my VM to a screeching halt. It was my understanding that isolating the CPUs made it so that nothing could use them, with the exception of any VM you assigned them to... am I mistaken? Please see my core assignments and my VM XML below. Thanks in advance. ~Spritz

     CPU Thread Pairings
     Pair 1: cpu 0 / cpu 16
     Pair 2: cpu 1 / cpu 17
     Pair 3: cpu 2 / cpu 18
     Pair 4: cpu 3 / cpu 19
     Pair 5: cpu 4 / cpu 20
     Pair 6: cpu 5 / cpu 21
     Pair 7: cpu 6 / cpu 22
     Pair 8: cpu 7 / cpu 23
     Pair 9: cpu 8 / cpu 24
     Pair 10: cpu 9 / cpu 25
     Pair 11: cpu 10 / cpu 26
     Pair 12: cpu 11 / cpu 27
     Pair 13: cpu 12 / cpu 28
     Pair 14: cpu 13 / cpu 29
     Pair 15: cpu 14 / cpu 30
     Pair 16: cpu 15 / cpu 31

     <domain type='kvm'>
       <name>Brawn</name>
       <uuid>aa4f920a-0dfe-d619-f00b-46c900a1055c</uuid>
       <description>Gaming PC</description>
       <metadata>
         <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/>
       </metadata>
       <memory unit='KiB'>16777216</memory>
       <currentMemory unit='KiB'>16777216</currentMemory>
       <memoryBacking>
         <nosharepages/>
       </memoryBacking>
       <vcpu placement='static'>8</vcpu>
       <cputune>
         <vcpupin vcpu='0' cpuset='1'/>
         <vcpupin vcpu='1' cpuset='17'/>
         <vcpupin vcpu='2' cpuset='2'/>
         <vcpupin vcpu='3' cpuset='18'/>
         <vcpupin vcpu='4' cpuset='3'/>
         <vcpupin vcpu='5' cpuset='19'/>
         <vcpupin vcpu='6' cpuset='4'/>
         <vcpupin vcpu='7' cpuset='20'/>
         <emulatorpin cpuset='15,31'/>
       </cputune>
       <os>
         <type arch='x86_64' machine='pc-i440fx-2.10'>hvm</type>
         <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
         <nvram>/etc/libvirt/qemu/nvram/aa4f920a-0dfe-d619-f00b-46c900a1055c_VARS-pure-efi.fd</nvram>
       </os>
       <features>
         <acpi/>
         <apic/>
       </features>
       <cpu mode='host-passthrough' check='none'>
         <topology sockets='1' cores='4' threads='2'/>
       </cpu>
       <clock offset='localtime'>
         <timer name='rtc' tickpolicy='catchup'/>
         <timer name='pit' tickpolicy='delay'/>
         <timer name='hpet' present='no'/>
       </clock>
       <on_poweroff>destroy</on_poweroff>
       <on_reboot>restart</on_reboot>
       <on_crash>restart</on_crash>
       <devices>
         <emulator>/usr/local/sbin/qemu</emulator>
         <disk type='file' device='disk'>
           <driver name='qemu' type='raw' cache='none'/>
           <source file='/mnt/disks/Brawn_SSD_1/Brawn/vdisk1.img'/>
           <target dev='hdc' bus='virtio'/>
           <boot order='1'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
         </disk>
         <disk type='file' device='cdrom'>
           <driver name='qemu' type='raw'/>
           <source file='/mnt/user/Data/OS_ISOs/Windows_10.iso'/>
           <target dev='hda' bus='ide'/>
           <readonly/>
           <boot order='2'/>
           <address type='drive' controller='0' bus='0' target='0' unit='0'/>
         </disk>
         <disk type='file' device='cdrom'>
           <driver name='qemu' type='raw'/>
           <source file='/mnt/user/Data/OS_ISOs/virtio-win-0.1.141-1.iso'/>
           <target dev='hdb' bus='ide'/>
           <readonly/>
           <address type='drive' controller='0' bus='0' target='0' unit='1'/>
         </disk>
         <controller type='usb' index='0' model='ich9-ehci1'>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
         </controller>
         <controller type='usb' index='0' model='ich9-uhci1'>
           <master startport='0'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
         </controller>
         <controller type='usb' index='0' model='ich9-uhci2'>
           <master startport='2'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
         </controller>
         <controller type='usb' index='0' model='ich9-uhci3'>
           <master startport='4'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
         </controller>
         <controller type='pci' index='0' model='pci-root'/>
         <controller type='ide' index='0'>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
         </controller>
         <controller type='virtio-serial' index='0'>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
         </controller>
         <interface type='bridge'>
           <mac address='52:54:00:40:f9:bb'/>
           <source bridge='br0'/>
           <model type='virtio'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
         </interface>
         <serial type='pty'>
           <target port='0'/>
         </serial>
         <console type='pty'>
           <target type='serial' port='0'/>
         </console>
         <channel type='unix'>
           <target type='virtio' name='org.qemu.guest_agent.0'/>
           <address type='virtio-serial' controller='0' bus='0' port='1'/>
         </channel>
         <input type='mouse' bus='ps2'/>
         <input type='keyboard' bus='ps2'/>
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
           </source>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
         </hostdev>
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x02' slot='0x00' function='0x1'/>
           </source>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
         </hostdev>
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
           </source>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
         </hostdev>
         <memballoon model='virtio'>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
         </memballoon>
       </devices>
     </domain>

     Oh, and my syslinux config -->

     default menu.c32
     menu title Lime Technology, Inc.
     prompt 0
     timeout 50
     label unRAID OS
       menu default
       kernel /bzimage
       append isolcpus=1,2,3,4,17,18,19,20 vfio-pci.ids=1b6f:7052 initrd=/bzroot
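     (For anyone curious how I'm seeing Plex land on the isolated cores, it's roughly like this; the process name below is from my box and may differ on yours:)

       # PSR column = which host CPU each thread is currently executing on
       ps -eLo pid,psr,comm | grep -i plex | sort -k2 -n

       # Current CPU affinity list of the main Plex process
       taskset -cp $(pgrep -f 'Plex Media Server' | head -n1)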
  10. In the continued pursuit of trying to track down these ridiculous errors with CA, I'm attempting to set up my network (in my mind) more logically. To that end, I was looking to have my containers on eth0 and my VMs on eth1. This seems to be working swimmingly, except for the fact that my VM can't access any of my containers' web UIs. Any help would be appreciated. ~Spritz PS - On the off chance someone has seen the error before, here's what I'm seeing with CA (among other plugins):
     Feb 23 20:21:50 Brain nginx: 2018/02/23 20:21:50 [error] 6552#6552: *339333 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 192.168.1.100, server: , request: "POST /plugins/community.applications/include/exec.php HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock", host: "brain", referrer: "http://brain/Apps"
  11. It sounds to me like you may be shorting the motherboard on your case. The fact that it was rock solid prior to being put back in the case sort of leads me to that conclusion. ~Spritz
  12. Sorry Squid, I ended up just deleting the containers and reinstalling them... seemed more efficient than trying to troubleshoot the issue. If you'd like to try and replicate the issue for your own knowledge (if it is indeed impossible for me to get that error), I would be happy to take this to PM and provide you with as much info as I can. ~Spritz
  13. I'm attempting to start my Plex docker and keep getting the same error... this is also happening with NZBGet and Plexpy... any advice would be appreciated -->
     Brain root: error: /Docker/UpdateContainer?xmlTemplate=edit:/boot/config/plugins/dockerMan/templates-user/my-Plexpy.xml: missing csrf_token
     ~Spritz
  14. So I'm not sure what the issue is, but it seems that my VM is constantly losing its connection to the SSD, which causes it to crash. Afterwards it is not possible to boot up the VM, as the disk can no longer be found or mounted by Unassigned Devices. It's very odd... ~Spritz
  15. If the disk is mounted using the Unassigned Devices plugin, it will be automatically trimmed... even if it's being passed through. Keep in mind we're not doing a block-level passthrough, so this should be the correct way to set it up... unless someone wants to correct me. ~Spritz
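     (If you want to kick a trim off by hand just to confirm it works, something along these lines from the unRaid console should do it - the mount point below is from my own setup, adjust to yours:)

       # Manually trim the Unassigned Devices mount; -v reports how much was trimmed
       fstrim -v /mnt/disks/Brawn_SSD_1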
  16. Morning All, I'm in the process of standing up a gaming-ish desktop running off of my server. I'd like to pass the SSD through directly to Windows, and have found how to do that based on various posts: /dev/disk/by-id/ata-Crucial_CT512MX100SSD1_14340D0C45C6. The issue is that Windows does not see it as an SSD and therefore it doesn't enable TRIM on the disk... any way to mitigate this? Also, I had considered having it mounted using Unassigned Devices, but the option to mount is greyed out. Thanks! ~Spritz
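     In the meantime, this is roughly how I've been checking on the host side whether the drive even advertises TRIM (the device path is my Crucial SSD from above):

       # Does the SSD report TRIM support to the host?
       hdparm -I /dev/disk/by-id/ata-Crucial_CT512MX100SSD1_14340D0C45C6 | grep -i trim

       # Non-zero DISC-GRAN/DISC-MAX means the kernel can pass discards down to it
       lsblk --discard /dev/disk/by-id/ata-Crucial_CT512MX100SSD1_14340D0C45C6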
  17. I see what you're saying @bonienl and I think pfSense will let me do that, but I think @ken-ji hit the nail on the head. It can be an advanced setting to create the docker container on an alternative interface. Or it can check for multiple interfaces, and if it sees more than one, ask the user what they want to do. ~Spritz
  18. Yes it does. As well, if I leave it alone for long enough it seems to fix itself. That said, the easiest way to reproduce this error (for me at least) is to go into the CA application. ~Spritz
  19. My suspicion is that it will, I'm just in the process of doing a copy and don't want to interrupt it.
  20. I do not... wouldn't that cause a bunch of issues unless I was running VLANs? Any advice or guidance on how to do this would be appreciated. Thanks! ~Spritz
  21. Evening all, I'm experiencing an extremely slow WebGUI (almost unusable) and a bunch of the following errors in the logs. I've included my diagnostics as well. Any thoughts as to what the underlying issue could be?
     Jan 27 21:14:09 SERVERNAME nginx: 2018/01/27 21:14:09 [error] 9532#9532: *214142 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 192.168.1.151, server: , request: "POST /plugins/preclear.disk/Preclear.php HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock", host: "192.168.1.200", referrer: "http://192.168.1.200/Apps"
     Jan 27 21:17:51 SERVERNAME nginx: 2018/01/27 21:17:51 [error] 9532#9532: *214584 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 192.168.1.122, server: , request: "POST /plugins/preclear.disk/Preclear.php HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock", host: "SERVERNAME", referrer: "http://SERVERNAME/Settings/FixProblems"
     Jan 27 21:18:02 SERVERNAME nginx: 2018/01/27 21:18:02 [error] 9532#9532: *215271 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 192.168.1.151, server: , request: "POST /plugins/preclear.disk/Preclear.php HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock", host: "192.168.1.200", referrer: "http://192.168.1.200/Apps"
     Jan 27 21:18:02 SERVERNAME nginx: 2018/01/27 21:18:02 [error] 9532#9532: *215467 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 192.168.1.151, server: , request: "POST /webGui/include/Notify.php HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock", host: "192.168.1.200", referrer: "http://192.168.1.200/Apps"
     Jan 27 21:18:06 SERVERNAME nginx: 2018/01/27 21:18:06 [error] 9532#9532: *215574 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 192.168.1.122, server: , request: "POST /plugins/preclear.disk/Preclear.php HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock", host: "SERVERNAME", referrer: "http://SERVERNAME/Settings/FixProblems"
     Jan 27 21:19:35 SERVERNAME nginx: 2018/01/27 21:19:35 [error] 9532#9532: *215274 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 192.168.1.151, server: , request: "POST /webGui/include/Notify.php HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock", host: "192.168.1.200", referrer: "http://192.168.1.200/Dashboard"
     Thanks! ~Spritz
     brain-diagnostics-20180127-2256.zip
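     If it helps narrow things down, this is what I was planning to check from the console next (nothing fancy, just seeing how often it happens and whether php-fpm is wedged):

       # How many upstream timeouts have hit the syslog so far
       grep -c 'upstream timed out' /var/log/syslog

       # Are the php-fpm workers still alive / piling up?
       ps aux | grep '[p]hp-fpm'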
  22. I'm on 6.4 and don't see an option for br1... though br0 works a treat. ~Spritz
  23. Quick question, how do I get containers on br1 through the web GUI? I only see an option for br0... though VMs can be put on br1. Thanks! ~Spritz
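     (The workaround I was going to try in the meantime is creating the network by hand from the CLI and attaching containers to it - the subnet, gateway, and names below are just examples from my LAN, not anything unRaid sets up for you:)

       # Create a macvlan docker network on top of the second bridge
       docker network create -d macvlan \
         --subnet=192.168.2.0/24 --gateway=192.168.2.1 \
         -o parent=br1 br1net

       # Attach an existing container to it
       docker network connect br1net plex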
  24. Thanks Squid. This is basically a fresh install, so it was the latest version installed. That said, I double checked and it is up to date --> 2018.01.20b
  25. I just moved to version 6.4 and seem to be having an issue with CA... CA just hangs when trying to update the application list. I've tried setting a static DNS to Google's servers, but no such luck. I am seeing the following error in the log and want to rule out CA as the culprit before tearing my network apart. Thanks!
     Jan 22 08:35:13 SERVERNAME nginx: 2018/01/22 08:35:13 [error] 9096#9096: *61358 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 192.168.1.151, server: , request: "POST /plugins/community.applications/include/exec.php HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock", host: "SERVERNAME", referrer: "http://SERVERNAME/Apps"
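     To rule out general connectivity/DNS from the server itself, I was going to run something like this from the console (8.8.8.8 is just a known-good resolver for the test):

       # Can the box resolve names and reach the outside world at all?
       nslookup github.com 8.8.8.8
       wget -q --spider https://github.com && echo "outbound https OK"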