TX_Pilot

Everything posted by TX_Pilot

  1. I have an Nvidia 3090 that I am trying to assign to a VM. In the VM settings, I have set up the GPU as a second graphics card. From inside the VM I can install the drivers, and I can see the card through lspci:

     00:00.0 Host bridge: Intel Corporation 82G33/G31/P35/P31 Express DRAM Controller
     00:01.0 VGA compatible controller: Red Hat, Inc. QXL paravirtual graphic card (rev 05)
     00:02.0 PCI bridge: Red Hat, Inc. QEMU PCIe Root port
     00:02.1 PCI bridge: Red Hat, Inc. QEMU PCIe Root port
     00:02.2 PCI bridge: Red Hat, Inc. QEMU PCIe Root port
     00:02.3 PCI bridge: Red Hat, Inc. QEMU PCIe Root port
     00:02.4 PCI bridge: Red Hat, Inc. QEMU PCIe Root port
     00:07.0 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #1 (rev 03)
     00:07.1 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #2 (rev 03)
     00:07.2 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #3 (rev 03)
     00:07.7 USB controller: Intel Corporation 82801I (ICH9 Family) USB2 EHCI Controller #1 (rev 03)
     00:1f.0 ISA bridge: Intel Corporation 82801IB (ICH9) LPC Interface Controller (rev 02)
     00:1f.2 SATA controller: Intel Corporation 82801IR/IO/IH (ICH9R/DO/DH) 6 port SATA Controller [AHCI mode] (rev 02)
     00:1f.3 SMBus: Intel Corporation 82801I (ICH9 Family) SMBus Controller (rev 02)
     01:00.0 Ethernet controller: Red Hat, Inc. Virtio network device (rev 01)
     02:00.0 Communication controller: Red Hat, Inc. Virtio console (rev 01)
     03:00.0 SCSI storage controller: Red Hat, Inc. Virtio block device (rev 01)
     04:00.0 VGA compatible controller: NVIDIA Corporation GA102 [GeForce RTX 3090] (rev a1)

     However, when I run "nvidia-smi" I always get "No devices were found". Any ideas on what I might be doing wrong? Thanks!
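     A minimal diagnostic sketch for the "No devices were found" symptom, run from inside the VM (generic checks, not specific to this poster's setup; 04:00.0 is the address shown in the lspci output above):

       # Confirm which kernel driver is actually bound to the passed-through 3090:
       lspci -k -s 04:00.0
       # Look for NVRM/nvidia errors explaining why the driver refuses to initialize:
       dmesg | grep -iE 'nvrm|nvidia'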
  2. I am building out a new NVR that will run in a VM on my Unraid server. I have seen several comments suggesting that video from the NVR should not be stored on the array, but rather on an unassigned disk or a pool dedicated to video recordings. My question is: why? Is there a compelling reason I don't want to use the array? My array is running Seagate Exos X18 drives.
  3. I am not seeing any errors in my logs (except the Fix Common Problems messages), but Fix Common Problems is reporting that I have Machine Check Errors. I have run diagnostics. Can someone help point me in the right direction to try to figure this out? Thanks, Scott wrighthome-diagnostics-20230320-1154.zip
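     A minimal sketch of how to pull the raw machine-check details behind that warning (generic commands only; whether a dedicated MCE decoder is installed varies by system, so this sticks to the kernel log and syslog):

       # Machine Check Exceptions are reported by the kernel; search the ring buffer and syslog:
       dmesg | grep -iE 'mce|machine check'
       grep -iE 'mce|machine check' /var/log/syslog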
  4. I recently moved my Unraid server to a new system (new MB, CPU, etc.) by moving the drives as described in Space Invader One's great video on the subject. Everything seems to be working fine. The system will run for several days with no problems, and then overnight it will fill up /var/log; while the server is still running, it grinds to a halt. I looked at /var/log and it seems to be filling up the syslog and the nginx directory. Any idea what could be causing this? I have included a diagnostics file I was able to dump from the command line. I get a bunch of "worker process" errors, and then it turns into nginx memory allocation errors. This is a copy of the logs showing both:

     2022/04/13 05:46:01 [alert] 7418#7418: worker process 5166 exited on signal 6
     ker process: ./nchan-1.2.15/src/store/spool.c:479: spool_fetch_msg: Assertion `spool->msg_status == MSG_INVALID' failed.
     2022/04/13 05:46:01 [alert] 7418#7418: worker process 5208 exited on signal 6
     ker process: ./nchan-1.2.15/src/store/spool.c:479: spool_fetch_msg: Assertion `spool->msg_status == MSG_INVALID' failed.
     2022/04/13 05:46:02 [alert] 7418#7418: worker process 5210 exited on signal 6
     2022/04/13 05:46:02 [crit] 5277#5277: ngx_slab_alloc() failed: no memory
     2022/04/13 05:46:02 [error] 5277#5277: shpool alloc failed
     2022/04/13 05:46:02 [error] 5277#5277: nchan: Out of shared memory while allocating message of size 8902. Increase nchan_max_reserved_memory.
     2022/04/13 05:46:02 [error] 5277#5277: *3781670 nchan: error publishing message (HTTP status code 500), client: unix:, server: , request: "POST /pub/devices?buffer_length=1 HTTP/1.1", host: "localhost"
     2022/04/13 05:46:02 [error] 5277#5277: MEMSTORE:00: can't create shared message for channel /devices
     ker process: ./nchan-1.2.15/src/store/spool.c:479: spool_fetch_msg: Assertion `spool->msg_status == MSG_INVALID' failed.
     2022/04/13 05:46:03 [alert] 7418#7418: worker process 5277 exited on signal 6
     2022/04/13 05:46:03 [crit] 5322#5322: ngx_slab_alloc() failed: no memory
     2022/04/13 05:46:03 [error] 5322#5322: shpool alloc failed
     2022/04/13 05:46:03 [error] 5322#5322: nchan: Out of shared memory while allocating message of size 8902. Increase nchan_max_reserved_memory.
     2022/04/13 05:46:03 [error] 5322#5322: *3781681 nchan: error publishing message (HTTP status code 500), client: unix:, server: , request: "POST /pub/devices?buffer_length=1 HTTP/1.1", host: "localhost"
     2022/04/13 05:46:03 [error] 5322#5322: MEMSTORE:00: can't create shared message for channel /devices
     2022/04/13 05:46:03 [crit] 5322#5322: ngx_slab_alloc() failed: no memory
     2022/04/13 05:46:03 [error] 5322#5322: shpool alloc failed
     2022/04/13 05:46:03 [error] 5322#5322: nchan: Out of shared memory while allocating message of size 8902. Increase nchan_max_reserved_memory.
     2022/04/13 05:46:03 [error] 5322#5322: *3781692 nchan: error publishing message (HTTP status code 500), client: unix:, server: , request: "POST /pub/devices?buffer_length=1 HTTP/1.1", host: "localhost"
     2022/04/13 05:46:03 [error] 5322#5322: MEMSTORE:00: can't create shared message for channel /devices

     wrighthome-diagnostics-20220413-0847.zip
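     A minimal sketch of generic checks for a filling /var/log (standard Unraid paths, but verify them on your own system):

       # See which directories are eating the /var/log tmpfs:
       du -sh /var/log/*
       # If the web UI's nginx workers are crash-looping, nginx can be restarted without a reboot:
       /etc/rc.d/rc.nginx restart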
  5. I am having a strange set of IPv6 errors that get logged while backups are running. The strange thing is that I do not have IPv6 enabled on any of my interfaces. The problem only comes up while AppData Backup is running, and it repeats itself several times during the backup. Has anybody seen anything like this before?

     Feb 23 05:09:50 WrightHome avahi-daemon[9801]: Joining mDNS multicast group on interface veth98d5e7d.IPv6 with address fe80::34d5:1dff:feb4:1d7f.
     Feb 23 05:09:50 WrightHome avahi-daemon[9801]: New relevant interface veth98d5e7d.IPv6 for mDNS.
     Feb 23 05:09:50 WrightHome avahi-daemon[9801]: Registering new address record for fe80::34d5:1dff:feb4:1d7f on veth98d5e7d.*.
     Feb 23 05:09:52 WrightHome kernel: eth0: renamed from veth70d7098
     Feb 23 05:09:52 WrightHome kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth8e17035: link becomes ready
     Feb 23 05:09:52 WrightHome kernel: br-ae9578e3b8f7: port 12(veth8e17035) entered blocking state
     Feb 23 05:09:52 WrightHome kernel: br-ae9578e3b8f7: port 12(veth8e17035) entered forwarding state
     Feb 23 05:09:54 WrightHome kernel: br-ae9578e3b8f7: port 13(vethd6b7cf1) entered blocking state
     Feb 23 05:09:54 WrightHome kernel: br-ae9578e3b8f7: port 13(vethd6b7cf1) entered disabled state
     Feb 23 05:09:54 WrightHome kernel: device vethd6b7cf1 entered promiscuous mode
     Feb 23 05:09:54 WrightHome kernel: br-ae9578e3b8f7: port 13(vethd6b7cf1) entered blocking state
     Feb 23 05:09:54 WrightHome kernel: br-ae9578e3b8f7: port 13(vethd6b7cf1) entered forwarding state
     Feb 23 05:09:54 WrightHome kernel: br-ae9578e3b8f7: port 13(vethd6b7cf1) entered disabled state
     Feb 23 05:09:54 WrightHome avahi-daemon[9801]: Joining mDNS multicast group on interface veth8e17035.IPv6 with address fe80::803f:8cff:fe7c:d91c.
     Feb 23 05:09:54 WrightHome avahi-daemon[9801]: New relevant interface veth8e17035.IPv6 for mDNS.
     Feb 23 05:09:54 WrightHome avahi-daemon[9801]: Registering new address record for fe80::803f:8cff:fe7c:d91c on veth8e17035.*.
     Feb 23 05:09:57 WrightHome kernel: eth0: renamed from veth904246f
     Feb 23 05:09:57 WrightHome kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethd6b7cf1: link becomes ready
     Feb 23 05:09:57 WrightHome kernel: br-ae9578e3b8f7: port 13(vethd6b7cf1) entered blocking state
     Feb 23 05:09:57 WrightHome kernel: br-ae9578e3b8f7: port 13(vethd6b7cf1) entered forwarding state
     Feb 23 05:09:59 WrightHome avahi-daemon[9801]: Joining mDNS multicast group on interface vethd6b7cf1.IPv6 with address fe80::385b:94ff:fecf:1878.
     Feb 23 05:09:59 WrightHome avahi-daemon[9801]: New relevant interface vethd6b7cf1.IPv6 for mDNS.
     Feb 23 05:09:59 WrightHome avahi-daemon[9801]: Registering new address record for fe80::385b:94ff:fecf:1878 on vethd6b7cf1.*.
     Feb 23 05:09:59 WrightHome kernel: br-ae9578e3b8f7: port 14(vethfc32397) entered blocking state
     Feb 23 05:09:59 WrightHome kernel: br-ae9578e3b8f7: port 14(vethfc32397) entered disabled state
     Feb 23 05:09:59 WrightHome kernel: device vethfc32397 entered promiscuous mode
     Feb 23 05:09:59 WrightHome kernel: br-ae9578e3b8f7: port 14(vethfc32397) entered blocking state
     Feb 23 05:09:59 WrightHome kernel: br-ae9578e3b8f7: port 14(vethfc32397) entered forwarding state
     Feb 23 05:09:59 WrightHome kernel: br-ae9578e3b8f7: port 14(vethfc32397) entered disabled state
     Feb 23 05:10:02 WrightHome kernel: eth0: renamed from veth02cb1ad
     Feb 23 05:10:02 WrightHome kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethfc32397: link becomes ready
     Feb 23 05:10:02 WrightHome kernel: br-ae9578e3b8f7: port 14(vethfc32397) entered blocking state
     Feb 23 05:10:02 WrightHome kernel: br-ae9578e3b8f7: port 14(vethfc32397) entered forwarding state
     Feb 23 05:10:04 WrightHome avahi-daemon[9801]: Joining mDNS multicast group on interface vethfc32397.IPv6 with address fe80::c867:96ff:fe77:e50c.
     Feb 23 05:10:04 WrightHome avahi-daemon[9801]: New relevant interface vethfc32397.IPv6 for mDNS.
     Feb 23 05:10:04 WrightHome avahi-daemon[9801]: Registering new address record for fe80::c867:96ff:fe77:e50c on vethfc32397.*.
     Feb 23 05:10:04 WrightHome kernel: br-ae9578e3b8f7: port 15(vetha38f422) entered blocking state
     Feb 23 05:10:04 WrightHome kernel: br-ae9578e3b8f7: port 15(vetha38f422) entered disabled state
     Feb 23 05:10:04 WrightHome kernel: device vetha38f422 entered promiscuous mode
     Feb 23 05:10:08 WrightHome kernel: eth0: renamed from veth8e2e7e3
     Feb 23 05:10:08 WrightHome kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vetha38f422: link becomes ready
     Feb 23 05:10:08 WrightHome kernel: br-ae9578e3b8f7: port 15(vetha38f422) entered blocking state
     Feb 23 05:10:08 WrightHome kernel: br-ae9578e3b8f7: port 15(vetha38f422) entered forwarding state
     Feb 23 05:10:08 WrightHome kernel: br-ae9578e3b8f7: port 16(veth03df597) entered blocking state
     Feb 23 05:10:08 WrightHome kernel: br-ae9578e3b8f7: port 16(veth03df597) entered disabled state
     Feb 23 05:10:08 WrightHome kernel: device veth03df597 entered promiscuous mode
     Feb 23 05:10:08 WrightHome kernel: br-ae9578e3b8f7: port 16(veth03df597) entered blocking state
     Feb 23 05:10:08 WrightHome kernel: br-ae9578e3b8f7: port 16(veth03df597) entered forwarding state
     Feb 23 05:10:09 WrightHome kernel: br-ae9578e3b8f7: port 16(veth03df597) entered disabled state

     wrighthome-diagnostics-20220223-1033.zip
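     For reference, a minimal sketch for matching those log lines to Docker resources (generic commands; the fe80:: entries appear to be link-local IPv6 on the veth interfaces Docker creates as the backup stops and restarts containers):

       # List link-local IPv6 addresses and the veth interfaces they sit on:
       ip -6 addr show scope link
       # The br-ae9578e3b8f7 name in the log is "br-" plus a Docker network ID; compare with:
       docker network ls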
  6. I have several Docker containers running on a private Docker network. I am using SWAG as a reverse proxy. I can access the UI of any of the containers locally on my network. Is there a way to block access from the local network to the UI so that I can force all traffic through the reverse proxy? For example, a user can access a container at http://unraid-server:5050 and at https://myapp.mydomain.com, but I only want them to be able to use https://myapp.mydomain.com. I have local DNS resolving the domain correctly, so that is not an issue. Thanks for your help.
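     A hedged sketch of one way to do this (container and network names below are examples, not from the post): if SWAG and the app share a user-defined Docker network, the app's UI port does not need to be published on the host at all, which removes the http://unraid-server:5050 path entirely.

       # Put SWAG and the app on a shared user-defined network (names are examples):
       docker network create proxynet
       docker network connect proxynet swag
       docker network connect proxynet myapp
       # Then recreate "myapp" without the published port (or bind it to localhost only,
       # e.g. "-p 127.0.0.1:5050:5050") and point the SWAG proxy conf at http://myapp:5050.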
  7. My Home Assistant Core Docker container disappeared. My other Docker containers seem to be there and working fine. I went to the Community Apps tab and it also thinks it is installed. I am not sure what to do at this point. Thanks, Scott wrighthome-diagnostics-20220206-1243.zip
  8. I keep getting "Unable to connect to Unmanic backend. Please check that it is running." Does anybody know what causes this error? My guess is that it is looking online for something I have Pi-holed.
  9. Can you tell me how filesystem monitoring works for the /watch directory? Is there an interval? I know there is a pause before processing, but it seems that it takes quite a while for AMC to realize there is a directory change and process it.
  10. A couple of ideas to keep things a bit cleaner.

      1) I used the environment variables available through NPM rather than hardcoding the IP and port into the Advanced config. So I made the following change to the Protected Endpoint:

         set $upstream_CONTAINERNAME http://CONTAINERIP:CONTAINERPORT;

      became:

         set $upstream_CONTAINERNAME $forward_scheme://$server:$port;

      This lets you change the IP/port within NPM on the Details tab only, rather than under both the Details tab and the Advanced tab.

      2) I also used the actual container name so that I don't have to worry about the IP. All of the reverse proxy guides recommend you create a network and use the internal Docker network for your reverse proxy. If you do that, you can specify the container name instead of the IP. So:

         set $upstream_authelia http://SERVERIP:9091/api/verify;

      became:

         set $upstream_authelia http://Authelia:9091/api/verify;

      In this case Authelia is the name of my Authelia container. I have found it is much easier to use container-name and internal-port references in your NPM config so that if your container IPs change you are not stuck fixing your reverse proxy. Just make sure that if you do this, you are using the container port, not the translated port on your Unraid IP address.

      With these changes you can use almost the same Protected Endpoint for each proxy host. The only thing that would be different is the CONTAINERNAME. I am not sure if it would be a problem for that to be the same between proxy hosts. I am going to do some testing and see if it matters. --Scott
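      A quick sketch of the Docker side of point 2 (the network name and the NPM container name here are examples, not from the post):

        # Attach the proxy and Authelia to the same user-defined network so the
        # container name used in the Advanced config resolves:
        docker network create proxynet
        docker network connect proxynet NginxProxyManager
        docker network connect proxynet Authelia
        # Verify name resolution from inside the proxy container (getent is a generic check):
        docker exec NginxProxyManager getent hosts Authelia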
  11. I have gone through the guide twice and I always come up with the same error. When I paste the config into the Advanced box for the host, NPM shows the host as offline. If I remove it, the host comes right back up and works fine. Has anyone seen this error? NEVERMIND... Got it figured out. For CONTAINERNAME, you cannot use a container name that has a "-" in it. In my case I was using a container I didn't care about exposing called "wifi-card". That will NOT work. "wifi" will work, or "wificard" will work, but you can't have a dash in the container name (presumably because nginx variable names like $upstream_CONTAINERNAME only allow letters, digits, and underscores). There may be other special characters that don't work; I didn't test any others.