aceofskies05

Everything posted by aceofskies05

  1. I've done a fresh install of Unraid 6.9-rc2. I'm narrowing the error down to the kernel "Dirty" messages. syslog
  2. Can you add a "docker-compose pull" button? That would make updating docker-compose images a breeze.
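     For reference, the button would effectively only need to shell out to something like the following (the project path here is just a placeholder for wherever the compose file lives):
        cd /path/to/my-compose-project   # placeholder: directory holding docker-compose.yml
        docker-compose pull              # fetch newer tags for every image referenced in the compose file
        docker-compose up -d             # recreate only the containers whose images actually changed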
  3. I'm getting random reboots now. I managed to catch one in the syslog: I logged in at 12:39:26, the server rebooted at around 12:42:19, and it became available again at 12:46:22.
     Dec 3 00:43:15 Tower sshd[27118]: pam_unix(sshd:session): session closed for user root
     Dec 3 12:39:26 Tower webGUI: Successful login user root from 192.168.1.170
     Dec 3 12:42:07 Tower kernel: veth4f47806: renamed from eth0
     Dec 3 12:42:07 Tower kernel: br-4d8c83721e50: port 3(veth69d1485) entered disabled state
     Dec 3 12:42:08 Tower kernel: br-4d8c83721e50: port 3(veth69d1485) entered disabled state
     Dec 3 12:42:08 Tower kernel: device veth69d1485 left promiscuous mode
     Dec 3 12:42:08 Tower kernel: br-4d8c83721e50: port 3(veth69d1485) entered disabled state
     Dec 3 12:42:16 Tower kernel: br-4d8c83721e50: port 3(veth6996f31) entered blocking state
     Dec 3 12:42:16 Tower kernel: br-4d8c83721e50: port 3(veth6996f31) entered disabled state
     Dec 3 12:42:16 Tower kernel: device veth6996f31 entered promiscuous mode
     Dec 3 12:42:16 Tower kernel: br-4d8c83721e50: port 3(veth6996f31) entered blocking state
     Dec 3 12:42:16 Tower kernel: br-4d8c83721e50: port 3(veth6996f31) entered forwarding state
     Dec 3 12:42:16 Tower kernel: br-4d8c83721e50: port 3(veth6996f31) entered disabled state
     Dec 3 12:42:19 Tower kernel: eth0: renamed from vethe963a49
     Dec 3 12:42:19 Tower kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth6996f31: link becomes ready
     Dec 3 12:42:19 Tower kernel: br-4d8c83721e50: port 3(veth6996f31) entered blocking state
     Dec 3 12:42:19 Tower kernel: br-4d8c83721e50: port 3(veth6996f31) entered forwarding state
     Dec 3 12:46:22 Tower kernel: Linux version 5.14.15-Unraid (root@Develop) (gcc (GCC) 11.2.0, GNU ld version 2.37-slack15) #1 SMP Thu Oct 28 09:56:33 PDT 2021
     Dec 3 12:46:22 Tower kernel: Command line: BOOT_IMAGE=/bzimage initrd=/bzroot pcie_acs_override=downstream,multifunction iommu=pt
     Dec 3 12:46:22 Tower kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
     Dec 3 12:46:22 Tower kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
     Dec 3 12:46:22 Tower kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
     Dec 3 12:46:22 Tower kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
     Dec 3 12:46:22 Tower kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
     Dec 3 12:46:22 Tower kernel: signal: max sigframe size: 1776
     Dec 3 12:46:22 Tower kernel: BIOS-provided physical RAM map:
  4. Got it. Seventh time's a charm 😄. Here's the syslog from the mirror to the flash drive. syslog
  5. Isn't the syslog in the diagnostics log? Anyhow, I've attached the syslog. Let me know if there's anything else you need. Appreciate the help. syslog.txt
  6. Ah, I'm following now... thanks. Attached is the log from before the crash. Also, the nerd plugin has a "check all" button; I must have clicked that by accident one day and enabled everything. I fixed that. tower-diagnostics-20211115-1110.zip
  7. I've had the syslog enabled for over a year; that diagnostics log is from after the crash... Is there something else you need?
  8. @Squid Is there some sort of stress test / CPU benchmark I can run in Unraid to see if a specific component is dying? I'm thinking of a Prime95-like tool...
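     In case it helps anyone else, this is the kind of burn-in I had in mind, assuming stress-ng and lm-sensors can be installed (e.g. via the NerdPack plugin) - a sketch, not exact settings:
        stress-ng --cpu 0 --cpu-method matrixprod --metrics-brief --timeout 30m   # load every core with heavy FP work
        stress-ng --vm 4 --vm-bytes 75% --verify --timeout 30m                    # exercise RAM allocations with verification
        watch -n 5 sensors                                                        # keep an eye on temperatures while it runs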
  9. Interestingly, this RAM has been in my system for 1-2 years. I did run a memtest last weekend and no errors were reported. I'm using Netdata and temps seem stable. I'm wondering if it's the power supply, but I'm curious why the PSU would fail only after a couple of days of use.
  10. Server details: B450M + Ryzen 1700X, C-states disabled in the boot config and BIOS. It seems that since updating to RC2, roughly every 2 days my server shuts down and powers off completely. Sometimes, like today, the screen goes black and the server is "on" yet SSH and the UI are unresponsive, the keyboard is unresponsive, and the screen stays black. I've attached logs. tower-diagnostics-20211115-1110.zip
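      For anyone wondering what "C-states disabled in the boot config" means here, the syslinux append line looks something like the following (a sketch, not my exact file; processor.max_cstate=1 and rcu_nocbs=0-15 are the usual first-gen Ryzen idle workaround bits for the 1700X's 16 threads, the ACS/IOMMU flags I already had for passthrough):
         # /boot/syslinux/syslinux.cfg (excerpt, sketched)
         label Unraid OS
           menu default
           kernel /bzimage
           append processor.max_cstate=1 rcu_nocbs=0-15 pcie_acs_override=downstream,multifunction iommu=pt initrd=/bzroot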
  11. That worked! Thank you so much. The trick was adding the device to forward and using the apex device nodes. Woot!
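      For anyone who finds this later: the working setup boiled down to handing the /dev/apex_* nodes to the Frigate container. In the Unraid template that's just two extra Device entries; as a plain docker command it would look roughly like this (image tag and paths are illustrative):
         docker run -d --name frigate \
           --device /dev/apex_0 \
           --device /dev/apex_1 \
           -v /mnt/user/appdata/frigate:/config \
           ghcr.io/blakeblackshear/frigate:stable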
  12. Sorry, I linked the wrong thread; the context is here: https://github.com/magic-blue-smoke/Dual-Edge-TPU-Adapter/issues/3 The issue exists on Unraid 6.9 and 6.10... I think this is more of a driver issue? Maybe?
      So what's going on is that there is a dual Edge TPU Coral in an M.2 slot. When you plug it in via an M.2/PCI adapter you only get to use ONE TPU, i.e. an adapter like this. Now someone came along and created an adapter that lets you properly pass through BOTH TPUs to the system. If you go to the thread above, you can get a lot more details.
      Now, if I pass the PCI device through to a VM (QEMU XML in Unraid) I can see the two TPU devices in the VM. I can then spin up the Frigate docker instance that uses both Coral TPUs, and that docker container inside the VM successfully uses both TPU devices. My theory is that the device is passed through to the VM, and Frigate then uses a config like the one below to recognize each TPU. I also had to install the TPU as PCI in the VM, i.e. https://coral.ai/docs/m2/get-started/#4-run-a-model-on-the-edge-tpu
      detectors:
        coral1:
          type: edgetpu
          device: pci:0
        coral2:
          type: edgetpu
          device: pci:1
      In the VM, opening a shell IN the docker container, I can run "ls -l /dev/apex*" and see both TPU apex devices. But when I open a shell to the Frigate docker on Unraid (the same image as the container in the VM on Unraid) and run "ls -l /dev/apex*", I see this.
      My current working theory is that I somehow need to forward the PCI/apex devices to the container, but I don't see a way to do this... I'm thinking that since I forwarded the card to the VM and then installed the TPU drivers via the Google/Linux packages, it magically worked to pass through to the docker container inside the VM. I know this is slightly outside the scope of your driver plugin, but this interesting problem is causing me grief because I can't seem to get Unraid to pass the dual Edge TPU over to docker on Unraid's native docker platform. I'd really like to avoid the overhead of the VM. tower-diagnostics-20210812-1732.zip
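      (For completeness, this is the quick host-side sanity check I'd run to see which driver owns the two TPU functions - the 1ac1 vendor ID is my assumption for Global Unichip Corp., and gasket/apex are the modules from my lsmod output:)
         lspci -nnk -d 1ac1:   # check the "Kernel driver in use" line for each Coral function
         # if the functions are bound to vfio-pci they are reserved for the VM;
         # they have to be claimed by the gasket/apex driver for /dev/apex_* to appear on the host
         ls -l /dev/apex_*     # these nodes are what a container on Unraid's native docker needs via --device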
  13. root@Tower:~# lsmod
      Module               Size    Used by
      nct6775              65536   0
      hwmon_vid            16384   1 nct6775
      apex                 16384   0
      gasket               94208   1 apex
      ip6table_filter      16384   1
      ip6_tables           28672   1 ip6table_filter
      iptable_filter       16384   1
      ip_tables            28672   1 iptable_filter
      x_tables             45056   4 ip6table_filter,iptable_filter,ip6_tables,ip_tables
      bonding              131072  0
      edac_mce_amd         32768   0
      kvm_amd              122880  0
      kvm                  864256  1 kvm_amd
      crct10dif_pclmul     16384   1
      crc32_pclmul         16384   0
      crc32c_intel         24576   0
      ghash_clmulni_intel  16384   0
      aesni_intel          380928  0
      wmi_bmof             16384   0
      crypto_simd          16384   1 aesni_intel
      cryptd               24576   2 crypto_simd,ghash_clmulni_intel
      input_leds           16384   0
      rapl                 16384   0
      r8169                77824   0
      wmi                  28672   1 wmi_bmof
      led_class            16384   1 input_leds
      i2c_piix4            24576   0
      i2c_core             86016   1 i2c_piix4
      ahci                 40960   0
      realtek              24576   1
      k10temp              16384   0
      ccp                  81920   1 kvm_amd
      libahci              40960   1 ahci
      button               16384   0
      root@Tower:~# uname -a
      Linux Tower 5.13.8-Unraid #1 SMP Wed Aug 4 09:39:46 PDT 2021 x86_64 AMD Ryzen 5 1600 Six-Core Processor AuthenticAMD GNU/Linux
      Unraid 6.10rc1 plugin pic
  14. Do you want me to keep the convo on GitHub or here? My power is out right now; I'll reply when I have power.
  15. Using the PCIe M.2 Coral, I don't see the apex devices in Unraid:
      root@Tower:~# ls -l /dev/apex_*
      /bin/ls: cannot access '/dev/apex_*': No such file or directory
      I do see the PCI devices, however:
      03:00.0 System peripheral: Global Unichip Corp. Coral Edge TPU
      04:00.0 System peripheral: Global Unichip Corp. Coral Edge TPU
      I'm trying to pass the PCI Coral over to a docker instance. I'm using the Coral driver plugin on Unraid 6.10-rc. The interesting part is that I'm using a new dual Edge TPU adapter card that exposes both TPU cores to the system. In a VM I can get everything to work and can use Frigate with dual TPUs, even with docker inside the VM... But the Frigate docker instance on Unraid can't see the dual TPU PCI card? Some context -> https://github.com/blakeblackshear/frigate/issues/1428
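      The checks I've been running on the host look roughly like this (a sketch; gasket and apex being the modules the Coral driver plugin is supposed to load):
         lsmod | grep -E 'gasket|apex'     # confirm the driver modules are loaded at all
         modprobe gasket && modprobe apex  # try loading them by hand if they are missing
         dmesg | grep -i apex              # look for probe errors against the two PCI functions
         ls -l /dev/apex_*                 # the nodes that would need to be passed to the Frigate container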
  16. Seeing issues with the Coral PCI Driver Installer app. I left an issue on the GitHub repo -> https://github.com/ich777/unraid-coral-driver/issues/1