
Squid

Community Developer
  • Posts: 28,770
  • Joined
  • Last visited
  • Days Won: 314

Everything posted by Squid

  1. It's the mining containers taking up the space. I don't know how they work well enough to advise whether it's possible to store the data outside of the container, etc. Ask in the respective support threads for them.
  2. Post a screenshot of Container Size. The dashboard is referring to the image utilization, not memory.
  3. The only real reason for an orphan appearing is that the container was unable to start after an update. But, as you found, it's quick and painless to fix.
  4. Is there anything in Sonarr's logs regarding the failed move (i.e. "Cannot import", etc.)?
  5. By design, templates also update with any updates to the container. This results in any variables, etc. that are missing from your template being added back in (not changed). What you've done is take an existing template and delete entries (which is fine). To stop the system from updating the template, manually edit the file at /config/plugins/dockerMan/templates-user and remove the line saying <TemplateURL>....</TemplateURL>
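If you'd rather script the edit than do it by hand, a one-line sed can drop that tag. A minimal sketch, assuming GNU sed; `template.xml` here is a stand-in for your actual XML file under the templates-user folder mentioned above:

```shell
# Create a toy template to work on; on a real server you'd point sed at
# your own XML under the templates-user folder instead.
printf '%s\n' '<Container>' \
  '  <TemplateURL>https://example.com/t.xml</TemplateURL>' \
  '  <Name>myapp</Name>' \
  '</Container>' > template.xml

# Delete the whole <TemplateURL>...</TemplateURL> line. The open and close
# tags sit on one line, so removing the line keeps the XML well-formed.
sed -i '/<TemplateURL>/d' template.xml

cat template.xml
```

Back up the file first, and double-check the result with `cat` before restarting the container.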
  6. If you've got the Docker Folders plugin installed, try uninstalling it and rebooting. If that fixes it, post in its support thread.
  7. Not that it's related to your problem, but powerdown-x86_64.plg - 2015.05.17a: this plugin is long deprecated (its functionality is included in the base OS) and should be removed.
  8. VT-d isn't enabled in your BIOS. You need both VT-d and VT-x enabled for passthrough to work.
  9. From the command prompt, run diagnostics and upload the resulting file (it's saved to the logs folder on the flash drive).
  10. Drive spin-ups also cause an automatic SMART read. But I do have to question the rather extreme schedule you've set for polling in the first place. Even 6 hours is much too long IMHO. Why would you want to know 5.2 days after a drive starts developing problems according to its own SMART reports?
  11. As a start, this command will return the CPU %. Then script around it in a loop to check every once in a while and execute the applicable commands:
      cat /var/local/emhttp/cpuload.ini | grep -m1 host= | awk -F "host=" '/host=/{print $2}'
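Wrapped in a loop, that parsing step becomes a small watcher script. A minimal sketch, assuming cpuload.ini keeps the `host=NN` format shown above; the 80% threshold, 60-second interval, and sample input are illustrative only:

```shell
#!/bin/bash
# Extract the value after "host=" from cpuload.ini-style input (same idea
# as the grep/awk one-liner above, collapsed into a single awk call)
parse_cpu() {
  awk -F 'host=' '/host=/{print $2; exit}'
}

# Demo on a sample line; on Unraid you would instead read the real file:
#   cpu=$(parse_cpu < /var/local/emhttp/cpuload.ini)
cpu=$(printf 'host=42.7\ncpu0=10\n' | parse_cpu)
echo "current cpu: ${cpu}%"

# The watcher loop itself (shown as comments, not executed in this demo):
#   while true; do
#     cpu=$(parse_cpu < /var/local/emhttp/cpuload.ini)
#     if [ "${cpu%.*}" -ge 80 ]; then     # drop decimals for the comparison
#       echo "CPU at ${cpu}% - run your applicable commands here"
#     fi
#     sleep 60
#   done
```

Run it in the background or from the User Scripts plugin, and swap the echo for whatever commands you actually want triggered.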
  12. You have to select it within Tools - System Devices to prevent the OS from grabbing it.
  13. First thing to check is this https://forums.unraid.net/topic/57181-docker-faq/?tab=comments#comment-566086
  14. A simple solution (although my servers are on 24/7) would be to sleep the servers (S3 Sleep plugin) instead of shutting them off.
  15. A rebuild will reflect the corrupted state. Run the file system checks on disk 8: https://wiki.unraid.net/Check_Disk_Filesystems
  16. Jun 8 21:25:20 Tower kernel: WARNING: CPU: 9 PID: 1956 at net/ethtool/common.c:375 ethtool_check_ops+0xe/0x16
      Jun 8 21:25:20 Tower kernel: Modules linked in: edac_mce_amd btusb btrtl btbcm kvm_amd btintel bluetooth kvm crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel aesni_intel crypto_simd ecdh_generic ecc cryptd mxm_wmi wmi_bmof igb nvme ahci glue_helper i2c_piix4 input_leds tn40xx(O+) ccp nvme_core i2c_algo_bit libahci wmi i2c_core led_class rapl k10temp thermal button acpi_cpufreq
      Jun 8 21:25:20 Tower kernel: CPU: 9 PID: 1956 Comm: udevd Tainted: G O 5.10.28-Unraid #1
      Jun 8 21:25:20 Tower kernel: Hardware name: Gigabyte Technology Co., Ltd. X570 AORUS MASTER/X570 AORUS MASTER, BIOS F22 08/20/2020
      Jun 8 21:25:20 Tower kernel: RIP: 0010:ethtool_check_ops+0xe/0x16
      Jun 8 21:25:20 Tower kernel: Code: 42 c2 eb ec 41 89 45 00 48 89 ef e8 da db b7 ff 5b 44 89 e0 5d 41 5c 41 5d 41 5e c3 31 c0 48 83 7f 78 00 74 0c 83 3f 00 75 07 <0f> 0b b8 ea ff ff ff c3 48 8b 97 48 08 00 00 31 c0 49 89 f8 b9 0b
      Jun 8 21:25:20 Tower kernel: RSP: 0018:ffffc900010e7a58 EFLAGS: 00010246
      Jun 8 21:25:20 Tower kernel: RAX: 0000000000000000 RBX: 00000000002b09ab RCX: ffff888102bfd490
      Jun 8 21:25:20 Tower kernel: RDX: ffffffff8210d220 RSI: ffffc90000921270 RDI: ffffffffa0152000
      Jun 8 21:25:20 Tower kernel: RBP: ffff888105aad000 R08: 0000000000000c00 R09: 000000000000025d
      Jun 8 21:25:20 Tower kernel: R10: 0000000000001101 R11: ffff888ffea62400 R12: 00000000fffffffc
      Jun 8 21:25:20 Tower kernel: R13: ffffffff8210b440 R14: 0000000000000000 R15: ffff888105aad000
      Jun 8 21:25:20 Tower kernel: FS: 00001522cec1bbc0(0000) GS:ffff888ffea40000(0000) knlGS:0000000000000000
      Jun 8 21:25:20 Tower kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
      Jun 8 21:25:20 Tower kernel: CR2: 00000000006706a8 CR3: 0000000136a4a000 CR4: 0000000000350ee0
      Jun 8 21:25:20 Tower kernel: Call Trace:
      Jun 8 21:25:20 Tower kernel: register_netdevice+0x84/0x47a
      Jun 8 21:25:20 Tower kernel: ? MV88X3310_mdio_reset+0x766/0x7be [tn40xx]
      Jun 8 21:25:20 Tower kernel: register_netdev+0x1f/0x2e
      Jun 8 21:25:20 Tower kernel: bdx_probe+0x895/0xaad [tn40xx]
      Jun 8 21:25:20 Tower kernel: local_pci_probe+0x3c/0x7a
      Jun 8 21:25:20 Tower kernel: pci_device_probe+0x140/0x19a
      Jun 8 21:25:20 Tower kernel: ? sysfs_do_create_link_sd.isra.0+0x6b/0x98
      Jun 8 21:25:20 Tower kernel: really_probe+0x157/0x341
      Jun 8 21:25:20 Tower kernel: driver_probe_device+0x63/0x92
      Jun 8 21:25:20 Tower kernel: device_driver_attach+0x37/0x50
      Jun 8 21:25:20 Tower kernel: __driver_attach+0x95/0x9d
      Jun 8 21:25:20 Tower kernel: ? device_driver_attach+0x50/0x50
      Jun 8 21:25:20 Tower kernel: bus_for_each_dev+0x70/0xa6
      Jun 8 21:25:20 Tower kernel: bus_add_driver+0xfe/0x1af
      Jun 8 21:25:20 Tower kernel: driver_register+0x99/0xd2
      Jun 8 21:25:20 Tower kernel: ? bdx_tx_timeout+0x2c/0x2c [tn40xx]
      Jun 8 21:25:20 Tower kernel: do_one_initcall+0x71/0x162
      Jun 8 21:25:20 Tower kernel: ? do_init_module+0x19/0x1eb
      Jun 8 21:25:20 Tower kernel: ? kmem_cache_alloc+0x108/0x130
      Jun 8 21:25:20 Tower kernel: do_init_module+0x51/0x1eb
      Jun 8 21:25:20 Tower kernel: load_module+0x1b18/0x20cf
      Jun 8 21:25:20 Tower kernel: ? map_kernel_range_noflush+0xdf/0x255
      Jun 8 21:25:20 Tower kernel: ? __do_sys_init_module+0xc4/0x105
      Jun 8 21:25:20 Tower kernel: ? _cond_resched+0x1b/0x1e
      Jun 8 21:25:20 Tower kernel: __do_sys_init_module+0xc4/0x105
      Jun 8 21:25:20 Tower kernel: do_syscall_64+0x5d/0x6a
      Jun 8 21:25:20 Tower kernel: entry_SYSCALL_64_after_hwframe+0x44/0xa9
      Jun 8 21:25:20 Tower kernel: RIP: 0033:0x1522cf27d09a
      Jun 8 21:25:20 Tower kernel: Code: 48 8b 0d f9 7d 0c 00 f7 d8 64 89 01 48 83 c8 ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 49 89 ca b8 af 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d c6 7d 0c 00 f7 d8 64 89 01 48
      Jun 8 21:25:20 Tower kernel: RSP: 002b:00007ffc60311618 EFLAGS: 00000206 ORIG_RAX: 00000000000000af
      Jun 8 21:25:20 Tower kernel: RAX: ffffffffffffffda RBX: 0000000000663b40 RCX: 00001522cf27d09a
      Jun 8 21:25:20 Tower kernel: RDX: 00001522cf35d97d RSI: 00000000000aed58 RDI: 00000000006d3910
      Jun 8 21:25:20 Tower kernel: RBP: 00000000006d3910 R08: 000000000064701a R09: 0000000000000006
      Jun 8 21:25:20 Tower kernel: R10: 0000000000647010 R11: 0000000000000206 R12: 00001522cf35d97d
      Jun 8 21:25:20 Tower kernel: R13: 00007ffc60311b80 R14: 00000000006783a0 R15: 0000000000663b40
      Jun 8 21:25:20 Tower kernel: ---[ end trace 09b2fc8fdb595ffd ]---
      Jun 8 21:25:20 Tower kernel: tn40xx: register_netdev failed
  17. root 11535 0.5 0.0 113252 7728 ? Sl 15:57 0:00 /usr/bin/containerd-shim-runc-v2 -namespace moby -id a7d898c64c1daf5840d74c5c4d431d8542f8947f9db5830a27ef5a5f2d51a37a -address /var/run/docker/containerd/containerd.sock
      root 11562 0.8 0.0 2392 760 ? Ss 15:57 0:00 \_ sh /entrypoint.sh
      root 11631 22.1 0.2 1293528 44336 ? Sl 15:57 0:01 \_ python3 /root/nut.src.latest/nut.py -S
      root 11630 0.0 0.0 7268 2172 ? Ss 15:57 0:00 \_ /usr/sbin/cron
      root 8917 0.1 0.0 54584 4972 ? Ssl Jun08 3:49 \_ AmazonProxy --config /config --logs /logs
      root 9313 0.2 0.0 113380 15628 ? Sl Jun03 28:16 /usr/bin/containerd-shim-runc-v2 -namespace moby -id a8fe56fde35f21656ecc4ff30c6aeb84226886f89e6b612983721e64db4468f5 -address /var/run/docker/containerd/containerd.sock
      ighor 9336 0.0 0.1 33308 20096 ? Ss Jun03 5:06 \_ /usr/bin/python3 /usr/local/bin/supervisord --nodaemon --loglevel=info --logfile_maxbytes=0 --logfile=/dev/null --configuration=/etc/supervisor/supervisord.conf
      ighor 11637 0.3 0.7 336236 129948 ? S Jun03 35:55 \_ /usr/bin/python3 /usr/local/bin/celery worker --hostname backup@%h --loglevel info --concurrency=1 --queues=backup --prefetch-multiplier=2 --concurrency 1
      ighor 12401 0.0 0.7 335468 124884 ? S Jun03 0:00 | \_ /usr/bin/python3 /usr/local/bin/celery worker --hostname backup@%h --loglevel info --concurrency=1 --queues=backup --prefetch-multiplier=2 --concurrency 1
      ighor 11639 0.3 0.7 336508 130184 ? S Jun03 41:15 \_ /usr/bin/python3 /usr/local/bin/celery worker --hostname celery@%h --loglevel info --queues=celery --prefetch-multiplier=4 --concurrency 4
      ighor 12452 0.0 0.8 352484 145540 ? S Jun03 1:29 | \_ /usr/bin/python3 /usr/local/bin/celery worker --hostname celery@%h --loglevel info --queues=celery --prefetch-multiplier=4 --concurrency 4
      ighor 12476 0.0 0.7 335732 124284 ? S Jun03 0:01 | \_ /usr/bin/python3 /usr/local/bin/celery worker --hostname celery@%h --loglevel info --queues=celery --prefetch-multiplier=4 --concurrency 4
      ighor 12482 0.0 0.7 335992 124560 ? S Jun03 0:02 | \_ /usr/bin/python3 /usr/local/bin/celery worker --hostname celery@%h --loglevel info --queues=celery --prefetch-multiplier=4 --concurrency 4
      ighor 12484 0.0 0.7 335996 124684 ? S Jun03 0:00 | \_ /usr/bin/python3 /usr/local/bin/celery worker --hostname celery@%h --loglevel info --queues=celery --prefetch-multiplier=4 --concurrency 4
      ighor 11640 0.3 0.7 336240 129744 ? S Jun03 37:23 \_ /usr/bin/python3 /usr/local/bin/celery worker --hostname memory@%h --loglevel info --queues=memory --prefetch-multiplier=10 --concurrency 2
      ighor 12483 0.0 0.7 335468 123384 ? S Jun03 0:00 | \_ /usr/bin/python3 /usr/local/bin/celery worker --hostname memory@%h --loglevel info --queues=memory --prefetch-multiplier=10 --concurrency 2
      ighor 12489 0.0 0.7 335472 123400 ? S Jun03 0:00 | \_ /usr/bin/python3 /usr/local/bin/celery worker --hostname memory@%h --loglevel info --queues=memory --prefetch-multiplier=10 --concurrency 2
      ighor 11641 0.3 0.7 336244 129856 ? S Jun03 37:29 \_ /usr/bin/python3 /usr/local/bin/celery worker --hostname notify@%h --loglevel info --queues=notify --prefetch-multiplier=10 --concurrency 2
      ighor 12404 0.0 0.7 335728 124780 ? S Jun03 0:00 | \_ /usr/bin/python3 /usr/local/bin/celery worker --hostname notify@%h --loglevel info --queues=notify --prefetch-multiplier=10 --concurrency 2
      ighor 12424 0.0 0.7 335220 123392 ? S Jun03 0:00 | \_ /usr/bin/python3 /usr/local/bin/celery worker --hostname notify@%h --loglevel info --queues=notify --prefetch-multiplier=10 --concurrency 2
      ighor 11642 0.3 0.7 336240 130008 ? S Jun03 37:10 \_ /usr/bin/python3 /usr/local/bin/celery worker --hostname translate@%h --loglevel info --queues=translate --prefetch-multiplier=4 --concurrency 2
      ighor 12373 0.0 0.7 335212 123592 ? S Jun03 0:00 | \_ /usr/bin/python3 /usr/local/bin/celery worker --hostname translate@%h --loglevel info --queues=translate --prefetch-multiplier=4 --concurrency 2
      ighor 12400 0.0 0.7 335216 123612 ? S Jun03 0:00 | \_ /usr/bin/python3 /usr/local/bin/celery worker --hostname translate@%h --loglevel info --queues=translate --prefetch-multiplier=4 --concurrency 2
      ighor 11636 0.0 0.0 16540 6904 ? S Jun03 0:05 \_ /usr/bin/python3 /usr/local/bin/supervisor_stdout
      ighor 11638 0.0 0.7 337460 129836 ? S Jun03 0:49 \_ /usr/bin/python3 /usr/local/bin/celery beat --loglevel info --pidfile /run/celery/beat.pid
      ighor 11644 0.0 0.0 67620 6080 ? S Jun03 0:00 \_ nginx: master process /usr/sbin/nginx -g daemon off;
      ighor 11646 0.0 0.0 67964 2464 ? S Jun03 0:11 | \_ nginx: worker process
      ighor 11647 0.0 0.0 67964 2844 ? S Jun03 0:00 | \_ nginx: worker process
      ighor 11648 0.0 0.0 67964 2844 ? S Jun03 0:00 | \_ nginx: worker process
      ighor 11649 0.0 0.0 67964 2844 ? S Jun03 0:00 | \_ nginx: worker process
      ighor 11645 0.0 0.5 309072 90332 ? S Jun03 1:10 \_ /usr/bin/uwsgi --ini /etc/uwsgi/apps-enabled/weblate.ini
      ighor 12335 0.0 0.7 357520 117592 ? S Jun03 0:03 \_ /usr/bin/uwsgi --ini /etc/uwsgi/apps-enabled/weblate.ini
      ighor 12336 0.0 0.7 357520 116824 ? S Jun03 0:07 \_ /usr/bin/uwsgi --ini /etc/uwsgi/apps-enabled/weblate.ini
      ighor 12338 0.0 0.7 360848 120768 ? S Jun03 0:15 \_ /usr/bin/uwsgi --ini /etc/uwsgi/apps-enabled/weblate.ini
      ighor 12339 0.0 0.7 361872 121740 ? S Jun03 0:32 \_ /usr/bin/uwsgi --ini /etc/uwsgi/apps-enabled/weblate.ini
      ighor 12340 0.0 0.7 361360 121360 ? S Jun03 1:35 \_ /usr/bin/uwsgi --ini /etc/uwsgi/apps-enabled/weblate.ini
      I'm simply taking a guess at what they are. Your other option is to stop the containers one at a time, see when the problems disappear, and then go from there.
  18. The only references to python3 I could find in the running processes were related to nut and UWS. My suggestion is to stop those containers, set them to not autostart, reboot, and then see what happens.
  19. python3 keeps causing other processes to be killed off due to out-of-memory conditions. Those apps were the only ones I could see using python3.
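To confirm the OOM killer is at work (and which process it keeps reaping), grep the kernel log for its signature lines. A minimal sketch; the sample log line below is fabricated to mimic the kernel's format, and on the actual server you'd grep /var/log/syslog or the output of dmesg instead:

```shell
# Fabricated sample resembling a kernel OOM-killer entry; real entries
# come from /var/log/syslog or `dmesg` on the server.
log='Jun  9 01:02:03 Tower kernel: Out of memory: Killed process 11631 (python3) total-vm:1293528kB'

# Case-insensitive match picks up both "Out of memory" and "oom-killer"
# style messages.
printf '%s\n' "$log" | grep -iE 'out of memory|oom-killer'
```

The killed process's name and PID in those lines tell you which container's process is being reaped, which narrows down what to stop first.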
  20. Either your nut container or a unify (?) container looks like it's running hog wild. Stop them from autostarting, reboot, and go from there.
  21. The driver is crashing. Since it was working, I'd go with hardware failure.