• Web authorization often lost


    IGHOR
    • Annoyance

    This bug has been present since unRAID implemented the web login.
    I have 32 Docker containers on my unRAID server.

    Here are the steps I follow:

    1. Login to unRAID web interface
    2. Visit Docker tab

    After that, the Docker tab loads completely but then freezes; I can't click anything.

    It doesn't happen every time, but it happens often.

    And when I refresh the page, it asks me to log in again.

    It is really annoying; please fix it.

    My browser is Safari on the latest macOS (x86_64).

    Overall, the loss of authorization happens from time to time. It is also annoying to see the login interface rendered inside multiple web frames of the UI.

    julynas-diagnostics-20210610-1457.zip




    User Feedback

    Recommended Comments

    Either your nut container or a unify (?) container looks like it's running hogwild.  Stop them from autostarting and reboot and go from there.

    1 minute ago, Squid said:

    Either your nut container or a unify (?) container looks like it's running hogwild.  Stop them from autostarting and reboot and go from there.

    What do you mean by "it's running hogwild"?
    And how is that supposed to solve the problem?

    I have restarted my server many times, and this has been happening from time to time for half a year or so.


    python3 keeps killing off other processes due to out of memory.  Those apps were the only ones I could see using python3
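    Squid's observation can be checked from the command line. Below is a minimal sketch of grepping the syslog for OOM-killer activity; the sample log is fabricated so the commands run anywhere, and on a real unRAID server you would point `LOG` at `/var/log/syslog` instead.

```shell
# Sketch: look for OOM-killer activity in the syslog. The sample log below
# is fabricated so the commands are self-contained; point LOG at
# /var/log/syslog on a real server.
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
Jun 10 14:50:01 UnraID kernel: python3 invoked oom-killer: gfp_mask=0x100cca, order=0
Jun 10 14:50:01 UnraID kernel: Out of memory: Killed process 11631 (python3) total-vm:1293528kB
EOF

# How many times the OOM killer fired:
grep -ci "oom-killer" "$LOG"
# Which processes the kernel actually killed:
grep -i "killed process" "$LOG"

rm -f "$LOG"
```

    If the second grep names a process other than `python3`, that is the victim, not necessarily the culprit; the line that says `invoked oom-killer` identifies the process whose allocation triggered the kill.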


    Only references I could find to python3 in the processes running were related to nut and UWS.  My suggestion is to stop those containers, set them to not autostart and reboot and then see what happens.

    23 minutes ago, Squid said:

    Only references I could find to python3 in the processes running were related to nut and UWS.  My suggestion is to stop those containers, set them to not autostart and reboot and then see what happens.

    I don't have a nut docker, and nut is disabled in the settings.

    What is UWS?

    root     11535  0.5  0.0 113252  7728 ?        Sl   15:57   0:00 /usr/bin/containerd-shim-runc-v2 -namespace moby -id a7d898c64c1daf5840d74c5c4d431d8542f8947f9db5830a27ef5a5f2d51a37a -address /var/run/docker/containerd/containerd.sock
    root     11562  0.8  0.0   2392   760 ?        Ss   15:57   0:00  \_ sh /entrypoint.sh
    root     11631 22.1  0.2 1293528 44336 ?       Sl   15:57   0:01      \_ python3 /root/nut.src.latest/nut.py -S
    root     11630  0.0  0.0   7268  2172 ?        Ss   15:57   0:00      \_ /usr/sbin/cron
    root      8917  0.1  0.0  54584  4972 ?        Ssl  Jun08   3:49  \_ AmazonProxy --config /config --logs /logs
    root      9313  0.2  0.0 113380 15628 ?        Sl   Jun03  28:16 /usr/bin/containerd-shim-runc-v2 -namespace moby -id a8fe56fde35f21656ecc4ff30c6aeb84226886f89e6b612983721e64db4468f5 -address /var/run/docker/containerd/containerd.sock
    ighor     9336  0.0  0.1  33308 20096 ?        Ss   Jun03   5:06  \_ /usr/bin/python3 /usr/local/bin/supervisord --nodaemon --loglevel=info --logfile_maxbytes=0 --logfile=/dev/null --configuration=/etc/supervisor/supervisord.conf
    ighor    11637  0.3  0.7 336236 129948 ?       S    Jun03  35:55      \_ /usr/bin/python3 /usr/local/bin/celery worker --hostname backup@%h --loglevel info --concurrency=1 --queues=backup --prefetch-multiplier=2 --concurrency 1
    ighor    12401  0.0  0.7 335468 124884 ?       S    Jun03   0:00      |   \_ /usr/bin/python3 /usr/local/bin/celery worker --hostname backup@%h --loglevel info --concurrency=1 --queues=backup --prefetch-multiplier=2 --concurrency 1
    ighor    11639  0.3  0.7 336508 130184 ?       S    Jun03  41:15      \_ /usr/bin/python3 /usr/local/bin/celery worker --hostname celery@%h --loglevel info --queues=celery --prefetch-multiplier=4 --concurrency 4
    ighor    12452  0.0  0.8 352484 145540 ?       S    Jun03   1:29      |   \_ /usr/bin/python3 /usr/local/bin/celery worker --hostname celery@%h --loglevel info --queues=celery --prefetch-multiplier=4 --concurrency 4
    ighor    12476  0.0  0.7 335732 124284 ?       S    Jun03   0:01      |   \_ /usr/bin/python3 /usr/local/bin/celery worker --hostname celery@%h --loglevel info --queues=celery --prefetch-multiplier=4 --concurrency 4
    ighor    12482  0.0  0.7 335992 124560 ?       S    Jun03   0:02      |   \_ /usr/bin/python3 /usr/local/bin/celery worker --hostname celery@%h --loglevel info --queues=celery --prefetch-multiplier=4 --concurrency 4
    ighor    12484  0.0  0.7 335996 124684 ?       S    Jun03   0:00      |   \_ /usr/bin/python3 /usr/local/bin/celery worker --hostname celery@%h --loglevel info --queues=celery --prefetch-multiplier=4 --concurrency 4
    ighor    11640  0.3  0.7 336240 129744 ?       S    Jun03  37:23      \_ /usr/bin/python3 /usr/local/bin/celery worker --hostname memory@%h --loglevel info --queues=memory --prefetch-multiplier=10 --concurrency 2
    ighor    12483  0.0  0.7 335468 123384 ?       S    Jun03   0:00      |   \_ /usr/bin/python3 /usr/local/bin/celery worker --hostname memory@%h --loglevel info --queues=memory --prefetch-multiplier=10 --concurrency 2
    ighor    12489  0.0  0.7 335472 123400 ?       S    Jun03   0:00      |   \_ /usr/bin/python3 /usr/local/bin/celery worker --hostname memory@%h --loglevel info --queues=memory --prefetch-multiplier=10 --concurrency 2
    ighor    11641  0.3  0.7 336244 129856 ?       S    Jun03  37:29      \_ /usr/bin/python3 /usr/local/bin/celery worker --hostname notify@%h --loglevel info --queues=notify --prefetch-multiplier=10 --concurrency 2
    ighor    12404  0.0  0.7 335728 124780 ?       S    Jun03   0:00      |   \_ /usr/bin/python3 /usr/local/bin/celery worker --hostname notify@%h --loglevel info --queues=notify --prefetch-multiplier=10 --concurrency 2
    ighor    12424  0.0  0.7 335220 123392 ?       S    Jun03   0:00      |   \_ /usr/bin/python3 /usr/local/bin/celery worker --hostname notify@%h --loglevel info --queues=notify --prefetch-multiplier=10 --concurrency 2
    ighor    11642  0.3  0.7 336240 130008 ?       S    Jun03  37:10      \_ /usr/bin/python3 /usr/local/bin/celery worker --hostname translate@%h --loglevel info --queues=translate --prefetch-multiplier=4 --concurrency 2
    ighor    12373  0.0  0.7 335212 123592 ?       S    Jun03   0:00      |   \_ /usr/bin/python3 /usr/local/bin/celery worker --hostname translate@%h --loglevel info --queues=translate --prefetch-multiplier=4 --concurrency 2
    ighor    12400  0.0  0.7 335216 123612 ?       S    Jun03   0:00      |   \_ /usr/bin/python3 /usr/local/bin/celery worker --hostname translate@%h --loglevel info --queues=translate --prefetch-multiplier=4 --concurrency 2
    ighor    11636  0.0  0.0  16540  6904 ?        S    Jun03   0:05      \_ /usr/bin/python3 /usr/local/bin/supervisor_stdout
    ighor    11638  0.0  0.7 337460 129836 ?       S    Jun03   0:49      \_ /usr/bin/python3 /usr/local/bin/celery beat --loglevel info --pidfile /run/celery/beat.pid
    ighor    11644  0.0  0.0  67620  6080 ?        S    Jun03   0:00      \_ nginx: master process /usr/sbin/nginx -g daemon off;
    ighor    11646  0.0  0.0  67964  2464 ?        S    Jun03   0:11      |   \_ nginx: worker process
    ighor    11647  0.0  0.0  67964  2844 ?        S    Jun03   0:00      |   \_ nginx: worker process
    ighor    11648  0.0  0.0  67964  2844 ?        S    Jun03   0:00      |   \_ nginx: worker process
    ighor    11649  0.0  0.0  67964  2844 ?        S    Jun03   0:00      |   \_ nginx: worker process
    ighor    11645  0.0  0.5 309072 90332 ?        S    Jun03   1:10      \_ /usr/bin/uwsgi --ini /etc/uwsgi/apps-enabled/weblate.ini
    ighor    12335  0.0  0.7 357520 117592 ?       S    Jun03   0:03          \_ /usr/bin/uwsgi --ini /etc/uwsgi/apps-enabled/weblate.ini
    ighor    12336  0.0  0.7 357520 116824 ?       S    Jun03   0:07          \_ /usr/bin/uwsgi --ini /etc/uwsgi/apps-enabled/weblate.ini
    ighor    12338  0.0  0.7 360848 120768 ?       S    Jun03   0:15          \_ /usr/bin/uwsgi --ini /etc/uwsgi/apps-enabled/weblate.ini
    ighor    12339  0.0  0.7 361872 121740 ?       S    Jun03   0:32          \_ /usr/bin/uwsgi --ini /etc/uwsgi/apps-enabled/weblate.ini
    ighor    12340  0.0  0.7 361360 121360 ?       S    Jun03   1:35          \_ /usr/bin/uwsgi --ini /etc/uwsgi/apps-enabled/weblate.ini
    I'm simply taking a guess at what they are.  Your other option is to stop the containers one at a time and see when the problems disappear and then go from there.
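    The one-at-a-time elimination Squid describes can be sketched as a dry-run loop. The container names below are examples, not taken from the diagnostics; replacing `echo docker` with `docker` would actually stop and restart each container.

```shell
# Dry-run sketch of the elimination approach: stop one suspect container,
# use the web UI for a while, restart it, then move to the next.
# Container names here are placeholders.
SUSPECTS="weblate nut"
for c in $SUSPECTS; do
    echo docker stop "$c"
    # ...observe whether the Docker tab freeze / forced logout still occurs,
    # then bring the container back before testing the next one:
    echo docker start "$c"
done
```

    Note that on unRAID, autostart is controlled from the Docker settings page rather than the `docker` CLI, so disabling autostart for a suspect container should be done there.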

    6 minutes ago, Squid said:

    I'm simply taking a guess at what they are.  Your other option is to stop the containers one at a time and see when the problems disappear and then go from there.

    I see, that's the Weblate docker, and nut under another name. Thanks.

    I'll try stopping them and wait to see if the issue happens again. In any case, that docker's state is 'Uptime: 7 days (healthy)', yet I had the issue today.

    Edited by IGHOR
    Nov 17 23:14:14 UnraID kernel: rcu: INFO: rcu_sched detected stalls on CPUs/tasks:
    Nov 17 23:14:14 UnraID kernel: rcu:     3-...!: (1 ticks this GP) idle=ac2/1/0x4000000000000000 softirq=998187647/998187647 fqs=0 
    Nov 17 23:14:14 UnraID kernel:     (detected by 2, t=98437 jiffies, g=1948964373, q=40)
    Nov 17 23:14:14 UnraID kernel: Sending NMI from CPU 2 to CPUs 3:
    Nov 17 23:14:14 UnraID kernel: NMI backtrace for cpu 3
    Nov 17 23:14:14 UnraID kernel: CPU: 3 PID: 29087 Comm: kworker/3:1 Tainted: P           O      5.10.28-Unraid #1
    Nov 17 23:14:14 UnraID kernel: Hardware name: Gigabyte Technology Co., Ltd. P43T-ES3G/P43T-ES3G, BIOS F8 01/12/2012
    Nov 17 23:14:14 UnraID kernel: Workqueue: events dbs_work_handler
    Nov 17 23:14:14 UnraID kernel: RIP: 0010:rb_erase+0x10e/0x24e
    Nov 17 23:14:14 UnraID kernel: Code: 48 89 02 4d 85 c9 74 0b 48 83 cf 01 49 89 39 48 89 08 c3 f6 00 01 48 89 08 75 01 c3 31 c9 48 8b 77 08 48 39 ce 74 7e f6 06 01 <75> 20 4c 8b 46 10 48 89 f8 31 c9 48 83 c8 01 4c 89 47 08 48 89 7e
    Nov 17 23:14:14 UnraID kernel: RSP: 0018:ffffc9000011cf10 EFLAGS: 00000002
    Nov 17 23:14:14 UnraID kernel: RAX: ffffc90000573ea8 RBX: ffffc90009a6fdf0 RCX: ffffc9000390bdf0
    Nov 17 23:14:14 UnraID kernel: RDX: ffff888417d9f0a0 RSI: ffffc900036d3d08 RDI: ffffc90000573ea8
    Nov 17 23:14:14 UnraID kernel: RBP: ffff888417d9f0a0 R08: 0000000000000000 R09: ffff888103afda00
    Nov 17 23:14:14 UnraID kernel: R10: ffff888191241310 R11: ffff888417da2400 R12: ffff888417d9f080
    Nov 17 23:14:14 UnraID kernel: R13: 0000000000000000 R14: ffffc90009a6fdf0 R15: ffff888417d9f040
    Nov 17 23:14:14 UnraID kernel: FS:  0000000000000000(0000) GS:ffff888417d80000(0000) knlGS:0000000000000000
    Nov 17 23:14:14 UnraID kernel: CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
    Nov 17 23:14:14 UnraID kernel: CR2: 00007f336ddb1538 CR3: 0000000196244000 CR4: 00000000000006e0
    Nov 17 23:14:14 UnraID kernel: Call Trace:
    Nov 17 23:14:14 UnraID kernel: <IRQ>
    Nov 17 23:14:14 UnraID kernel: timerqueue_del+0x2c/0x3c
    Nov 17 23:14:14 UnraID kernel: __remove_hrtimer+0x28/0x7d
    Nov 17 23:14:14 UnraID kernel: __hrtimer_run_queues+0x96/0x10b
    Nov 17 23:14:14 UnraID kernel: hrtimer_interrupt+0x8d/0x15b
    Nov 17 23:14:14 UnraID kernel: __sysvec_apic_timer_interrupt+0x5d/0x68
    Nov 17 23:14:14 UnraID kernel: asm_call_irq_on_stack+0x12/0x20
    Nov 17 23:14:14 UnraID kernel: </IRQ>
    Nov 17 23:14:14 UnraID kernel: sysvec_apic_timer_interrupt+0x71/0x95
    Nov 17 23:14:14 UnraID kernel: asm_sysvec_apic_timer_interrupt+0x12/0x20
    Nov 17 23:14:14 UnraID kernel: RIP: 0010:acpi_os_write_port+0x15/0x26
    Nov 17 23:14:14 UnraID kernel: Code: 33 04 25 28 00 00 00 74 05 e8 a0 2f 32 00 31 c0 48 83 c4 10 c3 83 fa 08 89 f0 77 05 89 fa ee eb 17 83 fa 10 77 06 89 fa 66 ef <eb> 0c 83 fa 20 77 05 89 fa ef eb 02 0f 0b 31 c0 c3 48 8b 47 08 48
    Nov 17 23:14:14 UnraID kernel: RSP: 0018:ffffc90003a6fd88 EFLAGS: 00000246
    Nov 17 23:14:14 UnraID kernel: RAX: 0000000000000136 RBX: ffff888100d1f000 RCX: ffff8881008ae100
    Nov 17 23:14:14 UnraID kernel: RDX: 0000000000000880 RSI: 0000000000000136 RDI: 0000000000000880
    Nov 17 23:14:14 UnraID kernel: RBP: ffff888100d1f000 R08: 0000000000000000 R09: 0000000000000000
    Nov 17 23:14:14 UnraID kernel: R10: 8080808080808080 R11: 000000000000001b R12: ffff888417daad30
    Nov 17 23:14:14 UnraID kernel: R13: 0000000000000001 R14: 000000000000000c R15: 00000000001e8480
    Nov 17 23:14:14 UnraID kernel: ? acpi_cpufreq_cpu_exit+0x3f/0x3f [acpi_cpufreq]
    Nov 17 23:14:14 UnraID kernel: acpi_cpufreq_target+0xd9/0x173 [acpi_cpufreq]
    Nov 17 23:14:14 UnraID kernel: ? cpufreq_freq_transition_begin+0xdd/0xfc
    Nov 17 23:14:14 UnraID kernel: ? acpi_cpufreq_cpu_exit+0x3f/0x3f [acpi_cpufreq]
    Nov 17 23:14:14 UnraID kernel: __cpufreq_driver_target+0x181/0x217
    Nov 17 23:14:14 UnraID kernel: od_dbs_update+0x132/0x16a
    Nov 17 23:14:14 UnraID kernel: dbs_work_handler+0x30/0x57
    Nov 17 23:14:14 UnraID kernel: process_one_work+0x13c/0x1d5
    Nov 17 23:14:14 UnraID kernel: worker_thread+0x18b/0x22f
    Nov 17 23:14:14 UnraID kernel: ? process_scheduled_works+0x27/0x27
    Nov 17 23:14:14 UnraID kernel: kthread+0xe5/0xea
    Nov 17 23:14:14 UnraID kernel: ? __kthread_bind_mask+0x57/0x57
    Nov 17 23:14:14 UnraID kernel: ret_from_fork+0x22/0x30




  • Status Definitions

    Open = Under consideration.

    Solved = The issue has been resolved.

    Solved version = The issue has been resolved in the indicated release version.

    Closed = Feedback or opinion better posted on our forum for discussion. Also for reports we cannot reproduce or need more information. In this case just add a comment and we will review it again.

    Retest = Please retest in latest release.


    Priority Definitions

    Minor = Something not working correctly.

    Urgent = Server crash, data loss, or other showstopper.

    Annoyance = Doesn't affect functionality but should be fixed.

    Other = Announcement or other non-issue.