Urbanpixels


Posts posted by Urbanpixels

  1. I have the port specified in the connection (Windows 11 machines)

     

    With no password set it works. 

     

     With a password set, it does not. Could this be fixed? I don't want a VNC connection running without a password.

     Is it related to TightVNC not needing a username, just a password?
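
     To rule out the VNC server itself, a plain TightVNC viewer run from the guacd host should prompt for the same password (address and port below are placeholders); if that connects fine, the problem is in how Guacamole hands the password to guacd:

    vncviewer 192.168.1.50::5900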

  2. I also have issues with VNC connections. It's as if Guacamole didn't even try to connect; it just said the connection failed.

     

     An immediate failure, not like when something times out.

     

    guacd[171]: ERROR:      Unable to connect to VNC server.
    guacd[171]: INFO:       User "@c" disconnected (0 users remain)
    guacd[171]: INFO:       Last user of connection "$d" disconnected
    guacd[24]: INFO:        Connection "$d" removed.

     

     Connection from another machine with the same VNC details works as it should.

     

     Guac to the same machine via RDP works, so it's not a VLAN/network issue. Just not VNC, since the latest update.
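
     Since the failure is instant, it looks like guacd can't even open the TCP socket. A quick check worth running from wherever guacd actually lives (container name, address, and port below are placeholders, and this assumes nc is present in the image):

    docker exec -it guacd nc -zv 192.168.1.50 5900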

  3. I've been in touch with CloudBerry support for the last 2 weeks about the issue of local storage backups failing with a cloud limit error.

     

     Although they didn't say why or how, they did reproduce the error and have apparently fixed it.

     

    Quote

    While we will be investigating your current issue you can update your backup agent to the latest version. We released it (v. 4.0.2.402) yesterday and initial problem that you were experiencing with CBF plans to local destinations should be fixed. So you should be running your old backup plans without issues after the update. 

     

    Quote

    Since updating to 4.0.1.310 my local storage backup is no longer working. I get the below error:

    Cloud object size limit exceeded (code: 1618)
    1 file was not backed up due to a cloud object size limit excess - 5.37 GB

     I'm running the personal licence and have never seen this error before. I also cannot find error 1618 in any of the CloudBerry documentation.

     Additionally, I have the same backup set running to a Backblaze bucket and this completes fine. The issue only appears to be on my local storage.

     

  4. Hi Guys, 

     

     I've been using CloudBerry for a while, but the latest update has broken something for me.

     

    Since updating to 4.0.1.310 my local storage backup is no longer working. I get the below error:

    Cloud object size limit exceeded (code: 1618)
    1 file was not backed up due to a cloud object size limit excess - 5.37 GB


     I'm running the personal licence and have never seen this error before. I also cannot find error 1618 in any of the CloudBerry documentation.

     Additionally, I have the same backup set running to a Backblaze bucket and this completes fine. The issue only appears to be on my local storage.

     

    Has anybody seen similar issues?

  5. Hi Guys, 

     

     I've been using CloudBerry for a while on my Unraid installation, and in the past few months I'm running into an issue with jobs failing.

     

     I think this is down to the disks on my Unraid server being spun down. The CloudBerry app does not seem to force, or ask, Unraid to spin up the array disks where my files are. This means the job usually fails.

     

     If I manually spin up all the disks and run the job, it completes with no problem.

     

     Is there a way around this? I don't really want to keep the drives spun up 24/7, as it gets too warm where my server is located.
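
     One workaround I'm considering, sketched below: a small script run before the backup window that asks Unraid's md driver to spin the array disks up. Disk numbers are illustrative, and mdcmd is Unraid-specific:

    #!/bin/bash
    # Spin up each array disk before the backup job starts, so CloudBerry
    # doesn't fail against spun-down drives (disk numbers are examples).
    for disk in 1 2 3 4; do
        /usr/local/sbin/mdcmd spinup $disk
    done
    sleep 15   # give the drives a moment to reach full speed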

  6. My Unraid install was on VLAN 150; most of my Dockers were on VLAN 130.

     

     I've now moved everything onto VLAN 130, with br0 and the VLANs unused. Everything is in bridge mode and I've offloaded Pi-hole to another device.

     

     It seems quite silly that we can't apply different IP addresses to different Dockers. It sort of negates the whole point? I never had this issue with FreeNAS.

  7. Good morning. 

     

     I have an installation which has been running for around 2 months. Ever since creating it I've had call traces in some form or other, usually macvlan, but now I'm not sure what is causing them.

     

     I had separate Dockers on static IPs using VLANs and still had these issues, even though the big macvlan thread says to use VLANs.

     

     Now I've moved everything to bridge without static IPs (other than my Pi-hole Docker, which is still on a VLAN) and I'm still getting call traces. I'm starting to wonder if it's from something else, but I don't understand enough to decipher the call trace.

     

    Can anybody help? I've attached diagnostics too.

     

    Sep 29 19:42:13 Radiant-Unraid kernel: WARNING: CPU: 2 PID: 0 at net/netfilter/nf_conntrack_core.c:945 __nf_conntrack_confirm+0xa0/0x69e
    Sep 29 19:42:13 Radiant-Unraid kernel: Modules linked in: macvlan tun xt_nat veth nvidia_uvm(O) ipt_MASQUERADE iptable_filter iptable_nat nf_nat_ipv4 nf_nat ip_tables xfs md_mod nvidia_drm(PO) nvidia_modeset(PO) nvidia(PO) edac_mce_amd crc32_pclmul pcbc aesni_intel aes_x86_64 glue_helper crypto_simd ghash_clmulni_intel cryptd drm_kms_helper drm kvm_amd kvm syscopyarea sysfillrect sysimgblt fb_sys_fops mpt3sas rsnvme(PO) agpgart i2c_piix4 ccp i2c_core raid_class atlantic scsi_transport_sas nvme ahci mxm_wmi wmi_bmof crct10dif_pclmul nvme_core libahci pcc_cpufreq crc32c_intel wmi button acpi_cpufreq
    Sep 29 19:42:13 Radiant-Unraid kernel: CPU: 2 PID: 0 Comm: swapper/2 Tainted: P O 4.19.107-Unraid #1
    Sep 29 19:42:13 Radiant-Unraid kernel: Hardware name: System manufacturer System Product Name/ROG CROSSHAIR VII HERO, BIOS 3103 06/17/2020
    Sep 29 19:42:13 Radiant-Unraid kernel: RIP: 0010:__nf_conntrack_confirm+0xa0/0x69e
    Sep 29 19:42:13 Radiant-Unraid kernel: Code: 04 e8 56 fb ff ff 44 89 f2 44 89 ff 89 c6 41 89 c4 e8 7f f9 ff ff 48 8b 4c 24 08 84 c0 75 af 48 8b 85 80 00 00 00 a8 08 74 26 <0f> 0b 44 89 e6 44 89 ff 45 31 f6 e8 95 f1 ff ff be 00 02 00 00 48
    Sep 29 19:42:13 Radiant-Unraid kernel: RSP: 0018:ffff8887fe6838e8 EFLAGS: 00010202
    Sep 29 19:42:13 Radiant-Unraid kernel: RAX: 0000000000000188 RBX: ffff8887f80aeb00 RCX: ffff88833490ce18
    Sep 29 19:42:13 Radiant-Unraid kernel: RDX: 0000000000000001 RSI: 0000000000000133 RDI: ffffffff81e093a4
    Sep 29 19:42:13 Radiant-Unraid kernel: RBP: ffff88833490cdc0 R08: 000000008cfd706e R09: ffffffff81c8aa80
    Sep 29 19:42:13 Radiant-Unraid kernel: R10: 0000000000000158 R11: ffffffff81e91080 R12: 0000000000003d33
    Sep 29 19:42:13 Radiant-Unraid kernel: R13: ffffffff81e91080 R14: 0000000000000000 R15: 000000000000f769
    Sep 29 19:42:13 Radiant-Unraid kernel: FS: 0000000000000000(0000) GS:ffff8887fe680000(0000) knlGS:0000000000000000
    Sep 29 19:42:13 Radiant-Unraid kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
    Sep 29 19:42:13 Radiant-Unraid kernel: CR2: 0000559a1bdefca8 CR3: 00000007a3ef0000 CR4: 0000000000340ee0
    Sep 29 19:42:13 Radiant-Unraid kernel: Call Trace:
    Sep 29 19:42:13 Radiant-Unraid kernel: <IRQ>
    Sep 29 19:42:13 Radiant-Unraid kernel: ipv4_confirm+0xaf/0xb9
    Sep 29 19:42:13 Radiant-Unraid kernel: nf_hook_slow+0x3a/0x90
    Sep 29 19:42:13 Radiant-Unraid kernel: ip_local_deliver+0xad/0xdc
    Sep 29 19:42:13 Radiant-Unraid kernel: ? ip_sublist_rcv_finish+0x54/0x54
    Sep 29 19:42:13 Radiant-Unraid kernel: ip_sabotage_in+0x38/0x3e
    Sep 29 19:42:13 Radiant-Unraid kernel: nf_hook_slow+0x3a/0x90
    Sep 29 19:42:13 Radiant-Unraid kernel: ip_rcv+0x8e/0xbe
    Sep 29 19:42:13 Radiant-Unraid kernel: ? ip_rcv_finish_core.isra.0+0x2e1/0x2e1
    Sep 29 19:42:13 Radiant-Unraid kernel: __netif_receive_skb_one_core+0x53/0x6f
    Sep 29 19:42:13 Radiant-Unraid kernel: netif_receive_skb_internal+0x79/0x94
    Sep 29 19:42:13 Radiant-Unraid kernel: br_pass_frame_up+0x128/0x14a
    Sep 29 19:42:13 Radiant-Unraid kernel: ? br_flood+0xa4/0x148
    Sep 29 19:42:13 Radiant-Unraid kernel: ? br_fdb_update+0x56/0x13d
    Sep 29 19:42:13 Radiant-Unraid kernel: br_handle_frame_finish+0x342/0x383
    Sep 29 19:42:13 Radiant-Unraid kernel: ? br_pass_frame_up+0x14a/0x14a
    Sep 29 19:42:13 Radiant-Unraid kernel: br_nf_hook_thresh+0xa3/0xc3
    Sep 29 19:42:13 Radiant-Unraid kernel: ? br_pass_frame_up+0x14a/0x14a
    Sep 29 19:42:13 Radiant-Unraid kernel: br_nf_pre_routing_finish+0x24a/0x271
    Sep 29 19:42:13 Radiant-Unraid kernel: ? br_pass_frame_up+0x14a/0x14a
    Sep 29 19:42:13 Radiant-Unraid kernel: ? br_handle_local_finish+0xe/0xe
    Sep 29 19:42:13 Radiant-Unraid kernel: ? nf_nat_ipv4_in+0x1e/0x62 [nf_nat_ipv4]
    Sep 29 19:42:13 Radiant-Unraid kernel: ? br_handle_local_finish+0xe/0xe
    Sep 29 19:42:13 Radiant-Unraid kernel: br_nf_pre_routing+0x31c/0x343
    Sep 29 19:42:13 Radiant-Unraid kernel: ? br_nf_forward_ip+0x362/0x362
    Sep 29 19:42:13 Radiant-Unraid kernel: nf_hook_slow+0x3a/0x90
    Sep 29 19:42:13 Radiant-Unraid kernel: br_handle_frame+0x27e/0x2bd
    Sep 29 19:42:13 Radiant-Unraid kernel: ? br_pass_frame_up+0x14a/0x14a
    Sep 29 19:42:13 Radiant-Unraid kernel: __netif_receive_skb_core+0x4a7/0x7b1
    Sep 29 19:42:13 Radiant-Unraid kernel: ? inet_gro_receive+0x246/0x254
    Sep 29 19:42:13 Radiant-Unraid kernel: __netif_receive_skb_one_core+0x35/0x6f
    Sep 29 19:42:13 Radiant-Unraid kernel: netif_receive_skb_internal+0x79/0x94
    Sep 29 19:42:13 Radiant-Unraid kernel: napi_gro_receive+0x44/0x7b
    Sep 29 19:42:13 Radiant-Unraid kernel: aq_ring_rx_clean+0x32d/0x35b [atlantic]
    Sep 29 19:42:13 Radiant-Unraid kernel: ? hw_atl_b0_hw_ring_rx_receive+0x12b/0x1f7 [atlantic]
    Sep 29 19:42:13 Radiant-Unraid kernel: aq_vec_poll+0xf2/0x180 [atlantic]
    Sep 29 19:42:13 Radiant-Unraid kernel: net_rx_action+0x107/0x26c
    Sep 29 19:42:13 Radiant-Unraid kernel: __do_softirq+0xc9/0x1d7
    Sep 29 19:42:13 Radiant-Unraid kernel: irq_exit+0x5e/0x9d
    Sep 29 19:42:13 Radiant-Unraid kernel: do_IRQ+0xb2/0xd0
    Sep 29 19:42:13 Radiant-Unraid kernel: common_interrupt+0xf/0xf
    Sep 29 19:42:13 Radiant-Unraid kernel: </IRQ>
    Sep 29 19:42:13 Radiant-Unraid kernel: RIP: 0010:cpuidle_enter_state+0xe8/0x141
    Sep 29 19:42:13 Radiant-Unraid kernel: Code: ff 45 84 f6 74 1d 9c 58 0f 1f 44 00 00 0f ba e0 09 73 09 0f 0b fa 66 0f 1f 44 00 00 31 ff e8 7a 8d bb ff fb 66 0f 1f 44 00 00 <48> 2b 2c 24 b8 ff ff ff 7f 48 b9 ff ff ff ff f3 01 00 00 48 39 cd
    Sep 29 19:42:13 Radiant-Unraid kernel: RSP: 0018:ffffc90003253e98 EFLAGS: 00000246 ORIG_RAX: ffffffffffffffd6
    Sep 29 19:42:13 Radiant-Unraid kernel: RAX: ffff8887fe69fac0 RBX: ffff8887f6f66c00 RCX: 000000000000001f
    Sep 29 19:42:13 Radiant-Unraid kernel: RDX: 0000000000000000 RSI: 00000000238e3ad4 RDI: 0000000000000000
    Sep 29 19:42:13 Radiant-Unraid kernel: RBP: 00003eb130160966 R08: 00003eb130160966 R09: 0000000000002ecb
    Sep 29 19:42:13 Radiant-Unraid kernel: R10: 000000002510a978 R11: 071c71c71c71c71c R12: 0000000000000002
    Sep 29 19:42:13 Radiant-Unraid kernel: R13: ffffffff81e5e1e0 R14: 0000000000000000 R15: ffffffff81e5e2b8
    Sep 29 19:42:13 Radiant-Unraid kernel: ? cpuidle_enter_state+0xbf/0x141
    Sep 29 19:42:13 Radiant-Unraid kernel: do_idle+0x17e/0x1fc
    Sep 29 19:42:13 Radiant-Unraid kernel: cpu_startup_entry+0x6a/0x6c
    Sep 29 19:42:13 Radiant-Unraid kernel: start_secondary+0x197/0x1b2
    Sep 29 19:42:13 Radiant-Unraid kernel: secondary_startup_64+0xa4/0xb0
    Sep 29 19:42:13 Radiant-Unraid kernel: ---[ end trace e96b4b447794652c ]---

     

     

    radiant-unraid-diagnostics-20200930-1052.zip

  8. Thanks for your work on the Xteve_VPN docker. 

     

     It works great here; even with VLAN (not bridge mode), all my traffic is going via the VPN as it should.

     

    If it's at all possible I'd love for one feature to be available. 

     

     Currently you have to specify the Local NET. This works when assigning the same LAN as my Plex server, and it works great. However, most of my PCs are on a different VLAN, which makes accessing the webGUI for changes a bit of a pain (it has to be done from the defined NET).

     

     Any chance we could define more than one Local NET? Would that work? I tried comma-separated values, but it didn't work as I expected (what I tried is sketched below).
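
     For reference, this is roughly the form I tried. Whether LOCAL_NET accepts a comma-separated list depends entirely on the image's init script, and the subnets and image name here are placeholders:

    docker run -d --name xteve-vpn \
      -e LOCAL_NET="192.168.130.0/24,192.168.150.0/24" \
      xteve_vpn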

     

    Thanks again

  9. Good morning.

     

     Firstly, thanks to spaceinvader for this plugin and the excellent videos; they've helped me loads.

     

     I have a working Catalina install; iMessage, App Store etc. all work as they should. However, I'm struggling to get Content Caching to work; the box is just grayed out.

     

     Does anybody have any idea how to fix this? I've read that it struggles to work in a VM environment, but surely if iMessage and iCloud work there must be a workaround for caching?
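
     One thing that might give a more useful error than the grayed-out checkbox is the stock macOS CLI for content caching; whether activation actually sticks inside a VM is the open question:

    AssetCacheManagerUtil status
    sudo AssetCacheManagerUtil activate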


    TIA

  10. Hi Guys,

     

     I love this plugin, but I'm getting some weird log issues. It's spamming my log every 2 seconds or so; any ideas? I have to disable it to view anything else in my log.

     

    May 8 09:20:46 Urbanpixels kernel: caller _nv000908rm+0x1bf/0x1f0 [nvidia] mapping multiple BARs
    May 8 09:20:49 Urbanpixels kernel: resource sanity check: requesting [mem 0x000c0000-0x000fffff], which spans more than PCI Bus 0000:00 [mem 0x000c0000-0x000dffff window]
    May 8 09:20:49 Urbanpixels kernel: caller _nv000908rm+0x1bf/0x1f0 [nvidia] mapping multiple BARs
    May 8 09:20:52 Urbanpixels kernel: resource sanity check: requesting [mem 0x000c0000-0x000fffff], which spans more than PCI Bus 0000:00 [mem 0x000c0000-0x000dffff window]
    May 8 09:20:52 Urbanpixels kernel: caller _nv000908rm+0x1bf/0x1f0 [nvidia] mapping multiple BARs
    May 8 09:20:54 Urbanpixels kernel: resource sanity check: requesting [mem 0x000c0000-0x000fffff], which spans more than PCI Bus 0000:00 [mem 0x000c0000-0x000dffff window]
    May 8 09:20:54 Urbanpixels kernel: caller _nv000908rm+0x1bf/0x1f0 [nvidia] mapping multiple BARs

     

     Edit: it could be something else, as I get the errors in the log with the plugin uninstalled when running the command above.
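
     As a stopgap I may filter the repeating message out of the syslog while the cause is found. A sketch, assuming Unraid's stock rsyslog and Slackware-style rc script paths:

    # add near the top of /etc/rsyslog.conf, before the output rules
    # (a property-based filter that drops the repeating kernel message):
    :msg, contains, "mapping multiple BARs" stop

    # then restart rsyslogd:
    /etc/rc.d/rc.rsyslogd restart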

  11. 4 minutes ago, johnnie.black said:

    To make a direct replacement you'd need both old and new devices connected, which might not be an option with m.2 devices, you can still remove one device, convert pool to single, then add another device and reconvert to raid1, but before starting make sure your pool is redundant because of this bug, if you want post diags to confirm.

    Thanks, 

     

     I assumed RAID1 was a complete mirror, so I'm surprised to learn you can't just replace one drive.

     

     Mine looks fine here, I believe:

     

        Device size:                   1.86TiB
        Device allocated:             62.06GiB
        Device unallocated:            1.80TiB
        Device missing:                  0.00B
        Used:                         53.68GiB
        Free (estimated):            926.11GiB      (min: 926.11GiB)
        Data ratio:                       2.00
        Metadata ratio:                   2.00
        Global reserve:               20.12MiB      (used: 0.00B)

                      Data     Metadata  System              
    Id Path           RAID1    RAID1     RAID1    Unallocated
    -- -------------- -------- --------- -------- -----------
     1 /dev/nvme0n1p1 30.00GiB   1.00GiB 32.00MiB   922.84GiB
     2 /dev/nvme1n1p1 30.00GiB   1.00GiB 32.00MiB   922.84GiB
    -- -------------- -------- --------- -------- -----------
       Total          30.00GiB   1.00GiB 32.00MiB     1.80TiB
       Used           26.72GiB 120.50MiB 16.00KiB     
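
     So, as I understand the procedure described above, it would be something like the following. The mount point and replacement device path are illustrative, and -f is needed when reducing metadata redundancy:

    # convert the degraded pool to single profiles first
    btrfs balance start -f -dconvert=single -mconvert=single /mnt/cache
    # drop the record of the missing device
    btrfs device remove missing /mnt/cache
    # add the replacement and convert back to RAID1
    btrfs device add /dev/nvme1n1p1 /mnt/cache
    btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/cache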

  12. Hi Guys, 

     

     I have 2x cache drives (M.2) in my Unraid pool. I think one of the drives is failing, as I'm seeing loads of warnings.

     

     If the devices are in RAID1, I should be able to just power down the machine and replace the damaged drive, yes? I will not lose any data? I expect I will just need to re-balance afterwards?

     

     Or am I totally wrong about that?

     

    Many thanks.

     

     

  13. Hi Guys, 

     

     So I've had a bit of an update on my issues. I reinstalled all the Dockers which were causing problems, and overnight the system crashed, but not in the same way as before.

     

     I could not access my webGUI at all; I was given an NGINX error 500 after a few seconds. I can access SSH, all my Dockers are working, and shares are still accessible, which is not the same as before. I've attached a new log file which shows some macvlan issues around 2am on the 7th of Feb.

     

     I issued a restart with powerdown -r, but this didn't do anything. I had to run it with sudo, which did restart the box, albeit uncleanly, and triggered a parity check.
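
     Next time, before resorting to an unclean reboot, I'll try restarting just the webGUI stack over SSH first; I assume something like this works, given Unraid's Slackware-style rc scripts:

    /etc/rc.d/rc.nginx restart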

     

     If anybody has any ideas, it would be appreciated. Without Pi-hole and Homebridge running, the installation is solid. These issues only occur when I use Dockers with different IPs to the Unraid box.

     

    TIA

     

     

     

    urbanpixels-diagnostics-20200207-0752.zip

  14. Hi Guys, 

     

     I'm new to Unraid, so please forgive anything which may be obvious.

     

     I'm having issues assigning a separate IP (on the same subnet as the server) to a Docker container using br0.

     

     When Docker containers use Host, everything is peachy. When using Custom: br0, I get issues almost instantly.

     

     The main server is 192.168.1.10/24. Dockers which use host networking and mapped ports are OK; however, if I give a Docker container an IP of 192.168.1.20, I get macvlan trace warnings within a few minutes and eventually hard crashes. The call traces go away instantly if a container which uses its own IP via br0 is shut down or removed.

     

     I'm using 6.8.2, but I've had the issue since I started using Unraid on 6.8.0.

     

     My network adapter is a 10-gig interface which Unraid has set up as bond0.

     

     I've included one excerpt from the log below. I've seen other threads on here about this issue, but none seem to fix it; they only suggest using another physical NIC for the Docker in question (a sketch of that is after the log). I came from FreeNAS using jails, and it was never an issue to use separate IP addresses per jail.

     

    Jan 12 12:40:45 Urbanpixels kernel: CR2: 000019013158a000 CR3: 0000000001e0a005 CR4: 00000000003606f0
    Jan 12 12:40:45 Urbanpixels kernel: Call Trace:
    Jan 12 12:40:45 Urbanpixels kernel: <IRQ>
    Jan 12 12:40:45 Urbanpixels kernel: nf_nat_used_tuple+0x2e/0x49 [nf_nat]
    Jan 12 12:40:45 Urbanpixels kernel: nf_nat_setup_info+0x5fd/0x666 [nf_nat]
    Jan 12 12:40:45 Urbanpixels kernel: ? ipt_do_table+0x5da/0x62a [ip_tables]
    Jan 12 12:40:45 Urbanpixels kernel: nf_nat_alloc_null_binding+0x71/0x88 [nf_nat]
    Jan 12 12:40:45 Urbanpixels kernel: nf_nat_inet_fn+0x9f/0x1b9 [nf_nat]
    Jan 12 12:40:45 Urbanpixels kernel: ? br_handle_local_finish+0xe/0xe
    Jan 12 12:40:45 Urbanpixels kernel: nf_nat_ipv4_in+0x1e/0x62 [nf_nat_ipv4]
    Jan 12 12:40:45 Urbanpixels kernel: nf_hook_slow+0x3a/0x90
    Jan 12 12:40:45 Urbanpixels kernel: br_nf_pre_routing+0x303/0x343
    Jan 12 12:40:45 Urbanpixels kernel: ? br_nf_forward_ip+0x362/0x362
    Jan 12 12:40:45 Urbanpixels kernel: nf_hook_slow+0x3a/0x90
    Jan 12 12:40:45 Urbanpixels kernel: br_handle_frame+0x27e/0x2bd
    Jan 12 12:40:45 Urbanpixels kernel: ? br_pass_frame_up+0x14a/0x14a
    Jan 12 12:40:45 Urbanpixels kernel: __netif_receive_skb_core+0x464/0x76e
    Jan 12 12:40:45 Urbanpixels kernel: ? __kmalloc_node_track_caller+0x11b/0x12c
    Jan 12 12:40:45 Urbanpixels kernel: __netif_receive_skb_one_core+0x35/0x6f
    Jan 12 12:40:45 Urbanpixels kernel: netif_receive_skb_internal+0x9f/0xba
    Jan 12 12:40:45 Urbanpixels kernel: process_responses+0xd4d/0xee4 [cxgb3]
    Jan 12 12:40:45 Urbanpixels kernel: ? enqueue_task_fair+0xba/0x557
    Jan 12 12:40:45 Urbanpixels kernel: napi_rx_handler+0x1f/0x5f [cxgb3]
    Jan 12 12:40:45 Urbanpixels kernel: net_rx_action+0x107/0x26c
    Jan 12 12:40:45 Urbanpixels kernel: __do_softirq+0xc9/0x1d7
    Jan 12 12:40:45 Urbanpixels kernel: irq_exit+0x5e/0x9d
    Jan 12 12:40:45 Urbanpixels kernel: do_IRQ+0xb2/0xd0
    Jan 12 12:40:45 Urbanpixels kernel: common_interrupt+0xf/0xf
    Jan 12 12:40:45 Urbanpixels kernel: </IRQ>
    Jan 12 12:40:45 Urbanpixels kernel: RIP: 0010:cpuidle_enter_state+0xe8/0x141
    Jan 12 12:40:45 Urbanpixels kernel: Code: ff 45 84 f6 74 1d 9c 58 0f 1f 44 00 00 0f ba e0 09 73 09 0f 0b fa 66 0f 1f 44 00 00 31 ff e8 e0 99 bb ff fb 66 0f 1f 44 00 00 <48> 2b 2c 24 b8 ff ff ff 7f 48 b9 ff ff ff ff f3 01 00 00 48 39 cd
    Jan 12 12:40:45 Urbanpixels kernel: RSP: 0018:ffffffff81e03e80 EFLAGS: 00000246 ORIG_RAX: ffffffffffffffd7
    Jan 12 12:40:45 Urbanpixels kernel: RAX: ffff888436a1fac0 RBX: ffff888436a2a000 RCX: 000000000000001f
    Jan 12 12:40:45 Urbanpixels kernel: RDX: 0000000000000000 RSI: 000000002aaaaaaa RDI: 0000000000000000
    Jan 12 12:40:45 Urbanpixels kernel: RBP: 00002e3c05b1e5ca R08: 00002e3c05b1e5ca R09: 000000000000156b
    Jan 12 12:40:45 Urbanpixels kernel: R10: 0000000008a03324 R11: 071c71c71c71c71c R12: 0000000000000006
    Jan 12 12:40:45 Urbanpixels kernel: R13: ffffffff81e5b1a0 R14: 0000000000000000 R15: ffffffff81e5b3f8
    Jan 12 12:40:45 Urbanpixels kernel: do_idle+0x17e/0x1fc
    Jan 12 12:40:45 Urbanpixels kernel: cpu_startup_entry+0x6a/0x6c
    Jan 12 12:40:45 Urbanpixels kernel: start_kernel+0x44e/0x46c
    Jan 12 12:40:45 Urbanpixels kernel: secondary_startup_64+0xa4/0xb0
    Jan 12 12:41:12 Urbanpixels kernel: rcu: INFO: rcu_bh self-detected stall on CPU
    Jan 12 12:41:12 Urbanpixels kernel: rcu: 	3-....: (1903655 ticks this GP) idle=

     

    urbanpixels-diagnostics-20200204-1353.zip
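
     For completeness, the separate-NIC workaround I keep seeing suggested would presumably look something like this; the interface name, subnet, and image are illustrative:

    # create a macvlan network on a second physical interface, so container
    # traffic bypasses the bridge Unraid itself uses
    docker network create -d macvlan \
      --subnet=192.168.1.0/24 \
      --gateway=192.168.1.1 \
      -o parent=eth1 \
      dockervlan

    # attach the container with its own address
    docker run -d --network dockervlan --ip 192.168.1.20 pihole/pihole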