Comments posted by dalben

  1. Thanks.

     

    OK, did that, and it seems up and running. Mild heart attack when the containers wouldn't start, though. It seems that even though they had Custom: eth1 and the correct IP address in their settings, each container needed an actual change before the setting took effect. Deleting the last digit of the IP, re-entering it, then hitting Apply brought everything good again.
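
    A quick way to confirm a container really picked up the custom network and IP after the re-apply (sketch only; "mycontainer" is a placeholder name):

    docker inspect --format '{{json .NetworkSettings.Networks}}' mycontainer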

     

    Some of my containers now seem rate limited. What was at least 50MB/s download speed before is now around 10MB/s. But I also upgraded the UniFi Network application to the latest version around the same time, so I'll need to roll that back to see what's causing this issue.

  2. @JorgeB - Caught the macvlan call trace today. Diagnostics are attached and the call trace snippet is below. The box is running version 6.12.4.

     

    Sep 22 08:39:13 tdm kernel: ------------[ cut here ]------------
    Sep 22 08:39:13 tdm kernel: WARNING: CPU: 9 PID: 159 at net/netfilter/nf_conntrack_core.c:1210 __nf_conntrack_confirm+0xa4/0x2b0 [nf_conntrack]
    Sep 22 08:39:13 tdm kernel: Modules linked in: tun tls xt_nat xt_tcpudp xt_conntrack xt_MASQUERADE nf_conntrack_netlink nfnetlink xfrm_user xfrm_algo iptable_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 xt_addrtype br_netfilter xfs md_mod tcp_diag inet_diag ip6table_filter ip6_tables iptable_filter ip_tables x_tables bridge stp llc macvtap macvlan tap e1000e r8169 realtek intel_rapl_msr zfs(PO) intel_rapl_common zunicode(PO) zzstd(O) i915 x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel zlua(O) zavl(PO) icp(PO) kvm iosf_mbi drm_buddy i2c_algo_bit ttm zcommon(PO) drm_display_helper drm_kms_helper znvpair(PO) spl(O) crct10dif_pclmul crc32_pclmul crc32c_intel drm ghash_clmulni_intel sha512_ssse3 aesni_intel crypto_simd cryptd mei_hdcp mei_pxp intel_gtt rapl intel_cstate gigabyte_wmi wmi_bmof mpt3sas i2c_i801 nvme agpgart ahci mei_me i2c_smbus syscopyarea raid_class i2c_core intel_uncore nvme_core mei libahci sysfillrect scsi_transport_sas sysimgblt fb_sys_fops thermal fan video wmi
    Sep 22 08:39:13 tdm kernel: backlight intel_pmc_core acpi_tad acpi_pad button unix [last unloaded: e1000e]
    Sep 22 08:39:13 tdm kernel: CPU: 9 PID: 159 Comm: kworker/u24:6 Tainted: P           O       6.1.49-Unraid #1
    Sep 22 08:39:13 tdm kernel: Hardware name: Gigabyte Technology Co., Ltd. B365 M AORUS ELITE/B365 M AORUS ELITE-CF, BIOS F3d 08/18/2020
    Sep 22 08:39:13 tdm kernel: Workqueue: events_unbound macvlan_process_broadcast [macvlan]
    Sep 22 08:39:13 tdm kernel: RIP: 0010:__nf_conntrack_confirm+0xa4/0x2b0 [nf_conntrack]
    Sep 22 08:39:13 tdm kernel: Code: 44 24 10 e8 e2 e1 ff ff 8b 7c 24 04 89 ea 89 c6 89 04 24 e8 7e e6 ff ff 84 c0 75 a2 48 89 df e8 9b e2 ff ff 85 c0 89 c5 74 18 <0f> 0b 8b 34 24 8b 7c 24 04 e8 18 dd ff ff e8 93 e3 ff ff e9 72 01
    Sep 22 08:39:13 tdm kernel: RSP: 0018:ffffc9000032cd98 EFLAGS: 00010202
    Sep 22 08:39:13 tdm kernel: RAX: 0000000000000001 RBX: ffff888194440700 RCX: 0d35b370dc56628d
    Sep 22 08:39:13 tdm kernel: RDX: 0000000000000000 RSI: 0000000000000001 RDI: ffff888194440700
    Sep 22 08:39:13 tdm kernel: RBP: 0000000000000001 R08: e03672da4c370c38 R09: 4eea21ed5130b6c5
    Sep 22 08:39:13 tdm kernel: R10: 9101b661ea51c81d R11: ffffc9000032cd60 R12: ffffffff82a11d00
    Sep 22 08:39:13 tdm kernel: R13: 000000000002fc07 R14: ffff8881af585900 R15: 0000000000000000
    Sep 22 08:39:13 tdm kernel: FS:  0000000000000000(0000) GS:ffff88880f440000(0000) knlGS:0000000000000000
    Sep 22 08:39:13 tdm kernel: CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
    Sep 22 08:39:13 tdm kernel: CR2: 0000149eca3e2484 CR3: 000000000220a002 CR4: 00000000003706e0
    Sep 22 08:39:13 tdm kernel: DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
    Sep 22 08:39:13 tdm kernel: DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
    Sep 22 08:39:13 tdm kernel: Call Trace:
    Sep 22 08:39:13 tdm kernel: <IRQ>
    Sep 22 08:39:13 tdm kernel: ? __warn+0xab/0x122
    Sep 22 08:39:13 tdm kernel: ? report_bug+0x109/0x17e
    Sep 22 08:39:13 tdm kernel: ? __nf_conntrack_confirm+0xa4/0x2b0 [nf_conntrack]
    Sep 22 08:39:13 tdm kernel: ? handle_bug+0x41/0x6f
    Sep 22 08:39:13 tdm kernel: ? exc_invalid_op+0x13/0x60
    Sep 22 08:39:13 tdm kernel: ? asm_exc_invalid_op+0x16/0x20
    Sep 22 08:39:13 tdm kernel: ? __nf_conntrack_confirm+0xa4/0x2b0 [nf_conntrack]
    Sep 22 08:39:13 tdm kernel: ? __nf_conntrack_confirm+0x9e/0x2b0 [nf_conntrack]
    Sep 22 08:39:13 tdm kernel: ? nf_nat_inet_fn+0x60/0x1a8 [nf_nat]
    Sep 22 08:39:13 tdm kernel: nf_conntrack_confirm+0x25/0x54 [nf_conntrack]
    Sep 22 08:39:13 tdm kernel: nf_hook_slow+0x3a/0x96
    Sep 22 08:39:13 tdm kernel: ? ip_protocol_deliver_rcu+0x164/0x164
    Sep 22 08:39:13 tdm kernel: NF_HOOK.constprop.0+0x79/0xd9
    Sep 22 08:39:13 tdm kernel: ? ip_protocol_deliver_rcu+0x164/0x164
    Sep 22 08:39:13 tdm kernel: __netif_receive_skb_one_core+0x77/0x9c
    Sep 22 08:39:13 tdm kernel: process_backlog+0x8c/0x116
    Sep 22 08:39:13 tdm kernel: __napi_poll.constprop.0+0x28/0x124
    Sep 22 08:39:13 tdm kernel: net_rx_action+0x159/0x24f
    Sep 22 08:39:13 tdm kernel: __do_softirq+0x126/0x288
    Sep 22 08:39:13 tdm kernel: do_softirq+0x7f/0xab
    Sep 22 08:39:13 tdm kernel: </IRQ>
    Sep 22 08:39:13 tdm kernel: <TASK>
    Sep 22 08:39:13 tdm kernel: __local_bh_enable_ip+0x4c/0x6b
    Sep 22 08:39:13 tdm kernel: netif_rx+0x52/0x5a
    Sep 22 08:39:13 tdm kernel: macvlan_broadcast+0x10a/0x150 [macvlan]
    Sep 22 08:39:13 tdm kernel: macvlan_process_broadcast+0xbc/0x12f [macvlan]
    Sep 22 08:39:13 tdm kernel: process_one_work+0x1a8/0x295
    Sep 22 08:39:13 tdm kernel: worker_thread+0x18b/0x244
    Sep 22 08:39:13 tdm kernel: ? rescuer_thread+0x281/0x281
    Sep 22 08:39:13 tdm kernel: kthread+0xe4/0xef
    Sep 22 08:39:13 tdm kernel: ? kthread_complete_and_exit+0x1b/0x1b
    Sep 22 08:39:13 tdm kernel: ret_from_fork+0x1f/0x30
    Sep 22 08:39:13 tdm kernel: </TASK>
    Sep 22 08:39:13 tdm kernel: ---[ end trace 0000000000000000 ]---

     

    tdm-diagnostics-20230922-1630.zip

  3. On 8/9/2023 at 3:01 PM, JorgeB said:

    Can you please confirm if you have bridging enabled for the docker dedicated NIC? And if not, please post a couple of the call traces you are getting (or the syslog).

     

    I may need to backtrack on that statement. I went trawling through my syslog and found weekly alerts from the Fix Common Problems plugin saying I had macvlan call trace errors, but I couldn't find one in the actual syslog. So ignore the above for now.
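
    For anyone checking the same way, this is roughly how I searched (kernel warnings start with a "cut here" marker in the log):

    grep -i macvlan /var/log/syslog*
    grep -i -A 4 "cut here" /var/log/syslog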

  4. The call trace issue, while I have it, doesn't seem to cause me any problems. Scanning the logs, I get one about once a week. It doesn't halt the system in any way and I've seen no ill effects.

     

    I reconfigured my network setup earlier based on some guides: two NICs in the server, one dedicated to the Docker network, no bridge between NIC 1 and NIC 2, and NIC 2 isn't assigned an IP address.
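
    For reference, that layout amounts to something like a macvlan network parented on the second NIC; a rough sketch (interface name, subnet and network name here are assumptions, and Unraid normally builds this for you):

    docker network create -d macvlan \
      --subnet=192.168.1.0/24 \
      --gateway=192.168.1.1 \
      -o parent=eth1 \
      customnet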

     

    I run a UniFi setup and I'd much prefer my network map to be correct. Being borderline OCD, I like to give all my dockers a static IP address, so IPVLAN isn't for me.

     

    So in summary, I do get the odd call trace, but it doesn't cause my server any problems. Not sure if that's consistent with others' experience or whether I'm one of the lucky ones.

  5. 1 hour ago, isvein said:

    Just me, or does Docker take forever to check for updates?

    I thought the same, but then looking at the logs I realised that loading RC2 had rendered my docker.img a read-only volume. btrfs complained of a corrupt superblock. Not sure if it's RC2-related or a coincidence.

     

    Now in the process of rebuilding my docker image.

     

    Edit: Looks like my entire cache pool/drive is read-only and also getting read errors. It was working before, and a reboot hasn't helped. Need to dig in and see what's happening.
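
    If anyone wants to follow along, this is the sort of thing I'm poking at (the mount point is an assumption; use your pool's path):

    btrfs device stats /mnt/cache
    dmesg | grep -i btrfs | tail -n 20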

     

    Diags attached if anyone wants to dig in deeper.

    tdm-diagnostics-20230320-0601.zip

  6. On the pool disks tab, the second pool shows this error:

     

    Balance Status
    btrfs filesystem df:
    Data, single: total=30.00GiB, used=21.21GiB
    System, RAID1: total=32.00MiB, used=16.00KiB
    Metadata, RAID1: total=2.00GiB, used=29.98MiB
    GlobalReserve, single: total=261.73MiB, used=0.00B
    btrfs balance status:
    No balance found on '/mnt/rad'
     
     Current usage ratio:
    Warning: A non-numeric value encountered in /usr/local/emhttp/plugins/dynamix/include/DefaultPageLayout.php(515) : eval()'d code on line 542
    0 % ---
    Warning: A non-numeric value encountered in /usr/local/emhttp/plugins/dynamix/include/DefaultPageLayout.php(515) : eval()'d code on line 542
    Full Balance recommended
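
    In case it helps anyone, the recommended balance can still be kicked off from the console while the GUI page misbehaves, using the mount point from the output above:

    btrfs balance start --full-balance /mnt/rad
    btrfs balance status /mnt/rad   # run from another shell to watch progress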

     

  7. On 8/10/2021 at 7:23 AM, dalben said:

    I've had two hard lockups in 12 hours since installing this release.  Nothing in syslog around time of lockup.

     

    Diagnostics attached.  I'll stay on this release for a bit longer in case there's a need for more info or a fix, but will need to roll back to 6.9 sooner rather than later.

     

    tdm-diagnostics-20210810-0718.zip

     

    Three hard lockups in 36 hours on 6.10 RC1.

    If no one is interested in the logs or any other potential checks or tests, I'll just roll back to 6.9.

     

    Latest diagnostics attached

    tdm-diagnostics-20210811-1548.zip
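
    Since nothing survives in syslog after a hard lockup (Unraid keeps /var/log in RAM), one workaround is snapshotting the live log to the flash drive; a minimal sketch (the target folder is made up):

    mkdir -p /boot/logs
    cp /var/log/syslog "/boot/logs/syslog-$(date +%Y%m%d-%H%M%S).txt"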

  8. I've noticed I can't map directly to the pool disks. I have a mirrored cache and a single SSD called system. I used to be able to access them via \\tower\cache or \\tower\system. Now I can't reach them. Is this by design? If so, is there any way a user can revert if they want to? I have a few scripts and other setups throughout my network that look for the above paths and are now failing.
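
    For what it's worth, a quick way to see what the server still exports (assuming smbclient is available on a client machine; "tower" is the server name from the paths above):

    smbclient -L //tower -N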

  9. 34 minutes ago, bastl said:

    @dalben Check how the shares for "appdata" and "system" are configured. I bet they don't exist on your cache device. Adjust your paths like the following:

     

    [screenshot: example appdata/system share path settings]

     

    Thanks.  Adding the slash and the extension got it working again.

     

    26 minutes ago, bonienl said:

    Version 6.8 is more strict on user input and marks anything invalid rather than starting the service.

     

    A vDisk location must point to an image file and not a folder. This means a file with the .img extension.

    A storage location must point to a folder. This means the path must end with a slash.

     

    See the examples given by @bastl

     

    Not a bug.

    Technically not a bug, no. But when something changes like this and requires users to modify what they've been doing for years, updating the help banner on that setting to reflect the change, or giving an example, might be good practice.
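
    Something along these lines in the help text would do it (made-up example paths):

    vDisk location (must be an image file):     /mnt/user/domains/Win10/vdisk1.img
    Default VM storage (must end in a slash):   /mnt/user/domains/
    Default ISO storage (must end in a slash):  /mnt/user/isos/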

  10. 30 minutes ago, itimpi said:

    That could easily have that effect as a write to any drive will require all disks to be spinning.

    OK, that does make sense. I've been meaning to find out what's been keeping my Disk 4 from spinning down for a while. Looks like enabling RW while Disk 4 never spins down will keep them all spun up.
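
    A quick way to see what's holding Disk 4 awake (sketch only; uses Unraid's standard per-disk mount, and +D can be slow on large trees):

    lsof +D /mnt/disk4 2>/dev/null | head -n 20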

     

    Cheers.

  11. 7 minutes ago, WizADSL said:

    I would recommend disabling any ad blockers when accessing the unraid web interface.  I have had problems in the past with this.

    I don't use an adblocker these days. Pi-hole has eliminated the need for that.

     

    On 11/6/2019 at 10:22 PM, bonienl said:

    Just did a multi container update and all is working fine for me (Windows 10 + Chrome).

     

    Do you have some plugin installed in Chrome which may interfere?

     

    I've stripped down my plugins to LastPass, nzbget-chrome and Transmission easy client. It still happens. Next time I have a multi-update I'll disable them one at a time.

  12. 41 minutes ago, Squid said:

    I'm curious.  Do you have any scripts via User Scripts set to start at Array Start?  I only see the message not go away on my first start, never on a subsequent stop / start.  And I have a script set to run at 1st array start only.  Just haven't had time to investigate further.

    Yes. I have a script running at array start via User Scripts that I needed to install when the flash drive got locked down. 

     

    When I'm home I can try disabling that

  13. 7 minutes ago, itimpi said:

    I am afraid stopping/starting services is not part of the plugin as its primary purpose was just to avoid parity checks hitting system performance during prime time.   It seemed a step too far at the moment and rather difficult to implement in a generalized fashion.

     

    What I have been thinking of adding is an ability to run custom scripts on parity check start/resume and pause/end.  If I get this in place so you can do your own stop/starts is this likely to be of use?

    Yes. My scripting skills aren't great, but the ability to shut down dockers and plugins before a parity check would be handy. I'm assuming doing so would speed up the parity check.
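
    If that lands, I'd imagine a hook looking something like this; a minimal sketch assuming the plugin passes the event name as the first argument (the container names are placeholders):

    #!/bin/bash
    # Stop selected containers when a parity check starts or resumes,
    # and bring them back on pause or end.
    CONTAINERS="plex binhex-sabnzbd"

    case "$1" in
      start|resume)
        for c in $CONTAINERS; do docker stop "$c"; done
        ;;
      pause|end)
        for c in $CONTAINERS; do docker start "$c"; done
        ;;
    esac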