TheLinuxGuy

Everything posted by TheLinuxGuy

  1. Hi, newbie question: according to the changelog this image should support `syslog`. How do I go about enabling and using it? I see port 514 exposed, but when I ship logs to Observium they do not seem to be getting processed. Do other settings need to be changed in the web UI or in files to make this work? The instructions from Observium discuss rsyslogd, but that is not installed in the Docker container. (A quick connectivity test is sketched below.)
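     For what it's worth, here is a minimal test that can be run from another box (hypothetical Observium host IP; assumes util-linux `logger` on the sending machine):

     ```bash
     # Send a single UDP syslog test message to the Observium container
     logger --server 192.168.1.50 --port 514 --udp "observium syslog test"
     ```

     Running `tcpdump -ni any udp port 514` on the Observium host while sending would show whether the message arrives but is simply not being processed.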
  2. Not sure where else to post this question. Today's "Unraid digest" email included a blurb about "Hybrid ZFS pools" that I wanted to know more about. What does that mean in this context? Is there any information about the technical details of it yet?
  3. @JSE that's fair, I am not married to the idea of bcachefs. A similar caching solution could be achieved with lvmcache (dm-cache) with btrfs under the hood; that option should be much more stable and proven than bcachefs. (A rough sketch of what I mean is below.)
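     A minimal sketch of that layering (hypothetical device names; assumes one HDD and one NVMe cache device placed in the same volume group):

     ```bash
     # Pool the slow HDD and the fast NVMe into one volume group
     vgcreate vg_data /dev/sdb /dev/nvme0n1

     # Data LV on the HDD, cache LV on the NVMe
     lvcreate -n data  -l 100%PVS vg_data /dev/sdb
     lvcreate -n cache -L 200G    vg_data /dev/nvme0n1

     # Attach the NVMe LV as a dm-cache cache volume for the data LV
     lvconvert --type cache --cachevol vg_data/cache vg_data/data

     # btrfs goes on top of the cached LV
     mkfs.btrfs /dev/vg_data/data
     ```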
  4. Almost exactly a year ago, I made this feature request, which got 19 upvotes. One possible solution for my feature request is to add "experimental" bcachefs support to Unraid. Another option could be btrfs, if they ever implement https://github.com/kdave/btrfs-progs/issues/610. bcachefs is available in mainline kernel 6.7; given this, plus the amount of interest in the Unraid community, I feel that @limetech should really consider working on this feature.
  5. My Unraid 6.12.6 "Main" page no longer shows the disk drives that are members of the array. Everything else is working except the total array size, disks, etc. It is not a web browser cache issue; I tried multiple web browsers and cleared the cache. Is this a known issue? I have not tried a full system reboot, but rebooting to fix this doesn't make sense to me.
  6. Curious if this ever got fixed for you? I am seeing the same thing on 6.12.6 in multiple browsers.
  7. My UniFi network keeps alarming me like this. It's my Unraid server; the NIC interface is used in two distinct VLANs.

     ```
     root@tower:~# ifconfig | grep 02:d2:39:7f:d8:4e -B4 -A4
             TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

     vhost0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
             inet 192.168.20.28  netmask 255.255.255.0  broadcast 0.0.0.0
             ether 02:d2:39:7f:d8:4e  txqueuelen 500  (Ethernet)
             RX packets 798506  bytes 114513804 (109.2 MiB)
             RX errors 0  dropped 3  overruns 0  frame 0
             TX packets 266397  bytes 325195494 (310.1 MiB)
             TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
     --
             TX packets 1  bytes 90 (90.0 B)
             TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

     vhost0.70: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
             ether 02:d2:39:7f:d8:4e  txqueuelen 500  (Ethernet)
             RX packets 125353  bytes 7520476 (7.1 MiB)
             RX errors 0  dropped 0  overruns 0  frame 0
             TX packets 7  bytes 586 (586.0 B)
             TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
     ```

     How can I change the MAC address that VLAN 70 is using (same physical NIC)? (What I am thinking of trying is below.)
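     One thing I am considering (untested sketch; hypothetical locally administered MAC, assumes iproute2):

     ```bash
     # Give the VLAN sub-interface its own MAC so UniFi stops seeing a duplicate
     ip link set dev vhost0.70 down
     ip link set dev vhost0.70 address 02:d2:39:7f:d8:4f
     ip link set dev vhost0.70 up
     ```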
  8. I'm interested in keeping my media at the same quality as my current copy, but improving the ability of files to play smoothly on my Plex clients (Firestick 4K Max, Apple TV). Does anyone have a helpful TL;DR on how to achieve this goal? There is a lot of information out there explaining all of the complex settings for Tdarr, while Unmanic seems simpler; but others suggest that converting files can sometimes use more raw disk space than the original copy. I also saw a comment that simply recommended replacing the video container (.mkv to .mp4) without changing anything else. I am not totally clear on how to do this, since at least in Unmanic you need to download the audio codec plugin of your choice, like AAC, don't you? Or is the "remux plugin" enough? (A remux example is below.)
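     For reference, the container-only swap people recommend is just a remux; a minimal ffmpeg sketch (hypothetical filenames; no re-encode, so quality and size stay essentially the same):

     ```bash
     # Copy every stream into an mp4 container without transcoding
     ffmpeg -i input.mkv -c copy output.mp4
     ```

     One caveat: mp4 cannot carry everything mkv can (some subtitle and audio formats), so a remux can fail or drop streams on certain files.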
  9. Could this be confirmed for sure? I found a past discussion where someone tested and verified with the "File Activity" plugin that you actually don't need to restart playback of anything; the FUSE layer picks it up without intervention. Here are the discussions: "Because of the shfs mechanism, accessing a file from /mnt/user will read/write from cache if it exists, then from the array. Duplicate data are not a problem and globally speed up things." Also: "Edit 13-02-2020: yes, after checking with File Activity plugin, that's the case and plex/transmission take the file on cache as soon as it is available!"
  10. How does Unraid's FUSE filesystem behave when:
     * the share is configured as cache+array
     * Plex uses the FUSE mount /mnt/user/movies
     * a Plex user plays a movie not in cache (/mnt/cache/movies is not hit; /mnt/diskX/movies finds the file, spinning up the HDD)

     1. What would happen if rsync copies the file being played in Plex, so that /mnt/cache suddenly has the same data logically represented in /mnt/user/movies?
     2. Would the FUSE mount transparently refer to /mnt/cache/movies and allow the array disk to sleep?

     I vaguely remember reading something on reddit indicating this would be the behavior, but I can't find that discussion now... asking to validate so a new feature to increase power efficiency in Unraid could be done in https://github.com/bexem/PlexCache/issues/20 (a test sketch follows).
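     A way to observe this directly (a sketch; assumes a share named "movies" with the file currently on disk1):

     ```bash
     # Simulate playback: a long read through the FUSE mount
     dd if=/mnt/user/movies/test.mkv of=/dev/null bs=1M status=progress &

     # While it runs, copy the same file onto the cache pool
     rsync -a /mnt/disk1/movies/test.mkv /mnt/cache/movies/

     # Check which underlying branch is actually held open
     lsof /mnt/disk1/movies/test.mkv /mnt/cache/movies/test.mkv
     ```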
  11. I couldn't find this mentioned in the FAQ. The hotio qbittorrent-vpn package is giving me DNS issues; hotio says to use the "--dns" flag to force the DNS fix. Where do I set this in Unraid? https://docs.docker.com/network/ (there are other flags, like "--hostname", that also interest me to set up, since I use macvlan). Thanks in advance.
  12. I'm having issues with the qbittorrent-vpn package; DNS resolution is broken. According to the documentation, the `--dns 1.1.1.1` docker CLI flag is the way to fix it. How can I add this "--dns" flag to the Docker startup command of this container? I do not see a setting for it. (The plain docker CLI equivalent of what I'm after is below, for reference.)
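     Edit: this is roughly what I want the container to be started with, expressed as a plain docker run (a sketch; image tag hypothetical, other flags elided):

     ```bash
     docker run -d --name qbittorrent \
       --dns 1.1.1.1 \
       hotio/qbittorrent
     ```

     If I understand other threads correctly, the "Extra Parameters" field (visible with Advanced View toggled on the container's edit page) is where raw docker run flags like this go, but I'd appreciate confirmation.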
  13. I am curious if you ever went back to macvlan, or if you saw any issues? I have the same NIC; on the latest Unraid I get the macvlan kernel crashes.
  14. There seems to be a bug in Unraid: it looks like it deleted all of the Docker macvlan networks when I switched from macvlan to ipvlan (via the settings menu). But flipping the setting back to macvlan did not restore the networks that used to be there, which still exist on the server's network config page.

     ```
     root@tower:~# docker network inspect 390f44584cb6
     [
         {
             "Name": "none",
             "Id": "390f44584cb6ede959111c0ac6ab1caaf0de6bdc6ff9808596e5c837e88a6459",
             "Created": "2023-08-06T01:17:32.854369674-07:00",
             "Scope": "local",
             "Driver": "null",
             "EnableIPv6": false,
             "IPAM": {
                 "Driver": "default",
                 "Options": null,
                 "Config": []
             },
             "Internal": false,
             "Attachable": false,
             "Ingress": false,
             "ConfigFrom": {
                 "Network": ""
             },
             "ConfigOnly": false,
             "Containers": {},
             "Options": {},
             "Labels": {}
         }
     ]
     root@tower:~# docker network ls
     NETWORK ID     NAME      DRIVER    SCOPE
     96c145534982   bridge    bridge    local
     3e455c84393c   host      host      local
     390f44584cb6   none      null      local
     root@tower:~# ip link
     1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
         link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
     2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN mode DEFAULT group default qlen 1000
         link/ipip 0.0.0.0 brd 0.0.0.0
     6: eth0: <NO-CARRIER,BROADCAST,MULTICAST,SLAVE,UP> mtu 9000 qdisc fq master bond0 state DOWN mode DEFAULT group default qlen 1000
         link/ether 00:e2:x8 brd ff:ff:ff:ff:ff:ff
     7: eth1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 9000 qdisc mq master bond0 state UP mode DEFAULT group default qlen 1000
         link/ether 00:e2:x brd ff:ff:ff:ff:ff:ff permaddr 24:8a:07:e3:14:b0
     8: eth2: <NO-CARRIER,BROADCAST,MULTICAST,SLAVE,UP> mtu 9000 qdisc mq master bond0 state DOWN mode DEFAULT group default qlen 1000
         link/ether 00:e2x brd ff:ff:ff:ff:ff:ff permaddr 24:8a:07:e3:14:b1
     9: bond0: <BROADCAST,MULTICAST,PROMISC,MASTER,UP,LOWER_UP> mtu 9000 qdisc noqueue master br0 state UP mode DEFAULT group default qlen 1000
         link/ether 00:e2:x brd ff:ff:ff:ff:ff:ff
     10: bond0.70@bond0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 9000 qdisc noqueue master br0.70 state UP mode DEFAULT group default qlen 1000
         link/ether 00:e2x brd ff:ff:ff:ff:ff:ff
     11: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue state UP mode DEFAULT group default qlen 1000
         link/ether 00:e2:x8 brd ff:ff:ff:ff:ff:ff
     12: br0.70: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue state UP mode DEFAULT group default qlen 1000
         link/ether 00:e2:x8 brd ff:ff:ff:ff:ff:ff
     13: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default
         link/ether 02:42:e5:7d:53:3d brd ff:ff:ff:ff:ff:ff
     ```

     The containers I had configured with br0.70 complain of being unable to find the br0.70 network, it looks like, but that isn't everything: I cannot even "edit" the containers themselves. (A possible manual recreation is sketched below.)
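     As a stopgap I may try recreating the missing network by hand (a sketch with hypothetical subnet values; Unraid normally manages these networks itself, so the Docker service may undo this):

     ```bash
     docker network create -d macvlan \
       --subnet 192.168.70.0/24 --gateway 192.168.70.1 \
       -o parent=br0.70 br0.70
     ```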
  15. Hey all, I am getting this error on all of my containers, which were previously set up with macvlan and a static IP. I did try to switch from macvlan to ipvlan; this did not work, and I started getting the above error message. I then reverted all of the networking changes I made and toggled Docker on/off a few times, but my containers are not starting at all (not even with the old settings that were prone to crashes).

     ```
     2023-08-14,18:29:06,Info,tower,kern,kernel,note: kswapd0[162] exited with preempt_count 1
     2023-08-14,18:29:06,Info,tower,kern,kernel,note: kswapd0[162] exited with irqs disabled
     2023-08-14,18:29:06,Warning,tower,kern,kernel,PKRU: 55555554
     2023-08-14,18:29:06,Warning,tower,kern,kernel,CR2: 0000000000000000 CR3: 00000001bf7c0006 CR4: 0000000000770ee0
     2023-08-14,18:29:06,Warning,tower,kern,kernel,CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
     2023-08-14,18:29:06,Warning,tower,kern,kernel,FS: 0000000000000000(0000) GS:ffff8884a0540000(0000) knlGS:0000000000000000
     2023-08-14,18:29:06,Warning,tower,kern,kernel,R13: 0000000000000058 R14: 0000000000000000 R15: 0000000000000000
     2023-08-14,18:29:06,Warning,tower,kern,kernel,R10: ffff8881ebc282c0 R11: 0000000000000000 R12: ffff8881c3d90e38
     2023-08-14,18:29:06,Warning,tower,kern,kernel,RBP: ffff8881c3d90db8 R08: ffffffff82206f48 R09: 000000000000034b
     2023-08-14,18:29:06,Warning,tower,kern,kernel,RDX: 0000000000000001 RSI: ffff8881c3d90e38 RDI: 0000000000000000
     2023-08-14,18:29:06,Warning,tower,kern,kernel,RAX: 0000000000000000 RBX: 0000000000000000 RCX: ffffffff81e29f00
     2023-08-14,18:29:06,Warning,tower,kern,kernel,RSP: 0018:ffffc9000089fac0 EFLAGS: 00010246
     2023-08-14,18:29:06,Warning,tower,kern,kernel,Code: 89 31 c0 f0 48 0f b1 9d f0 00 00 00 48 85 c0 74 0e 48 89 df 5b 5d 41 5c 41 5d e9 9c f9 ff ff 5b 5d 41 5c 41 5d c3 cc cc cc cc <48> 8b 07 48 83 c0 60 48 39 c7 74 2c 53 48 89 fb e8 c2 86 e6 ff 48
     2023-08-14,18:29:06,Warning,tower,kern,kernel,RIP: 0010:wb_get+0x0/0x3d
     2023-08-14,18:29:06,Warning,tower,kern,kernel,---[ end trace 0000000000000000 ]---
     2023-08-14,18:29:06,Warning,tower,kern,kernel,CR2: 0000000000000000
     2023-08-14,18:29:06,Warning,tower,kern,kernel,sha512_ssse3 aesni_intel crypto_simd mei_hdcp mei_pxp cryptd wmi_bmof rapl nvme intel_gtt intel_cstate mei_me ahci agpgart i2c_i801 input_leds sr_mod intel_uncore i2c_smbus mei libahci nvme_core cdrom joydev led_class i2c_core syscopyarea sysfillrect sysimgblt thermal fb_sys_fops fan tpm_crb tpm_tis tpm_tis_core video tpm wmi backlight int3400_thermal intel_pmc_core acpi_thermal_rel acpi_pad acpi_tad button unix [last unloaded: md_mod]
     2023-08-14,18:29:06,Warning,tower,kern,kernel,Modules linked in: md_mod xt_connmark xt_mark iptable_mangle xt_comment iptable_raw wireguard curve25519_x86_64 libcurve25519_generic libchacha20poly1305 chacha_x86_64 poly1305_x86_64 ip6_udp_tunnel udp_tunnel libchacha xt_nat xt_tcpudp macvlan xt_conntrack xt_MASQUERADE nf_conntrack_netlink nfnetlink xfrm_user xfrm_algo iptable_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 xt_addrtype br_netfilter dm_crypt dm_mod nfsd auth_rpcgss oid_registry lockd grace sunrpc tcp_diag inet_diag corefreqk(O) ip6table_filter ip6_tables iptable_filter ip_tables x_tables efivarfs af_packet bridge 8021q garp mrp stp llc bonding tls mlx4_en mlx4_core r8169 realtek zfs(PO) zunicode(PO) zzstd(O) i915 zlua(O) zavl(PO) icp(PO) intel_rapl_msr intel_rapl_common x86_pkg_temp_thermal intel_powerclamp coretemp zcommon(PO) kvm_intel znvpair(PO) iosf_mbi drm_buddy i2c_algo_bit ttm spl(O) drm_display_helper kvm drm_kms_helper drm crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel
     2023-08-14,18:29:06,Warning,tower,kern,kernel,</TASK>
     2023-08-14,18:29:06,Warning,tower,kern,kernel,ret_from_fork+0x1f/0x30
     2023-08-14,18:29:06,Warning,tower,kern,kernel,? kthread_complete_and_exit+0x1b/0x1b
     2023-08-14,18:29:06,Warning,tower,kern,kernel,kthread+0xe4/0xef
     2023-08-14,18:29:06,Warning,tower,kern,kernel,? balance_pgdat+0x6a2/0x6a2
     2023-08-14,18:29:06,Warning,tower,kern,kernel,? _raw_spin_rq_lock_irqsave+0x20/0x20
     2023-08-14,18:29:06,Warning,tower,kern,kernel,kswapd+0x2f0/0x333
     2023-08-14,18:29:06,Warning,tower,kern,kernel,? finish_task_switch.isra.0+0x140/0x218
     2023-08-14,18:29:06,Warning,tower,kern,kernel,? raw_spin_rq_unlock_irq+0x5/0x10
     2023-08-14,18:29:06,Warning,tower,kern,kernel,? _raw_spin_unlock+0x14/0x29
     2023-08-14,18:29:06,Warning,tower,kern,kernel,? newidle_balance+0x289/0x30a
     2023-08-14,18:29:06,Warning,tower,kern,kernel,balance_pgdat+0x4e9/0x6a2
     2023-08-14,18:29:06,Warning,tower,kern,kernel,shrink_node+0x318/0x549
     2023-08-14,18:29:06,Warning,tower,kern,kernel,shrink_slab+0x1f9/0x267
     2023-08-14,18:29:06,Warning,tower,kern,kernel,do_shrink_slab+0x188/0x2a1
     2023-08-14,18:29:06,Warning,tower,kern,kernel,super_cache_scan+0xf4/0x17c
     2023-08-14,18:29:06,Warning,tower,kern,kernel,prune_dcache_sb+0x51/0x73
     2023-08-14,18:29:06,Warning,tower,kern,kernel,shrink_dentry_list+0xaa/0xba
     2023-08-14,18:29:06,Warning,tower,kern,kernel,__dentry_kill+0xcb/0x131
     2023-08-14,18:29:06,Warning,tower,kern,kernel,evict+0x4c/0x150
     2023-08-14,18:29:06,Warning,tower,kern,kernel,inode_io_list_del+0x23/0x80
     2023-08-14,18:29:06,Warning,tower,kern,kernel,locked_inode_to_wb_and_lock_list+0x28/0x73
     2023-08-14,18:29:06,Warning,tower,kern,kernel,? __inode_attach_wb+0xc5/0xc5
     2023-08-14,18:29:06,Warning,tower,kern,kernel,? asm_exc_page_fault+0x22/0x30
     2023-08-14,18:29:06,Warning,tower,kern,kernel,? exc_page_fault+0xfb/0x11d
     2023-08-14,18:29:06,Warning,tower,kern,kernel,? xas_store+0x2a7/0x412
     2023-08-14,18:29:06,Warning,tower,kern,kernel,? do_user_addr_fault+0x12e/0x48d
     2023-08-14,18:29:06,Warning,tower,kern,kernel,? page_fault_oops+0x329/0x376
     2023-08-14,18:29:06,Warning,tower,kern,kernel,? __die_body+0x1a/0x5c
     2023-08-14,18:29:06,Warning,tower,kern,kernel,<TASK>
     2023-08-14,18:29:06,Warning,tower,kern,kernel,Call Trace:
     2023-08-14,18:29:06,Warning,tower,kern,kernel,PKRU: 55555554
     2023-08-14,18:29:06,Warning,tower,kern,kernel,CR2: 0000000000000000 CR3: 00000001bf7c0006 CR4: 0000000000770ee0
     2023-08-14,18:29:06,Warning,tower,kern,kernel,CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
     2023-08-14,18:29:06,Warning,tower,kern,kernel,FS: 0000000000000000(0000) GS:ffff8884a0540000(0000) knlGS:0000000000000000
     2023-08-14,18:29:06,Warning,tower,kern,kernel,R13: 0000000000000058 R14: 0000000000000000 R15: 0000000000000000
     2023-08-14,18:29:06,Warning,tower,kern,kernel,R10: ffff8881ebc282c0 R11: 0000000000000000 R12: ffff8881c3d90e38
     2023-08-14,18:29:06,Warning,tower,kern,kernel,RBP: ffff8881c3d90db8 R08: ffffffff82206f48 R09: 000000000000034b
     2023-08-14,18:29:06,Warning,tower,kern,kernel,RDX: 0000000000000001 RSI: ffff8881c3d90e38 RDI: 0000000000000000
     2023-08-14,18:29:06,Warning,tower,kern,kernel,RAX: 0000000000000000 RBX: 0000000000000000 RCX: ffffffff81e29f00
     2023-08-14,18:29:06,Warning,tower,kern,kernel,RSP: 0018:ffffc9000089fac0 EFLAGS: 00010246
     2023-08-14,18:29:06,Warning,tower,kern,kernel,Code: 89 31 c0 f0 48 0f b1 9d f0 00 00 00 48 85 c0 74 0e 48 89 df 5b 5d 41 5c 41 5d e9 9c f9 ff ff 5b 5d 41 5c 41 5d c3 cc cc cc cc <48> 8b 07 48 83 c0 60 48 39 c7 74 2c 53 48 89 fb e8 c2 86 e6 ff 48
     2023-08-14,18:29:06,Warning,tower,kern,kernel,RIP: 0010:wb_get+0x0/0x3d
     2023-08-14,18:29:06,Warning,tower,kern,kernel,"Hardware name: SKYLINE SKYLINE HM570 ITX/SKYLINE HM570 ITX, BIOS 5.19 05/29/2023"
     2023-08-14,18:29:06,Warning,tower,kern,kernel,CPU: 5 PID: 162 Comm: kswapd0 Tainted: P U W O 6.1.38-Unraid #2
     2023-08-14,18:29:06,Warning,tower,kern,kernel,Oops: 0000 [#1] PREEMPT SMP NOPTI
     2023-08-14,18:29:06,Info,tower,kern,kernel,PGD 0 P4D 0
     2023-08-14,18:29:06,Alert,tower,kern,kernel,#PF: error_code(0x0000) - not-present page
     2023-08-14,18:29:06,Alert,tower,kern,kernel,#PF: supervisor read access in kernel mode
     2023-08-14,18:29:06,Alert,tower,kern,kernel,"BUG: kernel NULL pointer dereference, address: 0000000000000000"
     2023-08-14,16:52:33,Warning,tower,kern,kernel,---[ end trace 0000000000000000 ]---
     2023-08-14,16:52:33,Warning,tower,kern,kernel,</TASK>
     2023-08-14,16:52:33,Warning,tower,kern,kernel,R13: 000015135e356710 R14: 00000000000003f8 R15: 000015135e356718
     2023-08-14,16:52:33,Warning,tower,kern,kernel,R10: 0000000100000001 R11: 0000000000000000 R12: 0000000000000008
     2023-08-14,16:52:33,Warning,tower,kern,kernel,RBP: 00007ffcab376170 R08: 000015135e356710 R09: 0000000000000001
     2023-08-14,16:52:33,Warning,tower,kern,kernel,RDX: 0000000000000000 RSI: 000015135c71be80 RDI: 000015135e356718
     2023-08-14,16:52:33,Warning,tower,kern,kernel,RAX: 0000000000000254 RBX: 0000000000000001 RCX: 0000000000000076
     2023-08-14,16:52:33,Warning,tower,kern,kernel,RSP: 002b:00007ffcab376120 EFLAGS: 00000202
     2023-08-14,16:52:33,Warning,tower,kern,kernel,Code: 0c 48 ff c1 48 39 c8 75 ef 4c 89 f8 49 0f af c6 4c 39 f8 76 1d 49 ff ce 4d 0f af f7 4d 01 ef 31 c0 41 8a 4c 05 00 41 88 0c 07 <48> ff c0 49 39 c6 75 ef 48 83 c4 28 5b 41 5c 41 5d 41 5e 41 5f 5d
     2023-08-14,16:52:33,Warning,tower,kern,kernel,RIP: 0033:0x151360efb89b
     2023-08-14,16:52:33,Warning,tower,kern,kernel,asm_common_interrupt+0x22/0x40
     2023-08-14,16:52:33,Warning,tower,kern,kernel,common_interrupt+0x3b/0xc1
     2023-08-14,16:52:33,Warning,tower,kern,kernel,__irq_exit_rcu+0x5e/0xb8
     2023-08-14,16:52:33,Warning,tower,kern,kernel,__do_softirq+0x126/0x288
     2023-08-14,16:52:33,Warning,tower,kern,kernel,? mlx4_msi_x_interrupt+0xd/0x17 [mlx4_core]
     2023-08-14,16:52:33,Warning,tower,kern,kernel,net_rx_action+0x159/0x24f
     2023-08-14,16:52:33,Warning,tower,kern,kernel,__napi_poll.constprop.0+0x28/0x124
     2023-08-14,16:52:33,Warning,tower,kern,kernel,mlx4_en_poll_rx_cq+0xa5/0xd0 [mlx4_en]
     2023-08-14,16:52:33,Warning,tower,kern,kernel,napi_complete_done+0x7b/0x11a
     2023-08-14,16:52:33,Warning,tower,kern,kernel,gro_normal_list+0x1d/0x3f
     2023-08-14,16:52:33,Warning,tower,kern,kernel,netif_receive_skb_list_internal+0x1d2/0x20b
     2023-08-14,16:52:33,Warning,tower,kern,kernel,__netif_receive_skb_list_core+0x8a/0x11e
     2023-08-14,16:52:33,Warning,tower,kern,kernel,? inet_gro_receive+0x23b/0x25b
     2023-08-14,16:52:33,Warning,tower,kern,kernel,? udp4_gro_receive+0x1da/0x20c
     2023-08-14,16:52:33,Warning,tower,kern,kernel,__netif_receive_skb_core.constprop.0+0x4fa/0x6e9
     2023-08-14,16:52:33,Warning,tower,kern,kernel,? br_pass_frame_up+0xdd/0xdd [bridge]
     2023-08-14,16:52:33,Warning,tower,kern,kernel,? walk_pgd_range+0xc1/0x645
     2023-08-14,16:52:33,Warning,tower,kern,kernel,br_handle_frame+0x277/0x2e0 [bridge]
     2023-08-14,16:52:33,Warning,tower,kern,kernel,? br_nf_hook_thresh+0x109/0x109 [br_netfilter]
     2023-08-14,16:52:33,Warning,tower,kern,kernel,br_nf_pre_routing+0x236/0x24a [br_netfilter]
     2023-08-14,16:52:33,Warning,tower,kern,kernel,? br_nf_hook_thresh+0x109/0x109 [br_netfilter]
     2023-08-14,16:52:33,Warning,tower,kern,kernel,? NF_HOOK.isra.0+0xe4/0x140 [br_netfilter]
     2023-08-14,16:52:33,Warning,tower,kern,kernel,? br_pass_frame_up+0xdd/0xdd [bridge]
     2023-08-14,16:52:33,Warning,tower,kern,kernel,br_nf_pre_routing_finish+0x2c1/0x2ec [br_netfilter]
     2023-08-14,16:52:33,Warning,tower,kern,kernel,? br_pass_frame_up+0xdd/0xdd [bridge]
     2023-08-14,16:52:33,Warning,tower,kern,kernel,br_nf_hook_thresh+0xe2/0x109 [br_netfilter]
     2023-08-14,16:52:33,Warning,tower,kern,kernel,? br_pass_frame_up+0xdd/0xdd [bridge]
     2023-08-14,16:52:33,Warning,tower,kern,kernel,br_handle_frame_finish+0x438/0x472 [bridge]
     2023-08-14,16:52:33,Warning,tower,kern,kernel,netif_receive_skb+0xbf/0x127
     2023-08-14,16:52:33,Warning,tower,kern,kernel,__netif_receive_skb_one_core+0x77/0x9c
     2023-08-14,16:52:33,Warning,tower,kern,kernel,? ip_rcv_finish_core.constprop.0+0x3e8/0x3e8
     2023-08-14,16:52:33,Warning,tower,kern,kernel,NF_HOOK.constprop.0+0x79/0xd9
     2023-08-14,16:52:33,Warning,tower,kern,kernel,? ip_rcv_finish_core.constprop.0+0x3e8/0x3e8
     2023-08-14,16:52:33,Warning,tower,kern,kernel,nf_hook_slow+0x3a/0x96
     2023-08-14,16:52:33,Warning,tower,kern,kernel,ip_sabotage_in+0x4f/0x60 [br_netfilter]
     2023-08-14,16:52:33,Warning,tower,kern,kernel,? ip_protocol_deliver_rcu+0x164/0x164
     2023-08-14,16:52:33,Warning,tower,kern,kernel,NF_HOOK.constprop.0+0x79/0xd9
     2023-08-14,16:52:33,Warning,tower,kern,kernel,? ip_protocol_deliver_rcu+0x164/0x164
     2023-08-14,16:52:33,Warning,tower,kern,kernel,nf_hook_slow+0x3a/0x96
     2023-08-14,16:52:33,Warning,tower,kern,kernel,nf_nat_ipv4_local_in+0x2a/0xaa [nf_nat]
     2023-08-14,16:52:33,Warning,tower,kern,kernel,nf_nat_inet_fn+0xc0/0x1a8 [nf_nat]
     2023-08-14,16:52:33,Warning,tower,kern,kernel,__nf_nat_alloc_null_binding+0x66/0x81 [nf_nat]
     2023-08-14,16:52:33,Warning,tower,kern,kernel,? xt_write_recseq_end+0xf/0x1c [ip_tables]
     2023-08-14,16:52:33,Warning,tower,kern,kernel,? ipt_do_table+0x57a/0x5bf [ip_tables]
     2023-08-14,16:52:33,Warning,tower,kern,kernel,? __local_bh_enable_ip+0x56/0x6b
     2023-08-14,16:52:33,Warning,tower,kern,kernel,? xt_write_recseq_end+0xf/0x1c [ip_tables]
     2023-08-14,16:52:33,Warning,tower,kern,kernel,? nf_nat_setup_info+0x44/0x7d1 [nf_nat]
     2023-08-14,16:52:33,Warning,tower,kern,kernel,? nf_nat_setup_info+0x8c/0x7d1 [nf_nat]
     2023-08-14,16:52:33,Warning,tower,kern,kernel,? asm_exc_invalid_op+0x16/0x20
     2023-08-14,16:52:33,Warning,tower,kern,kernel,? exc_invalid_op+0x13/0x60
     2023-08-14,16:52:33,Warning,tower,kern,kernel,? handle_bug+0x41/0x6f
     2023-08-14,16:52:33,Warning,tower,kern,kernel,? nf_nat_setup_info+0x8c/0x7d1 [nf_nat]
     2023-08-14,16:52:33,Warning,tower,kern,kernel,? report_bug+0x109/0x17e
     2023-08-14,16:52:33,Warning,tower,kern,kernel,? __warn+0xab/0x122
     2023-08-14,16:52:33,Warning,tower,kern,kernel,<TASK>
     2023-08-14,16:52:33,Warning,tower,kern,kernel,Call Trace:
     2023-08-14,16:52:33,Warning,tower,kern,kernel,PKRU: 55555554
     2023-08-14,16:52:33,Warning,tower,kern,kernel,CR2: 00001513532c3000 CR3: 000000016dc78001 CR4: 0000000000770ee0
     2023-08-14,16:52:33,Warning,tower,kern,kernel,CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
     2023-08-14,16:52:33,Warning,tower,kern,kernel,FS: 0000151363e8ab40(0000) GS:ffff8884a0500000(0000) knlGS:0000000000000000
     2023-08-14,16:52:33,Warning,tower,kern,kernel,R13: 0000000000000000 R14: ffffc90000c1b8c0 R15: 0000000000000001
     2023-08-14,16:52:33,Warning,tower,kern,kernel,R10: 0000000000000158 R11: 0000000000000000 R12: ffffc90000c1b7dc
     2023-08-14,16:52:33,Warning,tower,kern,kernel,RBP: ffffc90000c1b7c0 R08: 000000005314a8c0 R09: 0000000000000000
     2023-08-14,16:52:33,Warning,tower,kern,kernel,RDX: 0000000000000000 RSI: ffffc90000c1b7dc RDI: ffff88820161ff00
     2023-08-14,16:52:33,Warning,tower,kern,kernel,RAX: 0000000000000180 RBX: ffff88820161ff00 RCX: ffff88810604f840
     2023-08-14,16:52:33,Warning,tower,kern,kernel,RSP: 0000:ffffc90000c1b6f8 EFLAGS: 00010282
     2023-08-14,16:52:33,Warning,tower,kern,kernel,Code: a8 80 75 26 48 8d 73 58 48 8d 7c 24 20 e8 18 db fc ff 48 8d 43 0c 4c 8b bb 88 00 00 00 48 89 44 24 18 eb 54 0f ba e0 08 73 07 <0f> 0b e9 75 06 00 00 48 8d 73 58 48 8d 7c 24 20 e8 eb da fc ff 48
     2023-08-14,16:52:33,Warning,tower,kern,kernel,RIP: 0010:nf_nat_setup_info+0x8c/0x7d1 [nf_nat]
     2023-08-14,16:52:33,Warning,tower,kern,kernel,"Hardware name: SKYLINE SKYLINE HM570 ITX/SKYLINE HM570 ITX, BIOS 5.19 05/29/2023"
     2023-08-14,16:52:33,Warning,tower,kern,kernel,CPU: 4 PID: 3351 Comm: Plex Media Scan Tainted: P U W O 6.1.38-Unraid #2
     2023-08-14,16:52:33,Warning,tower,kern,kernel,sha512_ssse3 aesni_intel crypto_simd mei_hdcp mei_pxp cryptd wmi_bmof rapl nvme intel_gtt intel_cstate mei_me ahci agpgart i2c_i801 input_leds sr_mod intel_uncore i2c_smbus mei libahci nvme_core cdrom joydev led_class i2c_core syscopyarea sysfillrect sysimgblt thermal fb_sys_fops fan tpm_crb tpm_tis tpm_tis_core video tpm wmi backlight int3400_thermal intel_pmc_core acpi_thermal_rel acpi_pad acpi_tad button unix [last unloaded: md_mod]
     2023-08-14,16:52:33,Warning,tower,kern,kernel,Modules linked in: md_mod xt_connmark xt_mark iptable_mangle xt_comment iptable_raw wireguard curve25519_x86_64 libcurve25519_generic libchacha20poly1305 chacha_x86_64 poly1305_x86_64 ip6_udp_tunnel udp_tunnel libchacha xt_nat xt_tcpudp macvlan xt_conntrack xt_MASQUERADE nf_conntrack_netlink nfnetlink xfrm_user xfrm_algo iptable_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 xt_addrtype br_netfilter dm_crypt dm_mod nfsd auth_rpcgss oid_registry lockd grace sunrpc tcp_diag inet_diag corefreqk(O) ip6table_filter ip6_tables iptable_filter ip_tables x_tables efivarfs af_packet bridge 8021q garp mrp stp llc bonding tls mlx4_en mlx4_core r8169 realtek zfs(PO) zunicode(PO) zzstd(O) i915 zlua(O) zavl(PO) icp(PO) intel_rapl_msr intel_rapl_common x86_pkg_temp_thermal intel_powerclamp coretemp zcommon(PO) kvm_intel znvpair(PO) iosf_mbi drm_buddy i2c_algo_bit ttm spl(O) drm_display_helper kvm drm_kms_helper drm crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel
     2023-08-14,16:52:33,Warning,tower,kern,kernel,WARNING: CPU: 4 PID: 3351 at net/netfilter/nf_nat_core.c:594 nf_nat_setup_info+0x8c/0x7d1 [nf_nat]
     2023-08-14,16:52:33,Warning,tower,kern,kernel,------------[ cut here ]------------
     ```

     Any guidance? Attached diagnostics... the kernel trace is also shown here (from my Synology NAS logs collector). tower-diagnostics-20230814-2256.zip
  16. I wanted to check in: how is the debugging going? Are we any closer to a fix for this macvlan issue? My Unraid server started crashing with it today.
  17. I'm having similar issues to what you described. I have a Sabrent 5-disk USB 3.2 enclosure for my Unraid server, and it randomly disconnects during heavy I/O. Did you find a solution? I tried a few things, including powertop (validating that no USB power-saving setting was enabled). I don't think the "usb-storage quirks" worked for me (syntax sketched below). I think we both have the same chipset `174c:55aa ASMedia Technology Inc. ASM1051E SATA 6Gb/s bridge, ASM1053E SATA 6Gb/s bridge, ASM1153 SATA 3Gb/s bridge, ASM1153E SATA 6Gb/s bridge` https://sabrent.com/community/xenforum/topic/106596/docking-station-on-linux-random-disconnects-with-hubextportstatus-failed-err-71-errors-unraid
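     For anyone else trying the quirks route, the standard syntax is a kernel boot parameter appended to the append line in /boot/syslinux/syslinux.cfg; the trailing `u` flag tells usb-storage to ignore UAS for that VID:PID:

     ```
     usb-storage.quirks=174c:55aa:u
     ```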
  18. New here; it looks like the original testdasi image has been abandoned in terms of support?
  19. I installed this container trying to get it to work with Intel Quick Sync and to replace the linuxserver one; I found your container to be missing `/usr/lib/jellyfin-ffmpeg/vainfo --display drm --device /dev/dri/renderD128`, which is mentioned in Step 7 of https://jellyfin.org/docs/general/administration/hardware-acceleration/intel/ as a verification that the GPU and its capabilities are detected. I wasn't able to get my Intel iGPU transcoding to work with your image; linuxserver works fine... not sure why "vainfo" doesn't exist in your image but it does in the other ones.
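     In case it helps anyone reproduce, the check can be run against a live container like this (a sketch; container name assumed to be `jellyfin`):

     ```bash
     docker exec -it jellyfin /usr/lib/jellyfin-ffmpeg/vainfo --display drm --device /dev/dri/renderD128
     ```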
  20. @Mcklaren just flagging this in case it was missed; I noticed you commented on all the other requests but mine... it would be fantastic to have this available in nerdtools for convenience.
  21. ZFS Master plugin. I wonder if this is the unsung root cause of all those reported issues. Hope someone reports back to confirm whether this may be it.
  22. Yeah, it's mentioned in a few places. Most recently in this post from 24 hours ago, where someone else called out ZFS as not letting their disks go to sleep. I guess my question for the Unraid devs (and others with more experience on it) is: is this normal and expected? Because the blog post said that disks can be spun down with "Hybrid" ZFS array disks in Unraid.
  23. Wondering the same. I think I am seeing this bug on 6.12.3. I formatted a new disk and converted it from zfs to xfs; fresh off restarting the array and formatting this disk, the GUI and `df -h` report the same ~117GB used on an empty disk with no files or folders.

     ```
     root@Tower:~/openSeaChest-v23.03.1-linux-x86_64-manylinux# df -h
     Filesystem         Size  Used Avail Use% Mounted on
     /dev/mapper/md3p1   17T  117G   17T   1% /mnt/disk3
     root@Tower:~/openSeaChest-v23.03.1-linux-x86_64-manylinux# cd /mnt/disk3
     root@Tower:/mnt/disk3# ncdu
     root@Tower:/mnt/disk3# ls -lah
     total 0
     drwxrwxrwx  2 nobody users   6 Aug  8 23:18 ./
     drwxr-xr-x 12 root   root  240 Aug  8 23:08 ../
     root@Tower:/mnt/disk3# du -sh
     0       .
     ```

     More details:

     ```
     root@Tower:~# xfs_info /mnt/disk3/
     meta-data=/dev/md3p1             isize=512    agcount=17, agsize=268435455 blks
              =                       sectsz=512   attr=2, projid32bit=1
              =                       crc=1        finobt=1, sparse=1, rmapbt=0
              =                       reflink=1    bigtime=1 inobtcount=1 nrext64=0
     data     =                       bsize=4096   blocks=4394582003, imaxpct=5
              =                       sunit=0      swidth=0 blks
     naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
     log      =internal log           bsize=4096   blocks=521728, version=2
              =                       sectsz=512   sunit=0 blks, lazy-count=1
     realtime =none                   extsz=4096   blocks=0, rtextents=0
     root@Tower:~# parted /dev/sdd
     GNU Parted 3.6
     Using /dev/sdd
     Welcome to GNU Parted! Type 'help' to view a list of commands.
     (parted) print
     Model: ATA WDC WUH721818AL (scsi)
     Disk /dev/sdd: 18.0TB
     Sector size (logical/physical): 512B/4096B
     Partition Table: gpt
     Disk Flags:

     Number  Start   End     Size    File system  Name  Flags
      1      32.8kB  18.0TB  18.0TB  xfs
     (parted) exit
     ```
  24. There's a discussion on reddit suggesting that using ZFS on array disks may cause the disks to never go to sleep (random reads, seeks?), and that XFS or other filesystems are better for power efficiency. How true are those statements? This post from July 24, 2023, https://unraid.net/blog/zfs-guide, has a confusing mention. For NON-ZFS filesystems it lists as a pro: "The regular array offers excellent power efficiency as idle disks can be powered or spun down. So, if a movie is playing from disk 1, all the other disks, if not being used, can be spun down." But then, for the "Hybrid array" (which is basically an array disk formatted as zfs), it says: "Idle disks can be powered down to conserve energy when unused." This seems to contradict the redditor's post, so I'm looking to gain some insight into people's observations when using hybrid/zfs array disks: do they spin down as expected, like any btrfs or xfs array disk? (A quick way to check actual drive state is below.) Thanks!
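     For anyone wanting to report observations, a simple way to sample the actual power state of a drive over time (a sketch; hypothetical device name, and Unraid's own spin-down timer still applies):

     ```bash
     # "standby" means spun down, "active/idle" means spinning
     while true; do date; hdparm -C /dev/sdd; sleep 300; done
     ```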