DanTheMan827

Members · Posts: 45

Everything posted by DanTheMan827

  1. Absolutely! The Web Push API is supported by all major browsers now, so it'd be one protocol to implement. It would be even better with the ability to specify the notification level per subscription.
  2. Manual SMB export configuration would allow specifying certain advanced SMB options, like setting the "fruit" options of an export on a per-share basis, or setting the path to the ZFS dataset rather than the user-share version. Using the ZFS dataset would let SMB report the correct space if you set a reservation and quota on the dataset. You could have a ZFS dataset on a drive dedicated to Time Machine, or even just a non-Apple share you want to configure for File History on Windows, without it eating all your space:

         zfs create disk1/mybackup
         zfs set reservation=1T disk1/mybackup
         zfs set quota=1T disk1/mybackup

     Maybe it would work anyway, but I think a non-exclusive version of this share would have issues with free space calculation. A rough sketch of the kind of export I mean is below.
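     Something along these lines is what I have in mind; the share name, dataset path, and user are made up, and the fruit options are only examples of the per-share settings a manual export would unlock (this isn't anything Unraid generates today):

         [mybackup]
             path = /mnt/disk1/mybackup
             browseable = yes
             writeable = yes
             valid users = dan
             vfs objects = catia fruit streams_xattr
             fruit:time machine = yes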
  3. It would be very nice if there was an option in the SMB section of the share for manual configuration of that export, something like "Export: Yes (Manual)". Selecting that would remove all of the Unraid-provided SMB configuration for the share and replace it with a text area containing just the raw SMB configuration. This can already be done by not exporting the share and then adding the extra configuration under the SMB settings, but that's much less user friendly and requires stopping the array in order to change it.
  4. I'm starting to think it's more likely bad cables than RAM, but I'll let it run over the weekend and see what that says.
  5. I haven't run one recently, but when I swapped the motherboard a while back I ran one for a few hours without issue. I've had a couple of freezes and less-than-graceful reboots, so it's possible the corruption on the XFS/btrfs drives is partly related to that, but the ZFS corruption is harder to explain. There was a weird thing that happened yesterday: I saw the corruption, so I went to stop the Docker containers manually as I typically do, but they weren't stopping, even after the 10s timeout. I clicked reboot in the dashboard, and the console eventually gave a force-reboot message but did nothing, even after a few minutes, so I pushed reset. Unless ZFS has a considerable RAM write cache, I have no idea how that would have caused corruption on the data it did, unless it gets written through after the fact. EDIT: a btrfs online scrub of both disks 3 and 4 found "no errors". The ZFS scrub wasn't so lucky: quite a few unrecoverable errors in files, although most of it can be replaced and the rest should be in the backup (RAID IS NOT A BACKUP!). I was able to get this from dmesg, and it definitely seems like something isn't right.
     [92965.624548] general protection fault, probably for non-canonical address 0x26073c0028e960: 0000 [#1] PREEMPT SMP PTI [92965.625102] CPU: 2 PID: 16903 Comm: chown Tainted: P W O 6.1.38-Unraid #2 [92965.625661] Hardware name: To Be Filled By O.E.M. To Be Filled By O.E.M./Z97 Extreme6, BIOS P2.70 05/17/2016 [92965.626187] RIP: 0010:native_queued_spin_lock_slowpath+0x152/0x1cf [92965.626773] Code: b9 01 00 00 00 f0 0f b1 0b 74 76 eb cc c1 ee 12 83 e0 03 ff ce 48 c1 e0 05 48 63 f6 48 05 80 e1 02 00 48 03 04 f5 a0 1a 12 82 <48> 89 10 8b 42 08 85 c0 75 04 f3 90 eb f5 48 8b 32 48 85 f6 74 bc [92965.627958] RSP: 0018:ffffc9003b56fcd0 EFLAGS: 00010206 [92965.628498] RAX: 0026073c0028e960 RBX: ffff888514f13e98 RCX: 00000000000c0000 [92965.629095] RDX: ffff88881feae180 RSI: 00000000000020a4 RDI: ffff888514f13e98 [92965.629657] RBP: 0000000000000002 R08: 3130643945353639 R09: 3130643945353639 [92965.630215] R10: d513810fa5295ecc R11: 0000000000000fe0 R12: ffff88881feae180 [92965.630755] R13: 0000000000000000 R14: 0000000000000040 R15: 0000000000000064 [92965.631334] FS: 000014fca83d6740(0000) GS:ffff88881fe80000(0000) knlGS:0000000000000000 [92965.631970] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [92965.632623] CR2: 000015002ecb3000 CR3: 000000040bac4001 CR4: 00000000001706e0 [92965.633258] Call Trace: [92965.633819] <TASK> [92965.634413] ? __die_body+0x1a/0x5c [92965.635035] ? die_addr+0x38/0x51 [92965.635592] ? exc_general_protection+0x30f/0x345 [92965.636176] ? asm_exc_general_protection+0x22/0x30 [92965.636711] ? native_queued_spin_lock_slowpath+0x152/0x1cf [92965.637275] do_raw_spin_lock+0x14/0x1a [92965.637920] lockref_get_not_dead+0x41/0x64 [92965.638445] __legitimize_path+0x38/0x4f [92965.639053] try_to_unlazy+0x3a/0x7a [92965.639629] complete_walk+0x48/0xa3 [92965.640190] path_lookupat+0x8e/0xfe [92965.640763] filename_lookup+0x5f/0xbc [92965.641321] ? notify_change+0x35f/0x397 [92965.641857] ? getname_flags+0x29/0x152 [92965.642430] ? 
kmem_cache_alloc+0x122/0x14d [92965.642984] user_path_at_empty+0x37/0x4f [92965.643536] do_fchownat+0x6a/0xda [92965.644083] __x64_sys_fchownat+0x1b/0x22 [92965.644599] do_syscall_64+0x6b/0x81 [92965.645184] entry_SYSCALL_64_after_hwframe+0x63/0xcd [92965.645675] RIP: 0033:0x14fca84ddcea [92965.646208] Code: 48 8b 0d 31 31 0e 00 f7 d8 64 89 01 48 83 c8 ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 49 89 ca b8 04 01 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d fe 30 0e 00 f7 d8 64 89 01 48 [92965.647298] RSP: 002b:00007fffb8a5e368 EFLAGS: 00000246 ORIG_RAX: 0000000000000104 [92965.647902] RAX: ffffffffffffffda RBX: 0000000000000009 RCX: 000014fca84ddcea [92965.648430] RDX: 0000000000000063 RSI: 0000000000433190 RDI: 000000000000000f [92965.649093] RBP: 0000000000433090 R08: 0000000000000100 R09: 0000000000432f70 [92965.649648] R10: 0000000000000064 R11: 0000000000000246 R12: 0000000000433100 [92965.650238] R13: 0000000000433190 R14: 000000000000000f R15: 00007fffb8a5e560 [92965.650820] </TASK> [92965.651408] Modules linked in: ipvlan vhost_net tun vhost tap kvm_intel kvm md_mod af_packet udp_diag xt_nat xt_CHECKSUM ipt_REJECT nf_reject_ipv4 xt_tcpudp ip6table_mangle ip6table_nat iptable_mangle vhost_iotlb macvlan veth xt_conntrack nf_conntrack_netlink nfnetlink xfrm_user xfrm_algo xt_addrtype br_netfilter xfs zfs(PO) zunicode(PO) zzstd(O) zlua(O) zavl(PO) icp(PO) zcommon(PO) znvpair(PO) spl(O) tcp_diag inet_diag iptable_nat xt_MASQUERADE nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 wireguard curve25519_x86_64 libcurve25519_generic libchacha20poly1305 chacha_x86_64 poly1305_x86_64 ip6_udp_tunnel udp_tunnel libchacha ip6table_filter ip6_tables iptable_filter ip_tables x_tables bridge stp llc e1000e r8169 realtek i915 intel_rapl_msr intel_rapl_common x86_pkg_temp_thermal intel_powerclamp coretemp iosf_mbi drm_buddy i2c_algo_bit ttm drm_display_helper drm_kms_helper drm crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel sha512_ssse3 aesni_intel mei_hdcp mei_pxp [92965.651452] mxm_wmi crypto_simd intel_gtt cryptd i2c_i801 agpgart nvme rapl syscopyarea intel_cstate mei_me i2c_smbus ahci sysfillrect intel_uncore i2c_core sysimgblt nvme_core libahci mei fb_sys_fops video wmi backlight acpi_pad button unix [last unloaded: md_mod] [92965.657192] ---[ end trace 0000000000000000 ]--- [92965.657883] RIP: 0010:native_queued_spin_lock_slowpath+0x152/0x1cf [92965.658569] Code: b9 01 00 00 00 f0 0f b1 0b 74 76 eb cc c1 ee 12 83 e0 03 ff ce 48 c1 e0 05 48 63 f6 48 05 80 e1 02 00 48 03 04 f5 a0 1a 12 82 <48> 89 10 8b 42 08 85 c0 75 04 f3 90 eb f5 48 8b 32 48 85 f6 74 bc [92965.660084] RSP: 0018:ffffc9003b56fcd0 EFLAGS: 00010206 [92965.660865] RAX: 0026073c0028e960 RBX: ffff888514f13e98 RCX: 00000000000c0000 [92965.661629] RDX: ffff88881feae180 RSI: 00000000000020a4 RDI: ffff888514f13e98 [92965.662375] RBP: 0000000000000002 R08: 3130643945353639 R09: 3130643945353639 [92965.663165] R10: d513810fa5295ecc R11: 0000000000000fe0 R12: ffff88881feae180 [92965.663896] R13: 0000000000000000 R14: 0000000000000040 R15: 0000000000000064 [92965.664567] FS: 000014fca83d6740(0000) GS:ffff88881fe80000(0000) knlGS:0000000000000000 [92965.665241] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [92965.665988] CR2: 000015002ecb3000 CR3: 000000040bac4001 CR4: 00000000001706e0 [92965.666733] note: chown[16903] exited with preempt_count 1 [93094.273503] XFS (md2p1): Metadata corruption detected at xfs_dinode_verify+0xa0/0x732 [xfs], inode 0x800da93f dinode [93094.274201] XFS (md2p1): Unmount and run 
xfs_repair [93094.274820] XFS (md2p1): First 128 bytes of corrupted metadata buffer: [93094.275480] 00000000: 49 4e 81 b6 03 02 00 00 00 00 00 63 00 00 00 64 IN.........c...d [93094.276094] 00000010: 00 00 00 01 00 00 00 00 00 00 00 00 00 00 00 00 ................ [93094.276713] 00000020: 60 8b 3f 83 24 6c d8 28 52 de 22 8f 00 00 00 00 `.?.$l.(R."..... [93094.277356] 00000030: 64 78 31 9a 24 5d 3d f9 00 00 00 00 16 26 04 2b dx1.$]=......&.+ [93094.277979] 00000040: 00 00 00 00 00 01 62 61 00 00 00 00 00 00 00 01 ......ba........ [93094.278624] 00000050: 00 00 18 01 00 00 00 00 00 00 00 00 72 bf 2e 02 ............r... [93094.279254] 00000060: ff ff ff ff 36 12 d1 40 00 00 00 00 00 00 00 2a ....6..@.......* [93094.279888] 00000070: 00 00 00 05 00 20 ec 6e 00 00 00 00 00 00 00 00 ..... .n........ [93094.493924] XFS (md2p1): Metadata corruption detected at xfs_dinode_verify+0xa0/0x732 [xfs], inode 0x800da93f dinode [93094.494672] XFS (md2p1): Unmount and run xfs_repair [93094.495318] XFS (md2p1): First 128 bytes of corrupted metadata buffer: [93094.496005] 00000000: 49 4e 81 b6 03 02 00 00 00 00 00 63 00 00 00 64 IN.........c...d [93094.496741] 00000010: 00 00 00 01 00 00 00 00 00 00 00 00 00 00 00 00 ................ [93094.497398] 00000020: 60 8b 3f 83 24 6c d8 28 52 de 22 8f 00 00 00 00 `.?.$l.(R."..... [93094.498092] 00000030: 64 78 31 9a 24 5d 3d f9 00 00 00 00 16 26 04 2b dx1.$]=......&.+ [93094.498830] 00000040: 00 00 00 00 00 01 62 61 00 00 00 00 00 00 00 01 ......ba........ [93094.499589] 00000050: 00 00 18 01 00 00 00 00 00 00 00 00 72 bf 2e 02 ............r... [93094.501933] 00000060: ff ff ff ff 36 12 d1 40 00 00 00 00 00 00 00 2a ....6..@.......* [93094.503754] 00000070: 00 00 00 05 00 20 ec 6e 00 00 00 00 00 00 00 00 ..... .n........ [101078.953627] XFS (md2p1): Metadata corruption detected at xfs_dinode_verify+0xa0/0x732 [xfs], inode 0x800da93f dinode [101078.954459] XFS (md2p1): Unmount and run xfs_repair [101078.955157] XFS (md2p1): First 128 bytes of corrupted metadata buffer: [101078.955867] 00000000: 49 4e 81 b6 03 02 00 00 00 00 00 63 00 00 00 64 IN.........c...d [101078.956601] 00000010: 00 00 00 01 00 00 00 00 00 00 00 00 00 00 00 00 ................ [101078.957318] 00000020: 60 8b 3f 83 24 6c d8 28 52 de 22 8f 00 00 00 00 `.?.$l.(R."..... [101078.958062] 00000030: 64 78 31 9a 24 5d 3d f9 00 00 00 00 16 26 04 2b dx1.$]=......&.+ [101078.958836] 00000040: 00 00 00 00 00 01 62 61 00 00 00 00 00 00 00 01 ......ba........ [101078.959586] 00000050: 00 00 18 01 00 00 00 00 00 00 00 00 72 bf 2e 02 ............r... [101078.960488] 00000060: ff ff ff ff 36 12 d1 40 00 00 00 00 00 00 00 2a ....6..@.......* [101078.961285] 00000070: 00 00 00 05 00 20 ec 6e 00 00 00 00 00 00 00 00 ..... .n........ [101450.060672] BUG: unable to handle page fault for address: ffffffff8c406226 [101450.061398] #PF: supervisor write access in kernel mode [101450.062199] #PF: error_code(0x0002) - not-present page [101450.062928] PGD 220e067 P4D 220e067 PUD 220f063 PMD 0 [101450.063639] Oops: 0002 [#2] PREEMPT SMP PTI [101450.064346] CPU: 0 PID: 21053 Comm: du Tainted: P D W O 6.1.38-Unraid #2 [101450.065121] Hardware name: To Be Filled By O.E.M. 
To Be Filled By O.E.M./Z97 Extreme6, BIOS P2.70 05/17/2016 [101450.065878] RIP: 0010:native_queued_spin_lock_slowpath+0x152/0x1cf [101450.066584] Code: b9 01 00 00 00 f0 0f b1 0b 74 76 eb cc c1 ee 12 83 e0 03 ff ce 48 c1 e0 05 48 63 f6 48 05 80 e1 02 00 48 03 04 f5 a0 1a 12 82 <48> 89 10 8b 42 08 85 c0 75 04 f3 90 eb f5 48 8b 32 48 85 f6 74 bc [101450.068025] RSP: 0018:ffffc90000637c10 EFLAGS: 00010282 [101450.068734] RAX: ffffffff8c406226 RBX: ffff888514f13e98 RCX: 0000000000040000 [101450.069438] RDX: ffff88881fe2e180 RSI: 00000000000031da RDI: ffff888514f13e98 [101450.070168] RBP: 0000000000000000 R08: 3130643945353639 R09: 3130643945353639 [101450.070866] R10: d513810fa5295ecc R11: 0000000000000fe0 R12: ffff88881fe2e180 [101450.071551] R13: 0000000000000000 R14: 0000000000000040 R15: ffff88810b943000 [101450.072226] FS: 0000145bb2911740(0000) GS:ffff88881fe00000(0000) knlGS:0000000000000000 [101450.072899] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [101450.073660] CR2: ffffffff8c406226 CR3: 0000000600314003 CR4: 00000000001706f0 [101450.074485] Call Trace: [101450.075225] <TASK> [101450.075927] ? __die_body+0x1a/0x5c [101450.076622] ? page_fault_oops+0x329/0x376 [101450.077275] ? fixup_exception+0x22/0x24b [101450.077987] ? exc_page_fault+0xf4/0x11d [101450.078648] ? asm_exc_page_fault+0x22/0x30 [101450.079275] ? native_queued_spin_lock_slowpath+0x152/0x1cf [101450.079899] do_raw_spin_lock+0x14/0x1a [101450.080516] lockref_get_not_dead+0x41/0x64 [101450.081131] __legitimize_path+0x38/0x4f [101450.081801] try_to_unlazy+0x3a/0x7a [101450.082402] complete_walk+0x48/0xa3 [101450.083003] path_lookupat+0x8e/0xfe [101450.083596] filename_lookup+0x5f/0xbc [101450.084187] ? xfs_bmap_last_offset+0x8a/0xc2 [xfs] [101450.084827] ? mem_cgroup_from_slab_obj+0x1e/0x9a [101450.085408] ? _raw_spin_unlock+0x14/0x29 [101450.085980] ? list_lru_add+0xe4/0x102 [101450.086545] ? slab_post_alloc_hook+0x4d/0x15e [101450.087112] vfs_statx+0x62/0x126 [101450.087672] vfs_fstatat+0x46/0x62 [101450.088235] __do_sys_newfstatat+0x26/0x5c [101450.088786] ? fpregs_assert_state_consistent+0x20/0x44 [101450.089332] ? 
exit_to_user_mode_prepare+0xd3/0x10d [101450.089878] do_syscall_64+0x6b/0x81 [101450.090418] entry_SYSCALL_64_after_hwframe+0x63/0xcd [101450.090958] RIP: 0033:0x145bb2a171ca [101450.091490] Code: 48 89 f2 b9 00 01 00 00 48 89 fe bf 9c ff ff ff e9 0b 00 00 00 66 2e 0f 1f 84 00 00 00 00 00 90 41 89 ca b8 06 01 00 00 0f 05 <3d> 00 f0 ff ff 77 07 31 c0 c3 0f 1f 40 00 48 8b 15 19 4c 0e 00 f7 [101450.092600] RSP: 002b:00007ffee6800298 EFLAGS: 00000246 ORIG_RAX: 0000000000000106 [101450.093226] RAX: ffffffffffffffda RBX: 0000000000430e10 RCX: 0000145bb2a171ca [101450.093794] RDX: 0000000000430e80 RSI: 0000000000430f10 RDI: 000000000000000b [101450.094460] RBP: 0000000000430e80 R08: 0000000000000007 R09: 0000000000430cf0 [101450.095013] R10: 0000000000000100 R11: 0000000000000246 R12: 000000000041b4b0 [101450.095560] R13: 0000000000000001 R14: 0000000000000000 R15: 0000000051976133 [101450.096109] </TASK> [101450.096650] Modules linked in: ipvlan vhost_net tun vhost tap kvm_intel kvm md_mod af_packet udp_diag xt_nat xt_CHECKSUM ipt_REJECT nf_reject_ipv4 xt_tcpudp ip6table_mangle ip6table_nat iptable_mangle vhost_iotlb macvlan veth xt_conntrack nf_conntrack_netlink nfnetlink xfrm_user xfrm_algo xt_addrtype br_netfilter xfs zfs(PO) zunicode(PO) zzstd(O) zlua(O) zavl(PO) icp(PO) zcommon(PO) znvpair(PO) spl(O) tcp_diag inet_diag iptable_nat xt_MASQUERADE nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 wireguard curve25519_x86_64 libcurve25519_generic libchacha20poly1305 chacha_x86_64 poly1305_x86_64 ip6_udp_tunnel udp_tunnel libchacha ip6table_filter ip6_tables iptable_filter ip_tables x_tables bridge stp llc e1000e r8169 realtek i915 intel_rapl_msr intel_rapl_common x86_pkg_temp_thermal intel_powerclamp coretemp iosf_mbi drm_buddy i2c_algo_bit ttm drm_display_helper drm_kms_helper drm crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel sha512_ssse3 aesni_intel mei_hdcp mei_pxp [101450.096694] mxm_wmi crypto_simd intel_gtt cryptd i2c_i801 agpgart nvme rapl syscopyarea intel_cstate mei_me i2c_smbus ahci sysfillrect intel_uncore i2c_core sysimgblt nvme_core libahci mei fb_sys_fops video wmi backlight acpi_pad button unix [last unloaded: md_mod] [101450.102092] CR2: ffffffff8c406226 [101450.102723] ---[ end trace 0000000000000000 ]--- [101450.103331] RIP: 0010:native_queued_spin_lock_slowpath+0x152/0x1cf [101450.103940] Code: b9 01 00 00 00 f0 0f b1 0b 74 76 eb cc c1 ee 12 83 e0 03 ff ce 48 c1 e0 05 48 63 f6 48 05 80 e1 02 00 48 03 04 f5 a0 1a 12 82 <48> 89 10 8b 42 08 85 c0 75 04 f3 90 eb f5 48 8b 32 48 85 f6 74 bc [101450.105246] RSP: 0018:ffffc9003b56fcd0 EFLAGS: 00010206 [101450.105874] RAX: 0026073c0028e960 RBX: ffff888514f13e98 RCX: 00000000000c0000 [101450.106494] RDX: ffff88881feae180 RSI: 00000000000020a4 RDI: ffff888514f13e98 [101450.107097] RBP: 0000000000000002 R08: 3130643945353639 R09: 3130643945353639 [101450.107704] R10: d513810fa5295ecc R11: 0000000000000fe0 R12: ffff88881feae180 [101450.108397] R13: 0000000000000000 R14: 0000000000000040 R15: 0000000000000064 [101450.109143] FS: 0000145bb2911740(0000) GS:ffff88881fe00000(0000) knlGS:0000000000000000 [101450.109855] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [101450.110509] CR2: ffffffff8c406226 CR3: 0000000600314003 CR4: 00000000001706f0 [101450.111146] note: du[21053] exited with irqs disabled [101450.111852] note: du[21053] exited with preempt_count 1 [101619.730418] general protection fault, probably for non-canonical address 0x2bc45ff2df903: 0000 [#3] PREEMPT SMP PTI [101619.731073] CPU: 1 PID: 5656 Comm: du 
Tainted: P D W O 6.1.38-Unraid #2 [101619.731717] Hardware name: To Be Filled By O.E.M. To Be Filled By O.E.M./Z97 Extreme6, BIOS P2.70 05/17/2016 [101619.732349] RIP: 0010:native_queued_spin_lock_slowpath+0x152/0x1cf [101619.733030] Code: b9 01 00 00 00 f0 0f b1 0b 74 76 eb cc c1 ee 12 83 e0 03 ff ce 48 c1 e0 05 48 63 f6 48 05 80 e1 02 00 48 03 04 f5 a0 1a 12 82 <48> 89 10 8b 42 08 85 c0 75 04 f3 90 eb f5 48 8b 32 48 85 f6 74 bc [101619.734418] RSP: 0018:ffffc90011277c10 EFLAGS: 00010206 [101619.735207] RAX: 0002bc45ff2df903 RBX: ffff888514f13e98 RCX: 0000000000080000 [101619.735989] RDX: ffff88881fe6e180 RSI: 0000000000003925 RDI: ffff888514f13e98 [101619.736734] RBP: 0000000000000001 R08: 3130643945353639 R09: 3130643945353639 [101619.737439] R10: d513810fa5295ecc R11: 0000000000000fe0 R12: ffff88881fe6e180 [101619.738168] R13: 0000000000000000 R14: 0000000000000040 R15: ffff888100e6e000 [101619.738907] FS: 000014d52fdff740(0000) GS:ffff88881fe40000(0000) knlGS:0000000000000000 [101619.739633] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [101619.740491] CR2: 0000000000549e90 CR3: 000000070f032006 CR4: 00000000001706e0 [101619.741226] Call Trace: [101619.741972] <TASK> [101619.742716] ? __die_body+0x1a/0x5c [101619.743419] ? die_addr+0x38/0x51 [101619.744189] ? exc_general_protection+0x30f/0x345 [101619.745072] ? asm_exc_general_protection+0x22/0x30 [101619.745898] ? native_queued_spin_lock_slowpath+0x152/0x1cf [101619.746690] do_raw_spin_lock+0x14/0x1a [101619.747409] lockref_get_not_dead+0x41/0x64 [101619.748128] __legitimize_path+0x38/0x4f [101619.748849] try_to_unlazy+0x3a/0x7a [101619.749558] complete_walk+0x48/0xa3 [101619.750294] path_lookupat+0x8e/0xfe [101619.751025] filename_lookup+0x5f/0xbc [101619.751736] ? xfs_bmap_last_offset+0x8a/0xc2 [xfs] [101619.752483] ? slab_post_alloc_hook+0x4d/0x15e [101619.753320] vfs_statx+0x62/0x126 [101619.754079] vfs_fstatat+0x46/0x62 [101619.754740] __do_sys_newfstatat+0x26/0x5c [101619.755403] ? fpregs_assert_state_consistent+0x20/0x44 [101619.756079] ? 
exit_to_user_mode_prepare+0xd3/0x10d [101619.756763] do_syscall_64+0x6b/0x81 [101619.757392] entry_SYSCALL_64_after_hwframe+0x63/0xcd [101619.758053] RIP: 0033:0x14d52ff051ca [101619.758708] Code: 48 89 f2 b9 00 01 00 00 48 89 fe bf 9c ff ff ff e9 0b 00 00 00 66 2e 0f 1f 84 00 00 00 00 00 90 41 89 ca b8 06 01 00 00 0f 05 <3d> 00 f0 ff ff 77 07 31 c0 c3 0f 1f 40 00 48 8b 15 19 4c 0e 00 f7 [101619.760018] RSP: 002b:00007ffdee444618 EFLAGS: 00000246 ORIG_RAX: 0000000000000106 [101619.760695] RAX: ffffffffffffffda RBX: 0000000000430e10 RCX: 000014d52ff051ca [101619.761360] RDX: 0000000000430e80 RSI: 0000000000430f10 RDI: 000000000000000b [101619.762025] RBP: 0000000000430e80 R08: 0000000000000007 R09: 0000000000430cf0 [101619.762694] R10: 0000000000000100 R11: 0000000000000246 R12: 000000000041b4b0 [101619.763335] R13: 0000000000000001 R14: 0000000000000000 R15: 0000000051976133 [101619.763984] </TASK> [101619.764608] Modules linked in: ipvlan vhost_net tun vhost tap kvm_intel kvm md_mod af_packet udp_diag xt_nat xt_CHECKSUM ipt_REJECT nf_reject_ipv4 xt_tcpudp ip6table_mangle ip6table_nat iptable_mangle vhost_iotlb macvlan veth xt_conntrack nf_conntrack_netlink nfnetlink xfrm_user xfrm_algo xt_addrtype br_netfilter xfs zfs(PO) zunicode(PO) zzstd(O) zlua(O) zavl(PO) icp(PO) zcommon(PO) znvpair(PO) spl(O) tcp_diag inet_diag iptable_nat xt_MASQUERADE nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 wireguard curve25519_x86_64 libcurve25519_generic libchacha20poly1305 chacha_x86_64 poly1305_x86_64 ip6_udp_tunnel udp_tunnel libchacha ip6table_filter ip6_tables iptable_filter ip_tables x_tables bridge stp llc e1000e r8169 realtek i915 intel_rapl_msr intel_rapl_common x86_pkg_temp_thermal intel_powerclamp coretemp iosf_mbi drm_buddy i2c_algo_bit ttm drm_display_helper drm_kms_helper drm crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel sha512_ssse3 aesni_intel mei_hdcp mei_pxp [101619.764653] mxm_wmi crypto_simd intel_gtt cryptd i2c_i801 agpgart nvme rapl syscopyarea intel_cstate mei_me i2c_smbus ahci sysfillrect intel_uncore i2c_core sysimgblt nvme_core libahci mei fb_sys_fops video wmi backlight acpi_pad button unix [last unloaded: md_mod] [101619.770451] ---[ end trace 0000000000000000 ]--- [101619.771152] RIP: 0010:native_queued_spin_lock_slowpath+0x152/0x1cf [101619.771822] Code: b9 01 00 00 00 f0 0f b1 0b 74 76 eb cc c1 ee 12 83 e0 03 ff ce 48 c1 e0 05 48 63 f6 48 05 80 e1 02 00 48 03 04 f5 a0 1a 12 82 <48> 89 10 8b 42 08 85 c0 75 04 f3 90 eb f5 48 8b 32 48 85 f6 74 bc [101619.773170] RSP: 0018:ffffc9003b56fcd0 EFLAGS: 00010206 [101619.773890] RAX: 0026073c0028e960 RBX: ffff888514f13e98 RCX: 00000000000c0000 [101619.774603] RDX: ffff88881feae180 RSI: 00000000000020a4 RDI: ffff888514f13e98 [101619.775258] RBP: 0000000000000002 R08: 3130643945353639 R09: 3130643945353639 [101619.775941] R10: d513810fa5295ecc R11: 0000000000000fe0 R12: ffff88881feae180 [101619.776625] R13: 0000000000000000 R14: 0000000000000040 R15: 0000000000000064 [101619.777265] FS: 000014d52fdff740(0000) GS:ffff88881fe40000(0000) knlGS:0000000000000000 [101619.777937] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033 [101619.778607] CR2: 0000000000549e90 CR3: 000000070f032006 CR4: 00000000001706e0 [101619.779268] note: du[5656] exited with preempt_count 1
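     For reference, the scrubs and the repair check mentioned above boil down to commands along these lines; pool and device names are from this box, and the usual Unraid advice is to run xfs_repair against the md device with the array in maintenance mode (with -n it is read-only and changes nothing):

         # ZFS: scrub the pool and list files with unrecoverable errors
         zpool scrub disk6
         zpool status -v disk6

         # btrfs: online scrub of a mounted array disk
         btrfs scrub start -B /mnt/disk3
         btrfs scrub status /mnt/disk3

         # XFS: read-only check of the corrupted disk
         xfs_repair -n /dev/md2p1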
  6. I have 6 data drives in my array, and two are spitting out errors in a maintenance mode check. It started when I noticed the console printing messages, so I went and checked the log, only to be greeted with this. Also shown in that error is a csum error for disk 3, but the maintenance mode check didn't show anything wrong with that particular drive... In any case, suggestions on how to proceed would be very much appreciated. I have a backup, but if there's corruption somewhere on the XFS drive, I have no way of knowing which file(s) are affected; at least on a btrfs drive I can scrub to find the inode and then the file(s) associated with it (example below). I do have enough free space on the drives without known corruption to scatter the contents of drive 4 onto them and re-format it as single-drive ZFS... that was the plan all along, but I really didn't want to deal with FS corruption. Also, on a single-drive ZFS pool I had formatted earlier, I had moved a bunch of data over and things seemed to be going well, until I decided to run a scrub on that pool and was greeted with this:

       pool: disk6
      state: ONLINE
     status: One or more devices has experienced an error resulting in data corruption. Applications may be affected.
     action: Restore the file in question if possible. Otherwise restore the entire pool from backup.
        see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-8A
       scan: scrub in progress since Thu Jul 27 01:14:43 2023
             1.61T scanned at 0B/s, 92.2G issued at 77.1M/s, 1.61T total
             0B repaired, 5.61% done, 05:43:48 to go
     config:

             NAME      STATE     READ WRITE CKSUM
             disk6     ONLINE       0     0     0
               md6p1   ONLINE       0     0    82

     errors: Permanent errors have been detected in the following files:

     Everything should be backed up to CrashPlan, but this really isn't leaving a good taste in my mouth, and it's making me a little concerned.

     disk5.txt disk4-bad.txt disk3.txt disk2-bad.txt disk1.txt
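     To expand on the btrfs part, this is roughly how a checksum error gets mapped back to a file; the inode number and mount point here are just examples:

         # a scrub reports checksum errors in the syslog along with an inode number,
         # which can be resolved back to a path on the mounted filesystem:
         btrfs inspect-internal inode-resolve 12345 /mnt/disk3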
  7. @rocketeer55 I also have a global config with fruit settings... I found this thread on the TrueNAS forums suggesting fruit:metadata be set to netatalk, so maybe that would help? https://www.truenas.com/community/threads/log-filled-with-failed-fruit.80663/
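     If anyone wants to try it, the suggestion from that thread amounts to something like this in the global Samba configuration (untested on my end):

         [global]
             fruit:metadata = netatalk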
  8. I'd also like to chime in with this same issue. I just noticed today, but /var/log/ is flooded with this same error message. I'm running a mixture of XFS/btrfs drives.
  9. So I was copying files to a new array, and after some time I got a message saying there was no space left when trying to access the web UI. I SSH'd into the server and found that tons of files ending with the .end extension had filled up my ramdisk in /var/tmp; after looking at them, they appear to be related to some file integrity check. I'm assuming they're from this plugin, but it seems like a bug for files to just fill up the RAM disk like that... is there something misconfigured on my end?
  10. If I add -i as an extra argument, I can use a command like "docker attach --sig-proxy=false my_container" to get an interactive console of the Docker entry point. It would be convenient for things like game servers if you could get the same functionality by just clicking the icon for the container and then "Console". I tried changing the shell command, but you can only select Shell or Bash; a custom option would also allow the desired functionality, since it could be set to the aforementioned docker attach command (see the sketch below).
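     For anyone after the same thing today, this is the manual version of what I'm describing (the container name is just an example):

         # add "-i" to the container's Extra Parameters so stdin stays open, then:
         docker attach --sig-proxy=false my_container
         # with --sig-proxy=false, Ctrl+C is not forwarded to the entry point,
         # so ending the attach session shouldn't stop the container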
  11. @Squid Yes, that's what I'm referring to, except with /mnt/user instead of just /mnt. Curiously, there's no option to easily do this within the web UI like you can with a disk share. @JonathanM I'm aware that you can move files on disk shares, but a global share (root share, if you will) would remove the risk of data loss, since you'd be doing everything inside of /mnt/user/. The video actually describes a good use case for such a share.
  12. It's possible to manually export /mnt/user with extra Samba options (example below), but a way to do this within the web UI like you can for disk shares would be quite handy. Moving files from one share to another (unless SSH'd into the server) in most cases results in copying the file and deleting the original, but if everything were a single Samba share this could be avoided, since the client would see everything as one share and realize it can just move the file.
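     The manual export I'm referring to is something like this in the Samba extra configuration (share name and user are made up):

         [rootshare]
             path = /mnt/user
             browseable = yes
             writeable = yes
             valid users = dan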
  13. Unraid already has the ability to shell into a Docker container, but it would be quite nice if you could attach to a container in the same way using ttyd. I've tried running "docker attach my_container", but all I get is a blank prompt when it should show the initial Docker command. Maybe something could be done with screen or tmux, where containers are started within a session and the ttyd client just connects to that session (rough sketch below).
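     What I have in mind with tmux would be something along these lines; the entrypoint path, session name, and container name are all hypothetical:

         #!/bin/sh
         # hypothetical container entrypoint: run the real server inside a detached tmux session
         tmux new-session -d -s main /usr/local/bin/start-server.sh
         # keep PID 1 alive for as long as the session exists
         while tmux has-session -t main 2>/dev/null; do sleep 5; done

     The web UI "Console" button (or ttyd) could then run "docker exec -it my_container tmux attach -t main" instead of a plain shell.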
  14. So I updated Unraid and now I no longer have the AFP protocol. I see this has been removed in the latest version, but for what reason? I still have devices that use and need AFP.
  15. This is still needed for Time Machine on Macs running older operating systems! I have a couple of old Macs in the house that work best over AFP (Mac OS 9 / early OS X).
  16. KVM supports having sparse VM disks along with marking regions as sparse when the guest VM trims the virtual disk. Exposing this in the web UI would be quite useful for people storing VMs on an SSD.
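     For context, this is roughly what that looks like when done by hand today; the image path is just the usual Unraid domains location, and discard support depends on the disk bus (SCSI, or a recent virtio):

         # a qcow2 image is sparse by default: only written blocks consume space
         qemu-img create -f qcow2 /mnt/user/domains/myvm/vdisk1.qcow2 100G

         # with discard='unmap' set on the disk driver (via virsh edit):
         #   <driver name='qemu' type='qcow2' discard='unmap'/>
         # a trim inside the guest punches holes in the image again:
         fstrim -av   # run from inside the guest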
  17. It's possible to manually edit the XML file to use vmxnet as the virtual network adapter type, but if you make any changes with the GUI it reverts to the virtio adapter.
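     For reference, the manual edit amounts to changing the interface model, roughly like this (the bridge name will vary):

         <interface type='bridge'>
           <source bridge='br0'/>
           <model type='vmxnet3'/>
         </interface>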
  18. I have a Hamachi Docker container and it works when set to host network mode, but that exposes all of the network adapters. Is there a way to limit this to just br0? The problem I've been having is that when I leave the Hamachi container running, it will eventually bring my network to a crawl, and I'm not entirely sure why... I'm presuming there's a network loop somewhere.
  19. So I have an interesting situation that just started happening... If I have my VM set to br0, the majority of websites won't work, even though I can ping them by IP or DNS name just fine and the VM does get an IP from the router. But when I change to virbr0, everything works, albeit with an internal NAT IP from the interface. The interesting thing is that Google works regardless, but other sites have issues with br0 (like http://epg123.garyan2.net/).
  20. So... it has changed from hard locking to just crashing and rebooting. I finally have an FCP troubleshooting log, so I'll post it here. It has crashed a few times since my first post, some with an uptime of a few days and others after a month or more, but I've been forgetting to start troubleshooting mode. unraid-diagnostics-20161229-0359.zip
  21. Has anyone had issues with this not showing in the Home app? I've tried installing lukeadair's Docker image, but it doesn't show in any of my HomeKit apps. I've also tried the Docker image for Synology and it does the same thing... https://github.com/marcoraddatz/homebridge-docker
  22. The ability to execute a custom script for use as a notification agent would be a very welcome addition, even better if the different notification attributes were sent as separate command-line arguments.
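     Something like this is what I'm imagining. It's entirely hypothetical, since no such hook exists today; the argument order and the webhook URL are made up:

         #!/bin/bash
         # hypothetical custom notification agent: Unraid would pass the
         # notification fields as separate arguments
         EVENT="$1"
         SUBJECT="$2"
         DESCRIPTION="$3"
         IMPORTANCE="$4"

         # forward the notification wherever you like, here a placeholder webhook
         curl -fsS -X POST "https://example.com/notify" \
           --data-urlencode "title=${EVENT}: ${SUBJECT}" \
           --data-urlencode "message=${DESCRIPTION} (${IMPORTANCE})"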
  23. Oddly enough, I recently had a situation where my write speed was extremely slow (around 5 MB/s). I replaced an old hard drive and that seems to have sped it back up. Now my typical speeds are once again 70-80 MB/s over gigabit, with occasional drops to 50-60 MB/s. I'm using 1 parity + 4 data disks (no cache), and I'm using the reconstruct-write mode rather than the default read-modify-write (which is extremely slow for me). My drives are 3x ST4000VN000, 1x ST3000DM001, 1x ST2000DM001 running on the onboard Intel SATA controller.
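     For anyone wondering, reconstruct write ("turbo write") is toggled under Settings > Disk Settings; from memory it can also be flipped from the CLI with something like the following, so double-check the value before relying on it:

         # 1 = reconstruct write, 0 = read/modify/write (the default)
         mdcmd set md_write_method 1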
  24. Quote: "Personally, I would just leave it (or depending upon your case use one of the bottom holes in its place). The screw probably bottomed out (too long) and you kept on trying to tighten." Yeah, the case has no ability to use the bottom mounting holes... and the screw definitely didn't bottom out, since it's not even through the hole. That would have made extraction fairly easy, since I could have used needle-nose pliers to reverse it enough to get a little bit of the screw out and then continue from the other side of the screw hole... I guess I was just unlucky...