autumnwalker

Everything posted by autumnwalker

  1. I flipped hardlinks off a couple of weeks ago and it seems to have resolved my issues ... noting this is not a workable solution for others. At least we are getting somewhere with root cause.
  2. Turns out this feature is enabled! I could have sworn I disabled it. Once the parity check is finished I'll try disabling it and see if it solves the issue for me. I'm still a ways away from getting my homelab back up, so I won't be able to fully test until later.
  3. My server is currently offline so I will have to validate later, but I'm pretty sure I have this off on my box as well. I'll double check and confirm.
  4. So it has been close to one month now with cache disabled on the shares. Have not had a single instance of "stale file handle" across CIFS or NFS. Clearly there is a relation to cache / mover.
  5. Thanks! I see that now. I zoomed in on the "version too high" message I guess.
  6. Full output below, it's still saying 2.7.4.

     plugin: updating: nut.plg
     +==============================================================================
     | Skipping package nut-2.7.4.20181125-x86_64-1 (already installed)
     +==============================================================================
     plugin: skipping: nut-2.7.4.20171129-x86_64-1.txz - Unraid version too high, requires at most version 6.4.99
     +==============================================================================
     | Skipping package net-snmp-5.7.3-x86_64-4 (already installed)
     +==============================================================================
     plugin: downloading: https://raw.githubusercontent.com/dmacias72/NUT-unRAID/master/archive/nut-plugin-2020.03.17-x86_64-1.txz ... done
     plugin: downloading: https://raw.githubusercontent.com/dmacias72/NUT-unRAID/master/archive/nut-plugin-2020.03.17-x86_64-1.md5 ... done
     +==============================================================================
     | Upgrading nut-plugin-2019.02.03-x86_64-1 package using /boot/config/plugins/nut/nut-plugin-2020.03.17-x86_64-1.txz
     +==============================================================================
     Pre-installing package nut-plugin-2020.03.17-x86_64-1...
     Removing package: nut-plugin-2019.02.03-x86_64-1-upgraded-2020-04-13,11:58:45
     --> Deleting /usr/local/emhttp/plugins/nut/NUTsummary.page
     Verifying package nut-plugin-2020.03.17-x86_64-1.txz.
     Installing package nut-plugin-2020.03.17-x86_64-1.txz:
     PACKAGE DESCRIPTION:
     # NUT - Network UPS Tools (a collection of ups tools) unRAID Plugin
     #
     # The Network UPS Tools is a collection of programs which provide a
     # common interface for monitoring and administering UPS hardware. It
     # users a layered approach to connect all the components. Drivers are
     # provided for a wide assortment of equipment. The primary goal of
     # the NUT project is to provide reliable monitoring of UPS hardware
     # and ensure safe shutdowns of the systems which are connected.
     # This package includes the tools needed to monitor your UPS over the
     # web and it also includes the upsclient library.
     #
     # https://github.com/dmacias72/unRAID-plugins
     Executing install script for nut-plugin-2020.03.17-x86_64-1.txz.
     Package nut-plugin-2020.03.17-x86_64-1.txz installed.
     Package nut-plugin-2019.02.03-x86_64-1 upgraded with new package /boot/config/plugins/nut/nut-plugin-2020.03.17-x86_64-1.txz.
     stopping services...
     Writing nut config
     Updating permissions...
     Stopping the UPS services...
     Network UPS Tools - UPS driver controller 2.7.4.1
     checking network ups tools configuration...
     -----------------------------------------------------------
     nut has been installed.
     Copyright 2015, macester
     Copyright 2020, gfjardim
     Copyright 2015-2020, dmacias72
     Version: 2020.03.17
     -----------------------------------------------------------
     Updating Support Link
     plugin: updated

     AFAIK that's the latest version of NUT (outside of Git / dev).
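     For a quick sanity check on which NUT build is actually on the box, the Slackware package database can be listed; a minimal sketch, assuming nothing Unraid-specific beyond the standard package directory:

        # One file per installed package; the nut and nut-plugin entries show their versions
        ls /var/log/packages/ | grep -i nut

        # Most NUT daemons also report their version directly (I believe -V is supported)
        upsd -V

     From the log above, the 2.7.4.x banner appears to come from the bundled NUT package itself, while 2020.03.17 is the plugin wrapper version.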
  7. Just updated the plugin and noticed this in the log:

     plugin: updating: nut.plg
     +==============================================================================
     | Skipping package nut-2.7.4.20181125-x86_64-1 (already installed)
     +==============================================================================
     plugin: skipping: nut-2.7.4.20171129-x86_64-1.txz - Unraid version too high, requires at most version 6.4.99

     Specifically, "Unraid version too high". Thoughts on this?
  8. I'd like to see this feature as well. I know when setting up a Time Machine share via AFP previously I could set a max size. I just went to do it on a regular SMB share and realized you couldn't.
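     For reference, Samba's vfs_fruit module does appear to expose a per-share cap that macOS honors for Time Machine. A rough sketch of what the extra SMB config could look like (the share name, path, and 1T limit are placeholders, and I'm assuming /boot/config/smb-extra.conf / the "SMB Extras" box is where this would go on Unraid):

        # smb.conf fragment for the SMB Extras section (names and sizes are examples)
        [timemachine]
           path = /mnt/user/timemachine
           vfs objects = catia fruit streams_xattr
           fruit:time machine = yes
           fruit:time machine max size = 1T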
  9. Bump. Would really like to have these suppressed. I'm searching around online, but not seeing anything non-Unraid specific either.
  10. Reflecting further on this - I don't think the issue is actually with CIFS. I have the same behavior using NFS if cache is enabled on the share. So perhaps vers=1.0 does something with CIFS to suppress the problem, but it's not strictly a CIFS issue.
  11. I think this is the key part. @sjaak had suggested that earlier I believe. I have not validated this myself, but I have flipped netbios back on within Unraid so it is available.
  12. This is exactly what I am seeing. NFS / CIFS makes no difference. I disabled cache on the share and the problem has stopped. I think it's related to mover.
  13. Also curious about this. There must be a way to suppress it.
  14. Further info - when I disable "Enhanced OS X Interoperability" the folder browses fine. Any ideas?
  15. Has anyone else come across the issue where Finder in macOS cannot browse one folder within a share? I can browse all the other folders in the share, but one in particular ("Documents") causes Finder to hang and eventually requires me to dismount the share, force quit Finder, or reboot the system. If I browse via Terminal it works fine. Windows clients work fine, SSH works fine, the Unraid GUI works fine. The permissions in Terminal on macOS look right relative to the other folders / files that I can browse. This is happening on two of my Macs, both running current Mojave. Running 6.8.2 with "Enhanced OS X Interoperability" enabled on SMB.
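     A couple of things worth checking from the Mac side while it is hanging; a small sketch, with /Volumes/ShareName/Documents standing in for the real mount:

        # Show the negotiated SMB version and capabilities for each mounted share
        smbutil statshares -a

        # List the problem folder with extended attributes; vfs_fruit maps Finder
        # metadata into xattrs / AppleDouble files, which is where I'd expect a
        # bad entry to show up
        ls -la@ /Volumes/ShareName/Documents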
  16. Not sure if this is a relevant data point (will continue to monitor). I just turned cache off for the biggest Proxmox share (the backup share) and RAM immediately dropped from 86% used to 24% used.
  17. I saw that - it's not clear to me if that "enables" SMB v1 or if it "forces" SMB v1.
  18. How are you forcing the protocol to v1?
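     On the Linux client side I'd expect "forcing" to mean pinning the dialect with the vers= mount option; a sketch of what that looks like (server, share, and credentials file are placeholders):

        # one-off mount pinned to SMB 1.0 (2.0, 2.1, 3.0, 3.1.1 are also accepted)
        mount -t cifs -o vers=1.0,credentials=/root/.smbcred //tower/share /mnt/share

        # equivalent fstab entry
        //tower/share  /mnt/share  cifs  vers=1.0,credentials=/root/.smbcred,_netdev  0  0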
  19. Any luck @darkside40? I've not flipped on the verbose logging yet as I'm being lazy and not wanting to stop the array and take down my dependent servers.
  20. Proxmox support is indicating this is likely a caching issue with Unraid: "This sounds like cache problems on the Unraid Box. The point is you load an image and not small files as you do with Windows file share." The two connections from Proxmox to Unraid are for 1) loading Linux ISOs when creating VMs, and 2) nightly backups / snapshots of all running VMs and containers. I suspect the second item is what is causing the runaway memory. Is there a way to have Unraid "clear cache" on a share?
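     For my own notes, there are two things "clear cache" could mean here, sketched below. I believe the mover script lives at /usr/local/sbin/mover on Unraid, and dropping the page cache only frees reclaimable memory, so treat both as assumptions rather than a fix:

        # Flush cached share data from the cache pool to the array by running the mover manually
        /usr/local/sbin/mover

        # Drop the kernel page cache / dentries / inodes (transient, but shows how much is reclaimable)
        sync && echo 3 > /proc/sys/vm/drop_caches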
  21. So what's my best bet? It seems to occur on any of my servers (Linux based) and is not specific to Proxmox (as my OOM thread had thought). Bump up verbosity on Samba and see what hits?
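     Noting for later: the verbosity can apparently be raised on the fly without restarting Samba, so the array wouldn't have to come down just to capture logs. A sketch (level 3 is arbitrary; 10 is very noisy):

        # Raise the debug level of the running smbd processes
        smbcontrol smbd debug 3

        # Confirm the new level
        smbcontrol smbd debuglevel

        # To make it persistent, "log level = 3" can go in the [global] section of the SMB extras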
  22. I see some out of memory errors where smbd is killed. Perhaps related? Dec 21 01:20:38 nas01 kernel: smbd invoked oom-killer: gfp_mask=0x6280ca(GFP_HIGHUSER_MOVABLE|__GFP_ZERO), nodemask=(null), order=0, oom_score_adj=0 Dec 21 01:20:38 nas01 kernel: smbd cpuset=/ mems_allowed=0 Dec 21 01:20:38 nas01 kernel: CPU: 3 PID: 9871 Comm: smbd Not tainted 4.19.88-Unraid #1 Dec 21 01:20:38 nas01 kernel: Hardware name: Gigabyte Technology Co., Ltd. GA-MA785GT-UD3H/GA-MA785GT-UD3H, BIOS F8 05/25/2010 Dec 21 01:20:38 nas01 kernel: Call Trace: Dec 21 01:20:38 nas01 kernel: dump_stack+0x67/0x83 Dec 21 01:20:38 nas01 kernel: dump_header+0x66/0x289 Dec 21 01:20:38 nas01 kernel: oom_kill_process+0x9d/0x220 Dec 21 01:20:38 nas01 kernel: ? oom_badness+0x20/0x117 Dec 21 01:20:38 nas01 kernel: out_of_memory+0x3b7/0x3ea Dec 21 01:20:38 nas01 kernel: __alloc_pages_nodemask+0x920/0xae1 Dec 21 01:20:38 nas01 kernel: alloc_pages_vma+0x13c/0x163 Dec 21 01:20:38 nas01 kernel: __handle_mm_fault+0xa79/0x11b7 Dec 21 01:20:38 nas01 kernel: handle_mm_fault+0x189/0x1e3 Dec 21 01:20:38 nas01 kernel: __do_page_fault+0x267/0x3ff Dec 21 01:20:38 nas01 kernel: page_fault+0x1e/0x30 Dec 21 01:20:38 nas01 kernel: RIP: 0010:copy_user_generic_string+0x2c/0x40 Dec 21 01:20:38 nas01 kernel: Code: 00 83 fa 08 72 27 89 f9 83 e1 07 74 15 83 e9 08 f7 d9 29 ca 8a 06 88 07 48 ff c6 48 ff c7 ff c9 75 f2 89 d1 c1 e9 03 83 e2 07 <f3> 48 a5 89 d1 f3 a4 31 c0 0f 1f 00 c3 0f 1f 80 00 00 00 00 0f 1f Dec 21 01:20:38 nas01 kernel: RSP: 0018:ffffc90004ffbb78 EFLAGS: 00010202 Dec 21 01:20:38 nas01 kernel: RAX: 000055cfb82ea363 RBX: ffff8883bdc98882 RCX: 0000000000000073 Dec 21 01:20:38 nas01 kernel: RDX: 0000000000000004 RSI: ffff8883bdc98a8e RDI: 000055cfb82ea000 Dec 21 01:20:38 nas01 kernel: RBP: 0000000000000000 R08: 0000000000078320 R09: 00000000000005a8 Dec 21 01:20:38 nas01 kernel: R10: 00000000000005a8 R11: ffff88841dc20000 R12: 00000000000005a8 Dec 21 01:20:38 nas01 kernel: R13: ffffc90004ffbd18 R14: 00000000000005a8 R15: 0000000000000882 Dec 21 01:20:38 nas01 kernel: copyout+0x22/0x27 Dec 21 01:20:38 nas01 kernel: copy_page_to_iter+0x157/0x2ab Dec 21 01:20:38 nas01 kernel: skb_copy_datagram_iter+0xe8/0x19d Dec 21 01:20:38 nas01 kernel: tcp_recvmsg+0x7f8/0x9d5 Dec 21 01:20:38 nas01 kernel: inet_recvmsg+0x95/0xbb Dec 21 01:20:38 nas01 kernel: sock_read_iter+0x74/0xa8 Dec 21 01:20:38 nas01 kernel: do_iter_readv_writev+0x110/0x146 Dec 21 01:20:38 nas01 kernel: do_iter_read+0x87/0x15c Dec 21 01:20:38 nas01 kernel: vfs_readv+0x6b/0xa3 Dec 21 01:20:38 nas01 kernel: ? 
handle_mm_fault+0x189/0x1e3 Dec 21 01:20:38 nas01 kernel: do_readv+0x6b/0xe2 Dec 21 01:20:38 nas01 kernel: do_syscall_64+0x57/0xf2 Dec 21 01:20:38 nas01 kernel: entry_SYSCALL_64_after_hwframe+0x44/0xa9 Dec 21 01:20:38 nas01 kernel: RIP: 0033:0x14decd38452d Dec 21 01:20:38 nas01 kernel: Code: 28 89 54 24 1c 48 89 74 24 10 89 7c 24 08 e8 fa 39 f8 ff 8b 54 24 1c 48 8b 74 24 10 41 89 c0 8b 7c 24 08 b8 13 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 2f 44 89 c7 48 89 44 24 08 e8 2e 3a f8 ff 48 Dec 21 01:20:38 nas01 kernel: RSP: 002b:00007ffd0ce6cef0 EFLAGS: 00000293 ORIG_RAX: 0000000000000013 Dec 21 01:20:38 nas01 kernel: RAX: ffffffffffffffda RBX: 0000000000000001 RCX: 000014decd38452d Dec 21 01:20:38 nas01 kernel: RDX: 0000000000000001 RSI: 000055cdccd4dc18 RDI: 0000000000000024 Dec 21 01:20:38 nas01 kernel: RBP: 00000000ffffffff R08: 0000000000000000 R09: 0000000000000000 Dec 21 01:20:38 nas01 kernel: R10: 0000000000005a92 R11: 0000000000000293 R12: 000055cdccd4dc08 Dec 21 01:20:38 nas01 kernel: R13: 000055cdccd4dc10 R14: 000055cdccd4dc18 R15: 000055cdccd4db50 Dec 21 01:20:38 nas01 kernel: Mem-Info: Dec 21 01:20:38 nas01 kernel: active_anon:3801310 inactive_anon:33851 isolated_anon:0 Dec 21 01:20:38 nas01 kernel: active_file:4035 inactive_file:6040 isolated_file:1 Dec 21 01:20:38 nas01 kernel: unevictable:0 dirty:766 writeback:3727 unstable:0 Dec 21 01:20:38 nas01 kernel: slab_reclaimable:19669 slab_unreclaimable:19358 Dec 21 01:20:38 nas01 kernel: mapped:44183 shmem:277294 pagetables:10253 bounce:0 Dec 21 01:20:38 nas01 kernel: free:33113 free_pcp:0 free_cma:0 Dec 21 01:20:38 nas01 kernel: Node 0 active_anon:15205240kB inactive_anon:135404kB active_file:16140kB inactive_file:24160kB unevictable:0kB isolated(anon):0kB isolated(file):4kB mapped:176732kB dirty:3064kB writeback:14908kB shmem:1109176kB shmem_thp: 0kB shmem_pmdmapped: 0kB anon_thp: 8570880kB writeback_tmp:0kB unstable:0kB all_unreclaimable? 
no Dec 21 01:20:38 nas01 kernel: Node 0 DMA free:15880kB min:68kB low:84kB high:100kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15964kB managed:15880kB mlocked:0kB kernel_stack:0kB pagetables:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB Dec 21 01:20:38 nas01 kernel: lowmem_reserve[]: 0 2458 15264 15264 Dec 21 01:20:38 nas01 kernel: Node 0 DMA32 free:61644kB min:10872kB low:13588kB high:16304kB active_anon:2706500kB inactive_anon:6040kB active_file:200kB inactive_file:160kB unevictable:0kB writepending:0kB present:2863684kB managed:2780924kB mlocked:0kB kernel_stack:128kB pagetables:3576kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB Dec 21 01:20:38 nas01 kernel: lowmem_reserve[]: 0 0 12806 12806 Dec 21 01:20:38 nas01 kernel: Node 0 Normal free:54928kB min:56640kB low:70800kB high:84960kB active_anon:12498040kB inactive_anon:129364kB active_file:16780kB inactive_file:23748kB unevictable:0kB writepending:18504kB present:13369344kB managed:13113892kB mlocked:0kB kernel_stack:13760kB pagetables:37436kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB Dec 21 01:20:38 nas01 kernel: lowmem_reserve[]: 0 0 0 0 Dec 21 01:20:38 nas01 kernel: Node 0 DMA: 0*4kB 1*8kB (U) 0*16kB 0*32kB 2*64kB (U) 1*128kB (U) 1*256kB (U) 0*512kB 1*1024kB (U) 1*2048kB (M) 3*4096kB (M) = 15880kB Dec 21 01:20:38 nas01 kernel: Node 0 DMA32: 495*4kB (UME) 429*8kB (UME) 407*16kB (UME) 323*32kB (UME) 213*64kB (UME) 108*128kB (UME) 35*256kB (UME) 5*512kB (UME) 1*1024kB (E) 0*2048kB 0*4096kB = 62260kB Dec 21 01:20:38 nas01 kernel: Node 0 Normal: 429*4kB (UME) 140*8kB (UME) 2623*16kB (UME) 293*32kB (UME) 4*64kB (M) 3*128kB (M) 3*256kB (M) 0*512kB 0*1024kB 0*2048kB 0*4096kB = 55588kB Dec 21 01:20:38 nas01 kernel: Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB Dec 21 01:20:38 nas01 kernel: Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB Dec 21 01:20:38 nas01 kernel: 287533 total pagecache pages Dec 21 01:20:38 nas01 kernel: 0 pages in swap cache Dec 21 01:20:38 nas01 kernel: Swap cache stats: add 0, delete 0, find 0/0 Dec 21 01:20:38 nas01 kernel: Free swap = 0kB Dec 21 01:20:38 nas01 kernel: Total swap = 0kB Dec 21 01:20:38 nas01 kernel: 4062248 pages RAM Dec 21 01:20:38 nas01 kernel: 0 pages HighMem/MovableOnly Dec 21 01:20:38 nas01 kernel: 84574 pages reserved Dec 21 01:20:38 nas01 kernel: 0 pages cma reserved Dec 21 01:20:38 nas01 kernel: Tasks state (memory values in pages): Dec 21 01:20:38 nas01 kernel: [ pid ] uid tgid total_vm rss pgtables_bytes swapents oom_score_adj name Dec 21 01:20:38 nas01 kernel: [ 1160] 0 1160 4165 882 69632 0 -1000 udevd Dec 21 01:20:38 nas01 kernel: [ 1402] 0 1402 53879 650 69632 0 0 rsyslogd Dec 21 01:20:38 nas01 kernel: [ 1487] 0 1487 2090 1137 49152 0 0 haveged Dec 21 01:20:38 nas01 kernel: [ 1541] 81 1541 954 525 45056 0 0 dbus-daemon Dec 21 01:20:38 nas01 kernel: [ 1550] 32 1550 835 500 45056 0 0 rpcbind Dec 21 01:20:38 nas01 kernel: [ 1555] 32 1555 1818 1433 57344 0 0 rpc.statd Dec 21 01:20:38 nas01 kernel: [ 1585] 44 1585 19330 1079 65536 0 0 ntpd Dec 21 01:20:38 nas01 kernel: [ 1592] 0 1592 618 23 40960 0 0 acpid Dec 21 01:20:38 nas01 kernel: [ 1610] 0 1610 633 423 49152 0 0 crond Dec 21 01:20:38 nas01 kernel: [ 1614] 0 1614 630 360 40960 0 0 atd Dec 21 01:20:38 nas01 kernel: [ 4658] 0 4658 25242 6078 176128 0 0 php Dec 21 01:20:38 nas01 kernel: [ 5680] 0 5680 643 425 40960 0 0 agetty Dec 21 01:20:38 nas01 kernel: [ 5681] 0 5681 643 432 
40960 0 0 agetty Dec 21 01:20:38 nas01 kernel: [ 5682] 0 5682 643 435 40960 0 0 agetty Dec 21 01:20:38 nas01 kernel: [ 5683] 0 5683 643 402 40960 0 0 agetty Dec 21 01:20:38 nas01 kernel: [ 5684] 0 5684 643 425 40960 0 0 agetty Dec 21 01:20:38 nas01 kernel: [ 5685] 0 5685 643 427 40960 0 0 agetty Dec 21 01:20:38 nas01 kernel: [ 5699] 0 5699 6314 2106 90112 0 0 slim Dec 21 01:20:38 nas01 kernel: [ 5702] 0 5702 33936 5929 184320 0 0 Xorg Dec 21 01:20:38 nas01 kernel: [ 6410] 0 6410 87285 999 102400 0 0 emhttpd Dec 21 01:20:38 nas01 kernel: [ 7206] 61 7206 1629 835 49152 0 0 avahi-daemon Dec 21 01:20:38 nas01 kernel: [ 7208] 61 7208 1533 69 49152 0 0 avahi-daemon Dec 21 01:20:38 nas01 kernel: [ 7217] 0 7217 1179 27 49152 0 0 avahi-dnsconfd Dec 21 01:20:38 nas01 kernel: [ 7529] 0 7529 24519 2787 172032 0 0 php-fpm Dec 21 01:20:38 nas01 kernel: [ 7551] 0 7551 3307 1052 61440 0 0 ttyd Dec 21 01:20:38 nas01 kernel: [ 7554] 0 7554 37347 2473 73728 0 0 nginx Dec 21 01:20:38 nas01 kernel: [ 9745] 0 9745 946 718 40960 0 0 diskload Dec 21 01:20:38 nas01 kernel: [ 10301] 0 10301 36250 192 65536 0 0 shfs Dec 21 01:20:38 nas01 kernel: [ 10314] 0 10314 224282 10428 323584 0 0 shfs Dec 21 01:20:38 nas01 kernel: [ 10503] 0 10503 13041 3743 135168 0 0 smbd Dec 21 01:20:38 nas01 kernel: [ 10506] 0 10506 12582 2009 126976 0 0 smbd-notifyd Dec 21 01:20:38 nas01 kernel: [ 10507] 0 10507 12584 1864 126976 0 0 cleanupd Dec 21 01:20:38 nas01 kernel: [ 10508] 0 10508 8835 1801 102400 0 0 nmbd Dec 21 01:20:38 nas01 kernel: [ 10515] 0 10515 628 26 40960 0 0 wsdd Dec 21 01:20:38 nas01 kernel: [ 10518] 0 10518 16812 6961 167936 0 0 winbindd Dec 21 01:20:38 nas01 kernel: [ 10521] 0 10521 21174 9895 204800 0 0 winbindd Dec 21 01:20:38 nas01 kernel: [ 10642] 0 10642 213521 18304 339968 0 -500 dockerd Dec 21 01:20:38 nas01 kernel: [ 10657] 0 10657 157950 8407 258048 0 -500 containerd Dec 21 01:20:38 nas01 kernel: [ 11103] 0 11103 27660 2468 90112 0 0 unbalance Dec 21 01:20:38 nas01 kernel: [ 11104] 0 11104 17000 4670 167936 0 0 winbindd Dec 21 01:20:38 nas01 kernel: [ 11176] 0 11176 27276 2703 81920 0 -999 containerd-shim Dec 21 01:20:38 nas01 kernel: [ 11193] 0 11193 49 1 28672 0 0 s6-svscan Dec 21 01:20:38 nas01 kernel: [ 11292] 0 11292 49 1 28672 0 0 s6-supervise Dec 21 01:20:38 nas01 kernel: [ 11481] 0 11481 25831 942 65536 0 -500 docker-proxy Dec 21 01:20:38 nas01 kernel: [ 11493] 0 11493 25831 943 65536 0 -500 docker-proxy Dec 21 01:20:38 nas01 kernel: [ 11516] 0 11516 49 1 28672 0 0 s6-supervise Dec 21 01:20:38 nas01 kernel: [ 11518] 99 11518 1126 18 49152 0 0 sh Dec 21 01:20:38 nas01 kernel: [ 11525] 99 11525 599590 185741 2293760 0 0 Plex Media Serv Dec 21 01:20:38 nas01 kernel: [ 11526] 0 11526 26924 2065 77824 0 -999 containerd-shim Dec 21 01:20:38 nas01 kernel: [ 11554] 0 11554 49 4 28672 0 0 s6-svscan Dec 21 01:20:38 nas01 kernel: [ 11655] 0 11655 49 1 28672 0 0 s6-supervise Dec 21 01:20:38 nas01 kernel: [ 12173] 99 12173 426917 39906 749568 0 0 Plex Script Hos Dec 21 01:20:38 nas01 kernel: [ 12460] 0 12460 49 1 28672 0 0 s6-supervise Dec 21 01:20:38 nas01 kernel: [ 12462] 0 12462 49 1 28672 0 0 s6-supervise Dec 21 01:20:38 nas01 kernel: [ 12463] 0 12463 49 1 28672 0 0 s6-supervise Dec 21 01:20:38 nas01 kernel: [ 12464] 0 12464 49 1 28672 0 0 s6-supervise Dec 21 01:20:38 nas01 kernel: [ 12465] 0 12465 49 1 28672 0 0 s6-supervise Dec 21 01:20:38 nas01 kernel: [ 12468] 0 12468 49 1 28672 0 0 s6-supervise Dec 21 01:20:38 nas01 kernel: [ 12469] 0 12469 49 1 28672 0 0 s6-supervise Dec 21 01:20:38 nas01 kernel: [ 
12470] 0 12470 49 1 28672 0 0 s6-supervise Dec 21 01:20:38 nas01 kernel: [ 12684] 2 12684 124 1 28672 0 0 s6-fdholderd Dec 21 01:20:38 nas01 kernel: [ 12695] 0 12695 3740 168 73728 0 0 nginx Dec 21 01:20:38 nas01 kernel: [ 12724] 99 12724 2046120 225275 2670592 0 0 CrashPlanServic Dec 21 01:20:38 nas01 kernel: [ 12743] 122 12743 3852 283 61440 0 0 nginx Dec 21 01:20:38 nas01 kernel: [ 12744] 122 12744 3852 283 61440 0 0 nginx Dec 21 01:20:38 nas01 kernel: [ 12768] 0 12768 49 1 28672 0 0 s6-supervise Dec 21 01:20:38 nas01 kernel: [ 12991] 99 12991 93109 444 258048 0 0 Plex Tuner Serv Dec 21 01:20:38 nas01 kernel: [ 13501] 0 13501 20696 3178 208896 0 0 Xvfb Dec 21 01:20:38 nas01 kernel: [ 13620] 0 13620 51 9 28672 0 0 forstdin Dec 21 01:20:38 nas01 kernel: [ 13637] 0 13637 49 1 28672 0 0 forstdin Dec 21 01:20:38 nas01 kernel: [ 13652] 99 13652 29655 632 274432 0 0 openbox Dec 21 01:20:38 nas01 kernel: [ 13670] 0 13670 397 72 49152 0 0 tailstatusfile Dec 21 01:20:38 nas01 kernel: [ 13689] 0 13689 379 1 49152 0 0 tail Dec 21 01:20:38 nas01 kernel: [ 15628] 0 15628 14539 1147 147456 0 0 x11vnc Dec 21 01:20:38 nas01 kernel: [ 15645] 99 15645 396 1 45056 0 0 sh Dec 21 01:20:38 nas01 kernel: [ 15667] 99 15667 154640 8768 1363968 0 0 crashplan Dec 21 01:20:38 nas01 kernel: [ 16596] 99 16596 86805 1636 606208 0 0 crashplan Dec 21 01:20:38 nas01 kernel: [ 17066] 99 17066 93291 6042 712704 0 0 crashplan Dec 21 01:20:38 nas01 kernel: [ 18807] 99 18807 165479 19765 3506176 0 0 crashplan Dec 21 01:20:38 nas01 kernel: [ 15431] 0 15431 2280 660 49152 0 -1000 sshd Dec 21 01:20:38 nas01 kernel: [ 15480] 0 15480 631 431 45056 0 0 inetd Dec 21 01:20:38 nas01 kernel: [ 15505] 0 15505 37473 2155 77824 0 0 nginx Dec 21 01:20:38 nas01 kernel: [ 9770] 1008 9770 1394427 820800 7274496 0 0 smbd Dec 21 01:20:38 nas01 kernel: [ 9871] 1008 9871 2592103 2017939 16863232 0 0 smbd Dec 21 01:20:38 nas01 kernel: [ 8628] 1000 8628 696554 164021 1703936 0 0 smbd Dec 21 01:20:38 nas01 kernel: [ 17343] 1006 17343 21152 5602 200704 0 0 smbd Dec 21 01:20:38 nas01 kernel: [ 29268] 0 29268 397 1 45056 0 0 sh Dec 21 01:20:38 nas01 kernel: [ 29294] 0 29294 28603 830 262144 0 0 yad Dec 21 01:20:38 nas01 kernel: [ 25914] 0 25914 16812 4308 167936 0 0 winbindd Dec 21 01:20:38 nas01 kernel: [ 12875] 99 12875 239478 11712 458752 0 0 Plex Script Hos Dec 21 01:20:38 nas01 kernel: [ 13447] 99 13447 238667 11266 479232 0 0 Plex Script Hos Dec 21 01:20:38 nas01 kernel: [ 13950] 99 13950 237839 9628 446464 0 0 Plex Script Hos Dec 21 01:20:38 nas01 kernel: [ 17864] 0 17864 397 30 49152 0 0 tailstatusfile Dec 21 01:20:38 nas01 kernel: [ 17911] 0 17911 379 1 45056 0 0 stat Dec 21 01:20:38 nas01 kernel: [ 17931] 0 17931 114 1 32768 0 0 sh Dec 21 01:20:38 nas01 kernel: [ 17982] 0 17982 379 1 45056 0 0 md5sum Dec 21 01:20:38 nas01 kernel: [ 18056] 0 18056 379 1 45056 0 0 cut Dec 21 01:20:38 nas01 kernel: [ 18311] 0 18311 612 173 40960 0 0 sleep Dec 21 01:20:38 nas01 kernel: [ 18396] 0 18396 961 691 45056 0 0 sh Dec 21 01:20:38 nas01 kernel: [ 18397] 0 18397 661 203 45056 0 0 timeout Dec 21 01:20:38 nas01 kernel: [ 18398] 0 18398 1524 670 45056 0 0 lsblk Dec 21 01:20:38 nas01 kernel: Out of memory: Kill process 9871 (smbd) score 508 or sacrifice child Dec 21 01:20:38 nas01 kernel: Killed process 9871 (smbd) total-vm:10368412kB, anon-rss:8054248kB, file-rss:4kB, shmem-rss:17504kB Dec 21 01:20:38 nas01 kernel: oom_reaper: reaped process 9871 (smbd), now anon-rss:0kB, file-rss:0kB, shmem-rss:6380kB
  23. Not sure where to start with this. Running 6.8.0 and noticing "stale file handle" errors on my Linux boxes which have mounts to Unraid via SMB / CIFS. The stale file handles happen over time; it's not clear what triggers them. I see a few threads re: NFS and issues with mover, but these errors are on SMB shares. Multiple shares, multiple boxes (Ubuntu, Debian). Each share has Cache set to "yes". Each Linux box has the Unraid shares mounted in fstab. If I notice the stale file handle errors I can "umount /path/to/share" and then "mount -a" to restore the connection, but it's only a matter of time before they break again. Thoughts?
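     Until the root cause is sorted, the umount / mount -a workaround can be scripted as a crude watchdog on each client; a sketch with a made-up mount point and log path:

        #!/bin/bash
        # Remount the Unraid share if the mount has gone stale (hypothetical path).
        MNT=/mnt/unraid/share

        if ! ls "$MNT" >/dev/null 2>&1; then
            echo "$(date): $MNT looks stale, remounting" >> /var/log/remount-share.log
            umount -l "$MNT"    # lazy unmount in case something still holds it open
            mount "$MNT"        # re-mount using the existing fstab entry
        fi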
  24. I love knowing my data is protected. I would like to see options for multiple servers and high availability, starting with VMs and containers. I'd love to replace my Proxmox boxes with Unraid.
  25. Thanks @BRiT - I noticed that as well. "Proxmox" is a service account for my Proxmox hosts to conduct backups and access installer files (i.e. Linux ISOs). Any idea why this would be causing smbd to run away with memory? Is there something I can tune / fix / limit in Unraid for this?
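     To figure out which connection is the one ballooning, smbd memory can be watched per process and matched back to the client; a sketch using standard tools:

        # RSS (KiB) of every smbd, largest first
        ps -C smbd -o pid,rss,etime,args --sort=-rss

        # Map those PIDs to connected users and client machines
        smbstatus -p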