zoggy

Members
  • Posts: 695
  • Joined
  • Last visited
  • Days Won: 1

zoggy last won the day on September 25 2022

zoggy had the most liked content!

Converted
  • Gender: Undisclosed

zoggy's Achievements

Enthusiast (6/14)

Reputation: 36

  1. 6.12.7rc2, just to note this bug is still outstanding:
  2. unraid 6.12.6 - after updating locations to 2024.02.10-o24, I now have a drive that is flashing, which stops if I uncheck 'flash warning', but it's not clear what warning it's trying to show. On recheck and hover it says: spun disk up; now instead of N/A it says 0 there. I'm guessing the warnings come from the 'command timeout' value: is the plugin just not happy with how Seagates report it now? The 1 1 1 are three different buckets, so I would imagine you would take the max value of all three buckets (in this case 1) and compare it to the warning threshold; 1 is less than 5, so it shouldn't be warning (see the first sketch after this list). Docs on the value and how Seagate encodes it: https://t1.daumcdn.net/brunch/service/user/axm/file/zRYOdwPu3OMoKYmBOby1fEEQEbU.pdf
  3. Just FYI, someone reached out to sab about your container with unrar being broken, as it just wouldn't extract files during direct unpack or otherwise:
     2024-01-25 14:51:03,438::DEBUG::[directunpacker:347] DirectUnpack Unrar output: UNRAR 7.00 beta 3 freeware Copyright (c) 1993-2023 Alexander Roshal
     ...
     2024-01-25 14:51:31,550::DEBUG::[newsunpack:860] UNRAR output: UNRAR 7.00 beta 3 freeware Copyright (c) 1993-2023 Alexander Roshal
     2024-01-25 14:51:31,550::INFO::[newsunpack:863] Unpacked 0 files/folders in 0 seconds
     I had them switch to the linuxserver docker to test and everything worked fine. I sent them over to your side to share relevant logs and info (dunno if they have done that just yet) - just sharing here in case they never make it. BTW, I personally run the unrar 7 betas without any issues.
  4. After updating to the latest 13.x+ you probably see the app 500s when loading. This is due to a change: go add the variable APP_KEY and set it using the following instructions: https://docs.speedtest-tracker.dev/faqs#i-get-a-warning-on-container-start-up-that-the-app_key-is-missing and then it will load just fine again (there is a small key-generation sketch after this list).
  5. Looking at logs, I see smbd cored a few days ago (it then auto-recovered). I never noticed anything on my side; just noting the logs here in case others have seen it:
     Aug 19 20:30:27 husky smbd[15075]: [2023/08/19 20:30:27.828234, 0] ../../source3/smbd/close.c:1397(close_directory)
     Aug 19 20:30:27 husky smbd[15075]: close_directory: Could not get share mode lock for TV/Foundation 2021
     Aug 19 20:30:27 husky smbd[15075]: [2023/08/19 20:30:27.828285, 0] ../../source3/smbd/fd_handle.c:39(fd_handle_destructor)
     Aug 19 20:30:27 husky smbd[15075]: PANIC: assert failed at ../../source3/smbd/fd_handle.c(39): (fh->fd == -1) || (fh->fd == AT_FDCWD)
     Aug 19 20:30:27 husky smbd[15075]: [2023/08/19 20:30:27.828296, 0] ../../lib/util/fault.c:173(smb_panic_log)
     Aug 19 20:30:27 husky smbd[15075]: ===============================================================
     Aug 19 20:30:27 husky smbd[15075]: [2023/08/19 20:30:27.828315, 0] ../../lib/util/fault.c:174(smb_panic_log)
     Aug 19 20:30:27 husky smbd[15075]: INTERNAL ERROR: assert failed: (fh->fd == -1) || (fh->fd == AT_FDCWD) in pid 15075 (4.17.7)
     Aug 19 20:30:27 husky smbd[15075]: [2023/08/19 20:30:27.828324, 0] ../../lib/util/fault.c:178(smb_panic_log)
     Aug 19 20:30:27 husky smbd[15075]: If you are running a recent Samba version, and if you think this problem is not yet fixed in the latest versions, please consider reporting this bug, see https://wiki.samba.org/index.php/Bug_Reporting
     Aug 19 20:30:27 husky smbd[15075]: [2023/08/19 20:30:27.828333, 0] ../../lib/util/fault.c:183(smb_panic_log)
     Aug 19 20:30:27 husky smbd[15075]: ===============================================================
     Aug 19 20:30:27 husky smbd[15075]: [2023/08/19 20:30:27.828348, 0] ../../lib/util/fault.c:184(smb_panic_log)
     Aug 19 20:30:27 husky smbd[15075]: PANIC (pid 15075): assert failed: (fh->fd == -1) || (fh->fd == AT_FDCWD) in 4.17.7
     Aug 19 20:30:27 husky smbd[15075]: [2023/08/19 20:30:27.828708, 0] ../../lib/util/fault.c:292(log_stack_trace)
     Aug 19 20:30:27 husky smbd[15075]: BACKTRACE: 32 stack frames:
     Aug 19 20:30:27 husky smbd[15075]: #0 /usr/lib64/libgenrand-samba4.so(log_stack_trace+0x2e) [0x15498340e64e]
     Aug 19 20:30:27 husky smbd[15075]: #1 /usr/lib64/libgenrand-samba4.so(smb_panic+0x9) [0x15498340e8a9]
     Aug 19 20:30:27 husky smbd[15075]: #2 /usr/lib64/libsmbd-base-samba4.so(+0x4d10b) [0x1549837f010b]
     Aug 19 20:30:27 husky smbd[15075]: #3 /usr/lib64/libtalloc.so.2(+0x44df) [0x1549833bd4df]
     Aug 19 20:30:27 husky smbd[15075]: #4 /usr/lib64/libsmbd-base-samba4.so(file_free+0xd6) [0x1549837fd276]
     Aug 19 20:30:27 husky smbd[15075]: #5 /usr/lib64/libsmbd-base-samba4.so(close_file_free+0x29) [0x15498382da99]
     Aug 19 20:30:27 husky smbd[15075]: #6 /usr/lib64/libsmbd-base-samba4.so(+0x5cfd6) [0x1549837fffd6]
     Aug 19 20:30:27 husky smbd[15075]: #7 /usr/lib64/libsmbd-base-samba4.so(+0x5d122) [0x154983800122]
     Aug 19 20:30:27 husky smbd[15075]: #8 /usr/lib64/libsmbd-base-samba4.so(files_forall+0x19) [0x1549837fc0a9]
     Aug 19 20:30:27 husky smbd[15075]: #9 /usr/lib64/libsmbd-base-samba4.so(file_close_conn+0x44) [0x1549837fc114]
     Aug 19 20:30:27 husky smbd[15075]: #10 /usr/lib64/libsmbd-base-samba4.so(close_cnum+0x67) [0x154983848d27]
     Aug 19 20:30:27 husky smbd[15075]: #11 /usr/lib64/libsmbd-base-samba4.so(smbXsrv_tcon_disconnect+0x4e) [0x15498387c00e]
     Aug 19 20:30:27 husky smbd[15075]: #12 /usr/lib64/libsmbd-base-samba4.so(+0xbbc02) [0x15498385ec02]
     Aug 19 20:30:27 husky smbd[15075]: #13 /usr/lib64/libtevent.so.0(tevent_common_invoke_immediate_handler+0x17a) [0x1549833d0e2a]
     Aug 19 20:30:27 husky smbd[15075]: #14 /usr/lib64/libtevent.so.0(tevent_common_loop_immediate+0x16) [0x1549833d0e46]
     Aug 19 20:30:27 husky smbd[15075]: #15 /usr/lib64/libtevent.so.0(+0xebfb) [0x1549833d6bfb]
     Aug 19 20:30:27 husky smbd[15075]: #16 /usr/lib64/libtevent.so.0(+0xcef7) [0x1549833d4ef7]
     Aug 19 20:30:27 husky smbd[15075]: #17 /usr/lib64/libtevent.so.0(_tevent_loop_once+0x91) [0x1549833cfba1]
     Aug 19 20:30:27 husky smbd[15075]: #18 /usr/lib64/libtevent.so.0(tevent_common_loop_wait+0x1b) [0x1549833cfe7b]
     Aug 19 20:30:27 husky smbd[15075]: #19 /usr/lib64/libtevent.so.0(+0xce97) [0x1549833d4e97]
     Aug 19 20:30:27 husky smbd[15075]: #20 /usr/lib64/libsmbd-base-samba4.so(smbd_process+0x817) [0x154983846a37]
     Aug 19 20:30:27 husky smbd[15075]: #21 /usr/sbin/smbd(+0xb090) [0x5654df159090]
     Aug 19 20:30:27 husky smbd[15075]: #22 /usr/lib64/libtevent.so.0(tevent_common_invoke_fd_handler+0x91) [0x1549833d08c1]
     Aug 19 20:30:27 husky smbd[15075]: #23 /usr/lib64/libtevent.so.0(+0xee07) [0x1549833d6e07]
     Aug 19 20:30:27 husky smbd[15075]: #24 /usr/lib64/libtevent.so.0(+0xcef7) [0x1549833d4ef7]
     Aug 19 20:30:27 husky smbd[15075]: #25 /usr/lib64/libtevent.so.0(_tevent_loop_once+0x91) [0x1549833cfba1]
     Aug 19 20:30:27 husky smbd[15075]: #26 /usr/lib64/libtevent.so.0(tevent_common_loop_wait+0x1b) [0x1549833cfe7b]
     Aug 19 20:30:27 husky smbd[15075]: #27 /usr/lib64/libtevent.so.0(+0xce97) [0x1549833d4e97]
     Aug 19 20:30:27 husky smbd[15075]: #28 /usr/sbin/smbd(main+0x1489) [0x5654df156259]
     Aug 19 20:30:27 husky smbd[15075]: #29 /lib64/libc.so.6(+0x236b7) [0x1549831d76b7]
     Aug 19 20:30:27 husky smbd[15075]: #30 /lib64/libc.so.6(__libc_start_main+0x85) [0x1549831d7775]
     Aug 19 20:30:27 husky smbd[15075]: #31 /usr/sbin/smbd(_start+0x21) [0x5654df156b31]
     Aug 19 20:30:27 husky smbd[15075]: [2023/08/19 20:30:27.828898, 0] ../../source3/lib/dumpcore.c:315(dump_core)
     Aug 19 20:30:27 husky smbd[15075]: dumping core in /var/log/samba/cores/smbd
     No actual core is present in that dir:
     /var/log/samba/cores/smbd# ls -alh
     total 0
     drwx------ 2 root root 40 Jul 14 22:48 ./
     drwx------ 9 root root 180 Jul 14 22:49 ../
     husky-diagnostics-20230823-0049.zip
  6. I've heard this is a solution to use: https://github.com/tiredofit/docker-db-backup. You basically don't touch your actual db and just have that docker run backups periodically from it to a place of your choosing, which you then back up (see the sketch after this list).
  7. netdata changed a bit of stuff recently (their icons are svg), so the fallout is that the unraid netdata template's icon is no longer there. Since unraid doesn't support loading svg, the easy fix for the icon is just to point to the old image from an older version when it was still there: https://raw.githubusercontent.com/netdata/netdata/v1.40/web/gui/dashboard/images/ms-icon-70x70.png
  8. If it helps track this down: I do not use NFS; it is set to no in unraid.
  9. Just upgraded to 6.12.2 from 6.12.1 and I'm seeing some new entries in the log after booting:
     Jun 30 16:55:37 husky unraid-api[8238]: ✔️ UNRAID API started successfully!
     Jun 30 16:55:39 husky rpc.statd[8351]: Version 2.6.2 starting
     Jun 30 16:55:39 husky rpc.statd[8353]: Version 2.6.2 starting
     Jun 30 16:55:39 husky sm-notify[8352]: Version 2.6.2 starting
     Jun 30 16:55:39 husky rpc.statd[8351]: Failed to read /var/lib/nfs/state: Success
     Jun 30 16:55:39 husky rpc.statd[8351]: Initializing NSM state
     Jun 30 16:55:39 husky sm-notify[8354]: Version 2.6.2 starting
     Jun 30 16:55:39 husky sm-notify[8354]: Already notifying clients; Exiting!
     Jun 30 16:55:39 husky rpc.statd[8353]: Failed to register (statd, 1, udp): svc_reg() err: RPC: Success
     Jun 30 16:55:39 husky rpc.statd[8351]: Failed to register (statd, 1, tcp): svc_reg() err: RPC: Success
     Jun 30 16:55:39 husky rpc.statd[8351]: Failed to register (statd, 1, udp6): svc_reg() err: RPC: Success
     Jun 30 16:55:39 husky rpc.statd[8351]: Failed to register (statd, 1, tcp6): svc_reg() err: RPC: Success
     Jun 30 16:55:41 husky kernel: RPC: Registered named UNIX socket transport module.
     Jun 30 16:55:41 husky kernel: RPC: Registered udp transport module.
     Jun 30 16:55:41 husky kernel: RPC: Registered tcp transport module.
     Jun 30 16:55:41 husky kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
     Jun 30 16:55:41 husky rpc.nfsd[8486]: unable to bind AF_INET TCP socket: errno 98 (Address already in use)
     Jun 30 16:55:41 husky rpc.nfsd[8485]: unable to bind AF_INET TCP socket: errno 98 (Address already in use)
     Jun 30 16:55:42 husky kernel: NFSD: Using UMH upcall client tracking operations.
     Jun 30 16:55:42 husky kernel: NFSD: starting 90-second grace period (net f0000000)
     Jun 30 16:55:42 husky rpc.mountd[8507]: Failed to register (mountd, 1, udp): svc_reg() err: RPC: Success
     Jun 30 16:55:42 husky rpc.mountd[8507]: Failed to register (mountd, 1, tcp): svc_reg() err: RPC: Success
     Jun 30 16:55:42 husky rpc.mountd[8507]: Failed to register (mountd, 1, udp6): svc_reg() err: RPC: Success
     Jun 30 16:55:42 husky rpc.mountd[8507]: Failed to register (mountd, 1, tcp6): svc_reg() err: RPC: Success
     Jun 30 16:55:42 husky rpc.mountd[8507]: Failed to register (mountd, 2, udp): svc_reg() err: RPC: Success
     Jun 30 16:55:42 husky rpc.mountd[8507]: Failed to register (mountd, 2, tcp): svc_reg() err: RPC: Success
     Jun 30 16:55:42 husky rpc.mountd[8507]: Failed to register (mountd, 2, udp6): svc_reg() err: RPC: Success
     Jun 30 16:55:42 husky rpc.mountd[8507]: Failed to register (mountd, 2, tcp6): svc_reg() err: RPC: Success
     Jun 30 16:55:42 husky rpc.mountd[8507]: Failed to register (mountd, 3, udp): svc_reg() err: RPC: Success
     Jun 30 16:55:42 husky rpc.mountd[8507]: Failed to register (mountd, 3, tcp): svc_reg() err: RPC: Success
     Jun 30 16:55:42 husky rpc.mountd[8508]: Version 2.6.2 starting
     Jun 30 16:55:42 husky rpc.mountd[8507]: Failed to register (mountd, 3, udp6): svc_reg() err: RPC: Success
     Jun 30 16:55:42 husky rpc.mountd[8507]: Failed to register (mountd, 3, tcp6): svc_reg() err: RPC: Success
     Jun 30 16:55:42 husky rpc.mountd[8507]: mountd: No V2 or V3 listeners created!
     Jun 30 16:55:42 husky rpc.mountd[8510]: Version 2.6.2 starting
     Looking at the 6.12.1 logs, it normally looks like:
     Jun 20 17:08:42 husky unraid-api[8534]: ✔️ UNRAID API started successfully!
     Jun 20 17:08:43 husky kernel: RPC: Registered named UNIX socket transport module.
     Jun 20 17:08:43 husky kernel: RPC: Registered udp transport module.
     Jun 20 17:08:43 husky kernel: RPC: Registered tcp transport module.
     Jun 20 17:08:43 husky kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
     Jun 20 17:08:43 husky rpc.nfsd[8619]: unable to bind AF_INET TCP socket: errno 98 (Address already in use)
     Jun 20 17:08:43 husky rpc.nfsd[8619]: unable to bind AF_INET TCP socket: errno 98 (Address already in use)
     Jun 20 17:08:43 husky rpc.nfsd[8619]: unable to set any sockets for nfsd
     Jun 20 17:08:44 husky kernel: NFSD: Using UMH upcall client tracking operations.
     Jun 20 17:08:44 husky kernel: NFSD: starting 90-second grace period (net f0000000)
     husky-diagnostics-20230630-1747.zip
  10. Tested when upgrading to 6.12.1 and yep, it worked fine. So no idea what happened on the 6.11.5 -> 6.12.0 upgrade; gremlins.
  11. main tab > array operations > move -- Move will immediately invoke the Mover. (Schedule)
  12. Thanks for the detailed response; it makes sense now what happened. About the card: I just needed a card in the box to use with a monitor from time to time / pass to a VM, as there is no iGPU. It was a cheap (like $40) x1 card that is fanless/single-slot. I don't need transcoding, so no worries.
  13. The old CA Backup app used to just inherit how the dockers were sorted in the unraid docker GUI: it would stop them in reverse "unraid docker GUI" order, then when it starts them back up it does the reverse of that (aka the normal "unraid docker page GUI" order). There's a small sketch of that ordering after this list.
  14. OK, took me a sec but I see: by default it only backs up the "internal volume", but then down below you can exclude things from within that via the selector. At first glance I didn't realize you could drill down, but I see it now. Thanks! Dunno how hard it would be to have that tree auto-expand to the 'internal volume' path, but probably not worth the hassle.
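
A minimal sketch for item 2 above of how the Seagate 'command timeout' raw value splits into three 16-bit buckets whose max you would compare against a warning threshold. This is not the plugin's actual code; the attribute name, the smartctl column position, the assumption that the raw value prints as a plain decimal, and the example threshold of 5 are all mine.

    # Decode SMART attribute 188 (Command_Timeout) into its three 16-bit buckets.
    # /dev/sdX is a placeholder; the raw value is assumed to print as one decimal number.
    raw=$(smartctl -A /dev/sdX | awk '$2 == "Command_Timeout" {print $10}')
    b0=$((  raw        & 0xFFFF ))   # low 16-bit bucket
    b1=$(( (raw >> 16) & 0xFFFF ))   # middle 16-bit bucket
    b2=$(( (raw >> 32) & 0xFFFF ))   # high 16-bit bucket
    max=$b0
    [ "$b1" -gt "$max" ] && max=$b1
    [ "$b2" -gt "$max" ] && max=$b2
    echo "buckets: $b0 $b1 $b2 (max: $max)"
    # Example warning rule: only flag the drive once the max bucket reaches 5.
    [ "$max" -ge 5 ] && echo "warn" || echo "ok"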
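
For item 4, a quick way to come up with a value for that APP_KEY variable before adding it to the container template. The Laravel-style 'base64:' format is an assumption on my part; the linked FAQ is the authoritative set of steps.

    # Generate a random application key to paste into a new container variable named APP_KEY.
    # The key format here is assumed to be Laravel-style base64; follow the linked FAQ if in doubt.
    echo "base64:$(openssl rand -base64 32)"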
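
For item 6, a rough sketch of what running that backup container next to an existing database could look like. Every container name, variable name, and value below is a placeholder from memory, not a confirmed interface of https://github.com/tiredofit/docker-db-backup, so check the project's README for the real variable names before relying on this.

    # Sketch only: dump an existing MariaDB container on a schedule to a share you already back up.
    # DB_DUMP_FREQ is assumed to be minutes between dumps (1440 = daily); verify against the README.
    docker run -d --name db-backup \
      -e DB_TYPE=mariadb \
      -e DB_HOST=mariadb \
      -e DB_NAME=nextcloud \
      -e DB_USER=backupuser \
      -e DB_PASS=changeme \
      -e DB_DUMP_FREQ=1440 \
      -v /mnt/user/backups/db:/backup \
      tiredofit/db-backup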
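
And for item 13, a tiny illustration of that stop/start ordering, using three made-up container names listed top-to-bottom as they would appear on the Docker tab.

    # Containers in the order they appear on the unraid Docker page (example names only).
    containers=(plex sonarr radarr)

    # The old CA Backup stopped them in reverse of that order...
    for (( i=${#containers[@]}-1; i>=0; i-- )); do
      docker stop "${containers[i]}"
    done

    # ...and started them again in the normal page order when the backup finished.
    for c in "${containers[@]}"; do
      docker start "$c"
    done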