steini84

Community Developer
  • Posts: 434
  • Days Won: 1

Everything posted by steini84

  1. Check the Dashboard tab, you are in Main. Sent from my iPhone using Tapatalk
  2. I use check_mk for overall monitoring of my servers and everything that has a network connection. It seems that the unassigned drives are being monitored by Unraid. Sent from my iPhone using Tapatalk
  3. Built zfs-2.0.0-rc5 for 6.8.3 & 6.9.0-beta30 (*see the first post for how to enable unstable builds)
  4. I'm sorry, but I don't use Windows. You could send a question to this awesome podcast: https://2.5admins.com/ It has Jim Salter, the author of Sanoid/Syncoid, and the literal king of ZFS, Allan Jude.
  5. Good idea, I completely forgot to separate the stable builds from the RC versions. I upgraded the plugin so that by default you install the cached version if available. If not, the plugin checks for a stable version by default. However, you can touch a file on disk and enable unstable builds (like the 2.0.0 RC series):

     #Enable unstable builds
     touch /boot/config/plugins/unRAID6-ZFS/USE_UNSTABLE_BUILDS
     rm /boot/config/plugins/unRAID6-ZFS/packages/*
     #Then reboot

     #Disable unstable builds
     rm /boot/config/plugins/unRAID6-ZFS/USE_UNSTABLE_BUILDS
     rm /boot/config/plugins/unRAID6-ZFS/packages/*
     #Then reboot

     The builds are pretty much vanilla ZFS and you can check the build scripts on Github:
     https://github.com/Steini1984/unRAID6-ZFS/blob/master/build.sh (to build latest ZFS)
     https://github.com/Steini1984/unRAID6-ZFS/blob/master/build_github.sh (to build custom versions like RC)

     I feel your pain and it's incredibly frustrating when the server has issues! I remember having incredible problems with my server a few years back. Random disks failing, processes crashing and all around pain. It took me probably a week of debugging and the solution was a new PSU. Hope you can fix your problems and I really hope that at least ZFS is not the culprit.
  6. Rebuilt zfs-2.0.0-rc4 for unRAID 6.8.3 & 6.9.0-beta30
  7. Sorry, my bad. Had the rc3 files cached; rebuilding now. Sent from my iPhone using Tapatalk
  8. Built zfs-2.0.0-rc4 for unRAID 6.8.3 & 6.9.0-beta30
  9. I have had problems with that before and I had to do:
     zfs destroy <dataset>
     rm -rf <mountpoint>
     Then zfs destroy <dataset> again.
     Sent from my iPhone using Tapatalk
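     To make that concrete, a minimal sketch of the same sequence (the dataset name tank/appdata and its mountpoint are placeholders, not from the post above):

         # Hypothetical example of clearing out a dataset whose mountpoint blocks the destroy
         zfs destroy tank/appdata        # may fail if the mountpoint directory is busy or not empty
         rm -rf /mnt/tank/appdata        # remove the leftover mountpoint directory
         zfs destroy tank/appdata        # retry once the directory is gone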
  10. First off, here you can see the build script: https://github.com/Steini1984/unRAID6-ZFS/blob/master/build.sh
  11. So what happens is that the plugin first checks if you have a locally cached package to install in /boot/config/plugins/unRAID6-ZFS/packages/ and if not it checks on GitHub. If I understand correctly you are running unRAID 6.8.3 stable and want to run zfs 2.0.0-rc3? This is what I did to achieve what you want. Have the plugin installed and run these commands:
     rm /boot/config/plugins/unRAID6-ZFS/packages/zfs*
     wget -O /boot/config/plugins/unRAID6-ZFS/packages/zfs-2.0.0-rc3-unRAID-6.8.3.x86_64.tgz https://www.dropbox.com/s/wmzxjyzqs9b9fxz/zfs-2.0.0-rc3-unRAID-6.8.3.x86_64.tgz?dl=0
     wget -O /boot/config/plugins/unRAID6-ZFS/packages/zfs-2.0.0-rc3-unRAID-6.8.3.x86_64.tgz.md5 https://www.dropbox.com/s/3onv1qur26yxb7n/zfs-2.0.0-rc3-unRAID-6.8.3.x86_64.tgz.md5?dl=0
     Before you reboot you can run this command to test if everything went as expected:
     cat /etc/unraid-version && md5sum /boot/config/plugins/unRAID6-ZFS/packages/zfs-2.0.0-rc3-unRAID-6.8.3.x86_64.tgz && cat /boot/config/plugins/unRAID6-ZFS/packages/zfs-2.0.0-rc3-unRAID-6.8.3.x86_64.tgz.md5
     and you should get this exact output:
     version="6.8.3"
     8a6c48b7c3ff3e9a91ce400e9ff05ad6 /boot/config/plugins/unRAID6-ZFS/packages/zfs-2.0.0-rc3-unRAID-6.8.3.x86_64.tgz
     8a6c48b7c3ff3e9a91ce400e9ff05ad6 /root/mount/zfs-2.0.0-rc3-unRAID-6.8.3.x86_64.tgz
     Then you can reboot and confirm it worked as expected:
     root@Tower:~# dmesg | grep ZFS && cat /etc/unraid-version
     [ 33.429241] ZFS: Loaded module v2.0.0-rc3, ZFS pool version 5000, ZFS filesystem version 5
     version="6.8.3"
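     For anyone curious, the selection logic described above boils down to roughly this (a simplified sketch of the flow, not the plugin's actual code; the paths follow the ones mentioned in the post):

         # Rough sketch only - the real logic lives in the plugin and the build scripts linked above
         PKGDIR=/boot/config/plugins/unRAID6-ZFS/packages
         UNRAID_VER=$(sed -n 's/^version="\(.*\)"/\1/p' /etc/unraid-version)

         if ls "$PKGDIR"/zfs-*-unRAID-"$UNRAID_VER".x86_64.tgz >/dev/null 2>&1; then
             echo "Using locally cached package from $PKGDIR"
         else
             echo "No cached package - the plugin would fetch a matching build from GitHub"
         fi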
  12. Check this thread out since it has some good info: https://www.reddit.com/r/zfs/comments/ew9zjm/turning_off_dedup_question_for_someone_more/
  13. Maybe you can just use dedup on the dataset that houses the builds? Sent from my iPhone using Tapatalk
  14. It is probably the dedup that is killing you. I want to steal this quote: "Although it sounds cool, deduplication is rarely worth it. It usually creates a lot of memory problems." REF: https://bigstep.com/blog/zfs-best-practices-and-caveats
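     If you want to confirm whether dedup is what is eating the RAM, something like this works (tank and tank/builds are placeholder names; note that setting dedup=off only affects data written from then on, existing blocks stay in the dedup table until rewritten):

         zpool list tank                  # the DEDUP column shows the pool-wide dedup ratio
         zfs get dedup tank/builds        # check whether dedup is enabled on the dataset
         zfs set dedup=off tank/builds    # only affects newly written data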
  15. You are using almost all of your memory. You can either use a smaller ARC or add swap... maybe both.

     Adding swap:
     #first create an 8GB zvol where <pool> is the name of your pool:
     zfs create -V 8G -b $(getconf PAGESIZE) \
       -o primarycache=metadata \
       -o com.sun:auto-snapshot=false <pool>/swap
     #then make it a swap partition
     mkswap -f /dev/zvol/<pool>/swap
     swapon /dev/zvol/<pool>/swap
     #to make it persistent you need to add this to your go file:
     swapon /dev/zvol/<pool>/swap
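     As a quick follow-up (not part of the original post): once the swap zvol is enabled you can verify it, and later undo it, like this:

         swapon --show                    # should list /dev/zvol/<pool>/swap
         free -h                          # the Swap line should now include the extra 8G

         # to remove it again (assuming nothing else uses this zvol)
         swapoff /dev/zvol/<pool>/swap
         zfs destroy <pool>/swap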
  16. Ok so the limit is working, but what about: free -g
  17. Can you paste the output of arcstat?
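     Since this part of the thread is about keeping the ARC in check, here is a hedged sketch of how to cap it (the 8 GiB value is just an example; on unRAID the echo line could also be added to the go file to survive a reboot):

         # cap the ZFS ARC at 8 GiB at runtime (value in bytes)
         echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max

         # check the current ARC size and limit afterwards
         awk '$1 == "size" || $1 == "c_max" {print $1, $3}' /proc/spl/kstat/zfs/arcstats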
  18. Built zfs-0.8.5 for unRAID-6.8.3. I also built zfs-0.8.5 for unRAID-6.9.0-beta30 for those who want to try the unRAID beta but stay on the latest stable ZFS version. To install 0.8.5 on beta30 you have to run these commands and reboot:
     rm /boot/config/plugins/unRAID6-ZFS/packages/zfs*
     wget -P /boot/config/plugins/unRAID6-ZFS/packages/ https://github.com/Steini1984/unRAID6-ZFS/raw/master/packages/zfs-0.8.5-unRAID-6.9.0-beta30.x86_64.tgz
     wget -P /boot/config/plugins/unRAID6-ZFS/packages/ https://raw.githubusercontent.com/Steini1984/unRAID6-ZFS/master/packages/zfs-0.8.5-unRAID-6.9.0-beta30.x86_64.tgz.md5
  19. Hello I am running into 3 problems I have not seen in the stable version: Oct 7 03:50:33 Unraid crond[1705]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null Oct 7 03:52:33 Unraid kernel: cat: page allocation failure: order:5, mode:0x40dc0(GFP_KERNEL|__GFP_COMP|__GFP_ZERO), nodemask=(null),cpuset=/,mems_allowed=0 Oct 7 03:52:33 Unraid kernel: CPU: 0 PID: 30268 Comm: cat Tainted: P O 5.8.13-Unraid #1 Oct 7 03:52:33 Unraid kernel: Hardware name: To Be Filled By O.E.M. To Be Filled By O.E.M./B75 Pro3, BIOS P1.60 12/10/2012 Oct 7 03:52:33 Unraid kernel: Call Trace: Oct 7 03:52:33 Unraid kernel: dump_stack+0x6b/0x83 Oct 7 03:52:33 Unraid kernel: warn_alloc+0xe2/0x160 Oct 7 03:52:33 Unraid kernel: ? _cond_resched+0x1b/0x1e Oct 7 03:52:33 Unraid kernel: ? __alloc_pages_direct_compact+0xff/0x126 Oct 7 03:52:33 Unraid kernel: __alloc_pages_slowpath.constprop.0+0x753/0x780 Oct 7 03:52:33 Unraid kernel: ? __alloc_pages_nodemask+0x1a9/0x1fc Oct 7 03:52:33 Unraid kernel: __alloc_pages_nodemask+0x1a1/0x1fc Oct 7 03:52:33 Unraid kernel: kmalloc_order+0x15/0x67 Oct 7 03:52:33 Unraid kernel: proc_sys_call_handler+0xb2/0x132 Oct 7 03:52:33 Unraid kernel: vfs_read+0xa8/0x103 Oct 7 03:52:33 Unraid kernel: ksys_read+0x71/0xba Oct 7 03:52:33 Unraid kernel: do_syscall_64+0x7a/0x94 Oct 7 03:52:33 Unraid kernel: entry_SYSCALL_64_after_hwframe+0x44/0xa9 Oct 7 03:52:33 Unraid kernel: RIP: 0033:0x15226d55282e Oct 7 03:52:33 Unraid kernel: Code: c0 e9 f6 fe ff ff 50 48 8d 3d b6 5d 0a 00 e8 e9 fd 01 00 66 0f 1f 84 00 00 00 00 00 64 8b 04 25 18 00 00 00 85 c0 75 14 0f 05 <48> 3d 00 f0 ff ff 77 5a c3 66 0f 1f 84 00 00 00 00 00 48 83 ec 28 Oct 7 03:52:33 Unraid kernel: RSP: 002b:00007fffc8b571b8 EFLAGS: 00000246 ORIG_RAX: 0000000000000000 Oct 7 03:52:33 Unraid kernel: RAX: ffffffffffffffda RBX: 0000000000020000 RCX: 000015226d55282e Oct 7 03:52:33 Unraid kernel: RDX: 0000000000020000 RSI: 000015226d42b000 RDI: 0000000000000003 Oct 7 03:52:33 Unraid kernel: RBP: 000015226d42b000 R08: 000015226d42a010 R09: 0000000000000000 Oct 7 03:52:33 Unraid kernel: R10: 000015226d65ea90 R11: 0000000000000246 R12: 0000000000402e80 Oct 7 03:52:33 Unraid kernel: R13: 0000000000000003 R14: 0000000000020000 R15: 0000000000020000 Oct 7 03:52:33 Unraid kernel: Mem-Info: Oct 7 03:52:33 Unraid kernel: active_anon:1276148 inactive_anon:877919 isolated_anon:0 Oct 7 03:52:33 Unraid kernel: active_file:302697 inactive_file:4606484 isolated_file:0 Oct 7 03:52:33 Unraid kernel: unevictable:1729 dirty:82 writeback:5 Oct 7 03:52:33 Unraid kernel: slab_reclaimable:253868 slab_unreclaimable:288535 Oct 7 03:52:33 Unraid kernel: mapped:130341 shmem:298117 pagetables:13068 bounce:0 Oct 7 03:52:33 Unraid kernel: free:150406 free_pcp:0 free_cma:0 Oct 7 03:52:33 Unraid kernel: Node 0 active_anon:5104592kB inactive_anon:3511676kB active_file:1210788kB inactive_file:18425936kB unevictable:6916kB isolated(anon):0kB isolated(file):0kB mapped:521364kB dirty:328kB writeback:20kB shmem:1192468kB shmem_thp: 0kB shmem_pmdmapped: 0kB anon_thp: 4222976kB writeback_tmp:0kB all_unreclaimable? 
no Oct 7 03:52:33 Unraid kernel: Node 0 DMA free:15896kB min:32kB low:44kB high:56kB reserved_highatomic:0KB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15980kB managed:15896kB mlocked:0kB kernel_stack:0kB pagetables:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB Oct 7 03:52:33 Unraid kernel: lowmem_reserve[]: 0 3053 31654 31654 Oct 7 03:52:33 Unraid kernel: Node 0 DMA32 free:424868kB min:6516kB low:9640kB high:12764kB reserved_highatomic:0KB active_anon:551912kB inactive_anon:505524kB active_file:28248kB inactive_file:1381904kB unevictable:0kB writepending:4kB present:3355692kB managed:3272532kB mlocked:0kB kernel_stack:96kB pagetables:1008kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB Oct 7 03:52:33 Unraid kernel: lowmem_reserve[]: 0 0 28600 28600 Oct 7 03:52:33 Unraid kernel: Node 0 Normal free:160860kB min:61032kB low:90316kB high:119600kB reserved_highatomic:2048KB active_anon:4553276kB inactive_anon:3006284kB active_file:1182540kB inactive_file:17043816kB unevictable:6916kB writepending:344kB present:29857792kB managed:29287124kB mlocked:16kB kernel_stack:36568kB pagetables:51264kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB Oct 7 03:52:33 Unraid kernel: lowmem_reserve[]: 0 0 0 0 Oct 7 03:52:33 Unraid kernel: Node 0 DMA: 0*4kB 1*8kB (U) 1*16kB (U) 0*32kB 2*64kB (U) 1*128kB (U) 1*256kB (U) 0*512kB 1*1024kB (U) 1*2048kB (M) 3*4096kB (M) = 15896kB Oct 7 03:52:33 Unraid kernel: Node 0 DMA32: 24345*4kB (UME) 28528*8kB (UME) 5509*16kB (UME) 362*32kB (UME) 0*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 425332kB Oct 7 03:52:33 Unraid kernel: Node 0 Normal: 3839*4kB (UMEH) 4780*8kB (UMEH) 6632*16kB (UEH) 52*32kB (UEH) 5*64kB (H) 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 161692kB Oct 7 03:52:33 Unraid kernel: Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB Oct 7 03:52:33 Unraid kernel: 5218938 total pagecache pages Oct 7 03:52:33 Unraid kernel: 25186 pages in swap cache Oct 7 03:52:33 Unraid kernel: Swap cache stats: add 233014, delete 207828, find 21328/25054 Oct 7 03:52:33 Unraid kernel: Free swap = 7567100kB Oct 7 03:52:33 Unraid kernel: Total swap = 8388604kB Oct 7 03:52:33 Unraid kernel: 8307366 pages RAM Oct 7 03:52:33 Unraid kernel: 0 pages HighMem/MovableOnly Oct 7 03:52:33 Unraid kernel: 163478 pages reserved Oct 7 03:52:33 Unraid kernel: 0 pages cma reserved Oct 7 09:45:45 Unraid avahi-daemon[9922]: Record [Unraid._ssh._tcp.local#011IN#011SRV 0 0 22 Unraid.local ; ttl=120] not fitting in legacy unicast packet, dropping. Oct 7 09:47:15 Unraid avahi-daemon[9922]: Record [Unraid._ssh._tcp.local#011IN#011SRV 0 0 22 Unraid.local ; ttl=120] not fitting in legacy unicast packet, dropping. Oct 7 09:48:45 Unraid avahi-daemon[9922]: Record [Unraid._ssh._tcp.local#011IN#011SRV 0 0 22 Unraid.local ; ttl=120] not fitting in legacy unicast packet, dropping. 
Oct 7 10:12:09 Unraid smbd[29711]: [2020/10/07 10:12:09.797515, 0] ../../lib/util/fault.c:79(fault_report) Oct 7 10:12:09 Unraid smbd[29711]: =============================================================== Oct 7 10:12:09 Unraid smbd[29711]: [2020/10/07 10:12:09.797569, 0] ../../lib/util/fault.c:80(fault_report) Oct 7 10:12:09 Unraid smbd[29711]: INTERNAL ERROR: Signal 11 in pid 29711 (4.12.7) Oct 7 10:12:09 Unraid smbd[29711]: If you are running a recent Samba version, and if you think this problem is not yet fixed in the latest versions, please consider reporting this bug, see https://wiki.samba.org/index.php/Bug_Reporting Oct 7 10:12:09 Unraid smbd[29711]: [2020/10/07 10:12:09.797595, 0] ../../lib/util/fault.c:86(fault_report) Oct 7 10:12:09 Unraid smbd[29711]: =============================================================== Oct 7 10:12:09 Unraid smbd[29711]: [2020/10/07 10:12:09.797611, 0] ../../source3/lib/util.c:829(smb_panic_s3) Oct 7 10:12:09 Unraid smbd[29711]: PANIC (pid 29711): internal error Oct 7 10:12:09 Unraid smbd[29711]: [2020/10/07 10:12:09.797697, 0] ../../lib/util/fault.c:222(log_stack_trace) Oct 7 10:12:09 Unraid smbd[29711]: BACKTRACE: Oct 7 10:12:09 Unraid smbd[29711]: #0 log_stack_trace + 0x39 [ip=0x15106b3f0f49] [sp=0x7ffdd6132a40] Oct 7 10:12:09 Unraid smbd[29711]: #1 smb_panic_s3 + 0x23 [ip=0x15106afbf223] [sp=0x7ffdd6133380] Oct 7 10:12:09 Unraid smbd[29711]: #2 smb_panic + 0x2f [ip=0x15106b3f115f] [sp=0x7ffdd61333a0] Oct 7 10:12:09 Unraid smbd[29711]: #3 smb_panic + 0x27d [ip=0x15106b3f13ad] [sp=0x7ffdd61334b0] Oct 7 10:12:09 Unraid smbd[29711]: #4 funlockfile + 0x50 [ip=0x15106a6d1690] [sp=0x7ffdd61334c0] Oct 7 10:12:09 Unraid smbd[29711]: #5 create_file_default + 0x76a [ip=0x15106b2099ba] [sp=0x7ffdd6133a70] Oct 7 10:12:09 Unraid smbd[29711]: #6 close_file + 0xcb [ip=0x15106b20a36b] [sp=0x7ffdd6133aa0] Oct 7 10:12:09 Unraid smbd[29711]: #7 file_close_user + 0x35 [ip=0x15106b1afad5] [sp=0x7ffdd6133cc0] Oct 7 10:12:09 Unraid smbd[29711]: #8 smbXsrv_session_logoff + 0x4d [ip=0x15106b25190d] [sp=0x7ffdd6133ce0] Oct 7 10:12:09 Unraid smbd[29711]: #9 smbXsrv_session_logoff + 0x3e2 [ip=0x15106b251ca2] [sp=0x7ffdd6133d30] Oct 7 10:12:09 Unraid smbd[29711]: #10 dbwrap_unmarshall + 0x186 [ip=0x151069efb6b6] [sp=0x7ffdd6133d50] Oct 7 10:12:09 Unraid smbd[29711]: #11 dbwrap_unmarshall + 0x3bb [ip=0x151069efb8eb] [sp=0x7ffdd6133e10] Oct 7 10:12:09 Unraid smbd[29711]: #12 dbwrap_traverse + 0x7 [ip=0x151069ef9f37] [sp=0x7ffdd6133e40] Oct 7 10:12:09 Unraid smbd[29711]: #13 smbXsrv_session_logoff_all + 0x5c [ip=0x15106b251e5c] [sp=0x7ffdd6133e50] Oct 7 10:12:09 Unraid smbd[29711]: #14 smbXsrv_open_cleanup + 0x4d2 [ip=0x15106b2573e2] [sp=0x7ffdd6133e90] Oct 7 10:12:09 Unraid smbd[29711]: #15 smbd_exit_server_cleanly + 0x10 [ip=0x15106b257980] [sp=0x7ffdd6133ef0] Oct 7 10:12:09 Unraid smbd[29711]: #16 exit_server_cleanly + 0x14 [ip=0x15106a862284] [sp=0x7ffdd6133f00] Oct 7 10:12:09 Unraid smbd[29711]: #17 smbd_server_connection_terminate_ex + 0x111 [ip=0x15106b2337c1] [sp=0x7ffdd6133f10] Oct 7 10:12:09 Unraid smbd[29711]: #18 smbd_smb2_request_dispatch_immediate + 0x569 [ip=0x15106b2365d9] [sp=0x7ffdd6133f40] Oct 7 10:12:09 Unraid smbd[29711]: #19 tevent_common_invoke_fd_handler + 0x7d [ip=0x15106a82370d] [sp=0x7ffdd6133fb0] Oct 7 10:12:09 Unraid smbd[29711]: #20 tevent_wakeup_recv + 0x1097 [ip=0x15106a829a77] [sp=0x7ffdd6133fe0] Oct 7 10:12:09 Unraid smbd[29711]: #21 tevent_cleanup_pending_signal_handlers + 0xb7 [ip=0x15106a827c07] [sp=0x7ffdd6134040] Oct 7 10:12:09 Unraid 
smbd[29711]: #22 _tevent_loop_once + 0x94 [ip=0x15106a822df4] [sp=0x7ffdd6134060] Oct 7 10:12:09 Unraid smbd[29711]: #23 tevent_common_loop_wait + 0x1b [ip=0x15106a82309b] [sp=0x7ffdd6134090] Oct 7 10:12:09 Unraid smbd[29711]: #24 tevent_cleanup_pending_signal_handlers + 0x57 [ip=0x15106a827ba7] [sp=0x7ffdd61340b0] Oct 7 10:12:09 Unraid smbd[29711]: #25 smbd_process + 0x7a7 [ip=0x15106b225bb7] [sp=0x7ffdd61340d0] Oct 7 10:12:09 Unraid smbd[29711]: #26 _start + 0x2271 [ip=0x565531933241] [sp=0x7ffdd6134160] Oct 7 10:12:09 Unraid smbd[29711]: #27 tevent_common_invoke_fd_handler + 0x7d [ip=0x15106a82370d] [sp=0x7ffdd6134230] Oct 7 10:12:09 Unraid smbd[29711]: #28 tevent_wakeup_recv + 0x1097 [ip=0x15106a829a77] [sp=0x7ffdd6134260] Oct 7 10:12:09 Unraid smbd[29711]: #29 tevent_cleanup_pending_signal_handlers + 0xb7 [ip=0x15106a827c07] [sp=0x7ffdd61342c0] Oct 7 10:12:09 Unraid smbd[29711]: #30 _tevent_loop_once + 0x94 [ip=0x15106a822df4] [sp=0x7ffdd61342e0] Oct 7 10:12:09 Unraid smbd[29711]: #31 tevent_common_loop_wait + 0x1b [ip=0x15106a82309b] [sp=0x7ffdd6134310] Oct 7 10:12:09 Unraid smbd[29711]: #32 tevent_cleanup_pending_signal_handlers + 0x57 [ip=0x15106a827ba7] [sp=0x7ffdd6134330] Oct 7 10:12:09 Unraid smbd[29711]: #33 main + 0x1b2f [ip=0x565531930c1f] [sp=0x7ffdd6134350] Oct 7 10:12:09 Unraid smbd[29711]: #34 __libc_start_main + 0xeb [ip=0x15106a4fce5b] [sp=0x7ffdd6134700] Oct 7 10:12:09 Unraid smbd[29711]: #35 _start + 0x2a [ip=0x565531930ffa] [sp=0x7ffdd61347c0] Oct 7 10:12:09 Unraid smbd[29711]: [2020/10/07 10:12:09.815130, 0] ../../source3/lib/dumpcore.c:315(dump_core) Oct 7 10:12:09 Unraid smbd[29711]: dumping core in /var/log/samba/cores/smbd Oct 7 10:12:09 Unraid smbd[29711]: unraid-diagnostics-20201007-1028 2.zip
  20. Same here - 2 Linux VMs and the log filled quickly. Going to update to beta 30 and see if that helps. Saw your comment under the beta-30 release and one of my VMs had reverted to virtio also... Strange, but that should fix it.
  21. Added zfs-2.0.0-rc3 for unRAID-6.9.0-beta30
  22. I run it every minute, but for a simple snapshot without replication znapzend might be easier to set up. Sent from my iPhone using Tapatalk
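     Assuming "it" here means sanoid (the Sanoid/Syncoid thread is linked in the next item), a minute-by-minute cron entry looks roughly like this; the binary path is an assumption and may differ on your install:

         # hypothetical crontab entry - let sanoid apply its snapshot policy every minute
         * * * * * /usr/local/sbin/sanoid --cron &> /dev/null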
  23. I would recommend checking out:
     https://forums.unraid.net/topic/94549-sanoidsyncoid-zfs-snapshots-and-replication/
     or
     https://forums.unraid.net/topic/84442-znapzend-plugin-for-unraid/
     Sent from my iPhone using Tapatalk
  24. FYI I installed 6.9.0-beta29 and the warnings seem to have stopped:
     root@Unraid:~# cat /var/log/syslog | grep -i lp_bool
     root@Unraid:~#
     I also cleaned up some things related to my normal user account, since that ability was removed ("only root user is permitted to login via ssh (remember: no traditional users in Unraid OS - just 'root')").