sunbear

Everything posted by sunbear

  1. Any luck with this? I'm having the same issue. Think it's something off with my smb-extra settings.
  2. Has anyone been able to get this to work with an ASROCK RACK X570D4U or X570D4U-2L2T? The previous plugin version had a patch that was supposed to work for these models but it doesn't work with this new plugin. All readings and communication seem to be working but when I do a CONFIGURE for the fans, it only detects ONE of my 5 fans. And control doesn't seem to be working.
  3. In other words, the same performance, regardless of which path I use?
  4. Yes, I would be very interested! Thanks. I'm wondering what the GUI will show for a pool that I modify in this way? I assume capacity/usage calculations will still work correctly with the additional mirror (I guess the capacity would stay the same).
  5. Is there a specific path that I need to reference in order to get the supposed IO performance boost from using exclusive shares? In other words, do I need to use /mnt/pool/exclusive-share, or /mnt/user/exclusive-share, or does it not matter?
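     For reference, the way I understand it (not 100% sure this is how it's implemented), an exclusive share bypasses the /mnt/user FUSE layer entirely, which should be checkable from the console with something like:
       ls -ld /mnt/user/exclusive-share    # if exclusive access is active, I'd expect this to resolve straight to /mnt/pool/exclusive-share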
  6. I know that currently, if you already have a vdev of mirrors, the Unraid GUI will let you add an identical mirror vdev striped alongside it, increasing your pool capacity (I assume this is the raid01 you mention). I'm just not sure if it will let you do the inverse: add an identical group of striped drives in a mirror config (i.e. raid10). I haven't been able to find anyone discussing such a configuration, and I'd like to know it's possible before I spend the money on the drives. I was hoping to avoid the command line because I'm a noob, but I suppose that can be my last resort.
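     From the command line, my understanding is that adding a second mirror vdev would be something like this (pool and device names made up, untested):
       zpool add mypool mirror /dev/nvme2n1 /dev/nvme3n1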
  7. I am currently running two PCIe 4.0 NVMe drives in ZFS raid0 for that sweet sweet performance. However, I am worried about the lack of error correction/drive failure protection. If I were to buy 2 more NVMe drives, would it be possible to run the two new drives raid0/striped and then mirror both sets for drive protection? Please note that I am not asking if you can stripe two sets of mirrors, which I know you can do (and you can easily add mirror groups within the pool). I'm asking if it is possible to MIRROR two sets of STRIPED drives. The former gives you the 2X read speeds but doesn't give you the benefit of 2X write speeds, while the latter gives you both 2X read speeds and 2X write speeds (theoretically, of course). Thanks.
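     To make the layouts concrete, here's roughly what I mean in zpool terms (pool and device names made up, untested):
       zpool create fastpool /dev/nvme0n1 /dev/nvme1n1                                            # what I have now: a plain 2-drive stripe (raid0)
       zpool create fastpool mirror /dev/nvme0n1 /dev/nvme1n1 mirror /dev/nvme2n1 /dev/nvme3n1    # stripe of mirrors, the layout I know works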
  8. I'm having the same lock-ups and log errors as OP. On rc7. It doesn't lock up completely, it just becomes extremely unresponsive and most dockers stop working. @DuzAwe, did you figure anything out with this?
  9. Is the new version available in the Apps store yet? How do I install it otherwise?
  10. So if a user is adding multiple identical drives, can I assume that it will generally always make more sense to add a raid-protected pool rather than adding individual drives to the parity-protected array? Thanks so much for the responses, btw. These are super helpful!
  11. Awesome, thank you. Would you say there is any difference in the feature set between ZFS's protection and File Integrity's protection (blake3)? Or do they both just provide notification of corruption and that's it? Lately, the File Integrity plugin has been very processor-intensive when running checks, so I'm wondering if ZFS may be better. Am I correct in assuming ZFS has no "scanning" process and the detection is done automatically? Or is it like the btrfs check, which is quite quick? Last thing: if I have a mirrored or raidz2 pool, is it possible to ALSO have it protected under the array parity drive, or is it just like another cache pool?
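     (For anyone else reading: my current understanding is that ZFS verifies checksums on every normal read, and a full scan of the pool is an explicit scrub that you run or schedule, something like:
       zpool scrub mypool       # pool name made up
       zpool status -v mypool   # shows scrub progress and any checksum errors
     but corrections welcome.)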
  12. TWO QUESTIONS: 1. If I convert my XFS array to ZFS, does it make sense to still use the File Integrity Plugin? Or is it overkill since checksums are already verified with ZFS? 2. In order to use the raid features of ZFS, does that require me to create a separate array with a separate parity drive? Or can it be utilized under a parity drive for an already existing XFS array?
  13. I believe I have updated since then but I think you may be correct.
  14. Just had this happen to me. Rebooted server. Any info on the cause or a fix?
  15. IMO, having something like the following: #foregroundButton=[true|false] #backgroundButton=[true|false] would make a lot more sense and prevent one of them from becoming redundant when you use the other. But my problem isn't about the wording, it's about the fact that if I use the variables at all (regardless of setting), BOTH buttons disappear, preventing me from running the script. It took a bit of fiddling to realize I had to remove the variables altogether to get the buttons back.
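     In other words, at the top of the script it would look something like this (these are my proposed names, not existing plugin variables):
       #!/bin/bash
       #foregroundButton=true
       #backgroundButton=true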
  16. I've been having these strange graphql requests on my nginx docker recently and haven't been able to figure out what the hell they are. After a quick google search I noticed that people are associating this with the My Servers plugin? Is this correct? Is there a way that I can turn this off? I'm really not a fan of things randomly scanning my docker network without my knowledge or permission. Luckily my nginx server has been rejecting it, but I assume my other dockers aren't. Can I get some clarification on what this is, or if I'm way off base here?
  17. These variables no longer seem to be working: #foregroundOnly=false #backgroundOnly=true If I use them both, neither button shows up in the GUI, regardless of whether they're set to true or false. Basically, if you define either variable, it is interpreted as TRUE.
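     My guess (simplified, almost certainly not the plugin's actual code) is that it's checking for the presence of the line rather than its value, i.e. the difference between:
       grep -q '^#backgroundOnly' myscript         # true whenever the line exists, even "#backgroundOnly=false"
       grep -qi '^#backgroundOnly=true' myscript   # true only when it's explicitly set to true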
  18. I've got the following BTRFS errors:
      Feb 18 11:31:54 SERV-X370 kernel: BTRFS warning (device loop2): csum failed root 1714 ino 10971 off 1146880 csum 0x5afe1bfa expected csum 0x47c5e48d mirror 1
      Feb 18 11:31:54 SERV-X370 kernel: BTRFS error (device loop2): bdev /dev/loop2 errs: wr 0, rd 0, flush 0, corrupt 1, gen 0
      Diagnostics attached. I saw another post with this error and the user was told to reformat the cache, but they had an NVMe drive. Can someone tell me whether I need to reformat, and the best recommended command to use after backing up? I'm currently formatted w/ RAID5 (1c3 for system & metadata) and would like to do the same. serv-x370-diagnostics-20230218-1714.zip
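     For reference, what I had in mind after backing up is something along these lines (device names and mount point made up, please correct me if there's a better way):
       mkfs.btrfs -f -d raid5 -m raid1c3 /dev/sdb1 /dev/sdc1 /dev/sdd1       # recreate the pool with raid5 data, raid1c3 metadata
       btrfs balance start -dconvert=raid5 -mconvert=raid1c3 /mnt/cache      # or convert the existing pool in place instead of reformatting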
  19. In case anyone ends up here, I'm about 75% sure that I solved the issue. It wasn't my BackupPC docker accessing my DATABASES live; it was when it accessed my NEXTCLOUD-devoted share while Nextcloud was running that the docker daemon would crash and eventually cause my entire system to freeze up. I've been backing up this Nextcloud share live for several years now, and for some reason some update (either w/ Nextcloud, docker, or Unraid) suddenly caused this to no longer be possible. I still have to figure out how to back up that share; for now I've just turned that backup off, but I assume I will have to script some temporary shutdown for the Nextcloud docker. I'll give this a few more days to make sure I'm not imagining things and then will mark this as solved.
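     The temporary-shutdown script I have in mind is basically just this (container name hypothetical):
       #!/bin/bash
       docker stop nextcloud     # pause Nextcloud before BackupPC touches its share
       # ... let BackupPC back up the share here ...
       docker start nextcloud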
  20. Did you ever solve this? I'm getting the same errors.
      <22>Jan 4 09:00:06 SERVER sSMTP[16696]: Creating SSL connection to host
      <22>Jan 4 09:00:07 SERVER sSMTP[16696]: SSL connection using TLS_AES_256_GCM_SHA384
      <19>Jan 4 09:00:11 SERVER sSMTP[16696]: 550 5.2.254 InvalidRecipientsException; Sender throttled due to continuous invalid recipients errors.; STOREDRV.Submission.Exception:InvalidRecipientsException; Failed to process message due to a permanent exception with message [BeginDiagnosticData]Recipient 'root' is not resolved. All recipients must be resolved before a message can be submitted. InvalidRecipientsException: Recipient 'root' is not resolved. All recipients must be resolved before a message can be submitted.[EndDiagnosticData] [Hostname=REDACTED.namprd14.prod.outlook.com]
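     (In case it helps the next person: my reading of that error is that the local user "root" isn't being mapped to a real mailbox. As far as I know, sSMTP does that mapping in /etc/ssmtp/revaliases in the form localuser:mailbox:mailhub:port, so something like the following, with a made-up address:
       echo 'root:myname@outlook.com:smtp.office365.com:587' >> /etc/ssmtp/revaliases
     No idea yet whether that's where Unraid's notification settings actually write it.)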
  21. Ok thanks, I will look into that. But I actually think I may have isolated the issue. The hang only seems to happen when I have my BackupPC docker running. I think there might be some kind of interference when a backup runs on my host appdata folder while other dockers are running. I'm thinking it is my influxdb docker, with its database in the appdata folder, that is getting accessed by BackupPC and causing some kind of memory issue and crashing the docker daemon. The obvious solution would be to shut down my other dockers while running a backup of the appdata folder. My only question is why this has suddenly become an issue. I have been backing up databases like this for several years. Was something updated with the docker daemon to cause this? Is it standard practice to shut down any dockers before backing up data that they are accessing?
  22. I have memory swap set to -1 with "--memory-swap=-1" and am also limiting memory to 8G with "--memory=8g". I have 24G total available. Should I set the memory swap to something else?
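     For what it's worth, this is how I've been double-checking what limits actually got applied (container name assumed to be "influxdb"):
       docker inspect -f '{{.HostConfig.Memory}} {{.HostConfig.MemorySwap}}' influxdb    # prints 8589934592 -1 for an 8g cap with unlimited swap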
  23. Ok, apparently it doesn't require the influxdb docker to be running to cause the docker daemon to hang, like I thought it did. I have even less clue what is going on now. Something changed in the Unraid releases after 6.10.3 that consistently causes my docker daemon to hang, forcing a hard reset of my entire system to get it back. If I can't continue to update my system, this software has become unusable for me and I'm going to be forced to switch to something like TrueNAS. Can whoever does the updates for Unraid please give me some guesses for what this might be? Or some suggestions for troubleshooting?
  24. Ok, I fixed the time issue. It's definitely my influx docker triggering the OOM killer and the docker subsequently crashing, making the whole system unresponsive. I've tried everything to fix this but am out of things to try, other than no longer using influx (or switching to InfluxDB 2). I've removed and recreated the docker from scratch, started a fresh new database and transferred my old data into it, tried the two different types of memory limit parameters for the docker, and tried without memory limits. Something in the updates after Unraid 6.10.3 changed something that causes my influx database to crash from OOM every night around the same time. If I revert my Unraid version, the problem is completely gone. I've pasted my syslog at the time of the crash again below. Does anyone have any other suggestions?
      <13>Dec 21 03:00:01 SERV-X370 root: Starting Mover
      <13>Dec 21 03:00:01 SERV-X370 root: Forcing turbo write on
      <4>Dec 21 03:00:01 SERV-X370 kernel: mdcmd (75): set md_write_method 1
      <4>Dec 21 03:00:01 SERV-X370 kernel:
      <13>Dec 21 03:00:01 SERV-X370 root: ionice -c 2 -n 7 nice -n 5 /usr/local/emhttp/plugins/ca.mover.tuning/age_mover start 15 0 0 '' '' '' '' no 80 '' ''
      <4>Dec 21 05:23:20 SERV-X370 kernel: influxd invoked oom-killer: gfp_mask=0x8c40(GFP_NOFS|__GFP_NOFAIL), order=0, oom_score_adj=0
      <4>Dec 21 05:23:20 SERV-X370 kernel: CPU: 14 PID: 12746 Comm: influxd Not tainted 5.19.17-Unraid #2
      <4>Dec 21 05:23:20 SERV-X370 kernel: Hardware name: Micro-Star International Co., Ltd. MS-7A33/X370 SLI PLUS (MS-7A33), BIOS 3.JU 11/02/2021
      <4>Dec 21 05:23:20 SERV-X370 kernel: Call Trace:
      <4>Dec 21 05:23:20 SERV-X370 kernel: <TASK>
      <4>Dec 21 05:23:20 SERV-X370 kernel: dump_stack_lvl+0x44/0x5c
      <4>Dec 21 05:23:20 SERV-X370 kernel: dump_header+0x4a/0x1ff
      <4>Dec 21 05:23:20 SERV-X370 kernel: oom_kill_process+0x80/0x111
      <4>Dec 21 05:23:20 SERV-X370 kernel: out_of_memory+0x3e8/0x41a
      <4>Dec 21 05:23:20 SERV-X370 kernel: mem_cgroup_out_of_memory+0x7c/0xb2
      <4>Dec 21 05:23:20 SERV-X370 kernel: try_charge_memcg+0x44a/0x55e
      <4>Dec 21 05:23:20 SERV-X370 kernel: ? get_page_from_freelist+0x6ff/0x82d
      <4>Dec 21 05:23:20 SERV-X370 kernel: charge_memcg+0x29/0x71
      <4>Dec 21 05:23:20 SERV-X370 kernel: __mem_cgroup_charge+0x29/0x41
      <4>Dec 21 05:23:20 SERV-X370 kernel: __filemap_add_folio+0xb9/0x34b
      <4>Dec 21 05:23:20 SERV-X370 kernel: ? lruvec_page_state+0x43/0x43
      <4>Dec 21 05:23:20 SERV-X370 kernel: filemap_add_folio+0x37/0x91
      <4>Dec 21 05:23:20 SERV-X370 kernel: __filemap_get_folio+0x1a4/0x1ff
      <4>Dec 21 05:23:20 SERV-X370 kernel: pagecache_get_page+0x13/0x8c
      <4>Dec 21 05:23:20 SERV-X370 kernel: alloc_extent_buffer+0x12d/0x38b
      <4>Dec 21 05:23:20 SERV-X370 kernel: ? read_extent_buffer+0x22/0x9b
      <4>Dec 21 05:23:20 SERV-X370 kernel: read_tree_block+0x21/0x7f
      <4>Dec 21 05:23:20 SERV-X370 kernel: read_block_for_search+0x200/0x27d
      <4>Dec 21 05:23:20 SERV-X370 kernel: btrfs_search_slot+0x6f7/0x7c5
      <4>Dec 21 05:23:20 SERV-X370 kernel: btrfs_lookup_csum+0x5b/0xfd
      <4>Dec 21 05:23:20 SERV-X370 kernel: btrfs_lookup_bio_sums+0x1f4/0x4a2
      <4>Dec 21 05:23:20 SERV-X370 kernel: btrfs_submit_data_bio+0x102/0x18b
      <4>Dec 21 05:23:20 SERV-X370 kernel: submit_extent_page+0x390/0x3d2
      <4>Dec 21 05:23:20 SERV-X370 kernel: ? btrfs_repair_one_sector+0x30a/0x30a
      <4>Dec 21 05:23:20 SERV-X370 kernel: ? set_extent_bit+0x169/0x493
      <4>Dec 21 05:23:20 SERV-X370 kernel: ? _raw_spin_unlock+0x14/0x29
      <4>Dec 21 05:23:20 SERV-X370 kernel: ? set_extent_bit+0x18b/0x493
      <4>Dec 21 05:23:20 SERV-X370 kernel: btrfs_do_readpage+0x487/0x4ed
      <4>Dec 21 05:23:20 SERV-X370 kernel: ? btrfs_repair_one_sector+0x30a/0x30a
      <4>Dec 21 05:23:20 SERV-X370 kernel: extent_readahead+0x209/0x280
      <4>Dec 21 05:23:20 SERV-X370 kernel: read_pages+0x4a/0xe9
      <4>Dec 21 05:23:20 SERV-X370 kernel: page_cache_ra_unbounded+0x10c/0x13f
      <4>Dec 21 05:23:20 SERV-X370 kernel: filemap_fault+0x2e7/0x524
      <4>Dec 21 05:23:20 SERV-X370 kernel: __do_fault+0x30/0x6e
      <4>Dec 21 05:23:20 SERV-X370 kernel: __handle_mm_fault+0x9a5/0xc7d
      <4>Dec 21 05:23:20 SERV-X370 kernel: handle_mm_fault+0x113/0x1d7
      <4>Dec 21 05:23:20 SERV-X370 kernel: do_user_addr_fault+0x36a/0x514
      <4>Dec 21 05:23:20 SERV-X370 kernel: exc_page_fault+0xfc/0x11e
      <4>Dec 21 05:23:20 SERV-X370 kernel: asm_exc_page_fault+0x22/0x30
      <4>Dec 21 05:23:20 SERV-X370 kernel: RIP: 0033:0x1267dd0
      <4>Dec 21 05:23:20 SERV-X370 kernel: Code: Unable to access opcode bytes at RIP 0x1267da6.
      <4>Dec 21 05:23:20 SERV-X370 kernel: RSP: 002b:000000c212eed7c8 EFLAGS: 00010202
      <4>Dec 21 05:23:20 SERV-X370 kernel: RAX: 00000000000a66b6 RBX: 000000c213914d20 RCX: 000000000000001a
      <4>Dec 21 05:23:20 SERV-X370 kernel: RDX: 000000c21385774a RSI: 000000c213857765 RDI: 0000000000000000
      <4>Dec 21 05:23:20 SERV-X370 kernel: RBP: 000000c212eed998 R08: 000000c212eed780 R09: 0000000000000063
      <4>Dec 21 05:23:20 SERV-X370 kernel: R10: 0000000000000030 R11: 0000000000000100 R12: 0000000000000680
      <4>Dec 21 05:23:20 SERV-X370 kernel: R13: 0000000000000180 R14: 0000000000000014 R15: 0000000000000200
      <4>Dec 21 05:23:20 SERV-X370 kernel: </TASK>
      <6>Dec 21 05:23:20 SERV-X370 kernel: memory: usage 8388608kB, limit 8388608kB, failcnt 224254
      <6>Dec 21 05:23:20 SERV-X370 kernel: memory+swap: usage 8388608kB, limit 9007199254740988kB, failcnt 0
      <6>Dec 21 05:23:20 SERV-X370 kernel: kmem: usage 20376kB, limit 9007199254740988kB, failcnt 0
      <6>Dec 21 05:23:20 SERV-X370 kernel: Memory cgroup stats for /docker/290c6fea5fd3a05ed4cc4d6d45b203d7fcf5dd2c08965f4ab8434c3025941022:
      <6>Dec 21 05:23:20 SERV-X370 kernel: anon 8560111616
      <6>Dec 21 05:23:20 SERV-X370 kernel: file 8957952
      <6>Dec 21 05:23:20 SERV-X370 kernel: kernel 20865024
      <6>Dec 21 05:23:20 SERV-X370 kernel: kernel_stack 720896
      <6>Dec 21 05:23:20 SERV-X370 kernel: pagetables 18878464
      <6>Dec 21 05:23:20 SERV-X370 kernel: percpu 14960
      <6>Dec 21 05:23:20 SERV-X370 kernel: sock 0
      <6>Dec 21 05:23:20 SERV-X370 kernel: vmalloc 32768
      <6>Dec 21 05:23:20 SERV-X370 kernel: shmem 0
      <6>Dec 21 05:23:20 SERV-X370 kernel: file_mapped 8192
      <6>Dec 21 05:23:20 SERV-X370 kernel: file_dirty 0
      <6>Dec 21 05:23:20 SERV-X370 kernel: file_writeback 0
      <6>Dec 21 05:23:20 SERV-X370 kernel: swapcached 0
      <6>Dec 21 05:23:20 SERV-X370 kernel: anon_thp 0
      <6>Dec 21 05:23:20 SERV-X370 kernel: file_thp 0
      <6>Dec 21 05:23:20 SERV-X370 kernel: shmem_thp 0
      <6>Dec 21 05:23:20 SERV-X370 kernel: inactive_anon 8553947136
      <6>Dec 21 05:23:20 SERV-X370 kernel: active_anon 6164480
      <6>Dec 21 05:23:20 SERV-X370 kernel: inactive_file 8392704
      <6>Dec 21 05:23:20 SERV-X370 kernel: active_file 0
      <6>Dec 21 05:23:20 SERV-X370 kernel: unevictable 0
      <6>Dec 21 05:23:20 SERV-X370 kernel: slab_reclaimable 590168
      <6>Dec 21 05:23:20 SERV-X370 kernel: slab_unreclaimable 535288
      <6>Dec 21 05:23:20 SERV-X370 kernel: slab 1125456
      <6>Dec 21 05:23:20 SERV-X370 kernel: workingset_refault_anon 0
      <6>Dec 21 05:23:20 SERV-X370 kernel: workingset_refault_file 82685793
      <6>Dec 21 05:23:20 SERV-X370 kernel: workingset_activate_anon 0
      <6>Dec 21 05:23:20 SERV-X370 kernel: workingset_activate_file 13472836
      <6>Dec 21 05:23:20 SERV-X370 kernel: workingset_restore_anon 0
      <6>Dec 21 05:23:20 SERV-X370 kernel: workingset_restore_file 4873761
      <6>Dec 21 05:23:20 SERV-X370 kernel: workingset_nodereclaim 1664
      <6>Dec 21 05:23:20 SERV-X370 kernel: pgfault 3626958
      <6>Dec 21 05:23:20 SERV-X370 kernel: pgmajfault 157695
      <6>Dec 21 05:23:20 SERV-X370 kernel: pgrefill 20783523
      <6>Dec 21 05:23:20 SERV-X370 kernel: pgscan 468181478
      <6>Dec 21 05:23:20 SERV-X370 kernel: pgsteal 83488920
      <6>Dec 21 05:23:20 SERV-X370 kernel: pgactivate 5328178
      <6>Dec 21 05:23:20 SERV-X370 kernel: pgdeactivate 18803208
      <6>Dec 21 05:23:20 SERV-X370 kernel: pglazyfree 394022
      <6>Dec 21 05:23:20 SERV-X370 kernel: pglazyfreed 382682
      <6>Dec 21 05:23:20 SERV-X370 kernel: thp_fault_alloc 1
      <6>Dec 21 05:23:20 SERV-X370 kernel: thp_collapse_alloc 0
      <6>Dec 21 05:23:20 SERV-X370 kernel: Tasks state (memory values in pages):
      <6>Dec 21 05:23:20 SERV-X370 kernel: [ pid ] uid tgid total_vm rss pgtables_bytes swapents oom_score_adj name
      <6>Dec 21 05:23:20 SERV-X370 kernel: [ 12607] 0 12607 2883688 2089516 18767872 0 0 influxd
      <6>Dec 21 05:23:20 SERV-X370 kernel: [ 12872] 0 12872 1071 23 57344 0 0 sh
      <6>Dec 21 05:23:20 SERV-X370 kernel: [ 4187] 0 4187 1071 16 53248 0 0 sh
      <6>Dec 21 05:23:20 SERV-X370 kernel: [ 22708] 0 22708 1071 17 53248 0 0 sh
      <6>Dec 21 05:23:20 SERV-X370 kernel: oom-kill:constraint=CONSTRAINT_MEMCG,nodemask=(null),cpuset=290c6fea5fd3a05ed4cc4d6d45b203d7fcf5dd2c08965f4ab8434c3025941022,mems_allowed=0,oom_memcg=/docker/290c6fea5fd3a05ed4cc4d6d45b203d7fcf5dd2c08965f4ab8434c3025941022,task_memcg=/docker/290c6fea5fd3a05ed4cc4d6d45b203d7fcf5dd2c08965f4ab8434c3025941022,task=influxd,pid=12607,uid=0
      <3>Dec 21 05:23:20 SERV-X370 kernel: Memory cgroup out of memory: Killed process 12607 (influxd) total-vm:11534752kB, anon-rss:8358064kB, file-rss:0kB, shmem-rss:0kB, UID:0 pgtables:18328kB oom_score_adj:0
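      (One detail I only noticed later: the trace says "Memory cgroup out of memory" with constraint=CONSTRAINT_MEMCG, and the memcg limit of 8388608 kB works out to exactly 8 GiB, i.e. my --memory=8g cap, so it's the container hitting its own limit rather than the host running out of RAM.)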