nerv

Everything posted by nerv

  1. Closing this out, after 2 weeks no crashes. Seems very likely the UPS was just failing.
  2. Thanks, that's what I saw as well. Are there specific types of hardware issues that don't show up in logs? I seem to recall memory issues and that type of thing typically showing up. Just trying to narrow where to start looking. For now I've moved it off the UPS to see if that is the cause.
  3. I've attached diagnostics. My server has started rebooting randomly every few days. I haven't upgraded the OS or really changed anything, so I'm wondering if there is a hardware issue. However, looking at the syslogs, there's nothing around the time the reboot happens. Any ideas? My only theory atm is that my battery backup is dying and randomly cutting power. media-diagnostics-20240314-1046.zip syslog-192.168.86.42.log
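     For reference, a read-only check that might surface hardware-level evidence in the mirrored syslog (the path is a placeholder for wherever the syslog server writes its file):
     grep -iE 'mce|machine check|hardware error' /path/to/syslog-192.168.86.42.log
     Worth noting that a hard power cut, like a failing UPS, leaves no shutdown messages at all, so the log simply stopping at the moment of the reboot would be consistent with that theory.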
  4. Hmm, no that's probably it. The warning just showed up for the first time after a month which is odd, but that makes the most sense. Thanks!
  5. Yea, I did change the setting and the crashes stopped, but I got the warning again. Not sure why it's showing up again.
  6. Still no crashes for a month+, so that definitely seems to have fixed it. Weirdly though, I just got an alert from Fix Common Problems that it found macvlan call traces. I confirmed I still have Docker set to ipvlan. Should I ignore this? I also have some VMs running.
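     For reference, one way to double-check which driver the custom Docker network is actually using (the network name br0 is an assumption; substitute whatever shows up as the custom network in the first command):
     docker network ls
     docker network inspect br0 --format '{{.Driver}}'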
  7. So far so good, so I'm going to mark this solved and cross my fingers. Will reopen if I get a new crash + logs. Thanks @JorgeB!
  8. Got it, I changed that. I'll follow up if that fixes the issue. Fwiw, that's actually the first time I've seen that log, or at least the first time Fix Common Problems alerted me to it, which was the other day. I had assumed that was because Mullvad took down my VPN server for maintenance and I had to switch. Maybe that's unrelated though. I'm not sure where that setting actually came from; maybe an older legacy thing, as I've had Unraid for ~10 years? I do have a fairly unusual setup, with dockers given individual dual IPs on a VLAN plus a WireGuard tunnel, but as far as I can tell I didn't set it that way because of one of those, and everything is still working with the new setting. Doesn't matter, just curious :).
  9. Ever since upgrading to 6.11.5, my Unraid system has started crashing, having never crashed before. It was every 2 weeks or so, but recently it has increased to every 2-3 days. I've attached diagnostics and my syslog; any ideas on the cause? The most recent crash was sometime on the 25th, and I turned the server back on the morning of the 26th. There's a CPU warning in the logs right before then (see below), and I see something similar in a few other places. Is this potentially related? Any help greatly appreciated. Thanks!
     Mar 25 06:55:24 Media kernel: ------------[ cut here ]------------
     Mar 25 06:55:24 Media kernel: WARNING: CPU: 16 PID: 6805 at net/netfilter/nf_nat_core.c:594 nf_nat_setup_info+0x73/0x7b1 [nf_nat]
     Mar 25 06:55:24 Media kernel: Modules linked in: af_packet tcp_diag udp_diag inet_diag xt_CHECKSUM ipt_REJECT nf_reject_ipv4 ip6table_mangle ip6table_nat vhost_net tun vhost vhost_iotlb tap veth macvlan xt_nat xt_tcpudp xt_conntrack xt_MASQUERADE nf_conntrack_netlink nfnetlink xfrm_user iptable_nat nf_nat br_netfilter xfs md_mod it87 hwmon_vid xt_connmark nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 xt_mark iptable_mangle xt_comment xt_addrtype iptable_raw wireguard curve25519_x86_64 libcurve25519_generic libchacha20poly1305 chacha_x86_64 poly1305_x86_64 ip6_udp_tunnel udp_tunnel libchacha ip6table_filter ip6_tables iptable_filter ip_tables x_tables bridge 8021q garp mrp stp llc bonding tls ixgbe xfrm_algo mdio x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel ipmi_ssif kvm crct10dif_pclmul crc32_pclmul crc32c_intel ast ghash_clmulni_intel drm_vram_helper i2c_algo_bit aesni_intel drm_ttm_helper ttm crypto_simd cryptd drm_kms_helper rapl intel_cstate drm intel_uncore mpt3sas backlight agpgart
     Mar 25 06:55:24 Media kernel: i2c_i801 i2c_smbus ahci syscopyarea i2c_core sysfillrect raid_class sysimgblt libahci fb_sys_fops scsi_transport_sas wmi acpi_ipmi ipmi_si button unix [last unloaded: xfrm_algo]
     Mar 25 06:55:24 Media kernel: CPU: 16 PID: 6805 Comm: kworker/16:2 Tainted: G W 5.19.17-Unraid #2
     Mar 25 06:55:24 Media kernel: Hardware name: Cirrascale VB1416/GA-7PESH2, BIOS R17 06/26/2018
     Mar 25 06:55:24 Media kernel: Workqueue: events macvlan_process_broadcast [macvlan]
     Mar 25 06:55:24 Media kernel: RIP: 0010:nf_nat_setup_info+0x73/0x7b1 [nf_nat]
     Mar 25 06:55:24 Media kernel: Code: 48 8b 87 80 00 00 00 48 89 fb 49 89 f4 76 04 0f 0b eb 0e 83 7c 24 1c 00 75 07 25 80 00 00 00 eb 05 25 00 01 00 00 85 c0 74 07 <0f> 0b e9 6a 06 00 00 48 8b 83 88 00 00 00 48 8d 73 58 48 8d 7c 24
     Mar 25 06:55:24 Media kernel: RSP: 0018:ffffc90006784bc8 EFLAGS: 00010202
     Mar 25 06:55:24 Media kernel: RAX: 0000000000000080 RBX: ffff8882a61a5700 RCX: ffff889099a38400
     Mar 25 06:55:24 Media kernel: RDX: 0000000000000000 RSI: ffffc90006784cac RDI: ffff8882a61a5700
     Mar 25 06:55:24 Media kernel: RBP: ffffc90006784c90 R08: 000000006756a8c0 R09: 0000000000000000
     Mar 25 06:55:24 Media kernel: R10: 0000000000000158 R11: 0000000000000000 R12: ffffc90006784cac
     Mar 25 06:55:24 Media kernel: R13: 000000006756a800 R14: ffffc90006784d90 R15: 0000000000000000
     Mar 25 06:55:24 Media kernel: FS: 0000000000000000(0000) GS:ffff88903fc00000(0000) knlGS:0000000000000000
     Mar 25 06:55:24 Media kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
     Mar 25 06:55:24 Media kernel: CR2: 0000151c1dd49bf8 CR3: 000000000220a006 CR4: 00000000001726e0
     Mar 25 06:55:24 Media kernel: Call Trace:
     Mar 25 06:55:24 Media kernel: <IRQ>
     Mar 25 06:55:24 Media kernel: ? xt_write_recseq_end+0xf/0x1c [ip_tables]
     Mar 25 06:55:24 Media kernel: ? __local_bh_enable_ip+0x56/0x6b
     Mar 25 06:55:24 Media kernel: ? ipt_do_table+0x57a/0x5bf [ip_tables]
     Mar 25 06:55:24 Media kernel: ? xt_write_recseq_end+0xf/0x1c [ip_tables]
     Mar 25 06:55:24 Media kernel: __nf_nat_alloc_null_binding+0x66/0x81 [nf_nat]
     Mar 25 06:55:24 Media kernel: nf_nat_inet_fn+0xc0/0x1a8 [nf_nat]
     Mar 25 06:55:24 Media kernel: nf_nat_ipv4_local_in+0x2a/0xaa [nf_nat]
     Mar 25 06:55:24 Media kernel: nf_hook_slow+0x3d/0x96
     Mar 25 06:55:24 Media kernel: ? ip_protocol_deliver_rcu+0x164/0x164
     Mar 25 06:55:24 Media kernel: NF_HOOK.constprop.0+0x79/0xd9
     Mar 25 06:55:24 Media kernel: ? ip_protocol_deliver_rcu+0x164/0x164
     Mar 25 06:55:24 Media kernel: ip_sabotage_in+0x4a/0x58 [br_netfilter]
     Mar 25 06:55:24 Media kernel: nf_hook_slow+0x3d/0x96
     Mar 25 06:55:24 Media kernel: ? ip_rcv_finish_core.constprop.0+0x3b7/0x3b7
     Mar 25 06:55:24 Media kernel: NF_HOOK.constprop.0+0x79/0xd9
     Mar 25 06:55:24 Media kernel: ? ip_rcv_finish_core.constprop.0+0x3b7/0x3b7
     Mar 25 06:55:24 Media kernel: __netif_receive_skb_one_core+0x77/0x9c
     Mar 25 06:55:24 Media kernel: process_backlog+0x8c/0x116
     Mar 25 06:55:24 Media kernel: __napi_poll.constprop.0+0x2b/0x124
     Mar 25 06:55:24 Media kernel: net_rx_action+0x159/0x24f
     Mar 25 06:55:24 Media kernel: __do_softirq+0x129/0x288
     Mar 25 06:55:24 Media kernel: do_softirq+0x7f/0xab
     Mar 25 06:55:24 Media kernel: </IRQ>
     Mar 25 06:55:24 Media kernel: <TASK>
     Mar 25 06:55:24 Media kernel: __local_bh_enable_ip+0x4c/0x6b
     Mar 25 06:55:24 Media kernel: netif_rx+0x52/0x5a
     Mar 25 06:55:24 Media kernel: macvlan_broadcast+0x10a/0x150 [macvlan]
     Mar 25 06:55:24 Media kernel: macvlan_process_broadcast+0xbc/0x12f [macvlan]
     Mar 25 06:55:24 Media kernel: process_one_work+0x1ab/0x295
     Mar 25 06:55:24 Media kernel: worker_thread+0x18b/0x244
     Mar 25 06:55:24 Media kernel: ? rescuer_thread+0x281/0x281
     Mar 25 06:55:24 Media kernel: kthread+0xe7/0xef
     Mar 25 06:55:24 Media kernel: ? kthread_complete_and_exit+0x1b/0x1b
     Mar 25 06:55:24 Media kernel: ret_from_fork+0x22/0x30
     Mar 25 06:55:24 Media kernel: </TASK>
     Mar 25 06:55:24 Media kernel: ---[ end trace 0000000000000000 ]---
     Mar 26 08:10:34 Media kernel: mdcmd (36): set md_write_method 1
     peer-Media-wg0-1 (1).zip syslog-192.168.86.42 (1).log
  10. Using the VPN tunneled access for docker, is it possible to not use the tunnel for some local IPs? My dockers on wg1 lose access to dockers I don't want on the tunnel, which run on another VLAN. I tried modifying AllowedIPs with a calculator to exclude 192.168.0.0/16, but when I put the result in AllowedIPs the handshake fails. edit: This is on 6.11.5
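     For reference, excluding 192.168.0.0/16 from 0.0.0.0/0 expands to the list below (this is just the subnet arithmetic, presumably similar to what the calculator produced; not a claim that it fixes the handshake):
     AllowedIPs=0.0.0.0/1, 128.0.0.0/2, 192.0.0.0/9, 192.128.0.0/11, 192.160.0.0/13, 192.169.0.0/16, 192.170.0.0/15, 192.172.0.0/14, 192.176.0.0/12, 192.192.0.0/10, 193.0.0.0/8, 194.0.0.0/7, 196.0.0.0/6, 200.0.0.0/5, 208.0.0.0/4, 224.0.0.0/3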
  11. Hey folks, I was wondering if it's possible to allow some access around docker isolation? I created a wireguard tunnel for docker which works great, except I can't access the dockers unless the source IP is on my main network (that unraid runs on) or the docker wireguard network. It looks like this is caused by the iptables rules below, but adding rules to allow access to the wireguard subnet doesn't seem to work (or I'm doing it wrong). I inserted the first two rules below to try and allow traffic in/out of 172.31.201.7, but no dice. Is this possible to accomplish?
     -A DOCKER-ISOLATION-STAGE-1 -i 172.31.201.7 -j ACCEPT
     -A DOCKER-ISOLATION-STAGE-1 -d 172.31.201.7/32 -j ACCEPT
     -A DOCKER-ISOLATION-STAGE-1 -i docker0 ! -o docker0 -j DOCKER-ISOLATION-STAGE-2
     -A DOCKER-ISOLATION-STAGE-1 -i br-baf8ebd07571 ! -o br-baf8ebd07571 -j DOCKER-ISOLATION-STAGE-2
     -A DOCKER-ISOLATION-STAGE-1 -i br-3df3529e5e0f ! -o br-3df3529e5e0f -j DOCKER-ISOLATION-STAGE-2
     -A DOCKER-ISOLATION-STAGE-1 -j RETURN
     -A DOCKER-ISOLATION-STAGE-2 -o docker0 -j DROP
     -A DOCKER-ISOLATION-STAGE-2 -o br-baf8ebd07571 -j DROP
     -A DOCKER-ISOLATION-STAGE-2 -o br-3df3529e5e0f -j DROP
     -A DOCKER-ISOLATION-STAGE-2 -j RETURN
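     For reference, a minimal sketch (not a tested fix) of what address-based exceptions could look like; note that -i/-o match interface names rather than IP addresses, so the first custom rule above would never match. The wireguard docker subnet and the rule positions are assumptions, and Docker rewrites these chains when its service restarts:
     # Sketch: let traffic sourced from, or destined to, the wireguard docker subnet skip the isolation DROPs
     iptables -I DOCKER-ISOLATION-STAGE-1 1 -s 172.31.201.0/24 -j RETURN
     iptables -I DOCKER-ISOLATION-STAGE-2 1 -d 172.31.201.0/24 -j RETURN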
  12. I rely on the VMs for several things and the crashes have been ~2 weeks apart, so not really an option for me. If it's a VM issue, sounds like a problem with unraid?
  13. I updated to 6.11.5 in the last few weeks and have had two overnight crashes since (maybe unrelated as I added some drives and such as well). I enabled storing syslogs after the first crash, and got these logs last night. I've truncated the logs to the last few hours where the server crashed and before I rebooted. Is this a hardware issue on my side? Or is there something going wrong with unraid? Help appreciated! crash_logs.rtf media-diagnostics-20230113-0703.zip
  14. Bumping this. I've now realized I can't reach any URLs served by SWAG unless I'm on my private network (192.168.86.0/24) as well. Basically it seems like any docker on the wireguard network is somehow only able to receive traffic from the wireguard network or 192.168.86.0/24. Do I need to change something in Unraid's settings?
  15. I recently created a wireguard tunnel of the type "tunneled access for dockers" using Mullvad. Took some doing, but I got dockers on it with port forwarding etc. working. The issue I'm having is that any docker running on the wireguard docker network (172.31.201.0/24) can only access other dockers on that network, or local addresses on my main network (192.168.86.0/24). Unfortunately, I run a couple of VLANs and I'd like the wireguard docker network to be able to access those as well (ping/traceroute can't find routes right now). My understanding of wireguard is pretty basic at this point, so perhaps there's something simple I'm missing? Here's my wireguard config, but let me know what else I can post.
     [Interface]
     PrivateKey=REMOVED
     Address=10.65.30.79
     PostUp=logger -t wireguard 'Tunnel WireGuard-wg1 started'
     PostDown=logger -t wireguard 'Tunnel WireGuard-wg1 stopped'
     PostUp=ip -4 route flush table 201
     PostUp=ip -4 route add default via 10.65.30.79 dev wg1 table 201
     PostUp=ip -4 route add 192.168.86.0/24 via 192.168.86.1 dev br0 table 201
     PostUp=ip -4 route add 192.168.84.0/24 via 192.168.86.1 dev br0 table 201
     PostDown=ip -4 route flush table 201
     PostDown=ip -4 route add unreachable default table 201
     PostDown=ip -4 route add 192.168.86.0/24 via 192.168.86.1 dev br0 table 201
     PostDown=ip -4 route add 192.168.84.0/24 via 192.168.86.1 dev br0 table 201
     [Peer]
     PublicKey=REMOVED
     Endpoint=REMOVED
     AllowedIPs=0.0.0.0/0
     This is essentially Mullvad's default config, but I added the kill switch lines from here, then I added the PostUp/Down lines for 192.168.84.0/24 hoping that would work. Any ideas what I'm doing wrong? Thanks for the help.
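     For reference, a minimal sketch of what each additional VLAN would presumably need, mirroring the existing 192.168.84.0/24 lines (192.168.85.0/24 is a hypothetical VLAN subnet here; the VLAN side also needs a return route for 172.31.201.0/24 so replies can get back):
     PostUp=ip -4 route add 192.168.85.0/24 via 192.168.86.1 dev br0 table 201
     PostDown=ip -4 route add 192.168.85.0/24 via 192.168.86.1 dev br0 table 201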
  16. I rebooted, and it started, but my VMs were gone. I think I must have accidentally deleted/corrupted my libvirt.img.
  17. VMs have been running for a couple of years or so, not sure what changed. I did move from the deprecated CA Backup to v2 this morning, but I'm not sure why that would have messed this up. I've tried moving my libvirt.img aside to let it create a new one, but that didn't seem to work. Diagnostics attached. Thanks, and let me know if I can provide more. media-diagnostics-20221220-1130.zip
  18. Everything seems to be working fine, but Fix Common Problems detected a hardware error and suggested I post my diagnostics for help. I installed mcelog via NerdPack as it instructed, but note I installed mcelog after I got the error. I'm not sure if that means I need to wait for another error to occur before it shows up in the logs; let me know. Thanks! media-diagnostics-20211201-0656.zip
  19. Thanks, I think I've made that mistake before as well. Obvious in hindsight... I disabled the disk and set it to reiser and everything looked fine, so I'm rebuilding the new disk as reiser now.
  20. Hey folks, I had a disk failing SMART checks with reallocated sectors going up steadily, so I replaced it. It was an old 2TB drive formatted with reiser, and I replaced it with a 4TB and chose xfs. The array appeared to rebuild fine, but the disk has the "Unmountable: No file system" issue, which the other disk didn't have when I removed it. xfs_repair spins for a long time looking for the secondary superblock (output below). What's the best way to proceed here? There's data on the disk I'd like to keep, and I have 2 parity disks. I'm worried that if I do something dumb here it will clear the disk and parity will happily accept that, erasing my data. I'm not sure if there's a way to get the old disk to be emulated at this point, because I unassigned the disk and started the array and it still shows as unmountable. Thanks!
     xfs_repair -v /dev/md3
     Phase 1 - find and verify superblock...
     bad primary superblock - bad magic number !!!
     attempting to find secondary superblock...
     .......................................................................................................................................................................................................................................................................................................
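     For reference, a parity rebuild reconstructs the old disk's contents sector for sector, so if the original drive held ReiserFS the rebuilt 4TB would still hold ReiserFS (the xfs choice only applies when formatting), which would explain xfs_repair finding no superblock. A read-only way to check what the rebuilt disk actually contains (assuming the same /dev/md3 device as above):
     blkid /dev/md3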
  21. For those stumbling upon this, I think I figured out a way to work around it. I created a separate SMB share that only had 1 hard link each in it, and it solved the issue. Basically I have SMB share1 structured as share1/folder1 and share1/folder2, where folder1 and folder2 have hard links to the same file. I created a share2 that directly mounts folder1. share1 still can't access all the hard linked files in folder1 and folder2, but share2 works fine, and this is good enough for my use.
  22. I don't know enough about the plugin to know if it's possibly a plugin issue, or an underlying issue with the unraid filesystem that the plugin is using. Any thoughts? If it's the latter I can try and follow up more with generic unraid help (long shot), but at least something to try.
  23. No. Here are the two hard links (movie title replaced). Movie.2020.REPACK.1080p.WEB.h264-WATCHER.mkv Movie (2020) WEBDL-1080p Proper.mkv
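     For reference, a quick way to confirm that two names really are hard links to the same file is to compare inode numbers; the paths below are placeholders for wherever the files actually live:
     ls -li '/path/to/folder1/Movie.2020.REPACK.1080p.WEB.h264-WATCHER.mkv' '/path/to/folder2/Movie (2020) WEBDL-1080p Proper.mkv'
     Hard links show the same inode number in the first column and a link count of 2 in the third.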