denisvic

Members
  • Posts: 20
  • Joined
  • Last visited
Everything posted by denisvic

  1. Hello all, this is the second time I've found my machine completely locked up: it no longer responds to ping, and I have to physically restart the server to regain access. Could you help me diagnose the problem? Regards nas-diagnostics-20230227-0908.zip
  2. I've backed up my data; how can I now format the cache without the GUI? Thank you for your help
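For anyone searching later: a minimal sketch of re-creating a two-device btrfs cache pool from the shell, assuming the array is stopped and the pool members are /dev/sdb1 and /dev/nvme0n1p1 (device names are placeholders; verify with lsblk first, because these commands destroy all data on the given partitions):

```shell
# Identify the cache devices first -- do not guess
lsblk -o NAME,SIZE,MODEL,FSTYPE

# Clear old filesystem signatures on both pool members (DESTRUCTIVE)
wipefs -a /dev/sdb1 /dev/nvme0n1p1

# Re-create the pool with mirrored data and metadata (btrfs raid1)
mkfs.btrfs -f -d raid1 -m raid1 /dev/sdb1 /dev/nvme0n1p1
```

In practice it is usually safer to let Unraid re-create the pool itself: stop the array, assign the pool slots, tick the format option, and start the array.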
  3. It seems to refer to one disk (the NVMe) of my two-disk cache (1 x NVMe and 1 x SATA SSD). Could you help me solve this issue? Can I simply remove the NVMe disk (the pool is redundant), format it (or replace it if faulty) and "rebuild" the FS? Thank you for your help
  4. My fault, I hadn't seen the "diagnostics" command. I've sent you the file by PM (it may contain some private data)
  5. Could you help me generate diagnostics without the web UI (it's broken)?
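For reference, the diagnostics archive can be generated entirely over SSH; `diagnostics` is the stock Unraid shell command, and copying the live syslog to flash is a reasonable fallback (the destination filename below is just an example):

```shell
# Build the standard diagnostics zip (lands in /boot/logs/ on the flash drive)
diagnostics

# Fallback: also copy the current syslog to flash so it survives a reboot
cp /var/log/syslog /boot/syslog-$(date +%Y%m%d-%H%M).txt
```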
  6. Hi all, this morning I noticed some problems on my network (internet access was up but DNS was down). I use AdGuard Home for my DNS. The Unraid web GUI is unresponsive, but SSH works. Some Docker containers run fine (tailscale, for example) while others don't work at all (unifi-controller, adguard, ...). I've tried rebooting the server, without any change. "My Servers" seems to work, but I didn't activate remote access. I've seen some alarming messages in the syslog (attached). Can someone help me solve this weird issue? syslog_unraid.txt
  7. I am totally devastated, how will I be able to recover my data...
  8. Now when I try to repair my unmountable drives, I get this error:

      Phase 1 - find and verify superblock...
              - reporting progress in intervals of 15 minutes
      Superblock has unknown compat/rocompat/incompat features (0x0/0x0/0x10).
      Using a more recent xfs_repair is recommended.
      Found unsupported filesystem features.  Exiting now.
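That message means the filesystem on disk carries a feature flag the bundled xfs_repair doesn't understand, so a newer xfsprogs is needed (typically by booting a newer Unraid release). A sketch of the cautious order of operations, assuming the disk is array device /dev/md1 (a placeholder; running against the /dev/mdX device keeps parity in sync):

```shell
# Check the installed xfsprogs version -- the error asks for a newer one
xfs_repair -V

# On a release with a recent enough xfs_repair: read-only check first
# (-n makes no changes), then the actual repair
xfs_repair -n /dev/md1
xfs_repair /dev/md1
```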
  9. I've just tried restoring 6.9.2 and now I can no longer connect to my instance. It tries to connect to a weird https address when I try to reach the http web page.
  10. Hi all, I am completely lost. I just upgraded my system, which had been running for months on 6.9.2, and I feel like I've lost all my data. My configuration: HP MicroServer Gen8 with 4 mechanical disks in an XFS array (3+1) and 2 SSDs in the cache pool (btrfs). After the upgrade I had problems with the cache: the pool was completely corrupted, and I was forced to format it and restore a backup of my appdata, which hosts my Docker containers. Afterwards I had a lot of input/output errors, and when I run xfs_repair on my mechanical disks there are errors everywhere, pages and pages of them. One of my disks appears almost empty when it was half full. I don't know what to do anymore to get out of this. Can you help me? server-diagnostics-20220520-1947.zip
  11. I've disabled all the containers with custom IPs, but I still see some odd entries in the logs:

      Aug 27 04:11:42 Server kernel: eth0: renamed from vethafe0fd2
      Aug 27 04:11:42 Server kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth0059477: link becomes ready
      Aug 27 04:11:42 Server kernel: br-eec931236e1c: port 5(veth0059477) entered blocking state
      Aug 27 04:11:42 Server kernel: br-eec931236e1c: port 5(veth0059477) entered forwarding state
      Aug 27 04:11:42 Server kernel: br-eec931236e1c: port 6(vethbcda03c) entered blocking state
      Aug 27 04:11:42 Server kernel: br-eec931236e1c: port 6(vethbcda03c) entered disabled state
      Aug 27 04:11:42 Server kernel: device vethbcda03c entered promiscuous mode
      Aug 27 04:11:42 Server kernel: br-eec931236e1c: port 6(vethbcda03c) entered blocking state
      Aug 27 04:11:42 Server kernel: br-eec931236e1c: port 6(vethbcda03c) entered forwarding state
      Aug 27 04:11:42 Server kernel: eth0: renamed from veth8efdec0
      Aug 27 04:11:42 Server kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethbcda03c: link becomes ready
      Aug 27 04:11:43 Server kernel: br-eec931236e1c: port 7(veth7dc791f) entered blocking state
      Aug 27 04:11:43 Server kernel: br-eec931236e1c: port 7(veth7dc791f) entered disabled state
      Aug 27 04:11:43 Server kernel: device veth7dc791f entered promiscuous mode
      Aug 27 04:11:43 Server kernel: br-eec931236e1c: port 7(veth7dc791f) entered blocking state
      Aug 27 04:11:43 Server kernel: br-eec931236e1c: port 7(veth7dc791f) entered forwarding state
      Aug 27 04:11:43 Server kernel: br-eec931236e1c: port 7(veth7dc791f) entered disabled state
      Aug 27 04:11:43 Server kernel: eth0: renamed from veth77fdfbd
      Aug 27 04:11:43 Server kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth7dc791f: link becomes ready
      Aug 27 04:11:43 Server kernel: br-eec931236e1c: port 7(veth7dc791f) entered blocking state
      Aug 27 04:11:43 Server kernel: br-eec931236e1c: port 7(veth7dc791f) entered forwarding state
      Aug 27 04:11:43 Server CA Backup/Restore: #######################
      Aug 27 04:11:43 Server CA Backup/Restore: appData Backup complete
      Aug 27 04:11:43 Server CA Backup/Restore: #######################
  12. It worked for months; I don't understand why the problem suddenly appeared. I have disabled the containers with custom IPs and indeed it is better.
  13. I just had a new crash. Here's what I find in the log saved to flash:

      Aug 21 13:05:09 Server kernel: ------------[ cut here ]------------
      Aug 21 13:05:09 Server kernel: WARNING: CPU: 2 PID: 125 at net/netfilter/nf_conntrack_core.c:1120 __nf_conntrack_confirm+0x9b/0x1e6 [nf_conntrack]
      Aug 21 13:05:09 Server kernel: Modules linked in: xt_mark macvlan xt_comment xt_CHECKSUM ipt_REJECT nf_reject_ipv4 ip6table_mangle ip6table_nat iptable_mangle nf_tables vhost_net tun vhost vhost_iotlb tap veth xt_nat xt_tcpudp xt_conntrack nf_conntrack_netlink nfnetlink xt_addrtype br_netfilter xfs md_mod iptable_nat xt_MASQUERADE nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 wireguard curve25519_x86_64 libcurve25519_generic libchacha20poly1305 chacha_x86_64 poly1305_x86_64 ip6_udp_tunnel udp_tunnel libblake2s blake2s_x86_64 libblake2s_generic libchacha ip6table_filter ip6_tables iptable_filter ip_tables x_tables bonding tg3 ipmi_ssif i2c_core x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel kvm crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel aesni_intel crypto_simd cryptd glue_helper nvme acpi_ipmi ahci rapl nvme_core ipmi_si libahci acpi_power_meter intel_cstate intel_uncore thermal button ie31200_edac [last unloaded: tg3]
      Aug 21 13:05:09 Server kernel: CPU: 2 PID: 125 Comm: kworker/2:1 Tainted: G I 5.10.28-Unraid #1
      Aug 21 13:05:09 Server kernel: Hardware name: HP ProLiant MicroServer Gen8, BIOS J06 05/21/2018
      Aug 21 13:05:09 Server kernel: Workqueue: events macvlan_process_broadcast [macvlan]
      Aug 21 13:05:09 Server kernel: RIP: 0010:__nf_conntrack_confirm+0x9b/0x1e6 [nf_conntrack]
      Aug 21 13:05:09 Server kernel: Code: e8 dc f8 ff ff 44 89 fa 89 c6 41 89 c4 48 c1 eb 20 89 df 41 89 de e8 36 f6 ff ff 84 c0 75 bb 48 8b 85 80 00 00 00 a8 08 74 18 <0f> 0b 89 df 44 89 e6 31 db e8 6d f3 ff ff e8 35 f5 ff ff e9 22 01
      Aug 21 13:05:09 Server kernel: RSP: 0018:ffffc900002c4dd8 EFLAGS: 00010202
      Aug 21 13:05:09 Server kernel: RAX: 0000000000000188 RBX: 000000000000a9e1 RCX: 0000000028ac8310
      Aug 21 13:05:09 Server kernel: RDX: 0000000000000000 RSI: 0000000000000000 RDI: ffffffffa02d0c08
      Aug 21 13:05:09 Server kernel: RBP: ffff8883ffd09040 R08: 000000004dc1cb9c R09: 0000000000000000
      Aug 21 13:05:09 Server kernel: R10: 0000000000000098 R11: ffff88813ec75000 R12: 0000000000000e82
      Aug 21 13:05:09 Server kernel: R13: ffffffff8210b440 R14: 000000000000a9e1 R15: 0000000000000000
      Aug 21 13:05:09 Server kernel: FS: 0000000000000000(0000) GS:ffff888436e80000(0000) knlGS:0000000000000000
      Aug 21 13:05:09 Server kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
      Aug 21 13:05:09 Server kernel: CR2: 0000145fc6720718 CR3: 000000000200a001 CR4: 00000000000606e0
      Aug 21 13:05:09 Server kernel: Call Trace:
      Aug 21 13:05:09 Server kernel: <IRQ>
      Aug 21 13:05:09 Server kernel: nf_conntrack_confirm+0x2f/0x36 [nf_conntrack]
      Aug 21 13:05:09 Server kernel: nf_hook_slow+0x39/0x8e
      Aug 21 13:05:09 Server kernel: nf_hook.constprop.0+0xb1/0xd8
      Aug 21 13:05:09 Server kernel: ? ip_protocol_deliver_rcu+0xfe/0xfe
      Aug 21 13:05:09 Server kernel: ip_local_deliver+0x49/0x75
      Aug 21 13:05:09 Server kernel: __netif_receive_skb_one_core+0x74/0x95
      Aug 21 13:05:09 Server kernel: process_backlog+0xa3/0x13b
      Aug 21 13:05:09 Server kernel: net_rx_action+0xf4/0x29d
      Aug 21 13:05:09 Server kernel: __do_softirq+0xc4/0x1c2
      Aug 21 13:05:09 Server kernel: asm_call_irq_on_stack+0x12/0x20
      Aug 21 13:05:09 Server kernel: </IRQ>
      Aug 21 13:05:09 Server kernel: do_softirq_own_stack+0x2c/0x39
      Aug 21 13:05:09 Server kernel: do_softirq+0x3a/0x44
      Aug 21 13:05:09 Server kernel: netif_rx_ni+0x1c/0x22
      Aug 21 13:05:09 Server kernel: macvlan_broadcast+0x10e/0x13c [macvlan]
      Aug 21 13:05:09 Server kernel: macvlan_process_broadcast+0xf8/0x143 [macvlan]
      Aug 21 13:05:09 Server kernel: process_one_work+0x13c/0x1d5
      Aug 21 13:05:09 Server kernel: worker_thread+0x18b/0x22f
      Aug 21 13:05:09 Server kernel: ? process_scheduled_works+0x27/0x27
      Aug 21 13:05:09 Server kernel: kthread+0xe5/0xea
      Aug 21 13:05:09 Server kernel: ? __kthread_bind_mask+0x57/0x57
      Aug 21 13:05:09 Server kernel: ret_from_fork+0x22/0x30
      Aug 21 13:05:09 Server kernel: ---[ end trace cebed80e37e250d0 ]---
  14. Hello to all, I have been experiencing a random problem on my Unraid server for a few weeks now, even though it had been running for months without any problems. I regularly find that the server is no longer accessible, with no response to ping, and I have to restart it to get access again. The syslog is erased at each startup, so I can't make a diagnosis. It is an HP ProLiant MicroServer Gen8 with 4 disks. Has anyone experienced this kind of problem? server-diagnostics-20210825-1017.zip
  15. Is it possible to ask for a manual removal? Same problem here.
  16. Hi, I tried to use this container but I get an error when I point my browser at it. Do you have an idea of what's happening?
  17. I use nginx as a reverse proxy for the Unraid UI
  18. Thank you for this wonderful software, it's like magic. I have a minor problem with the port number: using the default port (6237), when I click the "Open Web UI" link it tries to open port 6238. If I choose 6238 as the UI port, it tries to open 6239, and so on. I have to type the port number in the browser manually, which isn't difficult, just annoying. Unbalance version 5.6.4. Regards
  19. I tested the new feature and I have a problem: the server appears as online in the Unraid web UI but offline in the "My Servers" section of the Unraid website. I have to type "unraid-api restart" to make it work, and after some hours the problem appears again.
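Until the root cause is found, that restart can be automated. A blunt stopgap sketch: schedule the same `unraid-api restart` command from the post via cron (the six-hour interval is an arbitrary assumption; adjust to taste):

```shell
# Append a periodic restart of the flaky API to root's crontab as a stopgap
(crontab -l 2>/dev/null; echo "0 */6 * * * unraid-api restart") | crontab -
```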