JorgeB

Everything posted by JorgeB

  1. I didn't notice the errors on disk2 before, those are unrelated to the format problem, you should run an extended SMART test on that disk.
  2. It does suggest some compatibility issue with the newer kernel but with nothing logged it's difficult to say for sure, look for a BIOS update and/or try 6.11.3, or the next release.
  3. Correct setting is cache=yes, you can turn on the GUI help for more info.
  4. It's a known issue with v6.11.2, update to v6.11.3 and re-format the disks.
  5. You still have appdata in the array, move everything back to cache.
  6. Set the share(s) you want moved to cache=yes, disable the VM service and run the mover.
  7. If xfs_repair -v asks for -L, use it.
  8. rootfs is full, this will cause all sorts of problems since the OS needs it to run. Check all your mappings; anything writing anywhere other than /boot or /mnt/user (or the disk paths) will be writing to RAM.
  9. Disk dropped offline, if it keeps doing that ddrescue also won't work, but you can try, and yes, it works with any filesystem, encrypted or not.
  10. Enable the syslog server and post that if it happens again.
  11. Parity didn't come back online, like mentioned you should power cycle the server, just rebooting is usually not enough, and if power cycling also doesn't do it check/replace cables.
  12. Give us an example with the logs showing that.
  13. UD supports mounting encrypted disks, if using a different password than the array you can set it in the settings.
  14. The docker image can easily be recreated; what's important is the appdata folder, make sure you have a backup of that or move/copy it to the array.
  15. Reboot to clear the logs and post new diags, also a screenshot from "Shares" -> "Compute All".
  16. There were read errors on parity during disk1 rebuild, disk looks OK but you can run an extended SMART test to confirm.
  17. You need to update to Unraid v6.11.3 first, there's a problem with v6.11.2 partitioning >2TB devices, after upgrading repeat the procedure above.
  18. There's no need to do this, you only need to change the boot options in the board BIOS, legacy boot is always enabled in Unraid.
  19. Nov 13 09:27:06 Dwigt kernel: BUG: unable to handle page fault for address: ffffffff81fbb972
      Nov 13 09:27:06 Dwigt kernel: #PF: supervisor write access in kernel mode
      Nov 13 09:27:06 Dwigt kernel: #PF: error_code(0x0003) - permissions violation
      Nov 13 09:27:06 Dwigt kernel: PGD 400e067 P4D 400e067 PUD 400f063 PMD 132eff063 PTE 8000000003fbb061
      Nov 13 09:27:06 Dwigt kernel: Oops: 0003 [#1] PREEMPT SMP PTI
      Nov 13 09:27:06 Dwigt kernel: CPU: 2 PID: 15 Comm: rcu_preempt Tainted: P O 5.19.9-Unraid #1
      Nov 13 09:27:06 Dwigt kernel: Hardware name: Gigabyte Technology Co., Ltd. Z87X-UD4H/Z87X-UD4H-CF, BIOS F9 03/18/2014
      Nov 13 09:27:06 Dwigt kernel: RIP: 0010:rcu_gp_kthread+0x3d/0x14d
      Nov 13 09:27:06 Dwigt kernel: Code: 48 89 44 24 18 31 c0 65 48 8b 1c 25 c0 bb 01 00 48 8b 15 d6 38 0c 01 48 8b 35 ff 9b fe 00 48 8b 3d 58 9d fe 00 e8 48 ab ff ff <66> c7 05 1c 9c fe 00 01 00 66 8b 05 13 9c fe 00 a8 01 75 44 48 8d
      Nov 13 09:27:06 Dwigt kernel: RSP: 0018:ffffc9000009fef0 EFLAGS: 00010286
      Nov 13 09:27:06 Dwigt kernel: RAX: 0000000080000000 RBX: ffff8881001f5e80 RCX: 0000000000000000
      Nov 13 09:27:06 Dwigt kernel: RDX: ffffffff81ebaeca RSI: 0000000004198534 RDI: ffffffff820bbb08
      Nov 13 09:27:06 Dwigt kernel: RBP: ffff888100141a80 R08: 0000000000000000 R09: ffff88882f32c070
      Nov 13 09:27:06 Dwigt kernel: R10: 0000000000000000 R11: 0000000000000019 R12: ffffc9000002fdb8
      Nov 13 09:27:06 Dwigt kernel: R13: ffffffff810d1d10 R14: 0000000000000000 R15: ffff8881001f5e80
      Nov 13 09:27:06 Dwigt kernel: FS: 0000000000000000(0000) GS:ffff88882f300000(0000) knlGS:0000000000000000
      Nov 13 09:27:06 Dwigt kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
      Nov 13 09:27:06 Dwigt kernel: CR2: ffffffff81fbb972 CR3: 000000000400a001 CR4: 00000000001726e0
      Nov 13 09:27:06 Dwigt kernel: Call Trace:
      Nov 13 09:27:06 Dwigt kernel: <TASK>
      Nov 13 09:27:06 Dwigt kernel: kthread+0xe7/0xef
      Nov 13 09:27:06 Dwigt kernel: ? kthread_complete_and_exit+0x1b/0x1b
      Nov 13 09:27:06 Dwigt kernel: ret_from_fork+0x22/0x30
      Nov 13 09:27:06 Dwigt kernel: </TASK>
      Nov 13 09:27:06 Dwigt kernel: Modules linked in: tcp_diag udp_diag inet_diag af_packet nvidia_uvm(PO) xt_nat veth ipvlan nf_conntrack_netlink nfnetlink xfrm_user xfrm_algo xt_addrtype br_netfilter xt_CHECKSUM xt_conntrack ipt_REJECT nf_reject_ipv4 xt_tcpudp ip6table_mangle ip6table_nat iptable_mangle vhost_net tun vhost vhost_iotlb tap xfs md_mod it87 hwmon_vid efivarfs iptable_nat xt_MASQUERADE nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 wireguard curve25519_x86_64 libcurve25519_generic libchacha20poly1305 chacha_x86_64 poly1305_x86_64 ip6_udp_tunnel udp_tunnel libchacha ip6table_filter ip6_tables iptable_filter ip_tables x_tables bridge stp llc bonding tls ipv6 nvidia_drm(PO) nvidia_modeset(PO) x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel i915 kvm nvidia(PO) iosf_mbi drm_buddy i2c_algo_bit crct10dif_pclmul ttm crc32_pclmul crc32c_intel ghash_clmulni_intel drm_display_helper aesni_intel crypto_simd mxm_wmi cryptd rapl drm_kms_helper intel_cstate intel_uncore drm i2c_i801 i2c_smbus ahci
      Nov 13 09:27:06 Dwigt kernel: libahci e1000e intel_gtt agpgart input_leds led_class cp210x i2c_core usbserial syscopyarea sysfillrect sysimgblt fb_sys_fops thermal fan button video wmi backlight unix
      Nov 13 09:27:06 Dwigt kernel: CR2: ffffffff81fbb972
      Nov 13 09:27:06 Dwigt kernel: ---[ end trace 0000000000000000 ]---
      There's this crash, it's not clear to me what caused it. One thing you can try is to boot the server in safe mode with all docker/VMs disabled and let it run as a basic NAS for a few days; if it still crashes it's likely a hardware problem, if it doesn't, start turning the other services back on one by one.
  20. Like mentioned, the disk was already disabled at boot, i.e., it was either already disabled or it got disabled during the previous shutdown, but without diags showing what happened I can't really say why, other than that it almost certainly wasn't the update. If it happens again, see if you can grab the diags before rebooting.
  21. You can also do that, but add a trailing slash to the source path or it will create a folder called old-disk-12 on the destination: rsync -av /mnt/disks/old-disk-12/ /mnt/disk12
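The extended SMART test recommended in tips 1 and 16 can also be started from a terminal with smartmontools; /dev/sdb below is only a placeholder for the actual device of the disk in question:

```shell
# Start an extended (long) self-test; it runs in the background and the
# disk stays usable while it works. /dev/sdb is a placeholder.
smartctl -t long /dev/sdb
# Once the estimated runtime has passed, review the self-test log
# near the bottom of the full report:
smartctl -a /dev/sdb
```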
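For the xfs_repair advice in tip 7, a typical sequence looks like this, run with the array started in maintenance mode; /dev/md1 is a placeholder for the affected array disk:

```shell
# Check/repair the filesystem; -v prints verbose progress.
xfs_repair -v /dev/md1
# Only if the run above aborts and explicitly asks for -L: zeroing the
# log discards the last in-flight metadata updates, so it's a last resort.
xfs_repair -L /dev/md1
```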
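For the full rootfs in tip 8, a quick way to confirm the problem and look for the runaway writer from a terminal (the paths checked below are just the usual suspects, not an exhaustive list):

```shell
# Show how full the root filesystem is (on Unraid it lives in RAM):
df -h /
# Check common locations for runaway writes; anything large here that
# isn't under /boot or /mnt is consuming RAM:
du -sh /var/log /tmp 2>/dev/null
```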
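If you do attempt the ddrescue clone from tip 9, the usual invocation is along these lines; /dev/sdX (failing source) and /dev/sdY (destination) are placeholders, and the mapfile is what lets the copy resume where it left off if the source drops offline again:

```shell
# Clone the failing disk sector-by-sector; -f is required when the
# destination is a whole device. Keep the mapfile somewhere persistent.
ddrescue -f /dev/sdX /dev/sdY /boot/ddrescue.map
```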
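The trailing-slash behaviour from tip 21 is easy to verify with throwaway directories (the /tmp paths below are made up for the demo):

```shell
# Set up a small source tree and two destinations:
mkdir -p /tmp/rsync-demo/src/sub /tmp/rsync-demo/dst1 /tmp/rsync-demo/dst2
touch /tmp/rsync-demo/src/sub/file.txt

# No trailing slash: the directory itself is copied -> dst1/src/sub/file.txt
rsync -a /tmp/rsync-demo/src /tmp/rsync-demo/dst1

# Trailing slash: only the contents are copied -> dst2/sub/file.txt
rsync -a /tmp/rsync-demo/src/ /tmp/rsync-demo/dst2
```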