Vr2Io


Everything posted by Vr2Io

  1. A call trace was found in the previous log. Did you run a memory test to prove system stability? Yes .... some posts say newer Intel platforms tend to be unstable with the current Unraid, but I can't verify that because I stay on 9th gen.
  2. Both disks look good ( I have many of the same model, but all over 5 yrs old ). The disk-disable event doesn't seem to fall within the log's time range, so I can't check the reason.
  3. The log has no errors and covers only a fresh boot, so there is no crash info. The system instability seems to have lasted a long time ( according to your old posts ), some thoughts as below :
     - Four memory sticks running at 2666MHz may be a problem; please run a memory test or clock them down, and make sure they are stable enough.
     - With a GPU, 10G NIC, expander and a lot of spinner disks, please ensure the PSU can power them all. Could you test the whole system without the spinner disks, or keep them to a minimum, ideally also removing the HBA and expander, then test the remaining components ...... once everything shows positive, add them back.
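     For context on what clocking down costs: DDR4 peak bandwidth per channel is just the transfer rate times the 8-byte bus width. A minimal sketch of that arithmetic ( the `ddr_bw` helper name is mine, not from the post ):

```shell
# DDR4 peak bandwidth per channel (MB/s) = MT/s * 8 bytes (64-bit bus)
ddr_bw() { echo $(( $1 * 8 )); }

ddr_bw 2666   # the XMP speed in the post -> 21328 MB/s
ddr_bw 2133   # a common lower JEDEC clock -> 17064 MB/s
```

     So dropping from 2666 to 2133 costs roughly 20% of peak bandwidth per channel, usually a fair trade for stability with four sticks populated.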
  4. From the description, the memory sticks look good; they just can't run as a dual-stick set with the current mobo. That's why I suggested you lower the memory clock. These problems are quite common, and RMAing the sticks won't help much.
  5. You can complete the disk migration first and then change to the new platform; there is no need to run two Unraid servers in parallel.
  6. Try disabling XMP, or manually set a lower memory clock rate in the BIOS.
  7. You need to perform the export again.
  8. I suspect the CPU clock rate is too low / the CPU is too weak. The CPU is an Intel Xeon D-1541 @ 2.10GHz, 8 cores / 16 threads.
  9. Not likely XFS / parity related. Please troubleshoot those PCIe / NVMe errors; try re-seating the NVMe or moving it to a different slot.
  10. Likely the existing USB stick doesn't have the legacy boot sector / files. Simply running makebootable.bat ( something like that ) in the USB stick's root folder ( if under Windows ) will solve the problem.
  11. You also need to clear the attribute on those .nfo files and re-export the hash.
  12. Below is the log from the last 4 min; it does not look like a clean shutdown.
      Dec 21 18:47:05 groudon shutdown[13781]: shutting down for system halt
      Dec 21 18:51:13 groudon root: umount: /mnt/disk1: target is busy.
      Dec 21 18:51:13 groudon emhttpd: shcmd (106): exit status: 32
      Dec 21 18:51:13 groudon emhttpd: Retry unmounting disk share(s)...
      To test UPS shutdown of the server, please simulate it first instead of actually cutting UPS power, otherwise you may kill the battery:
      upsmon -c fsd
      Once everything looks fine, then perform a real power-cut test.
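      To confirm the same failure on a saved syslog, grepping for the busy-unmount message is enough. A sketch, using a sample line taken from the log above:

```shell
# detect the unclean-shutdown signature in a saved syslog line
log='Dec 21 18:51:13 groudon root: umount: /mnt/disk1: target is busy.'
if printf '%s\n' "$log" | grep -q 'target is busy'; then
  echo "unclean unmount detected"
fi
```

      On a live box you would grep /var/log/syslog (or the diagnostics zip) the same way; any "target is busy" hit near a shutdown means a process was still holding the disk share.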
  13. That bus was 32-bit at 33MHz, about 133MB/s; doubling the clock rate to 66MHz doubles the bandwidth to 266MB/s, and further doubling the bus width to 64-bit gives 533MB/s. PE1*1 was the name; it is probably PCIe 3.0. On PCI at 32-bit / 33MHz you will get ~100MB/s of real throughput.
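      The arithmetic above can be sketched directly: theoretical PCI bandwidth is the bus width in bytes times the clock. ( The `pci_bw` helper name is mine; the clock is really 33.33MHz, hence the quoted ~133MB/s versus the 132 below. )

```shell
# PCI theoretical bandwidth (MB/s) = (bus_width_bits / 8) * clock_MHz
pci_bw() { echo $(( $1 / 8 * $2 )); }

pci_bw 32 33   # classic PCI        -> 132 (~133 MB/s)
pci_bw 32 66   # doubled clock      -> 264 (~266 MB/s)
pci_bw 64 66   # doubled width too  -> 528 (~533 MB/s)
```

      The ~100MB/s real-world figure for classic PCI is simply that theoretical 133MB/s minus bus protocol overhead and sharing with other devices on the same bus.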
  14. I hadn't noticed the official doc unmounts the target disk, my bad. It may be best if someone could try to reproduce the problem. It seems the problem relates to umount ( the umount not completing ); some other posts also point to this. Anyway, as the official doc mentions, starting the array in maintenance mode could improve disk-zeroing performance, and it also largely avoids the umount-failure issue.
  15. I don't think unmounting an array member disk is a correct step. In this case, zeroing a member disk also causes a parity update, so a slowdown is expected.
  16. This shouldn't happen. The post below has a script that could help you identify what has been writing to the docker image / folder. You may map container folders anywhere you like.
  17. It doesn't have to be a pool; a pool could lose everything if it runs out of redundancy. You may try whether a no-parity array solves the problem.
  18. The call trace looks docker-network related; please try using IPVLAN. And why are there so many docker "vethxxxxxx" messages in the log? For example, mine has only a few; it should only record docker start / stop / update:
      dmesg -T | grep veth
      [Sat Dec 2 11:33:29 2023] eth0: renamed from vethddce601
      [Sat Dec 2 11:33:41 2023] eth0: renamed from veth4c48c7a
      [Sat Dec 2 11:33:46 2023] eth0: renamed from veth8140ad3
      [Sat Dec 2 11:34:19 2023] eth0: renamed from vethab126d3
      [Sat Dec 2 11:34:27 2023] eth0: renamed from vetha49f590
      [Sat Dec 2 11:34:34 2023] eth0: renamed from veth7566107
      [Sat Dec 2 11:34:41 2023] eth0: renamed from veth9bb2973
      [Sat Dec 2 11:35:37 2023] eth0: renamed from veth79004cd
      [Wed Dec 20 12:53:13 2023] veth9bb2973: renamed from eth0
      [Wed Dec 20 12:53:13 2023] eth0: renamed from veth91cf68b
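      To quantify the veth churn, counting the rename events in captured dmesg output is enough. A sketch using two sample lines from above ( on a live box, `dmesg -T | grep -c veth` gives the same count ):

```shell
# count veth rename events in captured dmesg output
sample='[Sat Dec 2 11:33:29 2023] eth0: renamed from vethddce601
[Wed Dec 20 12:53:13 2023] veth9bb2973: renamed from eth0'
printf '%s\n' "$sample" | grep -c veth   # prints 2
```

      A handful of events per container start / stop is normal; hundreds in a short window usually means a container is crash-looping.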
  19. As you have a RAID-Z2 pool, it allows two disks to fail or go missing, and you have 12TB of data, which can't fit on one 10TB disk. You shouldn't destroy the RAID-Z2 pool; just clean up two disks under Unraid and format them, then boot back into TrueNAS, confirm it can still mount the pool with both disks removed, and copy all the data to those disks.
  20. Did you swap the cable / port for eth0 ( port 21 ) with one of ( ports 22 - 24 ) to rule out an actual hardware problem? ( if the cable tester hasn't already caught it )
  21. Dec 19 19:35:55 Tower kernel: r8169 0000:08:00.0: no MMIO resource found
      Dec 19 19:35:55 Tower kernel: r8169 0000:0a:00.0: unknown chip XID 000, contact r8169 maintainers (see MAINTAINERS file)
      Try disabling IOMMU in the BIOS or on the SYSLINUX CONFIGURATION page.
  22. Sounds interesting, high performance with multiple threads.
  23. I use a Sonoff Dongle-E with Z2M; it has never crashed.