JorgeB

Moderators
  • Posts: 67,812
  • Days Won: 708
Everything posted by JorgeB

  1. After booting with the trial key and a new config (don't copy anything from the old one for now), type 'diagnostics' in the console; the diagnostics will be saved to the flash drive. Then please upload them here.
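As a sketch of the step above (assuming console or SSH access to the Unraid server, and the usual flash mount at /boot):

```shell
# 'diagnostics' is Unraid's built-in collector; it writes an anonymized
# zip of logs and system info to the flash drive (typically /boot/logs)
diagnostics
# Confirm the archive was created before uploading it to the forum
ls /boot/logs/*diagnostics*.zip
```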
  2. Only eth0 should have a gateway assigned, remove it from the other interfaces.
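A minimal way to verify this from the console (assuming standard Linux tooling, which Unraid ships):

```shell
# Only one default route, via eth0 (or its bridge br0), should be listed.
# If other interfaces (eth1, br1, ...) also appear here, clear the gateway
# field on those interfaces in Settings -> Network Settings.
ip route show default
```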
  3. The cache should mount normally after a reboot; back up anything you need. I'm not sure re-formatting will help with this, but it won't hurt.
  4. Constant crashing is logged; there could be a recent hardware issue, or the hardware doesn't like that kernel. If the above doesn't help, you can try v6.11-rc1, which includes a much newer kernel.
  5. Yes, if they were never reset; more info here.
  6. Since btrfs is detecting data corruption, I suggest starting by running memtest.
  7. In the diags, meminfo.txt shows:

     Part Number: CMK16GX4M2B3000C15
     Rank: 2
  8. A couple of the DIMMs are dual rank, so speed should be set to 1866 MT/s. Also make sure power supply idle control is correctly set: https://forums.unraid.net/topic/46802-faq-for-unraid-v6/?do=findComment&comment=819173
  9. NVMe device dropped offline:

     Jul 26 21:22:50 Tower kernel: nvme nvme0: controller is down; will reset: CSTS=0xffffffff, PCI_STATUS=0x10
     Jul 26 21:22:50 Tower kernel: blk_update_request: I/O error, dev nvme0n1, sector 627533784 op 0x0:(READ) flags 0x80700 phys_seg 1 prio class 0
     Jul 26 21:22:50 Tower kernel: nvme 0000:02:00.0: enabling device (0000 -> 0002)
     Jul 26 21:22:50 Tower kernel: nvme nvme0: Removing after probe failure status: -19

     Some NVMe devices have issues with power states on Linux, so this can sometimes help: on the main GUI page click on the flash drive, scroll down to "Syslinux Configuration", make sure it's set to "menu view" (top right), and add this to your default boot option, after "append initrd=/bzroot":

     nvme_core.default_ps_max_latency_us=0

     e.g.:

     append initrd=/bzroot nvme_core.default_ps_max_latency_us=0

     Reboot and see if it makes a difference.

     P.S. the server is running out of RAM; you should limit resources.
  10. Unfortunately there's nothing relevant logged, which could indicate a hardware issue. One thing you can try is to boot the server in safe mode with all docker/VMs disabled and let it run as a basic NAS for a few days; if it still crashes, it's likely a hardware problem, and if it doesn't, start turning on the other services one by one.
  11. Btrfs went read-only because it detected corruption before writing the data to the device:

      Jul 26 12:51:19 unraid kernel: BTRFS error (device nvme0n1p1): block=230151340032 write time tree block corruption detected

      This is usually bad RAM, or something else causing kernel memory corruption.

      P.S. I also saw some macvlan call traces; switching to ipvlan should fix that (Settings -> Docker Settings -> Docker custom network type -> ipvlan; advanced view must be enabled, top right).
  12. Looks like a device problem; it's not giving a complete SMART report:

      >> Terminate command early due to bad response to IEC mode page
  13. Nothing relevant is logged; it could be this issue:
  14. You should create a new post in the KVM section, explain the problem and post the diagnostics and the VM XML.
  15. That's not possible; you can run Windows as a VM inside Unraid, or dual boot one or the other.
  16. Please don't create multiple threads about the same thing, see my reply in your other post.
  17. Best bet is to post in the existing container support thread:
  18. It's strange that docker would cause crashing during boot, but I won't say it's impossible, assuming the crashing happens after array start.
  19. Enable the syslog server and post that together with the diagnostics after a crash.
  20. Try renaming network.cfg and rebooting to reset the network settings. It won't help if the issue is with the docker settings, but it's still worth trying, and you can revert back to the old settings.
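A sketch of that rename from the console, assuming the flash drive is mounted at /boot as on a standard Unraid install:

```shell
# Keep the old file so the previous settings can be restored later
mv /boot/config/network.cfg /boot/config/network.cfg.bak
# Default network settings are regenerated on the next boot
reboot
```

To revert, move the .bak file back to network.cfg and reboot again.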
  21. Looks like a device problem; if possible, try connecting it to the onboard Intel SATA controller, where errors are easier to read.
  22. Start by posting the diagnostics.
  23. https://forums.unraid.net/topic/90586-virbr0-and-br0-diferences-and-how-to-use-them/
  24. If the issue followed the disk it's likely a disk problem.
  25. No personal experience with that model, but it's unlikely that a RAID controller will correctly support spin down.