
JorgeB (Moderator) · Posts: 67,808 · Days Won: 708

Everything posted by JorgeB

  1. Don't remember any recent issues with device removal; there were some with device replacement. Just check that the pool is using a single profile for data and metadata, and that it's one with redundancy, or post the diags so we can check.
  2. Correct. For array devices a btrfs scrub can only detect data checksum errors; it can't repair them since there's no redundancy, and parity can't help with data. It can repair metadata errors, since metadata is redundant.
  3. Update to 6.10.3 and they will work.
  4. Respect the maximum officially supported RAM speed for your config.
  5. Click the minus to hide the partition. You can also set partitions to show or hide by default for each device by clicking on its settings. Next time, or if you have more questions, please use the existing plugin support thread.
  6. There is a problem with the flash drive, try re-formatting it or replacing it and then post new diags after array start.
  7. Enable the syslog server and post that and the full diagnostics after the next crash.
  8. Depends on what the dockers are doing; just having the docker service enabled should not slow things down, but if some container is reading or writing a disk it will.
  9. It's the transid; it must be the same for all devices in a pool that is in sync. Please post new diags so I can better answer the other questions.
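The transid comparison above can be sketched as follows. This is a minimal illustration, assuming superblock text in the format printed by `btrfs inspect-internal dump-super` (the sample excerpts and the out-of-sync values are invented for the example):

```python
import re

def superblock_generation(dump_super_output: str) -> int:
    """Extract the 'generation' (transid) field from the text output of
    `btrfs inspect-internal dump-super /dev/sdX1`."""
    m = re.search(r"^generation\s+(\d+)", dump_super_output, re.MULTILINE)
    if m is None:
        raise ValueError("no generation field found")
    return int(m.group(1))

# Illustrative superblock excerpts from two members of the same pool:
dev1 = "csum_type\t\t0 (crc32c)\ngeneration\t\t123456\nroot\t\t30441472\n"
dev2 = "csum_type\t\t0 (crc32c)\ngeneration\t\t123451\nroot\t\t30441472\n"

# Devices in a healthy pool must report the same generation;
# a mismatch means one device missed writes and is out of sync.
print(superblock_generation(dev1) == superblock_generation(dev2))  # False
```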
  10. top - 20:02:41 up 2 days, 11:58, 0 users, load average: 91.94, 86.80, 80.17
      Tasks: 493 total, 3 running, 490 sleeping, 0 stopped, 0 zombie
      %Cpu(s): 3.8 us, 47.7 sy, 0.0 ni, 0.8 id, 46.2 wa, 0.0 hi, 1.5 si, 0.0 st
      MiB Mem : 128561.5 total, 789.1 free, 26788.8 used, 100983.6 buff/cache
      MiB Swap: 0.0 total, 0.0 free, 0.0 used. 99438.7 avail Mem

      PID   USER     PR NI VIRT    RES   SHR S %CPU  %MEM TIME+     COMMAND
      23186 imagema+ 20 0  2695560 65296 29028 S 605.9 0.0 104:44.36 smbd

      It's not a bad reading, though keep in mind that the dashboard includes IO wait, which doesn't really load the CPU. Still, the load average is very high, as is the CPU usage by smbd; what are you using to read from the disks?
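The IO-wait distinction in that `top` output can be made explicit by splitting the %Cpu(s) line into its fields. A minimal parsing sketch (the regex and the helper name are illustrative, the sample line is the one from the post):

```python
import re

def cpu_breakdown(cpu_line: str) -> dict:
    """Parse top's '%Cpu(s):' line into a {field: percent} dict."""
    fields = re.findall(r"([\d.]+)\s*(us|sy|ni|id|wa|hi|si|st)", cpu_line)
    return {name: float(value) for value, name in fields}

line = "%Cpu(s): 3.8 us, 47.7 sy, 0.0 ni, 0.8 id, 46.2 wa, 0.0 hi, 1.5 si, 0.0 st"
stats = cpu_breakdown(line)

# 'wa' (IO wait) counts toward load average, but it is time spent
# waiting on disks, not time actually executing on the CPU.
busy = stats["us"] + stats["sy"] + stats["ni"] + stats["hi"] + stats["si"] + stats["st"]
print(f"actually busy: {busy:.1f}%, waiting on IO: {stats['wa']:.1f}%")
```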
  11. It's not logged as a disk problem and the disk looks healthy, so it's most likely a power/connection problem. If the emulated disk is mounting and the contents look correct, I would recommend checking/replacing the cables/slot first, then rebuilding on top. https://wiki.unraid.net/Manual/Storage_Management#Rebuilding_a_drive_onto_itself
  12. JorgeB

    NVME error

    That was just for you to test with the latest release; the report has already been re-opened since the issue persists.
  13. JorgeB

    NVME error

    Changed Status to Open
  14. Dec 11 08:40:14 Tower kernel: macvlan_broadcast+0x10e/0x13c [macvlan]
      Dec 11 08:40:14 Tower kernel: macvlan_process_broadcast+0xf8/0x143 [macvlan]

      Macvlan call traces are usually the result of having dockers with a custom IP address. Upgrading to v6.10 and switching to ipvlan should fix it (Settings -> Docker Settings -> Docker custom network type -> ipvlan; advanced view must be enabled, top right), or see below for more info. https://forums.unraid.net/topic/70529-650-call-traces-when-assigning-ip-address-to-docker-containers/ See also here: https://forums.unraid.net/bug-reports/stable-releases/690691-kernel-panic-due-to-netfilter-nf_nat_setup_info-docker-static-ip-macvlan-r1356/
  15. Good, also a good idea to keep a backup of that file.
  16. Is disk1 mounting now? The original libvirt.img might be there.
  17. Enable the syslog server and post that and the complete diagnostics after the next crash.
  18. https://wiki.unraid.net/The_parity_swap_procedure
  19. https://wiki.unraid.net/Manual/Storage_Management#Rebuilding_a_drive_onto_itself
  20. Still not the correct way of doing it: you must use the md device, or parity will become out of sync. Use: xfs_repair -v /dev/md2
  21. https://wiki.unraid.net/Manual/Changing_The_Flash_Device#What_to_do_if_you_have_no_backup_and_do_not_know_your_disk_assignments
  22. That's not the correct device, see the link above, or just use the GUI.
  23. Jul 6 20:32:52 Tower kernel: BTRFS info (device sdd1): bdev /dev/sdd1 errs: wr 71067533, rd 6954308, flush 124982, corrupt 25433259, gen 3

      This shows that cache1 dropped offline in the past, and by the number of errors it was offline for a long time or multiple times.

      Jul 6 20:33:49 Tower kernel: BTRFS: error (device sdd1) in btrfs_finish_ordered_io:3091: errno=-5 IO failure
      Jul 6 20:33:49 Tower kernel: BTRFS info (device sdd1): forced readonly

      It's unclear why, but the filesystem is failing to get back in sync, so the best bet is to back up and reformat the pool, then see here for better pool monitoring.
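Those per-device error counters can be pulled out of the kernel log line programmatically. A minimal sketch, assuming the "errs:" line format shown in the post (the function name and key names are illustrative):

```python
import re

def btrfs_error_counters(log_line: str) -> dict:
    """Extract the wr/rd/flush/corrupt/gen error counters from a
    'BTRFS info ... errs:' kernel log line."""
    m = re.search(r"errs: wr (\d+), rd (\d+), flush (\d+), corrupt (\d+), gen (\d+)",
                  log_line)
    if m is None:
        raise ValueError("not a btrfs errs line")
    keys = ("write", "read", "flush", "corrupt", "generation")
    return dict(zip(keys, map(int, m.groups())))

line = ("Jul 6 20:32:52 Tower kernel: BTRFS info (device sdd1): bdev /dev/sdd1 "
        "errs: wr 71067533, rd 6954308, flush 124982, corrupt 25433259, gen 3")
counters = btrfs_error_counters(line)

# Any non-zero counter means the device had problems at some point;
# counts this large suggest it was offline for a long time or repeatedly.
print(counters["corrupt"])  # 25433259
```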
  24. Boot in safe mode, enable the syslog server, and post that after it shuts down.