JorgeB (Moderators) · Posts: 67,737 · Days Won: 708
Everything posted by JorgeB

  1. This looks more like a connection/power problem; there are also multiple timeout errors with the cache device, so make sure to check that as well.
  2. The GUI and df, for example, will show the correct stats for that pool; not all tools show correct stats with btrfs. There are currently about 198GiB (or 213GB) used.
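The GiB/GB difference in the numbers above is just units: GiB counts in powers of 2, GB in powers of 10. A quick sanity check of that arithmetic:

```shell
# 198 GiB in bytes (1 GiB = 2^30 bytes), then expressed in decimal GB
# (1 GB = 10^9 bytes); 198 GiB works out to roughly 213 GB.
bytes=$((198 * 1024 * 1024 * 1024))
echo "$bytes bytes"
echo "$((bytes / 1000000000)) GB (truncated)"
```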
  3. Once it detects one error you can stop; test one DIMM at a time.
  4. Yes, that's the default, but it can be changed: https://forums.unraid.net/topic/46802-faq-for-unraid-v6/?do=findComment&comment=480421
  5. You can just power off and replace the devices. When you power back on, that pool will be empty; assign the new device, start the array, and format the new device.
  6. Btrfs is detecting data corruption:
     Jan 16 08:50:46 ZEUS kernel: BTRFS info (device nvme0n1p1): bdev /dev/nvme0n1p1 errs: wr 0, rd 0, flush 0, corrupt 52, gen 0
     Jan 16 08:50:46 ZEUS kernel: BTRFS info (device nvme0n1p1): bdev /dev/nvme1n1p1 errs: wr 0, rd 0, flush 0, corrupt 56, gen 0
     This is usually a RAM issue; start by running memtest.
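Those kernel lines carry per-device error counters, and a grep makes the corrupt counts easy to pick out (log text reproduced from the post above; the live-system mount point shown in the comment is an assumption):

```shell
# Pull the 'corrupt N' counter out of one of the kernel lines quoted above.
line='Jan 16 08:50:46 ZEUS kernel: BTRFS info (device nvme0n1p1): bdev /dev/nvme0n1p1 errs: wr 0, rd 0, flush 0, corrupt 52, gen 0'
echo "$line" | grep -o 'corrupt [0-9]*'
# On the server itself the same counters can be read with btrfs-progs
# (mount point /mnt/cache is an assumption):
#   btrfs device stats /mnt/cache
```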
  7. That's a pool, not an array, and you mentioned an array disk; for pools you don't need to do a new config.
  8. You need to do a new config (tools -> new config) to be able to assign a different device.
  9. Jan 13 06:47:17 ServerPC kernel: macvlan_broadcast+0x10e/0x13c [macvlan]
     Jan 13 06:47:17 ServerPC kernel: macvlan_process_broadcast+0xf8/0x143 [macvlan]
     Macvlan call traces are usually the result of having dockers with a custom IP address. Upgrading to v6.10 and switching to ipvlan might fix it (Settings -> Docker Settings -> Docker custom network type -> ipvlan; advanced view must be enabled, top right), or see below for more info.
     https://forums.unraid.net/topic/70529-650-call-traces-when-assigning-ip-address-to-docker-containers/
     See also here:
     https://forums.unraid.net/bug-reports/stable-releases/690691-kernel-panic-due-to-netfilter-nf_nat_setup_info-docker-static-ip-macvlan-r1356/
  10. The disk was already disabled at boot so we can't see what happened. SMART looks fine; I would recommend swapping/replacing cables/slot to rule that out and rebuilding.
  11. Docker image is corrupt, delete and re-create: https://forums.unraid.net/topic/57181-docker-faq/?do=findComment&comment=564309
  12. Unraid disables a disk every time a write to it fails, but it won't disable more disks than there are parity drives, since beyond that it can't emulate and write to those disks; if you had 4 parity drives, all four would get disabled.
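A minimal sketch of that rule (an illustration, not Unraid's actual code): a disk failing a write is only disabled while the count of disabled disks is still below the number of parity drives, because past that point the array could no longer emulate it:

```shell
# Hypothetical scenario: 2 parity drives, 3 data disks all failing writes.
parity=2
disabled=0
for disk in disk1 disk2 disk3; do
    if [ "$disabled" -lt "$parity" ]; then
        disabled=$((disabled + 1))
        echo "$disk: disabled (can still be emulated by parity)"
    else
        echo "$disk: left enabled (no parity left to emulate it)"
    fi
done
```

With two parity drives, only the first two failing disks get disabled; the third keeps writing (and erroring) rather than being dropped.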
  13. Filesystem      Size  Used Avail Use% Mounted on
      /dev/md1        3.7T  986G  2.7T  27% /mnt/disk1
      /dev/md2        3.7T  929G  2.8T  25% /mnt/disk2
      They are mounting and they have some data. Is this about what you expected in terms of used space? If yes, you can repeat for the other ones, two by two.
  14. I know, but initially they weren't mounting, as the screenshot you posted shows.
  15. The previous diags are from after the array was stopped; post new diags after starting the array in normal mode.
  16. According to the diags, disks 1 and 2 mounted, or did you format them? Was the screenshot maybe taken in maintenance mode?
      Jan 20 13:57:00 Tower emhttpd: shcmd (6033): xfs_growfs /mnt/disk1
      Jan 20 13:57:00 Tower kernel: xfs filesystem being mounted at /mnt/disk1 supports timestamps until 2038 (0x7fffffff)
      Jan 20 13:57:00 Tower root: meta-data=/dev/md1 isize=512 agcount=4, agsize=244188659 blks
      Jan 20 13:57:00 Tower root: = sectsz=512 attr=2, projid32bit=1
      Jan 20 13:57:00 Tower root: = crc=1 finobt=1, sparse=1, rmapbt=0
      Jan 20 13:57:00 Tower root: = reflink=1
      Jan 20 13:57:00 Tower root: data = bsize=4096 blocks=976754633, imaxpct=5
      Jan 20 13:57:00 Tower root: = sunit=0 swidth=0 blks
      Jan 20 13:57:00 Tower root: naming =version 2 bsize=4096 ascii-ci=0, ftype=1
      Jan 20 13:57:00 Tower root: log =internal log bsize=4096 blocks=476930, version=2
      Jan 20 13:57:00 Tower root: = sectsz=512 sunit=0 blks, lazy-count=1
      Jan 20 13:57:00 Tower root: realtime =none extsz=4096 blocks=0, rtextents=0
      Jan 20 13:57:00 Tower emhttpd: shcmd (6034): mkdir -p /mnt/disk2
      Jan 20 13:57:00 Tower emhttpd: shcmd (6035): mount -t xfs -o noatime /dev/md2 /mnt/disk2
      Jan 20 13:57:00 Tower kernel: XFS (md2): Mounting V5 Filesystem
      Jan 20 13:57:00 Tower kernel: XFS (md2): Ending clean mount
      Jan 20 13:57:01 Tower kernel: xfs filesystem being mounted at /mnt/disk2 supports timestamps until 2038 (0x7fffffff)
      Jan 20 13:57:01 Tower emhttpd: shcmd (6036): xfs_growfs /mnt/disk2
      Jan 20 13:57:01 Tower root: meta-data=/dev/md2 isize=512 agcount=4, agsize=244188659 blks
      Jan 20 13:57:01 Tower root: = sectsz=512 attr=2, projid32bit=1
      Jan 20 13:57:01 Tower root: = crc=1 finobt=1, sparse=1, rmapbt=0
      Jan 20 13:57:01 Tower root: = reflink=1
      Jan 20 13:57:01 Tower root: data = bsize=4096 blocks=976754633, imaxpct=5
      Jan 20 13:57:01 Tower root: = sunit=0 swidth=0 blks
      Jan 20 13:57:01 Tower root: naming =version 2 bsize=4096 ascii-ci=0, ftype=1
      Jan 20 13:57:01 Tower root: log =internal log bsize=4096 blocks=476930, version=2
      Jan 20 13:57:01 Tower root: = sectsz=512 sunit=0 blks, lazy-count=1
      Jan 20 13:57:01 Tower root: realtime =none extsz=4096 blocks=0, rtextents=0
  17. Start here: https://forums.unraid.net/topic/46802-faq-for-unraid-v6/?do=findComment&comment=819173
  18. Where are you seeing errors? I'm not seeing anything out of the ordinary in the screenshot posted.
  19. According to the bug report it just crashes without spitting anything to the log.
  20. NVMe doesn't share lanes with SATA; this is the problem:
      Jan 19 21:37:36 Tower kernel: ahci 0000:00:17.0: Found 1 remapped NVMe devices.
      Jan 19 21:37:36 Tower kernel: ahci 0000:00:17.0: Switch your BIOS from RAID to AHCI mode to use them.
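A quick way to check for that Intel RST remapping on any system is to search the kernel log for the same message (the message text is taken from the post above; on a live server you would grep dmesg instead of a saved line):

```shell
# Kernel message quoted from the post; on a live system use:
#   dmesg | grep -i 'remapped nvme'
msg='Jan 19 21:37:36 Tower kernel: ahci 0000:00:17.0: Found 1 remapped NVMe devices.'
echo "$msg" | grep -c 'remapped NVMe'
```

If it matches, the NVMe drive is hidden behind the AHCI controller and the fix is switching the BIOS SATA mode from RAID to AHCI, as the second kernel line says.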
  21. Run a correcting check, then run a non-correcting one, without rebooting, and post new diags.