Everything posted by JorgeB

  1. Create checksums before the transfer, then verify them after.
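     A minimal way to do that from the console, assuming the source is a hypothetical /mnt/user/share and the destination /mnt/disks/backup (both paths are just examples):

         # create checksums with relative paths before the transfer
         cd /mnt/user/share
         find . -type f -exec md5sum {} + > /boot/checksums.md5
         # after the transfer, verify against the destination copy
         cd /mnt/disks/backup
         md5sum -c /boot/checksums.md5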
  2. UDMA CRC errors are a connection issue, usually the SATA cable, so start by replacing that. Note that the error count doesn't reset, but as long as it doesn't keep increasing the problem is solved.
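     You can watch the raw value of SMART attribute 199 to confirm the count has stopped increasing (sdX is a placeholder for the affected device):

         smartctl -A /dev/sdX | grep -i crc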
  3. Btrfs is more susceptible to hardware issues, so having recurring issues with it (and now without it) could mean a hardware problem. Start by running memtest; if that doesn't find anything, another thing you can try is to boot the server in safe mode with all dockers/VMs disabled and let it run as a basic NAS for a few days. If it still crashes it's likely a hardware problem; if it doesn't, start turning on the other services one by one.
  4. Onboard SATA is set to IDE; set it to AHCI. That by itself shouldn't cause disk errors, but it's not good for performance/reliability, and issues with one disk can affect the slave or master on the same channel.
  5. The UD SSD is dropping offline, then reconnecting with a new identifier, but since the other one wasn't unmounted it can't mount due to a duplicate UUID. Check cables, or swap it with another drive, or if possible use an onboard SATA port instead so it can also be trimmed.
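     You can confirm the duplicate from the console; blkid prints the filesystem UUID for every partition, so the same UUID listed twice confirms it:

         blkid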
  6. There are read errors on disk5, and with two other disabled disks those can't be correctly emulated. Check/replace cables on disk5 and post new diags after array start.
  7. No NIC is being detected in the second diags. Make sure it's enabled in the board BIOS; if it is, try an add-on NIC.
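     A quick way to check whether the NIC is even visible on the PCI bus (if nothing is listed it's a BIOS/hardware issue rather than a driver one):

         lspci | grep -i ethernet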
  8. You can never just re-add a member to the pool; all data on it will be deleted, as mentioned in the warning. If it happens again post diags, because the best way to handle it is not always the same, it depends on the situation.
  9. That is very slow and unusual; start with a single-stream iperf test to check network bandwidth.
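     A basic single-stream test, assuming iperf3 is installed on both ends (the IP is a placeholder for the server):

         # on the server
         iperf3 -s
         # on the client; a single stream is the default (-P 1)
         iperf3 -c 192.168.1.100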
  10. You can try this. Another thing you can try is to boot the server in safe mode with all dockers/VMs disabled and let it run as a basic NAS for a few days; if it still crashes it's likely a hardware problem, if it doesn't, start turning on the other services one by one.
  11. The SAS2LP hasn't been recommended for a long time; they are known to drop disks for no reason. Though it's strange that it's always the same one, I'd still recommend replacing it with an LSI.
  12. You mentioned you had backups; just format that disk and restore from them. Parity remains in sync after formatting a disk.
  13. NVMe device is dropping offline:

          Jan 9 12:50:22 Unraid kernel: nvme nvme1: Device not ready; aborting reset, CSTS=0x1
          Jan 9 12:50:22 Unraid kernel: nvme nvme1: Removing after probe failure status: -19
          Jan 9 12:50:53 Unraid kernel: nvme nvme1: Device not ready; aborting reset, CSTS=0x1

      Look for a BIOS update. Some NVMe devices also have issues with power states on Linux, so this can sometimes help: on the main GUI page click on the flash drive, scroll down to "Syslinux Configuration", make sure it's set to "menu view" (on the top right) and add this to your default boot option, after "append" and before "initrd=/bzroot":

          nvme_core.default_ps_max_latency_us=0

      Reboot and see if it makes a difference.
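      With that change the default boot entry would look something like this (the label/kernel lines are the stock Unraid entry; only the append line changes):

          label Unraid OS
            menu default
            kernel /bzimage
            append nvme_core.default_ps_max_latency_us=0 initrd=/bzroot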
  14. Go to Settings -> NFS -> Tunable (fuse_remember) and remove "antagon" from there; it can only be a number, default is "330".
  15. No, but it's normal with NetApp enclosures. You'd need to do a new config, or use an enclosure without that issue.
  16. There's filesystem corruption on disk12, but it might not be fixable on a failing disk, which is why I suggested ddrescue.
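      A minimal ddrescue invocation, cloning the failing disk to an equal-size or larger replacement (sdX is the failing source, sdY the destination; the mapfile lets the copy resume if interrupted):

          ddrescue -f /dev/sdX /dev/sdY /boot/ddrescue.map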
  17. You can click on any btrfs filesystem and this is one of the options. There's no GUI option for email in the end though.
  18. When you re-added the dropped device it was wiped:

          Jan 8 12:58:33 MEDiiiA emhttpd: shcmd (210): /sbin/wipefs -a /dev/sdf1
          Jan 8 12:58:35 MEDiiiA root: /dev/sdf1: 8 bytes were erased at offset 0x00010040 (btrfs): 5f 42 48 52 66 53 5f 4d

      There would have been a warning on the GUI that all data on that device would be deleted. You can try this to recover the superblock with the array stopped:

          btrfs-select-super -s 1 /dev/sdf1

      Then start the array. Most likely it will still be unmountable since the other device was re-balanced, but maybe you can then recover at least some data manually using the options in the FAQ.
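      If it stays unmountable, one of the usual options is btrfs restore, which copies whatever it can still read to another location without mounting the filesystem (the destination path is just an example):

          btrfs restore -v /dev/sdf1 /mnt/disk1/recovery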
  19. Correct. This is correct: only scrub can be done with the fs online; btrfs check should only be done without the repair option. Usually not needed as a regular operation.
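      For reference, the corresponding commands (the pool mount point and device are placeholders):

          # scrub runs on the mounted filesystem
          btrfs scrub start /mnt/cache
          btrfs scrub status /mnt/cache
          # check runs against the unmounted device; read-only by default, avoid --repair
          btrfs check /dev/sdX1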
  20. I assume the SSD is on the onboard controller? Strangely one of the ports is being detected as a dummy port, i.e. not active/working:

          Jan 8 07:54:37 Tower kernel: ata1: SATA max UDMA/133 abar m2048@0xf7e1a000 port 0xf7e1a100 irq 25
          Jan 8 07:54:37 Tower kernel: ata2: SATA max UDMA/133 abar m2048@0xf7e1a000 port 0xf7e1a180 irq 25
          Jan 8 07:54:37 Tower kernel: ata3: SATA max UDMA/133 abar m2048@0xf7e1a000 port 0xf7e1a200 irq 25
          Jan 8 07:54:37 Tower kernel: ata4: SATA max UDMA/133 abar m2048@0xf7e1a000 port 0xf7e1a280 irq 25
          Jan 8 07:54:37 Tower kernel: ata5: SATA max UDMA/133 abar m2048@0xf7e1a000 port 0xf7e1a300 irq 25
          Jan 8 07:54:37 Tower kernel: ata6: DUMMY
  21. With v6.8, if the array was started manually the VMs didn't auto-start, which was useful to, for example, correct any pass-through config issues. With v6.9-rc2 VMs still auto-start after a manual array start; not sure if this was done on purpose or not, but it's IMHO a very bad idea.
  22. Correction: they do on v6.9, they don't on v6.8. Not sure if this was changed on purpose, but if it was, it's IMHO a bad idea.
  23. VMs won't autostart after manual array start.
  24. Disable array auto-start by editing disk.cfg on your flash drive (config/disk.cfg) and changing startArray="yes" to "no"; if you then start the array manually, VMs won't auto-start.
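      If you prefer to do it from the console, the flash drive is mounted at /boot, so the same edit is:

          sed -i 's/startArray="yes"/startArray="no"/' /boot/config/disk.cfg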