Everything posted by JorgeB

  1. You can check below for a list of recommended controllers.
  2. That suggests those files might be corrupt; post the current syslog.
  3. These ASMedia controllers are two-port SATA controllers (see the lspci sketch after this list for how to gather this output):
     03:00.0 SATA controller [0106]: ASMedia Technology Inc. ASM1062 Serial ATA Controller [1b21:0612] (rev 02)
             Subsystem: ASMedia Technology Inc. ASM1062 Serial ATA Controller [1b21:1060]
             Kernel driver in use: ahci
             Kernel modules: ahci
     04:00.0 SATA controller [0106]: ASMedia Technology Inc. ASM1062 Serial ATA Controller [1b21:0612] (rev 02)
             Subsystem: ASMedia Technology Inc. ASM1062 Serial ATA Controller [1b21:1060]
             Kernel driver in use: ahci
             Kernel modules: ahci
     05:00.0 SATA controller [0106]: ASMedia Technology Inc. ASM1062 Serial ATA Controller [1b21:0612] (rev 02)
             Subsystem: ASMedia Technology Inc. ASM1062 Serial ATA Controller [1b21:1060]
             Kernel driver in use: ahci
             Kernel modules: ahci
     If they have more than two ports, they are using SATA port multipliers (or are connected to an external enclosure with a port multiplier).
  4. After everything is copied you can try to init the log tree. First unmount the filesystem with:
     umount /x
     Then:
     btrfs rescue zero-log /dev/nvme0n1p1
     Start the array normally and see if the pool mounts; if it does and all looks fine nothing more should be needed, but make sure you do regular backups of anything important.
  5. I can't tell which drive it is just from the log, but it was parity in the previous diags, so it likely still is. There are also more ATA errors:
     May 24 22:08:18 Plex kernel: ata9.00: failed to read SCR 1 (Emask=0x40)
     May 24 22:08:18 Plex kernel: ata9.01: failed to read SCR 1 (Emask=0x40)
     May 24 22:08:18 Plex kernel: ata9.02: failed to read SCR 1 (Emask=0x40)
     And these show you're using SATA port multipliers; those should really be avoided.
  6. The syslog is from after a crash; wait for another crash, then post it.
  7. Try this option first, but note that the device was missing the partition:
     mount -o ro,notreelog,nologreplay /dev/nvme0n1p1 /x
  8. Enable this, then post that log after a crash, together with the diagnostics.
  9. Changed Status to Closed
     Changed Priority to Other
  10. You should post the diagnostics; if that's the only NIC and you can't boot in GUI mode, you can get them on the console by typing "diagnostics".
  11. Note that with gigabit you can't get more than around 115MB/s. Diags saved during a transfer might show something, and it's also a good idea to run a single-stream iperf test to check network bandwidth (see the iperf sketch after this list).
  12. See here for some recovery options; the first one to try in this case would be the nologreplay mount option.
  13. Wipe the SSDs with:
      blkdiscard -f /dev/sdX
      Then add them to the pool and re-format (see the lsblk sketch after this list to confirm you have the right device first).
  14. Your disks should be fast enough for gigabit to be the bottleneck when transferring with turbo write; I can transfer to one of my arrays at 200MB/s+ sustained with disks slower than those.
  15. Plug and play? Basically yes, see if the server starts working normally. Disk3 is on an ASMedia controller; replace the cables on that disk.
  16. There are constant ATA errors on disk6; replace the cables and try again.
  17. No need if the share is set to split any directory as required; it will move to the next disk when it hits the share's minimum free space.
  18. Yes, I forgot that; you can't use an existing config with a trial key.
  19. If it's completely full, that's possibly one of the reasons it's crashing; a COW filesystem should never be completely full. See here for some more recovery options, then re-format (the btrfs sketch after this list shows a quick way to check usage).
  20. You can copy super.dat, but that just contains the array assignments; the cache still needs to be re-assigned.
  21. shfs segfaulted; rebooting will bring the shares back.
  22. You just need to re-assign all the previous pool members; order is not important. Also, there can't be an "all data on this device will be deleted at array start" or similar warning next to any of the pool devices.
  23. If there's no data there you just need to re-format.
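
Referenced from item 3: a minimal sketch of how a controller listing like the one quoted there can be gathered, assuming a standard Linux console on the server; the grep pattern is illustrative, not from the original post.

    # List storage controllers with vendor/device IDs, the driver in use and kernel modules
    lspci -nnk | grep -iA3 'sata\|raid'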
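
Referenced from item 11: a minimal single-stream iperf sketch, assuming iperf3 is installed on both ends; the server IP address is a placeholder.

    # On the machine acting as the iperf server
    iperf3 -s
    # On the client, run a single-stream test against that server
    iperf3 -c 192.168.1.10 -P 1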
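
Referenced from item 13: a sketch of confirming the target device before wiping it, since blkdiscard is destructive; sdX is a placeholder for the actual SSD.

    # Identify the SSD by size, model and serial number before discarding anything
    lsblk -o NAME,SIZE,MODEL,SERIAL
    # Then wipe the confirmed device
    blkdiscard -f /dev/sdX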
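
Referenced from item 19: a sketch of checking how full a btrfs pool really is; the /mnt/cache mount point is an assumption, not from the original post.

    # Show allocated vs. unallocated space on the pool
    btrfs filesystem usage /mnt/cache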