Everything posted by JorgeB

  1. Those files are corrupt and will need replacing; I would still like to see the full diags when it's done.
  2. It wasn't used on the command, but it was used for the format, so possibly it's the default now. I already posted earlier that an XFS drive formatted with v6.8 mounts normally with v6.7.
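     If you want to confirm which on-disk features a filesystem was formatted with, xfs_info will list them; a minimal sketch, assuming the disk in question is mounted as disk1 (the mount point is just an example):

       # show the format-time feature flags of a mounted XFS filesystem
       xfs_info /mnt/disk1
       # the crc=, ftype= etc. flags reflect what mkfs.xfs enabled at format time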
  3. Yes, but that's not in disk.cfg; you can create a script with the User Scripts plugin and have it run at array start, see the sketch below.
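     A minimal sketch of such a script, set to run "At Startup of Array" in the User Scripts plugin (the command is only a placeholder for whatever setting you want applied, it's not from the original post):

       #!/bin/bash
       # runs every time the array starts; replace the placeholder below
       # with the actual command/setting you want applied
       logger "array started - applying custom setting"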
  4. Copy some data and run a couple of parity checks; there should always be 0 sync errors.
  5. In my experience it's only worth increasing if you mainly do small transfers that will always (or mostly) fit in the cache and you can wait for the data flush before starting another transfer; otherwise the data flush will be slower than before.
  6. By default 20% of free RAM is used for the write cache.
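     That percentage comes from the kernel's dirty-page settings; you can see the current values from the console (shown only for illustration, the defaults are normally fine):

       # show the RAM write-cache limits (percent of available RAM)
       sysctl vm.dirty_ratio vm.dirty_background_ratio
       # vm.dirty_ratio = 20 means writes are buffered in RAM up to 20%
       # before the writing process is forced to wait for the flush to disk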
  7. That's while it's being cached to RAM; if it drops after that, your SSD can't keep up. Even the fastest normal SATA SSD (not NVMe) will usually max out at around 300MB/s, and slower TLC-based SSDs can drop to <100MB/s.
  8. The cache filesystem is corrupt; back up any important data there, re-format, then recreate the docker image. A sketch of the backup step is below.
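     A minimal sketch of the backup step, assuming the pool is still mounted at /mnt/cache and there's space on disk1 (both paths are just examples):

       # copy anything important off the cache pool before re-formatting
       rsync -av --progress /mnt/cache/ /mnt/disk1/cache_backup/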
  9. Basically, if all is working correctly there's nothing to worry about; a BIOS update might help. @Squid Any chance to make FCP report the IRQ16 issue only when the user is using the mvsas driver (SASLP/SAS2LP)? In that case it would be a problem, since performance would be severely degraded.
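     For reference, a quick way to check which kernel driver a SAS controller is using (a generic check, not tied to any specific board):

       # list storage controllers and the driver bound to each one
       lspci -k | grep -iA3 'sas\|sata'
       # "Kernel driver in use: mvsas" indicates a SASLP/SAS2LP-type card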
  10. Correction, 8 Intel SATA2 ports, so you can use up to 10 devices without any additional controller.
  11. You have 2 Intel SATA3 ports and 4 Intel SATA2 ports; connect any SSDs to the SATA3 ports, for disks SATA2 is enough.
  12. Depends on how many extra ports you need after using the 6 Intel ones: for 2 extra ports get an ASMedia-based controller, for 8 ports get an LSI HBA.
  13. Depends mostly on the hardware used; 1GB/s is possible with fast enough NVMe devices.
  14. Don't use the onboard Marvell controller (4 grey ports); it's dropping disks, and they are known to do that. All 4 disks with errors are connected there.
  15. I was able to get the start of the error by tailing the syslog, which might help show what the problem is; this time it was after formatting with btrfs, no encryption, after a clearing. syslog.txt
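     For reference, one way to tail the syslog and keep a copy of what scrolls by (the output path is just an example, use any persistent share):

       # follow the syslog live and also save it to a file on the array
       tail -f /var/log/syslog | tee /mnt/user/logs/syslog_tail.txt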
  16. Run a scrub on disk2, then grab and post the full diagnostics.
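     The scrub can be started from the disk's page in the GUI or from the console; a minimal sketch, assuming disk2 is btrfs:

       # run a scrub on disk2 and wait for it to finish (-B = foreground)
       btrfs scrub start -B /mnt/disk2
       # show the result and any error counts
       btrfs scrub status /mnt/disk2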
  17. Yes. Some models from the ones listed above can be IT or RAID mode; the 9300-8i is IT mode only.
  18. It's plug'n'play, SFF-8643 to SFF-8643.
  19. Not with 8 disks; there can be with more disks (using an expander) or SSDs.
  20. There can be small differences from version to version, and the wiki isn't always up to date; the mount problems are unrelated to that.
  21. Any LSI with a SAS2008/2308/3008/3408 chipset in IT mode, e.g., 9201-8i, 9211-8i, 9207-8i, 9300-8i, 9400-8i, etc., and clones like the Dell H200/H310 and IBM M1015; these latter ones need to be crossflashed.
  22. You can't put the original disk back while the disk status is invalid, but you can do this, assuming parity is still valid with old disk3:
      - Tools -> New Config -> Retain current configuration: All -> Apply
      - Assign any missing disk(s) if needed, including old disk3 and new disk2
      - Important - After checking the assignments leave the browser on that page, the "Main" page.
      - Open an SSH session/use the console and type (don't copy/paste directly from the forum, as sometimes it can insert extra characters): mdcmd set invalidslot 2 29
      - Back on the GUI and without refreshing the page, just start the array; do not check the "parity is already valid" box (the GUI will still show that data on the parity disk(s) will be overwritten, this is normal as it doesn't account for the invalid slot command, but they won't be as long as the procedure was done correctly). Disk2 will start rebuilding; the disk should mount immediately, but if it's unmountable don't format, wait for the rebuild to finish and then run a filesystem check (see the sketch below).
      Keep old disk2 intact for now in case it's needed.
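     A minimal sketch of that filesystem check, assuming disk2 is XFS and the array is started in maintenance mode (the md device name can differ between Unraid versions; the GUI check option does the same thing):

       # read-only check first, -n makes no changes
       xfs_repair -nv /dev/md2
       # only run an actual repair after reviewing the read-only output
       # xfs_repair -v /dev/md2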
  23. But looking more carefully at your diags, the checksum errors are almost certainly related to this:
      Nov 19 06:38:56 Finalizer kernel: BTRFS info (device sdn1): bdev /dev/sdp1 errs: wr 220877319, rd 749360, flush 0, corrupt 0, gen 0
      One of your cache devices is dropping offline, and the errors are being corrected after it comes back online; you should run a scrub, see here for more info.
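     The scrub can be run as shown in item 16 (on /mnt/cache in this case); the per-device error counters from the syslog can also be checked, and reset once the pool is healthy again:

       # show the per-device write/read/corruption counters logged by btrfs
       btrfs dev stats /mnt/cache
       # reset the counters after the underlying problem is fixed
       # btrfs dev stats -z /mnt/cache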