Everything posted by JorgeB

  1. The minimum size for the cache and/or pool must be set correctly.
  2. RAID0 can only use 1TB; change to the single profile if you want the full 1.5TB.
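     If you'd rather do the conversion from the console instead of the pool's GUI settings, a minimal sketch (assuming a btrfs pool mounted at /mnt/cache; adjust the path to yours):
     btrfs balance start -dconvert=single /mnt/cache   # convert the data profile to single
     btrfs balance status /mnt/cache                   # check the balance progress
     Metadata can usually stay on its existing profile; converting the data profile is what lets btrfs use the full capacity of unequal devices.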
  3. Do you mean it's unassigned? If yes, the right place to ask would be the existing UD plugin support thread, but you just need to click on it.
  4. No. You can set the share's use cache setting to "prefer", then run the mover; files can't be in use or they won't be moved.
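     If you want to kick it off from the console instead of the Move button, a hedged aside (the mover script ships with Unraid):
     mover   # starts a manual mover run; activity shows in the syslog when mover logging is enabled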
  5. Yep. Only if you have the syslog server enabled.
  6. Yes, just assign a new parity disk and start the array.
  7. Only one NVMe device is being detected by Linux. Not sure what the other device is about, but it's the same one; you can see the serial number in the SMART report. Try power cycling the server, or check if the other device is being detected in the BIOS.
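     To double-check what Linux is actually detecting, a minimal sketch (assuming the nvme-cli tool is available, as it is on current Unraid releases):
     nvme list        # lists detected NVMe devices with model and serial number
     lspci -d ::0108  # lists NVMe controllers (PCI class 0108) visible on the PCIe bus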
  8. Either they are in metadata, though usually "metadata leaf" or similar is mentioned in those cases, or they are related to now-empty extents; in any case I recommend backing up and re-formatting the pool. This is an example of how it usually looks in the log when a scrub finds data corruption:
     Tower1 kernel: BTRFS warning (device md1): checksum error at logical 1006087647232 on dev /dev/md1, physical 1005022294016, root 5, inode 512268, offset 16712871936, length 4096, links 1 (path: plots/plot-k32-2022-01-03-12-03-d59a0e1f87141f9355fd42074dd671c706152c741767e01bb52946991f4a9e59.plot)
     Tower1 kernel: BTRFS error (device md1): bdev /dev/md1 errs: wr 0, rd 0, flush 0, corrupt 1, gen 0
  9. Yes, that sets the default for any new filesystem; you can also click on the disk with the array stopped and change the filesystem for any disk.
  10. Not unless you can mount it. Btrfs restore should maintain the paths; just copy keeping them. You can try. XFS is usually more tolerant of hardware issues and easier to recover; on the other hand, hardware issues/data corruption can go unnoticed for longer. No, neither parity nor mirrors can help with data corruption caused by RAM/bad hardware. You can change to a board/CPU combo that supports ECC RAM to avoid bad-RAM issues, but also keep in mind that you always need backups of anything important; there are many different ways you can lose data.
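     For reference, btrfs restore is run from the console; a minimal sketch, with /dev/sdX1 as a placeholder for the pool device and an array disk as the destination:
     mkdir -p /mnt/disk1/restore                     # destination needs enough free space
     btrfs restore -v /dev/sdX1 /mnt/disk1/restore   # -v lists files as they are recovered, keeping paths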
  11. FYI, I'm pretty sure my controller based on this chip is corrupting data; I will investigate further to confirm. While I doubt it's a general issue with this ASMedia chip, it's always good to keep this in mind with these cheap controllers: you can get a good one or not. Sometimes they have UDMA CRC issues, or it could be more serious, hence when possible it's preferable to use an LSI HBA; they can be bought used at reasonable prices and should be much more reliable, though they also consume more power and generate more heat.
  12. That's normal; you can use one for all. Could be, though possibly there will be data corruption due to the RAM issues. After recovering everything you can, you should format it and then restore the data.
  13. This is a different error; the mountpoint now exists, though I'm not sure what this error is about.
  14. Since you started a new thread in the general support forum, and that's likely the best place for this, let's continue there; we can reopen this in the future if needed.
  15. Post the diagnostics with the HBA installed in the x16 slot, but if it's not being detected, there's not much you can do other than using different hardware or updating the BIOS.
  16. Run a scrub; it should list the affected files in the log.
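     From the console that would be along these lines (a sketch assuming the pool is mounted at /mnt/cache):
     btrfs scrub start /mnt/cache    # verifies checksums on all data and metadata
     btrfs scrub status /mnt/cache   # shows progress and error counts
     The affected file paths appear in the syslog as the scrub finds them.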
  17. Now try the recovery options in the FAQ again. This means you didn't create the mountpoint first, as instructed there. Also, as instructed, you must specify the partition, so it should be /dev/nvme0n1p1.
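     For illustration, the sequence would look something like this (a hedged sketch; /x is a placeholder mountpoint and the exact mount options depend on which FAQ recovery option you're following):
     mkdir /x                                      # create the mountpoint first
     mount -o ro,usebackuproot /dev/nvme0n1p1 /x   # note the partition (p1), not the whole device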
  18. The diags are from just after rebooting, so there's not much to see.
  19. Please use the existing plugin support thread if further support is needed.
  20. It's unlikely that the NVMe devices are the problem. If no RAM errors were found, for now run a scrub to see if all errors are correctable. If they are, reset the filesystem stats (see here for how) and monitor the pool for further errors; if they aren't all correctable, back up and re-format the pool, then also monitor.
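     The error counters themselves can be checked and reset from the console (a minimal sketch, again assuming the pool mounts at /mnt/cache):
     btrfs dev stats /mnt/cache      # show per-device write/read/flush/corruption counters
     btrfs dev stats -z /mnt/cache   # print the counters, then reset them to zero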
  21. As mentioned: that means you're booting UEFI; either change to legacy BIOS/CSM or download the latest Passmark Memtest86, which supports UEFI only.
  22. The below might help and it's worth a shot; if not, the best bet is a different NVMe device (or a different board). Some NVMe devices have issues with power states on Linux, so try this: on the main GUI page click on the flash drive, scroll down to "Syslinux Configuration", make sure it's set to "menu view" (on the top right), and add this to your default boot option, after "append initrd=/bzroot":
     nvme_core.default_ps_max_latency_us=0
     e.g.:
     append initrd=/bzroot nvme_core.default_ps_max_latency_us=0
     Reboot and see if it makes a difference.
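     For reference, the default boot stanza in the flash drive's syslinux.cfg would then look something like this (a hedged example; the label and any extra parameters vary per setup):
     label Unraid OS
       menu default
       kernel /bzimage
       append initrd=/bzroot nvme_core.default_ps_max_latency_us=0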