
JorgeB

Moderators
  • Posts: 67,572
  • Joined
  • Last visited
  • Days Won: 707

Everything posted by JorgeB

  1. You're using a SAS2LP; those are known in some cases to generate sync errors like these 5, corrupting data. IIRC it only happens in the first check after a reboot. You should replace it with an LSI.
  2. Parity dropped offline, and because of that there's no SMART report. Most likely a connection/power issue; replace/swap cables/slot with a different disk to rule that out and see if it happens again.
  3. This is working for me, mobile and desktop:
  4. If your pools are all btrfs you can ignore.
  5. Jan 24 11:51:10 bronas kernel: BTRFS info (device sdk1): bdev /dev/sdd1 errs: wr 0, rd 5, flush 0, corrupt 0, gen 0
     This means there were read errors before on cache2; see here for better pool monitoring.
     Jan 24 11:51:10 bronas kernel: BTRFS critical (device sdk1): corrupt leaf: root=2 block=756464238592 slot=123, unexpected item end, have 26532 expect 10148
     This means there's filesystem corruption; the best bet is to back up cache, re-format, and restore the data.
  6. Seems to be a controller problem.
     Jan 24 02:23:15 freesuper kernel: aacraid: Host bus reset request. SCSI hang ?
     https://forums.unraid.net/topic/46802-faq-for-unraid-v6/?do=findComment&comment=545988
  7. Please post only in English here, there's a Spanish section of the forum if you prefer.
  8. Do you know the disk assignments, and do you have (or have you posted) any recent diagnostics?
  9. There's no SMART for that drive, check/replace cables and power cycle and post new diags.
  10. Those controllers are PCI, and the whole PCI bus (133MB/s max speed) is shared by all the devices using it, so performance is terrible, e.g. four drives on one card top out around 33MB/s each. You should really use PCIe controllers.
  11. Try to format the disk with Windows, just to check that it can, and also try the onboard SATA ports with Unraid, if there are any.
  12. You are also running the RAM above the max officially supported speed; that's also a known problem.
  13. No, raid0 needs to stripe the data across at least two devices, so when the first one is full it can't write any more; only the single profile can use multiple devices with different capacities. It's not the pool you need to expand, it's the partition on the cloned device, /dev/sdX1, but as mentioned it won't make any difference.
  14. If that's all that's logged there's not much to go on; those entries are normal. If there's nothing after that, the crash is most likely a hardware problem.
  15. The way it drops looks like a device problem. If you enable turbo write and transfer directly to the array, is it the same or better?
  16. That looks like a NIC/driver problem. I don't remember seeing that NIC used with Unraid before, so I'm not sure if it's reliable; try a different one if possible, Intel recommended.
  17. This issue, having very high system load when using the user shares, isn't new, but it only affects some users (or some hardware) and it's difficult to reproduce. To minimize it you can try running the mover only during off hours and/or using disk shares whenever possible.
  18. Found out recently that when reading a corrupt btrfs file using user shares the transfer doesn't abort with an IO error as it should. This is similar to what happened before with Samba AIO enabled, but in this case it also affects local transfers; when using disk shares it works as it's supposed to. I never caught this before since I always tested with disk shares. It goes back to at least v6.6, which is the oldest release I tried. At first I thought it would be related to having direct IO enabled, but I'm seeing the same with direct IO disabled. I know of this issue occurring when O_DIRECT is used; does FUSE use that or anything similar? This isn't a big deal, since data corruption (on good hardware) is extremely rare and there are other ways of monitoring for corruption, but it would still be nice if it worked as it should.
  19. Can't help with that, never used Grafana.
  20. Also note that with raid0 the extra space won't be usable anyway, so not much point.
  21. btrfs recognizes the devices by UUID; that's why it will recognize a clone. As mentioned, the clone will have the same partition size as the source, so you first need to expand the partition, and only then the filesystem.
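The pool monitoring mentioned in #5 relies on btrfs's per-device error counters, which `btrfs dev stats` reports. A minimal sketch that flags nonzero counters; /mnt/cache is a placeholder mount point, and the helper name `check_stats` is my own:

```shell
# Flag any nonzero btrfs error counter. `btrfs dev stats` prints
# lines like:  [/dev/sdd1].read_io_errs    5
check_stats() {
    awk '$2 != 0 { print "WARNING: " $1 " = " $2; bad = 1 }
         END { exit bad }'
}

# Example (requires a mounted btrfs pool; adjust the path):
#   btrfs dev stats /mnt/cache | check_stats || echo "errors found, check syslog"
```

A nonzero read/write/flush/corrupt counter means the pool has seen errors since the counters were last reset (`btrfs dev stats -z`).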
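One way to test whether read errors surface as IO errors (the user-share issue in #18) is to read every file back and check the exit status; on Unraid, per the post above, this only behaves correctly through disk shares (/mnt/disk1/...), not user shares. A sketch, with the tree path as a placeholder and `verify_tree` as my own helper name:

```shell
# Read every file under the given tree; report any that fail with
# an IO error. A corrupt btrfs file read through a disk share
# should show up here; through a user share it currently may not.
verify_tree() {
    find "$1" -type f | while read -r f; do
        if ! cat "$f" > /dev/null 2>&1; then
            echo "READ ERROR: $f"
        fi
    done
}

# Example: verify_tree /mnt/disk1/share
```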
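The clone-then-grow flow from #13 and #21 (expand the partition first, then the filesystem) can be sketched as below. /dev/sdX and /mnt/cache are placeholders for the cloned device and the pool's mount point, and DRY_RUN=1 only prints the commands instead of running them:

```shell
DEV=/dev/sdX       # cloned device; assumption: partition 1 holds the btrfs filesystem
MNT=/mnt/cache     # where the pool is mounted
DRY_RUN=1          # set to 0 only after double-checking the device

# Print or execute a command depending on DRY_RUN.
run() { if [ "$DRY_RUN" = 1 ]; then echo "would run: $*"; else "$@"; fi; }

# 1. Grow partition 1 to fill the cloned device.
run parted "$DEV" resizepart 1 100%
# 2. Only then grow the btrfs filesystem to fill the partition.
run btrfs filesystem resize max "$MNT"
```

Note that, as #20 points out, with a raid0 profile the extra space still won't be usable; this only helps with the single profile.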