Everything posted by JorgeB

  1. Enable the syslog server and post that after a crash.
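     If it helps, a minimal way to review the captured log after the crash, assuming "Mirror syslog to flash" was also enabled under Settings -> Syslog Server (the path below depends on that assumption):
     # show the end of the syslog mirrored to the flash drive; it survives reboots
     tail -n 200 /boot/logs/syslog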
  2. Stop array, unassign cache1 (the Hitachi drive), start array, stop array, re-assign cache1, start array and post new diags.
  3. That is what you should have done; formatting is never part of a rebuild. Now, if you still have the old disk, do a new config with the old disk (Tools -> New Config), re-sync parity, then replace the disk.
  4. It's not a limitation, it's a bug; it's already been fixed and v6.11.4 should be available soon.
  5. It's a known problem with v6.11.2, upgrade to v6.11.3 and try again.
  6. It's a known issue with v6.11.3 with a trial key, it's already been fixed and v6.11.4 should be available soon.
  7. You won't lose data; just upgrade Unraid and repeat the rebuild: unassign the disk, start the array, stop the array, re-assign the disk, then start the array to rebuild.
  8. No, just remove that attribute from the ones monitored.
  9. Sorry, no more ideas, maybe try resetting all network settings (including all docker network settings) and start over.
  10. It should go without saying that you must have backups of anything important. For the dockers you just need the appdata folder: go to Shares and click "Compute" for the appdata share, it will show where the data is; if some of it is in the pool, move/copy it to the array, as sketched below.
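      A minimal sketch of that move/copy, assuming the pool is named cache and disk1 is the target array disk (adjust both to your system):
      # copy appdata from the pool to an array disk, preserving attributes
      rsync -avX /mnt/cache/appdata/ /mnt/disk1/appdata/
      # verify the copy before removing the source files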
  11. Scrub is aborting; best bet is to back up and re-create the pool, then see here for better pool monitoring, so if that device drops again you'll be notified immediately.
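      The check that monitoring is based on can also be run by hand; a sketch, assuming the pool is mounted at /mnt/cache:
      # non-zero counters mean a device logged read/write/corruption errors
      btrfs dev stats /mnt/cache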
  12. If the current issue is limited to the log tree, as that log snippet appears to indicate, it might be solved by zeroing it. First, and if backups are not up to date, I recommend using option #1 here to back up anything important, then try:
      btrfs rescue zero-log /dev/nvme0n1p1
      If it works, and since btrfs was detecting some corruption, you should run a scrub after mounting.
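      If the zero-log succeeds, the scrub could look like this, assuming the pool mounts at /mnt/cache:
      # run the scrub in the foreground (-B) so it reports when done
      btrfs scrub start -B /mnt/cache
      # or start it in the background and check progress with:
      btrfs scrub status /mnt/cache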
  13. Looks more like a cable/connection problem.
  14. Not familiar with those shelves, but post the diags to see if the HBA is being correctly detected; also make sure that for now you only connect one cable from the HBA to the 1st module on the shelf.
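      Before the diags, a quick sanity check that the kernel sees the HBA at all, assuming an LSI-based card:
      # list PCI devices and confirm a SAS driver claimed the card
      lspci -k | grep -iA3 lsi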
  15. Stop the array, unassign cache2, start array, stop array, re-assign cache2, start array, post new diags.
  16. For me, SMB with v6.11.3 is performing about the same as v6.11.0. There is a small though measurable slowdown, especially with large files; my guess is that it's caused by the Samba update from v4.17.0 to v4.17.2, which includes several CVE fixes: samba: version 4.17.2 (CVE-2021-20251 CVE-2022-3437 CVE-2022-3592). But note that this update was done in v6.11.2, so v6.11.3 should be very similar to v6.11.2.
      P.S. Tests are done from one Unraid server to another, both with the same release. I tried doing Windows-to-Unraid tests before, but they were not repeatable after some time, i.e., a test done 6 months later could produce very different results, likely because of the Windows updates installed in the meantime.
      P.P.S. Only did tests with mitigations=off for now, though AFAIK there are no new kernel mitigations since v6.11.0.
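      For anyone wanting to reproduce a rough version of the large-file test, a sketch assuming the other server's share is mounted at /mnt/remotes/server (writes a 10 GiB file; delete it afterwards):
      # sequential write over SMB, syncing at the end so the timing is honest
      dd if=/dev/zero of=/mnt/remotes/server/testfile bs=1M count=10240 conv=fdatasync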
  17. Easiest way would be to swap one of them with the LSI and test.
  18. Use the UD plugin to connect to Windows via SMB. And yes, you can copy the data without parity assigned, then assign one when done; this also makes the copy faster.
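      For illustration, the manual equivalent of what UD does when mounting the Windows share (hostname, share name, and user below are placeholders):
      mkdir -p /mnt/remotes/WINPC_share
      # mount.cifs will prompt for the password
      mount -t cifs //WINPC/share /mnt/remotes/WINPC_share -o username=youruser,vers=3.0
      # then copy to the array, e.g.:
      rsync -av /mnt/remotes/WINPC_share/ /mnt/disk1/data/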
  19. Post new diags after running a scrub.
  20. I would try testdisk first to see if it can recover the old partition; the other option would be to create a new partition with the correct layout and hope the filesystem is still valid.
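      A sketch of the first step, with /dev/sdX as a placeholder for the affected disk (testdisk is interactive and walks you through Analyse -> Quick Search):
      testdisk /dev/sdX
      # before trying the manual route, compare against a known-good Unraid disk
      # to get the exact partition start sector:
      fdisk -l /dev/sdY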