Everything posted by JorgeB

  1. This would be a question to ask on the UD support thread:
  2. The files failing to move have checksum errors, i.e., they are corrupt. Btrfs returns an I/O error when corruption is detected, so you know something is wrong; you need to delete those files and restore them from backups. This doesn't happen out of the blue, so if it keeps happening you might have a hardware problem, like bad RAM (one way to confirm and enumerate the corrupt files is sketched after this list).
  3. Format button is next to the start/stop array buttons.
  4. It appears to be a GUI problem, the clear process concluded:
     Jul 10 00:36:30 Tower kernel: md: recovery thread: clear ...
     ...
     Jul 10 12:56:51 Tower kernel: md: sync done. time=44421sec
     Jul 10 12:56:51 Tower kernel: md: recovery thread: exit status: 0
     Not sure the disk was added to the array though, you'll need to reboot to find out (see the sketch after this list for checking the array state from the console).
  5. All the files in the diags are empty; reboot and try to grab new diags as soon as the issue starts.
  6. That process is normal and part of Unraid's user shares. You need to find what is causing it to use a lot of CPU; anything mapped to or using /mnt/user/xxx will be going through that process (a rough way to look for the culprit is sketched after this list).
  7. You can do a new config; you won't lose anything except what was on those disks, including any docker data/appdata if it was there.
  8. I can't see any errors in the log; it's like they aren't even connected until you replug them. Try a different USB controller if available. Also, did anything change, hardware or software, in the last week when this issue started? (A way to watch the kernel log while it happens is sketched after this list.)
  9. Yes, though you'd still want devices that support deterministic read zeros after TRIM, since when support does get implemented it will likely only work with those devices (a way to check a device for that capability is sketched after this list).
  10. It won't be wiped as long as it was formatted with a relatively recent UD release, say within the last couple of years. Even if an older release was used it still won't be wiped; it will just be unmountable due to an invalid partition layout.
  11. First thing would be this again. I assume you ran it before, when it was detecting 1 or 2 errors; now it appears to be worse, and the errors are in different sectors, which is consistent with a RAM issue. With more errors it might be easier to detect any RAM problem.
  12. No, each array data disk is still an independent filesystem. What you can do with the latest betas is have multiple btrfs storage pools; each pool is independent and can use any of the available btrfs profiles, and they behave like the normal cache pool (an example of checking and changing a pool's profile is sketched after this list).
  13. Yep, that's one of the reasons we don't recommend RAID controllers with Unraid. Also note that if you move those disks to a different controller they will likely need to be rebuilt, or at least require a new config.
  14. The Adaptec is being correctly detected and the driver loaded. I'm not familiar with that model, but if it's a RAID controller you might need to create a raid0 (or similar) volume for each disk, or it won't present any devices to the OS (a quick way to see what the OS is actually getting is sketched after this list).
  15. That means the disk is fine for now, just keep an eye on it.
  16. Please post the diagnostics: Tools -> Diagnostics
  17. NVMe devices usually run hotter than 2.5" SATA devices, even when idle, and after some writes they can easily go to 70C or more. Some boards include a cooler on one or more M.2 slots. Mine usually idle at about 40-45C and go to 65C+ during sustained writes; I have the warning set at 70C (a way to read the reported temperature from the console is sketched after this list).
  18. Just an update: I still can't reproduce this issue. I was looking at other threads that reported the same problem before, and it happened with various disk/parity size combos, 2 to 4TB, 4 to 8TB, etc. Since I could never reproduce it with the small SSDs I have on my test server, I tried with 4 and 8TB disks that I had available; it's now past the old parity size with no errors. I'm going to let it finish, but I would expect that if there were errors they would have already been detected by now. Next I'll reproduce the exact configuration you had on your server and try again. Luckily I can get it exactly the same with the disks I have, but I'll need to remove disks from 3 different servers, so it's going to take a few days/weeks.
  19. Yes, everything will be deleted, all data, all partitions.
  20. For now, yes. And yes, it would be good to compare whether the errors are in the same sectors or not.
  21. Some Marvell controllers have issues with the IOMMU enabled, especially the 9230 you're using; disabling it works for some, but Marvell controllers in general are not recommended for Unraid v6 (a quick way to check whether the IOMMU is active is sketched after this list).
  22. Please run two consecutive checks without rebooting (so we can compare the errors) and post the complete diagnostics: Tools -> Diagnostics
  23. Whatever you prefer; reconstruct write is faster at the expense of all disks spinning up for writes, more info here (a hedged example of toggling it is sketched after this list).
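
The sketches below expand on some of the replies above. They are illustrative only, not official procedures: device names and mount points are placeholders, and exact command availability can vary between Unraid releases.

For item 2 (btrfs checksum errors), one way to confirm the corruption and enumerate the affected files, assuming the pool is mounted at /mnt/cache:

```
# Checksum failures are logged by the kernel when btrfs detects corruption:
dmesg | grep -i "csum failed"

# A scrub reads every block and logs each file with a bad checksum to the syslog
# (-B keeps the scrub in the foreground; /mnt/cache is an example mount point):
btrfs scrub start -B /mnt/cache
btrfs scrub status /mnt/cache
```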
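
For item 4, after the reboot you can check the array state from the console; Unraid's md driver uses a custom /proc/mdstat format, but it shows whether the cleared disk ended up assigned to the array:

```
# Unraid-specific array/device state (not the stock mdraid format):
cat /proc/mdstat

# The clear/sync messages quoted above can also be pulled from the syslog:
grep -E "md: recovery thread|md: sync done" /var/log/syslog
```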
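
For item 6, a rough way to see what is hitting the user shares and driving that process; fuser is normally present on the console, lsof may not be on every release, and /mnt/user is the standard user-share mount:

```
# Processes with files open on the user-share mount:
fuser -vm /mnt/user

# If lsof is available, this lists the actual open paths (can be slow on large shares):
lsof +D /mnt/user | head -n 40
```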
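
For item 8, you can leave the kernel log following in a console session while the drives drop, to confirm whether any disconnect or reset events are logged at all:

```
# Follow the kernel log with readable timestamps, filtering for USB-related events:
dmesg -wT | grep -i -E "usb|reset|disconnect"

# Compare what is enumerated on the bus before and after a drop:
lsusb
```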
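
For item 9, hdparm can show whether a SATA SSD claims deterministic read zeros after TRIM; /dev/sdX is a placeholder for the actual device:

```
# Look for the "Deterministic read ZEROs after TRIM" line in the capabilities output:
hdparm -I /dev/sdX | grep -i trim
```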
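
For item 12, a sketch of inspecting and converting a pool's btrfs profile; /mnt/poolname is an example mount point, and a balance conversion should only be run on a healthy pool with current backups:

```
# Show the data and metadata profiles the pool is currently using:
btrfs filesystem df /mnt/poolname

# Convert an existing multi-device pool to raid1 for both data and metadata:
btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/poolname
```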
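
For item 14, a quick way to confirm the controller was detected, which kernel driver claimed it, and which block devices it is actually presenting to the OS:

```
# Controller detection and the driver in use:
lspci -k | grep -iA3 adaptec

# Block devices the OS can see (the attached disks should appear here):
lsblk -o NAME,SIZE,MODEL
```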
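
For item 17, the reported NVMe temperature can be read from the console; /dev/nvme0 is a placeholder, and the nvme-cli tool may or may not be installed on your release:

```
# smartctl supports NVMe devices and reports the composite temperature:
smartctl -a /dev/nvme0 | grep -i temperature

# Equivalent via nvme-cli, if present:
nvme smart-log /dev/nvme0 | grep -i temperature
```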
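
For item 21, a way to check whether the IOMMU is currently active; if it is, the usual test is disabling VT-d/AMD-Vi in the BIOS, or adding intel_iommu=off (or amd_iommu=off) to the syslinux append line, keeping in mind that also disables PCIe passthrough:

```
# DMAR (Intel) or AMD-Vi messages indicate the IOMMU was initialised:
dmesg | grep -i -e dmar -e iommu -e amd-vi | head -n 20
```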
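
For item 23, reconstruct write ("turbo write") is normally toggled in the GUI under Settings -> Disk Settings, Tunable (md_write_method). The console form below is an assumption based on how that tunable is commonly set from scripts and should be verified before relying on it:

```
# Assumed syntax for Unraid's md write method tunable; 1 = reconstruct write.
# Verify against your release before using (this is a sketch, not a documented API).
mdcmd set md_write_method 1
```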