Everything posted by JorgeB

  1. This happened to me before, and it can happen with parity; it's usually easily fixable. It's just not clear why a balance is running, so if it started automatically, post the diagnostics to see why.
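     If the pool is btrfs (an assumption here, as is the /mnt/cache mount point), you can check from the console whether a balance is in progress:
       # show any in-progress balance on the pool
       btrfs balance status /mnt/cache
       # if one is running and you don't want it, it can be cancelled
       btrfs balance cancel /mnt/cache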
  2. I don't know why the logs aren't showing the boot process from the start; I can't even see the HBA firmware version. They are still filled with HBA errors, though, so see if the HBA is using the latest firmware, and if not, upgrade.
  3. Strange; if it were a power/HBA issue I would also expect issues during a parity check.
  4. Both times there are some LSI HBA related error messages. Those by themselves should not be a reason for Unraid to crash, but they could be the result of a hardware issue, like bad power (writing requires more power than reading) or a problem with the HBA. Have you tried running a parity check to see if it also crashes?
  5. That firmware doesn't have any major issues AFAIK, but you should always upgrade to the latest; instructions here.
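     As a rough sketch only, flashing a SAS3 LSI HBA from the Unraid console usually looks like this; the firmware file name is a placeholder, use the files and steps from the linked instructions:
       # show the currently flashed controller and firmware version
       sas3flash -list
       # flash the new IT-mode firmware (file name is hypothetical)
       sas3flash -o -f 9300_8i_IT.bin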
  6. Yep, IMHO servers and overclocking never really go well together, and running 2133 and 2400 rated DIMMs @ 3200MT/s is pushing your luck.
  7. The syslog starts over after every reboot; enable the syslog server and post that log together with the full diagnostics after a crash.
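     The setting is under Settings -> Syslog Server; with 'Mirror syslog to flash' enabled the log survives the crash and, assuming the default location, can be read from the flash drive:
       # persisted syslog after a crash (path assumed, verify on your system)
       cat /boot/logs/syslog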
  8. Correct, you do this: https://wiki.unraid.net/Manual/Storage_Management#Rebuilding_a_drive_onto_itself For the final option, assuming the actual disk has a clean filesystem without a lost+found folder (and it should have), you do a new config and, instead of rebuilding disk3 based on the emulated disk, re-sync parity based on the actual disk.
  9. May 5 10:26:53 LuNAS kernel: ? Sys_DumpTask+0xe9/0xf1 [corefreqk]
     May 5 10:26:53 LuNAS kernel: Cycle_AMD_Family_17h+0x31b/0xb37 [corefreqk]
     Uninstall the corefreq plugin.
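     It can be removed from the Plugins page, or from the console; the exact .plg file name is an assumption here, check /boot/config/plugins for the real one:
       # list installed plugin files to find the exact name
       ls /boot/config/plugins/*.plg
       # remove it (corefreq.plg is assumed)
       plugin remove corefreq.plg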
  10. Yes, that's expected; you need to rebuild the disk to fix that. But first check the lost+found folder: if there are lots of files there and/or the emulated disk is missing some data, you should not rebuild on top of the old disk. You have more than one option:
      - rebuild to a new disk if available, then you can compare the data with the old disk
      - compare the old disk with the emulated disk now: first stop the array, unassign disk3, and start the array (Unraid will continue to emulate the disk), then use the UD plugin to mount the old disk and compare the data with the emulated one; just note that before mounting the disk with UD you need to change the XFS UUID, since it will be the same as the emulated disk's, and you can do that in the UD settings (see the sketch below)
      - final option: instead of rebuilding the disk, do a new config (Tools -> New config) to reset the array and re-sync parity based on the actual disk3, which should not have the lost+found folder, but you should confirm that before doing it, using the same procedure to mount it with UD as described above
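      For reference, the UUID change that UD performs can also be done manually; a minimal sketch, assuming the old disk is sdX and the partition is unmounted:
        # give the XFS partition a new random UUID so it no longer clashes with the emulated disk
        xfs_admin -U generate /dev/sdX1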
  11. This usually means memory problems or some other kernel memory corruption, so start by running memtest. The filesystem should be OK after that's fixed, since it went read-only to prevent corruption, though if it's bad RAM there could be some data corruption.
  12. Stop the array, then start it in maintenance mode and run the command below. After that's done, start the array normally and disk3 should now mount. Because of the filesystem corruption there can be some data loss, so look for a lost+found folder and see if there are any lost files there; if all looks OK you can rebuild on top, and if it doesn't you can instead try to access the data on the actual disk.
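      For clarity, 'the command' is the same repair shown in reply 14 below, run against the emulated device while the array is in maintenance mode:
        # repair the emulated disk3; only use -L if xfs_repair asks for it
        xfs_repair -v /dev/md3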
  13. Start here: https://forums.unraid.net/topic/46802-faq-for-unraid-v6/?do=findComment&comment=819173
  14. Sorry for replying in English, this caught my eye while browsing. You always need to specify the partition to check a filesystem, e.g.:
      xfs_repair -v /dev/sdf1
      Also note that the above will check the actual disk, not the emulated disk, which is what you currently have. To check and repair the filesystem on the emulated disk (or any other assigned array disk, keeping parity valid) you need to start the array in maintenance mode and use:
      xfs_repair -v /dev/md3
      where the 3 is the disk number. If it asks for -L, use it; more info on the check filesystem wiki page.
  15. Looks clean so far. Something is writing to disk4.
  16. Unless you have checksums for the existing files, or were using btrfs, there's no way to know if the problem is with parity or data; we can only try to find the current issue and put parity back in sync.
  17. See if it gets faster if you set it to performance, or slower if set to powersave. Parity sync is single threaded, and with a parity2 it uses more CPU; still, with just 8 disks I would think it should be faster than that with that CPU.
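      The governor can also be checked and switched from the console via sysfs; a minimal sketch, assuming the cpufreq driver exposes these files:
        # show the current governor for cpu0
        cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
        # switch all cores to performance for the duration of the sync
        for g in /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor; do echo performance > $g; done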
  18. You can download the stable zip, then just replace all the bz* files on the flash drive and reboot.
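      Roughly, with the zip extracted and the flash drive mounted at /boot, that's just copying the bz* files over; a sketch, with the extraction path assumed:
        # from inside the extracted stable zip
        cp bz* /boot/
        # reboot to load the replaced kernel and root filesystem
        reboot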
  19. Constant errors with the HBA; post new diags after rebooting, since I can't see the boot sequence in the previous logs.
  20. Still issues with disk1; also swap/replace the power cable.
  21. Run memtest; if no errors are found after a couple of passes, run a correcting check followed by a non-correcting one, and if there are errors on the 2nd run post new diags without rebooting.
  22. Appdata is set to cache="yes", so every time the mover runs it moves files to the array. Set it to cache=prefer, stop the Docker service, and move everything to cache.