Everything posted by JorgeB

  1. Problem with the NVMe cache device:

     Oct 25 15:36:50 Elsa kernel: blk_update_request: critical medium error, dev nvme0n1, sector 327930968 op 0x1:(WRITE) flags 0x8800 phys_seg 1 prio class 0

     This is not a software issue.
  2. I meant attach the file here instead of copying/pasting; the diags are the normal diagnostics.
  3. You need to do a new config and re-sync parity, or it will be erased.
  4. Well, that's expected if the disks are unmountable. Disk5 dropped offline; check/replace the cables and post new diags after array start.
  5. If the same happens with smartctl, it's not an Unraid issue; check whether it has been reported to smartmontools.
  6. The linked ddrescue thread shows how you can get a list of corrupt files for that. For disk2, check if the old disk really failed or if you can still use it; if it really failed, you might also be able to use ddrescue on it, since the rebuilt disk2 will have corruption and there's no way of knowing which files are corrupt unless you have pre-existing checksums.
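     Pre-existing checksums can be generated ahead of time with standard tools; a minimal sketch, assuming the disk is mounted at /mnt/disk2 and the flash drive at /boot (paths are illustrative):

     ```shell
     # Generate MD5 checksums for every file on the disk
     # (run this before a failure, so corruption can be detected later).
     cd /mnt/disk2
     find . -type f -exec md5sum {} + > /boot/checksums-disk2.md5

     # After a rebuild, verify; any line not ending in ': OK' is a corrupt file.
     cd /mnt/disk2
     md5sum -c /boot/checksums-disk2.md5 | grep -v ': OK$'
     ```

     md5sum -c exits non-zero if any file fails verification, so the check is easy to script.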
  7. Please post the complete syslog; you can get the diags after rebooting, since in this case it's mostly to see the hardware in use.
  8. Unfortunately there's nothing relevant logged before the crash:

     Oct 24 16:00:04 StormfatherUnR root: mover: finished
     Oct 24 16:24:42 StormfatherUnR kernel: microcode: microcode updated early to revision 0xde, date = 2020-05-24

     This usually indicates a hardware issue. One thing you can try is to boot the server in safe mode with all docker/VMs disabled and let it run as a basic NAS for a few days; if it still crashes it's likely a hardware problem, and if it doesn't, start turning the other services back on one by one.
  9. You can do a new config to re-enable all the drives, but as mentioned, USB is not recommended for array/pool devices, and you'll likely run into issues again.
  10. Disk2 is disabled and disk1 is failing; with single parity there will likely be some data loss. Disk2 looks OK, so all the data saved there before it got disabled should be fine. For disk1 I would use ddrescue to recover as much as possible, then do a new config with the old disk2 and a new disk1 and re-sync parity.
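      For reference, a typical ddrescue invocation looks like the following; this is a sketch, not a tested recovery procedure, and the device names and mapfile path are placeholders you must adjust to your own setup (triple-check source and destination, as getting them backwards destroys data):

      ```shell
      # First pass: copy as much as possible from the failing disk (sdX)
      # to the replacement (sdY), recording progress in a mapfile so the
      # copy can be resumed or retried later.
      ddrescue -f /dev/sdX /dev/sdY /boot/ddrescue.map

      # Optional second pass: retry the bad areas up to 3 times.
      ddrescue -f -r3 /dev/sdX /dev/sdY /boot/ddrescue.map
      ```

      The mapfile is what makes ddrescue safe to interrupt; always reuse the same one across passes.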
  11. They will be emulated until you replace that drive. If you want to move the data, you can manually copy it to other disk(s) (or use the unbalance plugin), then do a new config and re-sync parity without that disk.
  12. Can't say since there's no SMART for either, check connections and post new diags.
  13. You can also rebuild on top of the old disk, as long as the emulated disk is mounting and contents look correct.
  14. The disk being precleared was generating constant errors, and your parity disk also dropped offline.
  15. You have to rebuild the disk or do a new config and correct parity; either way it will take about the same time. The 2nd option is usually best if the emulated disk is not mounting or required a filesystem check that created some lost+found content.
  16. Please don't post in multiple threads about the same thing, do what I asked in the other one.
  17. If it's a single device cache you just need to re-assign it.
  18. Enable the syslog server, then after a crash please post that log together with the diags in a new thread.
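      Once the syslog server is enabled and mirroring to the flash drive, the saved log can be scanned for the last messages written before the crash; a minimal sketch, assuming the mirrored log ends up at /boot/logs/syslog (the exact path depends on your syslog server settings):

      ```shell
      # Show the last lines written before the crash.
      tail -n 50 /boot/logs/syslog

      # Look for obvious error / call-trace / panic messages.
      grep -iE 'error|call trace|panic' /boot/logs/syslog | tail -n 20
      ```

      Even if nothing matches, the last few lines before the log stops are usually the most useful part to post.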
  19. It's still the Unraid driver crashing, not much else I can suggest other than using different hardware, LT might be able to give more info on the cause.
  20. That's normal, btrfs will abort any file operation with i/o error if data corruption is detected, so you don't unknowingly copy corrupt data.
  21. It's a known btrfs issue when using an odd number of devices in raid1; the reported value will change as you fill up the pool and get closer to the real one, and you can use the total space.