Everything posted by JorgeB

  1. Just a heads up for anyone using ZFS with send/receive: there's an old and scary bug where files on the receive side can get silently corrupted. It was reported in 2017 and is still present, since there's no easy way to fix it. It only affects datasets with a record size larger than 128K; I just checked and I'm using the default 128K on my pool, but it's still scary.
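     To confirm the record size in use, you can check from the console; the pool/dataset name below is just a placeholder:
         zfs get recordsize tank
         zfs get -r recordsize tank   # recursive, checks every dataset on the pool
     Anything above the default 128K on the receiving side is what the bug report covers.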
  2. I missed the edit, but no, it doesn't look like a similar issue: in that case the log shows the data disks mounting and then the server crashing when mounting the cache, while in your case it doesn't even start to mount disk1. I see no reason to think there's a problem with the data, at least so far.
  3. Still no clue in the syslog as to what the problem is, so I would suggest two things:
     #1 - Back up the current flash drive, recreate it, and restore only your key and super.dat (disk assignments), as sketched below; if it still doesn't start like that, it's most likely a hardware problem, see #2.
     #2 - Try booting the array on another board; since there are only a few disks it's not that complicated, as long as you have another board/PC available.
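     A rough sketch of #1, assuming the backup was copied to /path/to/backup and the recreated stick is mounted at /path/to/flash (both paths are placeholders):
         cp /path/to/backup/config/super.dat /path/to/flash/config/   # disk assignments
         cp /path/to/backup/config/*.key /path/to/flash/config/       # license key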
  4. Yes, those are both single-platter 1TB disks, and they are SMR, but as mentioned, in my experience Seagate has the best firmware for SMR disks. I have used several SMR disks with Unraid, 2.5" and 3.5", and the Seagate ones usually perform much better, most times about the same as CMR. I also have Toshiba SMR disks (and had some WD ones in the past) and those perform much worse.
  5. Yes, any single-platter 1TB 2.5" disk is SMR. You can't tell from just that part of the model number, e.g.:
     ST1000LM015 -> CMR
     ST2000LM015 -> SMR
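     If in doubt about a drive's exact model, smartctl will print it (sdX below is a placeholder for the actual device):
         smartctl -i /dev/sdX   # the "Device Model" line shows the full model number to look up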
  6. Run another non-correcting check without rebooting and post new diags.
  7. If you mean these, they are also SMR, but in my experience Seagate SMR disks perform much better with Unraid than Toshiba and especially WD SMR; usually you don't even notice they are SMR.
  8. No problem trying to mount the emulated disk; whatever you see there is what will be on the rebuilt disk later.
  9. No, but I've seen other people complaining about similar issues. I've never observed that on my servers, and I do multi-terabyte moves with some frequency. One thing that will cause problems is having the docker/VM images on the array; other than that, some hardware/configs seem more prone to these issues, I don't know why.
  10. It happened before this last boot, so we can't see what caused it in the diags, but the disk does look fine, so you can rebuild on top. I suggest replacing/swapping cables before doing it, to rule them out if it happens again with the same disk.
  11. Yep, miniSAS to SATA forward breakout cables; make sure "forward" is mentioned, since reverse cables look the same but won't work for this case.
  12. Yes. Correct, you should still run xfs_repair first on the emulated disk, as sketched below. That's from the UD plugin, but yes, you can remove them.
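     For reference, a minimal sketch of that check, using the same device naming as the command further down, with the array started in maintenance mode and disk1 as an example:
         xfs_repair -n /dev/mapper/md1   # -n checks only, makes no changes
         xfs_repair /dev/mapper/md1      # run without -n to actually repair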
  13. Any LSI HBA with a SAS2008/2308/3008/3408 chipset in IT mode, e.g., the 9201-8i, 9211-8i, 9207-8i, 9300-8i, 9400-8i, etc., and clones like the Dell H200/H310 and IBM M1015; these last ones need to be crossflashed. Note that it's not clear to me that the controller is what's causing the boot problem, but it's still not recommended.
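     If you want to confirm a card is already in IT mode and the LSI flash utility is available on the system (an assumption, shown for SAS2 chipsets), it can list the installed firmware:
         sas2flash -list   # the firmware/product lines should mention IT rather than IR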
  14. Not that I can see, especially if it also happens in safe mode. One thing you could do is check the cooling; the CPU might be throttling.
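     A quick way to check for throttling from the console, for example:
         watch -n1 'grep MHz /proc/cpuinfo'   # clocks dropping under load suggest throttling
         sensors                              # CPU temps, if lm-sensors is set up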
  15. The disk should mount now; check that the contents look correct, and also look for a lost+found folder and any data inside it.
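     For example, assuming it's disk1 (adjust the disk number):
         ls -lR /mnt/disk1/lost+found
         file /mnt/disk1/lost+found/*   # recovered files are often renamed to inode numbers; this helps identify their types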
  16. Post a screenshot of Main to see the current array status.
  17. I said to proceed with rebuilding on top only if the emulated disk mounted correctly; if it doesn't, there might be better ways, e.g., see if one of the disks mounts with UD.
  18. No, and it always shows which drives will be formatted next to the format button.
  19. And this might be the main issue: there are no errors in the syslog, and the disks can possibly mount but are taking a long time; the server load is very high for what it's doing, and I see no reason for that high load.
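     To narrow down where the load is coming from, the standard tools work from the console, e.g.:
         uptime                         # load averages
         top                            # high 'wa' (iowait) points at storage rather than CPU
         ps aux --sort=-%cpu | head     # heaviest processes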
  20. That should work, but ideally you wouldn't need to run xfs_repair; when you first unassigned the disk, was the disk mounting?
  21. No, since there's no valid parity. That works.
  22. Please post the diagnostics: Tools -> Diagnostics
  23. You can manually set the fs to xfs, or run on the command line:
      xfs_repair -v /dev/mapper/mdX
      Replace X with the correct disk #.
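      For example, for disk 2 that would be:
          xfs_repair -v /dev/mapper/md2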
  24. Disk4 looks freshly formatted, it's completely empty; try to mount the old disk with the UD plugin, and if it doesn't mount, please post diags after trying.