Everything posted by JorgeB

  1. You can try this and then post that log after a crash.
  2. It still did something to the partitions; for the array devices you should be able to rebuild them to recreate the correct partitions, like this. The cache appears to be gone, since it's not finding a valid filesystem; you can try #btrfs on freenode IRC, maybe they can help.
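     If you want to try recovering the cache data yourself first, something like this is a reasonable starting point (a sketch; sdX1 is a placeholder for the actual cache partition, and the destination path is just an example):
     btrfs check /dev/sdX1                          # read-only check, reports the damage
     btrfs restore -v /dev/sdX1 /mnt/disk1/restore  # copies whatever is recoverable to another disk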
  3. Yes, that's it, just replace 1 with the number of the disk you want to rebuild; 29 is for parity2, which also needs to be included even when there isn't one.
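     For reference, the typical form of the command is shown below (a sketch; adjust the slot numbers for your config and run it with the array stopped):
     mdcmd set invalidslot 1 29   # 1 = data disk to rebuild, 29 = parity2 slot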
  4. Possibly bad sector(s) got remapped, or it's just working for now; these can be intermittent sometimes. It should be, but note that only one of them can be mounted at a time, since they will have a duplicate UUID.
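     You can confirm the duplicate UUID with blkid (device names here are placeholders):
     blkid /dev/sdX1 /dev/sdY1   # both partitions will show the same UUID= value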
  5. Parity just needs to be at least as large as the largest data device; you were talking about 1TB SSDs for data, so you only need a 1TB SSD for parity, and yes, a fast NVMe device with good endurance would be better.
  6. That means something went wrong with the encryption. That's normal; if the device can't be decrypted nothing else will work. Make sure you use encryption only if really needed, if not it's just one more thing that can go wrong.
  7. It's missing the partition #, the correct command is:
     xfs_repair -v /dev/sdb1
     Also, only use -L if needed.
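     When in doubt, do a no-modify dry run first:
     xfs_repair -n /dev/sdb1   # -n only reports problems, changes nothing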
  8. Yep, this is usually an indication of bad RAM.
  9. It won't, there are some recovery options here.
  10. If you didn't format disk11, parity is still valid without it, so you can use the invalid slot command; but since you didn't post the diags more info is needed: what Unraid release are you using, and does it have single or dual parity?
  11. A single raid10 pool should give you better performance and also better distribute the writes among all devices; on the flip side, any pool issue will affect all that data. But as long as you have backups I would probably go with a single raid10 pool, or even raid5 if you have really good backups.
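     For an existing btrfs pool the profile change is done with a balance; a sketch, assuming the pool is mounted at /mnt/cache:
     btrfs balance start -dconvert=raid10 -mconvert=raid10 /mnt/cache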
  12. That's the normal way; parity swap is for when you have a disabled data disk and a spare larger than parity. It could also be used for an upgrade, but the array would be offline during the parity-copy portion, so usually there's not much point.
  13. Please post the diagnostics: Tools -> Diagnostics
  14. Unless ECC can be disabled in the BIOS, Memtest won't find any errors if ECC is correcting them; check the system event log in the BIOS/IPMI, there should be more info there.
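     If the OS can reach the BMC, ipmitool can read the same event log (assuming ipmitool is available on your system):
     ipmitool sel list   # prints the system event log, including corrected-ECC entries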
  15. e1000e: probe of 0000:00:1f.6 failed with error -2
      The NIC is failing to initialize; you can try -beta30 to see if it works, but it's possibly not a driver issue.
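      Standard tools can give more detail on the probe failure:
      dmesg | grep -i e1000e           # full driver messages around the failure
      lspci -nnk | grep -iA3 ethernet  # confirms the NIC shows up on the PCI bus and which driver binds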
  16. Very unlikely, but easy to test, just swap cables with another disk.
  17. Check that the 1TB NVMe device is still nvme0n1 and please post the output of: cryptsetup luksOpen /dev/nvme0n1p1 nvme0n1p1
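      A quick way to check the current device naming (NVMe names can change between boots):
      lsblk -o NAME,SIZE,TYPE,MOUNTPOINT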
  18. Yes, in RAID1 mode (default) the max usable size is 480GB. An existing file can only be on the cache or on one of the array disks, not both; any changes to an existing file will be done where the file resides. NEW files will be written to the array (if the use cache setting for that share and minimum free space are correctly configured); if an existing file grows beyond the usable cache space it will just run out of space.
  19. The "Up to 6Gb/s" just means a SAS2/SATA3 link for the devices; that's never the bottleneck with disks, even 3Gb/s (SATA2) would be enough for most disks. See here for some benchmarks to give you a better idea of the possible performance increase from going with a PCIe 3.0 HBA.
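      To see what a disk can actually sustain, and compare it against the link speed, a simple sequential read test works (sdX is a placeholder):
      hdparm -t /dev/sdX   # buffered sequential read test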
  20. If/when it happens I expect it to be the same as btrfs, i.e., it can be used as an independent filesystem for data devices and single/mirror/raid for pools, so all of Unraid's array flexibility options can be kept.
  21. It should be, but only someone who happens to be using the same server with the same HBA can say for sure.
  22. Replace it, or live with it, not much else you can do.
  23. I don't see any crash there, just a failure to unmount the disks because something was still using them; you can try this.
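      To find what's still holding a disk, something like this usually works (the path is an example, use the disk that fails to unmount):
      lsof /mnt/disk1       # lists open files on that filesystem
      fuser -mv /mnt/disk1  # lists the processes using the mount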