JorgeB

Moderators
  • Posts: 67,092
  • Joined
  • Last visited
  • Days Won: 703
Everything posted by JorgeB

  1. Correct, that's optional; I got it since I found one cheap. My other similar server has a clip in the power supply plug to turn on the second case together with the first one.
  2. Disk looks fine, suggest replacing/swapping cables just to rule them out and rebuilding; you can also run an extended SMART test (see the SMART test sketch after this list).
  3. Any LSI with a SAS2008/2308/3008/3408 chipset in IT mode, e.g., 9201-8i, 9211-8i, 9207-8i, 9300-8i, 9400-8i, etc., and clones like the Dell H200/H310 and IBM M1015; these latter ones need to be crossflashed.
  4. No, you need to run it again with --rebuild-tree (see the reiserfsck sketch after this list).
  5. Pool was raid1, so most data should still be available, though likely not all of it because of the filesystem corruption. Start the pool with just the remaining device; if it doesn't mount, see here for some recovery options.
  6. That would suggest the SSD failed; try it on a different SATA port to confirm.
  7. There are only two options, both are described there.
  8. XFS uses around 1GB per TB for filesystem housekeeping, so that's normal on an empty 8TB disk (see the rough calculation after this list).
  9. Again, the SAS2LP is not recommended and likely the source of the problem.
  10. Yes, you can try copying the data from the emulated disk7 before doing it, but since parity is failing you might not be able to copy it all.
  11. Diags don't show anything of what you describe, only that there is filesystem corruption on disk1. I have no idea if it was correctly rebuilt or not, but since the parity state is unknown there aren't many options now; run a filesystem check on disk1 (see the xfs_repair sketch after this list).
  12. Yes, you can easily recover the array even if the flash drive fails completely and you don't have a backup, but you should have one; it can be done in the GUI by clicking on the flash device and then Backup, and soon it will be possible to auto-backup to the cloud with the My Servers plugin. No, it needs a USB flash drive with a unique GUID.
  13. Tools -> New config -> keep all assignments except parity -> reassign disk7 as parity -> start array to begin parity sync
  14. Correct, but if parity is already present any added disk needs to be cleared for parity to be maintained. You can, but you should really avoid that; it's not recommended for both performance and reliability reasons. Yes, you can do it with or without parity; parity means you'll have redundancy from the beginning, but writes can be slower unless turbo write is used, and as mentioned it will require all disks to be cleared as they are added.
  15. Something is missing in your description:
      - the diags are from after all this; ideally we'd want the ones from before
      - what disk failed and doesn't have any data? All your disks show some data, though disk1 has fs corruption.
      - if there was a parity check after the unclean shutdown it would have been non-correcting, i.e., parity wouldn't be touched.
      - parity is now disabled, did this happen before or after that?
  16. No, in fact you want to avoid RAID controllers; use onboard SATA ports and/or an HBA. Yes, Unraid always needs to format the devices, even when using a supported filesystem, and EXT4 isn't one of them. You just need one free disk: add it as disk1 and format it with Unraid, mount one of your ext4 disks with the UD plugin and copy the data to disk1, then add that now-empty disk as disk2, mount the next ext4 disk as unassigned and repeat (see the copy sketch after this list).
  17. No need to with a single disk, and if you plan to only use one data drive for now, when adding parity it's better to assign it to the parity2 slot to avoid this happening again.
  18. Any LSI with a SAS2008/2308/3008/3408 chipset in IT mode, e.g., 9201-8i, 9211-8i, 9207-8i, 9300-8i, 9400-8i, etc., and clones like the Dell H200/H310 and IBM M1015; these latter ones need to be crossflashed. There are also some LSI HBAs with 16 or even 24 ports, but they are usually more expensive than two or three 8-port HBAs, or one 8-port HBA plus a SAS expander.
  19. You can use a single port to the expander, and the other port can connect to another one if needed, but it will have less bandwidth with a single link. Whether that's enough depends on the HBA (PCIe 2.0 or 3.0, SAS2 or SAS3), the expanders (SAS2 or SAS3) and the number and type of devices used; you can check this thread for some performance numbers (see also the bandwidth estimate after this list).
  20. The problem appears to have started because with a single data disk parity will be a mirror of disk1, and when you started moving the disks around btrfs got confused because there were two filesystems with the same UUID. Power down, disconnect all the other disks except the original disk1 (serial ending in CJY1), power back on, do a new config and assign it as disk1, then start the array and it should mount normally; if it doesn't, post new diags. If it does, clear/wipe the other disk before adding it back as parity.
  21. If you really need 45 devices I would use one PCIe 3.0 LSI HBA connected to two SAS2 or SAS3 expanders; each expander can usually handle 20 to 36 devices, depending on the model.
  22. It's not being detected, so it's a hardware problem, but I can't say if it's the controller or the board; try a different PCIe slot if available, failing that try it on a different board.
  23. You'll need to back up the cache pool and re-format it, but I recommend only doing that after replacing the cables.
  24. Assuming you also replaced the cables for cache1 please post new diags.
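
For item 2, a minimal sketch of starting an extended SMART test and reading the results back, assuming smartmontools is available on the system; the device path used here is only an example, not taken from the poster's diagnostics.

```python
# Start an extended (long) SMART self-test and later read the results back.
# Assumes smartmontools is installed; "/dev/sdb" is an example device path.
import subprocess

DEV = "/dev/sdb"  # replace with the actual disk device

# Kick off the extended self-test; it runs on the drive itself in the background.
subprocess.run(["smartctl", "-t", "long", DEV], check=True)

# Once the test has finished (it can take many hours), dump the attributes and test log.
report = subprocess.run(["smartctl", "-a", DEV], capture_output=True, text=True)
print(report.stdout)
```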
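For item 4, a hedged sketch of re-running the check with --rebuild-tree; /dev/md1 (disk1 with the array started in maintenance mode) is only an example target, and --rebuild-tree is a destructive repair that should only be run once the plain check has asked for it.

```python
# Re-run reiserfsck with --rebuild-tree after the normal check pass requested it.
# "/dev/md1" is an example (disk1 with the array in maintenance mode); adjust it.
# Note: reiserfsck asks for an interactive "Yes" before it starts rewriting the tree.
import subprocess

DEV = "/dev/md1"

subprocess.run(["reiserfsck", "--rebuild-tree", DEV], check=True)
```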
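For item 8, the arithmetic behind the answer, using the roughly 1GB-per-TB figure from the post; the exact overhead varies a little with mkfs.xfs options.

```python
# Rough XFS housekeeping overhead on an empty disk, using the ~1 GB per TB
# rule of thumb from the post; exact numbers depend on mkfs.xfs options.
DISK_TB = 8
OVERHEAD_GB_PER_TB = 1

used_gb = DISK_TB * OVERHEAD_GB_PER_TB
print(f"An empty {DISK_TB}TB XFS disk will show roughly {used_gb}GB as used")
```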
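For item 11, a sketch of checking disk1's filesystem from the command line; /dev/md1 is an example for disk1 with the array started in maintenance mode, and the same check can also be run from the GUI.

```python
# Read-only filesystem check of disk1; "/dev/md1" corresponds to disk1 with the
# array started in maintenance mode (example path, adjust to the affected disk).
import subprocess

DEV = "/dev/md1"

# -n = no-modify: report corruption without changing anything on disk.
check = subprocess.run(["xfs_repair", "-n", DEV], capture_output=True, text=True)
print(check.stdout)
print(check.stderr)

# If problems are reported, the actual repair is the same command without -n;
# xfs_repair may then ask for -L if the journal is dirty, so read its output first.
```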
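For item 16, a hedged sketch of one copy pass from an ext4 disk mounted with the Unassigned Devices plugin onto the first array disk; both mount point names are made-up examples and rsync is assumed to be present.

```python
# One migration pass: copy the contents of an ext4 disk mounted via the
# Unassigned Devices plugin onto an array disk, then repeat for the next disk.
# Both mount points are example names; adjust them to the real ones.
import subprocess

SRC = "/mnt/disks/old_ext4/"   # ext4 disk mounted with the UD plugin (example name)
DST = "/mnt/disk1/"            # array disk already formatted by Unraid

# -a preserves permissions and timestamps; trailing slashes copy contents, not the folder.
subprocess.run(["rsync", "-a", "--progress", SRC, DST], check=True)
```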
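For item 19, a back-of-the-envelope estimate of what a single HBA-to-expander link leaves per drive when everything reads at once (e.g. during a parity check); the figures are approximate usable rates after 8b/10b encoding and the drive count is an example.

```python
# Rough per-drive bandwidth behind a single SAS2 wide link when all drives
# read at the same time (parity check / rebuild). Approximate usable rates.
SAS2_LANE_MB_S = 600      # ~6 Gb/s per lane -> roughly 600 MB/s usable
LANES_PER_LINK = 4        # one SFF-8087/8643 port is a 4-lane wide link
DRIVES = 20               # example drive count behind the expander

link_mb_s = SAS2_LANE_MB_S * LANES_PER_LINK
print(f"Single SAS2 wide link: ~{link_mb_s} MB/s total")
print(f"~{link_mb_s / DRIVES:.0f} MB/s per drive with {DRIVES} drives active")
```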