About johnnie.black
  1. Both disks I mentioned are on the onboard SATA controller, not the HBA.
  2. There's what appears to be a connection problem with disk13; check/replace the cables. There's also a similar problem with disk9, though to a much lesser extent.
  3. Any LSI HBA with a SAS2008/2308/3008/3408 chipset in IT mode, e.g., 9201-8i, 9211-8i, 9207-8i, 9300-8i, 9400-8i, etc., and clones like the Dell H200/H310 and IBM M1015; these latter ones need to be crossflashed.
  4. https://docs.broadcom.com/docs/9300_16i_Package_P16_IT_FW_BIOS_for_MSDOS_Windows.zip
  5. The disabled disk isn't showing a full SMART report; it could be going bad, or not. Try getting a manual SMART report on the console: smartctl -x /dev/sdX
  6. You need to subtract the parity size: with 3 devices the correct free space is 2/3 of the displayed free size, with 4 devices it's 3/4, etc.
  7. It's safer to keep the metadata raid1. The GUI won't report the correct free size since it doesn't account for parity, but at least when used with UD the free space will be recalculated based on the used space, so it gets closer to correct as the pool fills up. If you want to know the real free space, just subtract the space needed for parity. For example, with a raid5 pool of three 1TB devices:
     - GUI shows 3TB free -> real free space is 2TB
     - GUI shows 1.5TB free -> real free space is 1TB
     - GUI shows 450GB free -> real free space is 300GB
     Used space will always be correct on the GUI.
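The free-space arithmetic in points 6 and 7 can be sketched as a small helper. This is just an illustration of the (n - parity) / n rule described above; the function name and parameters are my own, not anything from Unraid:

```python
def real_free_space(displayed_free, n_devices, parity_devices=1):
    """Subtract the parity share from the free size the GUI displays.

    The GUI counts all devices, so for a single-parity pool the real
    free space is (n - parity) / n of the displayed value.
    """
    return displayed_free * (n_devices - parity_devices) / n_devices

# Three 1TB devices in raid5:
print(real_free_space(3.0, 3))   # displayed 3TB  -> real 2.0TB
print(real_free_space(1.5, 3))   # displayed 1.5TB -> real 1.0TB
print(real_free_space(450, 3))   # displayed 450GB -> real 300.0GB
```

With 4 devices the same call gives 3/4 of the displayed value, matching point 6.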
  8. Try toggling Settings -> Display Settings -> Display world-wide-name in device ID. You should also update the HBA firmware; it's very old.
  9. You could do a new config and trust parity, but please post diags first, since I would expect the LSI to detect the disks as they were.
  10. This implies device sdg is mounted, and it should be an additional device, not the original device mounted at /mnt/disks/nas-ssd-pool.
  11. XFS uses around 8GB on an empty 8TB drive for metadata and filesystem housekeeping.
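A rough sketch of that overhead, assuming it scales roughly linearly (the ~0.1% ratio is an approximation derived from the 8GB-per-8TB figure above, not an exact XFS number):

```python
XFS_OVERHEAD = 8 / 8000  # ~8GB of metadata per 8TB drive (approximate ratio)

def xfs_usable_gb(drive_size_gb):
    """Estimate usable space on a freshly formatted XFS drive."""
    return drive_size_gb * (1 - XFS_OVERHEAD)

print(xfs_usable_gb(8000))  # 8TB drive -> roughly 7992 GB usable
```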
  12. Settings -> Disk Settings -> change the default filesystem to a non-encrypted one.
  13. With the array stopped, disable the VM and Docker services (assuming they are using cache) and unassign all cache devices. Start the array; this will make Unraid "forget" the cache config. Then stop the array, re-assign the original cache devices (you will no longer get the "data will be lost" warning), re-enable the services, and start the array.
  14. https://forums.unraid.net/topic/79611-odd-vm-issue/?do=findComment&comment=764463