Everything posted by JorgeB

  1. You should update the LSI firmware; it's running a very old version that had this issue, and a driver change likely also corrects this now. To fix the problem you can do a new config: Tools -> New Config -> Retain current configuration: All -> Apply, then check that all assignments are correct, both array and cache, check "parity is already valid", and start the array.
  2. Edit config/ident.cfg on the flash drive and change USE_SSL from "yes" to "no", then reboot and access the server by IP (there's a sketch of this after the list).
  3. You're not the first to have issues with this server; it looks like it doesn't work well with the newer kernel. I would suggest going back to v6.9.2 for now; unfortunately some data loss may already have happened.
  4. When checking in Windows it's best to do it with diskpart; Disk Management isn't always reliable when there are very small partitions (see the diskpart sketch after this list). This is the same flash drive:
  5. The problem is not with cache, it's the single btrfs array disk, disk5; that's the one that should be converted to XFS or encrypted btrfs. Note that both require formatting the disk, i.e., all data there will be lost.
  6. Assuming the pool is redundant, when a device fails or drops offline the other one continues to work, so it's important to monitor the pool in order to act as soon as possible. If the device failed you can replace it to rebuild the mirror; if it dropped offline you can bring it online and then run a scrub to put it back in sync (see the scrub sketch after this list). Note that btrfs can only repair the data if COW is enabled; for any shares with COW disabled, and this was the default for the system and domain shares before 6.10.0, it won't be able to sync the dropped device, due to NOCOW also disabling data checksums. There are some corner cases, especially when a device drops and comes back online, that can cause some issues, but those can usually be solved, and that's also why it's important to monitor the pools, to minimize that risk.
  7. You're welcome, make sure you monitor the pool so you'll be notified if there are more issues with one of the devices dropping.
  8. Check that it's well seated; if errors persist you likely have a bad CPU.
  9. This is a known issue when there's only one assigned btrfs array drive: it creates an invalid btrfs filesystem on parity that confuses the pool. The workaround is to convert that disk to XFS like the other ones, add more btrfs array devices, or, if you want one and only one btrfs array device, use encrypted btrfs.
  10. It's normal, SAS SMART is very different from ATA SMART. From my experience, which is not much with SAS, you want to monitor "Elements in grown defect list" and "Total uncorrected errors", which should both stay at 0 (see the smartctl sketch after this list).
  11. You can copy locally using for example Midnight Commander or the new Dynamix File Manager; you'd need to start the array, so: You can also share the UD disk over SMB and copy to another PC; check the first couple of posts in the UD support thread for some help with that. If you rebuild now using the old disks you won't get the data that you see in UD, only what is showing on the emulated disks; if that's what you want I can also post instructions.
  12. Once a disk is disabled it must be rebuilt; if the emulated disk is now mounting and the contents look correct you can rebuild on top, assuming the disk is healthy. https://wiki.unraid.net/Manual/Storage_Management#Rebuilding_a_drive_onto_itself
  13. You're more likely to get help if you post in the existing support thread:
  14. The kernel is very different, so there could be changes that affect that; IMHO it would be worth trying, or GPT with a single partition. GPT doesn't require more than one, it depends on how it was created.
  15. Maybe because it's GPT? @GrimD did you try booting UEFI with an MBR partition?
  16. What I wrote above... Both disks look healthy.
  17. Disk5 read errors are logged as a disk problem, you should run an extended SMART test (see the sketch after this list). Also a good idea to run memtest on the RAM.
  18. IMHO moving everything to the array and back is overkill; just make sure anything important like appdata is backed up, since you should always have backups of anything important and redundancy is not a substitute. When you add the device it will keep the existing pool data, and you don't even need to shut down Docker/VMs, they can stay online; data is just replicated to the other device.
  19. That's good news! UD historical devices don't really matter for this, but you can remove it now or later. I assume you plan to re-add the other device to the pool? If yes, first make sure backups are up to date, then you'll need to wipe the other device before adding it back to the pool; you can do it like this (see the blkdiscard sketch after this list): -check that array auto start is disabled, shutdown the server -reconnect the other NVMe device -power on the server, don't start the array -wipe the unassigned device with blkdiscard /dev/nvme#n1 (replace # with the correct number; not sure if 6.9.2 needs -f for blkdiscard if data is detected, if yes use it) -assign it back to the pool -start the array to begin the balance
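
A minimal sketch for post 2, assuming the flash drive is mounted at /boot (the Unraid default) and that the setting is stored as USE_SSL="yes" in config/ident.cfg; you can also just edit the file by hand from any machine that can read the flash drive:

    # change USE_SSL from "yes" to "no", then reboot and access the server by IP
    sed -i 's/^USE_SSL="yes"/USE_SSL="no"/' /boot/config/ident.cfg
    reboot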
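
For post 4, a quick way to look at the flash drive's partitions with diskpart from an elevated Windows command prompt (the disk number below is hypothetical, pick the one that matches the flash drive):

    diskpart
    DISKPART> list disk          (find the flash drive by its size)
    DISKPART> select disk 1      (hypothetical number, use yours)
    DISKPART> list partition     (shows every partition, including very small ones)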
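
For post 6, a sketch of putting a dropped pool device back in sync, assuming the pool is mounted at /mnt/cache (adjust to your pool name):

    # confirm both devices are visible again
    btrfs filesystem show /mnt/cache
    # scrub verifies checksums and repairs from the good copy where COW/checksums are enabled
    btrfs scrub start /mnt/cache
    # check progress/results
    btrfs scrub status /mnt/cache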
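
For post 10, the two SAS counters can be checked with smartctl; the device name is hypothetical:

    # SAS SMART output; look for "Elements in grown defect list" and the
    # "Total uncorrected errors" column in the error counter log
    smartctl -x /dev/sdX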
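
For post 17, starting the extended self-test from the command line, again with a hypothetical device name:

    # start the extended (long) self-test; it runs in the background and can take hours
    smartctl -t long /dev/sdX
    # view the result once it finishes
    smartctl -a /dev/sdX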
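
For post 19, a sketch of the wipe step with a hypothetical device number; double-check you have the right /dev/nvme#n1 before running it, since it discards everything on that device:

    # identify the correct NVMe device first
    lsblk -o NAME,SIZE,MODEL
    # discard the whole device; add -f if blkdiscard refuses because existing data is detected
    blkdiscard /dev/nvme0n1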