Everything posted by JorgeB

  1. You can try to fix this with gdisk:

     gdisk /dev/sdX

     It should show something like this:

     GPT fdisk (gdisk) version 1.0.5

     Caution: invalid main GPT header, but valid backup; regenerating main header from backup!

     Warning: Invalid CRC on main header data; loaded backup partition table.
     Warning! One or more CRCs don't match. You should repair the disk!
     Main header: ERROR
     Backup header: OK
     Main partition table: OK
     Backup partition table: OK

     Partition table scan:
       MBR: protective
       BSD: not present
       APM: not present
       GPT: damaged

     ****************************************************************************
     Caution: Found protective or hybrid MBR and corrupt GPT. Using GPT, but disk
     verification and recovery are STRONGLY recommended.
     ****************************************************************************

     Command (? for help):

     Type w at the Command prompt and press Enter, then y and Enter:

     Command (? for help): w

     Final checks complete. About to write GPT data. THIS WILL OVERWRITE EXISTING PARTITIONS!!

     Do you want to proceed? (Y/N): y
     OK; writing new GUID partition table (GPT) to /dev/sdX.
     The operation has completed successfully.

     Reboot and try to mount again. Depending on what exactly was damaged it might work; if it doesn't, there's another way: rebuilding each disk, one or two at a time, since Unraid will recreate the partitions. But before trying that you first need to fix both disabled/invalid disks, then ask for more details.
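     After rebooting, you can confirm the repair non-interactively (a quick check I'm adding, not part of the original post); the partition table scan should now report "GPT: present" instead of "damaged":

       gdisk -l /dev/sdX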
  2. But in case it wasn't clear, you now need to assign a 4TB (or larger) disk there.
  3. IIRC there's an old bug where, if you cancel a rebuild while the disk status is "invalid" (as opposed to "disabled"), you can assign a smaller disk.
  4. The weird errors on disk19 appear to be because you assigned a smaller disk than was there before, though I'm not sure how you did that:

     Aug 12 20:12:41 MSSTORAGE kernel: md: disk19 write error, sector=7814036864
     Aug 12 20:12:42 MSSTORAGE kernel: attempt to access beyond end of device

     Current:
       [id] => ST3000DM001-1CH166_W1F2G6PP
       [size] => 2930266532

     Previous:
       [idSb] => ST4000VN008-2DR166_ZGY92QQX
       [sizeSb] => 3907018532
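     As a sanity check on those numbers (my arithmetic, not from the original post): the failing write targets sector 7814036864, and 7814036864 × 512 B ≈ 4.0 TB, beyond the end of the 3TB replacement (2930266532 KiB ≈ 3.0 TB) but within the original 4TB disk (3907018532 KiB ≈ 4.0 TB), hence the "attempt to access beyond end of device".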
  5. Any of the recommended LSI HBAs will work; you can connect one cable for single link or two cables for dual link.
  6. You should first attempt to solve the other issues; otherwise this might make things worse.
  7. It's OK, but if it's a correcting check and there is a RAM issue it can corrupt parity; there's no problem if it's non-correcting.
  8. No. It is possible to boot to memtest by changing the boot menu, but then you couldn't see the results anyway; you need a monitor/IPMI.
  9. No, it's a known issue in some cases, fixed for next rc.
  10. There's filesystem corruption on this pool:

      Aug 13 00:27:49 Skynet kernel: BTRFS error (device sdb1): block=1476820992 write time tree block corruption detected

      This error usually indicates bad RAM or other kernel memory corruption; you should run memtest.
  11. Settings -> Global Share Settings -> Enable user shares
  12. If just reformatting doesn't fix the problem you can type (with the array stopped):

      dd if=/dev/zero of=/dev/sdX bs=4k count=1000

      Replace X with the correct letter, and double-check you're doing it to the correct disks; any data on them will be lost. Then start the array and format.
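      Before running the dd, one way to make sure sdX really is the disk you mean (a generic safety check, not from the original post) is to match the letter against the disk's size, model and serial:

        lsblk -o NAME,SIZE,MODEL,SERIAL /dev/sdX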
  13. Yep, if it doesn't, clear the beginning of the disks with dd and re-format; this could be the result of garbage on those disks from previous IDs, RAID signatures, etc.
  14. Just noticed you're still syncing parity; in that case you can cancel and use sdX1.
  15. Data should be fine, but if you leave them like that you can run into trouble mounting them in the future, even if you just forget to set the fs or need to mount them with UD. Try this, it should be safe, but first do it on one of the empty disks just in case. Start the array in maintenance mode, then type:

      xfs_admin -U generate /dev/mdX

      Replace X with the disk number, e.g. md8, then start the array in normal mode and post the output of blkid again.
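      A minimal sketch for disk8, assuming it's one of the affected disks (the follow-up blkid on the single device is my addition, to confirm the new UUID took):

        xfs_admin -U generate /dev/md8
        blkid /dev/md8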
  16. Yeah, those three disks (sdf, sdg and sdh) don't have a filesystem UUID. Was anything done differently with them when they were added to the array and formatted?
  17. There were issues with the NVMe device just before the crash, though it's unclear if the crash was related:

      Aug 13 02:26:53 Tower kernel: nvme nvme0: frozen state error detected, reset controller
      Aug 13 02:26:53 Tower kernel: blk_update_request: I/O error, dev nvme0n1, sector 1595697920 op 0x0:(READ) flags 0x80700 phys_seg 32 prio class 0
      Aug 13 02:26:54 Tower kernel: pcieport 0000:00:01.0: AER: Root Port link has been reset
      Aug 13 02:26:54 Tower kernel: pcieport 0000:00:01.0: AER: device recovery successful

      Leave the syslog server enabled and see if the same thing happens again before the next crash.
  18. That's good, but it means there's a problem with the auto function, since it wasn't detecting any of the supported filesystems. It's not just Unraid, since UD also didn't detect a valid filesystem on those disks. Please also post the output of:

      blkid

      Yes, that can happen with previously used zfs drives, since they use 2 partitions and Unraid only wipes one of them, but it's harmless.
  19. Why did you replace disk17 now? The log is a mess: there are hardware errors for one disk and strange errors with disk19, and since you didn't start the array in normal mode the disks weren't mounted. For now, stop the rebuild and post the output of:

      fdisk -l /dev/sdX

      for all the invalid-partition disks.
  20. Strange, it's like there's no valid filesystem on those disks. Please post the output of:

      fdisk -l /dev/sdX

      for the three unmountable 2TB disks.
  21. Depends on the state of the failing device. Also, if using the array as the destination, make sure you rsync to a disk, or use /mnt/user0/share (see the sketch below). Make a new pool of the remaining device only and format it; you can wipe it first.
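      A minimal sketch of what that could look like, with hypothetical pool and share names:

        # copy to a specific array disk...
        rsync -avX /mnt/oldpool/share/ /mnt/disk1/share/
        # ...or to the user share path that writes only to the array
        rsync -avX /mnt/oldpool/share/ /mnt/user0/share/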
  22. The drive is formatted with type 2 protection, see: https://forums.unraid.net/topic/110835-help-with-a-sas-drive/
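      The usual remedy (my suggestion, assuming the sg3_utils package is available; see the linked thread for the full discussion) is to low-level format the drive without protection information, which erases everything on it and can take many hours:

        sg_format --format --fmtpinfo=0 /dev/sdX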
  23. User shares are just top-level folders; you can copy to one and a share will be created, with default settings.
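      For example (hypothetical share name), creating a top-level folder on any data disk is enough for a new share to appear:

        mkdir /mnt/disk1/media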