Everything posted by JorgeB

  1. It will work, though there's no need to assign parity if you're not going to sync it; only assign it at the end.
  2. Yes, I missed that since it's at the end of the df list. The disk was formatted, so it's mountable, but formatting deletes all the data on a disk and updates parity accordingly.
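     A quick way to confirm this from the console (a sketch; disk3 is assumed from the thread, adjust the disk number as needed):
         df -h /mnt/disk3   # a freshly formatted disk mounts fine but shows almost nothing used, only filesystem overhead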
  3. No valid filesystem is being detected on disk3, and rebuilding an unmountable disk will result in the same thing, since the rebuild just recreates the disk exactly as it is now, unmountable filesystem included.
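     To see whether any filesystem signature is left on the disk, something like this can be run from the console (the device is a placeholder, use the disk's actual partition):
         blkid /dev/sdX1    # prints the detected filesystem type, or nothing if no valid signature is found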
  4. Leave only the 3 devices and you can try these options, though I'm not sure any of them will work in this case; if they don't, the best bet is to ask for help on IRC as mentioned in the link.
  5. Do you mean the original device? If it's the original device, try adding it back, but you need to do this: stop the array, unassign all cache devices, start the array to make Unraid "forget" the current cache config, stop the array, reassign all cache devices (there can't be an "All existing data on this device will be OVERWRITTEN when array is Started" warning for any cache device), then start the array. With the new device you'll just need to format, and all data will be lost.
  6. The running VM appears to be causing a very high system load; try turning it off (or leaving it off) to see if it makes a difference.
  7. No, it's not mounting, and with an error that doesn't make sense: "super_num_devices 3 mismatch with num_devices 3 found here". It basically means it's expecting 3 devices and finding 3 devices, and that somehow doesn't match. Did you still have anything important on cache? The docker image can easily be recreated; as for Plex, was that the database? I don't use Plex, so I'm not sure if that is easily recreated or if you have a backup.
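     If you want to look at those values directly, something like this can be run from the console (device names are placeholders, not taken from your diags):
         btrfs filesystem show                                             # lists the devices btrfs can actually see for the pool
         btrfs inspect-internal dump-super /dev/sdX1 | grep num_devices    # shows the num_devices value stored in that device's superblock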
  8. I meant: does the pool mount without that SSD assigned?
  9. Something very weird is going on with that pool. If you unassign cache1 again, does it mount? If it doesn't, post diags.
  10. OK, now add back the other SSD and this time it should hopefully work.
  11. You can tail the syslog or enable the syslog server, then post that together with the diagnostics.
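      For the tail option, from the console (the path is the standard Unraid syslog location):
          tail -f /var/log/syslog    # leave this running and capture the output when the problem happens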
  12. If it's working, keep it; just make sure you always have up-to-date backups.
  13. It's not, possibly just bad RAM. I'd recommend you get one.
  14. Checksum errors suggest a hardware problem, like bad RAM. You are also getting multiple segfaults, which suggest the same. Since you didn't post the diags we can't see if the RAM is overclocked; check here, and if it's not, run memtest.
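      One way to check the running RAM speed from the console (a sketch, not taken from your diags):
          dmidecode -t memory | grep -i speed    # compare the configured speed against the modules' rated non-overclocked spec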
  15. Try --rebuild-tree, and also with --scan-whole-partition added.
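      A sketch of what those reiserfsck runs look like (the device is a placeholder; on Unraid this is normally run against the disk's md device with the array started in maintenance mode so parity stays in sync):
          reiserfsck --rebuild-tree /dev/mdX
          reiserfsck --rebuild-tree --scan-whole-partition /dev/mdX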
  16. That's it. You can; it would be best without doing it, but if rebuilding the superblock isn't working it may be the only option.
  17. What Unraid release? Some older reiserfsprogs releases have bugs.
  18. That's a smartmontools issue; it worked with 7.0 and it should go back to normal in a future release.
  19. Current xfs uses a lot more space for metadata, so there will likely be some data loss at the beginning of the disk, but it's still worth a try: change the disk fs back to reiserfs, then rebuild the superblock, then rebuild the tree.
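      A sketch of that sequence from the console, once the disk's filesystem is set back to reiserfs (the device is a placeholder, typically the disk's md device in maintenance mode):
          reiserfsck --rebuild-sb /dev/mdX                            # rebuild the superblock, answering the prompts about the original filesystem parameters
          reiserfsck --rebuild-tree --scan-whole-partition /dev/mdX   # then rebuild the tree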
  20. Stats with v6.8.x are wrong when using 3 devices in raid1, both used and free; the cache is actually full, so you need to free up some space and then run the device delete again.
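      To see the real allocation and retry the removal (a sketch; the mount point is the standard cache path and the device is a placeholder):
          btrfs filesystem usage /mnt/cache          # shows the real allocated/free space per device
          btrfs device delete /dev/sdX1 /mnt/cache   # retry removing the device once enough space is free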
  21. Macvlan call traces are usually the result of having dockers with a custom IP address; more info below:
  22. Unraid doesn't currently spin down SAS devices; it should in the near future. For now you can try this:
  23. The notification is normal; what's not normal is that it's not trying to delete/replace the missing device. Try this: stop the array, unassign cache1 again, start the array and type:
      btrfs dev del missing /mnt/cache
      Post new diags after cache pool activity stops.