Everything posted by JorgeB

  1. Probably best, since there are far fewer Mac users here; you're more likely to get their attention that way.
  2. Try booting in UEFI mode; you need to enable it when creating the boot device, or go to the flash drive and rename the EFI- folder to EFI (see the UEFI sketch after this list).
  3. File system corruption is usually not a device problem, though it can be; posting the diags might give some clues.
  4. It's a very important thing to keep in mind with flash devices: cheaper devices can't sustain high speeds for long, and it usually also depends on how full they are. Here's an example from my test server with a cheap TLC SSD, one screenshot with the SSD like new after a full device trim and another at 30% into the rebuild, and it stays like that until the end. Good SSDs like the 860 EVO, MX500, etc. can always sustain good write speeds, and large-capacity models are also usually faster at writing than small-capacity models, since they can write in parallel to the various NAND chips.
  5. Please post the diagnostics: Tools -> Diagnostics
  6. That's what you should do: https://wiki.unraid.net/Check_Disk_Filesystems#Checking_and_fixing_drives_in_the_webGui
  7. You can, but you need to specify the correct device, e.g.: xfs_repair -v /dev/mapper/mdX (see the xfs_repair sketch after this list). You can also use the GUI: https://wiki.unraid.net/Check_Disk_Filesystems#Checking_and_fixing_drives_in_the_webGui
  8. About 400MB/s, but it mostly depends on the SSDs/controllers used; on my test server I can get up to 500MB/s with small SSD arrays of up to 8 devices.
  9. Yeah, it's not, but I tested it myself some time ago and am sure optical drives and empty card readers don't count.
  10. Trim (manual or plugin) won't work on any array devices, and that's expected. If anything slows down from the lack of trim it would be writes; reads should always be good and constant. The only issue with SSDs in the array is the lack of trim, but if you use reasonable-quality SSDs you should be fine; I'd also recommend using a faster/higher-endurance SSD for parity, like an NVMe device (see the trim sketch after this list). I've been using a small SSD array for a few months and it's still performing great, basically the same as when new, and I've written every SSD about 4 times over (parity 20 times over).
  11. That's weird; please post diags if you didn't reboot afterwards. My guess is that it recovered/replaced the damaged superblock and is now balancing the data to the other device; if it's showing data and your pool was redundant, all data should be correct (see the balance sketch after this list).
  12. See here: Ryzen with overclocked RAM (which you have) is known to corrupt data, resulting in sync errors.
  13. I believe they did in the past, but they haven't counted for a long time, at least 1 or 2 years.
  14. The diags are from after rebooting, so we can't see what happened, but almost certainly the disk was formatted during or after the rebuild; does that ring a bell? There would have been a warning like this one:
  15. Not following; if you have a backup, restore it to the new pool, if you don't you'll need to re-create them.
  16. Please post the diagnostics: Tools -> Diagnostics
  17. This would suggest the drive was formatted; please post the diagnostics, ideally before rebooting.
  18. Use the Krusader docker or Midnight Commander (mc on the console); it's much easier (see the mc example after this list).
  19. I recently traded messages with Tom about this, but it won't hurt to add a post to the feature request.
  20. Optical drives don't count, and the same goes for empty card readers.
  21. The NICs are detected but there's an error and the driver is not loaded:
      Jul 2 09:56:38 Kenchitaru-Serv kernel: igb 0000:09:00.1: The NVM Checksum Is Not Valid
      Jul 2 09:56:38 Kenchitaru-Serv kernel: igb: probe of 0000:09:00.1 failed with error -5
      A quick Google search found this, see if it helps: https://superuser.com/questions/1197908/network-eth0-missing-the-nvm-checksum-is-not-valid-with-asus-maximus-ix-hero
  22. Looks OK to me; the docker image should be recreated. The problem was created after starting the array with a missing cache device, which removed one of the devices from the pool; the remaining device should still work, assuming it was a redundant pool, but there's some issue with the superblock. I've already asked LT not to allow the array to auto-start with a missing cache device, and hopefully that will be implemented soon, but for now anyone using a cache pool is best off disabling array auto-start and always checking that every device is present before starting. As for the remaining cache device's superblock issue, I'm not sure why it happened, it's not common; a btrfs maintainer might be able to help restore it without data loss, you'd need to ask for help on IRC or the mailing list, both are mentioned in the FAQ linked earlier.
  23. Yep, as suspected it doesn't look good: that device was removed from the pool and the other one has a damaged superblock. You can try these recovery options against both devices, but if anything works it should be with sdb (see the btrfs recovery sketch after this list).
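
A minimal sketch for the UEFI rename in item 2, assuming a standard Unraid install where the flash drive is mounted at /boot (the EFI- folder name disables UEFI boot, EFI enables it):

    ls /boot                    # check whether the folder is currently named EFI- or EFI
    mv /boot/EFI- /boot/EFI     # rename to enable UEFI booting
    # mv /boot/EFI /boot/EFI-   # rename back to disable it again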
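
A hedged sketch of the command-line check from item 7, assuming the array is started in maintenance mode and disk X is the one to check (use /dev/mdX for an unencrypted disk, /dev/mapper/mdX for an encrypted one):

    xfs_repair -nv /dev/mapper/mdX     # dry run: -n reports problems without writing anything
    xfs_repair -v /dev/mapper/mdX      # actual repair, only after the dry run output looks sane
    # xfs_repair -vL /dev/mapper/mdX   # last resort: -L zeroes a dirty log and can lose recent metadata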
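
As a companion to item 10: trim won't run against array devices, but on a pool/cache SSD a manual trim is just fstrim against the mount point. A minimal sketch, assuming the pool is mounted at /mnt/cache (adjust the path to your pool name):

    fstrim -v /mnt/cache   # -v prints how many bytes were trimmed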
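
For item 11, a hedged way to confirm whether the pool really is balancing data onto the remaining device, again assuming it's mounted at /mnt/cache:

    btrfs filesystem show /mnt/cache   # devices in the pool and how much data each holds
    btrfs balance status /mnt/cache    # whether a balance is running and its progress
    btrfs filesystem df /mnt/cache     # allocation profile (single, raid1, etc.)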
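
For item 18, Midnight Commander can be started with both panels already pointing at the source and destination; the paths here are only illustrative:

    mc /mnt/cache/data /mnt/disk1/data   # left panel on the source, right on the destination; F5 copies, F6 moves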
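
For items 22 and 23, a hedged sketch of common btrfs recovery options for a pool member with a damaged superblock; sdb1 and the mount/restore paths are placeholders, and only super-recover writes to the damaged device, so check each output before going further:

    mkdir -p /mnt/recovery
    mount -o ro,usebackuproot /dev/sdb1 /mnt/recovery   # try a read-only mount using a backup tree root
    btrfs rescue super-recover -v /dev/sdb1             # restore the superblock from one of its backup copies
    btrfs restore -v /dev/sdb1 /mnt/disk1/restore       # last resort: copy files off the unmountable filesystem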