Everything posted by JorgeB

  1. Yes, I already asked LT to at least include raid1c3, because it's the best option for raid6 metadata, since it has the same redundancy level. raid1 metadata is fine for raid5, though you could use raid1c3 for increased reliability; raid1c4 is overkill for raid5 IMHO. See here how to convert: https://forums.unraid.net/topic/46802-faq-for-unraid-v6/?do=findComment&comment=480421 Note that you can convert just the data profile (dconvert=) or just the metadata (mconvert=) by itself, or both at the same time. Yes, you can always convert data and/or metadata to any profile, as long as the pool has the minimum required devices.
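     As a sketch, the conversions above map to btrfs balance filters; this assumes the pool is mounted at /mnt/cache (adjust the mount point for your pool):

     ```shell
     # Convert only the metadata profile to raid1c3:
     btrfs balance start -mconvert=raid1c3 /mnt/cache

     # Convert only the data profile, e.g. to raid6:
     btrfs balance start -dconvert=raid6 /mnt/cache

     # Or convert both in a single balance:
     btrfs balance start -dconvert=raid6 -mconvert=raid1c3 /mnt/cache
     ```

     The balance runs in the foreground and can take a long time on a large pool; `btrfs balance status /mnt/cache` from another console shows progress.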
  2. Because when using the GUI it automatically converts from raid0 to single profile when only one device is left. Correct. You can, it's another valid way of making it "forget" the previous pool config.
  3. That suggests a flash drive problem, try recreating it or using a different one.
  4. Make sure it's using the recommended power supply idle control setting.
  5. It is, but it's normal with recent XFS filesystems.
  6. The controller is not being detected on reboots, so it's a hardware issue, look for a board BIOS update or try a different slot/controller if available.
  7. The LSI is a good option, just make sure it's flashed to IT mode.
  8. You need to clear the current stats, or just reboot. Disk looks fine for now, keep an eye on it.
  9. Thanks for reporting back, if you don't mind I'm going to tag this solved.
  10. Since it's a Ryzen server see here first: https://forums.unraid.net/topic/46802-faq-for-unraid-v6/page/2/?tab=comments#comment-819173
  11. Likely a cable problem:
     199 UDMA_CRC_Error_Count -O-R-- 195 195 000 - 38
     Replace the SATA cable and re-sync parity2.
  12. A few errors are normal, even expected after this:
     Jun 19 21:17:28 NAS emhttpd: unclean shutdown detected
     Run a correcting check.
  13. Check/replace cables on the parity disk and post new diags.
  14. Note that you need to maintain the minimum number of devices for the profile in use, i.e., you can remove a device from a 3+ device raid0 pool but you can't remove one from a 2 device raid0 pool (unless it's converted to single profile first). You can also remove multiple devices with a single command (as long as the above rule is observed): btrfs dev del /dev/mapper/sdX1 /dev/mapper/sdY1 /mnt/cache But in practice this does the same as removing one device, then the other, as they are still removed one at a time, just one after the other.
  15. You can do this:
     -With the array running, type on the console: btrfs dev del /dev/mapper/sdX1 /mnt/cache (replace X with the correct letter, don't forget the 1 after it)
     -Wait for the device to be deleted, i.e., until the command completes and you get the cursor back
     -Device is now removed from the pool; you don't need to stop the array now, but at the next array stop you need to make Unraid forget the now deleted member, for that:
     -Stop the array
     -Unassign all pool devices
     -Start the array to make Unraid "forget" the pool config (note: if the docker and/or VM services were using that pool, best to disable those services before starting or Unraid will recreate the images somewhere else, assuming they are using /mnt/user paths)
     -Stop array (re-enable docker/VM services if disabled above)
     -Re-assign all pool members except the removed device
     -Start array
     -Done
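     The console part of the steps above can be sketched like this, assuming the pool is mounted at /mnt/cache and sdX is the member being dropped:

     ```shell
     # Confirm the pool members and the device letter first:
     btrfs filesystem show /mnt/cache

     # Remove the member; this blocks until its data is rebalanced
     # onto the remaining devices, so wait for the prompt to return:
     btrfs dev del /dev/mapper/sdX1 /mnt/cache

     # Verify the device is gone before stopping the array:
     btrfs filesystem show /mnt/cache
     ```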
  16. That suggests a problem with the LAN itself; you should get close to line speed with iperf when all is working as it should be.
  17. After 24 hours: 735778GB - 735080GB = 698GB in the last 24 hours. While still a lot, it's much better than the 3TB a day or so it was writing before, so not a happy camper, but a happier camper
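     The subtraction above is easy to sanity-check; a trivial shell sketch with the figures from the post:

     ```shell
     # Cumulative write totals read from the SMART stats, in GB
     prev=735080
     curr=735778
     echo "$((curr - prev)) GB written in the last 24 hours"
     ```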
  18. New board is likely trying to boot UEFI only; either enable BIOS/CSM boot support in the board BIOS or rename EFI- to EFI on the flash drive so that UEFI boot is possible with Unraid.
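     The rename can be done from the Unraid console itself, assuming the flash drive is mounted at /boot (Unraid's standard flash mount point):

     ```shell
     # Rename the EFI- folder so the flash can boot in UEFI mode
     mv /boot/EFI- /boot/EFI
     ```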
  19. Yes, but since the removed disk was unassigned it's not decrypted by Unraid, so you'd need to manually decrypt it. If you want I can show you how to remove a member from the pool using the CLI, but it's really off topic here; if you want, please start a new thread and I'll reply there.
  20. Hmm, this came up recently and I tested it myself, since btrfs can remove a device from a raid0 pool, and it worked with Unraid. Just retested and went from a 4 device raid0 to a single device pool, removing one device at a time; the removed device is still mounted by the pool and then deleted after balancing. Looking at @TexasUnraid's diags, I think it didn't work for him because his pool was encrypted: the unassigned/removed device wasn't decrypted and so couldn't be used during the balance for removing, i.e., same as if the device was disconnected, and in that case obviously it can't mount a raid0 pool r/w with a missing device:
     Jun 20 13:18:07 NAS kernel: BTRFS warning (device dm-2): devid 5 uuid 6baf6b07-3963-4763-ae97-3e0258cc71a8 is missing
     Jun 20 13:18:07 NAS kernel: BTRFS warning (device dm-2): chunk 11967397888 missing 1 devices, max tolerance is 0 for writeable mount
     Jun 20 13:18:07 NAS kernel: BTRFS warning (device dm-2): writeable mount is not allowed due to too many missing devices
  21. If you haven't rebooted yet please post or pm me the diags so I can try to see what happened.
  22. Thanks, it would be difficult for me to help here during the weekend due to lack of time. Mover might not work here since the cache is going read-only; OP can use e.g. mc or krusader.
  23. Start by running a single stream iperf test to check if the LAN is performing as expected.
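     A single-stream test needs iperf3 (or iperf) on both ends; a sketch, with 192.168.1.10 as a hypothetical server address:

     ```shell
     # On the Unraid server (Nerd Tools or similar provides iperf3):
     iperf3 -s

     # On a client machine; a single stream is the default:
     iperf3 -c 192.168.1.10
     ```

     On a healthy gigabit LAN the reported throughput should be close to line speed; much less than that points at cabling, NIC, or switch issues rather than Unraid.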
  24. You had the other device still connected, yes? It needs to be for a non-redundant pool to be converted; you just unassign it and start the array. That reminds me that I should add that to the FAQ, since while it should be obvious, some users might assume it's not needed. Other than that, are you using the new beta? Didn't try it there yet, something might be broken.
  25. 👍 Data should be fine, except any data being written when it crashed, but always good to have a way to confirm.