JorgeB (Moderator) · 67,771 posts · 708 days won

Everything posted by JorgeB

  1. No, some devices come with weird partitions, but blkdiscard should fix that; it has the same effect as a preclear without actually adding a write cycle.
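     A minimal sketch of what that looks like, assuming /dev/sdX is the target device (this destroys all data on it):
     lsblk -o NAME,SIZE,TYPE,FSTYPE /dev/sdX   # inspect the existing partitions first
     blkdiscard /dev/sdX                       # discard every block, including the odd partition table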
  2. Mar 24 17:33:32 nasvm kernel: BTRFS info (device sdi1): bdev /dev/sdi1 errs: wr 18755249, rd 19783429, flush 618, corrupt 70194, gen 25760
     This cache device has been dropping offline, see here for better pool monitoring. As for the current issue, the filesystem is corrupt; there are some recovery options here.
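     A quick way to keep an eye on this going forward, assuming the pool is mounted at /mnt/cache (adjust the path to your pool):
     btrfs dev stats /mnt/cache   # non-zero write/read/flush/corruption counters point to a device that has been dropping or corrupting data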
  3. The log is completely spammed with SSH-related text; reboot and post new diags after array start.
  4. Try wiping the devices first with:
     blkdiscard -f /dev/sdX
     blkdiscard -f /dev/nvmeXn1
     Not sure if -f is needed with v6.9; if it doesn't work, just remove that flag.
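     If in doubt, one hedged way to check whether the blkdiscard build on your release supports the force flag (this assumes the option is listed in the help output):
     blkdiscard --help 2>&1 | grep -i force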
  5. Reboot; if it still doesn't start after that, post the diagnostics.
  6. You can still do it, but you would have to do it manually, i.e., do a new config with the SSDs, then copy the data from the old disk(s), using for example Unassigned Devices (UD) to mount them.
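     A minimal copy sketch, assuming the old disk is mounted by UD at /mnt/disks/olddisk and you're copying to disk1 of the new config (both paths are just examples):
     rsync -avh --progress /mnt/disks/olddisk/ /mnt/disk1/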
  7. Partitions on SSDs start on sector 2048 vs sector 64 for hard disks, so if you use an SSD to replace a disk of the same capacity, the partition will be a little smaller and there won't be enough space.
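     Rough numbers, assuming 512-byte logical sectors: the offset difference is 2048 - 64 = 1984 sectors, and 1984 x 512 = 1,015,808 bytes, so the SSD partition ends up just under 1 MiB smaller than the disk partition it would replace.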
  8. Split level overrides the allocation method; if the folders already exist on disk1, data will go there.
  9. Check that the Power Supply Idle Control BIOS setting is correctly set; if not, these systems can crash when mostly idle.
  10. I'm always getting 0% for the btrfs data usage ratio; it should be 86% in this example.
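     For reference, a hedged sketch of how I'd expect that ratio to be derived, assuming it is btrfs data used divided by data allocated as reported by btrfs filesystem df (the mount point and figures below are only illustrative):
     btrfs filesystem df /mnt/cache
     # e.g. "Data, single: total=744.00GiB, used=640.21GiB"  ->  640.21 / 744.00 is roughly 86%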
  11. https://forums.unraid.net/topic/46802-faq-for-unraid-v6/?do=findComment&comment=972660
  12. That suggests a drive problem, assuming cables were replaced/swapped as mentioned.
  13. Ryzen/Threadripper with overclocked RAM, like you have, is known to corrupt data; see below for more info: https://forums.unraid.net/topic/46802-faq-for-unraid-v6/?do=findComment&comment=819173
  14. With the single profile in btrfs, data is written in 1GiB chunks, always to the device with the most available space, so when the devices have similar available space it will alternate between them, and a single >1GiB file will always end up distributed among them. Basically yes, you might as well use raid0, unless you're using different-size devices.
     Basically, what you can't do is add the dropped device if Unraid considers it a new device, since it would then be wiped, and without it the pool won't mount again; when Unraid thinks it's a new device, an "all data on this device will be deleted" warning will appear in front of it. To properly add it back you can:
     - stop the array
     - unassign all members from that pool
     - start the array so the pool config is "forgotten"
     - stop the array
     - now re-assign all existing pool devices, including the one that dropped earlier
     - there can't be an "all data on this device will be deleted" warning for any of the devices
     - start the array and Unraid will import the existing pool
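     Once the pool imports, a minimal check sketch, assuming it's mounted at /mnt/pool (use your pool's actual name):
     btrfs filesystem show /mnt/pool   # should list every pool member, with no missing devices
     btrfs filesystem df /mnt/pool     # confirms the data profile in use, e.g. "Data, single"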
  15. Thanks, I forgot that >2TB support was only added in v5.
  16. Yes, and there is:
     Mar 19 20:20:49 IWURSRV kernel: mpt2sas_cm0: mpt3sas_transport_port_remove: removed: sas_addr(0x4433221101000000)
     Mar 19 20:20:49 IWURSRV kernel: mpt2sas_cm0: removing handle(0x0009), sas_addr(0x4433221101000000)
     Mar 19 20:20:49 IWURSRV kernel: mpt2sas_cm0: enclosure logical id(0x500605b003cc0de0), slot(2)
     It is a strange issue, since it keeps happening only with those two disks; if it's not power/cables, it could be a controller or disk problem. Can you connect them, even if temporarily, to the onboard SATA ports?
  17. Sorry, I missed your first reply, damn forum bug. To separate the devices you can do this:
     - stop the array
     - disable the VM/Docker services
     - unassign the devices from both the docker_pool and the vm_pool
     - start the array
     - stop the array
     - assign both devices to the docker_pool (there can't be an "all data will be deleted" warning in front of any of the pool devices)
     - you can re-enable the VM/Docker services
     - start the array
     - stop the array
     - unassign the device you want to remove from docker_pool
     - start the array and wait for the btrfs balance to finish; once that's done
     - stop the array
     - assign the removed device to vm_pool
     - start the array
     - vm_pool will be unmountable; format it to start using it
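     To keep an eye on the shrink step from the command line, a hedged sketch, assuming docker_pool is mounted at /mnt/docker_pool:
     btrfs balance status /mnt/docker_pool   # if Unraid runs the shrink as a balance, this shows progress and reports "No balance found" when idle
     btrfs filesystem show /mnt/docker_pool  # the device being removed should drain to zero used and then drop off the list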
  18. Run memtest; it could just be bad RAM or another hardware issue.
  19. That, or do a new config and check "parity is already valid"; you'll still need to run a correcting parity check, and this option will also require you to fix the filesystem again, since you fixed the fs on the emulated disk. It's probably easier to just rebuild, IF the emulated disk is mounting correctly; to check that, first unassign disk5 again and start the array in normal mode.
  20. Swap with any disk, just to rule out the cables if the same thing happens again to the same disk; of course, if it happens with the one that got the cables, that would also point to a problem there.
  21. Disk5 was unassigned, the array was started, then it was assigned again, so now Unraid wants to rebuild it. Why did you do that?
  22. Should be fixed now, just start the array normally.