Everything posted by JorgeB

  1. https://forums.unraid.net/topic/103938-69x-lsi-controllers-ironwolf-disks-disabling-summary-fix/?do=getNewComment
  2. Thanks, I was also thinking of SMB access; in that case use \\tower\cache\share instead of \\tower\share (disk shares must be enabled).
  3. FYI this is the post where a user reported this helped, and the symptoms looked similar to yours: https://forums.unraid.net/topic/114827-lockups-when-parity-is-enabled/?do=findComment&comment=1047923
  4. Tools -> New Config -> Retain current configuration: All -> Apply. Then start the array to begin a parity sync, or, since parity should be mostly valid, check "Parity is already valid" next to the array start button before the first array start and then run a correcting check.
  5. You can still write to cache using the correct share so that the mover will still work, just use a disk path, e.g. /mnt/cache/share instead of /mnt/share. I found the post I was looking for: https://forums.unraid.net/bug-reports/prereleases/unraid-os-version-690-beta30-available-r1076/?do=findComment&comment=11014 From v6.7 to v6.9 it got 5x slower, so from v6.3 to v6.9 I expect a much higher loss.
  6. You have two disabled disks with single parity, so it's not possible for Unraid to emulate them. If the disks are OK you can do a new config and re-sync parity (or check "parity is already valid" and run a correcting check).
  7. Correct, formatting with UD is just for the device(s) to be partitioned. The other way around: old device first, new one after. Also, with NVMe devices it's a little different from what I posted, since you need to add p1 for the partition; it will look like this: btrfs replace start -f /dev/nvme0n1p1 /dev/nvme3n1p1 /mnt/cache (just adjust the device numbers to match yours, see the sketch after this list). I assume you mean unassign here, no need to physically remove them. Position is not important, you can add them in any order. btrfs pool info is saved in the devices' metadata; after you start the array without devices in the pool and add them later, Unraid will look for an existing pool and import it if one exists.
  8. Please use the existing plugin support thread:
  9. Default is raid1, but it can be changed: https://forums.unraid.net/topic/46802-faq-for-unraid-v6/?do=findComment&comment=480421
  10. It's logged as a disk issue, but the problem sectors were re-written, so it might be fine now; run an extended SMART test.
  11. It mirrors in the sense that the end result will be the same. This is for v6.9.x; while it should also work with v6.10.x I didn't test it, and don't like to assume. You need to have both the old and the new replacement devices connected at the same time; if you can have all 4 you can do both replacements and then reset the cache config, if not do one, reset the cache config, do the other, then reset the cache config again. First you need to partition the new device; to do that format it using the UD plugin (you can use any filesystem), then with the array started and using the console type: btrfs replace start -f /dev/sdX1 /dev/sdY1 /mnt/cache Replace X with source, Y with target, and note the 1 at the end of both. You can check replacement progress with: btrfs replace status /mnt/cache (see the sketch after this list). When done, and if you have enough SATA ports, you can repeat the procedure for the second device; if not, do the cache reset below and then start over for the other device. Pool config reset: stop the array, if Docker/VM services are using the cache pool disable them, unassign all cache devices, start the array to make Unraid "forget" the old cache config, stop the array, reassign the current cache devices (there can't be an "All existing data on this device will be OVERWRITTEN when array is Started" warning for any cache device), re-enable Docker/VMs if needed, start the array.
  12. Devices are tracked by serial number, changing controllers is not a problem for Unraid, as long as no RAID controllers are involved.
  13. It is. Single as well. You can do it manually using the console; I can post the instructions if interested.
  14. It's logged as a disk issue, run an extended SMART test on parity.
  15. With the array started type: btrfs dev del /dev/sdX1 /mnt/cache Replace X with the correct letter and adjust the mountpoint if needed, i.e., if using a differently named pool (see the sketch after this list). When that's done you'll get the cursor back; then you need to reset the pool assignments. To do that: stop the array, if Docker/VM services are using the cache pool disable them, unassign all pool devices, start the array to make Unraid "forget" the current cache config, stop the array, reassign the remaining devices (there can't be an "All existing data on this device will be OVERWRITTEN when array is Started" warning for any cache device), re-enable Docker/VMs if needed, start the array.
  16. For raid0 you can't use the GUI, but you can do it manually; if interested I can post the instructions (there's a rough sketch after this list).
  17. Not quite sure what you mean here; if it just doesn't mount you should run a filesystem check like mentioned above. What do you mean by unassigned? Or post diags after array start.
  18. Sorry, no idea why it's not working for you.
  19. Yes, it was a correcting check. Run two consecutive correcting checks without rebooting; if the second one still finds errors there's likely a hardware issue, most commonly bad RAM, but it could also be controller/disk related.
  20. By looking at the syslog: any sync errors will be logged as "incorrect" for a non-correcting check, and "corrected" for a correcting check (see the sketch after this list).
  21. Some (or a lot) of the data on disk3 is likely corrupt. If you have backups, yes, delete everything and restore, so you know everything is OK; if you don't, some corruption might be better than no data at all. You should really understand how parity in Unraid works, then things would make more sense. https://wiki.unraid.net/Parity#How_parity_works
  22. Perfectly normal, up to a few thousand can be normal, it depends on whether the array was being written to during the shutdown. The auto parity check after an unclean shutdown is always non-correcting.
  23. Parity would still be (mostly) valid if there weren't any writes to the array after you started rebuilding disk5.
  24. Macvlan call traces are usually the result of having dockers with a custom IP address; upgrading to v6.10 and switching to ipvlan might fix it (Settings -> Docker Settings -> Docker custom network type -> ipvlan, advanced view must be enabled, top right), or see below for more info. https://forums.unraid.net/topic/70529-650-call-traces-when-assigning-ip-address-to-docker-containers/ See also here: https://forums.unraid.net/bug-reports/stable-releases/690691-kernel-panic-due-to-netfilter-nf_nat_setup_info-docker-static-ip-macvlan-r1356/
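
A minimal console sketch for the pool-device replacement posts above, assuming the default /mnt/cache mountpoint and placeholder device names that must be adjusted to your own devices:
  btrfs filesystem show /mnt/cache                        # list the devices currently in the pool
  btrfs replace start -f /dev/sdX1 /dev/sdY1 /mnt/cache   # sdX1 = old device partition, sdY1 = new (UD-partitioned) device partition
  btrfs replace status /mnt/cache                         # re-run until it reports the replace has finished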
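
A minimal sketch for the device-removal post above, again assuming the default /mnt/cache mountpoint and a placeholder device letter:
  btrfs device delete /dev/sdX1 /mnt/cache   # same as "btrfs dev del"; returns the prompt once the data has been moved off the device
  btrfs filesystem show /mnt/cache           # confirm the device is gone before resetting the pool assignments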
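
A rough sketch of the kind of manual raid0 conversion mentioned above, assuming a btrfs pool mounted at /mnt/cache and keeping metadata as raid1 (adjust the profiles and mountpoint to your case):
  btrfs balance start -dconvert=raid0 -mconvert=raid1 /mnt/cache   # convert data chunks to raid0, metadata stays raid1
  btrfs balance status /mnt/cache                                  # check conversion progress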
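
A quick way to look for those sync error messages, assuming the syslog is at the default /var/log/syslog location:
  grep -c "incorrect" /var/log/syslog   # sync errors found by a non-correcting check
  grep -c "corrected" /var/log/syslog   # sync errors fixed by a correcting check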