JorgeB

Moderators
  • Posts: 60374
  • Days Won: 630

Community Answers

  1. JorgeB's post in Docker will not start - docker.img is in-use, cannot mount was marked as the answer   
    See if this helps you reboot:
    https://forums.unraid.net/topic/141479-6122-array-stop-stuck-on-retry-unmounting-disk-shares/?do=findComment&comment=1281063
     
  2. JorgeB's post in Array has 1 failed device (and another probably coming soon) was marked as the answer   
    See here:
     
  3. JorgeB's post in Unraid 6.12.3 crashing, docker service is unavailable was marked as the answer   
    First issue, cache device is dropping offline:
     
    Sep 25 04:53:45 Tower kernel: ata2: hard resetting link
    Sep 25 04:53:50 Tower kernel: ata2: link is slow to respond, please be patient (ready=0)
    Sep 25 04:54:20 Tower kernel: ata2: COMRESET failed (errno=-16)
    Sep 25 04:54:20 Tower kernel: ata2: limiting SATA link speed to 1.5 Gbps
    Sep 25 04:54:20 Tower kernel: ata2: hard resetting link
    Sep 25 04:54:25 Tower kernel: ata2: COMRESET failed (errno=-16)
    Sep 25 04:54:25 Tower kernel: ata2: reset failed, giving up
    Sep 25 04:54:25 Tower kernel: ata2.00: disable device
    Sep 25 04:54:25 Tower kernel: ata2: EH complete
    Check/replace cables, and since it's an MX500, also see here.
  4. JorgeB's post in Mover not moving was marked as the answer   
    For your case I would suggest using send/receive with the replication option: it replicates the filesystem together with all nested datasets, and you can send it directly to the final destination, assuming that's also zfs. See the sketch below.
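    A minimal sketch of that replication, assuming the source dataset is cache/appdata and the destination pool is backup (both names are placeholders, adjust to yours):
    zfs snapshot -r cache/appdata@migrate
    zfs send -R cache/appdata@migrate | zfs receive backup/appdata
    The -R flag sends the named snapshot together with all descendant datasets and their snapshots.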
  5. JorgeB's post in move: mover: array devices not mounted was marked as the answer   
    It should not be needed, but glad it's resolved.
  6. JorgeB's post in Unmountable: unsupported or no file system was marked as the answer   
    If the log tree is the only problem this may help, type:
    btrfs rescue zero-log /dev/sdf1
    Then restart the array.
  7. JorgeB's post in BTRFS Errors in Syslog was marked as the answer   
    Oct 3 08:44:48 forge kernel: BTRFS info (device nvme0n1p1): bdev /dev/sdb1 errs: wr 2966654, rd 386, flush 140173, corrupt 0, gen 0  
    This shows that one of the pool devices dropped offline in the past; run a correcting scrub (sketched below) and see here for more info and better pool monitoring.
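    A minimal sketch of the correcting scrub, assuming the pool is mounted at /mnt/cache (adjust the mount point to your pool):
    btrfs scrub start /mnt/cache
    btrfs scrub status /mnt/cache
    The scrub runs in the background; the status command reports progress and any corrected errors.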
  8. JorgeB's post in UNRAID stuck on /bzroot . . .ok was marked as the answer   
    Try replacing only the bz* files using the ones from the download zip.
  9. JorgeB's post in Best way to Access Files not from Local Network was marked as the answer   
    This would be a good option:
    https://unraid.net/blog/wireguard-on-unraid
     
  10. JorgeB's post in Mechanical disk failure then separate disk file system corruption with one parity disk was marked as the answer   
    -Tools -> New Config -> Retain current configuration: All -> Apply
    -Check all assignments and assign any missing disk(s) if needed
    -IMPORTANT - Check both "parity is already valid" and "maintenance mode", then start the array (the GUI will still show that data on the parity disk(s) will be overwritten; this is normal, as it doesn't account for the checkbox, and nothing will be overwritten as long as it's checked)
    -Stop array
    -Unassign disk2
    -Start array (in normal mode now) and post new diags
  11. JorgeB's post in Disk suddenly errored was marked as the answer   
    Use -L.
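    Presumably this refers to xfs_repair; a minimal sketch, assuming the array is started in maintenance mode and the affected disk is disk1 (the disk number is a placeholder; on recent releases the device would be /dev/md1p1):
    xfs_repair -L /dev/md1
    Note that -L zeroes the log, so any transactions still in it are lost.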
  12. JorgeB's post in [6.10.3] Unraid repeatedly becomes unresponsive/unreachable, hard shutdown needed (Kernel error?) was marked as the answer   
    Jul 29 18:32:36 Andisvault kernel: macvlan_broadcast+0x116/0x144 [macvlan]
    Jul 29 18:32:36 Andisvault kernel: macvlan_process_broadcast+0xc7/0x110 [macvlan]
    Macvlan call traces are usually the result of having dockers with a custom IP address; switching to ipvlan should fix it (Settings -> Docker Settings -> Docker custom network type -> ipvlan (advanced view must be enabled, top right)), or see below for more info.
  13. JorgeB's post in New Hardware locking up was marked as the answer   
    Start here:
    https://forums.unraid.net/topic/46802-faq-for-unraid-v6/?do=findComment&comment=819173
     
  14. JorgeB's post in Read only was marked as the answer   
    Sep 29 11:00:02 nas-mass kernel: ata1: SError: { UnrecovData 10B8B BadCRC }  
    This really looks like a bad SATA cable; I would try replacing it once more. If issues persist, replace the device, as it could be making a bad connection. You can also watch the CRC error counter, as sketched below.
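    A quick way to check for link errors from the CLI, assuming the device is /dev/sdb (a placeholder, adjust to the actual disk):
    smartctl -A /dev/sdb | grep -i crc
    If the UDMA_CRC_Error_Count attribute keeps increasing after the cable swap, the connection is still the likely culprit.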
  15. JorgeB's post in Unraid Server’s Web Interface Keeps Becoming Unresponsive was marked as the answer   
    Sep 16 13:39:07 Unraid-1 kernel: macvlan_broadcast+0x10a/0x150 [macvlan]
    Sep 16 13:39:07 Unraid-1 kernel: ? _raw_spin_unlock+0x14/0x29
    Sep 16 13:39:07 Unraid-1 kernel: macvlan_process_broadcast+0xbc/0x12f [macvlan]
    Macvlan call traces will usually end up crashing the server; switching to ipvlan should fix it (Settings -> Docker Settings -> Docker custom network type -> ipvlan (advanced view must be enabled, top right)).
  16. JorgeB's post in Advice on fixing Device Error was marked as the answer   
    Looks more like a power/connection issue. Try again after replacing the cables, also check power, and make sure the emulated disk is still mounting before rebuilding on top.
  17. JorgeB's post in Installed Scrutiny and lost both my parity drives. was marked as the answer   
    You should reboot now: the controller crashed, so the devices are in an unknown state; a simple reboot should bring the data devices back.
  18. JorgeB's post in Zero HDD Attempt to Fix Bad Sector was marked as the answer   
    Use the Preclear plugin.
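    If you'd rather zero the disk from the CLI instead of the plugin, a destructive sketch using dd (this erases the entire disk; /dev/sdX is a placeholder, double check it's the right device and not an array member):
    dd if=/dev/zero of=/dev/sdX bs=1M status=progress
    Writing zeros over a pending sector forces the drive to reallocate it if it's actually bad.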
  19. JorgeB's post in [Solved] xfs repair -n results question was marked as the answer   
    I would recommend being on the latest stable, so 6.12.4.
  20. JorgeB's post in Why I lost docker menu on unraid server was marked as the answer   
    Docker service is disabled, go to Settings and enable it.
  21. JorgeB's post in How to swap/ upgrade ZFS nvme drive? [SOLVED] was marked as the answer   
    I'm going to post an example to replicate the complete appdata dataset, including all descendant file systems, up to the named snapshot:
     
    -stop docker service
    -create a new snapshot for appdata manually or using zfs master if you prefer
    -on the CLI type
    zfs send -R docker/appdata@last_snapshot_name | zfs receive destination_zpool_name/appdata  
    After this is done, type:
    zfs list -t all
    to confirm all child datasets and snapshots are now also on the new pool. Then you just need to change the paths, from /mnt/docker/appdata/container to /mnt/new_zpool/appdata/container, or, if you are using /mnt/user, delete/rename the old appdata; then re-start the docker service.
     
    For the VMs it would be the same (a sketch follows); don't forget to stop the VM service before taking the last snapshot.
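    A minimal sketch of that, assuming the VM dataset is docker/domains (the dataset and pool names are assumptions, adjust to yours):
    zfs snapshot -r docker/domains@last_snapshot_name
    zfs send -R docker/domains@last_snapshot_name | zfs receive destination_zpool_name/domains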
  22. JorgeB's post in How to add L2ARC/SLOG/SPECIAL/DEDUP vdev to LUKS encrypted ZPOOL was marked as the answer   
    Was able to test and it works, but it's quite involved:
     
    create new temp pool and assign all the devices you plan on adding
    set fs to zfs encrypted, any profile
    start array, format pool
    stop array
    unassign all devices from the temp pool and delete it
    open new LUKS devices:
     
    cryptsetup luksOpen /dev/sdX1 sdX1 --key-file=/root/keyfile
    Replace X with the correct letter; for NVMe devices it will be '/dev/nvmeXn1p1 nvmeXn1p1'. Do this for all new devices; if using a passphrase, omit --key-file=/root/keyfile and enter the passphrase.
    Add the new vdevs as explained in the FAQ entry; you just need to add mapper to every device path, e.g.
    zpool add tank -f special mirror /dev/mapper/sdf1 /dev/mapper/nvme0n1p1
    stop array
    Close LUKS for all new devices:
    cryptsetup luksClose sdX1
    do the pool import procedure as detailed in the FAQ
  23. JorgeB's post in Please help with share configuration was marked as the answer   
    Since you have a single-device pool, i.e., without redundancy, any share that has files on that pool will appear as unprotected; this is normal.
  24. JorgeB's post in Array disk in error state -> rebuild -> disk unmountable was marked as the answer   
    Looks like the fs is not recoverable. If I understood correctly, you've rebuilt on top of the original disk? If yes, the best bet is a file recovery util like UFS Explorer; there's a free trial you can use to see if it can find the data.
  25. JorgeB's post in Sporadic Unresponsiveness was marked as the answer   
    Disk3 appears to be failing; run an extended SMART test to confirm (see the sketch below). That should not cause lockups, but this will:
     
    Sep 23 11:46:57 Odyssey kernel: macvlan_broadcast+0x10a/0x150 [macvlan]
    Sep 23 11:46:57 Odyssey kernel: ? _raw_spin_unlock+0x14/0x29
    Sep 23 11:46:57 Odyssey kernel: macvlan_process_broadcast+0xbc/0x12f [macvlan]
    Switching to ipvlan should fix it (Settings -> Docker Settings -> Docker custom network type -> ipvlan (advanced view must be enabled, top right)).
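    For the extended SMART test mentioned above, a minimal sketch from the CLI, assuming disk3 is /dev/sdd (a placeholder; the test can also be started from the disk's page in the GUI):
    smartctl -t long /dev/sdd
    smartctl -a /dev/sdd
    The second command shows the test's progress and, once it finishes, the result in the self-test log.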