JorgeB

Moderators
  • Posts: 60012
  • Days Won: 627

Community Answers

  1. JorgeB's post in Can't get the array to start. was marked as the answer   
    ZFS raidz1 is far from experimental; it has been considerably stable for a long time.
     
    That suggests the pool is crashing the server on mount; before starting the array, type:
     
    zpool import -o readonly=on cache  
    If successful, then start the array; the GUI will show the pool as unmountable, but the data should be under /mnt/cache. Then back up and re-create the pool.
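     
    If you need a starting point for the backup itself, a minimal sketch, assuming the read-only pool is mounted at /mnt/cache and there is enough free space on disk1 (adjust both paths to your setup):
     
    rsync -avh --progress /mnt/cache/ /mnt/disk1/cache_backup/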
     
  2. JorgeB's post in Btrfs error write time tree block corruption detected was marked as the answer   
    This usually means bad RAM or other kernel memory corruption, start by running memtest. 
  3. JorgeB's post in [SOLVED] Help With Hardware Warning PCIe error was marked as the answer   
    Try this first:
     
    https://forums.unraid.net/topic/118286-nvme-drives-throwing-errors-filling-logs-instantly-how-to-resolve/?do=findComment&comment=1165009
     
  4. JorgeB's post in Slow Veeam Backup Performance was marked as the answer   
    That will only happen if you set the share to use a pool as the primary storage; currently it is set to the array.
     
    Note that you can also get better performance writing directly to the array with turbo write.
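     
    For reference, turbo write is the "Tunable (md_write_method)" option under Settings > Disk Settings set to reconstruct write; it can also be toggled temporarily from the console. The command below is a sketch from memory, so treat the exact value mapping as an assumption and verify it on your system:
     
    mdcmd set md_write_method 1      # 1 = reconstruct write (turbo write), assumed mapping
    mdcmd status | grep md_write_method   # confirm the current setting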
     
     
  5. JorgeB's post in 6.12.6, Win 11 VM pausing randomly was marked as the answer   
    See if Windows is set to sleep/hibernate after a while.
  6. JorgeB's post in Unmountable: No file system was marked as the answer   
    Mar 13 16:16:45 Tower kernel: ata1: link is slow to respond, please be patient (ready=0)
    Mar 13 16:16:45 Tower kernel: ata2: link is slow to respond, please be patient (ready=0)
    Mar 13 16:16:49 Tower kernel: ata1: COMRESET failed (errno=-16)
    Mar 13 16:16:49 Tower kernel: ata1: hard resetting link
    Mar 13 16:16:49 Tower kernel: ata2: COMRESET failed (errno=-16)
    Mar 13 16:16:49 Tower kernel: ata2: hard resetting link
    Mar 13 16:16:55 Tower kernel: ata2: link is slow to respond, please be patient (ready=0)
    Mar 13 16:16:55 Tower kernel: ata1: link is slow to respond, please be patient (ready=0)
    Mar 13 16:16:59 Tower kernel: ata1: COMRESET failed (errno=-16)
    Mar 13 16:16:59 Tower kernel: ata1: hard resetting link
    Mar 13 16:16:59 Tower kernel: ata2: COMRESET failed (errno=-16)
    Mar 13 16:16:59 Tower kernel: ata2: hard resetting link
    Mar 13 16:17:03 Tower kernel: ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 310)
    Mar 13 16:17:03 Tower kernel: ata2.00: configured for UDMA/100
    Constant ATA errors from parity and disk1; check/replace cables (power and SATA) for both and post new diags after array start.
  7. JorgeB's post in Fix Uncommon Problems reports Out of Memory Error - 6.12.8 was marked as the answer   
    Looks to me like Frigate is the problem, check the configuration or try limiting its RAM usage.
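     
    One way to cap a container's RAM on Unraid is an extra Docker parameter; a minimal sketch, assuming a 4 GB limit is appropriate for your setup (the value is just an example):
     
    --memory=4g
     
    Add it to the "Extra Parameters" field of the container template (advanced view) and restart the container.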
  8. JorgeB's post in Docker Service failed to start. was marked as the answer   
    This is why it's failing, but I'm not sure what's causing it:
    failed to start containerd: timeout waiting for containerd to start  
    You can try recreating the docker image:
    https://docs.unraid.net/unraid-os/manual/docker-management/#re-create-the-docker-image-file
    Also see below if you have any custom docker networks:
    https://docs.unraid.net/unraid-os/manual/docker-management/#docker-custom-networks
  9. JorgeB's post in TrueNAS zfs import to 6.12.8 with zfs on partition 2 was marked as the answer   
    Should be perfectly safe if you follow the steps correctly. I did it with my TrueNAS CORE pool just for testing, since I want to keep TrueNAS on this server, by booting with an Unraid flash drive. Pool before the changes:
       pool: tank
         id: 11986576849467638030
      state: ONLINE
     status: The pool was last accessed by another system.
     action: The pool can be imported using its name or numeric identifier and
             the '-f' flag.
        see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-EY
     config:

            tank        ONLINE
              raidz3-0  ONLINE
                sdk2    ONLINE
                sdg2    ONLINE
                sdc2    ONLINE
                sdd2    ONLINE
                sdf2    ONLINE
                sdi2    ONLINE
                sde2    ONLINE
                sdh2    ONLINE
                sdm2    ONLINE
                sdj2    ONLINE
                sdl2    ONLINE
    After running fdisk on each device to delete partition 1 and change partition 2 into partition 1:
       pool: tank
         id: 11986576849467638030
      state: ONLINE
     status: The pool was last accessed by another system.
     action: The pool can be imported using its name or numeric identifier and
             the '-f' flag.
        see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-EY
     config:

            tank        ONLINE
              raidz3-0  ONLINE
                sdk1    ONLINE
                sdg1    ONLINE
                sdc1    ONLINE
                sdd1    ONLINE
                sdf1    ONLINE
                sdi1    ONLINE
                sde1    ONLINE
                sdh1    ONLINE
                sdm1    ONLINE
                sdj1    ONLINE
                sdl1    ONLINE
    After doing this, the pool imported normally with Unraid 6.12.8.
     
    Rebooted the server and booted TrueNAS, and the pool imported as if nothing had changed.
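     
    For anyone looking for the partition step, this is only a rough sketch of one way to do it with util-linux fdisk; the device name and the exact key sequence are assumptions, so double-check against the original guide and have backups before touching anything:
     
    fdisk /dev/sdX      # repeat for each pool member, replacing sdX
    # inside fdisk (interactive):
    #   d, then 1       -> delete partition 1
    #   x               -> enter expert mode
    #   f               -> fix (renumber) partition order, so the old partition 2 becomes partition 1
    #   r               -> return to the main menu
    #   w               -> write the changes and exit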
  10. JorgeB's post in Raid 10 ZFS was marked as the answer   
    Yes, it does support zfs striped mirrors.
  11. JorgeB's post in Can I upgrade from 6.7.2 to 6.12.8 directly? Any gotchas? was marked as the answer   
    Should be OK, but I recommend reading the release notes for every major release: 6.8.0, 6.9.0, etc.
  12. JorgeB's post in ZFS Pool + TRIM was marked as the answer   
    Yes, it can be more complete; once a week should be fine.
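     
    If you want to run or check a trim manually from the console, a minimal sketch using standard OpenZFS commands, assuming the pool is named cache:
     
    zpool trim cache        # start a manual TRIM of the pool's devices
    zpool status -t cache   # show TRIM state/progress per device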
  13. JorgeB's post in 6.12.8: both cache SSDs with btrfs show UNMOUNTABLE : UNSUPPORTABLE FILE SYSTEM was marked as the answer   
    If the log tree is the main issue, this may help; type:
     
    btrfs rescue zero-log /dev/sdd1  
    Then restart the array
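     
    If the log zeroing works and the pool mounts, it would also be a good idea to scrub it; a minimal sketch, assuming the pool is mounted at /mnt/cache:
     
    btrfs scrub start /mnt/cache    # start the scrub
    btrfs scrub status /mnt/cache   # check progress and any corrected/uncorrectable errors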
  14. JorgeB's post in Issues with displaying WebUI - Array devices, used resources, pop-ups was marked as the answer   
    Try booting in safe mode and/or using a different browser.
  15. JorgeB's post in UASP Support in Unraid v6.13 was marked as the answer   
    Yep, it will be supported.
  16. JorgeB's post in Cache Pool ZFS - 1 Disk Only. Wring pool size in Unraid GUI was marked as the answer   
    Stop the array, unassign the pool device, start the array, stop the array, set the pool to one slot, assign the pool device, and start the array; that should do it.
  17. JorgeB's post in Seamless cache pool upgrade? was marked as the answer   
    Not sure what you mean here; as long as the new devices are the same size or larger, there won't be a problem.
  18. JorgeB's post in Cache Drive Recovery was marked as the answer   
    Type:
    zpool import -F cache
    If successful, then type:
    zpool export cache
    After that it should be mountable with UD or by Unraid as a pool; then run a scrub.
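     
    For the scrub itself, a minimal sketch, assuming the pool is named cache and has been imported:
     
    zpool scrub cache     # start the scrub
    zpool status cache    # monitor progress and check for errors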
  19. JorgeB's post in Share not switching to exclusive mode was marked as the answer   
    That should resolve the issue.
  20. JorgeB's post in NVME on UnRaid 6.12.8 having issues was marked as the answer   
    Mar 13 01:04:54 jlw-unRaid kernel: nvme1n1: I/O Cmd(0x2) @ LBA 879300256, 224 blocks, I/O Error (sct 0x3 / sc 0x71)
    Mar 13 01:04:54 jlw-unRaid kernel: nvme1n1: I/O Cmd(0x2) @ LBA 879300768, 256 blocks, I/O Error (sct 0x3 / sc 0x71)
    Mar 13 01:04:54 jlw-unRaid kernel: nvme1n1: I/O Cmd(0x2) @ LBA 886718400, 512 blocks, I/O Error (sct 0x3 / sc 0x71)
    Mar 13 01:04:54 jlw-unRaid kernel: I/O error, dev nvme1n1, sector 879300768 op 0x0:(READ) flags 0x80700 phys_seg 18 prio class 2
    Mar 13 01:04:54 jlw-unRaid kernel: I/O error, dev nvme1n1, sector 879300256 op 0x0:(READ) flags 0x80700 phys_seg 16 prio class 2
    Mar 13 01:04:54 jlw-unRaid kernel: I/O error, dev nvme1n1, sector 886718400 op 0x0:(READ) flags 0x80700 phys_seg 64 prio class 2
    Mar 13 01:04:54 jlw-unRaid kernel: nvme1n1: I/O Cmd(0x2) @ LBA 879300512, 96 blocks, I/O Error (sct 0x3 / sc 0x71)
    Mar 13 01:04:54 jlw-unRaid kernel: I/O error, dev nvme1n1, sector 879300512 op 0x0:(READ) flags 0x80700 phys_seg 7 prio class 2
    Mar 13 01:04:54 jlw-unRaid kernel: nvme1n1: I/O Cmd(0x2) @ LBA 879300480, 32 blocks, I/O Error (sct 0x3 / sc 0x71)
    Mar 13 01:04:54 jlw-unRaid kernel: I/O error, dev nvme1n1, sector 879300480 op 0x0:(READ) flags 0x80700 phys_seg 4 prio class 2
    Mar 13 01:04:54 jlw-unRaid kernel: nvme1n1: I/O Cmd(0x2) @ LBA 879300000, 256 blocks, I/O Error (sct 0x3 / sc 0x71)
    Mar 13 01:04:54 jlw-unRaid kernel: I/O error, dev nvme1n1, sector 879300000 op 0x0:(READ) flags 0x80700 phys_seg 17 prio class 2
     
     
    After aborting, the NVMe device is giving errors and losing writes, hence the btrfs errors afterwards; I would suggest trying with a different one if possible.
  21. JorgeB's post in Unmountable: Unsupported partition layout was marked as the answer   
    Looks like the new partition for disk4 requires a reboot to be updated; please reboot and post new diags after array start.
  22. JorgeB's post in Apallingly Slow Read/Write Speed within Array was marked as the answer   
    The parity drive is SMR, and that model is known to sometimes perform very slowly; post new diags during a transfer.
  23. JorgeB's post in Unraid will not boot was marked as the answer   
    Back up the current flash drive, install stock Unraid, and confirm it boots; if yes, restore only the /config folder from the backup.
  24. JorgeB's post in Boot USB Flash Died was marked as the answer   
    You must leave your last valid key file installed and correctly named, and only that one.
  25. JorgeB's post in HP Elitedesk 800 G6 - Don´t see NVMe was marked as the answer   
    Mar 11 03:33:13 Tower kernel: ahci 0000:00:17.0: Found 1 remapped NVMe devices.
    Mar 11 03:33:13 Tower kernel: ahci 0000:00:17.0: Switch your BIOS from RAID to AHCI mode to use them.