JorgeB

Moderators
  • Posts: 61540
  • Joined
  • Last visited
  • Days Won: 649

Community Answers

  1. JorgeB's post in LSI / SAS Controller not visible from Unraid - was marked as the answer   
    Try a different PCIe slot if possible. You can also try the suggestion below, though I'm not sure it helps with this error:
     
    https://forums.unraid.net/topic/132930-drives-and-usb-devices-visible-in-bios-not-available-once-booted-asus-wrx80-sage-5965wx/?do=findComment&comment=1208035
     
  2. JorgeB's post in "port forwarding from certain routers (Fritzbox)" - Link does not work was marked as the answer   
    https://docs.unraid.net/unraid-os/release-notes/6.12.4/#fix-for-macvlan-call-traces
  3. JorgeB's post in Server was unresponsive for 40 minutes - can someone please help me review the logs to find the source? was marked as the answer   
    The main issue I can see is an OOM error:
     
    Out of memory: Killed process 32593 (jellyfin)  
    That container was using a lot of RAM, so check the config or limit its RAM usage.
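    As an illustrative sketch (not part of the original answer), a hard memory cap can be set with Docker's --memory flag, which on Unraid is typically added to the container's Extra Parameters (Advanced view); the 8g value is just an example:
     
    --memory=8g  
    With a cap in place, the kernel kills processes inside that container's cgroup when the limit is exceeded, instead of the whole server running out of memory.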
  4. JorgeB's post in USB Failure - License transfer was marked as the answer   
    I would recommend using the form again, trying a different browser, until you get an automatic reply; AFAIK there's no direct email you can use.
  5. JorgeB's post in Red X Drive Disabled was marked as the answer   
    Diags are from after rebooting, so we can't see what happened, but the disk looks healthy. Since the emulated disk is mounting, and assuming the contents look correct, you can rebuild on top, but I recommend replacing the cables first to rule them out, in case it happens again to the same disk.
  6. JorgeB's post in Lost ALL array drives instantly, unmountable. (6.12.8) was marked as the answer   
    The gdisk output confirms the partitions got clobbered, likely by the RAID controller. You can try to fix one with gdisk and see if it mounts afterwards; to play it safer, I would recommend cloning the disk first with dd and doing the repair on the clone.
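    For reference, a hedged sketch of the clone-first approach; /dev/sdX (the affected disk) and /dev/sdY (the clone target, at least as large as the source) are placeholders, so triple-check the device names before running anything:
     
    dd if=/dev/sdX of=/dev/sdY bs=1M conv=noerror,sync status=progress  
    gdisk /dev/sdY  
    The partition repair is then attempted on the clone, leaving the original disk untouched.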
  7. JorgeB's post in My btrfs cache pool is unmontable. was marked as the answer   
    If the log tree is the only issue this may help:
     
    btrfs rescue zero-log /dev/nvme0n1p1  
    Then restart the array.
  8. JorgeB's post in Can't get the array to start. was marked as the answer   
    ZFS raidz1 is far from experimental; it has been considered stable for a long time.
     
    That suggests the pool is crashing the server on mount; before starting the array, type:
     
    zpool import -o readonly=on cache  
    If successful, start the array; the GUI will show the pool as unmountable, but the data should be under /mnt/cache. Back up the data, then re-create the pool.
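    As a hedged example of the backup step, assuming an array disk with enough free space (the destination path is a placeholder):
     
    rsync -avh --progress /mnt/cache/ /mnt/disk1/cache_backup/  
    Once the copy is verified, the pool can be re-created and the data moved back.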
     
  9. JorgeB's post in Btrfs error write time tree block corruption detected was marked as the answer   
    This usually means bad RAM or other kernel memory corruption; start by running memtest.
  10. JorgeB's post in [SOLVED] Help With Hardware Warning PCIe error was marked as the answer   
    Try this first:
     
    https://forums.unraid.net/topic/118286-nvme-drives-throwing-errors-filling-logs-instantly-how-to-resolve/?do=findComment&comment=1165009
     
  11. JorgeB's post in Slow Veeam Backup Performance was marked as the answer   
    That will only happen if you set the share to use a pool as the primary storage; currently it is set to the array.
     
    Note that you can also get better performance writing directly to the array with turbo write.
     
     
  12. JorgeB's post in 6.12.6, Win 11 VM pausing randomly was marked as the answer   
    See if Windows is set to sleep/hibernate after a while.
  13. JorgeB's post in Unmountable: No file system was marked as the answer   
    Mar 13 16:16:45 Tower kernel: ata1: link is slow to respond, please be patient (ready=0)
    Mar 13 16:16:45 Tower kernel: ata2: link is slow to respond, please be patient (ready=0)
    Mar 13 16:16:49 Tower kernel: ata1: COMRESET failed (errno=-16)
    Mar 13 16:16:49 Tower kernel: ata1: hard resetting link
    Mar 13 16:16:49 Tower kernel: ata2: COMRESET failed (errno=-16)
    Mar 13 16:16:49 Tower kernel: ata2: hard resetting link
    Mar 13 16:16:55 Tower kernel: ata2: link is slow to respond, please be patient (ready=0)
    Mar 13 16:16:55 Tower kernel: ata1: link is slow to respond, please be patient (ready=0)
    Mar 13 16:16:59 Tower kernel: ata1: COMRESET failed (errno=-16)
    Mar 13 16:16:59 Tower kernel: ata1: hard resetting link
    Mar 13 16:16:59 Tower kernel: ata2: COMRESET failed (errno=-16)
    Mar 13 16:16:59 Tower kernel: ata2: hard resetting link
    Mar 13 16:17:03 Tower kernel: ata2: SATA link up 1.5 Gbps (SStatus 113 SControl 310)
    Mar 13 16:17:03 Tower kernel: ata2.00: configured for UDMA/100  
    Constant ATA errors from parity and disk1, check/replace cables (power and SATA) for both and post new diags after array start.
  14. JorgeB's post in Fix Uncommon Problems reports Out of Memory Error - 6.12.8 was marked as the answer   
    Looks to me like Frigate is the problem; check the configuration or try limiting its RAM usage.
  15. JorgeB's post in Docker Service failed to start. was marked as the answer   
    This is why it's failing, though I'm not sure what's causing it:
    failed to start containerd: timeout waiting for containerd to start  
    You can try recreating the docker image:
    https://docs.unraid.net/unraid-os/manual/docker-management/#re-create-the-docker-image-file
    Also see below if you have any custom docker networks:
    https://docs.unraid.net/unraid-os/manual/docker-management/#docker-custom-networks
  16. JorgeB's post in TrueNAS zfs import to 6.12.8 with zfs on partition 2 was marked as the answer   
    Should be perfectly safe if you follow the steps correctly. I did it with my TrueNAS CORE pool just for testing, since I want to keep TrueNAS on this server; I booted with an Unraid flash drive. Pool before the changes:
       pool: tank
         id: 11986576849467638030
      state: ONLINE
     status: The pool was last accessed by another system.
     action: The pool can be imported using its name or numeric identifier and
             the '-f' flag.
        see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-EY
     config:
    
            tank        ONLINE
              raidz3-0  ONLINE
                sdk2    ONLINE
                sdg2    ONLINE
                sdc2    ONLINE
                sdd2    ONLINE
                sdf2    ONLINE
                sdi2    ONLINE
                sde2    ONLINE
                sdh2    ONLINE
                sdm2    ONLINE
                sdj2    ONLINE
                sdl2    ONLINE  
    After running fdisk on each device to delete partition 1 and renumber partition 2 as partition 1 (a rough command-line equivalent is sketched after the output below):
       pool: tank
         id: 11986576849467638030
      state: ONLINE
     status: The pool was last accessed by another system.
     action: The pool can be imported using its name or numeric identifier and
             the '-f' flag.
        see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-EY
     config:
    
            tank        ONLINE
              raidz3-0  ONLINE
                sdk1    ONLINE
                sdg1    ONLINE
                sdc1    ONLINE
                sdd1    ONLINE
                sdf1    ONLINE
                sdi1    ONLINE
                sde1    ONLINE
                sdh1    ONLINE
                sdm1    ONLINE
                sdj1    ONLINE
                sdl1    ONLINE
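    For illustration only, a rough non-interactive equivalent of that fdisk step using sgdisk; /dev/sdX stands for one pool member, and this should only be done with backups (or on clones), since a mistake here destroys the pool:
     
    sgdisk --delete=1 /dev/sdX        # remove partition 1 (typically the small TrueNAS swap partition)  
    sgdisk --transpose=1:2 /dev/sdX   # move the ZFS partition from slot 2 to slot 1  
    Repeat for each device in the pool.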
    After doing this, the pool imported normally with Unraid 6.12.8.

     
    Rebooted the server and booted TrueNAS, and the pool imported as if nothing had changed.
  17. JorgeB's post in Raid 10 ZFS was marked as the answer   
    Yes, it does support zfs striped mirrors.
  18. JorgeB's post in Can I upgrade from 6.7.2 to 6.12.8 directly? Any gotchas? was marked as the answer   
    Should be OK, but I recommend reading the release notes for every major release in between: 6.8.0, 6.9.0, etc.
  19. JorgeB's post in ZFS Pool + TRIM was marked as the answer   
    Yes, it can be more complete; running it once a week should be fine.
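    As a hedged illustration (the pool name cache is an assumption), a manual trim can be started with:
     
    zpool trim cache  
    and its progress checked with zpool status -t cache, e.g. from a weekly scheduled script.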
  20. JorgeB's post in 6.12.8: both cache SSDs with btrfs show UNMOUNTABLE : UNSUPPORTABLE FILE SYSTEM was marked as the answer   
    If the log tree is the main issue this may help, type:
     
    btrfs rescue zero-log /dev/sdd1  
    Then restart the array.
  21. JorgeB's post in Issues with displaying WebUI - Array devices, used resources, pop-ups was marked as the answer   
    Try booting in safe mode and/or using a different browser.
  22. JorgeB's post in UASP Support in Unraid v6.13 was marked as the answer   
    Yep, it will be supported.
  23. JorgeB's post in Cache Pool ZFS - 1 Disk Only. Wring pool size in Unraid GUI was marked as the answer   
    Stop the array, unassign the pool device, start the array, stop the array again, set the pool to one slot, assign the pool device, and start the array; that should do it.
  24. JorgeB's post in Seamless cache pool upgrade? was marked as the answer   
    Not sure what you mean here; as long as the new devices are the same size or larger there won't be a problem.
  25. JorgeB's post in Cache Drive Recovery was marked as the answer   
    Type:
    zpool import -F cache  
    If successful, then type:
    zpool export cache  
    After that it should be mountable with UD or Unraid as a pool, and you can run a scrub.
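    For completeness, a hedged example of the scrub step once the pool is mounted again (the pool name cache is an assumption):
     
    zpool scrub cache  
    Progress can be checked with zpool status cache.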