housewrecker

Members
  • Posts

    69
  • Joined

  • Last visited

Posts posted by housewrecker

  1. A little background: my goal is to set my Docker containers to static IP addresses so I can map them in a hosts file instead of using my Unraid server IP and a port.

     

    I switched the network type from bridge to Custom: br0 and set each container's IP to an unused address on my LAN. The IP address updates correctly, but I can't set the container's port anymore. Is that expected, or am I doing something wrong? The port always reverts to the application's default. I've tried Tautulli, MakeMKV, and Handbrake, all with the same problem.

     

    I have the port of Tautulli set to 2345, but Docker is showing 8181.

     

    [Two screenshots attached showing the container template and the port listed as 8181.]
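For context on what's happening here (a sketch, not from the original thread): when a container is attached to a macvlan-style custom network like br0, it gets its own IP on the LAN, so the host-to-container port mappings in the template no longer apply; the application answers directly on its internal port (8181 for Tautulli). The network name, subnet, parent interface, IP, and image name below are all placeholders:

```shell
# Create a macvlan network bridged to the LAN (values are placeholders).
docker network create -d macvlan \
  --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
  -o parent=eth0 br0

# On a macvlan network, -p port mappings are ignored: the container is
# reached at its own IP on the app's internal port.
docker run -d --name tautulli --network br0 --ip 192.168.1.50 \
  lscr.io/linuxserver/tautulli
# Tautulli would then be reachable at http://192.168.1.50:8181
```

So mapping 2345 to 8181 has no effect in this mode; you would either use the default port at the container's IP, or change the listen port inside the application itself.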

  2. I did that. The shares disappeared, but after a reboot they came back. The containers were gone, so I re-downloaded them and had to change some of their network settings, probably because of the macvlan change. I think everything is back up. The bigger question is whether the two fixes will resolve the locking-up issue.

     

    Thanks for the help with everything. @JorgeB @dlandon

    • Like 1
  3. 12 minutes ago, JorgeB said:

    Regarding the cache issue:

     

    Nov 23 10:49:01 Blue kernel: XFS (sdc1): Metadata corruption detected at xfs_bmap_validate_extent+0x41/0x68 [xfs], inode 0x40000081 xfs_iread_extents(2)
    Nov 23 10:49:01 Blue kernel: XFS (sdc1): Unmount and run xfs_repair

     

    Check filesystem, run it without -n.

     

     

    Phase 1 - find and verify superblock...
    Phase 2 - using internal log
            - zero log...
    ERROR: The filesystem has valuable metadata changes in a log which needs to
    be replayed.  Mount the filesystem to replay the log, and unmount it before
    re-running xfs_repair.  If you are unable to mount the filesystem, then use
    the -L option to destroy the log and attempt a repair.
    Note that destroying the log may cause corruption -- please attempt a mount
    of the filesystem before doing this.

     

    This is what I got as a response. I'm not sure how to get the file system to replay the log.
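    A possible sequence for the log replay the error message asks for (a sketch, assuming the device really is /dev/sdc1 as in the syslog above, run from the console with the pool otherwise stopped; the mount point is a placeholder):

    ```shell
    # Mounting an XFS filesystem replays its journal; unmount before repairing.
    mkdir -p /mnt/test
    mount /dev/sdc1 /mnt/test
    umount /mnt/test
    xfs_repair /dev/sdc1
    # If the mount itself fails, xfs_repair -L destroys the log and may lose
    # recent metadata changes - last resort only, as the error message warns.
    ```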

  4. 11 minutes ago, dlandon said:

    Remove this from your go file:

    modprobe i915
    chmod -R 777 /dev/dri

    It's no longer needed and has been known to cause issues.

     

    I'm seeing a lot of these messages that might be filling your log:

    Nov 23 13:18:19 Blue nginx: 2023/11/23 13:18:19 [error] 9361#9361: *24185 limiting requests, excess: 20.216 by zone "authlimit", client: 192.168.1.221, server: hash.myunraid.net, request: "GET /login HTTP/1.1", host: "hash.myunraid.net"
    Nov 23 13:18:21 Blue nginx: 2023/11/23 13:18:21 [error] 9361#9361: *24185 limiting requests, excess: 20.016 by zone "authlimit", client: 192.168.1.221, server: hash.myunraid.net, request: "GET /login HTTP/1.1", host: "hash.myunraid.net"
    Nov 23 13:18:24 Blue nginx: 2023/11/23 13:18:24 [error] 9361#9361: *24424 limiting requests, excess: 20.887 by zone "authlimit", client: 192.168.1.221, server: hash.myunraid.net, request: "GET /login HTTP/2.0", host: "hash.myunraid.net", referrer: "https://hash.myunraid.net/Main/Browse?dir=%2Fboot%2Flogs"

    I don't have an answer for that.

     

    I haven't done the /dev/dri work in a while. I think it was for the Intel GPU for Plex. Will removing those lines affect that? Which one is the go file?
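    To answer the question in passing: on Unraid, the go file is the boot-time startup script stored on the flash drive at /boot/config/go. After deleting the two lines dlandon quoted, a stock go file would look roughly like this (a sketch; your file may contain other customizations):

    ```shell
    #!/bin/bash
    # /boot/config/go - runs once at boot on Unraid.
    # The "modprobe i915" and "chmod -R 777 /dev/dri" lines have been removed;
    # per the reply above, they are no longer needed for Intel GPU transcoding.

    # Start the Unraid management interface (present in the default go file).
    /usr/local/sbin/emhttp &
    ```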

  5. I have an array with two parity disks. One of the parity disks (14TB) has errored and I want to replace it. I also have a data disk I want to upgrade from 4TB to 10TB. Can I replace the bad parity disk and upgrade the data disk at the same time and rebuild both? The information should all be there, since the second parity disk and the remaining data disks hold everything, but I'm not sure whether Unraid supports this, or even whether it's a good idea. The goal is to avoid two rebuilds, since each one takes a day and a half at this point.
