Posts posted by Caboose20

  1. 5 minutes ago, itimpi said:

    The syslog in the diagnostics is the RAM copy and only shows what happened since the reboot. It could be worth enabling the syslog server to get a log that survives a reboot, so we can see what happened prior to the reboot.

    I rebooted yesterday when the server was working normally. This diagnostic is from the server right now, which is still in a failed state; I haven't restarted it yet, in order to preserve the logs.
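
    Once it is enabled, I assume the persistent copy ends up on the flash drive and survives a reboot; a rough way to check (the /boot/logs location and filename are my guess, based on the "Mirror syslog to flash" option under Settings > Syslog Server):

    ls /boot/logs/
    tail -n 100 /boot/logs/syslog*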

  2. Hello,

     

     

    I have been having this problem for a while now and thought it's finally time to see if I can get some assistance. For some reason, my cache drive keeps going read-only. If I reboot the server, it triggers a parity check and seems to solve the issue temporarily, but it keeps coming back after a few days and I cannot narrow down what is causing it.
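
    If it helps with diagnosis, this is roughly how the moment it flips read-only could be caught (a sketch only; it assumes the cache pool is btrfs, and the mount point may differ on your setup):

    # kernel message logged when the filesystem is forced read-only
    dmesg | grep -iE "read-only|forced readonly"
    # per-device error counters for a btrfs pool
    btrfs device stats /mnt/cache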

     

    Running Unraid Version: 6.12.6

     

    Hoping someone can point me in the right direction.

     

    Thanks,

    tower-diagnostics-20231207-0934.zip

  3. 35 minutes ago, johnnie.black said:

    I ran the test and this was the result:

    Phase 1 - find and verify superblock...
            - block cache size set to 544656 entries
    Phase 2 - using internal log
            - zero log...
    zero_log: head block 127307 tail block 127290
    ALERT: The filesystem has valuable metadata changes in a log which is being
    ignored because the -n option was used.  Expect spurious inconsistencies
    which may be resolved by first mounting the filesystem to replay the log.
            - scan filesystem freespace and inode maps...
    agi unlinked bucket 63 is 8831 in ag 0 (inode=8831)
    sb_ifree 2069, counted 2070
    sb_fdblocks 205844696, counted 206825776
            - found root inode chunk
    Phase 3 - for each AG...
            - scan (but don't clear) agi unlinked lists...
            - process known inodes and perform inode discovery...
            - agno = 0
    data fork in ino 117471321 claims free block 15577142
            - agno = 1
            - agno = 2
            - agno = 3
            - agno = 4
            - agno = 5
            - agno = 6
            - agno = 7
            - agno = 8
            - agno = 9
            - agno = 10
            - agno = 11
            - agno = 12
            - agno = 13
            - agno = 14
            - agno = 15
            - process newly discovered inodes...
    Phase 4 - check for duplicate blocks...
            - setting up duplicate extent list...
            - check for inodes claiming duplicate blocks...
            - agno = 0
            - agno = 2
            - agno = 1
            - agno = 3
            - agno = 4
            - agno = 5
            - agno = 6
            - agno = 7
            - agno = 8
            - agno = 9
            - agno = 10
            - agno = 11
            - agno = 12
            - agno = 13
            - agno = 14
            - agno = 15
    No modify flag set, skipping phase 5
    Phase 6 - check inode connectivity...
            - traversing filesystem ...
            - agno = 0
            - agno = 1
            - agno = 2
            - agno = 3
            - agno = 4
            - agno = 5
            - agno = 6
            - agno = 7
            - agno = 8
            - agno = 9
            - agno = 10
            - agno = 11
            - agno = 12
            - agno = 13
            - agno = 14
            - agno = 15
            - traversal finished ...
            - moving disconnected inodes to lost+found ...
    disconnected inode 8831, would move to lost+found
    Phase 7 - verify link counts...
    would have reset inode 8831 nlinks from 0 to 1
    Maximum metadata LSN (20:127391) is ahead of log (20:127307).
    Would format log to cycle 23.
    No modify flag set, skipping filesystem flush and exiting.
    
            XFS_REPAIR Summary    Thu Jun 11 10:42:02 2020
    
    Phase        Start        End        Duration
    Phase 1:    06/11 10:41:54    06/11 10:41:54
    Phase 2:    06/11 10:41:54    06/11 10:41:55    1 second
    Phase 3:    06/11 10:41:55    06/11 10:41:59    4 seconds
    Phase 4:    06/11 10:41:59    06/11 10:41:59
    Phase 5:    Skipped
    Phase 6:    06/11 10:41:59    06/11 10:42:02    3 seconds
    Phase 7:    06/11 10:42:02    06/11 10:42:02
    
    Total run time: 8 seconds

    It seems to indicate that I need to mount the disk to replay the log and resolve the issues.
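
    If I am reading that right, the rough sequence would be something like this (a sketch only; /dev/mdX is a placeholder for whichever array device this disk is, run from Maintenance mode, and -n keeps the check read-only):

    mkdir -p /mnt/temp
    mount /dev/mdX /mnt/temp     # mounting replays the XFS log
    umount /mnt/temp
    xfs_repair -n /dev/mdX       # re-check; drop -n to let it actually repair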

     

  4. Hello everyone,

     

    So my server had some issues the other day. My son accidentally touched the drive bay on the front and turned off some of the disks while the server was running; he also powered the server off. I already had one failed disk in the array at this point. When I restarted the server, some disks were missing when the array came up. I reseated all the drives and things seemed to be okay, so I decided to swap in a new drive for the failed disk and rebuilt it. After the rebuild completed, I noticed that disk 4 started generating UDMA CRC errors; the count has grown to 1652 now. I left it alone while I waited for replacement disks, and since then disk 6 says that it is unmountable. I have two replacement drives that I am looking to swap in, but I am trying to determine the best process for doing so that preserves as much data as possible.
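
    For reference, the CRC count can be checked with something like this (the device letter is just an example; UDMA CRC errors usually point at the cable or backplane connection rather than the disk itself):

    smartctl -a /dev/sdd | grep -i udma_crc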

     

    Thanks

    tower-diagnostics-20200611-0900.zip

  5. On 12/30/2017 at 11:56 AM, Shayne said:

    I'm using prerelease 6.4. One of the features I like is the ability to assign an IP on my br0 network. This allowed me to get rid of the pipework container I used to assign IP addresses before 6.4.

     

    I've run into an issue where I have containers that need both an IP on my main subnet (done via the br0 network in the Unraid Docker config) and also access to the bridge Docker network. An example of this is the nginx-proxy container I have, which opens some web services up to the Internet. The containers it reverse proxies to just have the bridge network, and I'd rather not assign each one of those an IP on br0.

     

    A workaround I have is to manually call `docker network connect ...` to add the br0 network to my nginx-proxy container; the full command is below. This works, but when the container is recreated/updated it needs to be re-run.
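
    Spelled out, that is just the following (the container name is whatever yours happens to be; nginx-proxy here):

    docker network connect br0 nginx-proxy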

     

    It would be great to have options for multiple networks in the Docker configuration in Unraid, or if anyone knows how br0 containers can access the Docker bridge network, that could work too.

    You ever get this working? I have the exact same use case.

    I am having issues getting the container to start. I had this working yesterday, but I needed to remove and re-add the container as I was no longer getting updates for it. Port forwarding is working. I know I ran into this issue when I first set this up, but I am at a loss. Can someone take a look?

     

    [s6-init] making user provided files available at /var/run/s6/etc...exited 0.
    [s6-init] ensuring user provided files have correct perms...exited 0.
    [fix-attrs.d] applying ownership & permissions fixes...
    [fix-attrs.d] done.
    [cont-init.d] executing container initialization scripts...
    [cont-init.d] 10-adduser: executing...
    
    -------------------------------------
    _ _ _
    | |___| (_) ___
    | / __| | |/ _ \
    | \__ \ | | (_) |
    |_|___/ |_|\___/
    |_|
    
    Brought to you by linuxserver.io
    We gratefully accept donations at:
    https://www.linuxserver.io/donations/
    -------------------------------------
    GID/UID
    -------------------------------------
    
    User uid: 99
    User gid: 100
    -------------------------------------
    
    [cont-init.d] 10-adduser: exited 0.
    [cont-init.d] 20-config: executing...
    [cont-init.d] 20-config: exited 0.
    [cont-init.d] 30-keygen: executing...
    using keys found in /config/keys
    [cont-init.d] 30-keygen: exited 0.
    [cont-init.d] 50-config: executing...
    2048 bit DH parameters present
    SUBDOMAINS entered, processing
    Only subdomains, no URL in cert
    Sub-domains processed are: -d sonarr.mydomain.com -d nzbget.mydomain.com -d radarr.mydomain.com -d hydra.mydomain.com -d lazy.mydomain.com -d books.mydomain.com -d hass.mydomain.com
    E-mail address entered: [email protected]
    Different sub/domains entered than what was used before. Revoking and deleting existing certificate, and an updated one will be created
    usage:
    certbot [SUBCOMMAND] [options] [-d DOMAIN] [-d DOMAIN] ...
    
    Certbot can obtain and install HTTPS/TLS/SSL certificates. By default,
    it will attempt to use a webserver both for obtaining and installing the
    certificate.
    certbot: error: argument --cert-path: No such file or directory
    
    Generating new certificate
    Saving debug log to /var/log/letsencrypt/letsencrypt.log
    Plugins selected: Authenticator standalone, Installer None
    Obtaining a new certificate
    Performing the following challenges:
    Client with the currently selected authenticator does not support any combination of challenges that will satisfy the CA.
    Client with the currently selected authenticator does not support any combination of challenges that will satisfy the CA.
    IMPORTANT NOTES:
    - Your account credentials have been saved in your Certbot
    configuration directory at /etc/letsencrypt. You should make a
    secure backup of this folder now. This configuration directory will
    also contain certificates and private keys obtained by Certbot so
    making regular backups of this folder is ideal.
    /var/run/s6/etc/cont-init.d/50-config: line 134: cd: /config/keys/letsencrypt: No such file or directory
    [cont-init.d] 50-config: exited 1.
    [cont-finish.d] executing container finish scripts...
    [cont-finish.d] done.
    [s6-finish] syncing disks.
    [s6-finish] sending all processes the TERM signal.
    [s6-finish] sending all processes the TERM signal.
    [s6-finish] sending all processes the KILL signal and exiting.

     
