kazanjig

Posts posted by kazanjig

  1. Re-ran xfs_repair with -v; results below. I'm hesitant to follow the instruction to destroy the log if there are alternatives, but I'm proceeding based on your suggestion.

     

    Phase 1 - find and verify superblock...
            - block cache size set to 6177104 entries
    Phase 2 - using internal log
            - zero log...
    zero_log: head block 500453 tail block 500449
    ERROR: The filesystem has valuable metadata changes in a log which needs to
    be replayed.  Mount the filesystem to replay the log, and unmount it before
    re-running xfs_repair.  If you are unable to mount the filesystem, then use
    the -L option to destroy the log and attempt a repair.
    Note that destroying the log may cause corruption -- please attempt a mount
    of the filesystem before doing this.
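
    For reference, a rough sketch of the sequence that message describes (/dev/sdX1 and /mnt/test are placeholders for the actual cache device and a temporary mount point):

    # Try a mount first so the log gets replayed, then unmount cleanly
    mount /dev/sdX1 /mnt/test
    umount /mnt/test

    # Re-run the repair after a successful mount/unmount
    xfs_repair -v /dev/sdX1

    # Only if the mount fails: destroy the log and attempt the repair anyway
    # (per the warning above, this can cause corruption)
    xfs_repair -L /dev/sdX1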

     

  2. Not sure of the circumstances, but my server now reports its XFS SSD cache as "Unmountable: Unsupported or no file system".

     

    Here are the results of xfs_repair -n:

    Phase 1 - find and verify superblock...
    Phase 2 - using internal log
            - zero log...
    ALERT: The filesystem has valuable metadata changes in a log which is being
    ignored because the -n option was used.  Expect spurious inconsistencies
    which may be resolved by first mounting the filesystem to replay the log.
            - scan filesystem freespace and inode maps...
    sb_fdblocks 210951268, counted 212661156
            - found root inode chunk
    Phase 3 - for each AG...
            - scan (but don't clear) agi unlinked lists...
            - process known inodes and perform inode discovery...
            - agno = 0
            - agno = 1
    inode 538653759 - bad extent starting block number 4503567550402429, offset 0
    correcting nextents for inode 538653759
    bad data fork in inode 538653759
    would have cleared inode 538653759
            - agno = 2
            - agno = 3
            - process newly discovered inodes...
    Phase 4 - check for duplicate blocks...
            - setting up duplicate extent list...
            - check for inodes claiming duplicate blocks...
            - agno = 1
            - agno = 0
            - agno = 3
            - agno = 2
    inode 538653759 - bad extent starting block number 4503567550402429, offset 0
    correcting nextents for inode 538653759
    bad data fork in inode 538653759
    would have cleared inode 538653759
    entry "597feb7aedcf95e3cb07122f8d1e44b66d32bd73_1280x1024_fit.jpg" at block 0 offset 2504 in directory inode 540714325 references free inode 538653759
            would clear inode number in entry at offset 2504...
    No modify flag set, skipping phase 5
    Phase 6 - check inode connectivity...
            - traversing filesystem ...
    entry "597feb7aedcf95e3cb07122f8d1e44b66d32bd73_1280x1024_fit.jpg" in directory inode 540714325 points to free inode 538653759, would junk entry
    bad hash table for directory inode 540714325 (no data entry): would rebuild
    would rebuild directory inode 540714325
            - traversal finished ...
            - moving disconnected inodes to lost+found ...
    Phase 7 - verify link counts...
    No modify flag set, skipping filesystem flush and exiting.

     

    After a reboot, the SSD remains unmountable. Not sure where to go from here. Help!
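
    A rough sketch of what I plan to try next from the console -- attempting a manual mount and checking the kernel log (/dev/sdX1 is a placeholder for my cache device, with the array stopped so nothing else holds it):

    # Attempt a manual mount to trigger the log replay
    mkdir -p /mnt/test
    mount -t xfs /dev/sdX1 /mnt/test

    # If the mount fails, the kernel log usually says why
    dmesg | tail -n 20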

  3. EDIT: Ugh, pulled a stoopid. I was putting Shinobi on 'proxynet' instead of br1.

     

    ---------------------------

     

    Hi all. I'm having a problem with SWAG/Shinobi using an IP address in a second subnet. It _was_ working a couple of days ago, but then a few things blew up with UnRAID/my network, and now I cannot get Shinobi to start on the correct subnet. I've deleted and reinstalled SWAG and Shinobi, no luck. I've blown up my Docker file and rebuilt, no luck.

     

    Desired: Shinobi on 'proxynet' at 192.168.2.100:8080 (I have it set correctly in the manually created shinobi.subdomain.conf)

    Result: Shinobi starts on 'proxynet' at 192.168.1.100:8080 (which is a problem because I also run Ubiquiti equipment)

     

    No matter what IP address I put in the conf file, it is ignored. Even if I put an IP address in the *.*.1.* subnet it's still ignored, and Shinobi fires up using my UnRAID server's IP address. As a point of reference, I can start Shinobi on br0 and br1 using IP addresses in their respective subnets, and it works just fine bypassing 'proxynet'. So it appears not to be a routing issue (outside of Docker's own routing), which is good.

     

     

    It's as if SWAG isn't seeing the shinobi.subdomain.conf file at all. Any thoughts on what could be causing SWAG to ignore the conf file?
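
    For anyone who lands here later, a rough sketch of the docker run equivalent of what fixed it for me, i.e. putting Shinobi directly on br1 with a static IP (the image name is from memory and may not be exact):

    # Put the container directly on the br1 custom network with a static IP.
    # On a macvlan-style custom network the container answers on this IP
    # directly, so no -p port mappings are needed.
    docker run -d --name=shinobi \
      --network=br1 --ip=192.168.2.100 \
      shinobisystems/shinobi   # image name is from memory; verify the exact repo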

  4. EDIT: Ugh, pulled a stoopid. I was putting Shinobi on 'proxynet' instead of br1.

     

    -----------------------------

     

    Hi all. I'm having a problem with SWAG/Shinobi using an IP address in a different subnet. It _was_ working a couple of days ago, but then a few things blew up with UnRAID/my network, and now I cannot get Shinobi to start on the correct subnet.

     

    Desired: Shinobi on 'proxynet' at 192.168.2.100:8080 (I have it set correctly in the manually created shinobi.subdomain.conf)

    Result: Shinobi starts on 'proxynet' at 192.168.1.100:8080 (which is a problem because I also run Ubiquiti equipment)

     

    Any thoughts on what could be causing SWAG to ignore the IP address/subnet in the conf file?

     

    EDIT: Shinobi will start at 192.168.2.100 if I change the network to br1, which is on the *.2.* subnet, so it's not a routing issue. It's as if SWAG isn't seeing the shinobi.subdomain.conf file at all. But it _is_ seeing the conf file, because if I set it to an IP address/port on the *.1.* subnet it works. There's just something about the *.2.* subnet SWAG doesn't like...

     

    I deleted the SWAG and Shinobi containers, deleted the 'proxynet' network, and rebuilt it all. Same result. 
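
    For anyone hitting the same thing, a rough sketch of how to check which network and subnet each piece actually landed on ('shinobi' stands in for whatever the container is named):

    # Which subnet did Docker actually assign to each network?
    docker network inspect proxynet --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'
    docker network inspect br1 --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'

    # Which networks is the container actually attached to?
    docker inspect shinobi --format '{{json .NetworkSettings.Networks}}'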

  5. So, I think it's actually a cache issue. My syslog is getting flooded with "shfs: cache disk full" and I believe that's causing the whole system to hang. I'm not currently using the server -- it was unstable and would only stay up for a couple of days before crashing, so I'm trying to 'rebuild' it. There is a bunch of data stored on it, but no shares are actively being used and no Dockers or VMs are running. There's a bunch of stuff on my cache from previous Docker installs that I'll likely need to regain access to. I've attached my diagnostics. Any help would be greatly appreciated.
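
    For context, a rough sketch of checking what's filling the cache from the console (assumes the standard /mnt/cache mount point):

    # How full is the cache pool?
    df -h /mnt/cache

    # Which top-level folders on the cache are taking the space?
    du -sh /mnt/cache/* 2>/dev/null | sort -h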

    ursus-diagnostics-20210908-1046.zip

  6. For the life of me I cannot get the Unifi Controller web UI to respond. I've tried 5.7, 5.8, 5.9, LTS... I've even deleted my Docker img and started everything from scratch. I've updated to the latest Unraid 6.9.2. Nothing works and I have no idea what I'm missing. 

     

    EDIT:

    In 5.9, the web UI responds but I get "UniFi Controller is starting up... Please wait a moment". Refreshing after several minutes reveals the same message.
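
    In case it helps with diagnosing, a rough sketch of watching the container while it sits at that message ('unifi-controller' stands in for whatever the template named the container):

    # Follow the controller's startup log to see where it stalls
    docker logs -f unifi-controller

    # Confirm the web UI port (8443 by default) is actually published
    docker port unifi-controller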

  7. Here's what was on the display at the latest crash, just before Docker started failing to start. I presume this led to the Docker img getting corrupted. The array (without Docker) has been up for ~36 hours now, which has been unheard of lately. Not sure what "unexpected item end" is or whether it's related to the img getting full. I'll look into it -- my kids were running a couple of Minecraft servers, but everything I was running is pretty standard fare.

     

    EDIT:

    Well, gee, look at that...

    -rw-rw-rw- 1 nobody users 68719476736 Sep  6 17:37 docker.img

    Is there any way to confirm Minecraft is the culprit?
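
    A rough sketch of how to see which containers and volumes are eating the image:

    # Per-image, per-container, and per-volume usage inside docker.img
    docker system df -v

    # Writable-layer size of each container (the SIZE column)
    docker ps -a -s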

     

    error3.jpg

     

     

  8. Thanks for the reply. I had the docker img get corrupted about a month ago and recreated it. I can’t remember exactly but I think the instability started around that time. It seems unusual to have two corruptions so close in time. Wondering if it’s a hardware (memory) issue. If not, I may just start from scratch. I saw a couple posts on what I would need to save to maintain my array. Makes me very anxious to blow it up though.

  9. Over the past couple of weeks, my array has become unstable: roughly daily it becomes unresponsive and an error trace is displayed on the monitor. A forced restart is needed, usually twice, since after the first reboot the server isn't visible on the network for some reason. Today, even though it boots, Docker refuses to start. Diagnostics and pictures of the last couple of error traces are attached. I also have the syslog saved to flash, but it's ~65MB -- too big to upload, so here's a link. Wondering if it's a hardware or software issue -- I'm lost. Any help would be greatly appreciated.

     

    error1.jpg

    error2.jpg

    ursus-diagnostics-20200906-1741.zip