Posts posted by curtis-r

  1. 26 minutes ago, SimonF said:

    You should be able to passthru a usb dev just not passthru a pcie usb card.

    Good news, but though I'm a longtime Unraid user, this is my first attempt at pass-through.  Should the device just show up in the VM after a reboot? Because it doesn't.  Thanks
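     

    In case it helps anyone else landing here, a rough sketch of attaching a single USB device to a running VM from the console (the VM name and the vendor/product IDs below are placeholders; on Unraid you can also just tick the device in the VM template and hit Update):

    # 1) find the device's vendor:product ID (e.g. 10c4:ea60)
    lsusb

    # 2) create /tmp/usbdev.xml describing the device:
    <hostdev mode='subsystem' type='usb' managed='no'>
      <source>
        <vendor id='0x10c4'/>
        <product id='0xea60'/>
      </source>
    </hostdev>

    # 3) hot-attach it to the running VM (VM name is a placeholder)
    virsh attach-device HomeAssistant /tmp/usbdev.xml --live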

  2. In case someone finds this thread with the same issue: I finally figured it out after tons of trial and error and research.  Though my network.cfg file seemed normal and intact, I found other cases where users couldn't select the network bridge br0 until they renamed/deleted network.cfg, so I tried it.  I could then select br0 in the VM HA setup and everything works.  Now I just need to get the proxy working...
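     

    For anyone wanting the exact steps, a minimal sketch of one way to do the rename/delete from the console, assuming the file is in the stock location on the flash drive (back it up first):

    # keep a copy of the old config, then remove it from the flash drive
    cp /boot/config/network.cfg /boot/config/network.cfg.bak
    rm /boot/config/network.cfg

    # reboot so Unraid regenerates a fresh network.cfg;
    # after that, br0 was selectable in the VM's network settings
    reboot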

  3. Hoping this is appropriate for this thread:

    I have the HA Core Docker working along with NginxProxyManager, and decided I want Supervisor.  The new HA Supervisor VM appears to be running per the VNC terminal, and I stopped the Dockers (I even tried turning Docker off altogether), but I can't get http://homeassistant.local:8123/ or 192.168.0.112:8123 to connect.

    What is this newb doing wrong?

    chrome_JPmZU7QYgr.jpg

    HA_VM_settings.pdf tower-diagnostics-20210508-0819.zip
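     

    A few quick checks from the Unraid console and the VM's own console, in case they help narrow it down (the IP is the one from above; on first boot the Supervisor install can take a while before port 8123 answers at all):

    # from the Unraid host: does anything answer on 8123?
    curl -m 5 http://192.168.0.112:8123/

    # from the VM's console: is Home Assistant listening yet, and what IP did the VM actually get?
    netstat -tln | grep 8123
    ip addr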

  4. That's exactly the info I was looking for.  I will continue to play with Core.  It's going well (after a deep dive to get it behind a proxy, and just general understanding and programming of HA), so once I decide to stick with HA long-term, I'll go the VM route with Supervisor.  Thanks.

  5. Great thread.  Just what I was looking for.

     

    New to HA, but I've had the Core Docker working well for a few days and would really like Supervisor.  I have no interest in getting a Raspberry Pi when I already have Unraid running, and with no experience with Unraid VMs, I'd love to stick with Docker.  I'm considering the "unsupported" hassio_supervisor Docker, but am confused between the posts of people who like it and others who say it's unstable...

     

    Is anyone happy with Docker's hassio_supervisor?

  6. Recently installed Docker, with appdata on disk1.  Disk1 is now always spun up.  I also installed Plex but have it set to never auto-check for anything, and this happens even if Plex is stopped and set to not autostart.  If I manually spin down, disk1 spins right back up.  So it appears Docker alone keeps the drive up.  Is that proper?  I searched around but couldn't find the answer.

    tower-diagnostics-20210315-1730.zip
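     

    In case it helps anyone diagnosing the same thing, a couple of console checks for what's actually holding the disk open (assuming lsof is available on your install):

    # list processes with files open on disk1
    lsof /mnt/disk1

    # docker commonly keeps appdata open even when containers look idle;
    # see which container folders actually live on disk1
    ls -la /mnt/disk1/appdata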

  7. Before we close the book on this, I ran an extended SMART test on the disk last night (after the parity completed without errors on that disk).  SMART found no errors.  I had switched HD bays the other day after the 2nd unmountable fiasco.  Does it make sense that it was the HD bay's SATA interface and the drive is fine?  I've had issues with this box's quality control.  I changed the SATA cable between the two fiascos, so it's not that.

     

    BTW, there were some parity errors on disk5, which trul had noticed connection issues with earlier.  I'm going to switch that bay and cable later today and run a SMART test.
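     

    For reference, the SMART tests can also be run from the console (sdX below is a placeholder for whichever device the bay maps to; an extended test on a large drive can take many hours):

    # kick off a long/extended self-test in the background
    smartctl -t long /dev/sdX

    # afterwards, review the self-test results and the full attribute table
    smartctl -a /dev/sdX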

  8. xfs_repair -L /dev/sdi1 ran correctly.  I restarted the array in normal mode.  I could then see disk1's data, and all appears intact!  I'm surprised how much is in lost+found, but I'll dig into that after the parity is done rebuilding.

     

    Anyhow, you all are AWESOME!  Moving on, a new HD should arrive today.  Any opinion on whether the issue was the drive?  It was manufactured in 2013.  My inclination is to decommission disk1 and keep it as an air-gapped backup in my closet.

     

    Wiki that needs updating: The "XFS" code under Checking File System.

     

    EDIT: I forgot that drive already had a lost+found from the earlier issue.  I don't think there is anything new in that folder.
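     

    For anyone else sorting through lost+found, a rough way to size it up (paths assume the disk mounts normally at /mnt/disk1; recovered entries are usually named by inode number, so file(1) helps guess what they were):

    # how much was recovered, and what the entries look like
    du -sh /mnt/disk1/lost+found
    ls -la /mnt/disk1/lost+found | head

    # guess the file types of the inode-numbered entries
    file /mnt/disk1/lost+found/* | head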

  9. Ha, guess the wiki needs updating :)

    Anyhow, though the command executed this time, it returned:

    Phase 1 - find and verify superblock...
    Phase 2 - using internal log
            - zero log...
    ERROR: The filesystem has valuable metadata changes in a log which needs to
    be replayed.  Mount the filesystem to replay the log, and unmount it before
    re-running xfs_repair.  If you are unable to mount the filesystem, then use
    the -L option to destroy the log and attempt a repair.
    Note that destroying the log may cause corruption -- please attempt a mount
    of the filesystem before doing this.
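     

    For the record, the sequence that error message is asking for looks roughly like this (device and mountpoint are placeholders; only fall back to -L if the mount itself fails, since zeroing the log can lose recent changes; on Unraid, running the repair against the corresponding /dev/mdX device in maintenance mode keeps parity in sync):

    # try to mount once so the journal gets replayed, then unmount
    mkdir -p /mnt/test
    mount /dev/sdi1 /mnt/test
    umount /mnt/test

    # read-only check first, then the actual repair
    xfs_repair -n /dev/sdi1
    xfs_repair /dev/sdi1

    # only if the mount fails: destroy the log and repair
    xfs_repair -L /dev/sdi1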

     

  10. I started in maintenance mode with all disks assigned, but when I click the unmounted disk, there is no Check Filesystem Status section in settings.  I tried 

    xfs_admin -U generate /dev/sdi1

    but it returns:

    ERROR: The filesystem has valuable metadata changes in a log which needs to
    be replayed.  Mount the filesystem to replay the log, and unmount it before
    re-running xfs_admin.  If you are unable to mount the filesystem, then use
    the xfs_repair -L option to destroy the log and attempt a repair.
    Note that destroying the log may cause corruption -- please attempt a mount
    of the filesystem before doing this.

     

    XFS_repair /dev/sdi1 

    Returns: XFS_repair: command not found
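     

    Noting for anyone who hits the same thing: the command name is case-sensitive, so the capitalized XFS_repair from the wiki isn't found; the lowercase form is the one that exists (with -n available for a read-only check first):

    xfs_repair -n /dev/sdi1
    xfs_repair /dev/sdi1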

  11. I believe I have to start in maintenance mode with the disk unassigned to run a repair, correct?  If I unassign that disk, I cannot start the array (even in maintenance mode); it says, "Too many wrong and/or missing disks!"  If I start maintenance mode with disk1 assigned, I don't see the repair section on that drive's settings page.  Is there another way to repair the filesystem?  Thanks.

     

    EDIT: Let me clarify, there is no Check Filesystem Status section for that disk.

  12. Well, this didn't take long.  I decided to just do a New Config (I understand I'd be unprotected).  I had to reassign all the drives, and I kept everything the same.  On starting the array, it said disk1 was unmountable, and the notification said, "Disk 1 in error state...".  Good thing I backed the drive up externally.  Any help would of course be appreciated.

     

    This rig has hot-swappable HD bays, and I've seen bad SATA interfaces.  Is it possible that's the issue?  If someone thinks so, I could try a different bay.

    tower-diagnostics-20210307-1113.zip

  13. The drive is SATA to the mobo.  If memory serves, this drive isn't terribly old; it replaced what I thought was another bad drive that has so far proven to be fine as an external backup.  Changing the cable sounds like a good idea.

     

    EDIT: Turns out the drive was manufactured in 2013, so I must be thinking of a different drive.  Changed the cable.  Time will tell...

  14. Thought this saga was over...  A few days ago I executed a New Config and the disk rebuilt.  I then repopulated the bit of missing data from some external backups and the lost+found.  All was running fine, but I decided to do an additional external backup to an HD I had lying around; it's a few hundred gigs of data.  During the backup, the disk failed again and we're back to square one.  I'm finishing the external backup from the parity data before I do anything further, but what's going on?

     

    After my backup, here is the repair log:

    Phase 1 - find and verify superblock...
    Phase 2 - using internal log
            - zero log...
            - scan filesystem freespace and inode maps...
            - found root inode chunk
    Phase 3 - for each AG...
            - scan (but don't clear) agi unlinked lists...
            - process known inodes and perform inode discovery...
            - agno = 0
            - agno = 1
            - agno = 2
            - agno = 3
            - process newly discovered inodes...
    Phase 4 - check for duplicate blocks...
            - setting up duplicate extent list...
            - check for inodes claiming duplicate blocks...
            - agno = 0
            - agno = 1
            - agno = 3
            - agno = 2
    No modify flag set, skipping phase 5
    Phase 6 - check inode connectivity...
            - traversing filesystem ...
            - traversal finished ...
            - moving disconnected inodes to lost+found ...
    Phase 7 - verify link counts...
    No modify flag set, skipping filesystem flush and exiting.

    tower-diagnostics-20210306-1731.zip
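     

    For reference, a minimal sketch of one way to pull that kind of external backup with rsync, assuming the spare drive is mounted somewhere like /mnt/disks/backup (placeholder path, e.g. via Unassigned Devices):

    # copy disk1's contents (emulated from parity) to the external drive,
    # preserving attributes and showing progress; re-runs only copy changes
    rsync -avh --progress /mnt/disk1/ /mnt/disks/backup/disk1/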
