josh1014

Posts posted by josh1014

  1. I have been running a pihole docker container on unraid for years. When I upgraded to 6.12.4, my server began crashing regularly. I isolated the issue to pihole, as the crashes stopped when I kept pihole offline. Upon further forum reading, I learned of the call trace issues. Pihole is my only container that was running with a custom IP on br0, so I changed to ipvlan and started pihole with a custom IP on eth0. It has been working fine in general, but periodically throughout the day it becomes unresponsive for 30 seconds to 2 minutes, leaving my devices unable to resolve external addresses or load websites. During the downtime the unraid UI is accessible and other docker containers are accessible, but the pihole webUI is not. From the console inside the pihole container, I can watch the log and nothing exciting is happening. I have looked at the pihole logs and diagnostics and see nothing of interest, and the unraid syslog is also silent during these brief periods.


    Any suggestions on how I can dig deeper into what is causing this one container to become temporarily unresponsive throughout the day?
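
    So far the only idea I have had is to log the outages from the unraid console with a quick loop like the one below (just a rough sketch: 192.168.1.50 is a placeholder for the pihole's custom IP, and it assumes nslookup is available on the host):

    # crude outage logger (sketch only): note a timestamp whenever pihole stops answering DNS
    while true; do
      if ! nslookup google.com 192.168.1.50 >/dev/null 2>&1; then
        echo "$(date '+%F %T') pihole not answering" >> /tmp/pihole-outages.log
      fi
      sleep 10
    done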

  2. Hello, wondering if an expert can immediately identify the problem here to save me some time messing with my app subfolder conf. 
     

    I have my webapp accessible via https://mydomain.duckdns.org:444/appname/
     

    This successfully brings you to the login page for this webapp. Once you submit your credentials, you get sent to:


    https://mydomain.duckdns.org/appname/entrance/

     

    instead of 

     

    https://mydomain.duckdns.org:444/appname/entrance/

     

    If you go ahead and add the port back in, then you're fine the rest of the way, but that initial login causes the port to disappear from the URL.

     

    location ^~ /appname {
        auth_basic "Restricted";
        auth_basic_user_file /config/nginx/.htpasswd;
        include /config/nginx/proxy.conf;
        proxy_pass http://appname:80;
    }

     

    Any ideas what I need to add to solve this? Thanks!
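
    For what it's worth, the direction I was planning to experiment with is making sure the app sees the external :444 port, along the lines of the untested sketch below (the Host and proxy_redirect lines are my guesses, not something I have confirmed works):

    location ^~ /appname {
        auth_basic "Restricted";
        auth_basic_user_file /config/nginx/.htpasswd;
        include /config/nginx/proxy.conf;
        # pass the original Host header (including :444) so the app builds redirects with the port
        proxy_set_header Host $http_host;
        # and/or rewrite any Location header that comes back without the port
        proxy_redirect https://$host/ https://$host:444/;
        proxy_pass http://appname:80;
    }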

  3. Hi, I am about to make some changes to my array and was wondering if someone could confirm best practice.  I currently have a 4TB parity disk, four 2TB data disks and one 4TB data disk.  All disks are healthy and parity is valid.  I have three new 10TB drives.  I am planning to add dual 10TB parity disks, convert the current 4TB parity disk into a data disk, and remove a full 2TB data disk and rebuild its data onto a new 10TB data disk.  

     

    I was planning on pre-clearing each of the three new 10TB disks before making any changes to the array.  Could someone please tell me the best practice order of operations to follow after I am done pre-clearing all the new disks?  Thanks!

  4. Hoping for some help with a new Q25B build:

     

    Upgrading my current unraid box, which is in a Q08B using the SUPERMICRO MBD-X7SPA-HF-O Mini ITX with an Atom D510 processor.

     

    Use case is storage plus probably ~5 dockers, the most intensive being plex (max 2-3 streams). No VMs.

     

    I already have the drives from my current array and need help selecting a cost-effective but ample motherboard/processor/memory combo and an SSD cache drive for the dockers. All the examples of recent builds I've found seem like overkill for my purposes, and I'm looking not to spend more than necessary to cover my bases.

     

    Any thoughts would be much appreciated, thanks!

  5. Just to preface, after reading many forum threads I am aware that I did not handle this troubleshooting process in the ideal way initially. Hoping for some guidance at my current point. This started when I noticed that my parity disk had thrown thousands of errors per the unraid main page. I had not run a parity check in a few weeks, and all previous parity checks had been error free. None of my 3 data drives was showing any errors on the main page at this point, and everything was still green-balled.

    I unfortunately decided to reboot without grabbing a syslog, then ran a parity check which progressed at an incredibly slow pace. At that point I cancelled the parity check and bought a new HDD to replace my failing parity drive. After performing the swap, I started the parity-sync, which has now finished with 7 errors, all of them on disk3 (a 2TB drive that is about half full of data). There were no other errors on any of the other drives.

    At this point, I assume there is no real way to determine whether I suffered any data loss. Posted below is the syslog showing the errors during the parity-sync, as well as the SMART report for disk3. My best interpretation is that I probably did not suffer any data loss, but that I should replace disk3 with a new drive and let my new parity disk rebuild it. Please advise, thanks!
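
    In case it helps with the advice, if an extended self-test on disk3 is the next step, I assume it would be something along these lines (with sdX standing in for disk3's actual device):

    smartctl -t long /dev/sdX    # start an extended self-test on disk3
    smartctl -a /dev/sdX         # re-check the full report once the test finishes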

     

    === START OF INFORMATION SECTION ===
    Device Model:     WDC WD20EARX-00PASB0
    Serial Number:    WD-WCAZAH474551
    Firmware Version: 51.0AB51
    User Capacity:    2,000,398,934,016 bytes
    Device is:        Not in smartctl database [for details use: -P showall]
    ATA Version is:   8
    ATA Standard is:  Exact ATA specification draft version not indicated
    Local Time is:    Mon Nov  3 20:49:25 2014 EST
    SMART support is: Available - device has SMART capability.
    SMART support is: Enabled

    === START OF READ SMART DATA SECTION ===
    SMART overall-health self-assessment test result: PASSED

    General SMART Values:
    Offline data collection status:  (0x82) Offline data collection activity
        was completed without error.
        Auto Offline Data Collection: Enabled.
    Self-test execution status:      ( 113) The previous self-test completed having
        the read element of the test failed.
    Total time to complete Offline
    data collection:                (38880) seconds.
    Offline data collection
    capabilities:                    (0x7b) SMART execute Offline immediate.
        Auto Offline data collection on/off support.
        Suspend Offline collection upon new command.
        Offline surface scan supported.
        Self-test supported.
        Conveyance Self-test supported.
        Selective Self-test supported.
    SMART capabilities:            (0x0003) Saves SMART data before entering
        power-saving mode.
        Supports SMART auto save timer.
    Error logging capability:        (0x01) Error logging supported.
        General Purpose Logging supported.
    Short self-test routine
    recommended polling time:        (   2) minutes.
    Extended self-test routine
    recommended polling time:        ( 255) minutes.
    Conveyance self-test routine
    recommended polling time:        (   5) minutes.
    SCT capabilities:              (0x3035) SCT Status supported.
        SCT Feature Control supported.
        SCT Data Table supported.

    SMART Attributes Data Structure revision number: 16
    Vendor Specific SMART Attributes with Thresholds:
    ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
      1 Raw_Read_Error_Rate     0x002f   145   145   051    Pre-fail  Always       -       213849
      3 Spin_Up_Time            0x0027   213   174   021    Pre-fail  Always       -       4316
      4 Start_Stop_Count        0x0032   100   100   000    Old_age   Always       -       810
      5 Reallocated_Sector_Ct   0x0033   200   200   140    Pre-fail  Always       -       37
      7 Seek_Error_Rate         0x002e   200   200   000    Old_age   Always       -       0
      9 Power_On_Hours          0x0032   080   080   000    Old_age   Always       -       15027
     10 Spin_Retry_Count        0x0032   100   100   000    Old_age   Always       -       0
     11 Calibration_Retry_Count 0x0032   100   253   000    Old_age   Always       -       0
     12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -       14
    192 Power-Off_Retract_Count 0x0032   200   200   000    Old_age   Always       -       7
    193 Load_Cycle_Count        0x0032   198   198   000    Old_age   Always       -       6107
    194 Temperature_Celsius     0x0022   128   116   000    Old_age   Always       -       22
    196 Reallocated_Event_Count 0x0032   192   192   000    Old_age   Always       -       8
    197 Current_Pending_Sector  0x0032   200   200   000    Old_age   Always       -       0
    198 Offline_Uncorrectable   0x0030   200   200   000    Old_age   Offline      -       1
    199 UDMA_CRC_Error_Count    0x0032   200   200   000    Old_age   Always       -       0
    200 Multi_Zone_Error_Rate   0x0008   128   113   000    Old_age   Offline      -       19442

    SMART Error Log Version: 1
    No Errors Logged

    SMART Self-test log structure revision number 1
    Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
    # 1  Short offline       Completed: read failure      10%        15024       1203316928

    SMART Selective self-test log data structure revision number 1
     SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
        1        0        0  Not_testing
        2        0        0  Not_testing
        3        0        0  Not_testing
        4        0        0  Not_testing
        5        0        0  Not_testing
    Selective self-test flags (0x0):
      After scanning selected spans, do NOT read-scan remainder of disk.
    If Selective self-test is pending on power-up, resume after 0 minute delay.

    syslog.txt