Mason736

Everything posted by Mason736

  1. Does anyone know if the compatibility issue has been fixed in subsequent releases of Unraid? I'd love to go back to being able to spin down my array without drives dropping from it.
  2. I recently installed Netdata and noticed that none of my Docker containers show up in the app. I run most of the containers that I expose through my NGINX reverse proxy on a custom br0 network, and I intended to do the same with Netdata. If Netdata is on br0 rather than bridge, is that the reason it's not seeing the other containers?
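     A minimal sketch of how the Netdata container is commonly given visibility into Docker, by mounting the Docker socket read-only (the image and mounts below follow Netdata's published Docker instructions, but treat the exact set of mounts as an assumption to check against your template; the br0-vs-bridge question is separate from this):
       # Illustrative example: run the official netdata image with the Docker
       # socket mounted read-only so it can resolve container names and metrics,
       # plus the usual read-only host mounts for system stats.
       docker run -d --name=netdata \
         -v /var/run/docker.sock:/var/run/docker.sock:ro \
         -v /proc:/host/proc:ro \
         -v /sys:/host/sys:ro \
         netdata/netdata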
  3. I got it fixed! This post was so helpful for figuring out the issue. I was trying to cross the br0 and bridge networks.
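     For anyone hitting the same thing, a rough sketch of one generic way to keep the proxy and the app on the same Docker network, assuming a custom network named br0 and an app container named overseerr (both names are illustrative):
       # List the Docker networks on the host
       docker network ls

       # Attach the app container to the same custom network as the proxy so
       # the two containers can reach each other directly instead of crossing
       # br0 and the default bridge
       docker network connect br0 overseerr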
  4. So I made progress. I switched to the official release of the NGINX app. The SSL cert now authorizes and goes through; however, I'm now getting a 502 Bad Gateway error.
  5. Hello fellow Unraiders. I decided to set up NGINX after debating it for a while. For some reason, I can't get the final piece to work. I followed many of the tutorials: set up DuckDNS, set up port forwarding, created a subdomain for overseer (trial app), etc. If I go to overseer.mydomain.com, I get a "this site cannot be reached / overseer.mydomain.com refused to connect" error. However, if I put in my ISP IP address:8080 (the port number I set up), I can get to the page showing "Congratulations! You've successfully started the Nginx Proxy Manager. If you're seeing this site then you're trying to access a host that isn't setup yet." I'm not sure what else to do to troubleshoot. Additionally, I keep getting "internal error" when trying to set up the SSL cert for the host (overseer).
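     For context, that combination of symptoms (the raw ISP IP on port 8080 works, the subdomain is refused, and the SSL request fails with an internal error) usually points at ports 80 and 443 not being forwarded to the proxy container. A hedged sketch of the port mappings Nginx Proxy Manager normally needs, assuming the jc21/nginx-proxy-manager image:
       # Sketch: publish 80 (HTTP / Let's Encrypt HTTP-01 challenge), 443 (HTTPS
       # for the proxied subdomains) and 81 (the NPM admin UI). The router must
       # forward external 80 and 443 to these host ports, not just 8080.
       docker run -d --name=nginx-proxy-manager \
         -p 80:80 -p 443:443 -p 81:81 \
         jc21/nginx-proxy-manager:latest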
  6. Yeah, I had set all of the ST8000VN004 drives to never spin down, and funnily enough, not one of them has dropped. It's only been the VN0022 drives that have dropped. I did upgrade the LSI controller firmware after reading that post; maybe that is what's causing the VN0022 drives to drop now. I've just made a global change so that none of them spin down. Let's see if that changes anything. Thanks for the help.
  7. Over the past several weeks, I've had numerous drives randomly drop from the array. It's not the same drive every time; random drives out of the 11 in the array drop, and all of them are Seagate 8TB drives. I'm wondering if my backplane is possibly starting to fail. How would I verify this, or troubleshoot it? Diagnostics uploaded. Here are my specs:
     Server: Supermicro 4U X9DRi-LN4F+ 1.2, 36-bay 3.5" SAS2, dual PSU
     Chassis/Case: Supermicro 4U 36-bay 3.5" single-node server, SSG-6047R-E1R36N
     Motherboard: X9DRi-LN4F+ Rev. V1.2, integrated IPMI 2.0 management
     CPU: 2x E5-2690 v2
     Memory: 128 GB DDR3 RAM
     Controller: 1x LSI 9211-8i, IT mode, high profile
     NIC: integrated onboard quad 1Gb Ethernet ports (4)
     Backplanes: BPN-SAS2-846EL1 24-port 4U SAS2 6Gbps single-expander; BPN-SAS2-826EL1 12-port 2U SAS2 6Gbps single-expander
     PCI-E expansion slots: low profile; 4x PCI-E 3.0 x16, 1x PCI-E 3.0 x8, 1x PCI-E 3.0 x4 (in x8 slot)
     Front bays: 24x 3.5" drive bays
     Rear bays: 12x 3.5" drive bays
     PSU slots: dual 1280W power supply (PWS-1K28P-SQ)
     archie-diagnostics-20210629-1517.zip
  8. One of my HDDs has recently thrown a couple of errors in the array. I've removed it and then re-added it twice so far. I've attached the SMART extended test log, but I'm not sure how to read it. What should I be looking at? ST8000VN0022-2EL112_ZA1BS24K-20210615-0734.txt
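     As a general pointer (not specific to the attached log), the values most people check first are the reallocation and pending-sector counters; a hedged smartctl example, assuming the drive shows up as /dev/sdX:
       # Overall health verdict plus the full attribute table
       smartctl -H -A /dev/sdX

       # Attributes usually worth watching on a drive throwing read errors:
       #   5   Reallocated_Sector_Ct
       #   187 Reported_Uncorrect
       #   197 Current_Pending_Sector
       #   198 Offline_Uncorrectable
       # Non-zero and growing raw values here generally justify an RMA.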
  9. Hello fellow Unraiders. I recently added a new cache pool, a single Crucial 2TB MX500 SSD, given the capability to handle multiple cache pools and the wonderful tutorial by SpaceInvaderOne. I am using this cache drive for the sole purpose of handling my downloads from Sonarr, Radarr, etc., rather than hitting my main cache pool that houses my VMs, Docker containers, and other items. Ever since installing the drive last week, I've been getting temperature warnings whenever the drive is being hit by downloads and NZBGet unpack/processing activity. The warnings are for temperatures around 117F to 125F. I've never received these warnings for my other SSDs, so I'm wondering if the Crucial drives just run hotter than Samsung EVO SSDs. These temperatures don't seem harmful, based on the research I've done, but I'm skeptical. Thoughts?
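     For what it's worth, those readings convert to the mid-40s to low-50s Celsius, which sits inside the 0-70 C operating range Crucial lists for the MX500 (and Unraid's warning threshold can be raised per device under that disk's settings). A quick conversion:
       # Convert the reported warning temperatures from Fahrenheit to Celsius
       awk 'BEGIN { for (f = 117; f <= 125; f += 4) printf "%d F = %.1f C\n", f, (f - 32) * 5 / 9 }'
       # 117 F = 47.2 C, 121 F = 49.4 C, 125 F = 51.7 C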
  10. I can confirm this. Ever since I changed the ST8000VN004 drives (4 of them) to never spin down and always stay spun up, they have been fine and have not dropped from the array.
  11. Awesome. I'll start the RMA process again for a brand-new drive. I'm starting to really not like Seagate. Separately, on a good note, I changed all of the ST8000VN004 drives to never spin down, and none of them have dropped from the array. The drop/error has something to do with the spin-up, but I'm not technically advanced enough to understand the mechanics behind it.
  12. Here is a copy of the latest SMART extended test. Keep in mind that this drive is brand new. I'm not sure how to interpret it. archie-smart-20210423-0922.zip
  13. As of yesterday, all of the ST8000VN004 drives are dropping out of the array. The ST8000VN0022s are working perfectly.
  14. Update again: the ST8000VN004 drives are now continuing to drop out of the array. I'm unsure how to proceed at this point.
  15. Update: the updated firmware on the 9211-8i definitely helped. However, I did receive a couple of new ST8000VN004 drives and just stuck them in the array without disabling EPC or low-voltage spin-up. They dropped from the array almost instantly. Once I used the Seagate tools and disabled those features, they have been just fine. I'm looking forward to whatever the fix may be for the Seagate drives.
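     For anyone who lands here later, a sketch of the kind of SeaChest commands involved (binary names and flags are from memory and vary by release, so verify against the SeaChest/openSeaChest documentation; /dev/sgX stands in for the handle the tools' scan output reports):
       # Find the drive handles first
       SeaChest_PowerControl --scan

       # Disable the EPC (Extended Power Conditions) feature on the drive
       SeaChest_PowerControl -d /dev/sgX --EPCfeature disable

       # Disable low-current (low-voltage) spin-up; in some releases this
       # option lives in the Configure utility rather than PowerControl
       SeaChest_Configure -d /dev/sgX --lowCurrentSpinup disable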
  16. Here is the log dump:
     {"type":"log","@timestamp":"2021-04-13T14:58:11-04:00","tags":["info","plugins-service"],"pid":8,"message":"Plugin \"osquery\" is disabled."}
     {"type":"log","@timestamp":"2021-04-13T14:58:11-04:00","tags":["info","plugins-service"],"pid":8,"message":"Plugin \"osquery\" is disabled."}
     {"type":"log","@timestamp":"2021-04-13T14:58:11-04:00","tags":["warning","config","deprecation"],"pid":8,"message":"Support for setting server.host to \"0\" in kibana.yml is deprecated and will be removed in Kibana version 8.0.0. Instead use \"0.0.0.0\" to bind to all interfaces."}
     {"type":"log","@timestamp":"2021-04-13T14:58:11-04:00","tags":["warning","config","deprecation"],"pid":8,"message":"Config key [monitoring.cluster_alerts.email_notifications.email_address] will be required for email notifications to work in 8.0.\""}
     {"type":"log","@timestamp":"2021-04-13T14:58:12-04:00","tags":["info","plugins-system"],"pid":8,"message":"Setting up [100] plugins: [taskManager,licensing,globalSearch,globalSearchProviders,banners,code,usageCollection,xpackLegacy,telemetryCollectionManager,telemetry,telemetryCollectionXpack,kibanaUsageCollection,securityOss,share,newsfeed,mapsLegacy,kibanaLegacy,translations,legacyExport,embeddable,uiActionsEnhanced,expressions,charts,esUiShared,bfetch,data,home,observability,console,consoleExtensions,apmOss,searchprofiler,painlessLab,grokdebugger,management,indexPatternManagement,advancedSettings,fileUpload,savedObjects,visualizations,visTypeVislib,visTypeVega,visTypeTimelion,features,licenseManagement,watcher,canvas,visTypeTagcloud,visTypeTable,visTypeMetric,visTypeMarkdown,tileMap,regionMap,visTypeXy,graph,timelion,dashboard,dashboardEnhanced,visualize,visTypeTimeseries,inputControlVis,discover,discoverEnhanced,savedObjectsManagement,spaces,security,savedObjectsTagging,maps,lens,reporting,lists,encryptedSavedObjects,dataEnhanced,dashboardMode,cloud,upgradeAssistant,snapshotRestore,fleet,indexManagement,rollup,remoteClusters,crossClusterReplication,indexLifecycleManagement,enterpriseSearch,beatsManagement,transform,ingestPipelines,eventLog,actions,alerts,triggersActionsUi,stackAlerts,ml,securitySolution,case,infra,monitoring,logstash,apm,uptime]"}
     {"type":"log","@timestamp":"2021-04-13T14:58:12-04:00","tags":["info","plugins","taskManager"],"pid":8,"message":"TaskManager is identified by the Kibana UUID: 5d53dbcc-8093-4b66-bdf1-2a395d8bac01"}
     {"type":"log","@timestamp":"2021-04-13T14:58:12-04:00","tags":["warning","plugins","security","config"],"pid":8,"message":"Generating a random key for xpack.security.encryptionKey. To prevent sessions from being invalidated on restart, please set xpack.security.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command."}
     {"type":"log","@timestamp":"2021-04-13T14:58:12-04:00","tags":["warning","plugins","security","config"],"pid":8,"message":"Session cookies will be transmitted over insecure connections. This is not recommended."}
     {"type":"log","@timestamp":"2021-04-13T14:58:12-04:00","tags":["warning","plugins","reporting","config"],"pid":8,"message":"Generating a random key for xpack.reporting.encryptionKey. To prevent sessions from being invalidated on restart, please set xpack.reporting.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command."}
     {"type":"log","@timestamp":"2021-04-13T14:58:12-04:00","tags":["warning","plugins","reporting","config"],"pid":8,"message":"Chromium sandbox provides an additional layer of protection, but is not supported for Linux CentOS 8.3.2011\n OS. Automatically setting 'xpack.reporting.capture.browser.chromium.disableSandbox: true'."}
     {"type":"log","@timestamp":"2021-04-13T14:58:12-04:00","tags":["warning","plugins","encryptedSavedObjects"],"pid":8,"message":"Saved objects encryption key is not set. This will severely limit Kibana functionality. Please set xpack.encryptedSavedObjects.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command."}
     {"type":"log","@timestamp":"2021-04-13T14:58:12-04:00","tags":["warning","plugins","fleet"],"pid":8,"message":"Fleet APIs are disabled because the Encrypted Saved Objects plugin is missing encryption key. Please set xpack.encryptedSavedObjects.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command."}
     {"type":"log","@timestamp":"2021-04-13T14:58:12-04:00","tags":["warning","plugins","actions","actions"],"pid":8,"message":"APIs are disabled because the Encrypted Saved Objects plugin is missing encryption key. Please set xpack.encryptedSavedObjects.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command."}
     {"type":"log","@timestamp":"2021-04-13T14:58:12-04:00","tags":["warning","plugins","alerts","plugins","alerting"],"pid":8,"message":"APIs are disabled because the Encrypted Saved Objects plugin is missing encryption key. Please set xpack.encryptedSavedObjects.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command."}
     {"type":"log","@timestamp":"2021-04-13T14:58:13-04:00","tags":["info","plugins","monitoring","monitoring"],"pid":8,"message":"config sourced from: production cluster"}
     {"type":"log","@timestamp":"2021-04-13T14:58:13-04:00","tags":["info","savedobjects-service"],"pid":8,"message":"Waiting until all Elasticsearch nodes are compatible with Kibana before starting saved objects migrations..."}
     {"type":"log","@timestamp":"2021-04-13T14:58:13-04:00","tags":["warning","plugins","licensing"],"pid":8,"message":"License information could not be obtained from Elasticsearch due to [illegal_argument_exception] request [/_xpack] contains unrecognized parameter: [accept_enterprise] :: {\"path\":\"/_xpack?accept_enterprise=true\",\"statusCode\":400,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[{\\\"type\\\":\\\"illegal_argument_exception\\\",\\\"reason\\\":\\\"request [/_xpack] contains unrecognized parameter: [accept_enterprise]\\\"}],\\\"type\\\":\\\"illegal_argument_exception\\\",\\\"reason\\\":\\\"request [/_xpack] contains unrecognized parameter: [accept_enterprise]\\\"},\\\"status\\\":400}\"} error"}
     {"type":"log","@timestamp":"2021-04-13T14:58:13-04:00","tags":["warning","plugins","monitoring","monitoring"],"pid":8,"message":"X-Pack Monitoring Cluster Alerts will not be available: X-Pack plugin is not installed on the Elasticsearch cluster."}
     {"type":"log","@timestamp":"2021-04-13T14:58:13-04:00","tags":["error","savedobjects-service"],"pid":8,"message":"This version of Kibana (v7.12.0) is incompatible with the following Elasticsearch nodes in your cluster: v6.6.2 @ 172.17.0.9:9200 (172.17.0.9)"}
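     The relevant part is the final error: Kibana v7.12.0 refuses to start its saved-objects migrations against an Elasticsearch v6.6.2 node, so the two containers are on mismatched versions. A minimal sketch of the usual fix, pinning both images to the same release (7.12.0 is used here only as an example tag):
       # Pull matching versions of both images, then point the Kibana container
       # at the pinned Kibana image and the Elasticsearch container at the
       # pinned Elasticsearch image.
       docker pull docker.elastic.co/elasticsearch/elasticsearch:7.12.0
       docker pull docker.elastic.co/kibana/kibana:7.12.0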
  17. tiwing... I have the exact same box as you. I bought mine from the fine folks at UnixSurplus, and have since bought some other boxes for Windows and ESXi clusters. The 36-bay Supermicro box is a beast; I'm running it with dual E5-2690 v2s and 128 GB of RAM (upgraded from when I bought it). For Plex, I bought a low-profile GeForce 1050 Ti that I use exclusively for transcoding, using the Nvidia drivers. I keep two libraries, one of 4K movies and another of 1080p content. It works great, and I don't have any complaints.
  18. Hi, I'm having an issue with the Kibana docker. I have both Elasticsearch and Kibana set up. However, when I try to use Kibana, I get a plain white page with the message "Kibana server is not ready yet". I'm unsure how to fix this. Any help would be great. Thanks!
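     One quick check that narrows this down, assuming Elasticsearch is reachable on its default port 9200 (the node address from the log above is used here as an example): ask it for its version and compare that against the Kibana image tag, since Kibana stays on this page until it finds a compatible Elasticsearch.
       # The JSON response includes version.number; Kibana generally needs an
       # Elasticsearch node of a matching version before it will finish starting.
       curl -s http://172.17.0.9:9200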
  19. The LSI 9211-8i HBA has been updated to firmware 20.00.07.00. See the updated diagnostics. archie-diagnostics-20210413-1259.zip
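     For reference, a hedged way to confirm which firmware the HBA is actually running after a flash, assuming LSI's sas2flash utility is available on the system or boot stick:
       # Lists every LSI SAS2 controller along with its firmware version
       # (e.g. 20.00.07.00) and BIOS version
       sas2flash -listall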
  20. Here are the diagnostics. The array is currently being rebuilt. I can post additional ones in a couple hours once it has completed. archie-diagnostics-20210413-0857.zip
  21. Further update... I followed the steps and disabled EPC and low-voltage spin-up. However, BOTH ST8000VN004 drives are still dropping from the array with read errors.
  22. So I woke up this morning, and two of the ST8000VN004 drives had dropped from the array with read errors. They are both brand-new drives from Seagate. I'm convinced the issue is with EPC and low-voltage spin-up, so I followed the above instructions. We will see if they, or any of the other VN004 drives, drop out of the array again.
  23. Thanks for the heads-up. My drives aren't dropping off the system; they are getting a significant number of read errors, and SMART is failing. It happened before I upgraded to 6.9.x as well.