Mason736

Members
  • Posts: 40
  • Joined
  • Last visited
Everything posted by Mason736

  1. Yeah, I had set all of the ST8000VN004 drives to never spin down, and funnily enough, not one of them has dropped. It's only been the VN0022 drives that have dropped. I did upgrade the firmware of the LSI controller after reading that post, so maybe that is what is causing the VN0022 drives to drop now. I just did a global update so that none of them spin down. Let's see if that changes anything. Thanks for the help.
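     For reference, the per-drive setting lives under each disk's settings (Spin down delay: Never), with the global default under Settings > Disk Settings. Whether a drive has actually stayed spun up can also be checked from the console; a minimal sketch, with /dev/sdX as a placeholder and assuming a SATA drive that responds to hdparm:
        # Report the drive's current power state (active/idle vs standby)
        hdparm -C /dev/sdX
        # Optionally also disable the drive's own internal standby timer
        hdparm -S 0 /dev/sdX
     Note that an hdparm change like this does not persist across power cycles the way the Unraid setting does, so the GUI setting is the usual route.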
  2. Over the past several weeks, I've had numerous drives randomly drop from the array. It's not the same drive every time; it's random drives out of the 11 that are in the array. All of them are Seagate 8TB drives. I'm wondering if my backplane is starting to fail. How would I verify or troubleshoot this? Diagnostics uploaded. Here are my specs:
     Supermicro 4U X9DRi-LN4F+ 1.2, 36-bay 3.5" SAS2, 2x PSU, includes CPU/MEM
     Performance specs:
     CPU family: E5-2690 v2 x 2
     Memory: 128 GB of DDR3 RAM
     Controllers: 1x LSI 9211-8i, IT mode, high profile
     NIC: integrated onboard quad 1Gb Ethernet ports (4)
     Secondary chassis/motherboard specs:
     Chassis/case: Supermicro 4U 36-bay 3.5" single-node server, SSG-6047R-E1R36N
     Motherboard: X9DRi-LN4F+ Rev. 1.2, integrated IPMI 2.0 management
     Backplanes (2):
     - BPN-SAS2-846EL1 24-port 4U SAS2 6Gbps single-expander backplane
     - BPN-SAS2-826EL1 12-port 2U SAS2 6Gbps single-expander backplane
     PCI-E expansion slots: low profile, 4x x16 PCI-E 3.0, 1x x8 PCI-E 3.0, 1x x4 PCI-E 3.0 (in x8)
     Front bays: 24x 3.5" drive bays
     Rear bays: 12x 3.5" drive bays
     PSU slots: dual 1280W power supply (PWS-1K28P-SQ)
     archie-diagnostics-20210629-1517.zip
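     When random drives drop like this, it is often worth separating drive-level faults from controller/backplane/cabling problems before blaming the backplane. A rough sketch of the usual checks from the Unraid console (device names are placeholders, and this is only one way to look):
        # Look for HBA resets and link errors in the syslog (the 9211-8i uses the mpt2sas driver)
        grep -iE 'mpt2sas|reset|link' /var/log/syslog | tail -n 50
        # Check each suspect drive's SMART counters; a rising CRC error count usually
        # points at the cable/backplane path rather than the drive itself
        smartctl -A /dev/sdX | grep -Ei 'Reallocated|Pending|Uncorrectable|CRC'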
  3. One of my HDDs has recently thrown a couple of errors in the array. I've removed it and re-added it twice so far. I've attached the SMART extended test log, but I'm not sure how to read it. What should I be looking at? ST8000VN0022-2EL112_ZA1BS24K-20210615-0734.txt
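     As a general pointer (not specific to this particular report), the fields most people check first in a SMART dump are the reallocated/pending/uncorrectable sector counts, the CRC error count, and the extended self-test result; a quick way to pull them out of a saved report, using the attached filename:
        grep -Ei 'Reallocated_Sector|Current_Pending_Sector|Offline_Uncorrectable|UDMA_CRC_Error|Extended offline' ST8000VN0022-2EL112_ZA1BS24K-20210615-0734.txt
     Non-zero reallocated/pending/uncorrectable counts generally indicate failing media, while a climbing CRC count points at the cable, backplane, or controller path.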
  4. Hello, fellow Unraiders. I recently added a new cache pool, a single Crucial 2TB MX500 SSD, given the capability to handle multiple cache pools and the wonderful tutorial by SpaceInvaderOne. I am using this cache drive solely for my downloads from Sonarr, Radarr, etc., rather than hitting my main cache pool that houses my VMs, Docker containers, and other items. Ever since installing the drive last week, I've been getting temperature warnings whenever the drive is being hit by downloads and NZBGet unpack/post-processing activity. The warnings are for temperatures around 117F to 125F. I've never received these warnings for my other SSDs, so I'm wondering if the Crucial drives just run hotter than Samsung EVO SSDs. These temperatures don't seem harmful based on the research I've done, but I'm skeptical. Thoughts?
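     For context, 117-125F is roughly 47-52C, which is typically above the default warning threshold Unraid applies to disks but still within the rated operating range of most SATA SSDs (often up to around 70C); the per-disk warning threshold can be raised in that disk's settings if the alerts are just noise. A quick way to watch the reported temperature during an unpack, with /dev/sdX as a placeholder:
        # Poll the SSD's SMART temperature every 30 seconds (Ctrl-C to stop)
        watch -n 30 "smartctl -A /dev/sdX | grep -i temperature"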
  5. I can confirm this. Ever since I changed the ST8000VN004 drives (4 of them) to never spin down and always stay spun up, they have been fine and have not dropped from the array.
  6. Awesome. I'll start the RMA process again for a brand new drive. I'm starting to really not like Seagate. Separately, on a good note, I changed the spin-down setting for all of the ST8000VN004 drives to never spin down, and none of them have dropped from the array since. The drop/error has something to do with spin-up, but I'm not technically advanced enough to understand the mechanics behind it.
  7. Here is a copy of the latest SMART extended test. Keep in mind, this drive is brand new. I'm not sure how to interpret this. archie-smart-20210423-0922.zip
  8. As of yesterday, all of the ST8000VN004 drives are dropping out of the array. The ST8000VN0022s are working perfectly.
  9. Update again: the ST8000VN004 drives are now continuing to drop out of the array. I'm unsure how to proceed at this point.
  10. Update: the updated firmware on the 9211-8i definitely helped. However, I did receive a couple of new ST8000VN004 drives and just stuck them in the array without disabling EPC or Low Voltage Spin Up. They dropped from the array almost instantly. Once I used the Seagate tools and disabled those features, they have been just fine. I'm looking forward to whatever the fix may be for the Seagate drives.
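     For reference, these drive-side changes are normally made with Seagate's SeaChest (or openSeaChest) utilities. A rough sketch of the commands, assuming the drive shows up as /dev/sg2 (a placeholder) and noting that exact option names can vary between tool versions; what the thread calls low voltage spin up is the feature SeaChest refers to as low current spinup:
        # List attached drives and their handles
        SeaChest_Basics --scan
        # Disable the Extended Power Conditions (EPC) feature on the drive
        SeaChest_PowerControl -d /dev/sg2 --EPCfeature disable
        # Disable low current spinup
        SeaChest_Configure -d /dev/sg2 --lowCurrentSpinup disable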
  11. Here is the log dump:
     {"type":"log","@timestamp":"2021-04-13T14:58:11-04:00","tags":["info","plugins-service"],"pid":8,"message":"Plugin \"osquery\" is disabled."}
     {"type":"log","@timestamp":"2021-04-13T14:58:11-04:00","tags":["info","plugins-service"],"pid":8,"message":"Plugin \"osquery\" is disabled."}
     {"type":"log","@timestamp":"2021-04-13T14:58:11-04:00","tags":["warning","config","deprecation"],"pid":8,"message":"Support for setting server.host to \"0\" in kibana.yml is deprecated and will be removed in Kibana version 8.0.0. Instead use \"0.0.0.0\" to bind to all interfaces."}
     {"type":"log","@timestamp":"2021-04-13T14:58:11-04:00","tags":["warning","config","deprecation"],"pid":8,"message":"Config key [monitoring.cluster_alerts.email_notifications.email_address] will be required for email notifications to work in 8.0."}
     {"type":"log","@timestamp":"2021-04-13T14:58:12-04:00","tags":["info","plugins-system"],"pid":8,"message":"Setting up [100] plugins: [taskManager,licensing,globalSearch,globalSearchProviders,banners,code,usageCollection,xpackLegacy,telemetryCollectionManager,telemetry,telemetryCollectionXpack,kibanaUsageCollection,securityOss,share,newsfeed,mapsLegacy,kibanaLegacy,translations,legacyExport,embeddable,uiActionsEnhanced,expressions,charts,esUiShared,bfetch,data,home,observability,console,consoleExtensions,apmOss,searchprofiler,painlessLab,grokdebugger,management,indexPatternManagement,advancedSettings,fileUpload,savedObjects,visualizations,visTypeVislib,visTypeVega,visTypeTimelion,features,licenseManagement,watcher,canvas,visTypeTagcloud,visTypeTable,visTypeMetric,visTypeMarkdown,tileMap,regionMap,visTypeXy,graph,timelion,dashboard,dashboardEnhanced,visualize,visTypeTimeseries,inputControlVis,discover,discoverEnhanced,savedObjectsManagement,spaces,security,savedObjectsTagging,maps,lens,reporting,lists,encryptedSavedObjects,dataEnhanced,dashboardMode,cloud,upgradeAssistant,snapshotRestore,fleet,indexManagement,rollup,remoteClusters,crossClusterReplication,indexLifecycleManagement,enterpriseSearch,beatsManagement,transform,ingestPipelines,eventLog,actions,alerts,triggersActionsUi,stackAlerts,ml,securitySolution,case,infra,monitoring,logstash,apm,uptime]"}
     {"type":"log","@timestamp":"2021-04-13T14:58:12-04:00","tags":["info","plugins","taskManager"],"pid":8,"message":"TaskManager is identified by the Kibana UUID: 5d53dbcc-8093-4b66-bdf1-2a395d8bac01"}
     {"type":"log","@timestamp":"2021-04-13T14:58:12-04:00","tags":["warning","plugins","security","config"],"pid":8,"message":"Generating a random key for xpack.security.encryptionKey. To prevent sessions from being invalidated on restart, please set xpack.security.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command."}
     {"type":"log","@timestamp":"2021-04-13T14:58:12-04:00","tags":["warning","plugins","security","config"],"pid":8,"message":"Session cookies will be transmitted over insecure connections. This is not recommended."}
     {"type":"log","@timestamp":"2021-04-13T14:58:12-04:00","tags":["warning","plugins","reporting","config"],"pid":8,"message":"Generating a random key for xpack.reporting.encryptionKey. To prevent sessions from being invalidated on restart, please set xpack.reporting.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command."}
     {"type":"log","@timestamp":"2021-04-13T14:58:12-04:00","tags":["warning","plugins","reporting","config"],"pid":8,"message":"Chromium sandbox provides an additional layer of protection, but is not supported for Linux CentOS 8.3.2011\n OS. Automatically setting 'xpack.reporting.capture.browser.chromium.disableSandbox: true'."}
     {"type":"log","@timestamp":"2021-04-13T14:58:12-04:00","tags":["warning","plugins","encryptedSavedObjects"],"pid":8,"message":"Saved objects encryption key is not set. This will severely limit Kibana functionality. Please set xpack.encryptedSavedObjects.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command."}
     {"type":"log","@timestamp":"2021-04-13T14:58:12-04:00","tags":["warning","plugins","fleet"],"pid":8,"message":"Fleet APIs are disabled because the Encrypted Saved Objects plugin is missing encryption key. Please set xpack.encryptedSavedObjects.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command."}
     {"type":"log","@timestamp":"2021-04-13T14:58:12-04:00","tags":["warning","plugins","actions","actions"],"pid":8,"message":"APIs are disabled because the Encrypted Saved Objects plugin is missing encryption key. Please set xpack.encryptedSavedObjects.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command."}
     {"type":"log","@timestamp":"2021-04-13T14:58:12-04:00","tags":["warning","plugins","alerts","plugins","alerting"],"pid":8,"message":"APIs are disabled because the Encrypted Saved Objects plugin is missing encryption key. Please set xpack.encryptedSavedObjects.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command."}
     {"type":"log","@timestamp":"2021-04-13T14:58:13-04:00","tags":["info","plugins","monitoring","monitoring"],"pid":8,"message":"config sourced from: production cluster"}
     {"type":"log","@timestamp":"2021-04-13T14:58:13-04:00","tags":["info","savedobjects-service"],"pid":8,"message":"Waiting until all Elasticsearch nodes are compatible with Kibana before starting saved objects migrations..."}
     {"type":"log","@timestamp":"2021-04-13T14:58:13-04:00","tags":["warning","plugins","licensing"],"pid":8,"message":"License information could not be obtained from Elasticsearch due to [illegal_argument_exception] request [/_xpack] contains unrecognized parameter: [accept_enterprise] :: {\"path\":\"/_xpack?accept_enterprise=true\",\"statusCode\":400,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[{\\\"type\\\":\\\"illegal_argument_exception\\\",\\\"reason\\\":\\\"request [/_xpack] contains unrecognized parameter: [accept_enterprise]\\\"}],\\\"type\\\":\\\"illegal_argument_exception\\\",\\\"reason\\\":\\\"request [/_xpack] contains unrecognized parameter: [accept_enterprise]\\\"},\\\"status\\\":400}\"} error"}
     {"type":"log","@timestamp":"2021-04-13T14:58:13-04:00","tags":["warning","plugins","monitoring","monitoring"],"pid":8,"message":"X-Pack Monitoring Cluster Alerts will not be available: X-Pack plugin is not installed on the Elasticsearch cluster."}
     {"type":"log","@timestamp":"2021-04-13T14:58:13-04:00","tags":["error","savedobjects-service"],"pid":8,"message":"This version of Kibana (v7.12.0) is incompatible with the following Elasticsearch nodes in your cluster: v6.6.2 @ 172.17.0.9:9200 (172.17.0.9)"}
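     The last line of that dump is the operative one: Kibana 7.12.0 refuses to proceed because the Elasticsearch node it found is 6.6.2, and the "Kibana server is not ready yet" page is the usual symptom of exactly that mismatch. A rough sketch of the two ways to bring the versions in line (image tags are the official Elastic ones; container names and wiring are illustrative, not taken from the actual setup):
        # Either run a Kibana image that matches the existing 6.6.2 Elasticsearch node...
        docker pull docker.elastic.co/kibana/kibana:6.6.2
        # ...or bring Elasticsearch up to the same version as the Kibana container
        docker pull docker.elastic.co/elasticsearch/elasticsearch:7.12.0
     Whichever direction is chosen, Kibana also needs to be pointed at the Elasticsearch container (ELASTICSEARCH_HOSTS on 7.x images, ELASTICSEARCH_URL on 6.x images).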
  12. tiwing... I have the exact same box. I bought mine from the fine folks at UnixSurplus, and have since bought some other boxes for Windows and ESXi clusters. The 36-bay Supermicro box is a beast; I'm running dual E5-2690 v2s and 128 GB of RAM (upgraded from when I bought it). For Plex, I bought a low-profile GeForce GTX 1050 Ti that I use exclusively for transcoding, using the Nvidia drivers. I keep 2 libraries, one of 4K movies and another of 1080p content. It works great, I don't have any complaints.
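     For anyone setting up the same thing, GPU transcoding in the Plex container on Unraid generally comes down to the Nvidia driver plugin plus a couple of container tweaks; a rough sketch of the usual settings (the GPU UUID shown is a placeholder taken from the driver plugin page, and exact variable names can differ between Plex container images):
        # Extra Parameters on the Plex container:
        --runtime=nvidia
        # Container environment variables:
        NVIDIA_VISIBLE_DEVICES=GPU-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
        NVIDIA_DRIVER_CAPABILITIES=all
     Hardware transcoding then still has to be enabled inside Plex's own transcoder settings.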
  13. Hi, I'm having an issue with the Kibana Docker container. I have both Elasticsearch and Kibana set up. However, when I try to use Kibana, I get a plain white page with the message "Kibana server is not ready yet". I'm unsure how to fix this. Any help would be great. Thanks!
  14. The LSI 9211-8i HBA has been updated to firmware 20.00.07.00. See the updated diagnostics. archie-diagnostics-20210413-1259.zip
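     For anyone following along, the running firmware version on a 9211-8i can be confirmed with LSI's sas2flash utility if it is available on the console (the flashing itself is usually done from an EFI shell or bootable USB with the P20 IT-mode package; the filenames below are the usual ones from that package and should be treated as assumptions):
        # Show adapter, firmware, and BIOS versions for attached SAS2 controllers
        sas2flash -listall
        # Typical flash command once booted into the flashing environment
        sas2flash -o -f 2118it.bin -b mptsas2.rom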
  15. Here are the diagnostics. The array is currently being rebuilt. I can post additional ones in a couple hours once it has completed. archie-diagnostics-20210413-0857.zip
  16. Further update... I followed the steps and disabled EPC and low voltage spin up. However, BOTH ST8000VN004 drives are still dropping from the array with read errors.
  17. So I woke up this morning and two of the ST8000VN004 drives had dropped from the array with read errors. They are both brand new drives from Seagate. I'm convinced it's the EPC and low voltage spin up issue, so I followed the above instructions. We will see if they, or any of the other VN004 drives, drop out of the array again.
  18. Thanks for the heads up. My drives aren't dropping off the system; they are getting a significant number of read errors, and SMART is failing. It happened before I upgraded to 6.9.x as well.
  19. Good morning, fellow Unraiders! Recently (most recently this morning) I've had 3 drive failures, all of the same model: the Seagate IronWolf ST8000VN004. I've been using Seagate drives religiously for years and have never had an issue. My array currently has over 10 8TB drives of the IronWolf ST8000VN0022 model. Seagate has since replaced the ST8000VN0022 with the ST8000VN004, and all three of the newer drives I've purchased and put in service in the last year have failed. Has anyone else been having issues with these new drives from Seagate? Thanks
  20. I was able to get a working copy of Catalina 10.15.5 up and running recently. However, when I benchmark the VM's performance against a similar Catalina 10.15.5 VM I have running in ESXi (same number of CPUs and the same processor model, E5-2690 v2), I'm seeing a significant performance deficit versus the ESXi VM. In Cinebench, the Unraid VM scores around 1500-1600, versus about 3300 for the ESXi VM. Any thoughts on what could be causing such a large performance difference?
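     Without the diagnostics it is hard to say, but the first things usually checked for a gap like that are whether the vCPUs are pinned to dedicated host cores (keeping hyperthread pairs together and away from busy containers/VMs) and what CPU model and topology the guest is being given. A rough sketch of inspecting this from the Unraid console, assuming the VM is named "Catalina" (a placeholder):
        # Show the CPU mode, topology, and any vcpupin entries in the VM definition
        virsh dumpxml Catalina | grep -E -A4 '<cpu |<cputune'
        # Show which host CPUs each vCPU is currently allowed to run on
        virsh vcpuinfo Catalina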
  21. Can you recommend software for that? I'm not familiar with any. The HDMI 2K entry is a device called a "Headless Ghost" HDMI adapter. It essentially tricks the GPU into thinking a monitor is attached. It's the only way I could get the GPU to initialize with a display.
  22. First, I want to thank everyone here for their work and contributions. I was FINALLY able to get my Catalina VM up and running stable. Everything works, including the passed-through GPU, except the audio. I have the Lilu and WhateverGreen kexts installed, which helped with passing through the GPU. I have also set the XML correctly, I think, to allow multifunction to work, and changed the addressing so the GPU and audio are on the same bus:
     <hostdev mode='subsystem' type='pci' managed='yes'>
       <driver name='vfio'/>
       <source>
         <address domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
       </source>
       <alias name='hostdev0'/>
       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0' multifunction='on'/>
     </hostdev>
     <hostdev mode='subsystem' type='pci' managed='yes'>
       <driver name='vfio'/>
       <source>
         <address domain='0x0000' bus='0x04' slot='0x00' function='0x1'/>
       </source>
       <alias name='hostdev1'/>
       <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x1'/>
     </hostdev>
     I also installed the AppleALC kext, but it seemed to have no effect. I can see the HDMI audio output in the Sound preferences, but I am not getting any sound from the VM via NoMachine. The server running the VM is headless, so there is no monitor or HDMI hookup directly; I'm using a Headless Ghost HDMI device to get the graphics adapter to initialize. The GPU I'm using is an AMD WX 4100 4 GB, and it is correctly seen by the system. No matter which audio output I select, HDMI2K or NoMachine Audio, I do not get any audio.
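     One quick sanity check for this kind of multifunction passthrough is to confirm on the host that the card's HDMI audio really is function 1 of the same device and that both functions are bound to vfio-pci; a rough sketch, with the 04:00 address taken from the XML above:
        # List the GPU and its audio function with vendor/device IDs
        lspci -nn -s 04:00
        # Confirm which kernel driver each function is bound to (should be vfio-pci for both)
        lspci -k -s 04:00.0
        lspci -k -s 04:00.1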
  23. It has been working fine for me. Not sure why it failed that first time, but Veeam has been working flawlessly since then.