cschanot Posted April 9, 2021 (edited) Summary: Support thread for cschanot's docker templates: ntopng and kibana - the Kibana template currently relies on the existing Elasticsearch docker by FoxxMD. Edited April 10, 2021 by cschanot: updated to include new template
Ulvster Posted April 10, 2021 Getting ghost network warnings: "Subnet 10.0.0.0/24 does not belong to the eth0 networks." This is the correct network, and it doesn't see any flows.
Ulvster Posted April 10, 2021 1 hour ago, Ulvster said: Getting ghost network warnings: "Subnet 10.0.0.0/24 does not belong to the eth0 networks." This is the correct network, and it doesn't see any flows. Solved: I had to use br0 instead of eth0.
cschanot Posted April 10, 2021 Author 31 minutes ago, Ulvster said: Solved: I had to use br0 instead of eth0. Glad you were able to get it working!
Ulvster Posted April 10, 2021 Top tip: drop --community from Post Arguments in the advanced view. ntopng will then run as a Pro/Enterprise trial for 10 minutes before switching to the community edition, without all the nagging.
cschanot Posted April 10, 2021 Author Good call, I will make the change when I get home later today. Thank you!
dukiethecorgi Posted April 10, 2021 Port 3000 is already in use. How can I change that?
cschanot Posted April 10, 2021 Author (edited) I believe you can add --http-port 0.0.0.0:port, with port being one you have free. You would most likely add this to the Post Arguments in the advanced view. I can test it later tonight when I get back; if you get a chance to test it before I do, let me know how you fare! Update: I just tested this and it worked. I will update the template with this shortly, as well as removing --community from the post arguments. Edited April 10, 2021 by cschanot
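For anyone following along: combined with the earlier tip about dropping --community, the ntopng Post Arguments field might end up looking like this (3001 is just an example of a free port, not a recommendation):

```
# Example ntopng "Post Arguments" (advanced view); 3001 is a placeholder port
--http-port 0.0.0.0:3001
```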
dukiethecorgi Posted April 11, 2021 Yes, that fixed it for me too. One other issue I found: the container will not run while the bitnami/redis container is running.
cschanot Posted April 11, 2021 Author (edited) The ntopng docker runs a Redis server as well, which is causing the conflict. The easiest fix is to run the bitnami Redis server in host mode, then add the following to the ntopng template's post arguments: -r <Your Server IP> You can also specify a port and db id with the -r option if you need to; for quick testing mine looked like -r 192.168.1.20:6379@2 (per the ntopng site the full argument is [h[:port[:pwd]]][@db-id]). I will leave notes in the template for anyone else who might hit this issue in the future. If you cannot run in host mode, you would need to create a docker network and attach both ntopng and Redis to it. Let me know if this solves the issue! Edited April 11, 2021 by cschanot
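If host mode is not an option, the custom-network alternative mentioned above might look something like this on the command line; the network and container names here are just examples, not what your setup will actually use:

```shell
# Create a shared user-defined bridge network (name is an example)
docker network create ntop-net

# Attach the existing Redis and ntopng containers to it
docker network connect ntop-net bitnami-redis
docker network connect ntop-net ntopng

# ntopng can then reach Redis by container name via the post arguments, e.g.:
#   -r bitnami-redis:6379
```

On a user-defined bridge network, Docker's embedded DNS resolves container names, which is why -r can take a container name instead of an IP there.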
dukiethecorgi Posted April 12, 2021 I decided to use the built-in Redis. One thing I'm not understanding: my network interface is bond0, an 802.3ad LAG, and ntopng insists that this is a 'ghost network'. Aside from that, everything seems to be working fine.
cschanot Posted April 12, 2021 Author It seems to decide whether or not a network is local based on the inet addr assigned to the interface. I actually just modified this on my container today by adding a new flag in the post arguments: -m "192.168.1.0/24,172.17.0.0/16,172.18.0.0/16"
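Putting the thread's tweaks together, a full Post Arguments field might read something like the following. Every value here is an example (the interface, subnets, and web port all depend on your setup), and -i is ntopng's interface option:

```
# Example ntopng "Post Arguments"; adjust the interface, subnets, and port to your network
-i br0 -m "192.168.1.0/24,172.17.0.0/16,172.18.0.0/16" --http-port 0.0.0.0:3000
```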
dukiethecorgi Posted April 12, 2021 Worked perfectly! Thanks for your help.
Mason736 Posted April 13, 2021 Hi, I'm having an issue with the Kibana docker. I got both Elasticsearch and Kibana set up, but when I try to use Kibana I get a plain white page with the message "Kibana server is not ready yet". I'm unsure how to fix this. Any help would be great. Thanks!
cschanot Posted April 13, 2021 Author Would you mind posting a dump of the Kibana log? That might give me a clue so I can hopefully help.
Mason736 Posted April 13, 2021 Here is the log dump: {"type":"log","@timestamp":"2021-04-13T14:58:11-04:00","tags":["info","plugins-service"],"pid":8,"message":"Plugin \"osquery\" is disabled."} {"type":"log","@timestamp":"2021-04-13T14:58:11-04:00","tags":["info","plugins-service"],"pid":8,"message":"Plugin \"osquery\" is disabled."} {"type":"log","@timestamp":"2021-04-13T14:58:11-04:00","tags":["warning","config","deprecation"],"pid":8,"message":"Support for setting server.host to \"0\" in kibana.yml is deprecated and will be removed in Kibana version 8.0.0. Instead use \"0.0.0.0\" to bind to all interfaces."} {"type":"log","@timestamp":"2021-04-13T14:58:11-04:00","tags":["warning","config","deprecation"],"pid":8,"message":"Config key [monitoring.cluster_alerts.email_notifications.email_address] will be required for email notifications to work in 8.0.\""} {"type":"log","@timestamp":"2021-04-13T14:58:12-04:00","tags":["info","plugins-system"],"pid":8,"message":"Setting up [100] plugins: 
[taskManager,licensing,globalSearch,globalSearchProviders,banners,code,usageCollection,xpackLegacy,telemetryCollectionManager,telemetry,telemetryCollectionXpack,kibanaUsageCollection,securityOss,share,newsfeed,mapsLegacy,kibanaLegacy,translations,legacyExport,embeddable,uiActionsEnhanced,expressions,charts,esUiShared,bfetch,data,home,observability,console,consoleExtensions,apmOss,searchprofiler,painlessLab,grokdebugger,management,indexPatternManagement,advancedSettings,fileUpload,savedObjects,visualizations,visTypeVislib,visTypeVega,visTypeTimelion,features,licenseManagement,watcher,canvas,visTypeTagcloud,visTypeTable,visTypeMetric,visTypeMarkdown,tileMap,regionMap,visTypeXy,graph,timelion,dashboard,dashboardEnhanced,visualize,visTypeTimeseries,inputControlVis,discover,discoverEnhanced,savedObjectsManagement,spaces,security,savedObjectsTagging,maps,lens,reporting,lists,encryptedSavedObjects,dataEnhanced,dashboardMode,cloud,upgradeAssistant,snapshotRestore,fleet,indexManagement,rollup,remoteClusters,crossClusterReplication,indexLifecycleManagement,enterpriseSearch,beatsManagement,transform,ingestPipelines,eventLog,actions,alerts,triggersActionsUi,stackAlerts,ml,securitySolution,case,infra,monitoring,logstash,apm,uptime]"} {"type":"log","@timestamp":"2021-04-13T14:58:12-04:00","tags":["info","plugins","taskManager"],"pid":8,"message":"TaskManager is identified by the Kibana UUID: 5d53dbcc-8093-4b66-bdf1-2a395d8bac01"} {"type":"log","@timestamp":"2021-04-13T14:58:12-04:00","tags":["warning","plugins","security","config"],"pid":8,"message":"Generating a random key for xpack.security.encryptionKey. To prevent sessions from being invalidated on restart, please set xpack.security.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command."} {"type":"log","@timestamp":"2021-04-13T14:58:12-04:00","tags":["warning","plugins","security","config"],"pid":8,"message":"Session cookies will be transmitted over insecure connections. 
This is not recommended."} {"type":"log","@timestamp":"2021-04-13T14:58:12-04:00","tags":["warning","plugins","reporting","config"],"pid":8,"message":"Generating a random key for xpack.reporting.encryptionKey. To prevent sessions from being invalidated on restart, please set xpack.reporting.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command."} {"type":"log","@timestamp":"2021-04-13T14:58:12-04:00","tags":["warning","plugins","reporting","config"],"pid":8,"message":"Chromium sandbox provides an additional layer of protection, but is not supported for Linux CentOS 8.3.2011\n OS. Automatically setting 'xpack.reporting.capture.browser.chromium.disableSandbox: true'."} {"type":"log","@timestamp":"2021-04-13T14:58:12-04:00","tags":["warning","plugins","encryptedSavedObjects"],"pid":8,"message":"Saved objects encryption key is not set. This will severely limit Kibana functionality. Please set xpack.encryptedSavedObjects.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command."} {"type":"log","@timestamp":"2021-04-13T14:58:12-04:00","tags":["warning","plugins","fleet"],"pid":8,"message":"Fleet APIs are disabled because the Encrypted Saved Objects plugin is missing encryption key. Please set xpack.encryptedSavedObjects.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command."} {"type":"log","@timestamp":"2021-04-13T14:58:12-04:00","tags":["warning","plugins","actions","actions"],"pid":8,"message":"APIs are disabled because the Encrypted Saved Objects plugin is missing encryption key. Please set xpack.encryptedSavedObjects.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command."} {"type":"log","@timestamp":"2021-04-13T14:58:12-04:00","tags":["warning","plugins","alerts","plugins","alerting"],"pid":8,"message":"APIs are disabled because the Encrypted Saved Objects plugin is missing encryption key. 
Please set xpack.encryptedSavedObjects.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command."} {"type":"log","@timestamp":"2021-04-13T14:58:13-04:00","tags":["info","plugins","monitoring","monitoring"],"pid":8,"message":"config sourced from: production cluster"} {"type":"log","@timestamp":"2021-04-13T14:58:13-04:00","tags":["info","savedobjects-service"],"pid":8,"message":"Waiting until all Elasticsearch nodes are compatible with Kibana before starting saved objects migrations..."} {"type":"log","@timestamp":"2021-04-13T14:58:13-04:00","tags":["warning","plugins","licensing"],"pid":8,"message":"License information could not be obtained from Elasticsearch due to [illegal_argument_exception] request [/_xpack] contains unrecognized parameter: [accept_enterprise] :: {\"path\":\"/_xpack?accept_enterprise=true\",\"statusCode\":400,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[{\\\"type\\\":\\\"illegal_argument_exception\\\",\\\"reason\\\":\\\"request [/_xpack] contains unrecognized parameter: [accept_enterprise]\\\"}],\\\"type\\\":\\\"illegal_argument_exception\\\",\\\"reason\\\":\\\"request [/_xpack] contains unrecognized parameter: [accept_enterprise]\\\"},\\\"status\\\":400}\"} error"} {"type":"log","@timestamp":"2021-04-13T14:58:13-04:00","tags":["warning","plugins","monitoring","monitoring"],"pid":8,"message":"X-Pack Monitoring Cluster Alerts will not be available: X-Pack plugin is not installed on the Elasticsearch cluster."} {"type":"log","@timestamp":"2021-04-13T14:58:13-04:00","tags":["error","savedobjects-service"],"pid":8,"message":"This version of Kibana (v7.12.0) is incompatible with the following Elasticsearch nodes in your cluster: v6.6.2 @ 172.17.0.9:9200 (172.17.0.9)"}
cschanot Posted April 13, 2021 Author For your Elasticsearch docker, change the docker tag from 6.6.2 to 7.12.0 and apply; that will update you to the same version Kibana is running. Once Elasticsearch is up, try running Kibana again. Let me know if you have any questions!
Mason736 Posted April 13, 2021 That worked...thank you!
cschanot Posted April 13, 2021 Author Happy to help! Glad it worked for you.
ghzgod Posted April 20, 2021 (edited) On 4/13/2021 at 9:02 PM, cschanot said: For your Elasticsearch docker, change the docker tag from 6.6.2 to 7.12.0 and apply. How do I change the docker tag? I tried d8sychain/elasticsearch:v7.12.0 and d8sychain/elasticsearch:version:7.12.0, but no dice; I only see the latest tag on Docker Hub. Update: pulled from the official repo by using elasticsearch:7.12.0 in my docker repository field. Edited April 20, 2021 by ghzgod: Update
Kaldek Posted December 2, 2021 Hi there, I've just started using this. It's working great, thanks for your efforts.
stayupthetree Posted February 5, 2022 Works well for 10 minutes with a nice fancy dashboard, then that goes away unless I drop 300 euros on a Pro license. WTF
MPulse Posted February 8, 2022 How do I retain my settings upon restart or reboot? When I restart the docker, all my ntopng preferences and settings revert to their defaults.
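This question never got an answer in the thread, so here is a hedged sketch: ntopng keeps its preferences in the Redis instance bundled with the container, so they vanish on restart unless the data directories survive. The container paths below are assumptions based on ntopng's default data directory and a typical Redis dump location; verify the exact paths for the image you are running before mapping them:

```
# Example Unraid path mappings (container paths are assumptions -- verify for your image)
/mnt/user/appdata/ntopng/data   ->  /var/lib/ntopng   (ntopng data directory)
/mnt/user/appdata/ntopng/redis  ->  /var/lib/redis    (embedded Redis dump)
```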
MPulse Posted February 8, 2022 On 2/5/2022 at 9:19 AM, stayupthetree said: Works well for 10 minutes with a nice fancy dashboard, then that goes away unless I drop 300 euros on a Pro license. WTF You can add the --community flag to the post arguments and you don't have to worry about paying for the Pro version. Of course, the community edition doesn't have all the features of the Pro version; check this site for a comparison: https://www.ntop.org/products/traffic-analysis/ntop/
maxx8888 Posted July 10, 2022 ERROR: MISCONF Redis is configured to save RDB snapshots, but it is currently not able to persist on disk. Commands that may modify the data set are disabled, because this instance is configured to report errors during writes if RDB snapshotting fails (stop-writes-on-bgsave-error option). Please check the Redis logs for details about the RDB error. I'm getting this error a few times a second once the docker has been running for around 5 minutes, at which point the web interface also becomes really slow. No idea what's wrong 😕
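That MISCONF error means the bundled Redis's background save (BGSAVE) is failing, commonly because the disk it writes to is full, read-only, or has wrong permissions; Redis then blocks writes, which would also explain the UI slowing down. A few diagnostic commands, assuming the container is named ntopng and ships with redis-cli (both assumptions):

```shell
# Check whether the last RDB background save failed (container name is an example)
docker exec ntopng redis-cli INFO persistence | grep -E 'rdb_last_bgsave_status|rdb_bgsave_in_progress'

# Check free space on the host, since a full disk is the usual culprit
df -h

# Temporary workaround ONLY -- lets writes continue while snapshots still fail;
# the underlying disk/permission problem should still be fixed
docker exec ntopng redis-cli CONFIG SET stop-writes-on-bgsave-error no
```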