[Support] Natcoso9955 - Loki



I'm having an issue with this where the container keeps writing to the docker.img. Is there anything I can do to change the path, or the retention policy for logs?

 

Currently sitting at close to 9 GB for this image, which is just too damn high :)

 

EDIT: So here's what you do: change local-config.yaml in appdata/loki/conf from

table_manager:
  retention_deletes_enabled: false
  retention_period: 0s

to this:

table_manager:
  retention_deletes_enabled: true
  retention_period: 24h

 

Now it will rotate logs out of retention after 24 hours. You can of course change the period to whatever you like.

 

It seems the container has to be deleted and re-created for the old data to actually be removed.
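Once the container is back up, a quick way to confirm the new retention settings took effect is to query Loki's HTTP API. This is just a rough sketch: [IP] is a placeholder for your Loki host, and 3100 is the default port used elsewhere in this thread.

# Loki answers "ready" once it has started cleanly
curl http://[IP]:3100/ready

# /config prints the configuration Loki is actually running with;
# check that the table_manager retention values show up here
curl -s http://[IP]:3100/config | grep -A2 table_manager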

Edited by Fredrick

Can anyone help me?

Unfortunately, no data from Promtail and Loki is arriving in Grafana.

Here are my configurations:

 

Promtail config.yml in /mnt/user/appdata/promtail/config.yml


server:
  http_listen_port: 9080
  grpc_listen_port: 0

positions:
  filename: /tmp/positions.yaml

clients:
  - url: http://[IP]:3100/loki/api/v1/push

scrape_configs:
- job_name: system
  static_configs:
  - targets:
      - localhost
    labels:
      job: varlogs
      agent: promtail
      __path__: /var/log/*log
- job_name: nginx
  static_configs:
  - targets:
      - localhost
    labels:
      job: nginx
      host: swag
      __path__: /mnt/user/appdata/swag/log/nginx/*log
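A quick way to sanity-check the Promtail side is to hit its status endpoints. This is only a sketch: [IP] is a placeholder, 9080 is the http_listen_port from the config above, and exact metric names can vary between Promtail versions.

# Promtail serves a status page on its HTTP port; /targets should list
# /var/log/*log and the swag nginx logs as active targets
curl http://[IP]:9080/targets

# The metrics endpoint shows whether entries are actually being pushed to Loki
curl -s http://[IP]:9080/metrics | grep promtail_sent_entries_total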

 


 

Loki local-config.yaml in /mnt/user/appdata/loki/conf/local-config.yaml


auth_enabled: false

server:
  http_listen_port: 3100
  grpc_listen_port: 9096

ingester:
  wal:
    enabled: true
    dir: /tmp/wal
  lifecycler:
    address: 127.0.0.1
    ring:
      kvstore:
        store: inmemory
      replication_factor: 1
    final_sleep: 0s
  chunk_idle_period: 1h       # Any chunk not receiving new logs in this time will be flushed
  max_chunk_age: 1h           # All chunks will be flushed when they hit this age, default is 1h
  chunk_target_size: 1048576  # Loki will attempt to build chunks up to 1MB (1048576 bytes), flushing first if chunk_idle_period or max_chunk_age is reached first
  chunk_retain_period: 30s    # Must be greater than index read cache TTL if using an index cache (Default index read cache TTL is 5m)
  max_transfer_retries: 0     # Chunk transfers disabled

schema_config:
  configs:
    - from: 2020-10-24
      store: boltdb-shipper
      object_store: filesystem
      schema: v11
      index:
        prefix: index_
        period: 24h

storage_config:
  boltdb_shipper:
    active_index_directory: /tmp/loki/boltdb-shipper-active
    cache_location: /tmp/loki/boltdb-shipper-cache
    cache_ttl: 24h         # Can be increased for faster performance over longer query periods, uses more disk space
    shared_store: filesystem
  filesystem:
    directory: /tmp/loki/chunks

compactor:
  working_directory: /tmp/loki/boltdb-shipper-compactor
  shared_store: filesystem

limits_config:
  reject_old_samples: true
  reject_old_samples_max_age: 168h

chunk_store_config:
  max_look_back_period: 0s

table_manager:
  retention_deletes_enabled: false
  retention_period: 0s

ruler:
  storage:
    type: local
    local:
      directory: /tmp/loki/rules
  rule_path: /tmp/loki/rules-temp
  alertmanager_url: http://localhost:9093
  ring:
    kvstore:
      store: inmemory
  enable_api: true
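And a couple of corresponding checks against Loki itself. Again only a sketch: [IP] is a placeholder and 3100 is the http_listen_port from the config above.

# Should return "ready" once Loki has started up without errors
curl http://[IP]:3100/ready

# Lists the label names Loki has ingested so far; an empty result means
# nothing has arrived from Promtail yet
curl -s http://[IP]:3100/loki/api/v1/labels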

 


On 5/12/2021 at 3:11 PM, Anym001 said:

Can anyone help me?

Unfortunately, no data from Promtail and Loki is arriving in Grafana.

So I want to confirm: 1) you have set up something to send your logs to Promtail?
2) you have added the data source to Grafana, and Grafana is able to detect the Loki source? (There's a quick command-line check for that in the sketch below.)
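If it helps with point 2, Grafana's HTTP API can list the configured data sources. A sketch only: port 3000 and the admin:admin credentials are assumptions here, adjust them to your setup.

# Lists the data sources Grafana knows about; the Loki entry should have
# type "loki" and a url pointing at http://[IP]:3100
curl -s -u admin:admin http://[IP]:3000/api/datasources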
