[Support] Natcoso9955 - Loki




I'm having an issue with this where the container keeps writing to the docker.img. Is there anything I can do to change the path, or the retention policy for logs?

 

Currently sitting at close to 9 GB for this image, which is just too damn high :)

 

EDIT: So here's what you do: change local-config.yaml in appdata/loki/conf from

table_manager:
  retention_deletes_enabled: false
  retention_period: 0s

to this:

table_manager:
  retention_deletes_enabled: true
  retention_period: 24h

 

Now it will rotate logs out of retention after 24 hours. You can change the period to whatever you like, of course.

 

It seems the container has to be deleted and re-created for the old data to actually be removed.
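
If you want to confirm the rotation is actually shrinking the stored data, a quick check (a sketch, assuming the container is named "loki" and chunks live under /tmp/loki/chunks as in the sample config) is:

# show how much space Loki's stored chunks currently use inside the container
docker exec loki du -sh /tmp/loki/chunks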


Can anyone help me?

Unfortunately, no data from Promtail and Loki arrives in Grafana.

Here are my configurations:

 

Promtail config.yml in /mnt/user/appdata/promtail/config.yml


server:
  http_listen_port: 9080
  grpc_listen_port: 0

positions:
  filename: /tmp/positions.yaml

clients:
  - url: http://[IP]:3100/loki/api/v1/push

scrape_configs:
- job_name: system
  static_configs:
  - targets:
      - localhost
    labels:
      job: varlogs
      agent: promtail
      __path__: /var/log/*log
- job_name: nginx
  static_configs:
  - targets:
      - localhost
    labels:
      job: nginx
      host: swag
      __path__: /mnt/user/appdata/swag/log/nginx/*log

 

[Promtail screenshot]

 

Loki local-config.yaml in /mnt/user/appdata/loki/conf/local-config.yaml


auth_enabled: false

server:
  http_listen_port: 3100
  grpc_listen_port: 9096

ingester:
  wal:
    enabled: true
    dir: /tmp/wal
  lifecycler:
    address: 127.0.0.1
    ring:
      kvstore:
        store: inmemory
      replication_factor: 1
    final_sleep: 0s
  chunk_idle_period: 1h       # Any chunk not receiving new logs in this time will be flushed
  max_chunk_age: 1h           # All chunks will be flushed when they hit this age, default is 1h
  chunk_target_size: 1048576  # Loki will attempt to build chunks up to 1.5MB, flushing first if chunk_idle_period or max_chunk_age is reached first
  chunk_retain_period: 30s    # Must be greater than index read cache TTL if using an index cache (Default index read cache TTL is 5m)
  max_transfer_retries: 0     # Chunk transfers disabled

schema_config:
  configs:
    - from: 2020-10-24
      store: boltdb-shipper
      object_store: filesystem
      schema: v11
      index:
        prefix: index_
        period: 24h

storage_config:
  boltdb_shipper:
    active_index_directory: /tmp/loki/boltdb-shipper-active
    cache_location: /tmp/loki/boltdb-shipper-cache
    cache_ttl: 24h         # Can be increased for faster performance over longer query periods, uses more disk space
    shared_store: filesystem
  filesystem:
    directory: /tmp/loki/chunks

compactor:
  working_directory: /tmp/loki/boltdb-shipper-compactor
  shared_store: filesystem

limits_config:
  reject_old_samples: true
  reject_old_samples_max_age: 168h

chunk_store_config:
  max_look_back_period: 0s

table_manager:
  retention_deletes_enabled: false
  retention_period: 0s

ruler:
  storage:
    type: local
    local:
      directory: /tmp/loki/rules
  rule_path: /tmp/loki/rules-temp
  alertmanager_url: http://localhost:9093
  ring:
    kvstore:
      store: inmemory
  enable_api: true

 

[Loki screenshot]

On 5/12/2021 at 3:11 PM, Anym001 said:

Can anyone help me?

Unfortunately, no data from Promtail and Loki arrives in Grafana.

Here are my configurations: [...]

So I want to confirm: 1) Have you set up something to send SNMP logs to Promtail?
2) Have you added the data source to Grafana, and is Grafana able to detect the Loki source?
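
If you're not sure where it's breaking, it's worth checking Loki directly before looking at Grafana. Assuming Loki is listening on port 3100 as in your config, something like this will tell you whether data is arriving at all:

# should return "ready" once Loki is up
curl http://[IP]:3100/ready

# lists the labels Loki has ingested; an empty result means nothing is arriving from Promtail
curl http://[IP]:3100/loki/api/v1/labels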


So I'm trying to add Loki + Promtail to my existing setup. Promtail is fine, but Loki keeps insisting that my local-config.yaml isn't there:

 

2021-11-08 20:24:06.763073 I | proto: duplicate proto type registered: purgeplan.DeletePlan
2021-11-08 20:24:06.763141 I | proto: duplicate proto type registered: purgeplan.ChunksGroup
2021-11-08 20:24:06.763143 I | proto: duplicate proto type registered: purgeplan.ChunkDetails
2021-11-08 20:24:06.763145 I | proto: duplicate proto type registered: purgeplan.Interval
2021-11-08 20:24:06.817986 I | proto: duplicate proto type registered: grpc.PutChunksRequest
2021-11-08 20:24:06.817993 I | proto: duplicate proto type registered: grpc.GetChunksRequest
2021-11-08 20:24:06.817995 I | proto: duplicate proto type registered: grpc.GetChunksResponse
2021-11-08 20:24:06.817997 I | proto: duplicate proto type registered: grpc.Chunk
2021-11-08 20:24:06.817999 I | proto: duplicate proto type registered: grpc.ChunkID
2021-11-08 20:24:06.818001 I | proto: duplicate proto type registered: grpc.DeleteTableRequest
2021-11-08 20:24:06.818003 I | proto: duplicate proto type registered: grpc.DescribeTableRequest
2021-11-08 20:24:06.818005 I | proto: duplicate proto type registered: grpc.WriteBatch
2021-11-08 20:24:06.818007 I | proto: duplicate proto type registered: grpc.WriteIndexRequest
2021-11-08 20:24:06.818008 I | proto: duplicate proto type registered: grpc.DeleteIndexRequest
2021-11-08 20:24:06.818012 I | proto: duplicate proto type registered: grpc.QueryIndexResponse
2021-11-08 20:24:06.818014 I | proto: duplicate proto type registered: grpc.Row
2021-11-08 20:24:06.818016 I | proto: duplicate proto type registered: grpc.IndexEntry
2021-11-08 20:24:06.818018 I | proto: duplicate proto type registered: grpc.QueryIndexRequest
2021-11-08 20:24:06.818020 I | proto: duplicate proto type registered: grpc.UpdateTableRequest
2021-11-08 20:24:06.818022 I | proto: duplicate proto type registered: grpc.DescribeTableResponse
2021-11-08 20:24:06.818025 I | proto: duplicate proto type registered: grpc.CreateTableRequest
2021-11-08 20:24:06.818029 I | proto: duplicate proto type registered: grpc.TableDesc
2021-11-08 20:24:06.818036 I | proto: duplicate proto type registered: grpc.TableDesc.TagsEntry
2021-11-08 20:24:06.818039 I | proto: duplicate proto type registered: grpc.ListTablesResponse
2021-11-08 20:24:06.818042 I | proto: duplicate proto type registered: grpc.Labels
2021-11-08 20:24:06.818133 I | proto: duplicate proto type registered: storage.Entry
2021-11-08 20:24:06.818138 I | proto: duplicate proto type registered: storage.ReadBatch
failed parsing config: open /etc/loki/local-config.yaml: no such file or directory

 

I have a screenshot of the config in unRAID attached; I'm at a loss as to why the file isn't showing up. I also have a screenshot of the /appdata/loki folder attached, as viewed from Cyberduck.
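
For reference, my understanding is that the mapping Loki expects looks roughly like this (a docker run sketch of the equivalent unRAID path mapping, assuming the config lives in /mnt/user/appdata/loki/conf as in earlier posts), but it still can't find the file:

# Loki reads /etc/loki/local-config.yaml inside the container, so the host folder
# holding local-config.yaml has to be mapped to /etc/loki
docker run -d --name loki \
  -p 3100:3100 \
  -v /mnt/user/appdata/loki/conf:/etc/loki \
  grafana/loki:latest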

[Screenshot: Loki config in unRAID]

[Screenshot: /appdata/loki folder in Cyberduck]


After using this container for a long time, it has stopped working.

I deleted the container and the appdata folder and did a fresh install, but I still get the same error.

 


level=error ts=2021-12-05T23:45:06.725271944Z caller=log.go:106 msg="error running loki" err="mkdir wal: permission denied\nerror initialising module: ingester\ngithub.com/grafana/dskit/modules.(*Manager).initModule\n\t/src/loki/vendor/github.com/grafana/dskit/modules/modules.go:108\ngithub.com/grafana/dskit/modules.(*Manager).InitModuleServices\n\t/src/loki/vendor/github.com/grafana/dskit/modules/modules.go:78\ngithub.com/grafana/loki/pkg/loki.(*Loki).Run\n\t/src/loki/pkg/loki/loki.go:285\nmain.main\n\t/src/loki/cmd/loki/main.go:96\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:255\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1581"

 

Did anyone get this running?

On 2/6/2022 at 8:40 PM, drsparks68 said:

Having the same issue.  

 

On 1/12/2022 at 11:32 AM, RedSpider said:

Same problem here

 

On 12/6/2021 at 12:48 AM, corgan said:

Did anyone get this running?

"mkdir wal: permission denied" tells you about the problem. Natcoso9955's config on GitHub does not specify a new folder for the wal (wal is the database for Loki to know where to continue in case the whole thing crashes). The default directory is not accessible by the container. You may fix this by specifying another folder in the loki config, e.g.:

 

ingester:
  wal:
    dir: /loki/wal
  lifecycler:
    ...
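
One related note (my own addition, not from Natcoso9955's template): as far as I can tell the official Loki image runs as UID/GID 10001, so if you point the wal (or chunks) at a folder mapped into appdata, that folder also has to be writable by that user, e.g.:

# make the mapped appdata folder writable by the user the Loki container runs as
chown -R 10001:10001 /mnt/user/appdata/loki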

 

On 3/8/2022 at 5:07 PM, MoonshineMagician said:

 

 

"mkdir wal: permission denied" tells you about the problem. Natcoso9955's config on GitHub does not specify a new folder for the wal (wal is the database for Loki to know where to continue in case the whole thing crashes). The default directory is not accessible by the container. You may fix this by specifying another folder in the loki config, e.g.:

 

ingester:
  wal:
    dir: /loki/wal
  lifecycler:
    ...

 

Thank you! This is the fix if you've recently tried to install GrafanaLoki for the first time.

 

Now I just need to open a PR and get that busted "sample config" fixed.

On 3/8/2022 at 8:07 PM, MoonshineMagician said:

 

 

"mkdir wal: permission denied" tells you about the problem. Natcoso9955's config on GitHub does not specify a new folder for the wal (wal is the database for Loki to know where to continue in case the whole thing crashes). The default directory is not accessible by the container. You may fix this by specifying another folder in the loki config, e.g.:

 

ingester:
  wal:
    dir: /loki/wal
  lifecycler:
    ...

 

It didn't work for me :(


This App is not properly maintained ...

 

otherwise the link for the config would be pointing to

 

https://raw.githubusercontent.com/grafana/loki/v2.9.1/cmd/loki/loki-docker-config.yaml

 

and specifically tagging against latest is a bad idea that the Grafana documentation advises against (even though, at the current time of writing, it works with latest too):

 

https://raw.githubusercontent.com/grafana/loki/main/cmd/loki/loki-docker-config.yaml
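
If you want to grab the pinned config yourself instead of relying on the template link, something along these lines should work (assuming the conf path used earlier in this thread):

# fetch the release-pinned sample config into the folder mapped to /etc/loki
wget -O /mnt/user/appdata/loki/conf/local-config.yaml https://raw.githubusercontent.com/grafana/loki/v2.9.1/cmd/loki/loki-docker-config.yaml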


I'm trying to install this Docker container from scratch (first time using Loki or Grafana), and even after following the instructions and uploading the yaml file, the container doesn't start.

Upon checking the logs, here's what I'm getting:

 

mkdir /loki/chunks: permission denied
error creating object client
github.com/grafana/loki/pkg/storage.(*store).chunkClientForPeriod
        /src/loki/pkg/storage/store.go:187
github.com/grafana/loki/pkg/storage.(*store).init
        /src/loki/pkg/storage/store.go:155
github.com/grafana/loki/pkg/storage.NewStore
        /src/loki/pkg/storage/store.go:147
github.com/grafana/loki/pkg/loki.(*Loki).initStore
        /src/loki/pkg/loki/modules.go:656
github.com/grafana/dskit/modules.(*Manager).initModule
        /src/loki/vendor/github.com/grafana/dskit/modules/modules.go:120
github.com/grafana/dskit/modules.(*Manager).InitModuleServices
        /src/loki/vendor/github.com/grafana/dskit/modules/modules.go:92
github.com/grafana/loki/pkg/loki.(*Loki).Run
        /src/loki/pkg/loki/loki.go:458
main.main
        /src/loki/cmd/loki/main.go:110
runtime.main
        /usr/local/go/src/runtime/proc.go:250
runtime.goexit
        /usr/local/go/src/runtime/asm_amd64.s:1598
error initialising module: store
github.com/grafana/dskit/modules.(*Manager).initModule
        /src/loki/vendor/github.com/grafana/dskit/modules/modules.go:122
github.com/grafana/dskit/modules.(*Manager).InitModuleServices
        /src/loki/vendor/github.com/grafana/dskit/modules/modules.go:92
github.com/grafana/loki/pkg/loki.(*Loki).Run
        /src/loki/pkg/loki/loki.go:458
main.main
        /src/loki/cmd/loki/main.go:110
runtime.main
        /usr/local/go/src/runtime/proc.go:250
runtime.goexit
        /usr/local/go/src/runtime/asm_amd64.s:1598
level=warn ts=2023-12-28T04:57:45.836853066Z caller=loki.go:286 msg="per-tenant timeout not configured, using default engine timeout (\"5m0s\"). This behavior will change in the next major to always use the default per-tenant timeout (\"5m\")."
level=info ts=2023-12-28T04:57:45.838308866Z caller=main.go:108 msg="Starting Loki" version="(version=2.8.7, branch=HEAD, revision=1dfdc432c)"
level=info ts=2023-12-28T04:57:45.838613841Z caller=modules.go:895 msg="Ruler storage is not configured; ruler will not be started."
level=info ts=2023-12-28T04:57:45.839282779Z caller=server.go:334 http=[::]:3100 grpc=[::]:9095 msg="server listening on addresses"
level=warn ts=2023-12-28T04:57:45.840004236Z caller=cache.go:114 msg="fifocache config is deprecated. use embedded-cache instead"
level=warn ts=2023-12-28T04:57:45.841471312Z caller=experimental.go:20 msg="experimental feature in use" feature="In-memory (FIFO) cache - chunksembedded-cache"
level=error ts=2023-12-28T04:57:45.842359094Z caller=log.go:171 msg="error running loki" err="mkdir /loki/chunks: permission denied\nerror creating object client\ngithub.com/grafana/loki/pkg/storage.(*store).chunkClientForPeriod\n\t/src/loki/pkg/storage/store.go:187\ngithub.com/grafana/loki/pkg/storage.(*store).init\n\t/src/loki/pkg/storage/store.go:155\ngithub.com/grafana/loki/pkg/storage.NewStore\n\t/src/loki/pkg/storage/store.go:147\ngithub.com/grafana/loki/pkg/loki.(*Loki).initStore\n\t/src/loki/pkg/loki/modules.go:656\ngithub.com/grafana/dskit/modules.(*Manager).initModule\n\t/src/loki/vendor/github.com/grafana/dskit/modules/modules.go:120\ngithub.com/grafana/dskit/modules.(*Manager).InitModuleServices\n\t/src/loki/vendor/github.com/grafana/dskit/modules/modules.go:92\ngithub.com/grafana/loki/pkg/loki.(*Loki).Run\n\t/src/loki/pkg/loki/loki.go:458\nmain.main\n\t/src/loki/cmd/loki/main.go:110\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:250\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1598\nerror initialising module: store\ngithub.com/grafana/dskit/modules.(*Manager).initModule\n\t/src/loki/vendor/github.com/grafana/dskit/modules/modules.go:122\ngithub.com/grafana/dskit/modules.(*Manager).InitModuleServices\n\t/src/loki/vendor/github.com/grafana/dskit/modules/modules.go:92\ngithub.com/grafana/loki/pkg/loki.(*Loki).Run\n\t/src/loki/pkg/loki/loki.go:458\nmain.main\n\t/src/loki/cmd/loki/main.go:110\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:250\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1598"

 

After manually creating the directories, it started running, but I'd like to know how to avoid this in the future.
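
For reference, what I did was roughly this (a sketch; the exact host path depends on how the template maps the container's /loki path, assumed here to be /mnt/user/appdata/loki):

# create the folder Loki complained about on the host side of the /loki mapping
mkdir -p /mnt/user/appdata/loki/chunks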

What's that about the 10001:10001 you mentioned above? Is that a permission that needs to be applied to the Docker container or to the directories themselves? How would I go about doing that?

I only know chmod and chown commands, but I have no idea what 10001 means.

 

Thanks!


Just a heads up: I noticed the default tag set in this template is "master", but Loki moved over to the more PC term "main" two years ago.
This means the default tag will load a two-year-old version of Loki, which is insecure and will likely have compatibility issues.
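
If you want to avoid that, change the repository field in the template to a pinned release instead of "master" (the version here is taken from the config link mentioned earlier in the thread, so check what the current release is):

Repository: grafana/loki:2.9.1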
 
